NASA Astrophysics Data System (ADS)
Qian, Xiaoshan
2018-01-01
Traditional models of evaporation process parameters suffer from large prediction errors because the parameters exhibit continuity and cumulative characteristics. On this basis, an adaptive particle swarm neural network forecasting method is proposed, in which an autoregressive moving average (ARMA) error-correction procedure compensates the neural network's predictions to improve accuracy. The approach was validated with production data from an alumina plant evaporation process; compared with the traditional model, the new model's prediction accuracy is greatly improved, and it can be used to predict the dynamic evaporation of sodium aluminate solution components.
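The error-compensation idea can be sketched in a few lines: fit a neural network to the process data, model its residual series with ARMA, and add the forecast error back onto the network's prediction. The architecture, ARMA order, and data below are placeholder assumptions, not the paper's configuration.

```python
# Illustrative sketch (not the authors' code): compensating a neural-network
# forecast with an ARMA model fitted to its historical prediction errors.
import numpy as np
from sklearn.neural_network import MLPRegressor
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(0)
X_train, y_train = rng.normal(size=(200, 4)), rng.normal(size=200)  # stand-in process data
X_new = rng.normal(size=(10, 4))

nn = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
nn.fit(X_train, y_train)

residuals = y_train - nn.predict(X_train)         # NN prediction errors (a time series)
arma = ARIMA(residuals, order=(2, 0, 1)).fit()    # ARMA(2,1) on the error series
error_forecast = arma.forecast(steps=len(X_new))  # predicted future errors

y_hat = nn.predict(X_new) + error_forecast        # error-compensated prediction
```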
NASA Astrophysics Data System (ADS)
Shin, Yung C.; Bailey, Neil; Katinas, Christopher; Tan, Wenda
2018-05-01
This paper presents an overview of vertically integrated, comprehensive predictive modeling capabilities for directed energy deposition processes, which have been developed at Purdue University. The overall framework consists of several vertically integrated modules, including a powder flow model, a molten pool model, a microstructure prediction model, and a residual stress model, which can be used to predict the mechanical properties of parts built by blown-powder directed energy deposition as well as other additive manufacturing processes. The critical governing equations of each model and the connections between the modules are illustrated. Illustrative results, along with corresponding experimental validation, demonstrate the capabilities and fidelity of the models. The good correlations with experimental results show that the integrated models can be used to design metal additive manufacturing processes and predict the resultant microstructure and mechanical properties.
Downey, Brandon; Schmitt, John; Beller, Justin; Russell, Brian; Quach, Anthony; Hermann, Elizabeth; Lyon, David; Breit, Jeffrey
2017-11-01
As the biopharmaceutical industry evolves to include more diverse protein formats and processes, more robust control of Critical Quality Attributes (CQAs) is needed to maintain processing flexibility without compromising quality. Active control of CQAs has been demonstrated using model predictive control techniques, which allow development of processes which are robust against disturbances associated with raw material variability and other potentially flexible operating conditions. Wide adoption of model predictive control in biopharmaceutical cell culture processes has been hampered, however, in part due to the large amount of data and expertise required to make a predictive model of controlled CQAs, a requirement for model predictive control. Here we developed a highly automated, perfusion apparatus to systematically and efficiently generate predictive models using application of system identification approaches. We successfully created a predictive model of %galactosylation using data obtained by manipulating galactose concentration in the perfusion apparatus in serialized step change experiments. We then demonstrated the use of the model in a model predictive controller in a simulated control scenario to successfully achieve a %galactosylation set point in a simulated fed-batch culture. The automated model identification approach demonstrated here can potentially be generalized to many CQAs, and could be a more efficient, faster, and highly automated alternative to batch experiments for developing predictive models in cell culture processes, and allow the wider adoption of model predictive control in biopharmaceutical processes. © 2017 The Authors. Biotechnology Progress published by Wiley Periodicals, Inc. on behalf of American Institute of Chemical Engineers. Biotechnol. Prog., 33:1647-1661, 2017.
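A minimal receding-horizon sketch of the control scenario described above, assuming a hypothetical identified first-order model of the %galactosylation response to a galactose-feed input; the coefficients, bounds, and weights are illustrative, not the authors' values.

```python
# Hedged sketch of a model predictive controller driving %galactosylation to a
# set point using an identified first-order model; (a, b) are assumed values.
import numpy as np
from scipy.optimize import minimize

a, b = 0.95, 0.08          # identified dynamics: gal%_{k+1} = a*gal%_k + b*u_k
setpoint, horizon = 70.0, 10

def cost(u, x0):
    x, J, u_prev = x0, 0.0, 0.0
    for uk in u:
        x = a * x + b * uk                                   # predict one step ahead
        J += (x - setpoint) ** 2 + 0.1 * (uk - u_prev) ** 2  # tracking + move suppression
        u_prev = uk
    return J

x = 55.0                                        # current measured %galactosylation
for _ in range(20):                             # receding-horizon loop
    res = minimize(cost, np.zeros(horizon), args=(x,), bounds=[(0, 20)] * horizon)
    u_applied = res.x[0]                        # apply only the first planned move
    x = a * x + b * u_applied                   # plant step (here: the model itself)
```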
Towards a generalized energy prediction model for machine tools
Bhinge, Raunak; Park, Jinkyoo; Law, Kincho H.; Dornfeld, David A.; Helu, Moneer; Rachuri, Sudarsan
2017-01-01
Energy prediction of machine tools can deliver many advantages to a manufacturing enterprise, ranging from energy-efficient process planning to machine tool monitoring. Physics-based energy prediction models have been proposed in the past to understand the energy usage pattern of a machine tool. However, uncertainties in both the machine and the operating environment make it difficult to predict the energy consumption of the target machine reliably. Taking advantage of the opportunity to collect extensive, contextual, energy-consumption data, we discuss a data-driven approach to develop an energy prediction model of a machine tool in this paper. First, we present a methodology that can efficiently and effectively collect and process data extracted from a machine tool and its sensors. We then present a data-driven model that can be used to predict the energy consumption of the machine tool for machining a generic part. Specifically, we use Gaussian Process (GP) Regression, a non-parametric machine-learning technique, to develop the prediction model. The energy prediction model is then generalized over multiple process parameters and operations. Finally, we apply this generalized model with a method to assess uncertainty intervals to predict the energy consumed to machine any part using a Mori Seiki NVD1500 machine tool. Furthermore, the same model can be used during process planning to optimize the energy efficiency of a machining process. PMID:28652687
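A compact sketch of the modeling step using scikit-learn's GP regression; the three input features stand in for process parameters such as feed rate, spindle speed, and depth of cut, and the data are synthetic.

```python
# Minimal GP-regression sketch with uncertainty intervals, in the spirit of the
# approach above; all data and feature meanings are placeholder assumptions.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(1)
X = rng.uniform(size=(150, 3))                    # process parameters per toolpath segment
y = 50 + 30 * X[:, 0] + 10 * X[:, 1] ** 2 + rng.normal(0, 1, 150)  # energy (J), synthetic

kernel = 1.0 * RBF(length_scale=[1.0] * 3) + WhiteKernel(noise_level=1.0)
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X, y)

mean, std = gp.predict(rng.uniform(size=(5, 3)), return_std=True)
lower, upper = mean - 1.96 * std, mean + 1.96 * std   # ~95% uncertainty interval
```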
The Complexity of Developmental Predictions from Dual Process Models
ERIC Educational Resources Information Center
Stanovich, Keith E.; West, Richard F.; Toplak, Maggie E.
2011-01-01
Drawing developmental predictions from dual-process theories is more complex than is commonly realized. Overly simplified predictions drawn from such models may lead to premature rejection of the dual process approach as one of many tools for understanding cognitive development. Misleading predictions can be avoided by paying attention to several…
A new model integrating short- and long-term aging of copper added to soils
Zeng, Saiqi; Li, Jumei; Wei, Dongpu
2017-01-01
Aging refers to the processes by which the bioavailability/toxicity, isotopic exchangeability, and extractability of metals added to soils decline over time. We studied the characteristics of the aging process of copper (Cu) added to soils and the factors that affect this process. We then developed a semi-mechanistic model to predict the lability of Cu during aging, describing the diffusion process with the complementary error function. In previous studies, two semi-mechanistic models were developed to separately predict short-term and long-term aging of Cu added to soils, each with its own description of the diffusion process: in the short-term model, diffusion was linearly related to the square root of incubation time (t1/2), and in the long-term model, diffusion was linearly related to the natural logarithm of incubation time (lnt). Each model could predict its own time range, but neither captured both short- and long-term aging within a single model. By analyzing and combining the two models, we found that the short- and long-term behavior of the diffusion process could be described adequately by the complementary error function. The effect of temperature on the diffusion process is also captured in this model. The model can predict the aging process continuously from four factors: soil pH, incubation time, soil organic matter content, and temperature. PMID:28820888
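Purely as an illustration of the functional form, the sketch below shows how an erfc-of-time diffusion term can span both short- and long-term behavior in a single expression; the coefficients and the way pH, organic matter, and temperature enter are assumptions for the sketch, not the fitted model from the study.

```python
# Illustrative-only form: an erfc-based diffusion term declining over time,
# with an Arrhenius-type temperature effect. All constants are assumptions.
import numpy as np
from scipy.special import erfc

def labile_cu_fraction(t_days, pH, temp_K, om_pct,
                       a=1.0, b=0.05, d0=0.5, ea_over_r=4000.0, c=0.01):
    diff_rate = d0 * np.exp(-ea_over_r / temp_K)     # temperature-dependent diffusion rate
    labile = (a - b * pH - c * om_pct) * erfc(np.sqrt(diff_rate * t_days))
    return np.clip(labile, 0.0, 1.0)                 # lability declines as t grows
```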
A dual-process account of auditory change detection.
McAnally, Ken I; Martin, Russell L; Eramudugolla, Ranmalee; Stuart, Geoffrey W; Irvine, Dexter R F; Mattingley, Jason B
2010-08-01
Listeners can be "deaf" to a substantial change in a scene comprising multiple auditory objects unless their attention has been directed to the changed object. It is unclear whether auditory change detection relies on identification of the objects in pre- and post-change scenes. We compared the rates at which listeners correctly identify changed objects with those predicted by change-detection models based on signal detection theory (SDT) and high-threshold theory (HTT). Detected changes were not identified as accurately as predicted by models based on either theory, suggesting that some changes are detected by a process that does not support change identification. Undetected changes were identified as accurately as predicted by the HTT model but much less accurately than predicted by the SDT models. The process underlying change detection was investigated further by determining receiver-operating characteristics (ROCs). ROCs did not conform to those predicted by either an SDT or an HTT model but were well modeled by a dual-process model that incorporated HTT and SDT components. The dual-process model also accurately predicted the rates at which detected and undetected changes were correctly identified.
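One common way to write such a dual-process model combines an all-or-none detection probability R with an equal-variance SDT familiarity component of sensitivity d'; the parameterization below is a generic sketch rather than the authors' fitted model.

```python
# Sketch of ROC points under a dual-process model: a high-threshold component
# (probability R of all-or-none detection) plus an equal-variance SDT
# familiarity component (d'). One common parameterization, for illustration.
import numpy as np
from scipy.stats import norm

def dual_process_roc(R, d_prime, criteria):
    hits = R + (1 - R) * norm.cdf(d_prime / 2 - criteria)   # detected-or-familiar changes
    fas = norm.cdf(-d_prime / 2 - criteria)                 # false alarms: familiarity only
    return fas, hits

fas, hits = dual_process_roc(R=0.3, d_prime=1.2, criteria=np.linspace(-1.5, 1.5, 7))
```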
NASA Astrophysics Data System (ADS)
Tian, Yingtao; Robson, Joseph D.; Riekehr, Stefan; Kashaev, Nikolai; Wang, Li; Lowe, Tristan; Karanika, Alexandra
2016-07-01
Laser welding of advanced Al-Li alloys has been developed to meet the increasing demand for light-weight, high-strength aerospace structures. However, welding of high-strength Al-Li alloys can be problematic due to their tendency for hot cracking. Finding suitable welding parameters and filler material currently requires extensive and costly trial-and-error experimentation. The present work describes a novel coupled model to predict hot crack susceptibility (HCS) in Al-Li welds, which can be used to shortcut the weld development process. The coupled model combines finite element process simulation with a two-level HCS model: the finite element process model predicts the thermal field data used in the subsequent HCS prediction. The model can be used to predict the influences of filler wire composition and welding parameters on HCS. The modeling results have been validated by comparing predictions with results from fully instrumented laser welds performed under a range of process parameters and analyzed using high-resolution X-ray tomography to identify weld defects. It is shown that the model is capable of accurately predicting the thermal field around the weld and the trend of HCS as a function of process parameters.
Multivariable Time Series Prediction for the Icing Process on Overhead Power Transmission Line
Li, Peng; Zhao, Na; Zhou, Donghua; Cao, Min; Li, Jingjie; Shi, Xinling
2014-01-01
The design of monitoring and predictive alarm systems is necessary to manage icing on overhead power transmission lines successfully. Given the complexity, nonlinearity, and fitfulness of the line icing process, a model based on a multivariable time series is presented here to predict the icing load of a transmission line. In this model, the time effects of micrometeorology parameters on the icing process are analyzed. Phase-space reconstruction theory and a machine learning method are then applied to establish the prediction model, which fully utilizes the history of multivariable time series data in local monitoring systems to represent the mapping relationship between icing load and micrometeorology factors. Given the fitful character of line icing, simulations were carried out during both the same icing process and different processes to test the model's prediction precision and robustness. According to the simulation results for the Tao-Luo-Xiong Transmission Line, the model demonstrates good prediction accuracy across different processes when the prediction length is less than two hours, and would be helpful for power grid departments when deciding to take action in advance to address potential icing disasters. PMID:25136653
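The phase-space reconstruction step can be sketched as a time-delay embedding of each monitored series, with a generic regressor standing in for the machine learning method; the delay, dimension, and data below are illustrative.

```python
# Sketch: time-delay embedding of multivariable monitoring series, followed by
# a regressor mapping the embedded state to icing load. Values are synthetic.
import numpy as np
from sklearn.svm import SVR

def embed(series, dim=3, tau=2):
    n = len(series) - (dim - 1) * tau
    return np.column_stack([series[i * tau : i * tau + n] for i in range(dim)])

temp = np.sin(np.linspace(0, 20, 500))          # stand-ins for micrometeorology series
humid = np.cos(np.linspace(0, 20, 500))
icing = 0.5 * temp + 0.3 * humid

X = np.hstack([embed(temp), embed(humid)])      # reconstructed multivariable state
y = icing[(3 - 1) * 2 :]                        # align target with embedded rows
model = SVR().fit(X[:-1], y[1:])                # one-step-ahead icing-load prediction
```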
Heil, Lieke; Kwisthout, Johan; van Pelt, Stan; van Rooij, Iris; Bekkering, Harold
2018-01-01
Evidence is accumulating that our brains process incoming information using top-down predictions. If lower level representations are correctly predicted by higher level representations, this enhances processing. However, if they are incorrectly predicted, additional processing is required at higher levels to "explain away" prediction errors. Here, we explored the potential nature of the models generating such predictions. More specifically, we investigated whether a predictive processing model with a hierarchical structure and causal relations between its levels is able to account for the processing of agent-caused events. In Experiment 1, participants watched animated movies of "experienced" and "novice" bowlers. The results are in line with the idea that prediction errors at a lower level of the hierarchy (i.e., the outcome of how many pins fell down) slow down reporting of information at a higher level (i.e., which agent was throwing the ball). Experiments 2 and 3 suggest that this effect is specific to situations in which the predictor is causally related to the outcome. Overall, the study supports the idea that a hierarchical predictive processing model can account for the processing of observed action outcomes and that the predictions involved are specific to cases where action outcomes can be predicted based on causal knowledge.
Experimental Errors in QSAR Modeling Sets: What We Can Do and What We Cannot Do.
Zhao, Linlin; Wang, Wenyi; Sedykh, Alexander; Zhu, Hao
2017-06-30
Numerous chemical data sets have become available for quantitative structure-activity relationship (QSAR) modeling studies. However, the quality of different data sources may be different based on the nature of experimental protocols. Therefore, potential experimental errors in the modeling sets may lead to the development of poor QSAR models and further affect the predictions of new compounds. In this study, we explored the relationship between the ratio of questionable data in the modeling sets, which was obtained by simulating experimental errors, and the QSAR modeling performance. To this end, we used eight data sets (four continuous endpoints and four categorical endpoints) that have been extensively curated both in-house and by our collaborators to create over 1800 various QSAR models. Each data set was duplicated to create several new modeling sets with different ratios of simulated experimental errors (i.e., randomizing the activities of part of the compounds) in the modeling process. A fivefold cross-validation process was used to evaluate the modeling performance, which deteriorates when the ratio of experimental errors increases. All of the resulting models were also used to predict external sets of new compounds, which were excluded at the beginning of the modeling process. The modeling results showed that the compounds with relatively large prediction errors in cross-validation processes are likely to be those with simulated experimental errors. However, after removing a certain number of compounds with large prediction errors in the cross-validation process, the external predictions of new compounds did not show improvement. Our conclusion is that the QSAR predictions, especially consensus predictions, can identify compounds with potential experimental errors. But removing those compounds by the cross-validation procedure is not a reasonable means to improve model predictivity due to overfitting.
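The error-simulation experiment reduces to a short loop: perturb the activities of a growing fraction of the modeling set and watch cross-validated performance fall. The sketch below uses a random forest on synthetic descriptors as a stand-in for the QSAR models in the study.

```python
# Condensed sketch of the experimental-error simulation: randomize a fraction
# of activities, then measure five-fold cross-validation performance.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(42)
X = rng.normal(size=(300, 50))                   # descriptor matrix (placeholder)
y = X[:, 0] * 2 + rng.normal(0, 0.3, 300)        # "true" activities

for error_ratio in (0.0, 0.1, 0.2, 0.4):
    y_err = y.copy()
    idx = rng.choice(len(y), int(error_ratio * len(y)), replace=False)
    y_err[idx] = rng.permutation(y_err[idx])     # simulated experimental errors
    q2 = cross_val_score(RandomForestRegressor(random_state=0), X, y_err,
                         cv=5, scoring="r2").mean()
    print(f"error ratio {error_ratio:.1f}: 5-fold CV R2 = {q2:.2f}")
```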
Remaining Useful Life Prediction for Lithium-Ion Batteries Based on Gaussian Processes Mixture
Li, Lingling; Wang, Pengchong; Chao, Kuei-Hsiang; Zhou, Yatong; Xie, Yang
2016-01-01
The remaining useful life (RUL) prediction of Lithium-ion batteries is closely related to the capacity degeneration trajectories. Due to self-charging and capacity regeneration, the trajectories have the property of multimodality. Traditional prediction models such as support vector machines (SVM) or Gaussian Process regression (GPR) cannot accurately characterize this multimodality. This paper proposes a novel RUL prediction method based on the Gaussian Process Mixture (GPM). It handles multimodality by fitting different segments of a trajectory with separate GPR models, such that the small differences among these segments can be revealed. The method is demonstrated to be effective, yielding excellent predictive results in experiments on two commercial rechargeable 18650 Lithium-ion batteries provided by NASA. The performance comparison among the models illustrates that the GPM is more accurate than the SVM and the GPR. In addition, the GPM yields a predictive confidence interval, which makes its predictions more reliable than those of traditional models. PMID:27632176
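The segment-wise intuition behind the GPM can be sketched by fitting separate GPR experts to different stretches of a capacity trajectory; the fixed split below stands in for the mixture's learned gating, and the toy fade curve is not the NASA data.

```python
# Sketch: per-segment GPR experts over a capacity trajectory so that a
# regeneration episode is modeled locally. Segmentation here is a fixed split.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

cycles = np.arange(120, dtype=float)
capacity = 2.0 - 0.004 * cycles                  # smooth capacity fade (Ah), toy data
capacity[60:] += 0.03                            # small regeneration jump at cycle 60

experts, bounds = [], [(0, 60), (60, 120)]       # two trajectory segments
for lo, hi in bounds:
    X, y = cycles[lo:hi, None], capacity[lo:hi]
    gpr = GaussianProcessRegressor(kernel=RBF(20.0) + WhiteKernel(1e-4),
                                   normalize_y=True)
    experts.append(gpr.fit(X, y))

mean, std = experts[-1].predict(np.array([[130.0]]), return_std=True)  # extrapolate
# RUL: cycles until the predicted capacity crosses an end-of-life threshold.
```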
Quantifying the predictive consequences of model error with linear subspace analysis
White, Jeremy T.; Doherty, John E.; Hughes, Joseph D.
2014-01-01
All computer models are simplified and imperfect simulators of complex natural systems. The discrepancy arising from simplification induces bias in model predictions, which may be amplified by the process of model calibration. This paper presents a new method to identify and quantify the predictive consequences of calibrating a simplified computer model. The method is based on linear theory, and it scales efficiently to the large numbers of parameters and observations characteristic of groundwater and petroleum reservoir models. The method is applied to a range of predictions made with a synthetic integrated surface-water/groundwater model with thousands of parameters. Several different observation processing strategies and parameterization/regularization approaches are examined in detail, including use of the Karhunen-Loève parameter transformation. Predictive bias arising from model error is shown to be prediction specific and often invisible to the modeler. The amount of calibration-induced bias is influenced by several factors, including how expert knowledge is applied in the design of parameterization schemes, the number of parameters adjusted during calibration, how observations and model-generated counterparts are processed, and the level of fit with observations achieved through calibration. Failure to properly implement any of these factors in a prediction-specific manner may increase the potential for predictive bias in ways that are not visible to the calibration and uncertainty analysis process.
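A numpy sketch of the linear subspace analysis, under the usual assumptions: an SVD of the weighted Jacobian separates a calibration solution space from a null space, and predictive error variance is the sum of a null-space (simplification) term and a measurement-noise term. All matrices below are random stand-ins.

```python
# SVD-based split of parameter space into solution and null spaces, and the two
# contributions to linear predictive error variance. Data are placeholders.
import numpy as np

rng = np.random.default_rng(7)
J = rng.normal(size=(40, 200))                  # weighted Jacobian: 40 obs, 200 params
y = rng.normal(size=200)                        # sensitivity of prediction to params
Cp = 0.5 * np.eye(200)                          # prior (innate) parameter variability
Ce = 0.1 * np.eye(40)                           # measurement-noise covariance

U, s, Vt = np.linalg.svd(J, full_matrices=True)
k = np.sum(s > 1e-6 * s[0])                     # truncation = solution-space dimension
V1, V2 = Vt[:k].T, Vt[k:].T                     # solution space, null space

null_term = y @ V2 @ V2.T @ Cp @ V2 @ V2.T @ y          # error from uninferable detail
G = V1 @ np.diag(1 / s[:k]) @ U[:, :k].T                 # truncated pseudo-inverse of J
noise_term = y @ G @ Ce @ G.T @ y                        # error from measurement noise
pred_error_var = null_term + noise_term
```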
Ding, Jinliang; Chai, Tianyou; Wang, Hong
2011-03-01
This paper presents a novel offline modeling approach for product quality prediction in mineral processing, which consists of a number of unit processes in series. The prediction of the product quality of the whole mineral process (i.e., the mixed concentrate grade) plays an important role, and the establishment of its predictive model is a key issue for plantwide optimization. For this purpose, a hybrid modeling approach for mixed concentrate grade prediction is proposed, consisting of a linear model and a nonlinear model. The least-squares support vector machine is adopted to establish the nonlinear model. The inputs of the predictive model are the performance indices of each unit process, while the output is the mixed concentrate grade. In this paper, the model parameter selection is transformed into the shape control of the probability density function (PDF) of the modeling error. In this context, both PDF-control-based and minimum-entropy-based model parameter selection approaches are proposed. Indeed, this is the first time that the PDF shape control idea has been used for system modeling: the key idea is to tune model parameters so that either the modeling error PDF follows a target PDF or the modeling error entropy is minimized. Experimental results using real plant data and a comparison of the two approaches are discussed, showing the effectiveness of the proposed approaches.
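Minimum-entropy parameter selection can be sketched as follows, with a standard SVR standing in for the least-squares support vector machine and a kernel density estimate supplying the error PDF; the data and candidate values are illustrative.

```python
# Sketch: for each candidate hyperparameter, fit the model, estimate the PDF of
# the modeling errors by KDE, and keep the candidate with the lowest entropy.
import numpy as np
from scipy.stats import gaussian_kde
from sklearn.svm import SVR

rng = np.random.default_rng(9)
X = rng.uniform(size=(200, 5))                   # unit-process performance indices
y = X @ np.array([0.5, 0.2, 0.1, 0.05, 0.15]) + rng.normal(0, 0.05, 200)  # grade

def error_entropy(errors):
    kde = gaussian_kde(errors)
    pts = np.linspace(errors.min(), errors.max(), 200)
    p = kde(pts)
    return -np.trapz(p * np.log(p + 1e-12), pts)  # differential entropy estimate

best = min((error_entropy(y - SVR(C=c).fit(X, y).predict(X)), c) for c in (0.1, 1, 10))
```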
Textile composite processing science
NASA Technical Reports Server (NTRS)
Loos, Alfred C.; Hammond, Vincent H.; Kranbuehl, David E.; Hasko, Gregory H.
1993-01-01
A multi-dimensional model of the Resin Transfer Molding (RTM) process was developed for the prediction of the infiltration behavior of a resin into an anisotropic fiber preform. Frequency dependent electromagnetic sensing (FDEMS) was developed for in-situ monitoring of the RTM process. Flow visualization and mold filling experiments were conducted to verify sensor measurements and model predictions. Test results indicated good agreement between model predictions, sensor readings, and experimental data.
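For intuition on the mold-filling predictions, a constant-pressure, one-dimensional Darcy infiltration gives a flow front advancing as the square root of time; the property values below are generic assumptions, not those of the study.

```python
# Back-of-envelope check consistent with Darcy-law infiltration of a preform:
# under constant injection pressure, x_front = sqrt(2*K*dP*t/(mu*phi)).
import numpy as np

K = 1e-10        # preform permeability along the flow direction, m^2 (assumed)
phi = 0.5        # preform porosity (assumed)
mu = 0.2         # resin viscosity, Pa*s (assumed)
dP = 2e5         # injection pressure drop, Pa (assumed)

t = np.linspace(0, 600, 7)                        # s
x_front = np.sqrt(2 * K * dP * t / (mu * phi))    # flow-front position, m
fill_time = (0.3 ** 2) * mu * phi / (2 * K * dP)  # time to fill a 0.3 m mold, s
```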
ERIC Educational Resources Information Center
Hickok, Gregory
2012-01-01
Speech recognition is an active process that involves some form of predictive coding. This statement is relatively uncontroversial. What is less clear is the source of the prediction. The dual-stream model of speech processing suggests that there are two possible sources of predictive coding in speech perception: the motor speech system and the…
Calibration and prediction of removal function in magnetorheological finishing.
Dai, Yifan; Song, Ci; Peng, Xiaoqiang; Shi, Feng
2010-01-20
A calibrated and predictive model of the removal function has been established based on the analysis of a magnetorheological finishing (MRF) process. By introducing an efficiency coefficient of the removal function, the model can be used to calibrate the removal function in a MRF figuring process and to accurately predict the removal function of a workpiece to be polished whose material is different from the spot part. Its correctness and feasibility have been validated by simulations. Furthermore, applying this model to the MRF figuring experiments, the efficiency coefficient of the removal function can be identified accurately to make the MRF figuring process deterministic and controllable. Therefore, all the results indicate that the calibrated and predictive model of the removal function can improve the finishing determinacy and increase the model applicability in a MRF process.
NASA Astrophysics Data System (ADS)
Müller, M. F.; Thompson, S. E.
2016-02-01
The prediction of flow duration curves (FDCs) in ungauged basins remains an important task for hydrologists given the practical relevance of FDCs for water management and infrastructure design. Predicting FDCs in ungauged basins typically requires spatial interpolation of statistical or model parameters. This task is complicated if climate becomes non-stationary, as the prediction challenge now also requires extrapolation through time. In this context, process-based models for FDCs that mechanistically link the streamflow distribution to climate and landscape factors may have an advantage over purely statistical methods to predict FDCs. This study compares a stochastic (process-based) and statistical method for FDC prediction in both stationary and non-stationary contexts, using Nepal as a case study. Under contemporary conditions, both models perform well in predicting FDCs, with Nash-Sutcliffe coefficients above 0.80 in 75 % of the tested catchments. The main drivers of uncertainty differ between the models: parameter interpolation was the main source of error for the statistical model, while violations of the assumptions of the process-based model represented the main source of its error. The process-based approach performed better than the statistical approach in numerical simulations with non-stationary climate drivers. The predictions of the statistical method under non-stationary rainfall conditions were poor if (i) local runoff coefficients were not accurately determined from the gauge network, or (ii) streamflow variability was strongly affected by changes in rainfall. A Monte Carlo analysis shows that the streamflow regimes in catchments characterized by frequent wet-season runoff and a rapid, strongly non-linear hydrologic response are particularly sensitive to changes in rainfall statistics. In these cases, process-based prediction approaches are favored over statistical models.
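The evaluation above can be reproduced in miniature: sort flows into an empirical FDC via a plotting position and score a predicted curve with the Nash-Sutcliffe coefficient; the flows here are synthetic.

```python
# Sketch: empirical flow duration curve plus Nash-Sutcliffe scoring of a
# predicted FDC. Streamflows are synthetic stand-ins.
import numpy as np

q_obs = np.sort(np.random.default_rng(3).lognormal(1.0, 1.0, 365))[::-1]
exceedance = np.arange(1, len(q_obs) + 1) / (len(q_obs) + 1)  # Weibull plotting position

q_pred = q_obs * (1 + np.random.default_rng(4).normal(0, 0.1, len(q_obs)))  # a "model" FDC
nse = 1 - np.sum((q_pred - q_obs) ** 2) / np.sum((q_obs - q_obs.mean()) ** 2)
```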
Grossberg, Stephen
2009-01-01
An intimate link exists between the predictive and learning processes in the brain. Perceptual/cognitive and spatial/motor processes use complementary predictive mechanisms to learn, recognize, attend and plan about objects in the world, determine their current value, and act upon them. Recent neural models clarify these mechanisms and how they interact in cortical and subcortical brain regions. The present paper reviews and synthesizes data and models of these processes, and outlines a unified theory of predictive brain processing. PMID:19528003
NASA Astrophysics Data System (ADS)
Roberts, Michael J.; Braun, Noah O.; Sinclair, Thomas R.; Lobell, David B.; Schlenker, Wolfram
2017-09-01
We compare predictions of a simple process-based crop model (Soltani and Sinclair 2012), a simple statistical model (Schlenker and Roberts 2009), and a combination of both models to actual maize yields on a large, representative sample of farmer-managed fields in the Corn Belt region of the United States. After statistical post-model calibration, the process model (Simple Simulation Model, or SSM) predicts actual outcomes slightly better than the statistical model, but the combined model performs significantly better than either model. The SSM, statistical model and combined model all show similar relationships with precipitation, while the SSM better accounts for temporal patterns of precipitation, vapor pressure deficit and solar radiation. The statistical and combined models show a more negative impact associated with extreme heat for which the process model does not account. Due to the extreme heat effect, predicted impacts under uniform climate change scenarios are considerably more severe for the statistical and combined models than for the process-based model.
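The post-model calibration and combination steps amount to regressions of observed yields on model output, one model at a time and then jointly; the sketch below uses synthetic yields in place of the field data.

```python
# Sketch: statistical post-model calibration and model combination, compared by
# cross-validated skill. All yields and model outputs are synthetic.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(11)
truth = rng.normal(160, 20, 500)                 # field-level maize yields (placeholder)
ssm = truth + rng.normal(0, 12, 500)             # process-model predictions
stat = truth + rng.normal(0, 11, 500)            # statistical-model predictions

for name, X in [("SSM", ssm[:, None]), ("statistical", stat[:, None]),
                ("combined", np.column_stack([ssm, stat]))]:
    r2 = cross_val_score(LinearRegression(), X, truth, cv=5, scoring="r2").mean()
    print(name, round(r2, 3))                    # combined model tends to score highest
```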
Prediction and assimilation of surf-zone processes using a Bayesian network: Part I: Forward models
Plant, Nathaniel G.; Holland, K. Todd
2011-01-01
Prediction of coastal processes, including waves, currents, and sediment transport, can be obtained from a variety of detailed geophysical-process models with many simulations showing significant skill. This capability supports a wide range of research and applied efforts that can benefit from accurate numerical predictions. However, the predictions are only as accurate as the data used to drive the models and, given the large temporal and spatial variability of the surf zone, inaccuracies in data are unavoidable such that useful predictions require corresponding estimates of uncertainty. We demonstrate how a Bayesian-network model can be used to provide accurate predictions of wave-height evolution in the surf zone given very sparse and/or inaccurate boundary-condition data. The approach is based on a formal treatment of a data-assimilation problem that takes advantage of significant reduction of the dimensionality of the model system. We demonstrate that predictions of a detailed geophysical model of the wave evolution are reproduced accurately using a Bayesian approach. In this surf-zone application, forward prediction skill was 83%, and uncertainties in the model inputs were accurately transferred to uncertainty in output variables. We also demonstrate that if modeling uncertainties were not conveyed to the Bayesian network (i.e., perfect data or model were assumed), then overly optimistic prediction uncertainties were computed. More consistent predictions and uncertainties were obtained by including model-parameter errors as a source of input uncertainty. Improved predictions (skill of 90%) were achieved because the Bayesian network simultaneously estimated optimal parameters while predicting wave heights.
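A toy discrete update in the spirit of the Bayesian-network approach: soft evidence on an offshore wave-height node propagates through a conditional probability table to a nearshore distribution, carrying uncertainty along. Bins and probabilities are illustrative.

```python
# Two-node discrete Bayesian-network sketch: uncertain offshore evidence
# updates the offshore belief, which propagates to a nearshore distribution.
import numpy as np

offshore_prior = np.array([0.3, 0.5, 0.2])      # P(offshore = low/med/high)
cpt = np.array([[0.8, 0.2, 0.0],                # P(nearshore | offshore=low)
                [0.2, 0.6, 0.2],                # P(nearshore | offshore=med)
                [0.0, 0.3, 0.7]])               # P(nearshore | offshore=high)

likelihood = np.array([0.1, 0.7, 0.2])          # noisy offshore observation
posterior = offshore_prior * likelihood
posterior /= posterior.sum()                    # updated offshore belief

nearshore = posterior @ cpt                     # predicted nearshore distribution
```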
Generalized role for the cerebellum in encoding internal models: evidence from semantic processing.
Moberget, Torgeir; Gullesen, Eva Hilland; Andersson, Stein; Ivry, Richard B; Endestad, Tor
2014-02-19
The striking homogeneity of cerebellar microanatomy is strongly suggestive of a corresponding uniformity of function. Consequently, theoretical models of the cerebellum's role in motor control should offer important clues regarding cerebellar contributions to cognition. One such influential theory holds that the cerebellum encodes internal models, neural representations of the context-specific dynamic properties of an object, to facilitate predictive control when manipulating the object. The present study examined whether this theoretical construct can shed light on the contribution of the cerebellum to language processing. We reasoned that the cerebellum might perform a similar coordinative function when the context provided by the initial part of a sentence can be highly predictive of the end of the sentence. Using functional MRI in humans we tested two predictions derived from this hypothesis, building on previous neuroimaging studies of internal models in motor control. First, focal cerebellar activation-reflecting the operation of acquired internal models-should be enhanced when the linguistic context leads terminal words to be predictable. Second, more widespread activation should be observed when such predictions are violated, reflecting the processing of error signals that can be used to update internal models. Both predictions were confirmed, with predictability and prediction violations associated with increased blood oxygenation level-dependent signal in the posterior cerebellum (Crus I/II). Our results provide further evidence for cerebellar involvement in predictive language processing and suggest that the notion of cerebellar internal models may be extended to the language domain.
2017-09-01
This dissertation explores the efficacy of statistical post-processing methods applied downstream of these dynamical model components, using a hierarchical multivariate Bayesian approach. Keywords: Bayesian hierarchical modeling, Markov chain Monte Carlo methods, Metropolis algorithm, machine learning, atmospheric prediction.
Testing process predictions of models of risky choice: a quantitative model comparison approach
Pachur, Thorsten; Hertwig, Ralph; Gigerenzer, Gerd; Brandstätter, Eduard
2013-01-01
This article presents a quantitative model comparison contrasting the process predictions of two prominent views on risky choice. One view assumes a trade-off between probabilities and outcomes (or non-linear functions thereof) and the separate evaluation of risky options (expectation models). Another view assumes that risky choice is based on comparative evaluation, limited search, aspiration levels, and the forgoing of trade-offs (heuristic models). We derived quantitative process predictions for a generic expectation model and for a specific heuristic model, namely the priority heuristic (Brandstätter et al., 2006), and tested them in two experiments. The focus was on two key features of the cognitive process: acquisition frequencies (i.e., how frequently individual reasons are looked up) and direction of search (i.e., gamble-wise vs. reason-wise). In Experiment 1, the priority heuristic predicted direction of search better than the expectation model (although neither model predicted the acquisition process perfectly); acquisition frequencies, however, were inconsistent with both models. Additional analyses revealed that these frequencies were primarily a function of what Rubinstein (1988) called “similarity.” In Experiment 2, the quantitative model comparison approach showed that people seemed to rely more on the priority heuristic in difficult problems, but to make more trade-offs in easy problems. This finding suggests that risky choice may be based on a mental toolbox of strategies. PMID:24151472
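For concreteness, the priority heuristic for two-outcome gambles in the gain domain examines reasons in a fixed order and stops at the first that discriminates; the sketch below uses the commonly cited aspiration levels (one tenth of the maximum gain, and 0.1 on the probability scale) and leaves out refinements such as rounding to prominent numbers.

```python
# Sketch of the priority heuristic (Brandstätter et al., 2006) for gains:
# compare minimum gains, then probabilities of the minimum gains, then maximum
# gains, stopping at the first reason that discriminates.
def priority_heuristic(g1, g2):
    """Each gamble: (min_gain, p_min, max_gain, p_max). Returns 0 or 1."""
    max_gain = max(g1[2], g2[2])
    if abs(g1[0] - g2[0]) >= 0.1 * max_gain:     # reason 1: minimum gains
        return 0 if g1[0] > g2[0] else 1
    if abs(g1[1] - g2[1]) >= 0.1:                # reason 2: P(minimum gain), lower wins
        return 0 if g1[1] < g2[1] else 1
    return 0 if g1[2] > g2[2] else 1             # reason 3: maximum gains

# Classic example: risky (4000, p=.8; else 0) vs sure 3000 -> stops at reason 1
choice = priority_heuristic((0, 0.2, 4000, 0.8), (3000, 1.0, 3000, 1.0))  # -> 1
```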
Morin, Xavier; Thuiller, Wilfried
2009-05-01
Obtaining reliable predictions of species range shifts under climate change is a crucial challenge for ecologists and stakeholders. At the continental scale, niche-based models have been widely used in the last 10 years to predict the potential impacts of climate change on species distributions all over the world, although these models do not include any mechanistic relationships. In contrast, species-specific, process-based predictions remain scarce at the continental scale. This is regrettable because to secure relevant and accurate predictions it is always desirable to compare predictions derived from different kinds of models applied independently to the same set of species and using the same raw data. Here we compare predictions of range shifts under climate change scenarios for 2100 derived from niche-based models with those of a process-based model for 15 North American boreal and temperate tree species. A general pattern emerged from our comparisons: niche-based models tend to predict a stronger level of extinction and a greater proportion of colonization than the process-based model. This result likely arises because niche-based models do not take phenotypic plasticity and local adaptation into account. Nevertheless, as the two kinds of models rely on different assumptions, their complementarity is revealed by common findings. Both modeling approaches highlight a major potential limitation on species tracking their climatic niche because of migration constraints and identify similar zones where species extirpation is likely. Such convergent predictions from models built on very different principles provide a useful way to offset uncertainties at the continental scale. This study shows that the use in concert of both approaches with their own caveats and advantages is crucial to obtain more robust results and that comparisons among models are needed in the near future to gain accuracy regarding predictions of range shifts under climate change.
Predictability of process resource usage - A measurement-based study on UNIX
NASA Technical Reports Server (NTRS)
Devarakonda, Murthy V.; Iyer, Ravishankar K.
1989-01-01
A probabilistic scheme is developed to predict process resource usage in UNIX. Given the identity of the program being run, the scheme predicts CPU time, file I/O, and memory requirements of a process at the beginning of its life. The scheme uses a state-transition model of the program's resource usage in its past executions for prediction. The states of the model are the resource regions obtained from an off-line cluster analysis of processes run on the system. The proposed method is shown to work on data collected from a VAX 11/780 running 4.3 BSD UNIX. The results show that the predicted values correlate well with the actual. The correlation coefficient between the predicted and actual values of CPU time is 0.84. Errors in prediction are mostly small. Some 82 percent of errors in CPU time prediction are less than 0.5 standard deviations of process CPU time.
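The state-transition scheme can be sketched directly: cluster-derived resource regions act as Markov states, and a transition matrix counted from a program's past runs predicts the next run's region; the history below is illustrative.

```python
# Sketch: resource regions (from off-line clustering) as Markov states; a
# per-program transition matrix predicts the next invocation's resource state.
import numpy as np

# states: 0=light, 1=medium, 2=heavy resource region (illustrative labels)
history = [0, 0, 1, 0, 2, 1, 0, 0, 1, 1, 0]     # past runs of one program

T = np.zeros((3, 3))
for a, b in zip(history, history[1:]):
    T[a, b] += 1                                 # count observed transitions
T /= T.sum(axis=1, keepdims=True)                # row-normalize to probabilities

last = history[-1]
predicted_state = T[last].argmax()               # most likely next resource region
```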
Predicting subsurface contaminant transport and transformation requires mathematical models based on a variety of physical, chemical, and biological processes. The mathematical model is an attempt to quantitatively describe observed processes in order to permit systematic forecasts...
NASA Astrophysics Data System (ADS)
Liu, Zhenchen; Lu, Guihua; He, Hai; Wu, Zhiyong; He, Jian
2018-01-01
Reliable drought prediction is fundamental for water resource managers to develop and implement drought mitigation measures. Considering that drought development is closely related to the spatial-temporal evolution of large-scale circulation patterns, we developed a conceptual prediction model of seasonal drought processes based on atmospheric and oceanic standardized anomalies (SAs). Empirical orthogonal function (EOF) analysis is first applied to drought-related SAs at 200 and 500 hPa geopotential height (HGT) and sea surface temperature (SST). Subsequently, SA-based predictors are built based on the spatial pattern of the first EOF modes. This drought prediction model is essentially the synchronous statistical relationship between 90-day-accumulated atmospheric-oceanic SA-based predictors and SPI3 (3-month standardized precipitation index), calibrated using a simple stepwise regression method. Predictor computation is based on forecast atmospheric-oceanic products retrieved from the NCEP Climate Forecast System Version 2 (CFSv2), indicating the lead time of the model depends on that of CFSv2. The model can make seamless drought predictions for operational use after a year-to-year calibration. Model application to four recent severe regional drought processes in China indicates its good performance in predicting seasonal drought development, despite its weakness in predicting drought severity. Overall, the model can be a worthy reference for seasonal water resource management in China.
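The EOF step reduces to an SVD of the standardized-anomaly field; projecting onto the leading pattern yields a predictor series for the SPI3 regression. The gridded data here are synthetic stand-ins for the HGT and SST anomalies.

```python
# Sketch: leading EOF of a standardized-anomaly field via SVD, projected to a
# principal-component predictor and regressed against SPI3. Data are synthetic.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(5)
anom = rng.normal(size=(240, 500))              # months x grid points, standardized

U, s, Vt = np.linalg.svd(anom - anom.mean(0), full_matrices=False)
eof1 = Vt[0]                                    # leading spatial pattern (EOF1)
pc1 = anom @ eof1                               # SA-based predictor time series

spi3 = 0.5 * pc1 / pc1.std() + rng.normal(0, 0.5, 240)   # synthetic target
model = LinearRegression().fit(pc1[:, None], spi3)       # one step of the regression
```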
NASA Technical Reports Server (NTRS)
Kranbuehl, D.; Kingsley, P.; Hart, S.; Loos, A.; Hasko, G.; Dexter, B.
1992-01-01
In-situ frequency dependent electromagnetic sensors (FDEMS) and the Loos resin transfer model have been used to select and control the processing properties of an epoxy resin during liquid pressure RTM impregnation and cure. Once correlated with viscosity and degree of cure, the FDEMS sensor monitors, and the RTM processing model predicts, the reaction advancement of the resin, its viscosity, and the impregnation of the fabric. This provides a direct means for predicting, monitoring, and controlling the liquid RTM process in-situ in the mold throughout fabrication, including the effects of time, temperature, vacuum, and pressure. Most importantly, the FDEMS sensor-model system has been developed to make intelligent decisions, thereby automating the liquid RTM process and removing the need for operator direction.
NASA Astrophysics Data System (ADS)
Maslowski, W.
2017-12-01
The Regional Arctic System Model (RASM) has been developed to better understand the operation of the Arctic System at process scale and to improve prediction of its change across a spectrum of time scales. RASM is a pan-Arctic, fully coupled ice-ocean-atmosphere-land model with a marine biogeochemistry extension to the ocean and sea ice models. The main goal of our research is to advance a system-level understanding of critical processes and feedbacks in the Arctic and their links with the Earth System. A secondary, equally important objective is to identify model needs for new or additional observations to better understand such processes and to help constrain models. Finally, RASM has been used to produce sea ice forecasts for September 2016 and 2017 as contributions to the Sea Ice Outlook of the Sea Ice Prediction Network. Future RASM forecasts are likely to include increased resolution for model components and ecosystem predictions. Such research directly supports US environmental assessment and prediction needs, including those of the U.S. Navy, the Department of Defense, and the recent IARPC Arctic Research Plan 2017-2021. In addition to an overview of RASM technical details, selected model results are presented from a hierarchy of climate models, together with available observations in the region, to better understand potential oceanic contributions to polar amplification. RASM simulations are analyzed to evaluate model skill in representing seasonal climatology as well as interannual and multi-decadal climate variability and predictions. Selected physical processes and resulting feedbacks are discussed to emphasize the need for fully coupled climate model simulations and high model resolution, and to highlight the sensitivity of simulated sea ice states to scale-dependent model parameterizations controlling ice dynamics, thermodynamics, and coupling with the atmosphere and ocean.
NASA Technical Reports Server (NTRS)
Johnston, John D.; Parrish, Keith; Howard, Joseph M.; Mosier, Gary E.; McGinnis, Mark; Bluth, Marcel; Kim, Kevin; Ha, Hong Q.
2004-01-01
This is a continuation of a series of papers on modeling activities for JWST. The structural-thermal-optical ("STOP") analysis process is used to predict the effect of thermal distortion on optical performance. The benchmark STOP analysis for JWST assesses the effect of an observatory slew on wavefront error. The paper begins with an overview of multi-disciplinary engineering analysis, or integrated modeling, which is a critical element of the JWST mission. The STOP analysis process is then described, consisting of the following steps: thermal analysis, structural analysis, and optical analysis. Temperatures predicted using geometric and thermal math models are mapped to the structural finite element model in order to predict thermally-induced deformations. Motions and deformations at optical surfaces are input to optical models, and optical performance is predicted using either an optical ray trace or WFE estimation techniques based on prior ray traces or first-order optics. Following the discussion of the analysis process, results are presented based on models representing the design at the time of the System Requirements Review. In addition to baseline performance predictions, sensitivity studies are performed to assess modeling uncertainties. Of particular interest is the sensitivity of optical performance to uncertainties in temperature predictions and variations in metal properties. The paper concludes with a discussion of modeling uncertainty as it pertains to STOP analysis.
NASA Astrophysics Data System (ADS)
Roy, Swagata; Biswas, Srija; Babu, K. Arun; Mandal, Sumantra
2018-05-01
A novel constitutive model has been developed for predicting the flow response of a super-austenitic stainless steel over a wide range of strains (0.05-0.6), temperatures (1173-1423 K) and strain rates (0.001-1 s⁻¹). The predictability of this new model has been compared with the existing Johnson-Cook (JC) and modified Zerilli-Armstrong (M-ZA) models. The JC model is not well suited for flow prediction, exhibiting a very high (~36%) average absolute error (δ) and low (~0.92) correlation coefficient (R). In contrast, the M-ZA model shows a relatively lower δ (~13%) and higher R (~0.96); its incorporation of couplings between processing parameters lets it predict better than the JC model. However, flow analyses of the studied alloy reveal additional synergistic influences of strain and strain rate, as well as of strain, temperature, and strain rate, beyond those considered in the M-ZA model. Hence, the new phenomenological model has been formulated to incorporate all the individual and synergistic effects of the processing parameters together with a 'strain-shifting' parameter. The proposed model predicts the flow behavior of the alloy with much better correlation and generalization than the M-ZA model, as substantiated by its lower δ (~7.9%) and higher R (~0.99).
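For reference, the Johnson-Cook form entering the comparison is the standard three-factor product of strain hardening, strain-rate sensitivity, and thermal softening; the constants below are placeholders, not the values fitted for this steel.

```python
# Standard Johnson-Cook flow stress: (A + B*eps^n)(1 + C*ln(rate/rate0))(1 - T*^m).
# All material constants are illustrative placeholders.
import numpy as np

def johnson_cook(strain, strain_rate, T, A=280.0, B=500.0, n=0.4, C=0.03,
                 m=1.0, eps0=0.001, T_ref=1173.0, T_melt=1673.0):
    T_star = (T - T_ref) / (T_melt - T_ref)          # homologous temperature
    return ((A + B * strain ** n)                    # strain hardening
            * (1 + C * np.log(strain_rate / eps0))   # strain-rate sensitivity
            * (1 - T_star ** m))                     # thermal softening

sigma = johnson_cook(strain=0.3, strain_rate=0.1, T=1273.0)  # flow stress, MPa
```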
NASA Astrophysics Data System (ADS)
Wang, Haixia; Suo, Tongchuan; Wu, Xiaolin; Zhang, Yue; Wang, Chunhua; Yu, Heshui; Li, Zheng
2018-03-01
The control of batch-to-batch quality variations remains a challenging task for pharmaceutical industries, e.g., traditional Chinese medicine (TCM) manufacturing. One difficult problem is to produce pharmaceutical products with consistent quality from raw material of large quality variations. In this paper, an integrated methodology combining the near infrared spectroscopy (NIRS) and dynamic predictive modeling is developed for the monitoring and control of the batch extraction process of licorice. With the spectra data in hand, the initial state of the process is firstly estimated with a state-space model to construct a process monitoring strategy for the early detection of variations induced by the initial process inputs such as raw materials. Secondly, the quality property of the end product is predicted at the mid-course during the extraction process with a partial least squares (PLS) model. The batch-end-time (BET) is then adjusted accordingly to minimize the quality variations. In conclusion, our study shows that with the help of the dynamic predictive modeling, NIRS can offer the past and future information of the process, which enables more accurate monitoring and control of process performance and product quality.
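The mid-course prediction step is a standard PLS regression from NIR spectra to the end-product property, sketched here with synthetic spectra in place of the licorice-extraction data.

```python
# Sketch: PLS model mapping mid-course NIR spectra to end-product quality.
# Spectra, wavelengths, and quality values are synthetic placeholders.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(8)
spectra = rng.normal(size=(60, 700))            # 60 batches x 700 NIR wavelengths
quality = spectra[:, 100] * 3 + rng.normal(0, 0.1, 60)   # end-product property

X_tr, X_te, y_tr, y_te = train_test_split(spectra, quality, random_state=0)
pls = PLSRegression(n_components=5).fit(X_tr, y_tr)
y_hat = pls.predict(X_te).ravel()               # predicted end quality at mid-course
```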
Efficient Reduction and Analysis of Model Predictive Error
NASA Astrophysics Data System (ADS)
Doherty, J.
2006-12-01
Most groundwater models are calibrated against historical measurements of head and other system states before being used to make predictions in a real-world context. Through the calibration process, parameter values are estimated or refined such that the model is able to reproduce historical behaviour of the system at pertinent observation points reasonably well. Predictions made by the model are deemed to have greater integrity because of this. Unfortunately, predictive integrity is not as easy to achieve as many groundwater practitioners would like to think. The level of parameterisation detail estimable through the calibration process (especially where estimation takes place on the basis of heads alone) is strictly limited, even where full use is made of modern mathematical regularisation techniques such as those encapsulated in the PEST calibration package. (Use of these mechanisms allows more information to be extracted from a calibration dataset than is possible using simpler regularisation devices such as zones of piecewise constancy.) Where a prediction depends on aspects of parameterisation detail that are simply not inferable through the calibration process (which is often the case for predictions related to contaminant movement, and/or many aspects of groundwater/surface water interaction), then that prediction may be just as much in error as it would have been if the model had not been calibrated at all. Model predictive error arises from two sources. These are (a) the presence of measurement noise within the calibration dataset through which linear combinations of parameters spanning the "calibration solution space" are inferred, and (b) the sensitivity of the prediction to members of the "calibration null space" spanned by linear combinations of parameters which are not inferable through the calibration process. The magnitude of the former contribution depends on the level of measurement noise. The magnitude of the latter contribution (which often dominates the former) depends on the "innate variability" of hydraulic properties within the model domain. Knowledge of both of these is a prerequisite for characterisation of the magnitude of possible model predictive error. Unfortunately, in most cases, such knowledge is incomplete and subjective. Nevertheless, useful analysis of model predictive error can still take place. The present paper briefly discusses the means by which mathematical regularisation can be employed in the model calibration process in order to extract as much information as possible on hydraulic property heterogeneity prevailing within the model domain, thereby reducing predictive error to the lowest that can be achieved on the basis of that dataset. It then demonstrates the means by which predictive error variance can be quantified based on information supplied by the regularised inversion process. Both linear and nonlinear predictive error variance analysis is demonstrated using a number of real-world and synthetic examples.
Measuring and modelling the structure of chocolate
NASA Astrophysics Data System (ADS)
Le Révérend, Benjamin J. D.; Fryer, Peter J.; Smart, Ian; Bakalis, Serafim
2015-01-01
The cocoa butter present in chocolate exists as six different polymorphs. To achieve the desired crystal form (βV), traditional chocolate manufacturers use relatively slow cooling (<2°C/min). A newer generation of rapid cooling systems has been suggested, requiring further understanding of fat crystallisation. To allow better control and understanding of both traditional and newer rapid cooling processes, it is necessary to understand both heat transfer and crystallisation kinetics. The proposed model aims to predict the temperature in the chocolate products during processing as well as the crystal structure of cocoa butter throughout the process. A set of ordinary differential equations describes the kinetics of fat crystallisation. The parameters were obtained by fitting the model to a set of DSC curves. The heat transfer equations were coupled to the kinetic model and solved using commercially available CFD software. A method using single-crystal XRD with a novel subtraction procedure was developed to quantify the cocoa butter structure in chocolate directly, and the results were compared to those predicted by the model. The model was proven to predict the phase change temperature during processing accurately (±1°C). Furthermore, it was possible to correctly predict phase changes and polymorphic transitions. The good agreement between the model and experimental data on the model geometry allows better design and control of industrial processes.
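A minimal sketch of the ODE-based crystallisation kinetics idea follows. The rate constants, cooling profile, and two-phase structure (an unstable polymorph transforming into a stable βV-like form) are toy assumptions, not the paper's fitted DSC parameters:

```python
import numpy as np
from scipy.integrate import solve_ivp

def T(t):                         # imposed cooling profile, deg C
    return max(20.0, 45.0 - 1.5 * t)

def rates(t, x):
    unstable, stable = x
    # temperature-dependent rate constants (toy supercooling-driven form:
    # rates increase as the chocolate cools)
    k1 = 0.5 * np.exp(-0.1 * T(t))    # liquid -> unstable polymorph
    k2 = 0.05 * np.exp(-0.05 * T(t))  # unstable -> stable (betaV-like) form
    liquid = max(0.0, 1.0 - unstable - stable)
    return [k1 * liquid - k2 * unstable, k2 * unstable]

sol = solve_ivp(rates, (0.0, 60.0), [0.0, 0.0], dense_output=True)
print("final solid fractions (unstable, stable):", sol.y[:, -1])
```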
Gaussian Process Regression (GPR) Representation in Predictive Model Markup Language (PMML)
Park, J.; Lechevalier, D.; Ak, R.; Ferguson, M.; Law, K. H.; Lee, Y.-T. T.; Rachuri, S.
2017-01-01
This paper describes Gaussian process regression (GPR) models presented in the predictive model markup language (PMML). PMML is an extensible markup language (XML)-based standard language used to represent data-mining and predictive analytic models, as well as pre- and post-processed data. The previous PMML version, PMML 4.2, did not provide capabilities for representing probabilistic (stochastic) machine-learning algorithms that are widely used for constructing predictive models taking the associated uncertainties into consideration. The newly released PMML version 4.3, which includes the GPR model, provides new features: confidence bounds and distributions for the predictive estimations. Both features are needed to establish the foundation for uncertainty quantification analysis. Among various probabilistic machine-learning algorithms, GPR has been widely used for approximating a target function because of its capability of representing complex input and output relationships without predefining a set of basis functions, and of predicting a target output with uncertainty quantification. GPR is being employed in various manufacturing data-analytics applications, which necessitates representing this model in a standardized form for easy and rapid deployment. In this paper, we present a GPR model and its representation in PMML. Furthermore, we demonstrate a prototype using a real data set in the manufacturing domain. PMID:29202125
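As an illustration of the GPR capability the PMML GaussianProcessModel element is meant to capture (a predictive mean plus confidence bounds), here is a minimal scikit-learn sketch on synthetic data; it does not touch the PMML serialization itself:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(2)
X = rng.uniform(0, 10, size=(40, 1))
y = np.sin(X).ravel() + rng.normal(scale=0.1, size=40)

gpr = GaussianProcessRegressor(kernel=RBF() + WhiteKernel()).fit(X, y)
X_new = np.linspace(0, 10, 5).reshape(-1, 1)
mean, std = gpr.predict(X_new, return_std=True)
lower, upper = mean - 1.96 * std, mean + 1.96 * std   # 95% confidence bounds
print(np.c_[mean, lower, upper])
```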
Uribe-Rivera, David E; Soto-Azat, Claudio; Valenzuela-Sánchez, Andrés; Bizama, Gustavo; Simonetti, Javier A; Pliscoff, Patricio
2017-07-01
Climate change is a major threat to biodiversity; the development of models that reliably predict its effects on species distributions is a priority for conservation biogeography. Two of the main issues for accurate temporal predictions from species distribution models (SDMs) are model extrapolation and unrealistic dispersal scenarios. We assessed the consequences of these issues for the accuracy of climate-driven SDM predictions for the dispersal-limited Darwin's frog Rhinoderma darwinii in South America. We calibrated models using historical data (1950-1975) and projected them across 40 yr to predict distribution under current climatic conditions, assessing predictive accuracy through the area under the ROC curve (AUC) and the True Skill Statistic (TSS), contrasting binary model predictions against a temporally independent validation data set (i.e., current presences/absences). To assess the effects of incorporating dispersal processes we compared the predictive accuracy of dispersal-constrained models with SDMs without dispersal limitation; and to assess the effects of model extrapolation on the predictive accuracy of SDMs, we compared accuracy between extrapolated and non-extrapolated areas. The incorporation of dispersal processes enhanced predictive accuracy, mainly due to a decrease in the false presence rate of model predictions, which is consistent with discrimination of suitable but inaccessible habitat. This also had consequences for range size changes over time, which is the most used proxy for extinction risk from climate change. The area of current climatic conditions that was absent in the baseline conditions (i.e., extrapolated areas) represents 39% of the study area, leading to a significant decrease in the predictive accuracy of model predictions for those areas. Our results highlight that (1) incorporating dispersal processes can improve the predictive accuracy of temporal transference of SDMs and reduce uncertainties in extinction risk assessments from global change; and (2) as geographical areas subjected to novel climates are expected to arise, they must be reported, as they show less accurate predictions under future climate scenarios. Consequently, environmental extrapolation and dispersal processes should be explicitly incorporated to report and reduce uncertainties in temporal predictions of SDMs, respectively. In doing so, we expect to improve the reliability of the information we provide for conservation decision makers under future climate change scenarios. © 2017 by the Ecological Society of America.
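The two accuracy measures used above can be computed as in the following sketch, with synthetic presences/absences and suitability scores standing in for the validation data; TSS is taken as sensitivity + specificity - 1:

```python
import numpy as np
from sklearn.metrics import roc_auc_score, confusion_matrix

rng = np.random.default_rng(3)
observed = rng.integers(0, 2, size=200)              # current presences/absences
scores = np.clip(observed * 0.6 + rng.uniform(size=200) * 0.5, 0, 1)
predicted = (scores > 0.5).astype(int)               # binary model prediction

auc = roc_auc_score(observed, scores)                # from continuous scores
tn, fp, fn, tp = confusion_matrix(observed, predicted).ravel()
tss = tp / (tp + fn) + tn / (tn + fp) - 1            # sensitivity + specificity - 1
print(f"AUC={auc:.2f}  TSS={tss:.2f}")
```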
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hamm, L.; Smith, F.; Aleman, S.
2013-05-16
This report documents the development and application of computer models to describe the sorption of pertechnetate [TcO₄⁻], and its surrogate perrhenate [ReO₄⁻], on SuperLig® 639 resin. Two models have been developed: 1) a thermodynamic isotherm model, based on experimental data, that predicts [TcO₄⁻] and [ReO₄⁻] sorption as a function of solution composition and temperature, and 2) a column model that uses the isotherm calculated by the first model to simulate the performance of a full-scale sorption process. The isotherm model provides a synthesis of experimental data collected from many different sources to give a best-estimate prediction of the behavior of the pertechnetate-SuperLig® 639 system and an estimate of the uncertainty in this prediction. The column model provides a prediction of the expected performance of the plant process by determining the volume of waste solution that can be processed based on process design parameters such as column size, flow rate and resin physical properties.
Van Dongen, Hans P. A.; Mott, Christopher G.; Huang, Jen-Kuang; Mollicone, Daniel J.; McKenzie, Frederic D.; Dinges, David F.
2007-01-01
Current biomathematical models of fatigue and performance do not accurately predict cognitive performance for individuals with a priori unknown degrees of trait vulnerability to sleep loss, do not predict performance reliably when initial conditions are uncertain, and do not yield statistically valid estimates of prediction accuracy. These limitations diminish their usefulness for predicting the performance of individuals in operational environments. To overcome these 3 limitations, a novel modeling approach was developed, based on the expansion of a statistical technique called Bayesian forecasting. The expanded Bayesian forecasting procedure was implemented in the two-process model of sleep regulation, which has been used to predict performance on the basis of the combination of a sleep homeostatic process and a circadian process. Employing the two-process model with the Bayesian forecasting procedure to predict performance for individual subjects in the face of unknown traits and uncertain states entailed subject-specific optimization of 3 trait parameters (homeostatic build-up rate, circadian amplitude, and basal performance level) and 2 initial state parameters (initial homeostatic state and circadian phase angle). Prior information about the distribution of the trait parameters in the population at large was extracted from psychomotor vigilance test (PVT) performance measurements in 10 subjects who had participated in a laboratory experiment with 88 h of total sleep deprivation. The PVT performance data of 3 additional subjects in this experiment were set aside beforehand for use in prospective computer simulations. The simulations involved updating the subject-specific model parameters every time the next performance measurement became available, and then predicting performance 24 h ahead. Comparison of the predictions to the subjects' actual data revealed that as more data became available for the individuals at hand, the performance predictions became increasingly more accurate and had progressively smaller 95% confidence intervals, as the model parameters converged efficiently to those that best characterized each individual. Even when more challenging simulations were run (mimicking a change in the initial homeostatic state; simulating the data to be sparse), the predictions were still considerably more accurate than would have been achieved by the two-process model alone. Although the work described here is still limited to periods of consolidated wakefulness with stable circadian rhythms, the results obtained thus far indicate that the Bayesian forecasting procedure can successfully overcome some of the major outstanding challenges for biomathematical prediction of cognitive performance in operational settings. Citation: Van Dongen HPA; Mott CG; Huang JK; Mollicone DJ; McKenzie FD; Dinges DF. Optimization of biomathematical model predictions for cognitive performance impairment in individuals: accounting for unknown traits and uncertain states in homeostatic and circadian processes. SLEEP 2007;30(9):1129-1143. PMID:17910385
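A heavily simplified sketch of the Bayesian forecasting idea follows: a population prior over a single trait parameter (the homeostatic build-up rate) is updated as each new performance measurement arrives, and a 24-h-ahead prediction is issued from the current estimate. The one-parameter linear performance model and all numbers are toy assumptions, not the two-process model itself:

```python
import numpy as np

rho_grid = np.linspace(0.5, 2.0, 301)                 # candidate build-up rates
prior = np.exp(-0.5 * ((rho_grid - 1.2) / 0.3) ** 2)  # population prior
prior /= prior.sum()

def performance(rho, hours_awake):
    return 10.0 + rho * hours_awake                   # toy homeostatic-only model

rng = np.random.default_rng(4)
true_rho, sigma = 1.6, 2.0
posterior = prior.copy()
for t in (8, 16, 24, 32):                             # measurements during sleep loss
    obs = performance(true_rho, t) + rng.normal(scale=sigma)
    lik = np.exp(-0.5 * ((obs - performance(rho_grid, t)) / sigma) ** 2)
    posterior = posterior * lik                       # Bayesian update
    posterior /= posterior.sum()
    est = rho_grid[np.argmax(posterior)]
    print(f"after t={t}h: MAP build-up rate = {est:.2f}, "
          f"24h-ahead prediction = {performance(est, t + 24):.1f}")
```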
A compound reconstructed prediction model for nonstationary climate processes
NASA Astrophysics Data System (ADS)
Wang, Geli; Yang, Peicai
2005-07-01
Based on the idea of climate hierarchy and the theory of state-space reconstruction, a local approximation prediction model with a compound structure is built for predicting nonstationary climate processes. By means of this model and data sets consisting of the north Indian Ocean sea-surface temperature, the Asian zonal circulation index, and the monthly mean precipitation anomaly from 37 observation stations in the Inner Mongolia area of China (IMC), a regional prediction experiment for the winter precipitation of IMC is carried out. When the same-sign ratio R between the prediction field and the actual field is used to measure prediction accuracy, an average R of 63% over 10 prediction samples is reached.
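A minimal sketch of local-approximation prediction in a reconstructed state space follows, using a synthetic univariate anomaly series: the series is time-delay embedded, the nearest historical neighbours of the current state are found, and their successors are averaged; the sign of the prediction is then compared with the actual value, as in the same-sign ratio R. The embedding parameters are illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(5)
t = np.arange(400)
series = np.sin(0.2 * t) + 0.1 * rng.normal(size=t.size)   # anomaly series

m, tau, k = 4, 2, 5                       # embedding dim, delay, neighbours
emb = np.array([series[i - (m - 1) * tau:i + 1:tau]
                for i in range((m - 1) * tau, len(series) - 1)])
targets = series[(m - 1) * tau + 1:]      # one-step-ahead values

query, history, hist_targets = emb[-1], emb[:-1], targets[:-1]
nearest = np.argsort(np.linalg.norm(history - query, axis=1))[:k]
prediction = hist_targets[nearest].mean()  # local approximation
actual = targets[-1]
print("prediction:", prediction,
      " same sign as actual:", np.sign(prediction) == np.sign(actual))
```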
VARTM Process Modeling of Aerospace Composite Structures
NASA Technical Reports Server (NTRS)
Song, Xiao-Lan; Grimsley, Brian W.; Hubert, Pascal; Cano, Roberto J.; Loos, Alfred C.
2003-01-01
A three-dimensional model was developed to simulate the VARTM composite manufacturing process. The model considers the two important mechanisms that occur during the process: resin flow, and compaction and relaxation of the preform. The model was used to simulate infiltration of a carbon preform with an epoxy resin by the VARTM process. The predicted flow patterns and preform thickness changes agreed qualitatively with the measured values. However, the predicted total infiltration times were much longer than measured, most likely due to inaccurate preform permeability values used in the simulation.
NASA Technical Reports Server (NTRS)
Dewan, Mohammad W.; Huggett, Daniel J.; Liao, T. Warren; Wahab, Muhammad A.; Okeil, Ayman M.
2015-01-01
Friction stir welding (FSW) is a solid-state joining process where joint properties depend on welding process parameters. In the current study, three critical process parameters, namely spindle speed, plunge force, and welding speed, are considered key factors in determining the ultimate tensile strength (UTS) of welded aluminum alloy joints. A total of 73 weld schedules were welded and tensile properties were subsequently obtained experimentally. It is observed that all three process parameters have a direct influence on the UTS of the welded joints. Utilizing the experimental data, an optimized adaptive neuro-fuzzy inference system (ANFIS) model has been developed to predict the UTS of FSW joints. A total of 1200 models were developed on a MATLAB platform by varying the number of membership functions (MFs), the type of MFs, and the combination of four input variables: the three process parameters plus an empirical force index (EFI) derived from them. For comparison, optimized artificial neural network (ANN) models were also developed to predict UTS from the FSW process parameters. By comparing the ANFIS and ANN predictions, it was found that the optimized ANFIS models provide better results than ANN. This newly developed best ANFIS model could be utilized for prediction of the UTS of FSW joints.
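Since a full ANFIS implementation is lengthy, the sketch below substitutes a small feed-forward ANN (scikit-learn's MLPRegressor) to illustrate predicting UTS from the three process parameters; the weld schedules and the UTS response surface are synthetic placeholders, not the paper's data:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(6)
n = 73
omega = rng.uniform(200, 400, n)          # spindle speed, rpm
force = rng.uniform(15, 30, n)            # plunge force, kN
speed = rng.uniform(50, 150, n)           # welding speed, mm/min
X = np.c_[omega, force, speed]
# notional response surface: UTS peaks at an intermediate welding speed
uts = 250 + 0.1 * omega - 0.5 * (speed - 100) ** 2 / 50 + rng.normal(scale=5, size=n)

model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000,
                                   random_state=0))
model.fit(X, uts)
print("predicted UTS for a new schedule:", model.predict([[300, 22, 100]]))
```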
The brain, self and society: a social-neuroscience model of predictive processing.
Kelly, Michael P; Kriznik, Natasha M; Kinmonth, Ann Louise; Fletcher, Paul C
2018-05-10
This paper presents a hypothesis about how social interactions shape and influence predictive processing in the brain. The paper integrates concepts from neuroscience and sociology where a gulf presently exists between the ways that each describe the same phenomenon - how the social world is engaged with by thinking humans. We combine the concepts of predictive processing models (also called predictive coding models in the neuroscience literature) with ideal types, typifications and social practice - concepts from the sociological literature. This generates a unified hypothetical framework integrating the social world and hypothesised brain processes. The hypothesis combines aspects of neuroscience and psychology with social theory to show how social behaviors may be "mapped" onto brain processes. It outlines a conceptual framework that connects the two disciplines and that may enable creative dialogue and potential future research.
Sakoda, Lori C; Henderson, Louise M; Caverly, Tanner J; Wernli, Karen J; Katki, Hormuzd A
2017-12-01
Risk prediction models may be useful for facilitating effective and high-quality decision-making at critical steps in the lung cancer screening process. This review provides a current overview of published lung cancer risk prediction models and their applications to lung cancer screening and highlights both challenges and strategies for improving their predictive performance and use in clinical practice. Since the 2011 publication of the National Lung Screening Trial results, numerous prediction models have been proposed to estimate the probability of developing or dying from lung cancer or the probability that a pulmonary nodule is malignant. Respective models appear to exhibit high discriminatory accuracy in identifying individuals at highest risk of lung cancer or differentiating malignant from benign pulmonary nodules. However, validation and critical comparison of the performance of these models in independent populations are limited. Little is also known about the extent to which risk prediction models are being applied in clinical practice and influencing decision-making processes and outcomes related to lung cancer screening. Current evidence is insufficient to determine which lung cancer risk prediction models are most clinically useful and how to best implement their use to optimize screening effectiveness and quality. To address these knowledge gaps, future research should be directed toward validating and enhancing existing risk prediction models for lung cancer and evaluating the application of model-based risk calculators and its corresponding impact on screening processes and outcomes.
Markovian prediction of future values for food grains in the economic survey
NASA Astrophysics Data System (ADS)
Sathish, S.; Khadar Babu, S. K.
2017-11-01
Nowadays, prediction and forecasting play a vital role in research. For prediction, regression is useful for predicting the future and current values of a production process. In this paper, we assume that food grain production exhibits Markov chain dependency and time homogeneity. The economic performance of balancing the timing of artificial fertilization at different levels in estrus detection is also evaluated using a daily Markov chain model. Finally, the Markov process prediction gives better performance compared with the regression model.
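A minimal sketch of the Markov chain prediction idea, with synthetic production-level states: estimate a transition matrix from an observed state sequence (Laplace smoothing keeps every row well defined) and predict the most likely next state:

```python
import numpy as np

states = ["low", "medium", "high"]
rng = np.random.default_rng(7)
series = rng.choice(3, size=50, p=[0.3, 0.5, 0.2])   # observed state sequence

counts = np.ones((3, 3))                             # Laplace-smoothed counts
for a, b in zip(series[:-1], series[1:]):
    counts[a, b] += 1
P = counts / counts.sum(axis=1, keepdims=True)       # transition probabilities

current = series[-1]
print("current:", states[current],
      "-> predicted next:", states[int(P[current].argmax())])
```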
Advances in modeling soil erosion after disturbance on rangelands
USDA-ARS?s Scientific Manuscript database
Research has been undertaken to develop process based models that predict soil erosion rate after disturbance on rangelands. In these models soil detachment is predicted as a combination of multiple erosion processes, rain splash and thin sheet flow (splash and sheet) detachment and concentrated flo...
File Usage Analysis and Resource Usage Prediction: a Measurement-Based Study. Ph.D. Thesis
NASA Technical Reports Server (NTRS)
Devarakonda, Murthy V.-S.
1987-01-01
A probabilistic scheme was developed to predict process resource usage in UNIX. Given the identity of the program being run, the scheme predicts CPU time, file I/O, and memory requirements of a process at the beginning of its life. The scheme uses a state-transition model of the program's resource usage in its past executions for prediction. The states of the model are the resource regions obtained from an off-line cluster analysis of processes run on the system. The proposed method is shown to work on data collected from a VAX 11/780 running 4.3 BSD UNIX. The results show that the predicted values correlate well with the actual values. The coefficient of correlation between the predicted and actual values of CPU time is 0.84. Errors in prediction are mostly small. Some 82% of errors in CPU time prediction are less than 0.5 standard deviations of process CPU time.
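The two-step scheme above can be sketched as follows, with synthetic process records: an off-line cluster analysis (here k-means) defines the resource-usage states, and a smoothed state-transition matrix over past executions predicts the resource region of the next run:

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(8)
# columns: CPU time, file I/O, memory -- one row per past execution
runs = np.vstack([rng.normal([1, 10, 5], 0.5, (40, 3)),
                  rng.normal([8, 50, 20], 2.0, (40, 3))])

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(runs)
labels = kmeans.labels_                          # resource region per run

trans = np.ones((2, 2))                          # Laplace-smoothed counts
for a, b in zip(labels[:-1], labels[1:]):
    trans[a, b] += 1
trans /= trans.sum(axis=1, keepdims=True)

next_state = int(trans[labels[-1]].argmax())
print("predicted next-run resources:", kmeans.cluster_centers_[next_state])
```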
Cao, Hongliang; Xin, Ya; Yuan, Qiaoxia
2016-02-01
To conveniently predict the biochar yield from cattle manure pyrolysis, an intelligent modeling approach was introduced in this research. A traditional artificial neural network (ANN) model and a novel least squares support vector machine (LS-SVM) model were developed. For the identification and prediction evaluation of the models, a data set of 33 experimental observations was used, obtained with a laboratory-scale fixed bed reaction system. The results demonstrated that the intelligent modeling approach is convenient and effective for predicting the biochar yield. In particular, the novel LS-SVM model has a more satisfactory predictive performance and better robustness than the traditional ANN model. The introduction and application of the LS-SVM modeling method provides a successful example and a good reference for modeling studies of the cattle manure pyrolysis process and other similar processes. Copyright © 2015 Elsevier Ltd. All rights reserved.
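Under a squared loss, LS-SVM regression coincides with kernel ridge regression, so the sketch below uses scikit-learn's KernelRidge as a stand-in; the pyrolysis conditions and yields are synthetic placeholders, not the paper's 33 experimental points:

```python
import numpy as np
from sklearn.kernel_ridge import KernelRidge

rng = np.random.default_rng(9)
temp = rng.uniform(300, 700, 33)                 # pyrolysis temperature, deg C
time = rng.uniform(10, 60, 33)                   # residence time, min
X = np.c_[temp, time]
# notional trend: yield drops with temperature
yield_pct = 60 - 0.05 * (temp - 300) + rng.normal(scale=1.0, size=33)

lssvm = KernelRidge(kernel="rbf", alpha=0.1, gamma=1e-4).fit(X, yield_pct)
print("predicted biochar yield at 500 degC, 30 min:", lssvm.predict([[500, 30]]))
```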
NASA Technical Reports Server (NTRS)
Thomas, Russell H.; Burley, Casey L.; Guo, Yueping
2016-01-01
Aircraft system noise predictions have been performed for NASA-modeled hybrid wing body aircraft advanced concepts with 2025 entry-into-service technology assumptions. The system noise predictions evolved over the period from 2009 to 2016 as a result of improved modeling of the aircraft concepts, design changes, technology development, flight path modeling, and the use of extensive integrated system-level experimental data. In addition, the system noise prediction models and process have been improved in many ways. An additional process is developed here for quantifying the uncertainty with a 95% confidence level. This uncertainty applies only to the aircraft system noise prediction process. For three points in time during this period, the vehicle designs, technologies, and noise prediction process are documented. For each of the three predictions, and with the information available at each of those points in time, the uncertainty is quantified using the direct Monte Carlo method with 10,000 simulations. For the prediction of cumulative noise of an advanced aircraft at the conceptual level of design, the total uncertainty band has been reduced from 12.2 to 9.6 EPNL dB. A value of 3.6 EPNL dB is proposed as the lower limit of uncertainty possible for the cumulative system noise prediction of an advanced aircraft concept.
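The direct Monte Carlo step can be sketched as below; the component levels and their uncertainties are made-up (notional) values, perturbed over 10,000 simulations and summed on an energy basis to give a 95% band on the system-level prediction:

```python
import numpy as np

rng = np.random.default_rng(10)
# (mean level, assumed uncertainty) per noise component, EPNL dB, notional
components = {"fan": (78.0, 1.5), "jet": (74.0, 1.0), "airframe": (70.0, 2.0)}

n_sim = 10_000
totals = np.zeros(n_sim)
for level, sigma in components.values():
    samples = rng.normal(level, sigma, n_sim)      # perturb each component
    totals += 10 ** (samples / 10)                 # add on an energy basis
totals = 10 * np.log10(totals)

low, high = np.percentile(totals, [2.5, 97.5])
print(f"95% band: {low:.1f} to {high:.1f} EPNL dB (width {high - low:.1f})")
```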
Mental workload prediction based on attentional resource allocation and information processing.
Xiao, Xu; Wanyan, Xiaoru; Zhuang, Damin
2015-01-01
Mental workload is an important component in complex human-machine systems. The limited applicability of empirical workload measures produces the need for workload modeling and prediction methods. In the present study, a mental workload prediction model is built on the basis of attentional resource allocation and information processing to ensure pilots' accuracy and speed in understanding large amounts of flight information on the cockpit display interface. Validation with an empirical study of an abnormal attitude recovery task showed that this model's prediction of mental workload highly correlated with experimental results. This mental workload prediction model provides a new tool for optimizing human factors interface design and reducing human errors.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nguyen, Ba Nghiep; Bapanapalli, Satish K.; Smith, Mark T.
2008-09-01
The objective of our work is to enable the optimum design of lightweight automotive structural components using injection-molded long fiber thermoplastics (LFTs). To this end, an integrated approach that links process modeling to structural analysis with experimental microstructural characterization and validation is developed. First, process models for LFTs are developed and implemented into processing codes (e.g. ORIENT, Moldflow) to predict the microstructure of the as-formed composite (i.e. fiber length and orientation distributions). In parallel, characterization and testing methods are developed to obtain necessary microstructural data to validate process modeling predictions. Second, the predicted LFT composite microstructure is imported into a structural finite element analysis by ABAQUS to determine the response of the as-formed composite to given boundary conditions. At this stage, constitutive models accounting for the composite microstructure are developed to predict various types of behaviors (i.e. thermoelastic, viscoelastic, elastic-plastic, damage, fatigue, and impact) of LFTs. Experimental methods are also developed to determine material parameters and to validate constitutive models. Such a process-linked-structural modeling approach allows an LFT composite structure to be designed with confidence through numerical simulations. Some recent results of our collaborative research will be illustrated to show the usefulness and applications of this integrated approach.
Modeling and Simulation of Quenching and Tempering Process in steels
NASA Astrophysics Data System (ADS)
Deng, Xiaohu; Ju, Dongying
Quenching and tempering (Q&T) is a combined heat treatment process used to achieve maximum toughness and ductility at a specified hardness and strength. It is important to develop a mathematical model of the quenching and tempering process that satisfies mechanical property requirements at low cost. This paper presents a modified model to predict the structural evolution and hardness distribution during the quenching and tempering of steels. The model takes into account tempering parameters, carbon content, and isothermal and non-isothermal transformations. Moreover, the precipitation of transition carbides, the decomposition of retained austenite, and the precipitation of cementite can each be simulated. Hardness distributions of quenched and tempered workpieces are predicted by an experimental regression equation. To validate the model, it was employed to predict the tempering of 80MnCr5 steel. The predicted precipitation dynamics of transition carbides and cementite are consistent with previous experimental and simulated results from the literature. The model was then implemented within the framework of the simulation code COSMAP to simulate microstructure, stress, and distortion in the heat-treated component, and applied to simulate the Q&T process of J55 steel. The calculated results show good agreement with the experimental ones, indicating that the model is effective for simulation of the Q&T process of steels.
Zhou, Jingwen; Xu, Zhenghong; Chen, Shouwen
2013-04-01
The abiotic degradation of thuringiensin in aqueous solution under different conditions, with a pH range of 5.0-9.0 and a temperature range of 10-40°C, was systematically investigated using an exponential decay model and a radial basis function (RBF) neural network model, respectively. The half-lives of thuringiensin calculated with the exponential decay model ranged from 2.72 d to 16.19 d under the different conditions mentioned above. Furthermore, an RBF model with an accuracy of 0.1 and a SPREAD value of 5 was employed to model the degradation processes. The results showed that the model could simulate and predict the degradation processes well. Both the half-lives and the prediction data showed that thuringiensin is an easily degradable antibiotic, which could be an important factor in the evaluation of its safety. Copyright © 2012 Elsevier Ltd. All rights reserved.
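The exponential-decay step reduces to fitting C(t) = C0 * exp(-k t) and reporting the half-life t_1/2 = ln(2)/k, as in this sketch on synthetic residual-concentration data:

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(11)
t = np.linspace(0, 20, 10)                       # days
true_k = 0.15
conc = 100 * np.exp(-true_k * t) * (1 + rng.normal(scale=0.02, size=t.size))

decay = lambda t, c0, k: c0 * np.exp(-k * t)     # first-order decay model
(c0_hat, k_hat), _ = curve_fit(decay, t, conc, p0=(100, 0.1))
print(f"k = {k_hat:.3f} /day, half-life = {np.log(2) / k_hat:.2f} days")
```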
USDA-ARS?s Scientific Manuscript database
A predictive mathematical model was developed to simulate heat transfer in a tomato undergoing double sided infrared (IR) heating in a dry-peeling process. The aims of this study were to validate the developed model using experimental data and to investigate different engineering parameters that mos...
Geospatial application of the Water Erosion Prediction Project (WEPP) Model
D. C. Flanagan; J. R. Frankenberger; T. A. Cochrane; C. S. Renschler; W. J. Elliot
2011-01-01
The Water Erosion Prediction Project (WEPP) model is a process-based technology for prediction of soil erosion by water at hillslope profile, field, and small watershed scales. In particular, WEPP utilizes observed or generated daily climate inputs to drive the surface hydrology processes (infiltration, runoff, ET) component, which subsequently impacts the rest of the...
NASA Astrophysics Data System (ADS)
Hung, Nguyen Trong; Thuan, Le Ba; Thanh, Tran Chi; Nhuan, Hoang; Khoai, Do Van; Tung, Nguyen Van; Lee, Jin-Young; Jyothi, Rajesh Kumar
2018-06-01
Modeling of the uranium dioxide pellet fabrication process from ammonium uranyl carbonate-derived uranium dioxide powder (UO2 ex-AUC powder) and prediction of the fuel rod temperature distribution are reported in this paper. Response surface methodology (RSM) and the FRAPCON-4.0 code were used to model the process and to predict the fuel rod temperature under steady-state operating conditions. The fuel rod design of the AP-1000 by Westinghouse Electric Corporation, with the pellet fabrication parameters taken from this study, provided the input data for the code. The predictions suggest a relationship between the fabrication parameters of UO2 pellets and their temperature distribution in the nuclear reactor.
NASA Astrophysics Data System (ADS)
Colla, V.; Desanctis, M.; Dimatteo, A.; Lovicu, G.; Valentini, R.
2011-09-01
The purpose of the present work is the implementation and validation of a model able to predict the microstructure changes and mechanical properties of modern high-strength dual-phase steels after the continuous annealing process line (CAPL) and galvanizing (Galv) processes. Experimental continuous cooling transformation (CCT) diagrams for 13 differently alloyed dual-phase steels were measured by dilatometry from the intercritical range and were used to tune the parameters of the microstructural prediction module of the model. Mechanical properties and microstructural features were measured for more than 400 dual-phase steels simulating the CAPL and Galv industrial processes, and the results were used to construct the mechanical model that predicts mechanical properties from microstructural features, chemistry, and process parameters. The model was validated and proved its efficiency in reproducing the transformation kinetics and mechanical properties of dual-phase steels produced by typical industrial processes. Although it is limited to the dual-phase grades and chemical compositions explored, this model will constitute a useful tool for the steel industry.
In Silico Strategies for Modeling Stereoselective Metabolism of Pyrethroids
In silico methods are invaluable tools to researchers seeking to understand and predict metabolic processes within PBPK models. Even though these methods have been successfully utilized to predict and quantify metabolic processes, there are many challenges involved. Stereochemica...
Incorporating uncertainty in predictive species distribution modelling.
Beale, Colin M; Lennon, Jack J
2012-01-19
Motivated by the need to solve ecological problems (climate change, habitat fragmentation and biological invasions), there has been increasing interest in species distribution models (SDMs). Predictions from these models inform conservation policy, invasive species management and disease-control measures. However, predictions are subject to uncertainty, the degree and source of which is often unrecognized. Here, we review the SDM literature in the context of uncertainty, focusing on three main classes of SDM: niche-based models, demographic models and process-based models. We identify sources of uncertainty for each class and discuss how uncertainty can be minimized or included in the modelling process to give realistic measures of confidence around predictions. Because this has typically not been performed, we conclude that uncertainty in SDMs has often been underestimated and a false precision assigned to predictions of geographical distribution. We identify areas where development of new statistical tools will improve predictions from distribution models, notably the development of hierarchical models that link different types of distribution model and their attendant uncertainties across spatial scales. Finally, we discuss the need to develop more defensible methods for assessing predictive performance, quantifying model goodness-of-fit and for assessing the significance of model covariates.
Impact of modellers' decisions on hydrological a priori predictions
NASA Astrophysics Data System (ADS)
Holländer, H. M.; Bormann, H.; Blume, T.; Buytaert, W.; Chirico, G. B.; Exbrayat, J.-F.; Gustafsson, D.; Hölzel, H.; Krauße, T.; Kraft, P.; Stoll, S.; Blöschl, G.; Flühler, H.
2013-07-01
The purpose of this paper is to stimulate a re-thinking of how we, the catchment hydrologists, could become reliable forecasters. A group of catchment modellers predicted the hydrological response of a man-made 6 ha catchment in its initial phase (Chicken Creek) without having access to the observed records. They used conceptually different model families. Their modelling experience differed largely. The prediction exercise was organized in three steps: (1) for the 1st prediction modellers received a basic data set describing the internal structure of the catchment (somewhat more complete than usually available to a priori predictions in ungauged catchments). They did not obtain time series of stream flow, soil moisture or groundwater response. (2) Before the 2nd improved prediction they inspected the catchment on-site and attended a workshop where the modellers presented and discussed their first attempts. (3) For their improved 3rd prediction they were offered additional data by charging them pro forma with the costs for obtaining this additional information. Holländer et al. (2009) discussed the range of predictions obtained in step 1. Here, we detail the modeller's decisions in accounting for the various processes based on what they learned during the field visit (step 2) and add the final outcome of step 3 when the modellers made use of additional data. We document the prediction progress as well as the learning process resulting from the availability of added information. For the 2nd and 3rd step, the progress in prediction quality could be evaluated in relation to individual modelling experience and costs of added information. We learned (i) that soft information such as the modeller's system understanding is as important as the model itself (hard information), (ii) that the sequence of modelling steps matters (field visit, interactions between differently experienced experts, choice of model, selection of available data, and methods for parameter guessing), and (iii) that added process understanding can be as efficient as adding data for improving parameters needed to satisfy model requirements.
[GSH fermentation process modeling using entropy-criterion based RBF neural network model].
Tan, Zuoping; Wang, Shitong; Deng, Zhaohong; Du, Guocheng
2008-05-01
The prediction accuracy and generalization of GSH fermentation process modeling are often degraded by noise in the corresponding experimental data. To avoid this problem, we present a novel RBF neural network modeling approach based on an entropy criterion. Compared with traditional MSE-criterion-based parameter learning, it considers the whole distribution structure of the training data set in the parameter learning process, and thus effectively avoids weak generalization and over-learning. The proposed approach is then applied to GSH fermentation process modeling. Our results demonstrate that the proposed method has better prediction accuracy, generalization, and robustness, offering a potential application merit for GSH fermentation process modeling.
Predicting concrete corrosion of sewers using artificial neural network.
Jiang, Guangming; Keller, Jurg; Bond, Philip L; Yuan, Zhiguo
2016-04-01
Corrosion is often a major failure mechanism for concrete sewers and under such circumstances the sewer service life is largely determined by the progression of microbially induced concrete corrosion. The modelling of sewer processes has become possible due to the improved understanding of in-sewer transformation. Recent systematic studies about the correlation between the corrosion processes and sewer environment factors should be utilized to improve the prediction capability of service life by sewer models. This paper presents an artificial neural network (ANN)-based approach for modelling the concrete corrosion processes in sewers. The approach included predicting the time for the corrosion to initiate and then predicting the corrosion rate after the initiation period. The ANN model was trained and validated with long-term (4.5 years) corrosion data obtained in laboratory corrosion chambers, and further verified with field measurements in real sewers across Australia. The trained model estimated the corrosion initiation time and corrosion rates very close to those measured in Australian sewers. The ANN model performed better than a multiple regression model also developed on the same dataset. Additionally, the ANN model can serve as a prediction framework for sewer service life, which can be progressively improved and expanded by including corrosion rates measured in different sewer conditions. Furthermore, the proposed methodology holds promise to facilitate the construction of analytical models associated with corrosion processes of concrete sewers. Copyright © 2016 Elsevier Ltd. All rights reserved.
A Gaussian Process Based Multi-Person Interaction Model
NASA Astrophysics Data System (ADS)
Klinger, T.; Rottensteiner, F.; Heipke, C.
2016-06-01
Online multi-person tracking in image sequences is commonly guided by recursive filters, whose predictive models define the expected positions of future states. When a predictive model deviates too much from the true motion of a pedestrian, which is often the case in crowded scenes due to unpredicted accelerations, the data association is prone to fail. In this paper we propose a novel predictive model on the basis of Gaussian Process Regression. The model takes into account the motion of every tracked pedestrian in the scene and the prediction is executed with respect to the velocities of all interrelated persons. As shown by the experiments, the model is capable of yielding more plausible predictions even in the presence of mutual occlusions or missing measurements. The approach is evaluated on a publicly available benchmark and outperforms other state-of-the-art trackers.
The Prediction of Length-of-day Variations Based on Gaussian Processes
NASA Astrophysics Data System (ADS)
Lei, Y.; Zhao, D. N.; Gao, Y. P.; Cai, H. B.
2015-01-01
Due to the complicated time-varying characteristics of the length-of-day (LOD) variations, the accuracies of traditional strategies for the prediction of the LOD variations such as the least squares extrapolation model, the time-series analysis model, and so on, have not met the requirements for real-time and high-precision applications. In this paper, a new machine learning algorithm --- the Gaussian process (GP) model is employed to forecast the LOD variations. Its prediction precisions are analyzed and compared with those of the back propagation neural networks (BPNN), general regression neural networks (GRNN) models, and the Earth Orientation Parameters Prediction Comparison Campaign (EOP PCC). The results demonstrate that the application of the GP model to the prediction of the LOD variations is efficient and feasible.
Modelling of Two-Stage Methane Digestion With Pretreatment of Biomass
NASA Astrophysics Data System (ADS)
Dychko, A.; Remez, N.; Opolinskyi, I.; Kraychuk, S.; Ostapchuk, N.; Yevtieieva, L.
2018-04-01
Systems of anaerobic digestion should be used for the processing of organic waste. Managing the process of anaerobic recycling of organic waste requires reliable prediction of biogas production. The development of a mathematical model of the organic waste digestion process allows the rate of biogas output to be determined for the two-stage anaerobic digestion process, taking the first stage into account. Verification of Konto's model against the studied anaerobic processing of organic waste is implemented. The dependencies of biogas output and its rate on time are established and may be used to predict the process of anaerobic processing of organic waste.
Predictive information processing in music cognition. A critical review.
Rohrmeier, Martin A; Koelsch, Stefan
2012-02-01
Expectation and prediction constitute central mechanisms in the perception and cognition of music, which have been explored in theoretical and empirical accounts. We review the scope and limits of theoretical accounts of musical prediction with respect to feature-based and temporal prediction. While the concept of prediction is unproblematic for basic single-stream features such as melody, it is not straight-forward for polyphonic structures or higher-order features such as formal predictions. Behavioural results based on explicit and implicit (priming) paradigms provide evidence of priming in various domains that may reflect predictive behaviour. Computational learning models, including symbolic (fragment-based), probabilistic/graphical, or connectionist approaches, provide well-specified predictive models of specific features and feature combinations. While models match some experimental results, full-fledged music prediction cannot yet be modelled. Neuroscientific results regarding the early right-anterior negativity (ERAN) and mismatch negativity (MMN) reflect expectancy violations on different levels of processing complexity, and provide some neural evidence for different predictive mechanisms. At present, the combinations of neural and computational modelling methodologies are at early stages and require further research. Copyright © 2012 Elsevier B.V. All rights reserved.
Quality by control: Towards model predictive control of mammalian cell culture bioprocesses.
Sommeregger, Wolfgang; Sissolak, Bernhard; Kandra, Kulwant; von Stosch, Moritz; Mayer, Martin; Striedner, Gerald
2017-07-01
The industrial production of complex biopharmaceuticals using recombinant mammalian cell lines is still mainly built on a quality by testing approach, which is represented by fixed process conditions and extensive testing of the end-product. In 2004 the FDA launched the process analytical technology initiative, aiming to guide the industry towards advanced process monitoring and better understanding of how critical process parameters affect the critical quality attributes. Implementation of process analytical technology into the bio-production process enables moving from the quality by testing to a more flexible quality by design approach. The application of advanced sensor systems in combination with mathematical modelling techniques offers enhanced process understanding, allows on-line prediction of critical quality attributes and subsequently real-time product quality control. In this review opportunities and unsolved issues on the road to a successful quality by design and dynamic control implementation are discussed. A major focus is directed on the preconditions for the application of model predictive control for mammalian cell culture bioprocesses. Design of experiments providing information about the process dynamics upon parameter change, dynamic process models, on-line process state predictions and powerful software environments seem to be a prerequisite for quality by control realization. © 2017 The Authors. Biotechnology Journal published by WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
The management submodel of the Wind Erosion Prediction System
USDA-ARS?s Scientific Manuscript database
The Wind Erosion Prediction System (WEPS) is a process-based, daily time-step, computer model that predicts soil erosion via simulation of the physical processes controlling wind erosion. WEPS is comprised of several individual modules (submodels) that reflect different sets of physical processes, ...
Li, Jia; Xia, Yunni; Luo, Xin
2014-01-01
OWL-S, one of the most important Semantic Web service ontologies proposed to date, provides a core ontological framework and guidelines for describing the properties and capabilities of web services in an unambiguous, computer-interpretable form. Predicting the reliability of composite service processes specified in OWL-S allows service users to decide whether a process meets their quantitative quality requirements. In this study, we consider the runtime quality of services to be fluctuating and introduce a dynamic framework to predict the runtime reliability of services specified in OWL-S, employing the non-Markovian stochastic Petri net (NMSPN) and a time series model. The framework includes the following steps: obtaining the historical response-time series of individual service components; fitting these series with an autoregressive moving average (ARMA) model and predicting the future firing rates of service components; mapping the OWL-S process into an NMSPN model; and employing the predicted firing rates as the model input of the NMSPN and calculating the normal completion probability as the reliability estimate. In the case study, a comparison between the static model and our approach based on experimental data is presented, and it is shown that our approach achieves higher prediction accuracy.
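The ARMA step of the framework can be sketched as follows with a synthetic response-time series; statsmodels' ARIMA with d = 0 serves as the ARMA fit, and the forecasts are inverted into notional firing rates:

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(12)
noise = rng.normal(size=200)
rt = 100 + np.convolve(noise, [1.0, 0.6], mode="same") * 5   # response times, ms

model = ARIMA(rt, order=(1, 0, 1)).fit()          # ARMA(1,1): d = 0
forecast = model.forecast(steps=5)                # next response times
print("next response times (ms):", np.round(forecast, 1))
print("predicted firing rates (1/ms):", np.round(1.0 / forecast, 4))
```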
PREDICTIVE MODELING OF LIGHT-INDUCED MORTALITY OF ENTEROCOCCI FAECALIS IN RECREATIONAL WATERS
One approach to predictive modeling of biological contamination of recreational waters involves the application of process-based approaches that consider microbial sources, hydrodynamic transport, and microbial fate. This presentation focuses on one important fate process, light-...
NASA Technical Reports Server (NTRS)
Sreekantamurthy, Thammaiah; Hudson, Tyler B.; Hou, Tan-Hung; Grimsley, Brian W.
2016-01-01
Composite cure process induced residual strains and warping deformations in composite components present significant challenges in the manufacturing of advanced composite structures. As a part of the Manufacturing Process and Simulation initiative of the NASA Advanced Composites Project (ACP), research is being conducted on the composite cure process by developing an understanding of the fundamental mechanisms by which process-induced factors influence the residual responses. In this regard, analytical studies have been conducted on cure process modeling of composite structural parts with varied physical, thermal, and resin flow process characteristics. The cure process simulation results were analyzed to interpret the cure response predictions based on the underlying physics incorporated into the modeling tool. In the cure-kinetics analysis, the model predictions of the degree of cure, resin viscosity, and modulus were interpreted with reference to the temperature distribution in the composite panel part and tool setup during autoclave or hot-press curing cycles. In the fiber-bed compaction simulation, the pore pressure and resin flow velocity in the porous media models, and the compaction strain responses under applied pressure, were studied to interpret the fiber volume fraction distribution predictions. In the structural simulation, the effect of temperature on the resin and ply moduli, and of thermal coefficient changes during curing on predicted mechanical strains and chemical cure shrinkage strains, was studied to understand the residual strain and stress response predictions. In addition to the computational analysis, experimental studies were conducted to measure strains during the curing of laminated panels by means of optical fiber Bragg grating sensors (FBGs) embedded in the resin-impregnated panels. The residual strain measurements from laboratory tests were then compared with the analytical model predictions. The paper describes the cure process procedures and residual strain predictions, and discusses pertinent experimental results from the validation studies.
ERIC Educational Resources Information Center
Fürst, Guillaume; Ghisletta, Paolo; Lubart, Todd
2016-01-01
The present work proposes an integrative model of creativity that includes personality traits and cognitive processes. This model hypothesizes that three high-order personality factors predict two main process factors, which in turn predict intensity and achievement of creative activities. The personality factors are: "Plasticity" (high…
Impact of modellers' decisions on hydrological a priori predictions
NASA Astrophysics Data System (ADS)
Holländer, H. M.; Bormann, H.; Blume, T.; Buytaert, W.; Chirico, G. B.; Exbrayat, J.-F.; Gustafsson, D.; Hölzel, H.; Krauße, T.; Kraft, P.; Stoll, S.; Blöschl, G.; Flühler, H.
2014-06-01
In practice, the catchment hydrologist is often confronted with the task of predicting discharge without having the needed records for calibration. Here, we report the discharge predictions of 10 modellers - using the model of their choice - for the man-made Chicken Creek catchment (6 ha, northeast Germany, Gerwin et al., 2009b) and we analyse how well they improved their prediction in three steps based on adding information prior to each following step. The modellers predicted the catchment's hydrological response in its initial phase without having access to the observed records. They used conceptually different physically based models and their modelling experience differed largely. Hence, they encountered two problems: (i) to simulate discharge for an ungauged catchment and (ii) using models that were developed for catchments, which are not in a state of landscape transformation. The prediction exercise was organized in three steps: (1) for the first prediction the modellers received a basic data set describing the catchment to a degree somewhat more complete than usually available for a priori predictions of ungauged catchments; they did not obtain information on stream flow, soil moisture, nor groundwater response and had therefore to guess the initial conditions; (2) before the second prediction they inspected the catchment on-site and discussed their first prediction attempt; (3) for their third prediction they were offered additional data by charging them pro forma with the costs for obtaining this additional information. Holländer et al. (2009) discussed the range of predictions obtained in step (1). Here, we detail the modeller's assumptions and decisions in accounting for the various processes. We document the prediction progress as well as the learning process resulting from the availability of added information. For the second and third steps, the progress in prediction quality is evaluated in relation to individual modelling experience and costs of added information. In this qualitative analysis of a statistically small number of predictions we learned (i) that soft information such as the modeller's system understanding is as important as the model itself (hard information), (ii) that the sequence of modelling steps matters (field visit, interactions between differently experienced experts, choice of model, selection of available data, and methods for parameter guessing), and (iii) that added process understanding can be as efficient as adding data for improving parameters needed to satisfy model requirements.
Domínguez-Tello, Antonio; Arias-Borrego, Ana; García-Barrera, Tamara; Gómez-Ariza, José Luis
2017-10-01
Trihalomethanes (TTHMs) and other disinfection by-products (DBPs) are formed in drinking water by the reaction of chlorine with organic precursors contained in the source water, in two consecutive, linked stages: the first at the treatment plant, and the second along the distribution system (DS), by reaction of residual chlorine with organic precursors not removed earlier. Following this approach, this study aimed at developing a two-stage empirical model for predicting the formation of TTHMs in the water treatment plant and subsequently their evolution along the water distribution system (WDS). The aim of the two-stage model was to improve predictive capability for a wide range of water treatment and distribution scenarios. The two-stage model was developed using multiple regression analysis on a database (January 2007 to July 2012) covering three different treatment processes (conventional and advanced) in the water supply system of the Aljaraque area (southwest Spain). The new model was then validated using a recent database from the same water supply system (January 2011 to May 2015). The validation results indicated no significant difference between the predicted and observed values of TTHM (R² 0.874, analytical variance <17%). The new model was applied to three different supply systems with different treatment processes and different characteristics. Acceptable predictions were obtained in the three distribution systems studied, proving the adaptability of the new model to the boundary conditions. Finally, the predictive capability of the new model was compared with 17 other models selected from the literature, showing satisfactory prediction results and excellent adaptability to treatment processes.
Prediction of Proper Temperatures for the Hot Stamping Process Based on the Kinetics Models
NASA Astrophysics Data System (ADS)
Samadian, P.; Parsa, M. H.; Mirzadeh, H.
2015-02-01
Nowadays, the application of kinetics models for predicting the microstructures of steels subjected to thermo-mechanical treatments has increased in order to minimize direct experimentation, which is costly and time consuming. In the current work, the final microstructures of AISI 4140 steel sheets after the hot stamping process were predicted using the Kirkaldy and Li kinetics models combined with new thermodynamically based models in order to determine the appropriate process temperatures. In this way, the effect of deformation during hot stamping on the Ae3, Acm, and Ae1 temperatures was considered, and the equilibrium volume fractions of phases at different temperatures were then calculated. Moreover, the ferrite transformation rate equations of the Kirkaldy and Li models were modified by a term proposed by Åkerström to account for the influence of plastic deformation. Results showed that the modified Kirkaldy model is satisfactory for determining appropriate austenitization temperatures for the hot stamping process of AISI 4140 steel sheets because of its agreeable microstructure predictions in comparison with the experimental observations.
Prediction of microstructure, residual stress, and deformation in laser powder bed fusion process
NASA Astrophysics Data System (ADS)
Yang, Y. P.; Jamshidinia, M.; Boulware, P.; Kelly, S. M.
2018-05-01
The laser powder bed fusion (L-PBF) process has been investigated significantly for building production parts with complex shapes. Modeling tools that can be used at the part level are essential to allow engineers to fine-tune the shape design and process parameters for additive manufacturing. This study focuses on developing modeling methods to predict microstructure, hardness, residual stress, and deformation in large L-PBF built parts. A transient, sequentially coupled thermal and metallurgical analysis method was developed to predict microstructure and hardness in L-PBF built high-strength, low-alloy steel parts. A moving heat-source model was used in this analysis to accurately predict the temperature history. A kinetics-based model, originally developed to predict microstructure in the heat-affected zone of a welded joint, was extended to predict the microstructure and hardness in an L-PBF build by inputting the predicted temperature history. The tempering effect of subsequently built layers on the current layer's microstructural phases was modeled, which is the key to predicting the final hardness correctly. It was also found that the top layers of a built part have higher hardness because of the lack of this tempering effect. A sequentially coupled thermal and mechanical analysis method was developed to predict residual stress and deformation of an L-PBF built part. It was found that a line-heating model is not suitable for analyzing a large L-PBF built part, whereas the layer-heating method is a potential approach. Experiments were conducted to validate the model predictions.
Through-process modelling of texture and anisotropy in AA5182
NASA Astrophysics Data System (ADS)
Crumbach, M.; Neumann, L.; Goerdeler, M.; Aretz, H.; Gottstein, G.; Kopp, R.
2006-07-01
A through-process texture and anisotropy prediction for AA5182 sheet production from hot rolling through cold rolling and annealing is reported. Thermo-mechanical process data predicted by the finite element method (FEM) package T-Pack based on the software LARSTRAN were fed into a combination of physics based microstructure models for deformation texture (GIA), work hardening (3IVM), nucleation texture (ReNuc), and recrystallization texture (StaRT). The final simulated sheet texture was fed into a FEM simulation of cup drawing employing a new concept of interactively updated texture based yield locus predictions. The modelling results of texture development and anisotropy were compared to experimental data. The applicability to other alloys and processes is discussed.
NASA Astrophysics Data System (ADS)
Xiong, H.; Hamila, N.; Boisse, P.
2017-10-01
Pre-impregnated thermoplastic composites have recently attracted increasing interest in the automotive industry for their excellent mechanical properties and their rapid manufacturing cycle. Modeling and numerical simulation of forming processes for composite parts with complex geometry are necessary to predict and optimize manufacturing practice, especially the consolidation effects. A viscoelastic relaxation model is proposed to characterize the consolidation behavior of thermoplastic prepregs based on compaction tests over a range of temperatures. The intimate contact model is employed to predict the evolution of consolidation, which permits prediction of the void microstructure within the prepreg. Within a hyperelastic framework, several forming simulations are performed by combining a newly developed solid-shell finite element with the consolidation models.
Defense Waste Processing Facility Nitric- Glycolic Flowsheet Chemical Process Cell Chemistry: Part 2
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zamecnik, J.; Edwards, T.
The conversions of nitrite to nitrate, the destruction of glycolate, and the conversion of glycolate to formate and oxalate were modeled for the Nitric-Glycolic flowsheet using data from Chemical Process Cell (CPC) simulant runs conducted by Savannah River National Laboratory (SRNL) from 2011 to 2016. The goal of this work was to develop empirical correlation models to predict these values from measurable variables of the chemical process, so that these quantities could be predicted a priori from the sludge or simulant composition and measurable processing variables. The need for these predictions arises from the need to predict the REDuction/OXidation (REDOX) state of the glass from the Defense Waste Processing Facility (DWPF) melter. This report summarizes the work on these correlations based on the aforementioned data. Previous work was documented in a technical report covering data from 2011-2015, which this report supersedes. Further refinement of the models as additional data are collected is recommended.
NASA Astrophysics Data System (ADS)
Riley, W. J.; Dwivedi, D.; Ghimire, B.; Hoffman, F. M.; Pau, G. S. H.; Randerson, J. T.; Shen, C.; Tang, J.; Zhu, Q.
2015-12-01
Numerical model representations of decadal- to centennial-scale soil-carbon dynamics are a dominant cause of uncertainty in climate change predictions. Recent attempts by some Earth System Model (ESM) teams to integrate previously unrepresented soil processes (e.g., explicit microbial processes, abiotic interactions with mineral surfaces, vertical transport), poor performance of many ESM land models against large-scale and experimental manipulation observations, and complexities associated with spatial heterogeneity highlight the nascent nature of our community's ability to accurately predict future soil carbon dynamics. I will present recent work from our group to develop a modeling framework to integrate pore-, column-, watershed-, and global-scale soil process representations into an ESM (ACME), and apply the International Land Model Benchmarking (ILAMB) package for evaluation. At the column scale and across a wide range of sites, observed depth-resolved carbon stocks and their ¹⁴C-derived turnover times can be explained by a model with explicit representation of two microbial populations, a simple representation of mineralogy, and vertical transport. Integrating soil and plant dynamics requires a 'process-scaling' approach, since all aspects of the multi-nutrient system cannot be explicitly resolved at ESM scales. I will show that one approach, the Equilibrium Chemistry Approximation, improves predictions of forest nitrogen and phosphorus experimental manipulations and leads to very different global soil carbon predictions. Translating model representations from the site- to ESM-scale requires a spatial scaling approach that either explicitly resolves the relevant processes, or more practically, accounts for fine-resolution dynamics at coarser scales. To that end, I will present recent watershed-scale modeling work that applies reduced order model methods to accurately scale fine-resolution soil carbon dynamics to coarse-resolution simulations. Finally, we contend that creating believable soil carbon predictions requires a robust, transparent, and community-available benchmarking framework. I will present an ILAMB evaluation of several of the above-mentioned approaches in ACME, and attempt to motivate community adoption of this evaluation approach.
Pearce, Marcus T
2018-05-11
Music perception depends on internal psychological models derived through exposure to a musical culture. It is hypothesized that this musical enculturation depends on two cognitive processes: (1) statistical learning, in which listeners acquire internal cognitive models of statistical regularities present in the music to which they are exposed; and (2) probabilistic prediction based on these learned models that enables listeners to organize and process their mental representations of music. To corroborate these hypotheses, I review research that uses a computational model of probabilistic prediction based on statistical learning (the information dynamics of music (IDyOM) model) to simulate data from empirical studies of human listeners. The results show that a broad range of psychological processes involved in music perception-expectation, emotion, memory, similarity, segmentation, and meter-can be understood in terms of a single, underlying process of probabilistic prediction using learned statistical models. Furthermore, IDyOM simulations of listeners from different musical cultures demonstrate that statistical learning can plausibly predict causal effects of differential cultural exposure to musical styles, providing a quantitative model of cultural distance. Understanding the neural basis of musical enculturation will benefit from close coordination between empirical neuroimaging and computational modeling of underlying mechanisms, as outlined here. © 2018 The Authors. Annals of the New York Academy of Sciences published by Wiley Periodicals, Inc. on behalf of New York Academy of Sciences.
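To make the statistical-learning account concrete, here is a minimal Python sketch in the spirit of (but much simpler than) IDyOM: a bigram model is trained on toy pitch sequences and then scores the information content (surprisal) of each event in a novel melody. All sequences and parameter values are hypothetical.

```python
# Minimal sketch: bigram statistical learning + probabilistic prediction.
# Not the IDyOM implementation, which uses richer multiple-viewpoint models.
from collections import defaultdict
import math

def train_bigram(corpus):
    """Count bigram transitions over a corpus of pitch sequences."""
    counts = defaultdict(lambda: defaultdict(int))
    for seq in corpus:
        for prev, nxt in zip(seq, seq[1:]):
            counts[prev][nxt] += 1
    return counts

def information_content(counts, seq, alpha=1.0, vocab_size=128):
    """Surprisal -log2 P(x_t | x_{t-1}) with add-alpha smoothing."""
    ics = []
    for prev, nxt in zip(seq, seq[1:]):
        total = sum(counts[prev].values())
        p = (counts[prev][nxt] + alpha) / (total + alpha * vocab_size)
        ics.append(-math.log2(p))
    return ics

corpus = [[60, 62, 64, 65, 67], [60, 62, 64, 62, 60]]   # toy MIDI pitches
model = train_bigram(corpus)
print(information_content(model, [60, 62, 64, 66]))     # 66 is unexpected
```

An enculturated listener corresponds to a model trained on one stylistic corpus; cultural distance can then be read off as the average surprisal that music from another culture produces under that model.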
NASA Astrophysics Data System (ADS)
Corbetta, Matteo; Sbarufatti, Claudio; Giglio, Marco; Todd, Michael D.
2018-05-01
The present work critically analyzes the probabilistic definition of dynamic state-space models subject to Bayesian filters used for monitoring and predicting monotonic degradation processes. The study focuses on the selection of the random process, often called process noise, which is a key perturbation source in the evolution equation of particle filtering. Despite the large number of applications of particle filtering to predicting structural degradation, the adequacy of the chosen process noise has not been investigated. This paper reviews existing process noise models that are typically embedded in particle filters dedicated to monitoring and predicting structural damage caused by fatigue, which is monotonic in nature. The analysis emphasizes that existing formulations of the process noise can jeopardize the performance of the filter in terms of state estimation and remaining life prediction (i.e., damage prognosis). This paper subsequently proposes an optimal and unbiased process noise model and a list of requirements that the stochastic model must satisfy to guarantee high prognostic performance. These requirements are useful for future and further implementations of particle filtering for monotonic system dynamics. The validity of the new process noise formulation is assessed against experimental fatigue crack growth data from a full-scale aeronautical structure using dedicated performance metrics.
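As an illustration of why the choice of process noise matters for monotonic degradation, the following is a minimal particle-filter sketch, not the authors' formulation: the crack-growth increment is perturbed by strictly positive lognormal noise so the estimated damage state can never decrease. The Paris-law constants, stress range, and measurements are all assumed.

```python
# Minimal sketch: particle filter for monotonic fatigue crack growth with
# nonnegative (lognormal) process noise. All numerical values are assumed.
import numpy as np

rng = np.random.default_rng(0)
N = 1000                          # number of particles
C, m = 1e-10, 3.0                 # hypothetical Paris-law constants
dS = 80.0                         # stress range (assumed)

a = rng.uniform(1e-3, 2e-3, N)    # initial crack lengths [m]

def propagate(a):
    da = C * (dS * np.sqrt(np.pi * a)) ** m                   # Paris-law step
    noise = rng.lognormal(mean=0.0, sigma=0.2, size=a.size)   # > 0: monotonic
    return a + da * noise

def update(a, z, sigma_z=1e-4):
    w = np.exp(-0.5 * ((z - a) / sigma_z) ** 2)   # Gaussian likelihood
    return w / w.sum()

for z in [1.6e-3, 1.8e-3, 2.1e-3]:    # synthetic crack measurements [m]
    a = propagate(a)
    w = update(a, z)
    idx = rng.choice(N, N, p=w)       # simple multinomial resampling
    a = a[idx]
print("posterior mean crack length:", a.mean())
```

A Gaussian (signed) process noise in `propagate` would occasionally shrink the crack, which is exactly the kind of physically inadmissible perturbation the paper argues against.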
A review of predictive coding algorithms.
Spratling, M W
2017-03-01
Predictive coding is a leading theory of how the brain performs probabilistic inference. However, there are a number of distinct algorithms which are described by the term "predictive coding". This article provides a concise review of these different predictive coding algorithms, highlighting their similarities and differences. Five algorithms are covered: linear predictive coding which has a long and influential history in the signal processing literature; the first neuroscience-related application of predictive coding to explaining the function of the retina; and three versions of predictive coding that have been proposed to model cortical function. While all these algorithms aim to fit a generative model to sensory data, they differ in the type of generative model they employ, in the process used to optimise the fit between the model and sensory data, and in the way that they are related to neurobiology. Copyright © 2016 Elsevier Inc. All rights reserved.
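For the first of the five algorithms, linear predictive coding, a minimal sketch is easy to give: each sample is predicted as a linear combination of its p predecessors, and only the prediction error is retained. The least-squares formulation below (rather than the classical autocorrelation method) and all signal parameters are illustrative assumptions.

```python
# Minimal sketch: order-p linear predictive coding via least squares.
import numpy as np

def lpc(signal, p):
    """Least-squares LPC coefficients of order p."""
    X = np.column_stack([signal[i:len(signal) - p + i] for i in range(p)])
    y = signal[p:]
    coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coeffs

rng = np.random.default_rng(1)
t = np.arange(500)
x = np.sin(0.1 * t) + 0.05 * rng.standard_normal(500)   # toy signal
a = lpc(x, p=4)
pred = np.column_stack([x[i:len(x) - 4 + i] for i in range(4)]) @ a
residual = x[4:] - pred    # only this prediction error need be transmitted
print("residual variance / signal variance:", residual.var() / x.var())
```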
Application of a High-Fidelity Icing Analysis Method to a Model-Scale Rotor in Forward Flight
NASA Technical Reports Server (NTRS)
Narducci, Robert; Orr, Stanley; Kreeger, Richard E.
2012-01-01
An icing analysis process involving the loose coupling of OVERFLOW-RCAS for rotor performance prediction and with LEWICE3D for thermal analysis and ice accretion is applied to a model-scale rotor for validation. The process offers high-fidelity rotor analysis for the noniced and iced rotor performance evaluation that accounts for the interaction of nonlinear aerodynamics with blade elastic deformations. Ice accumulation prediction also involves loosely coupled data exchanges between OVERFLOW and LEWICE3D to produce accurate ice shapes. Validation of the process uses data collected in the 1993 icing test involving Sikorsky's Powered Force Model. Non-iced and iced rotor performance predictions are compared to experimental measurements as are predicted ice shapes.
Bridge Structure Deformation Prediction Based on GNSS Data Using Kalman-ARIMA-GARCH Model.
Xin, Jingzhou; Zhou, Jianting; Yang, Simon X; Li, Xiaoqing; Wang, Yu
2018-01-19
Bridges are an essential part of the ground transportation system. Health monitoring is fundamentally important for the safety and service life of bridges. A large amount of structural information is obtained from various sensors using sensing technology, and the data processing has become a challenging issue. To improve the prediction accuracy of bridge structure deformation based on data mining and to accurately evaluate the time-varying characteristics of bridge structure performance evolution, this paper proposes a new method for bridge structure deformation prediction, which integrates the Kalman filter, the autoregressive integrated moving average (ARIMA) model, and generalized autoregressive conditional heteroskedasticity (GARCH). Firstly, the raw deformation data is pre-processed using the Kalman filter to reduce the noise. After that, the linear recursive ARIMA model is established to analyze and predict the structural deformation. Finally, the nonlinear recursive GARCH model is introduced to further improve the accuracy of the prediction. Simulation results based on measured sensor data from the Global Navigation Satellite System (GNSS) deformation monitoring system demonstrated that: (1) the Kalman filter is capable of denoising the bridge deformation monitoring data; (2) the prediction accuracy of the proposed Kalman-ARIMA-GARCH model is satisfactory, with the mean absolute error increasing only from 3.402 mm to 5.847 mm as the prediction step grows; and (3) in comparison to the Kalman-ARIMA model, the Kalman-ARIMA-GARCH model yields superior prediction accuracy because it captures partial nonlinear characteristics (heteroscedasticity); the mean absolute error of five-step prediction using the proposed model is improved by 10.12%. This paper provides a new way for structural behavior prediction based on data processing, which can lay a foundation for early warning in bridge health monitoring systems based on sensor data. PMID:29351254
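A minimal sketch of the three-stage pipeline on synthetic data, not the authors' code, might look as follows: a scalar random-walk Kalman filter denoises the series, an ARIMA model captures the linear dynamics, and a GARCH(1,1) model of the ARIMA residuals supplies the heteroscedastic component. It assumes the statsmodels and arch packages; all noise variances and model orders are illustrative.

```python
# Minimal sketch: Kalman denoising -> ARIMA trend -> GARCH residual variance.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA
from arch import arch_model

rng = np.random.default_rng(2)
true = np.cumsum(0.05 * rng.standard_normal(300))    # latent deformation [mm]
obs = true + 0.5 * rng.standard_normal(300)          # noisy GNSS series

# 1) scalar random-walk Kalman filter (assumed noise variances q, r)
q, r = 0.01, 0.25
x, p_var, denoised = 0.0, 1.0, []
for z in obs:
    p_var += q                    # predict step
    k = p_var / (p_var + r)       # Kalman gain
    x += k * (z - x)              # update step
    p_var *= (1 - k)
    denoised.append(x)
denoised = np.asarray(denoised)

# 2) linear ARIMA model on the denoised series
arima = ARIMA(denoised, order=(1, 1, 1)).fit()

# 3) GARCH(1,1) on ARIMA residuals for the heteroscedastic part
garch = arch_model(arima.resid, vol="Garch", p=1, q=1).fit(disp="off")
print(arima.forecast(5))                    # 5-step mean prediction
print(garch.forecast(horizon=5).variance)   # 5-step variance prediction
```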
A gentle introduction to quantile regression for ecologists
Cade, B.S.; Noon, B.R.
2003-01-01
Quantile regression is a way to estimate the conditional quantiles of a response variable distribution in the linear model, providing a more complete view of possible causal relationships between variables in ecological processes. Typically, not all of the factors that affect ecological processes are measured and included in the statistical models used to investigate relationships between variables associated with those processes. As a consequence, there may be a weak or no predictive relationship between the mean of the response variable (y) distribution and the measured predictive factors (X). Yet there may be stronger, useful predictive relationships with other parts of the response variable distribution. This primer relates quantile regression estimates to prediction intervals in parametric error distribution regression models (e.g., least squares), and discusses the ordering characteristics, interval nature, sampling variation, weighting, and interpretation of the estimates for homogeneous and heterogeneous regression models.
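A minimal sketch of the idea with statsmodels (assumed here; the primer itself is software-agnostic): fit several conditional quantiles of a response whose spread grows with the predictor, so the upper-quantile slope differs markedly from the median slope.

```python
# Minimal sketch: quantile regression on heterogeneous (fan-shaped) data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
x = rng.uniform(0, 10, 200)
y = 2 + 0.5 * x + (0.2 + 0.3 * x) * rng.standard_normal(200)  # spread grows
df = pd.DataFrame({"x": x, "y": y})

for q in (0.1, 0.5, 0.9):
    fit = smf.quantreg("y ~ x", df).fit(q=q)
    print(f"tau={q}: intercept={fit.params['Intercept']:.2f}, "
          f"slope={fit.params['x']:.2f}")
```

In an ecological setting, the 0.9-quantile slope can reveal a strong limiting-factor relationship even when the mean (least-squares) slope is weak.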
Word of Mouth : An Agent-based Approach to Predictability of Stock Prices
NASA Astrophysics Data System (ADS)
Shimokawa, Tetsuya; Misawa, Tadanobu; Watanabe, Kyoko
This paper addresses how communication processes among investors affect stock price formation, especially the emerging predictability of stock prices, in financial markets. An agent-based model, called the word-of-mouth model, is introduced for analyzing the problem. This model provides a simple, but sufficiently versatile, description of the informational diffusion process and succeeds in lucidly explaining the predictability of small-sized stocks, a stylized fact in financial markets that is difficult to resolve with traditional models. Our model also provides a rigorous examination of the underreaction hypothesis to informational shocks.
Cost Models for MMC Manufacturing Processes
NASA Technical Reports Server (NTRS)
Elzey, Dana M.; Wadley, Haydn N. G.
1996-01-01
Processes for the manufacture of advanced metal matrix composites are rapidly approaching maturity in the research laboratory, and there is growing interest in their transition to industrial production. However, research conducted to date has almost exclusively focused on overcoming the technical barriers to producing high-quality material, and little attention has been given to the economic feasibility of these laboratory approaches and process cost issues. A quantitative cost modeling (QCM) approach was developed to address these issues. QCMs are cost-analysis tools based on predictive process models relating process conditions to the attributes of the final product. An important attribute of the QCM approach is the ability to predict the sensitivity of material production costs to product quality and to quantitatively explore trade-offs between cost and quality. Applications of the cost models allow more efficient direction of future MMC process technology development and a more accurate assessment of MMC market potential. Cost models were developed for two state-of-the-art metal matrix composite (MMC) manufacturing processes: tape casting and plasma spray deposition. Quality and cost models are presented for both processes, and the resulting predicted quality-cost curves are presented and discussed.
A study on predicting network corrections in PPP-RTK processing
NASA Astrophysics Data System (ADS)
Wang, Kan; Khodabandeh, Amir; Teunissen, Peter
2017-10-01
In PPP-RTK processing, the network corrections including the satellite clocks, the satellite phase biases and the ionospheric delays are provided to the users to enable fast single-receiver integer ambiguity resolution. To solve the rank deficiencies in the undifferenced observation equations, estimable parameters are formed to generate a full-rank design matrix. In this contribution, we first discuss the interpretation of the estimable parameters without and with a dynamic satellite clock model incorporated in a Kalman filter during the network processing. The functionality of the dynamic satellite clock model is tested in the PPP-RTK processing. Due to the latency generated by the network processing and data transfer, the network corrections are delayed for real-time user processing. To bridge the latencies, we discuss and compare two prediction approaches making use of the network corrections without and with the dynamic satellite clock model, respectively. The first prediction approach is based on polynomial fitting of the estimated network parameters, while the second approach directly follows the dynamic model in the Kalman filter of the network processing and utilises the satellite clock drifts estimated in the network processing. Using 1 Hz data from two networks in Australia, the influences of the two prediction approaches on the user positioning results are analysed and compared for latencies ranging from 3 to 10 s. The accuracy of the positioning results decreases with increasing latency of the network products. For a latency of 3 s, the RMS of the horizontal and the vertical coordinates (with respect to the ground truth) does not differ much between the two prediction approaches. For a latency of 10 s, the prediction approach making use of the satellite clock model generates slightly better positioning results, with RMS differences at the mm level. Further advantages and disadvantages of both prediction approaches are also discussed in this contribution.
McGowan, Conor P.; Allan, Nathan; Servoss, Jeff; Hedwall, Shaula J.; Wooldridge, Brian
2017-01-01
Assessment of a species' status is a key part of management decision making for endangered and threatened species under the U.S. Endangered Species Act. Predicting the future state of the species is an essential part of species status assessment, and projection models can play an important role in developing predictions. We built a stochastic simulation model that incorporated parametric and environmental uncertainty to predict the probable future status of the Sonoran desert tortoise in the southwestern United States and North Central Mexico. Sonoran desert tortoise was a Candidate species for listing under the Endangered Species Act, and decision makers wanted to use model predictions in their decision making process. The model accounted for future habitat loss and possible effects of climate change induced droughts to predict future population growth rates, abundances, and quasi-extinction probabilities. Our model predicts that the population will likely decline over the next few decades, but there is very low probability of quasi-extinction less than 75 years into the future. Increases in drought frequency and intensity may increase extinction risk for the species. Our model helped decision makers predict and characterize uncertainty about the future status of the species in their listing decision. We incorporated complex ecological processes (e.g., climate change effects on tortoises) in transparent and explicit ways tailored to support decision making processes related to endangered species.
Updraft Fixed Bed Gasification Aspen Plus Model
DOE Office of Scientific and Technical Information (OSTI.GOV)
2007-09-27
The updraft fixed bed gasification model provides predictive modeling capabilities for updraft fixed bed gasifiers, when devolatilization data is available. The fixed bed model is constructed using Aspen Plus, process modeling software, coupled with a FORTRAN user kinetic subroutine. Current updraft gasification models created in Aspen Plus have limited predictive capabilities and must be "tuned" to reflect a generalized gas composition as specified in literature or by the gasifier manufacturer. This limits the applicability of the process model.
Application of agent-based system for bioprocess description and process improvement.
Gao, Ying; Kipling, Katie; Glassey, Jarka; Willis, Mark; Montague, Gary; Zhou, Yuhong; Titchener-Hooker, Nigel J
2010-01-01
Modeling plays an important role in bioprocess development for design and scale-up. Predictive models can also be used in biopharmaceutical manufacturing to assist decision-making either to maintain process consistency or to identify optimal operating conditions. To predict the whole bioprocess performance, the strong interactions present in a processing sequence must be adequately modeled. Traditionally, bioprocess modeling considers process units separately, which makes it difficult to capture the interactions between units. In this work, a systematic framework is developed to analyze the bioprocesses based on a whole process understanding and considering the interactions between process operations. An agent-based approach is adopted to provide a flexible infrastructure for the necessary integration of process models. This enables the prediction of overall process behavior, which can then be applied during process development or once manufacturing has commenced, in both cases leading to the capacity for fast evaluation of process improvement options. The multi-agent system comprises a process knowledge base, process models, and a group of functional agents. In this system, agent components co-operate with each other in performing their tasks. These include the description of the whole process behavior, evaluating process operating conditions, monitoring of the operating processes, predicting critical process performance, and providing guidance to decision-making when coping with process deviations. During process development, the system can be used to evaluate the design space for process operation. During manufacture, the system can be applied to identify abnormal process operation events and then to provide suggestions as to how best to cope with the deviations. In all cases, the function of the system is to ensure an efficient manufacturing process. The implementation of the agent-based approach is illustrated via selected application scenarios, which demonstrate how such a framework may enable the better integration of process operations by providing a plant-wide process description to facilitate process improvement. Copyright 2009 American Institute of Chemical Engineers
NASA Technical Reports Server (NTRS)
Johnston, John D.; Howard, Joseph M.; Mosier, Gary E.; Parrish, Keith A.; McGinnis, Mark A.; Bluth, Marcel; Kim, Kevin; Ha, Kong Q.
2004-01-01
The James Webb Space Telescope (JWST) is a large, infrared-optimized space telescope scheduled for launch in 2011. This is a continuation of a series of papers on modeling activities for JWST. The structural-thermal-optical (STOP) analysis process is used to predict the effect of thermal distortion on optical performance. The benchmark STOP analysis for JWST assesses the effect of an observatory slew on wavefront error. Temperatures predicted using geometric and thermal math models are mapped to a structural finite element model in order to predict thermally induced deformations. Motions and deformations at optical surfaces are then input to optical models, and optical performance is predicted using either an optical ray trace or a linear optical analysis tool. In addition to baseline performance predictions, a process for performing sensitivity studies to assess modeling uncertainties is described.
Visual anticipation biases conscious decision making but not bottom-up visual processing.
Mathews, Zenon; Cetnarski, Ryszard; Verschure, Paul F M J
2014-01-01
Prediction plays a key role in the control of attention, but it is not clear which aspects of prediction are most prominent in conscious experience. An evolving view of the brain is that it can be seen as a prediction machine that optimizes its ability to predict states of the world and the self through the top-down propagation of predictions and the bottom-up presentation of prediction errors. There are competing views, though, on whether predictions or prediction errors dominate the formation of conscious experience. Yet the dynamic effects of prediction on perception, decision making, and consciousness have been difficult to assess and to model. We propose a novel mathematical framework and a psychophysical paradigm that allow us to assess the hierarchical structuring of perceptual consciousness, its content, and the impact of predictions and/or errors on conscious experience, attention, and decision-making. Using a displacement detection task combined with reverse correlation, we reveal signatures of the usage of prediction at three different levels of perceptual processing: bottom-up fast saccades, top-down driven slow saccades, and conscious decisions. Our results suggest that the brain employs multiple parallel mechanisms at different levels of perceptual processing in order to shape effective sensory consciousness within a predicted perceptual scene. We further observe that bottom-up sensory and top-down predictive processes can be dissociated through cognitive load. We propose a probabilistic data association model from dynamical systems theory to capture the predictive multi-scale bias in perceptual processing that we observe and its role in the formation of conscious experience. We propose that these results support the hypothesis that consciousness provides a time-delayed description of a task that is used to prospectively optimize real-time control structures, rather than being engaged in the real-time control of behavior itself.
Thomas W. Bonnot; Frank R. Thompson; Joshua J. Millspaugh
2017-01-01
The increasing need to predict how climate change will impact wildlife species has exposed limitations in how well current approaches model important biological processes at scales at which those processes interact with climate. We used a comprehensive approach that combined recent advances in landscape and population modeling into dynamic-landscape metapopulation...
Krajcsi, Attila; Lengyel, Gábor; Kojouharova, Petia
2018-01-01
HIGHLIGHTS: We test whether symbolic number comparison is handled by a noisy analog system. The analog system model shows systematic biases in describing symbolic number comparison. This suggests that symbolic and non-symbolic numbers are processed by different systems. Dominant numerical cognition models suppose that both symbolic and non-symbolic numbers are processed by the Analog Number System (ANS) working according to Weber's law. It was proposed that in a number comparison task the numerical distance and size effects reflect ratio-based performance, which is the sign of ANS activation. However, an increasing number of findings and alternative models propose that symbolic and non-symbolic numbers might be processed by different representations. Importantly, alternative explanations may offer predictions similar to the ANS prediction; therefore, former evidence relying only on the goodness of fit of the ANS prediction is not sufficient to support the ANS account. To test the ANS model more rigorously, a more extensive test is offered here. Several properties of the ANS predictions for the error rates, reaction times, and diffusion model drift rates were systematically analyzed in both non-symbolic dot comparison and symbolic Indo-Arabic comparison tasks. It was consistently found that while the ANS model's prediction is relatively good for the non-symbolic dot comparison, its prediction is poorer and systematically biased for the symbolic Indo-Arabic comparison. We conclude that only non-symbolic comparison is supported by the ANS, and symbolic number comparisons are processed by another representation. PMID:29491845
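For reference, a minimal sketch of the standard ANS comparison model being tested: each magnitude is represented as a Gaussian whose spread scales with its size (Weber's law), so the predicted error rate depends on the ratio of the two numbers. The Weber fraction value is an assumption.

```python
# Minimal sketch: ratio-based error rate under the linear ANS model.
import numpy as np
from scipy.stats import norm

def ans_error_rate(n1, n2, w=0.15):
    """P(wrong response) when comparing n1 vs n2 with Weber fraction w."""
    return norm.cdf(-abs(n1 - n2) / (w * np.sqrt(n1**2 + n2**2)))

print(ans_error_rate(5, 6))   # close ratio  -> high predicted error rate
print(ans_error_rate(5, 9))   # distant ratio -> low predicted error rate
```

The paper's test amounts to checking whether observed symbolic error rates and reaction times actually follow this ratio signature; the reported systematic deviations are what argue against the ANS account for symbolic digits.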
A Complete Procedure for Predicting and Improving the Performance of HAWT's
NASA Astrophysics Data System (ADS)
Al-Abadi, Ali; Ertunç, Özgür; Sittig, Florian; Delgado, Antonio
2014-06-01
A complete procedure for predicting and improving the performance of horizontal axis wind turbines (HAWTs) has been developed. The first process is predicting the power extracted by the turbine and the derived rotor torque, which should be identical to that of the drive unit. The BEM method and a newly developed post-stall treatment for resolving stall-regulated HAWTs are incorporated in the prediction. For that, a modified stall-regulated prediction model, which can predict HAWT performance over the operating range of oncoming wind velocity, is derived from existing models. The model involves radius and chord, making it more general for predicting the performance of HAWTs of different scales and rotor shapes. The second process is modifying the rotor shape by an optimization process, which can be applied to any existing HAWT, to improve its performance. A gradient-based optimization is used for adjusting the chord and twist angle distribution of the rotor blade to increase the power extraction while keeping the drive torque constant, so that the same drive unit can be kept. The final process is testing the modified turbine to predict its enhanced performance. The procedure is applied to the NREL phase-VI 10 kW turbine as a baseline. The study has proven the applicability of the developed model in predicting the performance of the baseline as well as the optimized turbine. In addition, the optimization method has shown that the power coefficient can be increased while keeping the same design rotational speed.
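A minimal sketch of the BEM core used in such predictions (small-angle, no tip-loss correction, and with an assumed airfoil polar rather than the paper's data) is the fixed-point iteration on the axial and tangential induction factors:

```python
# Minimal sketch: BEM induction-factor iteration for one blade element.
# All geometry and operating values are assumed, not the NREL phase-VI case.
import numpy as np

B, R, r, c = 3, 5.0, 3.5, 0.3            # blades, tip/local radius, chord [m]
U, omega, twist = 7.0, 7.5, np.deg2rad(5.0)   # wind [m/s], rotor [rad/s]
sigma = B * c / (2 * np.pi * r)          # local solidity

def polar(alpha):                        # crude assumed airfoil polar
    return 2 * np.pi * alpha, 0.01 + 0.02 * alpha**2   # Cl, Cd

a, ap = 0.3, 0.0                         # axial / tangential induction
for _ in range(100):
    phi = np.arctan2(U * (1 - a), omega * r * (1 + ap))   # inflow angle
    cl, cd = polar(phi - twist)
    cn = cl * np.cos(phi) + cd * np.sin(phi)    # normal force coefficient
    ct = cl * np.sin(phi) - cd * np.cos(phi)    # tangential force coefficient
    a_new = 1.0 / (4 * np.sin(phi)**2 / (sigma * cn) + 1)
    ap_new = 1.0 / (4 * np.sin(phi) * np.cos(phi) / (sigma * ct) - 1)
    if abs(a_new - a) < 1e-8 and abs(ap_new - ap) < 1e-8:
        break
    a, ap = a_new, ap_new
print(f"axial induction a={a:.3f}, tangential induction a'={ap:.4f}")
```

A full prediction repeats this per blade element and integrates the tangential loads into rotor torque and power; the paper's post-stall treatment replaces the attached-flow polar above in the stalled regime.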
In-situ biogas upgrading process: Modeling and simulations aspects.
Lovato, Giovanna; Alvarado-Morales, Merlin; Kovalovszki, Adam; Peprah, Maria; Kougias, Panagiotis G; Rodrigues, José Alberto Domingues; Angelidaki, Irini
2017-12-01
Biogas upgrading processes by in-situ hydrogen (H2) injection are still challenging and could benefit from a mathematical model to predict system performance. Therefore, a previous model on anaerobic digestion was updated and expanded to include the effect of H2 injection into the liquid phase of a fermenter with the aim of modeling and simulating these processes. This was done by including hydrogenotrophic methanogen kinetics for H2 consumption and inhibition effect on the acetogenic steps. Special attention was paid to gas-to-liquid transfer of H2. The final model was successfully validated considering a set of Case Studies. Biogas composition and H2 utilization were correctly predicted, with overall deviation below 10% compared to experimental measurements. Parameter sensitivity analysis revealed that the model is highly sensitive to the H2 injection rate and mass transfer coefficient. The model developed is an effective tool for predicting process performance in scenarios with biogas upgrading. Copyright © 2017 Elsevier Ltd. All rights reserved.
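A minimal sketch of the gas-to-liquid transfer term that the sensitivity analysis highlights, not the full anaerobic digestion model: dissolved H2 follows dC/dt = kLa(C_sat - C) minus a Monod-type hydrogenotrophic uptake. All parameter values are assumed.

```python
# Minimal sketch: dissolved-H2 balance with gas-liquid transfer and
# Monod-type hydrogenotrophic uptake. Parameter values are assumed.
import numpy as np
from scipy.integrate import solve_ivp

kLa = 200.0           # gas-liquid mass transfer coefficient [1/d]
C_sat = 1.2e-3        # H2 saturation concentration [kg COD/m^3]
r_max, Ks = 0.5, 1e-4 # max uptake rate and half-saturation (assumed)

def rhs(t, y):
    C = y[0]
    uptake = r_max * C / (Ks + C)      # hydrogenotrophic methanogens
    return [kLa * (C_sat - C) - uptake]

sol = solve_ivp(rhs, (0, 1.0), [1e-5])
print("dissolved H2 after 1 day:", sol.y[0, -1])
```

Re-running this with different `kLa` values shows directly why the mass transfer coefficient dominates the predicted H2 utilization.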
Sources of Uncertainty in Predicting Land Surface Fluxes Using Diverse Data and Models
NASA Technical Reports Server (NTRS)
Dungan, Jennifer L.; Wang, Weile; Michaelis, Andrew; Votava, Petr; Nemani, Ramakrishma
2010-01-01
In the domain of predicting land surface fluxes, models are used to bring data from large observation networks and satellite remote sensing together to make predictions about present and future states of the Earth. Characterizing the uncertainty about such predictions is a complex process and one that is not yet fully understood. Uncertainty exists about initialization, measurement and interpolation of input variables; model parameters; model structure; and mixed spatial and temporal supports. Multiple models or structures often exist to describe the same processes. Uncertainty about structure is currently addressed by running an ensemble of different models and examining the distribution of model outputs. To illustrate structural uncertainty, a multi-model ensemble experiment we have been conducting using the Terrestrial Observation and Prediction System (TOPS) will be discussed. TOPS uses public versions of process-based ecosystem models that use satellite-derived inputs along with surface climate data and land surface characterization to produce predictions of ecosystem fluxes including gross and net primary production and net ecosystem exchange. Using the TOPS framework, we have explored the uncertainty arising from the application of models with different assumptions, structures, parameters, and variable definitions. With a small number of models, this only begins to capture the range of possible spatial fields of ecosystem fluxes. Few attempts have been made to systematically address the components of uncertainty in such a framework. We discuss the characterization of uncertainty for this approach including both quantifiable and poorly known aspects.
Scheiblauer, Johannes; Scheiner, Stefan; Joksch, Martin; Kavsek, Barbara
2018-09-14
A combined experimental/theoretical approach is presented, for improving the predictability of Saccharomyces cerevisiae fermentations. In particular, a mathematical model was developed explicitly taking into account the main mechanisms of the fermentation process, allowing for continuous computation of key process variables, including the biomass concentration and the respiratory quotient (RQ). For model calibration and experimental validation, batch and fed-batch fermentations were carried out. Comparison of the model-predicted biomass concentrations and RQ developments with the corresponding experimentally recorded values shows a remarkably good agreement for both batch and fed-batch processes, confirming the adequacy of the model. Furthermore, sensitivity studies were performed, in order to identify model parameters whose variations have significant effects on the model predictions: our model responds with significant sensitivity to the variations of only six parameters. These studies provide a valuable basis for model reduction, as also demonstrated in this paper. Finally, optimization-based parametric studies demonstrate how our model can be utilized for improving the efficiency of Saccharomyces cerevisiae fermentations. Copyright © 2018 Elsevier Ltd. All rights reserved.
Symbolic Processing Combined with Model-Based Reasoning
NASA Technical Reports Server (NTRS)
James, Mark
2009-01-01
A computer program for the detection of present and prediction of future discrete states of a complex, real-time engineering system utilizes a combination of symbolic processing and numerical model-based reasoning. One of the biggest weaknesses of a purely symbolic approach is that it enables prediction of only future discrete states while missing all unmodeled states or leading to incorrect identification of an unmodeled state as a modeled one. A purely numerical approach is based on a combination of statistical methods and mathematical models of the applicable physics and necessitates development of a complete model to the level of fidelity required for prediction. In addition, a purely numerical approach does not afford the ability to qualify its results without some form of symbolic processing. The present software implements numerical algorithms to detect unmodeled events and symbolic algorithms to predict expected behavior, correlate the expected behavior with the unmodeled events, and interpret the results in order to predict future discrete states. The approach embodied in this software differs from that of the BEAM methodology (aspects of which have been discussed in several prior NASA Tech Briefs articles), which provides for prediction of future measurements in the continuous-data domain.
Prediction and generation of binary Markov processes: Can a finite-state fox catch a Markov mouse?
NASA Astrophysics Data System (ADS)
Ruebeck, Joshua B.; James, Ryan G.; Mahoney, John R.; Crutchfield, James P.
2018-01-01
Understanding the generative mechanism of a natural system is a vital component of the scientific method. Here, we investigate one of the fundamental steps toward this goal by presenting the minimal generator of an arbitrary binary Markov process. This is a class of processes whose predictive model is well known. Surprisingly, the generative model requires three distinct topologies for different regions of parameter space. We show that a previously proposed generator for a particular set of binary Markov processes is, in fact, not minimal. Our results shed the first quantitative light on the relative (minimal) costs of prediction and generation. We find, for instance, that the difference between prediction and generation is maximized when the process is approximately independently, identically distributed.
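A minimal sketch of the prediction side of the problem: simulate a binary Markov chain from its transition matrix and score the optimal one-step predictor, which always guesses the likelier successor of the current symbol. The transition probabilities are illustrative.

```python
# Minimal sketch: generating a binary Markov process and scoring the
# optimal one-step predictor. Transition probabilities are assumed.
import numpy as np

rng = np.random.default_rng(4)
P = np.array([[0.9, 0.1],        # P[i, j] = Pr(next = j | current = i)
              [0.4, 0.6]])

def generate(P, n, s0=0):
    states = [s0]
    for _ in range(n - 1):
        states.append(rng.choice(2, p=P[states[-1]]))
    return np.array(states)

x = generate(P, 100_000)
guess = P.argmax(axis=1)[x[:-1]]   # predict the likelier next symbol
accuracy = (guess == x[1:]).mean()
print("optimal predictor accuracy:", accuracy)  # ~ pi(0)*0.9 + pi(1)*0.6
```

The paper's point is the converse direction: a machine that merely *generates* the same statistics can require a different (and, in some parameter regions, larger) internal structure than the predictor sketched here.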
Curcio, Stefano; Saraceno, Alessandra; Calabrò, Vincenza; Iorio, Gabriele
2014-01-01
The present paper was aimed at showing that advanced modeling techniques, based either on artificial neural networks or on hybrid systems, might efficiently predict the behavior of two biotechnological processes designed for the obtainment of second-generation biofuels from waste biomasses. In particular, the enzymatic transesterification of waste-oil glycerides, the key step for the obtainment of biodiesel, and the anaerobic digestion of agroindustry wastes to produce biogas were modeled. It was proved that the proposed modeling approaches provided very accurate predictions of system behavior. Both neural network and hybrid modeling represent a valid alternative to traditional theoretical models, especially when comprehensive knowledge of the metabolic pathways, of the true kinetic mechanisms, and of the transport phenomena involved in biotechnological processes is difficult to achieve. PMID:24516363
Reliability Prediction of Ontology-Based Service Compositions Using Petri Net and Time Series Models
Li, Jia; Xia, Yunni; Luo, Xin
2014-01-01
OWL-S, one of the most important Semantic Web service ontologies proposed to date, provides a core ontological framework and guidelines for describing the properties and capabilities of web services in an unambiguous, computer-interpretable form. Predicting the reliability of composite service processes specified in OWL-S allows service users to decide whether the process meets their quantitative quality requirements. In this study, we consider the runtime quality of services to be fluctuating and introduce a dynamic framework to predict the runtime reliability of services specified in OWL-S, employing the non-Markovian stochastic Petri net (NMSPN) and a time series model. The framework includes the following steps: obtaining the historical response time series of individual service components; fitting these series with an autoregressive moving average (ARMA) model and predicting the future firing rates of service components; mapping the OWL-S process into an NMSPN model; and employing the predicted firing rates as the model input of the NMSPN and calculating the normal completion probability as the reliability estimate. In the case study, a comparison between the static model and our approach based on experimental data is presented, and it is shown that our approach achieves higher prediction accuracy. PMID:24688429
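A minimal sketch of the time-series step of this framework, on synthetic data: fit an ARMA model to a component's historical response times and forecast the next response time, whose reciprocal can serve as the predicted firing rate fed into the NMSPN. The statsmodels ARIMA class with d = 0 is used as the ARMA implementation; all data are assumed.

```python
# Minimal sketch: ARMA forecast of service response times -> firing rate.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(5)
resp = 120 + 10 * np.sin(np.arange(200) / 10) + 3 * rng.standard_normal(200)

arma = ARIMA(resp, order=(2, 0, 1)).fit()   # ARMA(2,1) on response times [ms]
next_rt = arma.forecast(steps=1)[0]
print("predicted response time [ms]:", next_rt)
print("predicted firing rate [1/ms]:", 1.0 / next_rt)
```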
Explaining neural signals in human visual cortex with an associative learning model.
Jiang, Jiefeng; Schmajuk, Nestor; Egner, Tobias
2012-08-01
"Predictive coding" models posit a key role for associative learning in visual cognition, viewing perceptual inference as a process of matching (learned) top-down predictions (or expectations) against bottom-up sensory evidence. At the neural level, these models propose that each region along the visual processing hierarchy entails one set of processing units encoding predictions of bottom-up input, and another set computing mismatches (prediction error or surprise) between predictions and evidence. This contrasts with traditional views of visual neurons operating purely as bottom-up feature detectors. In support of the predictive coding hypothesis, a recent human neuroimaging study (Egner, Monti, & Summerfield, 2010) showed that neural population responses to expected and unexpected face and house stimuli in the "fusiform face area" (FFA) could be well-described as a summation of hypothetical face-expectation and -surprise signals, but not by feature detector responses. Here, we used computer simulations to test whether these imaging data could be formally explained within the broader framework of a mathematical neural network model of associative learning (Schmajuk, Gray, & Lam, 1996). Results show that FFA responses could be fit very closely by model variables coding for conditional predictions (and their violations) of stimuli that unconditionally activate the FFA. These data document that neural population signals in the ventral visual stream that deviate from classic feature detection responses can formally be explained by associative prediction and surprise signals.
Modeling Sediment Detention Ponds Using Reactor Theory and Advection-Diffusion Concepts
NASA Astrophysics Data System (ADS)
Wilson, Bruce N.; Barfield, Billy J.
1985-04-01
An algorithm is presented to model the sedimentation process in detention ponds. This algorithm is based on a mass balance for an infinitesimal layer that couples reactor theory concepts with advection-diffusion processes. Reactor theory concepts are used to (1) determine the residence time of sediment particles and (2) mix influent sediment with previously stored flow. Advection-diffusion processes are used to model (1) the settling characteristics of sediment and (2) the vertical diffusion of sediment due to turbulence. Predicted results of the model are compared to those observed on two pilot-scale ponds for a total of 12 runs. The average percent error between predicted and observed trap efficiency was 5.2%. Overall, the observed sedimentology values were predicted with reasonable accuracy.
Probabilistic modeling of discourse-aware sentence processing.
Dubey, Amit; Keller, Frank; Sturt, Patrick
2013-07-01
Probabilistic models of sentence comprehension are increasingly relevant to questions concerning human language processing. However, such models are often limited to syntactic factors. This restriction is unrealistic in light of experimental results suggesting interactions between syntax and other forms of linguistic information in human sentence processing. To address this limitation, this article introduces two sentence processing models that augment a syntactic component with information about discourse co-reference. The novel combination of probabilistic syntactic components with co-reference classifiers permits them to more closely mimic human behavior than existing models. The first model uses a deep model of linguistics, based in part on probabilistic logic, allowing it to make qualitative predictions on experimental data; the second model uses shallow processing to make quantitative predictions on a broad-coverage reading-time corpus. Copyright © 2013 Cognitive Science Society, Inc.
Modeling transport phenomena and uncertainty quantification in solidification processes
NASA Astrophysics Data System (ADS)
Fezi, Kyle S.
Direct chill (DC) casting is the primary processing route for wrought aluminum alloys. This semicontinuous process consists of primary cooling as the metal is pulled through a water cooled mold followed by secondary cooling with a water jet spray and free falling water. To gain insight into this complex solidification process, a fully transient model of DC casting was developed to predict the transport phenomena of aluminum alloys for various conditions. This model is capable of solving mixture mass, momentum, energy, and species conservation equations during multicomponent solidification. Various DC casting process parameters were examined for their effect on transport phenomena predictions in an alloy of commercial interest (aluminum alloy 7050). The practice of placing a wiper to divert cooling water from the ingot surface was studied and the results showed that placement closer to the mold causes remelting at the surface and increases susceptibility to bleed outs. Numerical models of metal alloy solidification, like the one previously mentioned, are used to gain insight into physical phenomena that cannot be observed experimentally. However, uncertainty in model inputs cause uncertainty in results and those insights. The analysis of model assumptions and probable input variability on the level of uncertainty in model predictions has not been calculated in solidification modeling as yet. As a step towards understanding the effect of uncertain inputs on solidification modeling, uncertainty quantification (UQ) and sensitivity analysis were first performed on a transient solidification model of a simple binary alloy (Al-4.5wt.%Cu) in a rectangular cavity with both columnar and equiaxed solid growth models. This analysis was followed by quantifying the uncertainty in predictions from the recently developed transient DC casting model. The PRISM Uncertainty Quantification (PUQ) framework quantified the uncertainty and sensitivity in macrosegregation, solidification time, and sump profile predictions. Uncertain model inputs of interest included the secondary dendrite arm spacing, equiaxed particle size, equiaxed packing fraction, heat transfer coefficient, and material properties. The most influential input parameters for predicting the macrosegregation level were the dendrite arm spacing, which also strongly depended on the choice of mushy zone permeability model, and the equiaxed packing fraction. Additionally, the degree of uncertainty required to produce accurate predictions depended on the output of interest from the model.
Zimmermann, Morgana; Longhi, Daniel A; Schaffner, Donald W; Aragão, Gláucia M F
2014-05-01
The knowledge and understanding of Bacillus coagulans inactivation during thermal treatment of tomato pulp, as well as the influence of temperature variation during thermal processes, are essential for the design, calculation, and optimization of the process. The aims of this work were to predict B. coagulans spore inactivation in tomato pulp under varying time-temperature profiles with a Gompertz-inspired inactivation model and to validate the model's predictions by comparing the predicted values with experimental data. B. coagulans spores in pH 4.3 tomato pulp at 4 °Brix were sealed in glass capillary tubes and heated in thermostatically controlled circulating oil baths. Seven different nonisothermal profiles in the range from 95 to 105 °C were studied. Predicted inactivation kinetics showed behavior similar to the experimentally observed inactivation curves when the samples were exposed to temperatures in the upper range of this study (99 to 105 °C). Profiles that resulted in less accurate predictions were those where the range of temperatures analyzed was comparatively lower (inactivation profiles starting at 95 °C). The link between prediction failure and both the lower starting temperature and the magnitude of the temperature shift suggests some chemical or biological mechanism at work. Statistical analysis showed that overall model predictions were acceptable, with bias factors from 0.781 to 1.012 and accuracy factors from 1.049 to 1.351, and confirms that the models used were adequate to estimate B. coagulans spore inactivation under fluctuating temperature conditions in the range from 95 to 105 °C. How can we estimate Bacillus coagulans inactivation during sudden temperature shifts in heat processing? This article provides a validated model that can be used to predict B. coagulans under changing temperature conditions. B. coagulans is a spore-forming bacillus that spoils acidified food products. The mathematical model developed here can be used to predict the spoilage risk following thermal process deviations for tomato products. © 2014 Institute of Food Technologists®
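A minimal sketch of predicting inactivation under a nonisothermal profile, using a first-order Bigelow-type (D- and z-value) momentary rate rather than the paper's Gompertz-inspired model; all parameter values are assumed.

```python
# Minimal sketch: log-reduction under a time-varying temperature profile,
# integrating a Bigelow-type momentary rate. D_ref, T_ref, z are assumed.
import numpy as np

D_ref, T_ref, z = 2.0, 100.0, 8.0   # D-value [min] at 100 C, z-value [C]

def log10_reduction(times, temps):
    """log10(N/N0) along a temperature profile, trapezoidal integration."""
    D = D_ref * 10 ** (-(temps - T_ref) / z)   # momentary D-value [min]
    k = np.log(10) / D                         # first-order rate [1/min]
    integral = np.sum(0.5 * (k[1:] + k[:-1]) * np.diff(times))
    return -integral / np.log(10)

t = np.linspace(0, 10, 101)                    # 10-minute process
T = np.where(t < 5, 95 + 2 * t, 105.0)         # ramp 95->105 C, then hold
print("predicted log10 reduction:", log10_reduction(t, T))
```

The same integration scheme handles a process deviation directly: substitute the recorded (deviated) temperature profile for `T` and compare the resulting log reduction to the target.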
NASA Astrophysics Data System (ADS)
Cheng, Jun; Gong, Yadong; Wang, Jinsheng
2013-11-01
Current research on micro-grinding mainly focuses on the optimal processing technology for different materials. However, the material removal mechanism in micro-grinding is the basis for achieving a high-quality processed surface. Therefore, a novel method for predicting surface roughness in micro-grinding of hard brittle materials, considering the grain protrusion topography of the micro-grinding tool, is proposed in this paper. The differences in material removal mechanism between the conventional grinding process and the micro-grinding process are analyzed. Topography characterization has been performed on micro-grinding tools fabricated by electroplating. Models of grain density generation and grain interval are built, and a new predictive model of micro-grinding surface roughness is developed. In order to verify the precision and applicability of the proposed surface roughness prediction model, an orthogonal micro-grinding experiment on soda-lime glass is designed and conducted. A series of micro-machined surfaces on the brittle material, with roughness ranging from 78 nm to 0.98 μm, is achieved. The experimental roughness results agree closely with the predicted roughness data, and the component variable describing size effects in the predictive model is calculated to be 1.5×10⁷ by an inverse method based on the experimental results. The proposed model builds a distribution set to account for grain densities at different protrusion heights. Finally, the micro-grinding tools used in the experiment have been characterized based on this distribution set. The significant agreement between surface predictions from the proposed model and measurements from the experiments demonstrates the effectiveness of the model. This paper thus provides theoretical and experimental reference for the material removal mechanism in micro-grinding of soda-lime glass.
Statistical and engineering methods for model enhancement
NASA Astrophysics Data System (ADS)
Chang, Chia-Jung
Models which describe the performance of a physical process are essential for quality prediction, experimental planning, process control, and optimization. Engineering models developed from the underlying physics/mechanics of the process, such as analytic models or finite element models, are widely used to capture the deterministic trend of the process. However, there usually exists stochastic randomness in the system, which may introduce discrepancies between physics-based model predictions and observations in reality. Alternatively, statistical models can be used to obtain predictions purely based on the data generated from the process. However, such models tend to perform poorly when predictions are made away from the observed data points. This dissertation contributes to model enhancement research by integrating physics-based and statistical models to mitigate their individual drawbacks and provide models with better accuracy by combining the strengths of both. The proposed model enhancement methodologies comprise two streams: (1) data-driven enhancement and (2) engineering-driven enhancement. Through these efforts, more adequate models are obtained, which leads to better performance in system forecasting, process monitoring, and decision optimization. Among data-driven enhancement approaches, the Gaussian Process (GP) model provides a powerful methodology for calibrating a physical model in the presence of model uncertainties. However, if the data contain systematic experimental errors, the GP model can lead to an unnecessarily complex adjustment of the physical model. In Chapter 2, we propose a novel enhancement procedure, named “Minimal Adjustment”, which brings the physical model closer to the data by making minimal changes to it. This is achieved by approximating the GP model by a linear regression model and then applying simultaneous variable selection of the model and experimental bias terms. Two real examples and simulations are presented to demonstrate the advantages of the proposed approach. Different from enhancing the model from a data-driven perspective, an alternative approach is to adjust the model by incorporating additional domain or engineering knowledge when available. This often leads to models that are very simple and easy to interpret. The concepts of engineering-driven enhancement are carried out through two applications to demonstrate the proposed methodologies. In the first application, which focuses on polymer composite quality, nanoparticle dispersion has been identified as a crucial factor affecting the mechanical properties. Transmission Electron Microscopy (TEM) images are commonly used to represent nanoparticle dispersion without further quantification of its characteristics. In Chapter 3, we develop an engineering-driven nonhomogeneous Poisson random field modeling strategy to characterize the nanoparticle dispersion status of nanocomposite polymer, which quantitatively represents the nanomaterial quality presented through image data. The model parameters are estimated through the Bayesian MCMC technique to overcome the challenge of the limited amount of accessible data due to time-consuming sampling schemes. The second application statistically calibrates the engineering-driven force models of the laser-assisted micro milling (LAMM) process, which facilitates a systematic understanding and optimization of the targeted process.
In Chapter 4, the force prediction interval is derived by incorporating the variability in the runout parameters as well as the variability in the measured cutting forces. The experimental results indicate that the model predicts the cutting force profile with good accuracy within a 95% confidence interval. To conclude, this dissertation draws attention to model enhancement, which has considerable impact on the modeling, design and optimization of various processes and systems. The fundamental methodologies of model enhancement are developed and applied to various applications. These research activities produce engineering-compliant models for adequate system prediction from observational data with complex variable relationships and uncertainty, facilitating process planning, monitoring and real-time control.
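To make the "Minimal Adjustment" idea concrete, here is a minimal sketch under invented data: a physics model's discrepancy is approximated by a small linear basis, and an L1 penalty performs the variable selection so that only the bias terms the data demand survive. All names and numbers are illustrative; this is not the dissertation's implementation.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
x = rng.uniform(0, 1, (60, 1))

def physics(x):
    # Stand-in physics-based model (illustrative only)
    return 2.0 * np.sin(3 * x[:, 0])

# Observations carry a systematic linear bias plus noise
y = physics(x) + 0.5 * x[:, 0] + rng.normal(0, 0.05, 60)

# Linear-regression surrogate for the GP discrepancy; the L1 penalty keeps the
# adjustment minimal by zeroing bias terms the data do not support
basis = np.column_stack([x[:, 0], x[:, 0] ** 2])
adj = Lasso(alpha=0.01).fit(basis, y - physics(x))
print("selected bias terms:", adj.intercept_, adj.coef_)

y_enhanced = physics(x) + adj.predict(basis)  # physics model plus minimal adjustment
```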
Obtaining Accurate Probabilities Using Classifier Calibration
ERIC Educational Resources Information Center
Pakdaman Naeini, Mahdi
2016-01-01
Learning probabilistic classification and prediction models that generate accurate probabilities is essential in many prediction and decision-making tasks in machine learning and data mining. One way to achieve this goal is to post-process the output of classification models to obtain more accurate probabilities. These post-processing methods are…
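As a concrete illustration of such post-processing, the sketch below compares two standard calibrators, Platt scaling (sigmoid) and isotonic regression, on a synthetic task; the dataset and model choices are placeholders, not those of the cited work.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.calibration import CalibratedClassifierCV
from sklearn.model_selection import train_test_split
from sklearn.metrics import brier_score_loss

X, y = make_classification(n_samples=2000, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

raw = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)
platt = CalibratedClassifierCV(RandomForestClassifier(random_state=0),
                               method="sigmoid", cv=3).fit(X_tr, y_tr)
iso = CalibratedClassifierCV(RandomForestClassifier(random_state=0),
                             method="isotonic", cv=3).fit(X_tr, y_tr)

# Lower Brier score indicates better-calibrated probabilities
for name, model in [("raw", raw), ("platt", platt), ("isotonic", iso)]:
    p = model.predict_proba(X_te)[:, 1]
    print(name, "Brier score:", round(brier_score_loss(y_te, p), 4))
```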
NASA Astrophysics Data System (ADS)
Anderson, O. Roger
The rate of information processing during science learning and the efficiency of the learner in mobilizing relevant information in long-term memory as an aid in transmitting newly acquired information to stable storage in long-term memory are fundamental aspects of science content acquisition. These cognitive processes, moreover, may be substantially related in tempo and quality of organization to the efficiency of higher thought processes such as divergent thinking and problem-solving ability that characterize scientific thought. As a contribution to our quantitative understanding of these fundamental information processes, a mathematical model of information acquisition is presented and empirically evaluated in comparison to evidence obtained from experimental studies of science content acquisition. Computer-based models are used to simulate variations in learning parameters and to generate the theoretical predictions to be empirically tested. The initial tests of the predictive accuracy of the model show close agreement between predicted and actual mean recall scores in short-term learning tasks. Implications of the model for human information acquisition and possible future research are discussed in the context of the unique theoretical framework of the model.
Application of a Model for Simulating the Vacuum Arc Remelting Process in Titanium Alloys
NASA Astrophysics Data System (ADS)
Patel, Ashish; Tripp, David W.; Fiore, Daniel
Mathematical modeling is routinely used in the process development and production of advanced aerospace alloys to gain greater insight into system dynamics and to predict the effect of process modifications or upsets on final properties. This article describes the application of a 2-D mathematical VAR model presented in previous LMPC meetings. The impact of process parameters on melt pool geometry, solidification behavior, fluid-flow and chemistry in Ti-6Al-4V ingots will be discussed. Model predictions were first validated against the measured characteristics of industrially produced ingots, and process inputs and model formulation were adjusted to match macro-etched pool shapes. The results are compared to published data in the literature. Finally, the model is used to examine ingot chemistry during successive VAR melts.
Prediction of porosity of food materials during drying: Current challenges and directions.
Joardder, Mohammad U H; Kumar, C; Karim, M A
2017-07-18
Pore formation in food samples is a common physical phenomenon observed during dehydration processes. The pore evolution during drying significantly affects the physical properties and quality of dried foods. Therefore, it should be taken into consideration when predicting transport processes in the drying sample. Characteristics of pore formation depend on the drying process parameters, product properties and processing time. Understanding the physics of pore formation and evolution during drying will assist in accurately predicting the drying kinetics and quality of food materials. Researchers have been trying to develop mathematical models to describe the pore formation and evolution during drying. In this study, existing porosity models are critically analysed and limitations are identified. Better insight into the factors affecting porosity is provided, and suggestions are proposed to overcome the limitations. These include considerations of process parameters such as glass transition temperature, sample temperature, and variable material properties in the porosity models. Several researchers have proposed models for porosity prediction of food materials during drying. However, these models are either very simplistic or empirical in nature and failed to consider relevant significant factors that influence porosity. In-depth understanding of characteristics of the pore is required for developing a generic model of porosity. A micro-level analysis of pore formation is presented for better understanding, which will help in developing an accurate and generic porosity model.
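For orientation, the sketch below fits one of the very simple empirical porosity models this review critiques, porosity rising linearly with moisture loss; the data points are invented for illustration.

```python
import numpy as np

X = np.array([5.0, 4.0, 3.0, 2.0, 1.0, 0.5])          # dry-basis moisture (kg/kg), toy
phi = np.array([0.05, 0.12, 0.21, 0.30, 0.38, 0.43])  # "measured" porosity (-), toy

X0 = X[0]
# Empirical model: phi = beta1 * (X0 - X) + beta0, fitted by least squares
beta = np.polyfit(X0 - X, phi, 1)
print("fitted coefficients (slope, intercept):", np.round(beta, 4))
print("predicted porosity at X = 1.5:", round(np.polyval(beta, X0 - 1.5), 3))
```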
A Binaural Grouping Model for Predicting Speech Intelligibility in Multitalker Environments.
Mi, Jing; Colburn, H Steven
2016-10-03
Spatially separating speech maskers from target speech often leads to a large intelligibility improvement. Modeling this phenomenon has long been of interest to binaural-hearing researchers for uncovering brain mechanisms and for improving signal-processing algorithms in hearing-assistive devices. Much of the previous binaural modeling work focused on the unmasking enabled by binaural cues at the periphery, and little quantitative modeling has been directed toward the grouping or source-separation benefits of binaural processing. In this article, we propose a binaural model that focuses on grouping, specifically on the selection of time-frequency units that are dominated by signals from the direction of the target. The proposed model uses Equalization-Cancellation (EC) processing with a binary decision rule to estimate a time-frequency binary mask. EC processing is carried out to cancel the target signal and the energy change between the EC input and output is used as a feature that reflects target dominance in each time-frequency unit. The processing in the proposed model requires little computational resources and is straightforward to implement. In combination with the Coherence-based Speech Intelligibility Index, the model is applied to predict the speech intelligibility data measured by Marrone et al. The predicted speech reception threshold matches the pattern of the measured data well, even though the predicted intelligibility improvements relative to the colocated condition are larger than some of the measured data, which may reflect the lack of internal noise in this initial version of the model. © The Author(s) 2016.
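The following is a deliberately reduced sketch of the mask-estimation idea: if the target has been equalized across the two ears, subtracting one ear signal from the other cancels it, and a large energy drop marks target-dominated time-frequency units. It compresses the EC stage to a plain subtraction and uses toy signals, so it illustrates the principle rather than the authors' implementation.

```python
import numpy as np
from scipy.signal import stft

fs = 16000
t = np.arange(fs) / fs
target = np.sin(2 * np.pi * 500 * t)                 # diotic target (toy)
rng = np.random.default_rng(0)
noise_l, noise_r = rng.normal(0, 0.3, fs), rng.normal(0, 0.3, fs)

left, right = target + noise_l, target + noise_r
_, _, L = stft(left, fs=fs, nperseg=512)
_, _, C = stft(left - right, fs=fs, nperseg=512)     # "EC output": target cancelled

# Energy drop between EC input and output flags target-dominated units
energy_drop_db = 10 * np.log10((np.abs(L) ** 2 + 1e-12) / (np.abs(C) ** 2 + 1e-12))
binary_mask = energy_drop_db > 3.0                   # keep target-dominated units
print("fraction of units retained:", round(binary_mask.mean(), 3))
```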
Sugeno-Fuzzy Expert System Modeling for Quality Prediction of Non-Contact Machining Process
NASA Astrophysics Data System (ADS)
Sivaraos; Khalim, A. Z.; Salleh, M. S.; Sivakumar, D.; Kadirgama, K.
2018-03-01
Modeling can be categorised into four main domains: prediction, optimisation, estimation and calibration. In this paper, the Takagi-Sugeno-Kang (TSK) fuzzy logic method is examined as a prediction modelling method to investigate the taper quality of laser lathing, which seeks to replace traditional lathe machines with 3D laser lathing in order to achieve the desired cylindrical shape of stock materials. Three design parameters were selected: feed rate, cutting speed and depth of cut. A total of twenty-four experiments were conducted, with eight sequential runs replicated three times. The TSK fuzzy predictive model achieved an accuracy rate of 99%, which suggests that it is a suitable and practical method for the non-linear laser lathing process.
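A zero-order TSK inference step can be written compactly; the sketch below uses two invented rules with Gaussian memberships over the three design parameters, not the rule base fitted in the paper.

```python
import numpy as np

def gauss(x, c, s):
    # Gaussian membership degree of x for a fuzzy set centred at c
    return np.exp(-0.5 * ((x - c) / s) ** 2)

# Two toy rules over (feed rate, cutting speed, depth of cut) -> taper (deg)
rules = [
    {"centres": (0.2, 0.3, 0.1), "sigma": 0.2, "consequent": 0.5},
    {"centres": (0.8, 0.7, 0.9), "sigma": 0.2, "consequent": 2.0},
]

def tsk_predict(x):
    # Firing strength = product of membership degrees; output = weighted average
    w = np.array([np.prod([gauss(xi, ci, r["sigma"])
                           for xi, ci in zip(x, r["centres"])]) for r in rules])
    return np.dot(w, [r["consequent"] for r in rules]) / w.sum()

print("predicted taper:", round(tsk_predict((0.4, 0.5, 0.3)), 3))
```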
Model of Silicon Refining During Tapping: Removal of Ca, Al, and Other Selected Element Groups
NASA Astrophysics Data System (ADS)
Olsen, Jan Erik; Kero, Ida T.; Engh, Thorvald A.; Tranell, Gabriella
2017-04-01
A mathematical model for industrial refining of silicon alloys has been developed for the so-called oxidative ladle refining process. It is a lumped (zero-dimensional) model, based on the mass balances of metal, slag, and gas in the ladle, developed to operate with relatively short computational times for the sake of industrial relevance. The model accounts for a semi-continuous process which includes both the tapping and post-tapping refining stages. It predicts the concentrations of Ca, Al, and trace elements, most notably the alkaline metals, alkaline earth metals, and rare earth metals. The predictive power of the model depends on the quality of the model coefficients, the kinetic coefficient, τ, and the equilibrium partition coefficient, L, for a given element. A sensitivity analysis indicates that the model results are most sensitive to L. The model has been compared to industrial measurement data and found to be able to qualitatively, and to some extent quantitatively, predict the data. The model is very well suited for alkaline and alkaline earth metals, which respond relatively quickly to the refining process. It is less well suited for elements such as the lanthanides and Al, which are refined more slowly. A major challenge for predicting the behavior of the rare earth metals is that reliable thermodynamic data for true equilibrium conditions relevant to the industrial process are not typically available in the literature.
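The lumped character of such a model can be illustrated in a few lines: each element relaxes toward the slag/metal equilibrium set by the partition coefficient L with kinetic time constant τ. The parameter values below are placeholders, not the published coefficients.

```python
import numpy as np
from scipy.integrate import solve_ivp

c0 = 0.02     # initial Ca in metal (wt%), illustrative
tau = 600.0   # kinetic time constant (s), illustrative
L = 50.0      # slag/metal partition coefficient, illustrative
r = 0.05      # slag-to-metal mass ratio, illustrative

# Equilibrium metal concentration from a simple element mass balance
c_eq = c0 / (1.0 + L * r)

def refining(t, c):
    # First-order relaxation of the metal concentration toward equilibrium
    return -(c - c_eq) / tau

sol = solve_ivp(refining, (0.0, 3600.0), [c0])
print("Ca after 1 h of refining (wt%):", round(float(sol.y[0, -1]), 4))
```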
Benchmarking novel approaches for modelling species range dynamics
Zurell, Damaris; Thuiller, Wilfried; Pagel, Jörn; Cabral, Juliano S; Münkemüller, Tamara; Gravel, Dominique; Dullinger, Stefan; Normand, Signe; Schiffers, Katja H.; Moore, Kara A.; Zimmermann, Niklaus E.
2016-01-01
Increasing biodiversity loss due to climate change is one of the most vital challenges of the 21st century. To anticipate and mitigate biodiversity loss, models are needed that reliably project species’ range dynamics and extinction risks. Recently, several new approaches to model range dynamics have been developed to supplement correlative species distribution models (SDMs), but applications clearly lag behind model development. Indeed, no comparative analysis has been performed to evaluate their performance. Here, we build on process-based, simulated data for benchmarking five range (dynamic) models of varying complexity including classical SDMs, SDMs coupled with simple dispersal or more complex population dynamic models (SDM hybrids), and a hierarchical Bayesian process-based dynamic range model (DRM). We specifically test the effects of demographic and community processes on model predictive performance. Under current climate, DRMs performed best, although only marginally. Under climate change, predictive performance varied considerably, with no clear winners. Yet, all range dynamic models improved predictions under climate change substantially compared to purely correlative SDMs, and the population dynamic models also predicted reasonable extinction risks for most scenarios. When benchmarking data were simulated with more complex demographic and community processes, simple SDM hybrids including only dispersal often proved most reliable. Finally, we found that structural decisions during model building can have great impact on model accuracy, but prior system knowledge on important processes can reduce these uncertainties considerably. Our results reassure the clear merit in using dynamic approaches for modelling species’ response to climate change but also emphasise several needs for further model and data improvement. We propose and discuss perspectives for improving range projections through combination of multiple models and for making these approaches operational for large numbers of species. PMID:26872305
Federal Register 2010, 2011, 2012, 2013, 2014
2010-12-07
... Wind Erosion Prediction System for Soil Erodibility System Calculations for the Natural Resources... Erosion Prediction System (WEPS) for soil erodibility system calculations scheduled for implementation for... computer model is a process-based, daily time-step computer model that predicts soil erosion via simulation...
A cluster expansion model for predicting activation barrier of atomic processes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rehman, Tafizur; Jaipal, M.; Chatterjee, Abhijit, E-mail: achatter@iitk.ac.in
2013-06-15
We introduce a procedure based on cluster expansion models for predicting the activation barrier of atomic processes encountered while studying the dynamics of a material system using the kinetic Monte Carlo (KMC) method. Starting with an interatomic potential description, a mathematical derivation is presented to show that the local environment dependence of the activation barrier can be captured using cluster interaction models. Next, we develop a systematic procedure for training the cluster interaction model on-the-fly, which involves: (i) obtaining activation barriers for a handful of local environments using nudged elastic band (NEB) calculations, (ii) identifying the local environment by analyzing the NEB results, and (iii) estimating the cluster interaction model parameters from the activation barrier data. Once a cluster expansion model has been trained, it is used to predict activation barriers without requiring any additional NEB calculations. Numerical studies are performed to validate the cluster expansion model by studying hop processes in Ag/Ag(100). We show that the use of the cluster expansion model with KMC enables efficient generation of an accurate process rate catalog.
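In spirit, training such a cluster expansion reduces to a linear regression of NEB-computed barriers on occupancy counts of the surrounding clusters. The toy sketch below uses synthetic occupancies and an invented interaction vector rather than real NEB data.

```python
import numpy as np

rng = np.random.default_rng(1)
# Each row: counts of occupied 1st- and 2nd-neighbour sites around the hopping atom
clusters = rng.integers(0, 5, size=(40, 2)).astype(float)
true_params = np.array([0.45, 0.08, 0.03])   # E0, J1, J2 in eV (invented)
barriers = true_params[0] + clusters @ true_params[1:] + rng.normal(0, 0.01, 40)

# Least-squares fit of the cluster interaction parameters to the "NEB" barriers
A = np.column_stack([np.ones(len(clusters)), clusters])
coef, *_ = np.linalg.lstsq(A, barriers, rcond=None)
print("fitted cluster interactions (eV):", np.round(coef, 3))

# Predict the barrier of a new local environment without another NEB run
new_env = np.array([1.0, 3.0, 1.0])          # [1, n1, n2]
print("predicted barrier (eV):", round(float(new_env @ coef), 3))
```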
Adaptation of clinical prediction models for application in local settings.
Kappen, Teus H; Vergouwe, Yvonne; van Klei, Wilton A; van Wolfswinkel, Leo; Kalkman, Cor J; Moons, Karel G M
2012-01-01
When planning to use a validated prediction model in new patients, adequate performance is not guaranteed. For example, changes in clinical practice over time or a different case mix than the original validation population may result in inaccurate risk predictions. The aim was to demonstrate how clinical information can direct the updating of a prediction model and the development of a strategy for handling missing predictor values in clinical practice. A previously derived and validated prediction model for postoperative nausea and vomiting was updated using a data set of 1847 patients. The update consisted of 1) changing the definition of an existing predictor, 2) reestimating the regression coefficient of a predictor, and 3) adding a new predictor to the model. The updated model was then validated in a new series of 3822 patients. Furthermore, several imputation models were considered to handle real-time missing values, so that possible missing predictor values could be anticipated during actual model use. Differences in clinical practice between our local population and the original derivation population guided the update strategy of the prediction model. The predictive accuracy of the updated model was better (c statistic, 0.68; calibration slope, 1.0) than that of the original model (c statistic, 0.62; calibration slope, 0.57). Inclusion of logistical variables in the imputation models, besides observed patient characteristics, contributed to a strategy for dealing with missing predictor values at the time of risk calculation. Extensive knowledge of local clinical processes provides crucial information to guide the process of adapting a prediction model to new clinical practices.
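The second update step, re-estimating a single coefficient while keeping the rest of the published model fixed, can be expressed with an offset term in a logistic regression; the sketch below uses synthetic data and invented coefficient values, not the study's model.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 1847
x1, x2, x_new = rng.normal(size=(3, n))
true_lp = -1.2 + 0.8 * x1 + 0.9 * x2 + 0.4 * x_new
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-true_lp)))

# Keep x1's "published" coefficient fixed via an offset; re-estimate the
# intercept and x2's coefficient, and estimate the newly added predictor x_new
X = sm.add_constant(np.column_stack([x2, x_new]))
offset = 0.8 * x1
updated = sm.GLM(y, X, family=sm.families.Binomial(), offset=offset).fit()
print(np.round(updated.params, 3))   # intercept, x2, x_new
```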
The Climate Variability & Predictability (CVP) Program at NOAA - Recent Program Advancements
NASA Astrophysics Data System (ADS)
Lucas, S. E.; Todd, J. F.
2015-12-01
The Climate Variability & Predictability (CVP) Program supports research aimed at providing process-level understanding of the climate system through observation, modeling, analysis, and field studies. This vital knowledge is needed to improve climate models and predictions so that scientists can better anticipate the impacts of future climate variability and change. To achieve its mission, the CVP Program supports research carried out at NOAA and other federal laboratories, NOAA Cooperative Institutes, and academic institutions. The Program also coordinates its sponsored projects with major national and international scientific bodies including the World Climate Research Programme (WCRP), the International and U.S. Climate Variability and Predictability (CLIVAR/US CLIVAR) Program, and the U.S. Global Change Research Program (USGCRP). The CVP program sits within NOAA's Climate Program Office (http://cpo.noaa.gov/CVP). The CVP Program currently supports multiple projects aimed at improved representation of physical processes in global models. Currently funded topics include: i) Improved Understanding of Intraseasonal Tropical Variability - the DYNAMO field campaign and post-field projects, and the new climate model improvement teams focused on MJO processes; ii) Climate Process Teams (CPTs, co-funded with NSF), with projects focused on cloud macrophysical parameterization and its application to aerosol indirect effects, and internal-wave driven mixing in global ocean models; iii) Improved Understanding of Tropical Pacific Processes, Biases, and Climatology; iv) Understanding Arctic Sea Ice Mechanisms and Predictability; v) AMOC Mechanisms and Decadal Predictability. Recent results from CVP-funded projects will be summarized. Additional information can be found at http://cpo.noaa.gov/CVP.
Prediction of Indian Summer-Monsoon Onset Variability: A Season in Advance.
Pradhan, Maheswar; Rao, A Suryachandra; Srivastava, Ankur; Dakate, Ashish; Salunke, Kiran; Shameera, K S
2017-10-27
Monsoon onset is an inherent transient phenomenon of the Indian Summer Monsoon, and it was never envisaged that this transience could be predicted at long lead times. Though onset is precipitous, its variability exhibits strong teleconnections with large-scale forcing such as ENSO and IOD and hence may be predictable. Despite the tremendous skill achieved by state-of-the-art models in predicting such large-scale processes, model prediction of monsoon onset variability is still limited to just 2-3 weeks in advance. Using an objective definition of onset in a global coupled ocean-atmosphere model, it is shown that skillful prediction of onset variability is feasible in a seasonal prediction framework. The improved representation/simulation of not only the large-scale processes but also the synoptic and intraseasonal features during the evolution of monsoon onset underlies the skillful simulation of monsoon onset variability. The changes observed in convection, tropospheric circulation and moisture availability prior to and after the onset are evident in the model simulations, which results in a high hit rate for early/delayed monsoon onset in the high resolution model.
Clinical time series prediction: Toward a hierarchical dynamical system framework.
Liu, Zitao; Hauskrecht, Milos
2015-09-01
Developing machine learning and data mining algorithms for building temporal models of clinical time series is important for understanding the patient condition, the dynamics of a disease, the effects of various patient management interventions, and clinical decision making. In this work, we propose and develop a novel hierarchical framework for modeling clinical time series data of varied length and with irregularly sampled observations. Our hierarchical dynamical system framework for modeling clinical time series combines the advantages of two temporal modeling approaches: the linear dynamical system and the Gaussian process. We model the irregularly sampled clinical time series by using multiple Gaussian process sequences in the lower level of our hierarchical framework and capture the transitions between Gaussian processes by utilizing the linear dynamical system. The experiments are conducted on the complete blood count (CBC) panel data of 1000 post-surgical cardiac patients during their hospitalization. Our framework is evaluated and compared to multiple baseline approaches in terms of the mean absolute prediction error and the absolute percentage error. We tested our framework by first learning the time series model from data for the patients in the training set, and then using it to predict future time series values for the patients in the test set. We show that our model outperforms multiple existing models in terms of its predictive accuracy. Our method achieved a 3.13% average prediction accuracy improvement on ten CBC lab time series when compared against the best performing baseline. A 5.25% average accuracy improvement was observed when only short-term predictions were considered. A new hierarchical dynamical system framework that lets us model irregularly sampled time series data is a promising new direction for modeling clinical time series and for improving their predictive performance. Copyright © 2014 Elsevier B.V. All rights reserved.
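The lower level of such a framework, a Gaussian process over an irregularly sampled lab series, is easy to sketch; the hierarchical linear-dynamical-system layer on top is omitted here, and the timestamps and values are synthetic.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(3)
t = np.sort(rng.uniform(0, 10, 15))[:, None]        # irregular sampling times (days)
y = 12 + np.sin(t[:, 0]) + rng.normal(0, 0.1, 15)   # e.g. a CBC lab value (toy)

# GP with an RBF kernel plus a white-noise term for measurement error
gp = GaussianProcessRegressor(RBF(1.0) + WhiteKernel(0.01)).fit(t, y)

t_query = np.linspace(0, 12, 5)[:, None]            # includes a short extrapolation
mean, std = gp.predict(t_query, return_std=True)
print(np.round(mean, 2), np.round(std, 2))          # uncertainty grows past the data
```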
NASA Astrophysics Data System (ADS)
Tiebin, Wu; Yunlian, Liu; Xinjun, Li; Yi, Yu; Bin, Zhang
2018-06-01
Aiming at the difficulty of quality prediction for sintered ores, a hybrid prediction model is established based on mechanism models of sintering and time-weighted error compensation using the extreme learning machine (ELM). First, mechanism models of drum index, total iron, and alkalinity are constructed according to the chemical reaction mechanisms and conservation of matter in the sintering process. As the process is simplified in the mechanism models, these models cannot describe the high nonlinearity, and errors are therefore inevitable. For this reason, a time-weighted ELM-based error compensation model is established. Simulation results verify that the hybrid model has high accuracy and can meet the requirements of industrial applications.
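A compact sketch of the hybrid idea follows: a simplified mechanism model leaves residuals, an extreme learning machine (random hidden layer, closed-form output weights) fits them, and exponential time weights let recent samples dominate. All signals and coefficients are synthetic stand-ins for the sintering variables.

```python
import numpy as np

rng = np.random.default_rng(4)
X = rng.uniform(-1, 1, (200, 3))                 # process inputs (toy)
mech = X @ np.array([1.0, -0.5, 0.3])            # simplified mechanism model
y = mech + 0.4 * np.sin(3 * X[:, 0]) + rng.normal(0, 0.02, 200)  # "measured" index

# ELM: random hidden layer, output weights solved in closed form on the residuals
n_hidden = 50
W, b = rng.normal(size=(3, n_hidden)), rng.normal(size=n_hidden)
H = np.tanh(X @ W + b)

# Exponential time weights so recent samples dominate the compensation model
w = np.exp(-0.01 * (len(X) - 1 - np.arange(len(X))))
sw = np.sqrt(w)[:, None]
beta, *_ = np.linalg.lstsq(H * sw, (y - mech) * sw[:, 0], rcond=None)

y_hybrid = mech + H @ beta                       # mechanism model + ELM compensation
print("residual RMSE:", round(float(np.sqrt(np.mean((y - y_hybrid) ** 2))), 4))
```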
Lv, Shao-Wa; Liu, Dong; Hu, Pan-Pan; Ye, Xu-Yan; Xiao, Hong-Bin; Kuang, Hai-Xue
2010-03-01
To optimize the process of extracting effective constituents from Aralia elata by response surface methodology. The independent variables were ethanol concentration, reflux time and solvent fold; the dependent variable was the extraction rate of total saponins from Aralia elata. Linear or non-linear mathematical models were used to estimate the relationship between the independent and dependent variables, and response surface methodology was used to optimize the extraction process. The prediction was evaluated by comparing observed and predicted values. The regression coefficient of the fitted binomial model was as high as 0.9617, and the optimum extraction conditions were 70% ethanol, 2.5 hours of reflux, 20-fold solvent and 3 extractions. The bias between observed and predicted values was -2.41%, showing that the optimum model is highly predictive.
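The response-surface step amounts to fitting a quadratic model and locating its constrained maximum, as in the hypothetical two-factor sketch below (ethanol concentration and reflux time only); the design points are invented, not the paper's data.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(5)
ethanol = rng.uniform(50, 90, 20)    # ethanol concentration (%)
reflux = rng.uniform(1, 4, 20)       # reflux time (h)
# Toy response with a maximum near 70% ethanol and 2.5 h reflux
yield_ = (30 - 0.01 * (ethanol - 70) ** 2 - 0.8 * (reflux - 2.5) ** 2
          + rng.normal(0, 0.2, 20))

def design(e, r):
    # Full quadratic design matrix: 1, e, r, e*r, e^2, r^2
    return np.column_stack([np.ones_like(e), e, r, e * r, e ** 2, r ** 2])

coef, *_ = np.linalg.lstsq(design(ethanol, reflux), yield_, rcond=None)

def rsm(e, r):
    # Predicted yield from the fitted quadratic response surface
    return float(design(np.array([e]), np.array([r])) @ coef)

res = minimize(lambda p: -rsm(p[0], p[1]), x0=[70.0, 2.5],
               bounds=[(50, 90), (1, 4)])
print("optimum (ethanol %, reflux h):", np.round(res.x, 2))
```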
Multi input single output model predictive control of non-linear bio-polymerization process
DOE Office of Scientific and Technical Information (OSTI.GOV)
Arumugasamy, Senthil Kumar; Ahmad, Z.
This paper focuses on Multi Input Single Output (MISO) Model Predictive Control of a bio-polymerization process, in which a mechanistic model is developed and linked with a feedforward neural network model to obtain a hybrid model (Mechanistic-FANN) of the lipase-catalyzed ring-opening polymerization of ε-caprolactone (ε-CL) for Poly(ε-caprolactone) production. In this research, a state space model was used, in which the inputs to the model were the reactor temperatures and reactor impeller speeds and the outputs were the molecular weight of the polymer (M_n) and the polymer polydispersity index. The state space model for MISO was created using the System Identification Toolbox of Matlab™ and is used in the MISO MPC. Model predictive control (MPC) has been applied to predict, and consequently control, the molecular weight of the biopolymer. The results show that the MPC is able to track the reference trajectory and gives optimum movement of the manipulated variable.
WEPP Model applications for evaluations of best management practices
D. C. Flanagan; W. J. Elliott; J. R. Frankenberger; C. Huang
2010-01-01
The Water Erosion Prediction Project (WEPP) model is a process-based erosion prediction technology for application to small watersheds and hillslope profiles, under agricultural, forested, rangeland, and other land management conditions. Developed by the United States Department of Agriculture (USDA) over the past 25 years, WEPP simulates many of the physical processes...
Explicit simulation of ice particle habits in a Numerical Weather Prediction Model
NASA Astrophysics Data System (ADS)
Hashino, Tempei
2007-05-01
This study developed a scheme for explicit simulation of ice particle habits in Numerical Weather Prediction (NWP) models. The scheme is called the Spectral Ice Habit Prediction System (SHIPS), and the goal is to retain the growth history of ice particles in the Eulerian dynamics framework. It diagnoses characteristics of ice particles based on a series of particle property variables (PPVs) that reflect the history of microphysical processes and the transport between mass bins and air parcels in space. Therefore, the categorization of ice particles typically used in bulk microphysical parameterizations and traditional bin models is not necessary, so that errors that stem from the categorization can be avoided. SHIPS predicts polycrystals as well as hexagonal monocrystals based on empirically derived habit frequency and growth rates, and simulates the habit-dependent aggregation and riming processes by use of the stochastic collection equation with predicted PPVs. Idealized two-dimensional simulations were performed with SHIPS in an NWP model. The predicted spatial distribution of ice particle habits and types, and the evolution of particle size distributions, showed good quantitative agreement with observations. This comprehensive model of ice particle properties, distributions, and evolution in clouds can be used to better understand problems facing a wide range of research disciplines, including microphysical processes, radiative transfer in a cloudy atmosphere, data assimilation, and weather modification.
NASA Astrophysics Data System (ADS)
Lohmar, Johannes; Bambach, Markus; Karhausen, Kai F.
2013-01-01
Integrated computational materials engineering is a state-of-the-art method for developing new materials and optimizing complete process chains. In the simulation of a process chain, material models play a central role, as they capture the response of the material to external process conditions. While much effort is put into their development and improvement, less attention is paid to their implementation, which is problematic because the representation of microstructure in the model has a decisive influence on modeling accuracy and calculation speed. The aim of this article is to analyze the influence of different microstructure representation concepts on the prediction of flow stress and microstructure evolution when using the same set of material equations. Scalar, tree-based and cluster-based concepts are compared for a multi-stage rolling process of an AA5182 alloy. It was found that the implementation influences the predicted flow stress and grain size, in particular in the regime of coupled hardening and softening.
A Novel Modelling Approach for Predicting Forest Growth and Yield under Climate Change.
Ashraf, M Irfan; Meng, Fan-Rui; Bourque, Charles P-A; MacLean, David A
2015-01-01
Global climate is changing due to increasing anthropogenic emissions of greenhouse gases. Forest managers need growth and yield models that can be used to predict future forest dynamics during the transition period of present-day forests under a changing climatic regime. In this study, we developed a forest growth and yield model that can be used to predict individual-tree growth under current and projected future climatic conditions. The model was constructed by integrating historical tree growth records with predictions from an ecological process-based model using neural networks. The new model predicts basal area (BA) and volume growth for individual trees in pure or mixed species forests. For model development, tree-growth data under current climatic conditions were obtained using over 3000 permanent sample plots from the Province of Nova Scotia, Canada. Data to reflect tree growth under a changing climatic regime were projected with JABOWA-3 (an ecological process-based model). Model validation with designated data produced model efficiencies of 0.82 and 0.89 in predicting individual-tree BA and volume growth. Model efficiency is a relative index of model performance, where 1 indicates an ideal fit, while values lower than zero mean the predictions are no better than the average of the observations. Overall mean prediction error (BIAS) of basal area and volume growth predictions was nominal (i.e., for BA: -0.0177 cm² 5-year⁻¹ and volume: 0.0008 m³ 5-year⁻¹). Model variability described by root mean squared error (RMSE) was 40.53 cm² 5-year⁻¹ in basal area prediction and 0.0393 m³ 5-year⁻¹ in volume prediction. The new modelling approach has potential to reduce uncertainties in growth and yield predictions under different climate change scenarios. This novel approach provides an avenue for forest managers to generate required information for the management of forests in transitional periods of climate change. Artificial intelligence technology has substantial potential in forest modelling.
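The validation statistics quoted above are simple to compute; the snippet below writes out model efficiency, BIAS and RMSE explicitly, using placeholder arrays. Note that the sign convention for BIAS varies between studies.

```python
import numpy as np

def model_efficiency(obs, pred):
    """1 = ideal fit; values <= 0 mean no better than the mean of the observations."""
    return 1 - np.sum((obs - pred) ** 2) / np.sum((obs - np.mean(obs)) ** 2)

obs = np.array([10.2, 12.5, 9.8, 15.1, 11.0])   # placeholder observations
pred = np.array([10.0, 12.9, 10.1, 14.6, 11.4]) # placeholder predictions

print("model efficiency:", round(model_efficiency(obs, pred), 3))
print("BIAS:", round(np.mean(pred - obs), 4))   # mean prediction error
print("RMSE:", round(float(np.sqrt(np.mean((pred - obs) ** 2))), 4))
```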
NASA Technical Reports Server (NTRS)
Hoppa, Mary Ann; Wilson, Larry W.
1994-01-01
There are many software reliability models which try to predict future performance of software based on data generated by the debugging process. Our research has shown that by improving the quality of the data one can greatly improve the predictions. We are working on methodologies which control some of the randomness inherent in the standard data generation processes in order to improve the accuracy of predictions. Our contribution is twofold: we describe an experimental methodology using a data structure called the debugging graph, and we apply this methodology to assess the robustness of existing models. The debugging graph is used to analyze the effects of various fault recovery orders on the predictive accuracy of several well-known software reliability algorithms. We found that, along a particular debugging path in the graph, the predictive performance of different models can vary greatly. Similarly, just because a model 'fits' a given path's data well does not guarantee that the model would perform well on a different path. Furthermore, we observed bug interactions and noted their potential effects on the predictive process. We saw that not only do different faults fail at different rates, but that those rates can be affected by the particular debugging stage at which they are evaluated. Based on our experiment, we conjecture that the accuracy of a reliability prediction is affected by the fault recovery order as well as by fault interaction.
Intelligent sensor-model automated control of PMR-15 autoclave processing
NASA Technical Reports Server (NTRS)
Hart, S.; Kranbuehl, D.; Loos, A.; Hinds, B.; Koury, J.
1992-01-01
An intelligent sensor model system has been built and used for automated control of the PMR-15 cure process in the autoclave. The system uses frequency-dependent FM sensing (FDEMS), the Loos processing model, and the Air Force QPAL intelligent software shell. The Loos model is used to predict and optimize the cure process including the time-temperature dependence of the extent of reaction, flow, and part consolidation. The FDEMS sensing system in turn monitors, in situ, the removal of solvent, changes in the viscosity, reaction advancement and cure completion in the mold continuously throughout the processing cycle. The sensor information is compared with the optimum processing conditions from the model. The QPAL composite cure control system allows comparison of the sensor monitoring with the model predictions to be broken down into a series of discrete steps and provides a language for making decisions on what to do next regarding time-temperature and pressure.
Tomperi, Jani; Leiviskä, Kauko
2018-06-01
Traditionally, modelling of an activated sludge process has been based solely on the process measurements, but as interest in optically monitoring wastewater samples to characterize floc morphology has grown, the results of image analyses have in recent years been utilized more frequently to predict the characteristics of wastewater. This study shows that neither the traditional process measurements nor the automated optical monitoring variables by themselves yield the best predictive models for treated wastewater quality in a full-scale wastewater treatment plant; the optimal models, which capture the level of and changes in the treated wastewater quality, are achieved by utilizing these variables together. With this early warning, process operation can be optimized to avoid environmental damage and economic losses. The study also shows that specific optical monitoring variables are important in modelling a certain quality parameter, regardless of the other input variables available.
NASA Astrophysics Data System (ADS)
Wahid, A.; Putra, I. G. E. P.
2018-03-01
Dimethyl ether (DME) as an alternative clean energy has attracted growing attention in recent years. DME production via reactive distillation has potential for capital cost and energy requirement savings. However, the combination of reaction and distillation in a single column makes the reactive distillation process a very complex multivariable system with highly non-linear process behavior and strong interaction between process variables. This study investigates multivariable model predictive control (MPC) based on a two-point temperature control strategy for the DME reactive distillation column to maintain the purities of both product streams. The process model is estimated as a first order plus dead time model. The DME and water purities are maintained by controlling a stage temperature in the rectifying and stripping sections, respectively. The results show that the model predictive controller produced faster responses than a conventional PI controller, as shown by the smaller ISE values. In addition, the MPC controller handles the loop interactions well.
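Identifying the first order plus dead time (FOPDT) model that such an MPC relies on is typically done from step-response data, as in the sketch below; the "measured" response is simulated, not plant data.

```python
import numpy as np
from scipy.optimize import curve_fit

def fopdt_step(t, K, tau, theta):
    # Response of a first order plus dead time model to a unit step at t = 0
    return K * (1.0 - np.exp(-np.maximum(t - theta, 0.0) / tau))

t = np.linspace(0, 50, 200)
y_meas = fopdt_step(t, 2.0, 8.0, 3.0) + np.random.default_rng(6).normal(0, 0.02, t.size)

# Fit gain K, time constant tau and dead time theta to the noisy response
(K, tau, theta), _ = curve_fit(fopdt_step, t, y_meas, p0=[1.0, 5.0, 1.0])
print(f"K={K:.2f}, tau={tau:.2f}, theta={theta:.2f}")
```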
Albaek, Mads O; Gernaey, Krist V; Hansen, Morten S; Stocks, Stuart M
2011-08-01
The purpose of this article is to demonstrate how a model can be constructed such that the progress of a submerged fed-batch fermentation of a filamentous fungus can be predicted with acceptable accuracy. The studied process was enzyme production with Aspergillus oryzae in 550 L pilot plant stirred tank reactors. Different conditions of agitation and aeration were employed, as well as two different impeller geometries. The limiting factor for productivity was oxygen supply to the fermentation broth, and the carbon substrate feed flow rate was controlled by the dissolved oxygen tension. To predict the available oxygen transfer in the system, the stoichiometry of the reaction equation, including maintenance substrate consumption, was first determined. A viscosity prediction model was then constructed, based mainly on the biomass concentration, because the rising viscosity of the fermentation broth due to hyphal growth of the fungus leads to significantly lower mass transfer towards the end of the fermentation process. Each compartment of the model was shown to predict the experimental results well. The overall model can be used to predict key process parameters at varying fermentation conditions. Copyright © 2011 Wiley Periodicals, Inc.
NASA Astrophysics Data System (ADS)
Grujicic, M.; Arakere, A.; Ramaswami, S.; Snipes, J. S.; Yavari, R.; Yen, C.-F.; Cheeseman, B. A.; Montgomery, J. S.
2013-06-01
A conventional gas metal arc welding (GMAW) butt-joining process has been modeled using a two-way fully coupled, transient, thermal-mechanical finite-element procedure. To achieve two-way thermal-mechanical coupling, the work of plastic deformation resulting from potentially high thermal stresses is allowed to be dissipated in the form of heat, and the mechanical material model of the workpiece and the weld is made temperature dependent. Heat losses from the deposited filler-metal are accounted for by considering conduction to the adjoining workpieces as well as natural convection and radiation to the surroundings. The newly constructed GMAW process model is then applied, in conjunction with the basic material physical-metallurgy, to a prototypical high-hardness armor martensitic steel (MIL A46100). The main outcome of this procedure is the prediction of the spatial distribution of various crystalline phases within the weld and the heat-affected zone regions, as a function of the GMAW process parameters. The newly developed GMAW process model is validated by comparing its predictions with available open-literature experimental and computational data.
Visual anticipation biases conscious decision making but not bottom-up visual processing
Mathews, Zenon; Cetnarski, Ryszard; Verschure, Paul F. M. J.
2015-01-01
Prediction plays a key role in the control of attention, but it is not clear which aspects of prediction are most prominent in conscious experience. An evolving view of the brain is that it can be seen as a prediction machine that optimizes its ability to predict states of the world and the self through the top-down propagation of predictions and the bottom-up presentation of prediction errors. There are competing views, though, on whether predictions or prediction errors dominate the formation of conscious experience. Yet the dynamic effects of prediction on perception, decision making and consciousness have been difficult to assess and to model. We propose a novel mathematical framework and a psychophysical paradigm that allow us to assess the hierarchical structuring of perceptual consciousness, its content, and the impact of predictions and/or errors on conscious experience, attention and decision-making. Using a displacement detection task combined with reverse correlation, we reveal signatures of the use of prediction at three different levels of perceptual processing: bottom-up fast saccades, top-down driven slow saccades and conscious decisions. Our results suggest that the brain employs multiple parallel mechanisms at different levels of perceptual processing in order to shape effective sensory consciousness within a predicted perceptual scene. We further observe that bottom-up sensory and top-down predictive processes can be dissociated through cognitive load. We propose a probabilistic data association model from dynamical systems theory to model the predictive multi-scale bias in perceptual processing that we observe and its role in the formation of conscious experience. We propose that these results support the hypothesis that consciousness provides a time-delayed description of a task that is used to prospectively optimize real-time control structures, rather than being engaged in the real-time control of behavior itself. PMID:25741290
Prediction in processing is a by-product of language learning.
Chang, Franklin; Kidd, Evan; Rowland, Caroline F
2013-08-01
Both children and adults predict the content of upcoming language, suggesting that prediction is useful for learning as well as processing. We present an alternative model which can explain prediction behaviour as a by-product of language learning. We suggest that a consideration of language acquisition places important constraints on Pickering & Garrod's (P&G's) theory.
Extending BPM Environments of Your Choice with Performance Related Decision Support
NASA Astrophysics Data System (ADS)
Fritzsche, Mathias; Picht, Michael; Gilani, Wasif; Spence, Ivor; Brown, John; Kilpatrick, Peter
What-if Simulations have been identified as one solution for business performance related decision support. Such support is especially useful in cases where it can be automatically generated out of Business Process Management (BPM) Environments from the existing business process models and performance parameters monitored from the executed business process instances. Currently, some of the available BPM Environments offer basic-level performance prediction capabilities. However, these functionalities are normally too limited to be generally useful for performance related decision support at business process level. In this paper, an approach is presented which allows the non-intrusive integration of sophisticated tooling for what-if simulations, analytic performance prediction tools, process optimizations or a combination of such solutions into already existing BPM environments. The approach abstracts from process modelling techniques which enable automatic decision support spanning processes across numerous BPM Environments. For instance, this enables end-to-end decision support for composite processes modelled with the Business Process Modelling Notation (BPMN) on top of existing Enterprise Resource Planning (ERP) processes modelled with proprietary languages.
Predictive models of radiative neutrino masses
DOE Office of Scientific and Technical Information (OSTI.GOV)
Julio, J., E-mail: julio@lipi.go.id
2016-06-21
We discuss two models of radiative neutrino mass generation. The first model features a one-loop Zee model with Z_4 symmetry. The second is a two-loop neutrino mass model with singly- and doubly-charged scalars. These two models fit neutrino oscillation data well and predict interesting rates for lepton flavor violation processes.
NASA Astrophysics Data System (ADS)
Liu, Z.; LU, G.; He, H.; Wu, Z.; He, J.
2017-12-01
Reliable drought prediction is fundamental for seasonal water management. Considering that drought development is closely related to the spatio-temporal evolution of large-scale circulation patterns, we develop a conceptual prediction model of seasonal drought processes based on atmospheric/oceanic Standardized Anomalies (SA). It is essentially a synchronous stepwise regression relationship between 90-day-accumulated atmospheric/oceanic SA-based predictors and the 3-month SPI updated daily (SPI3). It is forced with forecasted atmospheric and oceanic variables retrieved from seasonal climate forecast systems, and it can make seamless drought predictions for operational use after year-to-year calibration. Simulations and predictions of four severe seasonal regional drought processes in China were forced with the NCEP/NCAR reanalysis datasets and the NCEP Climate Forecast System Version 2 (CFSv2) operationally forecasted datasets, respectively. With the help of real-time correction for operational application, model application during four recent severe regional drought events in China revealed that the model is good at predicting drought development but weak at predicting severity. In addition to weakness in predicting the drought peak, drought relief is liable to be predicted as drought recession. This weak performance may be associated with precipitation-causing weather patterns during drought relief. Initial analysis of the predicted 90-day prospective SPI3 curves shows that the 2009/2010 drought in Southwest China and the 2014 drought in North China can be predicted and simulated well even for the prospective 1-75 days. In comparison, the prospective 1-45 days may be a feasible and acceptable lead time for simulation and prediction of the 2011 droughts in Southwest China and East China, after which the simulated and predicted developments clearly change.
Pavurala, Naresh; Xu, Xiaoming; Krishnaiah, Yellela S R
2017-05-15
Hyperspectral imaging using near infrared spectroscopy (NIRS) integrates spectroscopy and conventional imaging to obtain both spectral and spatial information of materials. The non-invasive and rapid nature of hyperspectral imaging using NIRS makes it a valuable process analytical technology (PAT) tool for in-process monitoring and control of the manufacturing process for transdermal drug delivery systems (TDS). The focus of this investigation was to develop and validate the use of Near Infra-red (NIR) hyperspectral imaging to monitor coat thickness uniformity, a critical quality attribute (CQA) for TDS. Chemometric analysis was used to process the hyperspectral image and a partial least square (PLS) model was developed to predict the coat thickness of the TDS. The goodness of model fit and prediction were 0.9933 and 0.9933, respectively, indicating an excellent fit to the training data and also good predictability. The % Prediction Error (%PE) for internal and external validation samples was less than 5% confirming the accuracy of the PLS model developed in the present study. The feasibility of the hyperspectral imaging as a real-time process analytical tool for continuous processing was also investigated. When the PLS model was applied to detect deliberate variation in coating thickness, it was able to predict both the small and large variations as well as identify coating defects such as non-uniform regions and presence of air bubbles. Published by Elsevier B.V.
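Schematically, the PLS step regresses coat thickness on each pixel's unfolded NIR spectrum, as in the sketch below, which simulates a toy Beer-Lambert-like signal and reports a %PE analogous to the one quoted above; all spectra and thicknesses are simulated, not the study's data.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(7)
wavelengths = np.linspace(1100, 2200, 120)      # nm
thickness = rng.uniform(50, 150, 300)           # coat thickness (um), toy
# Toy Beer-Lambert-like absorbance band whose depth scales with thickness
band = np.exp(-((wavelengths - 1650) / 200.0) ** 2)
spectra = thickness[:, None] * 0.002 * band + rng.normal(0, 0.01, (300, 120))

X_tr, X_te, y_tr, y_te = train_test_split(spectra, thickness, random_state=0)
pls = PLSRegression(n_components=5).fit(X_tr, y_tr)
pred = pls.predict(X_te).ravel()
print("%PE:", round(100 * np.mean(np.abs(pred - y_te) / y_te), 2))
```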
Green, Jasmine; Liem, Gregory Arief D; Martin, Andrew J; Colmar, Susan; Marsh, Herbert W; McInerney, Dennis
2012-10-01
The study tested three theoretically/conceptually hypothesized longitudinal models of academic processes leading to academic performance. Based on a longitudinal sample of 1866 high-school students across two consecutive years of high school (Time 1 and Time 2), the model with the most superior heuristic value demonstrated: (a) academic motivation and self-concept positively predicted attitudes toward school; (b) attitudes toward school positively predicted class participation and homework completion and negatively predicted absenteeism; and (c) class participation and homework completion positively predicted test performance whilst absenteeism negatively predicted test performance. Taken together, these findings provide support for the relevance of the self-system model and, particularly, the importance of examining the dynamic relationships amongst engagement factors of the model. The study highlights implications for educational and psychological theory, measurement, and intervention. Copyright © 2012 The Foundation for Professionals in Services for Adolescents. Published by Elsevier Ltd. All rights reserved.
Tools for studying dry-cured ham processing by using computed tomography.
Santos-Garcés, Eva; Muñoz, Israel; Gou, Pere; Sala, Xavier; Fulladosa, Elena
2012-01-11
An accurate knowledge and optimization of dry-cured ham elaboration processes could help to reduce operating costs and maximize product quality. The development of nondestructive tools to characterize chemical parameters such as salt and water contents and a_w during processing is of special interest. In this paper, predictive models for salt content (R² = 0.960, RMSECV = 0.393), water content (R² = 0.912, RMSECV = 1.751), and a_w (R² = 0.906, RMSECV = 0.008), which comprise the whole elaboration process, were developed. These predictive models were used to develop analytical tools such as distribution diagrams, line profiles, and regions of interest (ROIs) from the acquired computed tomography (CT) scans. These CT analytical tools provided quantitative information on salt, water, and a_w in terms of both content and distribution throughout the process. The information obtained was applied to two industrial case studies. The main drawback of the predictive models and CT analytical tools is the disturbance that fat produces in water content and a_w predictions.
Application of predictive modelling techniques in industry: from food design up to risk assessment.
Membré, Jeanne-Marie; Lambert, Ronald J W
2008-11-30
In this communication, examples of applications of predictive microbiology in industrial contexts (i.e. Nestlé and Unilever) are presented which cover a range of applications in food safety from formulation and process design to consumer safety risk assessment. A tailor-made, private expert system, developed to support safe product/process design assessment is introduced as an example of how predictive models can be deployed for use by non-experts. Its use in conjunction with other tools and software available in the public domain is discussed. Specific applications of predictive microbiology techniques are presented relating to investigations of either growth or limits to growth with respect to product formulation or process conditions. An example of a probabilistic exposure assessment model for chilled food application is provided and its potential added value as a food safety management tool in an industrial context is weighed against its disadvantages. The role of predictive microbiology in the suite of tools available to food industry and some of its advantages and constraints are discussed.
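As one classic ingredient of such predictive-microbiology tools, the Ratkowsky square-root model relates growth rate to temperature; the parameters below are invented for illustration and are not from the systems described above.

```python
b = 0.03      # regression coefficient, illustrative
T_min = 2.0   # notional minimum growth temperature (deg C), illustrative

def growth_rate(T):
    # Ratkowsky square-root model: sqrt(mu) = b * (T - T_min) above T_min
    return (b * (T - T_min)) ** 2 if T > T_min else 0.0

for T in (4, 8, 12, 25):
    print(f"T={T} degC -> mu={growth_rate(T):.4f} 1/h")
```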
Jørgensen, Søren; Dau, Torsten
2011-09-01
A model for predicting the intelligibility of processed noisy speech is proposed. The speech-based envelope power spectrum model has a similar structure as the model of Ewert and Dau [(2000). J. Acoust. Soc. Am. 108, 1181-1196], developed to account for modulation detection and masking data. The model estimates the speech-to-noise envelope power ratio, SNR(env), at the output of a modulation filterbank and relates this metric to speech intelligibility using the concept of an ideal observer. Predictions were compared to data on the intelligibility of speech presented in stationary speech-shaped noise. The model was further tested in conditions with noisy speech subjected to reverberation and spectral subtraction. Good agreement between predictions and data was found in all cases. For spectral subtraction, an analysis of the model's internal representation of the stimuli revealed that the predicted decrease of intelligibility was caused by the estimated noise envelope power exceeding that of the speech. The classical concept of the speech transmission index fails in this condition. The results strongly suggest that the signal-to-noise ratio at the output of a modulation frequency selective process provides a key measure of speech intelligibility. © 2011 Acoustical Society of America
A Modified Isotropic-Kinematic Hardening Model to Predict the Defects in Tube Hydroforming Process
NASA Astrophysics Data System (ADS)
Jin, Kai; Guo, Qun; Tao, Jie; Guo, Xun-zhong
2017-11-01
Numerical simulations of the tube hydroforming process of hollow crankshafts were conducted using the finite element analysis method. Moreover, a modified model integrating an isotropic-kinematic hardening model with a ductile criterion model was used to more accurately optimize process parameters such as internal pressure, feed distance and friction coefficient. Subsequently, hydroforming experiments were performed based on the simulation results. The comparison between experimental and simulation results indicated that the prediction of tube deformation, cracking and wrinkling was quite accurate for the tube hydroforming process. Finally, hollow crankshafts with high thickness uniformity were obtained, and the thickness distributions from the numerical and experimental results were in good agreement.
Kathleen L. Kavanaugh; Matthew B. Dickinson; Anthony S. Bova
2010-01-01
Current operational methods for predicting tree mortality from fire injury are regression-based models that only indirectly consider underlying causes and, thus, have limited generality. A better understanding of the physiological consequences of tree heating and injury is needed to develop biophysical process models that can make predictions under changing or novel...
Meteorological data-processing package
NASA Technical Reports Server (NTRS)
Billingsly, J. B.; Braken, P. A.
1979-01-01
METPAK, a meteorological data-processing package for satellite data used to develop cloud-tracking maps, is described. The data can be used to develop and enhance numerical prediction models for mesoscale phenomena and to improve the ability to detect and predict storms.
The Coastal Ocean Prediction Systems program: Understanding and managing our coastal ocean
DOE Office of Scientific and Technical Information (OSTI.GOV)
Eden, H.F.; Mooers, C.N.K.
1990-06-01
The goal of COPS is to couple a program of regular observations to numerical models, through techniques of data assimilation, in order to provide a predictive capability for the US coastal ocean including the Great Lakes, estuaries, and the entire Exclusive Economic Zone (EEZ). The objectives of the program include: determining the predictability of the coastal ocean and the processes that govern the predictability; developing efficient prediction systems for the coastal ocean based on the assimilation of real-time observations into numerical models; and coupling the predictive systems for the physical behavior of the coastal ocean to predictive systems for biological, chemical, and geological processes to achieve an interdisciplinary capability. COPS will provide the basis for effective monitoring and prediction of coastal ocean conditions by optimizing the use of increased scientific understanding, improved observations, advanced computer models, and computer graphics to make the best possible estimates of sea level, currents, temperatures, salinities, and other properties of entire coastal regions.
Artificial neural network modelling of a large-scale wastewater treatment plant operation.
Güçlü, Dünyamin; Dursun, Sükrü
2010-11-01
Artificial Neural Networks (ANNs), an artificial intelligence method, provide effective predictive models for complex processes. Three independent ANN models trained with the back-propagation algorithm were developed to predict effluent chemical oxygen demand (COD), suspended solids (SS) and aeration tank mixed liquor suspended solids (MLSS) concentrations of the Ankara central wastewater treatment plant. The appropriate architecture of the ANN models was determined through several steps of training and testing of the models. The ANN models yielded satisfactory predictions. Results of the root mean square error, mean absolute error and mean absolute percentage error were 3.23, 2.41 mg/L and 5.03% for COD; 1.59, 1.21 mg/L and 17.10% for SS; 52.51, 44.91 mg/L and 3.77% for MLSS, respectively, indicating that the developed model could be efficiently used. The results overall also confirm that the ANN modelling approach may have great implementation potential for simulation, precise performance prediction and process control of wastewater treatment plants.
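A minimal sketch of a back-propagation ANN of this kind, predicting effluent COD from routine plant measurements, is given below using scikit-learn. The data are simulated and the input variables are assumptions for illustration; in the study, separate networks were trained for COD, SS and MLSS.

```python
# Sketch of a back-propagation ANN predicting effluent COD.
# Inputs stand in for routine plant measurements (influent flow, COD, SS, ...).
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error, mean_squared_error

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 6))                                   # simulated measurements
y = 40 + 5 * X[:, 0] - 3 * X[:, 1] + rng.normal(scale=2, size=500)  # effluent COD, mg/L

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
ann = MLPRegressor(hidden_layer_sizes=(10,), max_iter=2000,
                   random_state=0).fit(X_tr, y_tr)               # backprop training
pred = ann.predict(X_te)

rmse = np.sqrt(mean_squared_error(y_te, pred))
mae = mean_absolute_error(y_te, pred)
mape = 100 * np.mean(np.abs((y_te - pred) / y_te))
print(f"RMSE={rmse:.2f} mg/L, MAE={mae:.2f} mg/L, MAPE={mape:.2f}%")
```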
Flash-point prediction for binary partially miscible mixtures of flammable solvents.
Liaw, Horng-Jang; Lu, Wen-Hung; Gerbaud, Vincent; Chen, Chan-Cheng
2008-05-30
Flash point is the most important variable used to characterize fire and explosion hazard of liquids. Herein, partially miscible mixtures are presented within the context of liquid-liquid extraction processes. This paper describes development of a model for predicting the flash point of binary partially miscible mixtures of flammable solvents. To confirm the predictive efficacy of the derived flash points, the model was verified by comparing the predicted values with the experimental data for the studied mixtures: methanol+octane; methanol+decane; acetone+decane; methanol+2,2,4-trimethylpentane; and, ethanol+tetradecane. Our results reveal that immiscibility in the two liquid phases should not be ignored in the prediction of flash point. Overall, the predictive results of this proposed model describe the experimental data well. Based on this evidence, therefore, it appears reasonable to suggest potential application for our model in assessment of fire and explosion hazards, and development of inherently safer designs for chemical processes containing binary partially miscible mixtures of flammable solvents.
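Models of this family build on Le Chatelier's rule, which locates the mixture flash point where the summed vapour contributions of the components, each normalized by its value at the pure-component flash point, reach unity. The sketch below solves that condition for a binary mixture assuming an ideal liquid phase (activity coefficients of 1), so it deliberately does not capture the partial miscibility this paper addresses; the Antoine constants and flash points are placeholder values, not data from the study.

```python
# Hedged sketch of a Le Chatelier-type mixture flash-point calculation,
# assuming an ideal single liquid phase. All constants are illustrative.
from scipy.optimize import brentq

def psat(T, A, B, C):
    """Antoine equation; T in deg C, pressure in mmHg (illustrative units)."""
    return 10 ** (A - B / (T + C))

# (Antoine A, B, C, pure-component flash point in deg C) -- placeholders
comps = [
    dict(A=8.08, B=1582.3, C=239.7, Tfp=11.0),   # "methanol"-like component
    dict(A=6.92, B=1355.1, C=209.5, Tfp=13.0),   # "octane"-like component
]
x = [0.4, 0.6]                                   # liquid mole fractions

def le_chatelier(T):
    # Sum of vapour contributions relative to each pure flash point; = 1 at T_fp
    return sum(xi * psat(T, c["A"], c["B"], c["C"])
               / psat(c["Tfp"], c["A"], c["B"], c["C"])
               for xi, c in zip(x, comps)) - 1.0

T_fp = brentq(le_chatelier, -50.0, 150.0)        # root of the closure condition
print(f"Estimated mixture flash point: {T_fp:.1f} deg C")
```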
Partial least squares for efficient models of fecal indicator bacteria on Great Lakes beaches
Brooks, Wesley R.; Fienen, Michael N.; Corsi, Steven R.
2013-01-01
At public beaches, it is now common to mitigate the impact of water-borne pathogens by posting a swimmer's advisory when the concentration of fecal indicator bacteria (FIB) exceeds an action threshold. Since culturing the bacteria delays public notification when dangerous conditions exist, regression models are sometimes used to predict the FIB concentration based on readily-available environmental measurements. It is hard to know which environmental parameters are relevant to predicting FIB concentration, and the parameters are usually correlated, which can hurt the predictive power of a regression model. Here the method of partial least squares (PLS) is introduced to automate the regression modeling process. Model selection is reduced to the process of setting a tuning parameter to control the decision threshold that separates predicted exceedances of the standard from predicted non-exceedances. The method is validated by application to four Great Lakes beaches during the summer of 2010. Performance of the PLS models compares favorably to that of the existing state-of-the-art regression models at these four sites.
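The distinctive step here is tuning a single decision threshold on the continuous PLS prediction to separate predicted exceedances from non-exceedances. A minimal sketch of that step follows, with simulated environmental predictors and an illustrative action limit; the variable names are assumptions, not taken from the paper.

```python
# Sketch of PLS regression for log FIB concentration followed by tuning
# of the exceedance decision threshold. Data are simulated.
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(2)
X = rng.normal(size=(200, 12))                    # turbidity, rainfall, waves, ...
log_fib = 2.0 + X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=200)
action_limit = np.log10(235.0)                    # illustrative standard, CFU/100 mL

pred = PLSRegression(n_components=3).fit(X, log_fib).predict(X).ravel()
exceed = log_fib > action_limit                   # true exceedances

# Scan candidate thresholds; keep the one that maximises correct advisories
candidates = np.linspace(pred.min(), pred.max(), 200)
accuracy = [np.mean((pred > c) == exceed) for c in candidates]
best = candidates[int(np.argmax(accuracy))]
print(f"decision threshold = {best:.2f} (log10 CFU/100 mL)")
```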
Yan, Zhao-Da; Zhou, Chong-Guang; Su, Shi-Chuan; Liu, Zhen-Tao; Wang, Xi-Zhen
2003-01-01
In order to predict and improve the performance of a natural gas/diesel dual fuel engine (DFE), a combustion rate model based on a forward neural network was built to study the combustion process of the DFE. The effect of the operating parameters on the combustion rate was also studied by means of this model. The study showed that the predicted results were in good agreement with the experimental data. It was proved that the developed combustion rate model could be used to successfully predict and optimize the combustion process of a dual fuel engine.
Mathematical modeling of the process of filling a mold during injection molding of ceramic products
NASA Astrophysics Data System (ADS)
Kulkov, S. N.; Korobenkov, M. V.; Bragin, N. A.
2015-10-01
Using the software package Fluent, the filling of a mold during injection molding of ceramic products has been predicted. Such prediction is of great importance because the strength of the final product is directly related to the presence of voids in the molding, making early detection of inaccuracies in the mold possible prior to manufacturing. The calculations were performed as a mathematical hydrodynamic model of the turbulent process of filling a predetermined volume with a viscous liquid. The model used to determine the filling of the form evaluated the influence of the density and viscosity of the feedstock, and of the injection pressure, on the mold filling process, in order to predict the formation of voids caused by defects in the shape geometry.
Sovány, Tamás; Papós, Kitti; Kása, Péter; Ilič, Ilija; Srčič, Stane; Pintye-Hódi, Klára
2013-06-01
The importance of in silico modeling in the pharmaceutical industry is continuously increasing. The aim of the present study was the development of a neural network model for prediction of the postcompressional properties of scored tablets based on the application of existing data sets from our previous studies. Some important process parameters and physicochemical characteristics of the powder mixtures were used as training factors to achieve the best applicability in a wide range of possible compositions. The results demonstrated that, after some pre-processing of the factors, an appropriate prediction performance could be achieved. However, because of the poor extrapolation capacity, broadening of the training data range appears necessary.
Prediction of normalized biodiesel properties by simulation of multiple feedstock blends.
García, Manuel; Gonzalo, Alberto; Sánchez, José Luis; Arauzo, Jesús; Peña, José Angel
2010-06-01
A continuous process for biodiesel production has been simulated using Aspen HYSYS V7.0 software. As fresh feed, feedstocks with a mild acid content have been used. The process flowsheet follows a traditional alkaline transesterification scheme constituted by esterification, transesterification and purification stages. Kinetic models taking into account the concentration of the different species have been employed in order to simulate the behavior of the CSTR reactors and the product distribution within the process. The comparison between experimental data found in literature and the predicted normalized properties, has been discussed. Additionally, a comparison between different thermodynamic packages has been performed. NRTL activity model has been selected as the most reliable of them. The combination of these models allows the prediction of 13 out of 25 parameters included in standard EN-14214:2003, and confers simulators a great value as predictive as well as optimization tool. (c) 2010 Elsevier Ltd. All rights reserved.
Process models as tools in forestry research and management
Kurt Johnsen; Lisa Samuelson; Robert Teskey; Steve McNulty; Tom Fox
2001-01-01
Forest process models are mathematical representations of biological systems that incorporate our understanding of physiological and ecological mechanisms into predictive algorithms. These models were originally designed and used for research purposes, but are being developed for use in practical forest management. Process models designed for research...
Mathematics for understanding disease.
Bies, R R; Gastonguay, M R; Schwartz, S L
2008-06-01
The application of mathematical models to reflect the organization and activity of biological systems can be viewed as a continuum of purpose. The far left of the continuum is solely the prediction of biological parameter values, wherein an understanding of the underlying biological processes is irrelevant to the purpose. At the far right of the continuum are mathematical models, the purposes of which are a precise understanding of those biological processes. No models in present use fall at either end of the continuum. Without question, however, the emphasis with regard to purpose has been on prediction, e.g., clinical trial simulation and empirical disease progression modeling. Clearly the model that ultimately incorporates a universal understanding of biological organization will also precisely predict biological events, giving the continuum the logical form of a tautology. Currently that goal lies at an immeasurable distance. Nonetheless, the motive here is to urge movement in the direction of that goal. The distance traveled toward understanding naturally depends upon the nature of the scientific question posed with respect to comprehending and/or predicting a particular disease process. A move toward mathematical models implies a move away from static empirical modeling and toward models that focus on systems biology, wherein modeling entails the systematic study of the complex pattern of organization inherent in biological systems.
USDA-ARS's Scientific Manuscript database
The Water Erosion Prediction Project (WEPP) and the Agricultural Policy/Environmental eXtender (APEX) are process-based models that can predict spatial and temporal distributions of erosion for hillslopes and watersheds. This study applies the WEPP model to predict runoff and erosion for a 35-ha fie...
A simulation technique for predicting thickness of thermal sprayed coatings
NASA Technical Reports Server (NTRS)
Goedjen, John G.; Miller, Robert A.; Brindley, William J.; Leissler, George W.
1995-01-01
The complexity of many of the components being coated today using the thermal spray process makes the trial and error approach traditionally followed in depositing a uniform coating inadequate, thereby necessitating a more analytical approach to developing robotic trajectories. A two dimensional finite difference simulation model has been developed to predict the thickness of coatings deposited using the thermal spray process. The model couples robotic and component trajectories and thermal spraying parameters to predict coating thickness. Simulations and experimental verification were performed on a rotating disk to evaluate the predictive capabilities of the approach.
A model for prediction of STOVL ejector dynamics
NASA Technical Reports Server (NTRS)
Drummond, Colin K.
1989-01-01
A semi-empirical control-volume approach to ejector modeling for transient performance prediction is presented. This new approach is motivated by the need for a predictive real-time ejector sub-system simulation for Short Take-Off and Vertical Landing (STOVL) integrated flight and propulsion controls design applications. Emphasis is placed on discussion of the approximate characterization of the mixing process central to thrust augmenting ejector operation. The proposed ejector model suggests transient flow predictions are possible with a model based on steady-flow data. A practical test case is presented to illustrate model calibration.
The role of ensemble post-processing for modeling the ensemble tail
NASA Astrophysics Data System (ADS)
Van De Vyver, Hans; Van Schaeybroeck, Bert; Vannitsem, Stéphane
2016-04-01
Over the past decades, the numerical weather prediction community has witnessed a paradigm shift from deterministic to probabilistic forecasting and state estimation (Buizza and Leutbecher, 2015; Buizza et al., 2008), in an attempt to quantify the uncertainties associated with initial-condition and model errors. An important benefit of a probabilistic framework is the improved prediction of extreme events. However, one may ask to what extent such model estimates contain information on the occurrence probability of extreme events and how this information can be optimally extracted. Different approaches have been proposed and applied to real-world systems which, based on extreme value theory, allow the estimation of extreme-event probabilities conditional on forecasts and state estimates (Ferro, 2007; Friederichs, 2010). Using ensemble predictions generated with a model of low dimensionality, a thorough investigation is presented quantifying the change of predictability of extreme events associated with ensemble post-processing and other influencing factors, including the finite ensemble size, lead time, model assumptions and the use of different covariates (ensemble mean, maximum, spread, ...) for modeling the tail distribution. Tail modeling is performed by deriving extreme-quantile estimates using a peaks-over-threshold representation (generalized Pareto distribution) or quantile regression. Common ensemble post-processing methods aim to improve mostly the ensemble mean and spread of a raw forecast (Van Schaeybroeck and Vannitsem, 2015). Conditional tail modeling, on the other hand, is a post-processing in itself, focusing on the tails only. Therefore, it is unclear how applying ensemble post-processing prior to conditional tail modeling impacts the skill of extreme-event predictions. This work investigates this question in detail.
References:
Buizza, Leutbecher, and Isaksen, 2008: Potential use of an ensemble of analyses in the ECMWF Ensemble Prediction System, Q. J. R. Meteorol. Soc. 134: 2051-2066.
Buizza and Leutbecher, 2015: The forecast skill horizon, Q. J. R. Meteorol. Soc. 141: 3366-3382.
Ferro, 2007: A probability model for verifying deterministic forecasts of extreme events. Weather and Forecasting 22(5), 1089-1100.
Friederichs, 2010: Statistical downscaling of extreme precipitation events using extreme value theory. Extremes 13, 109-132.
Van Schaeybroeck and Vannitsem, 2015: Ensemble post-processing using member-by-member approaches: theoretical aspects. Q. J. R. Meteorol. Soc. 141: 807-818.
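The peaks-over-threshold step mentioned above can be sketched compactly: fit a generalized Pareto distribution to exceedances of a high threshold and read off an extreme quantile. The data below are synthetic, and the sketch is unconditional, whereas the study models the tail conditional on ensemble covariates (mean, spread, ...).

```python
# Sketch of peaks-over-threshold tail modelling with a generalized
# Pareto distribution, on synthetic data.
import numpy as np
from scipy.stats import genpareto

rng = np.random.default_rng(3)
obs = rng.gamma(shape=2.0, scale=10.0, size=5000)   # stand-in forecast variable

u = np.quantile(obs, 0.95)                          # high threshold
excess = obs[obs > u] - u
shape, loc, scale = genpareto.fit(excess, floc=0.0) # fit GPD to exceedances

p = 1e-3                                            # target exceedance probability
p_u = np.mean(obs > u)                              # rate of threshold exceedance
q = u + genpareto.ppf(1 - p / p_u, shape, loc=0.0, scale=scale)
print(f"u={u:.1f}, GPD shape={shape:.2f}, quantile for p={p:g}: {q:.1f}")
```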
Ferguson, Jake M; Ponciano, José M
2014-01-01
Predicting population extinction risk is a fundamental application of ecological theory to the practice of conservation biology. Here, we compared the prediction performance of a wide array of stochastic, population dynamics models against direct observations of the extinction process from an extensive experimental data set. By varying a series of biological and statistical assumptions in the proposed models, we were able to identify the assumptions that affected predictions about population extinction. We also show how certain autocorrelation structures can emerge due to interspecific interactions, and that accounting for the stochastic effect of these interactions can improve predictions of the extinction process. We conclude that it is possible to account for the stochastic effects of community interactions on extinction when using single-species time series. PMID:24304946
Predicting field weed emergence with empirical models and soft computing techniques
USDA-ARS's Scientific Manuscript database
Seedling emergence is the most important phenological process that influences the success of weed species; therefore, predicting weed emergence timing plays a critical role in scheduling weed management measures. Important efforts have been made in the attempt to develop models to predict seedling e...
Basic Research on Adaptive Model Algorithmic Control
1985-12-01
Richalet, J., A. Rault, J. L. Testud and J. Papon (1978). Model predictive heuristic control: applications to industrial processes.
NASA Astrophysics Data System (ADS)
Sreekanth, J.; Moore, Catherine
2018-04-01
The application of global sensitivity and uncertainty analysis techniques to groundwater models of deep sedimentary basins are typically challenged by large computational burdens combined with associated numerical stability issues. The highly parameterized approaches required for exploring the predictive uncertainty associated with the heterogeneous hydraulic characteristics of multiple aquifers and aquitards in these sedimentary basins exacerbate these issues. A novel Patch Modelling Methodology is proposed for improving the computational feasibility of stochastic modelling analysis of large-scale and complex groundwater models. The method incorporates a nested groundwater modelling framework that enables efficient simulation of groundwater flow and transport across multiple spatial and temporal scales. The method also allows different processes to be simulated within different model scales. Existing nested model methodologies are extended by employing 'joining predictions' for extrapolating prediction-salient information from one model scale to the next. This establishes a feedback mechanism supporting the transfer of information from child models to parent models as well as parent models to child models in a computationally efficient manner. This feedback mechanism is simple and flexible and ensures that while the salient small scale features influencing larger scale prediction are transferred back to the larger scale, this does not require the live coupling of models. This method allows the modelling of multiple groundwater flow and transport processes using separate groundwater models that are built for the appropriate spatial and temporal scales, within a stochastic framework, while also removing the computational burden associated with live model coupling. The utility of the method is demonstrated by application to an actual large scale aquifer injection scheme in Australia.
Predictive computation of genomic logic processing functions in embryonic development
Peter, Isabelle S.; Faure, Emmanuel; Davidson, Eric H.
2012-01-01
Gene regulatory networks (GRNs) control the dynamic spatial patterns of regulatory gene expression in development. Thus, in principle, GRN models may provide system-level, causal explanations of developmental process. To test this assertion, we have transformed a relatively well-established GRN model into a predictive, dynamic Boolean computational model. This Boolean model computes spatial and temporal gene expression according to the regulatory logic and gene interactions specified in a GRN model for embryonic development in the sea urchin. Additional information input into the model included the progressive embryonic geometry and gene expression kinetics. The resulting model predicted gene expression patterns for a large number of individual regulatory genes each hour up to gastrulation (30 h) in four different spatial domains of the embryo. Direct comparison with experimental observations showed that the model predictively computed these patterns with remarkable spatial and temporal accuracy. In addition, we used this model to carry out in silico perturbations of regulatory functions and of embryonic spatial organization. The model computationally reproduced the altered developmental functions observed experimentally. Two major conclusions are that the starting GRN model contains sufficiently complete regulatory information to permit explanation of a complex developmental process of gene expression solely in terms of genomic regulatory code, and that the Boolean model provides a tool with which to test in silico regulatory circuitry and developmental perturbations. PMID:22927416
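The logic of such a Boolean computation can be shown in a few lines: each gene's state at the next hour is a logic function of regulator states in the same spatial domain. The genes, rules and domains below are invented for illustration and are not taken from the sea urchin GRN.

```python
# Toy Boolean network evaluated per spatial domain, hour by hour,
# in the spirit of the model described above. Rules are invented.
domains = ["endoderm", "mesoderm", "ectoderm", "pmc"]
state = {d: {"geneA": d == "endoderm", "geneB": False, "geneC": False}
         for d in domains}

def step(s):
    """One hour of development: apply the regulatory logic in every domain."""
    new = {}
    for d, g in s.items():
        new[d] = {
            "geneA": g["geneA"],                      # maintained by autoregulation
            "geneB": g["geneA"] and not g["geneC"],   # activated by A, repressed by C
            "geneC": g["geneB"],                      # activated by B, one step later
        }
    return new

for hour in range(1, 31):                             # up to gastrulation (~30 h)
    state = step(state)
print(state["endoderm"])
```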
Integration of Tuyere, Raceway and Shaft Models for Predicting Blast Furnace Process
NASA Astrophysics Data System (ADS)
Fu, Dong; Tang, Guangwu; Zhao, Yongfu; D'Alessio, John; Zhou, Chenn Q.
2018-06-01
A novel modeling strategy is presented for simulating the blast furnace iron making process. The physical and chemical phenomena involved take place across a wide range of length and time scales, and three models are developed to simulate different regions of the blast furnace, i.e., the tuyere model, the raceway model and the shaft model. This paper focuses on the integration of the three models to predict the entire blast furnace process. Mapping of outputs and inputs between models and an iterative scheme are developed to establish communication between the models. The effects of tuyere operation and burden distribution on blast furnace fuel efficiency are investigated numerically. The integration of different models provides a way to realistically simulate the blast furnace by improving the modeling resolution of local phenomena and minimizing the model assumptions.
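The iterative coupling scheme can be illustrated abstractly: sub-models exchange mapped boundary conditions until the exchanged quantities stop changing. The three functions below are trivial stand-ins for the tuyere, raceway and shaft simulations, with made-up relations and numbers.

```python
# Sketch of an iterative integration loop between three sub-models.
# All relations and constants are illustrative stand-ins.
def tuyere(blast):          # returns raceway inlet gas state
    return {"T_gas": 1900 + 0.1 * blast["T_blast"], "CO": 0.35}

def raceway(gas):           # returns gas entering the shaft
    return {"T_gas": gas["T_gas"] - 150, "CO": gas["CO"] + 0.05}

def shaft(gas, burden):     # returns top-gas temperature
    return {"T_top": 0.1 * gas["T_gas"] + 2 * burden["ore_coke_ratio"]}

blast = {"T_blast": 1200.0}
burden = {"ore_coke_ratio": 4.0}
t_top_prev = float("inf")
for it in range(100):
    top = shaft(raceway(tuyere(blast)), burden)
    if abs(top["T_top"] - t_top_prev) < 1e-6:        # converged fixed point
        print(f"converged after {it} iterations: T_top = {top['T_top']:.1f}")
        break
    t_top_prev = top["T_top"]
    # feedback: top-gas temperature informs the blast setting (illustrative)
    blast["T_blast"] = 1200.0 + 0.05 * (400.0 - top["T_top"])
```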
Critical Zone Architecture and the Last Glacial Legacy in Unglaciated North America
NASA Astrophysics Data System (ADS)
Marshall, J. A.; Roering, J. J.; Rempel, A. W.; Bartlein, P. J.; Merritts, D. J.; Walter, R. C.
2015-12-01
As fresh bedrock is exhumed into the Critical Zone and intersects with water and life, rock attributes controlling geochemical reactions, hydrologic routing, accommodation space for roots, surface area, and the mobile fraction of regolith are set not just by present-day processes, but are predicated on the 'ghosts' of past processes embedded in the subsurface architecture. Easily observable modern ecosystem processes such as tree throw can erase the past and bias our interpretation of landscape evolution. Abundant paleoenvironmental records demonstrate that unglaciated regions experienced profound climate changes through the late Pleistocene-Holocene transition, but studies quantifying how environmental variables affect erosion and weathering rates in these settings often marginalize or even forego consideration of the role of past climate regimes. Here we combine seven downscaled Last Glacial Maximum (LGM) paleoclimate reconstructions with a state-of-the-art frost cracking model to explore frost weathering potential across the North American continent 21 ka. We analyze existing evidence of LGM periglacial processes and features to better constrain frost weathering model predictions. All seven models predict frost cracking across a large swath to the west of the Continental Divide, with the southernmost extent at ~ latitude 35° N, and increasing latitude towards the buffering influence of the Pacific Ocean. All models predict significant frost cracking in the unglaciated Rocky Mountains. To the east of the Continental Divide, model results diverge more, but all predict regions with LGM temperatures too cold for significant frost cracking (mean annual temperatures < -15 °C), corroborated by observations of permafrost relics such as ice wedges in some areas. Our results provide a framework for coupling paleoclimate reconstructions with a predictive frost weathering model, and importantly, suggest that modeling modern Critical Zone process evolution may require a consideration of vastly different processes when rock was first exhumed into the Critical Zone reactor.
Boersen, Nathan; Carvajal, M Teresa; Morris, Kenneth R; Peck, Garnet E; Pinal, Rodolfo
2015-01-01
While previous research has demonstrated that roller compaction operating parameters strongly influence the properties of the final product, a greater emphasis might be placed on the raw material attributes of the formulation. There were two main objectives to this study: first, to assess the effects of different process variables on the properties of the obtained ribbons and the downstream granules produced from the roller-compacted ribbons; second, to establish whether models obtained with formulations of one active pharmaceutical ingredient (API) could predict the properties of similar formulations in terms of the excipients used, but with a different API. Tolmetin and acetaminophen, chosen for their different compaction properties, were roller compacted on a Fitzpatrick roller compactor using the same formulation. Models were created using tolmetin and tested using acetaminophen. The physical properties of the blends, ribbons, granules and tablets were characterized. Multivariate analysis using partial least squares was used to analyze all data. Multivariate models showed that the operating parameters and raw material attributes were essential in the prediction of ribbon porosity and post-milled particle size. The post-compaction ribbon and granule attributes also significantly contributed to the prediction of the tablet tensile strength. Models derived using tolmetin could reasonably predict the ribbon porosity of a second API. After further processing, the post-milled ribbon and granule properties, rather than the physical attributes of the formulation, were needed to predict downstream tablet properties. An understanding of the percolation threshold of the formulation significantly improved the predictive ability of the models.
Wu, Sheng; Jin, Qibing; Zhang, Ridong; Zhang, Junfeng; Gao, Furong
2017-07-01
In this paper, an improved constrained tracking control design is proposed for batch processes under uncertainties. A new process model that facilitates process state and tracking error augmentation with further additional tuning is first proposed. Then a subsequent controller design is formulated using robust stable constrained MPC optimization. Unlike conventional robust model predictive control (MPC), the proposed method enables the controller design to bear more degrees of tuning so that improved tracking control can be acquired, which is very important since uncertainties exist inevitably in practice and cause model/plant mismatches. An injection molding process is introduced to illustrate the effectiveness of the proposed MPC approach in comparison with conventional robust MPC. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
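The full augmented, robust formulation is beyond a short example, but the receding-horizon tracking loop it builds on can be sketched. The code below uses a scalar linear plant with deliberate model/plant mismatch (the situation the paper's extra tuning degrees address), no constraints, and illustrative numbers throughout.

```python
# Minimal receding-horizon tracking sketch for a scalar linear plant,
# with model/plant mismatch. Unconstrained; numbers are illustrative.
import numpy as np

a, b = 0.9, 0.5          # controller's model: x[k+1] = a*x[k] + b*u[k]
a_true = 0.8             # actual plant dynamics differ (mismatch)
N, r, lam = 10, 1.0, 0.1 # horizon, setpoint, input weight

x = 0.0
for k in range(30):
    # Prediction matrices: x_pred = F*x + G*u_seq over the horizon
    F = np.array([a ** (i + 1) for i in range(N)])
    G = np.zeros((N, N))
    for i in range(N):
        for j in range(i + 1):
            G[i, j] = a ** (i - j) * b
    # Minimise sum (x_pred - r)^2 + lam*sum u^2 via stacked least squares
    A = np.vstack([G, np.sqrt(lam) * np.eye(N)])
    y = np.concatenate([r - F * x, np.zeros(N)])
    u_seq = np.linalg.lstsq(A, y, rcond=None)[0]
    u = u_seq[0]                          # apply only the first move
    x = a_true * x + b * u                # true plant response
print(f"final output = {x:.3f} (setpoint {r})")
```

The residual steady-state offset produced by the mismatch is exactly the kind of error that motivates augmenting the model with the tracking error, as the paper proposes.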
Rutter, Carolyn M; Knudsen, Amy B; Marsh, Tracey L; Doria-Rose, V Paul; Johnson, Eric; Pabiniak, Chester; Kuntz, Karen M; van Ballegooijen, Marjolein; Zauber, Ann G; Lansdorp-Vogelaar, Iris
2016-07-01
Microsimulation models synthesize evidence about disease processes and interventions, providing a method for predicting long-term benefits and harms of prevention, screening, and treatment strategies. Because models often require assumptions about unobservable processes, assessing a model's predictive accuracy is important. We validated 3 colorectal cancer (CRC) microsimulation models against outcomes from the United Kingdom Flexible Sigmoidoscopy Screening (UKFSS) Trial, a randomized controlled trial that examined the effectiveness of one-time flexible sigmoidoscopy screening to reduce CRC mortality. The models incorporate different assumptions about the time from adenoma initiation to development of preclinical and symptomatic CRC. Analyses compare model predictions to study estimates across a range of outcomes to provide insight into the accuracy of model assumptions. All 3 models accurately predicted the relative reduction in CRC mortality 10 years after screening (predicted hazard ratios, with 95% percentile intervals: 0.56 [0.44, 0.71], 0.63 [0.51, 0.75], 0.68 [0.53, 0.83]; estimated with 95% confidence interval: 0.56 [0.45, 0.69]). Two models with longer average preclinical duration accurately predicted the relative reduction in 10-year CRC incidence. Two models with longer mean sojourn time accurately predicted the number of screen-detected cancers. All 3 models predicted too many proximal adenomas among patients referred to colonoscopy. Model accuracy can only be established through external validation. Analyses such as these are therefore essential for any decision model. Results supported the assumptions that the average time from adenoma initiation to development of preclinical cancer is long (up to 25 years), and mean sojourn time is close to 4 years, suggesting the window for early detection and intervention by screening is relatively long. Variation in dwell time remains uncertain and could have important clinical and policy implications. © The Author(s) 2016.
Clinical time series prediction: towards a hierarchical dynamical system framework
Liu, Zitao; Hauskrecht, Milos
2014-01-01
Objective: Developing machine learning and data mining algorithms for building temporal models of clinical time series is important for understanding of the patient condition, the dynamics of a disease, the effect of various patient management interventions and clinical decision making. In this work, we propose and develop a novel hierarchical framework for modeling clinical time series data of varied length and with irregularly sampled observations.
Materials and methods: Our hierarchical dynamical system framework for modeling clinical time series combines advantages of the two temporal modeling approaches: the linear dynamical system and the Gaussian process. We model the irregularly sampled clinical time series by using multiple Gaussian process sequences in the lower level of our hierarchical framework and capture the transitions between Gaussian processes by utilizing the linear dynamical system. The experiments are conducted on the complete blood count (CBC) panel data of 1000 post-surgical cardiac patients during their hospitalization. Our framework is evaluated and compared to multiple baseline approaches in terms of the mean absolute prediction error and the absolute percentage error.
Results: We tested our framework by first learning the time series model from data for the patients in the training set, and then applying the model in order to predict future time series values for the patients in the test set. We show that our model outperforms multiple existing models in terms of its predictive accuracy. Our method achieved a 3.13% average prediction accuracy improvement on ten CBC lab time series when it was compared against the best performing baseline. A 5.25% average accuracy improvement was observed when only short-term predictions were considered.
Conclusion: A new hierarchical dynamical system framework that lets us model irregularly sampled time series data is a promising new direction for modeling clinical time series and for improving their predictive performance. PMID:25534671
An actual load forecasting methodology by interval grey modeling based on the fractional calculus.
Yang, Yang; Xue, Dingyü
2017-07-17
The operation processes of a thermal power plant are measured as real-time data, and a large number of historical interval data can be obtained from the dataset. Within defined periods of time, the interval information can provide important information for decision making and equipment maintenance. Actual load is one of the most important parameters, and the trends hidden in the historical data show the overall operation status of the equipment. However, with interval grey parameter numbers, the modeling and prediction process is more complicated than with real numbers. In order not to lose any information, the geometric coordinate features, given by the coordinates of the area and middle-point lines, are used in this paper; these are proven to carry the same information as the original interval data. A grey prediction model for interval grey numbers based on the fractional-order accumulation calculus is proposed. Compared with the integer-order model, the proposed method has more degrees of freedom for tuning and better performance in modeling and prediction, and it can be widely used for modeling and prediction with small samples of historical interval industry sequences. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
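The fractional-order accumulation underlying such grey models can be written in a few lines: each accumulated value is a gamma-function-weighted sum of the raw sequence, and whole-number order recovers the classical cumulative sum. The sketch below shows only this accumulation step, on made-up load readings, not the full grey prediction model.

```python
# Sketch of the fractional-order accumulated generating operation (r-AGO).
# r = 1 reproduces the ordinary cumulative sum (1-AGO).
import numpy as np
from math import gamma

def frac_ago(x, r):
    """Fractional accumulation of order r via gamma-function weights."""
    n = len(x)
    out = np.zeros(n)
    for k in range(n):
        out[k] = sum(gamma(r + k - i) / (gamma(k - i + 1) * gamma(r)) * x[i]
                     for i in range(k + 1))
    return out

x = np.array([5.2, 5.4, 5.9, 6.1, 6.6])   # illustrative load readings
print(frac_ago(x, 1.0))                   # matches np.cumsum(x)
print(frac_ago(x, 0.5))                   # gentler, fractional accumulation
```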
Processes Understanding of Decadal Climate Variability
NASA Astrophysics Data System (ADS)
Prömmel, Kerstin; Cubasch, Ulrich
2016-04-01
The realistic representation of decadal climate variability in the models is essential for the quality of decadal climate predictions. Therefore, the understanding of those processes leading to decadal climate variability needs to be improved. Several of these processes are already included in climate models, but their importance has not yet been completely clarified. The simulation of other processes sometimes requires a higher model resolution or an extension of the model by additional subsystems. This is addressed within one module of the German research program "MiKlip II - Decadal Climate Predictions" (http://www.fona-miklip.de/en/) with a focus on the following processes. Stratospheric processes and their impact on the troposphere are analysed regarding the climate response to aerosol perturbations caused by volcanic eruptions and the stratospheric decadal variability due to solar forcing, climate change and ozone recovery. To account for the interaction between changing ozone concentrations and climate, a computationally efficient ozone chemistry module is developed and implemented in the MiKlip prediction system. The ocean variability and air-sea interaction are analysed with a special focus on the reduction of the North Atlantic cold bias. In addition, the predictability of the oceanic carbon uptake, with a special emphasis on the underlying mechanism, is investigated. This addresses a combination of physical, biological and chemical processes.
NASA Astrophysics Data System (ADS)
Beriro, D. J.; Abrahart, R. J.; Nathanail, C. P.
2012-04-01
Data-driven modelling is most commonly used to develop predictive models that will simulate natural processes. This paper, in contrast, uses Gene Expression Programming (GEP) to construct two alternative models of different pan evaporation estimations by means of symbolic regression: a simulator, a model of a real-world process developed on observed records, and an emulator, an imitator of some other model developed on predicted outputs calculated by that source model. The solutions are compared and contrasted for the purposes of determining whether any substantial differences exist between either option. This analysis will address recent arguments over the impact of using downloaded hydrological modelling datasets originating from different initial sources, i.e. observed or calculated. These differences can easily be overlooked by modellers, resulting in a model of a model developed on estimations derived from deterministic empirical equations and producing exceptionally high goodness-of-fit. This paper uses different lines-of-evidence to evaluate model output and in so doing paves the way for a new protocol in machine learning applications. Transparent modelling tools such as symbolic regression offer huge potential for explaining stochastic processes; however, the basic tenets of data quality and recourse to first principles with regard to problem understanding should not be trivialised. GEP is found to be an effective tool for the prediction of observed and calculated pan evaporation, with results supported by an understanding of the records, and of the natural processes concerned, evaluated using one-at-a-time response function sensitivity analysis. The results show that both architectures and response functions are very similar, implying that previously observed differences in goodness-of-fit can be explained by whether models are applied to observed or calculated data.
Li, Wen-bing; Yao, Lin-tao; Liu, Mu-hua; Huang, Lin; Yao, Ming-yin; Chen, Tian-bing; He, Xiu-wen; Yang, Ping; Hu, Hui-qin; Nie, Jiang-hui
2015-05-01
Cu in navel orange was detected rapidly by laser-induced breakdown spectroscopy (LIBS) combined with partial least squares (PLS) quantitative analysis, and the effect of different spectral data pretreatment methods on the detection accuracy of the model was then explored. Spectral data for the 52 Gannan navel orange samples were pretreated by different data smoothing, mean centering and standard normal variate transforms. The 319-338 nm wavelength section containing characteristic spectral lines of Cu was then selected to build PLS models, and the main evaluation indexes of the models, such as the regression coefficient (r), root mean square error of cross validation (RMSECV) and root mean square error of prediction (RMSEP), were compared and analyzed. The three indicators of the PLS model after 13-point smoothing and mean centering reached 0.9928, 3.43 and 3.4, respectively, and the average relative error of the prediction model was only 5.55%; in short, the calibration and prediction quality of this model were the best. The results show that, by selecting the appropriate data pre-processing method, the prediction accuracy of PLS quantitative models of fruits and vegetables detected by LIBS can be improved effectively, providing a new method for fast and accurate detection of fruits and vegetables by LIBS.
Revisiting the Holy Grail: using plant functional traits to understand ecological processes.
Funk, Jennifer L; Larson, Julie E; Ames, Gregory M; Butterfield, Bradley J; Cavender-Bares, Jeannine; Firn, Jennifer; Laughlin, Daniel C; Sutton-Grier, Ariana E; Williams, Laura; Wright, Justin
2017-05-01
One of ecology's grand challenges is developing general rules to explain and predict highly complex systems. Understanding and predicting ecological processes from species' traits has been considered a 'Holy Grail' in ecology. Plant functional traits are increasingly being used to develop mechanistic models that can predict how ecological communities will respond to abiotic and biotic perturbations and how species will affect ecosystem function and services in a rapidly changing world; however, significant challenges remain. In this review, we highlight recent work and outstanding questions in three areas: (i) selecting relevant traits; (ii) describing intraspecific trait variation and incorporating this variation into models; and (iii) scaling trait data to community- and ecosystem-level processes. Over the past decade, there have been significant advances in the characterization of plant strategies based on traits and trait relationships, and the integration of traits into multivariate indices and models of community and ecosystem function. However, the utility of trait-based approaches in ecology will benefit from efforts that demonstrate how these traits and indices influence organismal, community, and ecosystem processes across vegetation types, which may be achieved through meta-analysis and enhancement of trait databases. Additionally, intraspecific trait variation and species interactions need to be incorporated into predictive models using tools such as Bayesian hierarchical modelling. Finally, existing models linking traits to community and ecosystem processes need to be empirically tested for their applicability to be realized. © 2016 Cambridge Philosophical Society.
Petroleum-resource appraisal and discovery rate forecasting in partially explored regions
Drew, Lawrence J.; Schuenemeyer, J.H.; Root, David H.; Attanasi, E.D.
1980-01-01
PART A: A model of the discovery process can be used to predict the size distribution of future petroleum discoveries in partially explored basins. The parameters of the model are estimated directly from the historical drilling record, rather than being determined by assumptions or analogies. The model is based on the concept of the area of influence of a drill hole, which states that the area of a basin exhausted by a drill hole varies with the size and shape of targets in the basin and with the density of previously drilled wells. It also uses the concept of discovery efficiency, which measures the rate of discovery within several classes of deposit size. The model was tested using 25 years of historical exploration data (1949-74) from the Denver basin. From the trend in the discovery rate (the number of discoveries per unit area exhausted), the discovery efficiencies in each class of deposit size were estimated. Using pre-1956 discovery and drilling data, the model accurately predicted the size distribution of discoveries for the 1956-74 period.

PART B: A stochastic model of the discovery process has been developed to predict, using past drilling and discovery data, the distribution of future petroleum deposits in partially explored basins, and the basic mathematical properties of the model have been established. The model has two exogenous parameters, the efficiency of exploration and the effective basin size. The first parameter is the ratio of the probability that an actual exploratory well will make a discovery to the probability that a randomly sited well will make a discovery. The second parameter, the effective basin size, is the area of that part of the basin in which drillers are willing to site wells. Methods for estimating these parameters from locations of past wells and from the sizes and locations of past discoveries were derived, and the properties of estimators of the parameters were studied by simulation.

PART C: This study examines the temporal properties and determinants of petroleum exploration for firms operating in the Denver basin. Expectations associated with the favorability of a specific area are modeled by using distributed lag proxy variables (of previous discoveries) and predictions from a discovery process model. In the second part of the study, a discovery process model is linked with a behavioral well-drilling model in order to predict the supply of new reserves. Results of the study indicate that the positive effects of new discoveries on drilling increase for several periods and then diminish to zero within 2? years after the deposit discovery date. Tests of alternative specifications of the argument of the distributed lag function using alternative minimum size classes of deposits produced little change in the model's explanatory power. This result suggests that, once an exploration play is underway, favorable operator expectations are sustained by the quantity of oil found per time period rather than by the discovery of specific size deposits. When predictions of the value of undiscovered deposits (generated from a discovery process model) were substituted for the expectations variable in models used to explain exploration effort, operator behavior was found to be consistent with these predictions. This result suggests that operators, on the average, were efficiently using information contained in the discovery history of the basin in carrying out their exploration plans.
Comparison of the two approaches to modeling unobservable operator expectations indicates that the two models produced very similar results. The integration of the behavioral well-drilling model and discovery process model to predict the additions to reserves per unit time was successful only when the quarterly predictions were aggregated to annual values. The accuracy of the aggregated predictions was also found to be reasonably robust to errors in predictions from the behavioral well-drilling equation.
NASA Astrophysics Data System (ADS)
Caldararu, Silvia; Purves, Drew W.; Smith, Matthew J.
2017-04-01
Improving international food security under a changing climate and increasing human population will be greatly aided by improving our ability to modify, understand and predict crop growth. What we predominantly have at our disposal are either process-based models of crop physiology or statistical analyses of yield datasets, both of which suffer from various sources of error. In this paper, we present a generic process-based crop model (PeakN-crop v1.0) which we parametrise using a Bayesian model-fitting algorithm with three different data sources: space-based vegetation indices, eddy covariance productivity measurements and regional crop yields. We show that the model parametrised without data, based on prior knowledge of the parameters, can largely capture the observed behaviour, but the data-constrained model greatly improves the model fit and reduces prediction uncertainty. We investigate the extent to which each dataset contributes to the model performance and show that while all data improve on the prior model fit, the satellite-based data and crop yield estimates are particularly important for reducing model error and uncertainty. Despite these improvements, we conclude that there are still significant knowledge gaps, in terms of available data for model parametrisation, but our study can help indicate the necessary data collection to improve our predictions of crop yields and crop responses to environmental changes.
NASA Technical Reports Server (NTRS)
Loos, Alfred C.; Macrae, John D.; Hammond, Vincent H.; Kranbuehl, David E.; Hart, Sean M.; Hasko, Gregory H.; Markus, Alan M.
1993-01-01
A two-dimensional model of the resin transfer molding (RTM) process was developed which can be used to simulate the infiltration of resin into an anisotropic fibrous preform. Frequency dependent electromagnetic sensing (FDEMS) has been developed for in situ monitoring of the RTM process. Flow visualization tests were performed to obtain data which can be used to verify the sensor measurements and the model predictions. Results of the tests showed that FDEMS can accurately detect the position of the resin flow-front during mold filling, and that the model predicted flow-front patterns agreed well with the measured flow-front patterns.
Predicting intensity ranks of peptide fragment ions.
Frank, Ari M
2009-05-01
Accurate modeling of peptide fragmentation is necessary for the development of robust scoring functions for peptide-spectrum matches, which are the cornerstone of MS/MS-based identification algorithms. Unfortunately, peptide fragmentation is a complex process that can involve several competing chemical pathways, which makes it difficult to develop generative probabilistic models that describe it accurately. However, the vast amounts of MS/MS data being generated now make it possible to use data-driven machine learning methods to develop discriminative ranking-based models that predict the intensity ranks of a peptide's fragment ions. We use simple sequence-based features that get combined by a boosting algorithm into models that make peak rank predictions with high accuracy. In an accompanying manuscript, we demonstrate how these prediction models are used to significantly improve the performance of peptide identification algorithms. The models can also be useful in the design of optimal multiple reaction monitoring (MRM) transitions, in cases where there is insufficient experimental data to guide the peak selection process. The prediction algorithm can also be run independently through PepNovo+, which is available for download from http://bix.ucsd.edu/Software/PepNovo.html.
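The data-driven idea described here, learning to predict fragment-ion intensity ranks from simple sequence-based features with a boosting model, can be sketched with a generic regressor. The features and data below are fabricated placeholders, and the published work uses a dedicated boosting-based ranking algorithm, not this off-the-shelf regressor.

```python
# Toy sketch: predict intensity ranks from sequence-derived features
# with a boosted tree ensemble. Everything here is synthetic.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(4)
# per-fragment features: cleavage position, proline flag, basic-residue count, ...
X = rng.normal(size=(2000, 5))
score = X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.3, size=2000)
rank = np.argsort(np.argsort(-score)) + 1         # rank 1 = most intense

model = GradientBoostingRegressor(n_estimators=200, max_depth=3).fit(X, rank)
print(np.round(model.predict(X[:5])).astype(int)) # predicted ranks
```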
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bateman, K. J.; Capson, D. D.
2004-03-29
Argonne National Laboratory (ANL) has developed a process to immobilize waste salt containing fission products, uranium, and transuranic elements as chlorides in a glass-bonded ceramic waste form. This salt was generated in the electrorefining operation used in the electrometallurgical treatment of spent Experimental Breeder Reactor-II (EBR-II) fuel. The ceramic waste process culminates with an elevated temperature operation. The processing conditions used by the furnace, for demonstration scale and production scale operations, are to be developed at Argonne National Laboratory-West (ANL-West). To assist in selecting the processing conditions of the furnace and to reduce the number of costly experiments, a finite difference model was developed to predict the consolidation of the ceramic waste. The model accurately predicted the heating as well as the bulk density of the ceramic waste form. The methodology used to develop the computer model and a comparison of the analysis to experimental data is presented.
A model for predicting field-directed particle transport in the magnetofection process.
Furlani, Edward P; Xue, Xiaozheng
2012-05-01
To analyze the magnetofection process in which magnetic carrier particles with surface-bound gene vectors are attracted to target cells for transfection using an external magnetic field and to obtain a fundamental understanding of the impact of key factors such as particle size and field strength on the gene delivery process. A numerical model is used to study the field-directed transport of the carrier particle-gene vector complex to target cells in a conventional multiwell culture plate system. The model predicts the transport dynamics and the distribution of particle accumulation at the target cells. The impact of several factors that strongly influence gene vector delivery is assessed including the properties of the carrier particles, the strength of the field source, and its extent and proximity relative to the target cells. The study demonstrates that modeling can be used to predict and optimize gene vector delivery in the magnetofection process for novel and conventional in vitro systems.
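The transport dynamics such a model integrates can be roughed out from first principles: the magnetic body force on a magnetizable bead, F = (V*chi/mu0)(B . grad)B in the dilute limit, balanced against Stokes drag, gives a magnetophoretic velocity. All parameter values below are illustrative, not taken from the paper.

```python
# Back-of-envelope estimate of field-directed particle velocity:
# magnetic force on a magnetite-loaded bead vs. Stokes drag.
import numpy as np

mu0 = 4e-7 * np.pi          # vacuum permeability (T m/A)
R = 250e-9                  # particle radius (m)
chi = 1.0                   # effective volume susceptibility (illustrative)
eta = 1e-3                  # medium viscosity (Pa s), ~water
B, dBdz = 0.2, 40.0         # field (T) and gradient (T/m) near the magnet

V = 4.0 / 3.0 * np.pi * R ** 3
F_mag = (V * chi / mu0) * B * dBdz        # 1-D form of (V*chi/mu0)(B . grad)B
v = F_mag / (6 * np.pi * eta * R)         # terminal velocity from drag balance
print(f"F = {F_mag:.2e} N, v = {v * 1e6:.1f} um/s")
```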
Soft sensor modeling based on variable partition ensemble method for nonlinear batch processes
NASA Astrophysics Data System (ADS)
Wang, Li; Chen, Xiangguang; Yang, Kai; Jin, Huaiping
2017-01-01
Batch processes are typically characterized by nonlinearity and system uncertainty; therefore, a conventional single model may be ill-suited. A local-learning soft sensor based on a variable partition ensemble method is developed for the quality prediction of nonlinear and non-Gaussian batch processes. Multiple input variable sets are obtained by bootstrapping and the PMI criterion. Then, multiple local GPR models are developed, one for each local input variable set. When a new test sample arrives, the posterior probability of each best-performing local model is estimated based on Bayesian inference and used to combine the local GPR models to obtain the final prediction result. The proposed soft sensor is demonstrated by application to an industrial fed-batch chlortetracycline fermentation process.
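The local-ensemble idea can be sketched as follows: several GPR models, each trained on a different input-variable subset, are combined through weights reflecting confidence in each model at the new sample. The data and subsets below are synthetic, and a simple inverse-variance rule stands in for the Bayesian posterior-probability weighting described in the paper.

```python
# Sketch of a variable-partition GPR ensemble with confidence weighting.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

rng = np.random.default_rng(5)
X = rng.normal(size=(150, 8))
y = np.sin(X[:, 0]) + 0.5 * X[:, 2] + rng.normal(scale=0.1, size=150)

subsets = [[0, 1, 2], [0, 2, 5], [2, 3, 4]]        # candidate input-variable sets
models = [GaussianProcessRegressor(alpha=1e-2).fit(X[:, s], y) for s in subsets]

x_new = rng.normal(size=(1, 8))
mu, sd = zip(*(m.predict(x_new[:, s], return_std=True)
               for m, s in zip(models, subsets)))
mu, sd = np.array(mu).ravel(), np.array(sd).ravel()

# inverse-variance weights: more certain local models count more
w = (1.0 / sd ** 2) / np.sum(1.0 / sd ** 2)
print(f"combined prediction = {np.dot(w, mu):.3f}, weights = {np.round(w, 2)}")
```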
USDA-ARS's Scientific Manuscript database
The Water Erosion Prediction Project (WEPP) model was originally developed for hillslope and small watershed applications. The model simulates complex interactive processes influencing erosion, such as surface runoff, soil-water changes, vegetation growth and senescence, and snow accumulation and me...
The Rangeland Hydrology and Erosion Model: A dynamic approach for predicting soil loss on rangelands
USDA-ARS's Scientific Manuscript database
In this study we present the improved Rangeland Hydrology and Erosion Model (RHEM V2.3), a process-based erosion prediction tool specific for rangeland application. The article provides the mathematical formulation of the model and parameter estimation equations. Model performance is assessed agains...
Predicting Student Performance in a Collaborative Learning Environment
ERIC Educational Resources Information Center
Olsen, Jennifer K.; Aleven, Vincent; Rummel, Nikol
2015-01-01
Student models for adaptive systems may not model collaborative learning optimally. Past research has either focused on modeling individual learning or for collaboration, has focused on group dynamics or group processes without predicting learning. In the current paper, we adjust the Additive Factors Model (AFM), a standard logistic regression…
Predicting greenhouse gas emissions from beef cattle feedyard manure
USDA-ARS's Scientific Manuscript database
Improved predictive models for nitrous oxide and methane are crucial for assessing the greenhouse gas (GHG) footprint of beef cattle production. Biochemical process-based models to predict GHG from manure rely on information derived from studies on soil and only limited study has been conducted on m...
Demonstration of the Water Erosion Prediction Project (WEPP) internet interface and services
USDA-ARS's Scientific Manuscript database
The Water Erosion Prediction Project (WEPP) model is a process-based FORTRAN computer simulation program for prediction of runoff and soil erosion by water at hillslope profile, field, and small watershed scales. To effectively run the WEPP model and interpret results additional software has been de...
Maltesen, Morten Jonas; van de Weert, Marco; Grohganz, Holger
2012-09-01
Moisture content and aerodynamic particle size are critical quality attributes for spray-dried protein formulations. In this study, spray-dried insulin powders intended for pulmonary delivery were produced applying design-of-experiments methodology. Near infrared spectroscopy (NIR) in combination with preprocessing and multivariate analysis in the form of partial least squares projections to latent structures (PLS) was used to correlate the spectral data with moisture content and aerodynamic particle size measured by a time-of-flight principle. PLS models predicting the moisture content were based on the chemical information of the water molecules in the NIR spectrum. Models yielded prediction errors (RMSEP) between 0.39% and 0.48%, with thermal gravimetric analysis used as the reference method. The PLS models predicting the aerodynamic particle size were based on baseline offset in the NIR spectra and yielded prediction errors between 0.27 and 0.48 μm. The morphology of the spray-dried particles had a significant impact on the predictive ability of the models. Good predictive models could be obtained for spherical particles, with a calibration error (RMSECV) of 0.22 μm, whereas wrinkled particles resulted in much less robust models with a Q² of 0.69. Based on the results of this study, NIR is a suitable tool for process analysis of the spray-drying process and for control of moisture content and particle size, in particular for smooth and spherical particles.
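The PLS workflow described above can be sketched as follows; the spectra, moisture values, and component count below are synthetic placeholders rather than the study's data.

```python
# Illustrative sketch (synthetic data, not the study's spectra) of the
# PLS workflow: regress preprocessed NIR spectra against moisture
# content and report RMSEP on a held-out set.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
wavelengths = 601                          # assumed spectral resolution
spectra = rng.normal(size=(60, wavelengths))
moisture = 2.0 + 0.5 * spectra[:, 300] + 0.05 * rng.normal(size=60)  # % w/w

# PLSRegression mean-centers internally; real work would typically add
# smoothing/derivative preprocessing before the regression step.
Xtr, Xte, ytr, yte = train_test_split(spectra, moisture, random_state=1)
pls = PLSRegression(n_components=3).fit(Xtr, ytr)
rmsep = np.sqrt(np.mean((pls.predict(Xte).ravel() - yte) ** 2))
print(f"RMSEP: {rmsep:.3f} % moisture")
```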
Fast and Accurate Prediction of Stratified Steel Temperature During Holding Period of Ladle
NASA Astrophysics Data System (ADS)
Deodhar, Anirudh; Singh, Umesh; Shukla, Rishabh; Gautham, B. P.; Singh, Amarendra K.
2017-04-01
Thermal stratification of liquid steel in a ladle during the holding period and the teeming operation has a direct bearing on the superheat available at the caster and hence on the caster set points such as casting speed and cooling rates. The changes in the caster set points are typically carried out based on temperature measurements at the end of the tundish outlet. Thermal prediction models provide advance knowledge of the influence of process and design parameters on the steel temperature at various stages. Therefore, they can be used to make accurate decisions about the caster set points in real time. However, this requires both fast and accurate thermal prediction models. In this work, we develop a surrogate model for the prediction of thermal stratification using data extracted from a set of computational fluid dynamics (CFD) simulations, pre-determined using a design-of-experiments technique. A regression method is used to train the predictor. The model predicts the stratified temperature profile instantaneously, for a given set of process parameters such as initial steel temperature, refractory heat content, slag thickness, and holding time. More than 96 pct of the predicted values are within an error range of ±5 K (±5 °C), when compared against corresponding CFD results. Considering its accuracy and computational efficiency, the model can be extended for thermal control of casting operations. This work also sets a benchmark for developing similar thermal models for downstream processes such as tundish and caster.
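A hedged sketch of the surrogate idea: fit a regression to a design-of-experiments table of CFD results so the stratified temperature can then be predicted instantly for new process conditions. The input ranges and mock CFD response below are invented.

```python
# Sketch of a CFD-trained surrogate: a polynomial regression fitted to a
# hypothetical DOE table; the response here is a mock stand-in for CFD
# output, not the paper's data.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures

# Columns: initial steel temperature (K), refractory heat content (a.u.),
# slag thickness (m), holding time (min) -- invented ranges.
rng = np.random.default_rng(2)
X = rng.uniform([1850, 0.2, 0.05, 10], [1900, 1.0, 0.20, 60], size=(80, 4))
T_strat = 1870 - 0.3 * X[:, 3] + 20 * X[:, 1] + rng.normal(0, 1, 80)

surrogate = make_pipeline(PolynomialFeatures(degree=2), LinearRegression())
surrogate.fit(X, T_strat)

# Instantaneous prediction for one new process condition.
print(surrogate.predict([[1880, 0.5, 0.10, 30]]))
```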
Psychological processes in young bullies versus bully‐victims
Poorthuis, Astrid M. G.; Malti, Tina
2017-01-01
Some children who bully others are also victimized themselves (“bully‐victims”) whereas others are not victimized themselves (“bullies”). These subgroups have been shown to differ in their social functioning as early as in kindergarten. Less clear are the motives that underlie the bullying behavior of young bullies and bully‐victims. The present study examined whether bullies have proactive motives for aggression and anticipate feeling happy after victimizing others, whereas bully‐victims have reactive motives for aggression, poor theory of mind skills, and attribute hostile intent to others. This “distinct processes hypothesis” was contrasted with the “shared processes hypothesis,” predicting that bullies and bully‐victims do not differ on these psychological processes. Children (n = 283, age 4–9) were classified as bully, bully‐victim, or noninvolved using peer‐nominations. Theory of mind, hostile intent attributions, and happy victimizer emotions were assessed using standard vignettes and false‐belief tasks; reactive and proactive motives were assessed using teacher‐reports. We tested our hypotheses using Bayesian model selection, enabling us to directly compare the distinct processes model (predicting that bullies and bully‐victims deviate from noninvolved children on different psychological processes) against the shared processes model (predicting that bullies and bully‐victims deviate from noninvolved children on all psychological processes alike). Overall, the shared processes model received more support than the distinct processes model. These results suggest that in early childhood, bullies and bully‐victims have shared, rather than distinct, psychological processes underlying their bullying behavior. PMID:28181256
Samadi, Sara; Vaziri, Behrooz Mahmoodzadeh
2017-07-14
Solid extraction using supercritical fluids is a modern technology that has come into vogue owing to its considerable advantages. In the present article, a new and comprehensive model is presented for predicting the performance and separation yield of the supercritical extraction process. The model is based on partial differential mass balances. In the proposed model, the solid particles are considered twofold: (a) particles with intact structure and (b) particles with destructed structure. A distinct mass transfer coefficient is used for extraction from each class of solid particles to express the different extraction regimes and to evaluate the process accurately (an internal mass transfer coefficient for the intact-structure particles and an external mass transfer coefficient for the destructed-structure particles). To evaluate and validate the proposed model, simulation results were compared with two series of available experimental data for extraction of chamomile extract with supercritical carbon dioxide, showing excellent agreement and indicating the model's strong capability to predict the extraction process precisely. Subsequently, the effects of major parameters of the supercritical extraction process, such as pressure, temperature, supercritical fluid flow rate, and solid particle size, were evaluated. The model can serve as a strong starting point for scientific and experimental applications. Copyright © 2017 Elsevier B.V. All rights reserved.
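The two-regime idea lends itself to a worked example. The lumped two-compartment sketch below is an assumption, not the paper's partial-differential model: solute in destructed-structure particles leaves with a fast (external) coefficient, solute in intact-structure particles with a slow (internal) one.

```python
# A minimal lumped sketch of two-regime supercritical extraction: fast
# external mass transfer from destructed particles, slow internal mass
# transfer from intact particles. All coefficients are hypothetical.
import numpy as np
from scipy.integrate import solve_ivp

k_ext, k_int = 0.15, 0.02     # 1/min, hypothetical coefficients
frac_destructed = 0.4         # fraction of solute in destructed particles

def rhs(t, q):
    q_destr, q_intact = q
    return [-k_ext * q_destr, -k_int * q_intact]

q0 = [frac_destructed, 1.0 - frac_destructed]
sol = solve_ivp(rhs, (0, 120), q0, t_eval=np.linspace(0, 120, 7))
yield_curve = 1.0 - sol.y.sum(axis=0)      # cumulative extraction yield
print(np.round(yield_curve, 3))            # fast initial regime, slow tail
```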
Evaluation of the energy efficiency of enzyme fermentation by mechanistic modeling.
Albaek, Mads O; Gernaey, Krist V; Hansen, Morten S; Stocks, Stuart M
2012-04-01
Modeling biotechnological processes is key to obtaining increased productivity and efficiency. Particularly crucial to successful modeling of such systems is the coupling of the physical transport phenomena and the biological activity in one model. We have applied a model for the expression of cellulosic enzymes by the filamentous fungus Trichoderma reesei and found excellent agreement with experimental data. The most influential factor was demonstrated to be viscosity and its influence on mass transfer. Not surprisingly, the biological model is also shown to have high influence on the model prediction. At different rates of agitation and aeration as well as headspace pressure, we can predict the energy efficiency of oxygen transfer, a key process parameter for economical production of industrial enzymes. An inverse relationship between the productivity and energy efficiency of the process was found. This modeling approach can be used by manufacturers to evaluate the enzyme fermentation process for a range of different process conditions with regard to energy efficiency. Copyright © 2011 Wiley Periodicals, Inc.
Improving orbit prediction accuracy through supervised machine learning
NASA Astrophysics Data System (ADS)
Peng, Hao; Bai, Xiaoli
2018-05-01
Due to the lack of information such as the space environment condition and resident space objects' (RSOs') body characteristics, current orbit predictions grounded solely in physics-based models may fail to achieve the accuracy required for collision avoidance and have already led to satellite collisions. This paper presents a methodology to predict RSOs' trajectories with higher accuracy than current methods. Inspired by machine learning (ML) theory, through which models are learned from large amounts of observed data and prediction is conducted without explicitly modeling space objects and the space environment, the proposed ML approach integrates physics-based orbit prediction algorithms with a learning-based process that focuses on reducing the prediction errors. Using a simulation-based space catalog environment as the test bed, the paper demonstrates three types of generalization capability for the proposed ML approach: (1) the ML model can improve the same RSO's orbit information that is not available during the learning process but shares the same time interval as the training data; (2) the ML model can improve predictions of the same RSO at future epochs; and (3) the ML model based on one RSO can be applied to other RSOs that share some common features.
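The hybrid physics-plus-learning scheme can be illustrated with a toy sketch: a placeholder propagator provides the baseline prediction and a regressor is trained on its residuals. The propagator, features, and "unmodeled dynamics" below are invented stand-ins, not the paper's setup.

```python
# Conceptual sketch of physics-based prediction plus learned error
# correction. The "propagator" is a placeholder; a real system would use
# an actual orbit propagator and tracking data.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(3)

def physics_propagate(state, dt):
    # Placeholder drift model standing in for a physics-based propagator.
    return state + dt * np.array([1.0, 0.5, -0.2])

states = rng.normal(size=(500, 3))
truth = np.array([physics_propagate(s, 1.0) for s in states])
truth += 0.3 * np.sin(states[:, :1])          # invented unmodeled dynamics
baseline = np.array([physics_propagate(s, 1.0) for s in states])

# Learn the baseline's error in one coordinate from the current state.
ml = GradientBoostingRegressor().fit(states, (truth - baseline)[:, 0])
corrected = baseline[:, 0] + ml.predict(states)
print(np.abs(truth[:, 0] - baseline[:, 0]).mean(),   # physics-only error
      np.abs(truth[:, 0] - corrected).mean())        # hybrid error (smaller)
```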
Selection, calibration, and validation of models of tumor growth.
Lima, E A B F; Oden, J T; Hormuth, D A; Yankeelov, T E; Almeida, R C
2016-11-01
This paper presents general approaches for addressing some of the most important issues in predictive computational oncology concerned with developing classes of predictive models of tumor growth: first, the process of developing mathematical models of vascular tumors evolving in the complex, heterogeneous macroenvironment of living tissue; second, the selection of the most plausible models among these classes, given relevant observational data; third, the statistical calibration and validation of models in these classes; and finally, the prediction of key Quantities of Interest (QOIs) relevant to patient survival and the effect of various therapies. The most challenging aspect of this endeavor is that all of these issues often involve confounding uncertainties: in observational data, in model parameters, in model selection, and in the features targeted in the prediction. Our approach can be referred to as "model agnostic" in that no single model is advocated; rather, a general approach is presented that explores powerful mixture-theory representations of tissue behavior while accounting for a range of relevant biological factors, which leads to many potentially predictive models. Representative classes are then identified which provide a starting point for the implementation of the Occam Plausibility Algorithm (OPAL), which enables the modeler to select the most plausible models (for given data) and to determine whether the model is a valid tool for predicting tumor growth and morphology (in vivo). All of these approaches account for uncertainties in the model, the observational data, the model parameters, and the target QOI. We demonstrate these processes by comparing a list of models for tumor growth, including reaction-diffusion models, phase-field models, and models with and without mechanical deformation effects, for glioma growth measured in murine experiments. Examples are provided that exhibit quite acceptable predictions of tumor growth in laboratory animals while demonstrating successful implementations of OPAL.
Gibbons, Frederick X; Houlihan, Amy E; Gerrard, Meg
2009-05-01
A brief overview of theories of health behaviour that are based on the expectancy-value perspective is presented. This approach maintains that health behaviours are the result of a deliberative decision-making process that involves consideration of behavioural options along with anticipated outcomes associated with those options. It is argued that this perspective is effective at explaining and predicting many types of health behaviour, including health-promoting actions (e.g. UV protection, condom use, smoking cessation), but less effective at predicting risky health behaviours, such as unprotected, casual sex, drunk driving or binge drinking. These are behaviours that are less reasoned or premeditated - especially among adolescents. An argument is made for incorporating elements of dual-processing theories in an effort to improve the 'utility' of these models. Specifically, it is suggested that adolescent health behaviour involves both analytic and heuristic processing. Both types of processing are incorporated in the prototype-willingness (prototype) model, which is described in some detail. Studies of health behaviour based on the expectancy-value perspective (e.g. theory of reasoned action) are reviewed, along with studies based on the prototype model. These two sets of studies together suggest that the dual-processing perspective, in general, and the prototype model, in particular, add to the predictive validity of expectancy-value models for predicting adolescent health behaviour. Research and interventions that incorporate elements of dual-processing and elements of expectancy-value are more effective at explaining and changing adolescent health behaviour than are those based on expectancy-value theories alone.
The Use of Particle/Substrate Material Models in Simulation of Cold-Gas Dynamic-Spray Process
NASA Astrophysics Data System (ADS)
Rahmati, Saeed; Ghaei, Abbas
2014-02-01
Cold spray is a coating deposition method in which solid particles are accelerated toward the substrate by a low-temperature supersonic gas flow. Many numerical studies have been carried out in the literature to study this process in more depth. Despite the limitations of the Johnson-Cook plasticity model in predicting material behavior at high strain rates, it is the model most frequently used in cold spray simulations. This research was therefore devoted to comparing the performance of different material models in the simulation of the cold spray process. Six different material models, appropriate for high strain-rate plasticity, were employed in finite element simulations of the cold spray process for copper. The results showed that the material model had a considerable effect on the predicted deformed shapes.
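For reference, the Johnson-Cook flow stress mentioned above has the closed form sigma = (A + B*eps^n) * (1 + C*ln(strain-rate ratio)) * (1 - T*^m). The sketch below evaluates it with literature-style constants for copper that are illustrative, not the values used in the paper.

```python
# The Johnson-Cook flow stress as a worked formula. Parameter values are
# illustrative literature-style numbers for copper, not the paper's.
import numpy as np

def johnson_cook_stress(eps_p, eps_rate, T,
                        A=90e6, B=292e6, n=0.31, C=0.025, m=1.09,
                        eps_rate0=1.0, T_ref=298.0, T_melt=1356.0):
    """Flow stress (Pa) from plastic strain, strain rate (1/s), and
    temperature (K): strain hardening x rate sensitivity x thermal
    softening, with homologous temperature T* = (T-Tref)/(Tmelt-Tref)."""
    T_star = (T - T_ref) / (T_melt - T_ref)
    return ((A + B * eps_p ** n)
            * (1.0 + C * np.log(eps_rate / eps_rate0))
            * (1.0 - T_star ** m))

print(johnson_cook_stress(eps_p=0.5, eps_rate=1e5, T=600.0) / 1e6, "MPa")
```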
NAPR: a Cloud-Based Framework for Neuroanatomical Age Prediction.
Pardoe, Heath R; Kuzniecky, Ruben
2018-01-01
The availability of cloud computing services has enabled the widespread adoption of the "software as a service" (SaaS) approach for software distribution, which utilizes network-based access to applications running on centralized servers. In this paper we apply the SaaS approach to neuroimaging-based age prediction. Our system, named "NAPR" (Neuroanatomical Age Prediction using R), provides access to predictive modeling software running on a persistent cloud-based Amazon Web Services (AWS) compute instance. The NAPR framework allows external users to estimate the age of individual subjects using cortical thickness maps derived from their own locally processed T1-weighted whole brain MRI scans. As a demonstration of the NAPR approach, we have developed two age prediction models that were trained using healthy control data from the ABIDE, CoRR, DLBS and NKI Rockland neuroimaging datasets (total N = 2367, age range 6-89 years). The provided age prediction models were trained using (i) relevance vector machines and (ii) Gaussian processes machine learning methods applied to cortical thickness surfaces obtained using Freesurfer v5.3. We believe that this transparent approach to out-of-sample evaluation and comparison of neuroimaging age prediction models will facilitate the development of improved age prediction models and allow for robust evaluation of the clinical utility of these methods.
Gabriel, Alonzo A; Cayabyab, Jochelle Elysse C; Tan, Athalie Kaye L; Corook, Mark Lester F; Ables, Errol John O; Tiangson-Bayaga, Cecile Leah P
2015-06-15
A predictive response surface model for the influences of product (soluble solids and titratable acidity) and process (temperature and heating time) parameters on the degradation of ascorbic acid (AA) in heated simulated fruit juices (SFJs) was established. Physicochemical property ranges of freshly squeezed and processed juices, and previously established decimal reduction times of Escherichia coli O157:H7 at different heating temperatures, were used in establishing a Central Composite Design of Experiment that determined the combinations of product and process variables used in model building. Only the individual linear effects of temperature and heating time significantly (P<0.05) affected AA reduction (%AAr). Validating systems either over- or underestimated actual %AAr, with bias factors of 0.80-1.20. However, all validating systems still showed acceptable predictive efficacy, with accuracy factors of 1.00-1.26. The model may be useful in establishing unique process schedules for specific products, for the simultaneous control and improvement of food safety and quality. Copyright © 2015 Elsevier Ltd. All rights reserved.
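The bias and accuracy factors quoted above are simple to compute; the sketch below does so for hypothetical observed and predicted %AAr values, following the usual logarithmic definitions used in predictive microbiology.

```python
# Sketch of the validation metrics above: bias factor (Bf) and accuracy
# factor (Af), computed for hypothetical observed/predicted %AAr values.
import numpy as np

observed = np.array([12.0, 18.5, 25.0, 31.2])    # hypothetical %AAr
predicted = np.array([13.1, 17.0, 27.5, 30.0])

ratios = np.log10(predicted / observed)
bias_factor = 10 ** ratios.mean()                # >1 over-, <1 underestimates
accuracy_factor = 10 ** np.abs(ratios).mean()    # 1.0 means perfect agreement

print(f"Bf = {bias_factor:.2f}, Af = {accuracy_factor:.2f}")
```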
Thermodynamic model effects on the design and optimization of natural gas plants
DOE Office of Scientific and Technical Information (OSTI.GOV)
Diaz, S.; Zabaloy, M.; Brignole, E.A.
1999-07-01
The design and optimization of natural gas plants is carried out on the basis of process simulators. The physical property package is generally based on cubic equations of state. Phase equilibrium conditions, thermodynamic functions, equilibrium phase separations, work, and heat are computed by rigorous thermodynamics. The aim of this work is to analyze the NGL turboexpansion process and identify the process computations that are most sensitive to the accuracy of model predictions. Three equations of state, PR, SRK, and the Peneloux modification, are used to study the effect of property predictions on process calculations and plant optimization. It is shown that turboexpander plants have moderate sensitivity with respect to phase equilibrium computations, but higher accuracy is required for the prediction of enthalpy and turboexpansion work. The effect of modeling CO2 solubility is also critical in mixtures with high CO2 content in the feed.
Prediction of microalgae hydrothermal liquefaction products from feedstock biochemical composition
DOE Office of Scientific and Technical Information (OSTI.GOV)
Leow, Shijie; Witter, John R.; Vardon, Derek R.
2015-05-11
Hydrothermal liquefaction (HTL) uses water under elevated temperatures and pressures (200–350 °C, 5–20 MPa) to convert biomass into liquid “biocrude” oil. Despite extensive reports on factors influencing microalgae cell composition during cultivation and separate reports on HTL products linked to cell composition, the field still lacks a quantitative model to predict HTL conversion product yield and qualities from feedstock biochemical composition; the tailoring of microalgae feedstock for downstream conversion is a unique and critical aspect of microalgae biofuels that must be leveraged for optimization of the whole process. This study developed predictive relationships for HTL biocrude yield and other conversion product characteristics based on HTL of Nannochloropsis oculata batches harvested with a wide range of compositions (23–59% dw lipids, 58–17% dw proteins, 12–22% dw carbohydrates) and a defatted batch (0% dw lipids, 75% dw proteins, 19% dw carbohydrates). HTL biocrude yield (33–68% dw) and carbon distribution (49–83%) increased in proportion to the fatty acid (FA) content. A component additivity model (predicting biocrude yield from lipid, protein, and carbohydrate content) was more accurate in predicting literature yields for diverse microalgae species than previous additivity models derived from model compounds. FA profiling of the biocrude product showed strong links to the initial feedstock FA profile of the lipid component, demonstrating that HTL acts as a water-based extraction process for FAs; the remaining non-FA structural components could be represented using the defatted batch. These findings were used to introduce a new FA-based model that predicts biocrude oil yields along with other critical parameters, and is capable of adjusting for the wide variations in HTL methodology and microalgae species through the defatted batch. Lastly, the FA model was linked to an upstream cultivation model (Phototrophic Process Model), providing for the first time an integrated modeling framework to overcome a critical barrier to microalgae-derived HTL biofuels and enable predictive analysis of the overall microalgal-to-biofuel process.
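The component additivity idea can be written as a one-line formula, as in the sketch below; the per-component coefficients are hypothetical placeholders rather than the study's fitted values.

```python
# Worked sketch of a component additivity model: biocrude yield as a
# weighted sum of feedstock lipid, protein, and carbohydrate fractions.
# Coefficients are hypothetical, not the study's fitted values.
def biocrude_yield_dw(lipid, protein, carb,
                      c_lipid=0.95, c_protein=0.40, c_carb=0.15):
    """Predicted biocrude yield (fraction dw) from composition (dw
    fractions); coefficients are per-component conversion factors."""
    return c_lipid * lipid + c_protein * protein + c_carb * carb

# Example: a batch with 40% lipids, 45% proteins, 15% carbohydrates (dw).
print(f"{100 * biocrude_yield_dw(0.40, 0.45, 0.15):.0f}% dw biocrude")
```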
Are atmospheric updrafts a key to unlocking climate forcing and sensitivity?
DOE Office of Scientific and Technical Information (OSTI.GOV)
Donner, Leo J.; O'Brien, Travis A.; Rieger, Daniel
2016-10-20
Both climate forcing and climate sensitivity persist as stubborn uncertainties limiting the extent to which climate models can provide actionable scientific scenarios for climate change. A key, explicit control on cloud–aerosol interactions, the largest uncertainty in climate forcing, is the vertical velocity of cloud-scale updrafts. Model-based studies of climate sensitivity indicate that convective entrainment, which is closely related to updraft speeds, is an important control on climate sensitivity. Updraft vertical velocities also drive many physical processes essential to numerical weather prediction. Vertical velocities and their role in atmospheric physical processes have been given very limited attention in models for climate and numerical weather prediction. The relevant physical scales range down to tens of meters and are thus frequently sub-grid and require parameterization. Many state-of-science convection parameterizations provide mass fluxes without specifying vertical velocities. Scale-dependent resolved vertical velocities in climate models may capture this behavior, but it has not been accounted for when parameterizing cloud and precipitation processes in current models. New observations of convective vertical velocities offer a potentially promising path toward developing process-level cloud models and parameterizations for climate and numerical weather prediction. Taking account of the scale dependence of resolved vertical velocities offers a path to matching cloud-scale physical processes and their driving dynamics more realistically, with a prospect of reduced uncertainty in both climate forcing and sensitivity.
Dynamic rain fade compensation techniques for the advanced communications technology satellite
NASA Technical Reports Server (NTRS)
Manning, Robert M.
1992-01-01
The dynamic and composite nature of propagation impairments that are incurred on earth-space communications links at frequencies in and above the 30/20 GHz Ka band necessitate the use of dynamic statistical identification and prediction processing of the fading signal in order to optimally estimate and predict the levels of each of the deleterious attenuation components. Such requirements are being met in NASA's Advanced Communications Technology Satellite (ACTS) project by the implementation of optimal processing schemes derived through the use of the ACTS Rain Attenuation Prediction Model and nonlinear Markov filtering theory. The ACTS Rain Attenuation Prediction Model discerns climatological variations on the order of 0.5 deg in latitude and longitude in the continental U.S. The time-dependent portion of the model gives precise availability predictions for the 'spot beam' links of ACTS. However, the structure of the dynamic portion of the model, which yields performance parameters such as fade duration probabilities, is isomorphic to the state-variable approach of stochastic control theory and is amenable to the design of such statistical fade processing schemes which can be made specific to the particular climatological location at which they are employed.
Preferential sampling and Bayesian geostatistics: Statistical modeling and examples.
Cecconi, Lorenzo; Grisotto, Laura; Catelan, Dolores; Lagazio, Corrado; Berrocal, Veronica; Biggeri, Annibale
2016-08-01
Preferential sampling refers to any situation in which the spatial process and the sampling locations are not stochastically independent. In this paper, we present two examples of geostatistical analysis in which the usual assumption of stochastic independence between the point process and the measurement process is violated. To account for preferential sampling, we specify a flexible and general Bayesian geostatistical model that includes a shared spatial random component. We apply the proposed model to two different case studies that allow us to highlight three different modeling and inferential aspects of geostatistical modeling under preferential sampling: (1) continuous or finite spatial sampling frame; (2) underlying causal model and relevant covariates; and (3) inferential goals related to mean prediction surface or prediction uncertainty. © The Author(s) 2016.
Prediction in the Processing of Repair Disfluencies
Lowder, Matthew W.; Ferreira, Fernanda
2015-01-01
Imagine a speaker who says "Turn left, uh I mean…" Before hearing the repair, the listener is likely to anticipate the word "right" based on the context, including the reparandum "left." Thus, even though the reparandum is not intended as part of the utterance, the listener uses it as information to predict the repair. The issue we explore in this article is how prediction operates in disfluency contexts. We begin by describing the Overlay model of disfluency comprehension, which assumes that the listener identifies a reparandum as such only after a repair is encountered which creates a local ungrammaticality. The Overlay model also allows the reparandum to influence subsequent processing, because the reparandum is not deleted from the final representation of the sentence. A somewhat different model can be developed which assumes a more active, anticipatory process for resolving repair disfluencies. On this model, the listener might predict the likely repair when the speaker becomes disfluent, or even identify a reparandum if the word or word string seems inconsistent with the speaker's intention. Our proposal is that the prediction can be made using the same mechanism involved in the processing of contrast, in which a listener uses contrastive prominence to generate likely alternates (the contrast set). We suggest that these two approaches to disfluency processing are not inconsistent: Successful repair processing requires listeners to use statistical and linguistic evidence to identify a reparandum and to integrate the repair, and the lingering of the reparandum is due to the coexistence in working memory of the reparandum, the repair, and unselected members of the contrast set. PMID:26878026
Milquez-Sanabria, Harvey; Blanco-Cocom, Luis; Alzate-Gaviria, Liliana
2016-10-03
Agro-industrial wastes are an energy source for different industries. However, their application has not reached small industries. Previous and current research on the acidogenic phase of two-phase anaerobic digestion processes deals particularly with process optimization of acid-phase reactors operating with a wide variety of substrates, both soluble and complex in nature. Mathematical models for anaerobic digestion have been developed to understand and improve the efficient operation of the process. At present, linear models have been developed with the advantages of requiring less data, predicting future behavior, and updating when a new set of data becomes available. The aim of this research was to contribute to the reduction of organic solid waste, generate biogas, and develop a simple but accurate mathematical model to predict the behavior of the UASB reactor. The two reactors were kept separate for 14 days, during which hydrolytic and acetogenic bacteria broke down onion waste and produced and accumulated volatile fatty acids. On day 14, the two reactors were coupled and the system was operated for 16 more days. The biogas and methane yields and the volatile solids reduction were 0.6 ± 0.05 m³ (kg VS removed)⁻¹, 0.43 ± 0.06 m³ (kg VS removed)⁻¹, and 83.5 ± 9.8%, respectively. The model showed good prediction of all defined process parameters; the maximum error between experimental and predicted values was 1.84%, for the alkalinity profile. A linear predictive adaptive model for anaerobic digestion of onion waste in a two-stage process was determined under batch-fed conditions. The organic loading rate (OLR) was maintained constant for the entire operation by modifying the feed of hydrolysis-reactor effluent to the UASB reactor. This condition avoids intoxication of the UASB reactor and also limits external buffer addition.
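One common way to realize a linear model that "updates when a new set of data becomes available" is recursive least squares; the sketch below is an assumed illustration of that idea, not necessarily the authors' formulation.

```python
# Assumed illustration: a recursive least squares (RLS) linear model that
# updates its coefficients as each new observation arrives.
import numpy as np

class RecursiveLeastSquares:
    def __init__(self, n_features, lam=0.99):
        self.theta = np.zeros(n_features)      # model coefficients
        self.P = 1e3 * np.eye(n_features)      # inverse covariance estimate
        self.lam = lam                         # forgetting factor

    def update(self, x, y):
        x = np.asarray(x, dtype=float)
        k = self.P @ x / (self.lam + x @ self.P @ x)    # gain vector
        self.theta += k * (y - x @ self.theta)          # correct prediction error
        self.P = (self.P - np.outer(k, x @ self.P)) / self.lam

    def predict(self, x):
        return np.asarray(x, dtype=float) @ self.theta

# Example: track a mock process response from two measurements.
rls = RecursiveLeastSquares(n_features=2)
for t in range(100):
    x = np.array([1.0, np.sin(0.1 * t)])    # intercept + measured signal
    y = 0.6 + 0.2 * x[1]                    # "true" process response
    rls.update(x, y)
print(rls.theta)                            # approaches [0.6, 0.2]
```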
Computational intelligence models to predict porosity of tablets using minimum features
Khalid, Mohammad Hassan; Kazemi, Pezhman; Perez-Gandarillas, Lucia; Michrafy, Abderrahim; Szlęk, Jakub; Jachowicz, Renata; Mendyk, Aleksander
2017-01-01
The effects of different formulations and manufacturing process conditions on the physical properties of a solid dosage form are of importance to the pharmaceutical industry. It is vital to have an in-depth understanding of the material properties and governing parameters of its processes in response to different formulations. Understanding these aspects allows tighter control of the process, leading to implementation of quality-by-design (QbD) practices. Computational intelligence (CI) offers an opportunity to create empirical models that can be used to describe the system and predict future outcomes in silico. CI models can help explore the behavior of input parameters, unlocking deeper understanding of the system. This research presents CI models to predict the porosity of tablets created from roll-compacted binary mixtures, which were milled and compacted under systematically varying conditions. CI models were created using tree-based methods, artificial neural networks (ANNs), and symbolic regression, trained on an experimental data set and screened using root-mean-square error (RMSE) scores. The experimental data comprised the proportion of microcrystalline cellulose (MCC) (in percent), granule size fraction (in micrometers), and die compaction force (in kilonewtons) as inputs and porosity as the output. The resulting models show impressive generalization ability, with ANNs (normalized root-mean-square error [NRMSE] = 1%) and symbolic regression (NRMSE = 4%) as the best-performing methods, also exhibiting reliable predictive behavior when presented with a challenging external validation data set (best achieved by symbolic regression: NRMSE = 3%). Symbolic regression demonstrates the transition from the black-box modeling paradigm to more transparent predictive models. The predictive performance and feature selection behavior of the CI models hint at the most important variables within this factor space. PMID:28138223
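The regression setup can be sketched with the three stated inputs and an NRMSE score; the data, network size, and response coefficients below are synthetic placeholders.

```python
# Hedged sketch of the tablet-porosity regression: three inputs (MCC %,
# granule size, compaction force), a small neural network, and NRMSE
# scoring. Data are synthetic placeholders, not the study's.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(4)
X = np.column_stack([rng.uniform(0, 100, 300),    # MCC proportion (%)
                     rng.uniform(90, 500, 300),   # granule size (um)
                     rng.uniform(5, 25, 300)])    # compaction force (kN)
porosity = 0.35 - 0.006 * X[:, 2] + 0.0004 * X[:, 0] \
           + rng.normal(0, 0.005, 300)            # mock response

Xtr, Xte, ytr, yte = train_test_split(X, porosity, random_state=4)
ann = make_pipeline(StandardScaler(),
                    MLPRegressor(hidden_layer_sizes=(16,),
                                 max_iter=5000, random_state=4))
ann.fit(Xtr, ytr)
pred = ann.predict(Xte)
nrmse = np.sqrt(np.mean((pred - yte) ** 2)) / (yte.max() - yte.min())
print(f"NRMSE = {100 * nrmse:.1f}%")
```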
Experimentally validated mathematical model of analyte uptake by permeation passive samplers.
Salim, F; Ioannidis, M; Górecki, T
2017-11-15
A mathematical model describing the sampling process in a permeation-based passive sampler was developed and evaluated numerically. The model was applied to the Waterloo Membrane Sampler (WMS), which employs a polydimethylsiloxane (PDMS) membrane as a permeation barrier, and an adsorbent as a receiving phase. Samplers of this kind are used for sampling volatile organic compounds (VOC) from air and soil gas. The model predicts the spatio-temporal variation of sorbed and free analyte concentrations within the sampler components (membrane, sorbent bed and dead volume), from which the uptake rate throughout the sampling process can be determined. A gradual decline in the uptake rate during the sampling process is predicted, which is more pronounced when sampling higher concentrations. Decline of the uptake rate can be attributed to diminishing analyte concentration gradient within the membrane, which results from resistance to mass transfer and the development of analyte concentration gradients within the sorbent bed. The effects of changing the sampler component dimensions on the rate of this decline in the uptake rate can be predicted from the model. Performance of the model was evaluated experimentally for sampling of toluene vapors under controlled conditions. The model predictions proved close to the experimental values. The model provides a valuable tool to predict changes in the uptake rate during sampling, to assign suitable exposure times at different analyte concentration levels, and to optimize the dimensions of the sampler in a manner that minimizes these changes during the sampling period.
NASA Astrophysics Data System (ADS)
Wang, Zhao-Qiang; Hu, Chang-Hua; Si, Xiao-Sheng; Zio, Enrico
2018-02-01
Current degradation modeling and remaining useful life prediction studies share a common assumption that degrading systems are not maintained or are maintained perfectly (i.e., restored to an as-good-as-new state). This paper concerns the issues of how to model the degradation process and predict the remaining useful life of degrading systems subjected to imperfect maintenance activities, which can restore the health condition of a degrading system to any degradation level between as-good-as new and as-bad-as old. Toward this end, a nonlinear model driven by a Wiener process is first proposed to characterize the degradation trajectory of a degrading system subjected to imperfect maintenance, where negative jumps are incorporated to quantify the influence of imperfect maintenance activities on the system's degradation. Then, the probability density function of the remaining useful life is derived analytically by a space-scale transformation, i.e., transforming the constructed degradation model with negative jumps crossing a constant threshold into a Wiener process model crossing a random threshold. To implement the proposed method, unknown parameters in the degradation model are estimated by maximum likelihood estimation. Finally, the proposed degradation modeling and remaining useful life prediction method is applied to a practical case of draught fans, a class of mechanical systems used in steel mills. The results reveal that, for a degrading system subjected to imperfect maintenance, the proposed method obtains more accurate remaining useful life predictions than the benchmark model in the literature.
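A Monte-Carlo sketch of the degradation idea: a drifted Wiener process with negative jumps at imperfect-maintenance times, with lifetime taken as first passage over a threshold. All parameter values are invented for illustration; the paper derives the remaining-useful-life density analytically rather than by simulation.

```python
# Monte-Carlo sketch (invented parameters): Wiener-process degradation
# with negative jumps at maintenance times; lifetime = first passage of
# the failure threshold.
import numpy as np

rng = np.random.default_rng(5)
mu, sigma, dt = 0.05, 0.10, 1.0     # drift, diffusion, time step
threshold = 10.0                    # failure threshold
maintenance_times = {100, 200}      # scheduled imperfect maintenance
jump_mean = 2.0                     # mean size of the negative jump

def simulate_life():
    x, t = 0.0, 0
    while x < threshold and t < 10_000:
        x += mu * dt + sigma * np.sqrt(dt) * rng.normal()  # Wiener increment
        t += 1
        if t in maintenance_times:
            # Imperfect maintenance: a random negative jump restores the
            # system to between as-good-as-new and as-bad-as-old.
            x = max(x - rng.exponential(jump_mean), 0.0)
    return t

lives = [simulate_life() for _ in range(2000)]
print(f"mean simulated lifetime: {np.mean(lives):.0f} time units")
```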
Multi-Hypothesis Modelling Capabilities for Robust Data-Model Integration
NASA Astrophysics Data System (ADS)
Walker, A. P.; De Kauwe, M. G.; Lu, D.; Medlyn, B.; Norby, R. J.; Ricciuto, D. M.; Rogers, A.; Serbin, S.; Weston, D. J.; Ye, M.; Zaehle, S.
2017-12-01
Large uncertainty is often inherent in model predictions due to imperfect knowledge of how to describe the mechanistic processes (hypotheses) that a model is intended to represent. Yet this model hypothesis uncertainty (MHU) is often overlooked or informally evaluated, as methods to quantify and evaluate MHU are limited. MHU increases as models become more complex, because each additional process added to a model brings inherent MHU as well as parametric uncertainty. With the current trend of adding more processes to Earth System Models (ESMs), we are adding uncertainty that can be quantified for parameters but not for MHU. Model inter-comparison projects do allow for some consideration of hypothesis uncertainty, but in an ad hoc and non-independent fashion. This has stymied efforts to evaluate ecosystem models against data and to interpret the results mechanistically, because it is not simple to determine exactly why a model produces the results it does or to identify which model assumptions are key: models combine many sub-systems and processes, each of which may be conceptualised and represented mathematically in various ways. We present a novel modelling framework, the multi-assumption architecture and testbed (MAAT), that automates the combination, generation, and execution of a model ensemble built with different representations of process. We will present the argument that multi-hypothesis modelling needs to be considered in conjunction with other capabilities (e.g. the Predictive Ecosystem Analyser, PEcAn) and statistical methods (e.g. sensitivity analysis, data assimilation) to aid efforts in robust data-model integration and to enhance our predictive understanding of biological systems.
Processing demands in belief-desire reasoning: inhibition or general difficulty?
Friedman, Ori; Leslie, Alan M
2005-05-01
Most 4-year-olds can predict the behavior of a person who wants an object but is mistaken about its location. More difficult is predicting behavior when the person is mistaken about location and wants to avoid the object. We tested between two explanations for children's difficulties with avoidance false belief: the Selection Processing model of inhibitory processing and a General Difficulty account. Children were presented with a false belief task and a control task, in which belief attribution was as difficult as in the false belief task. Predicting behavior in light of the character's desire to avoid the object added more difficulty in the false belief task. This finding is consistent with the Selection Processing model, but not with the General Difficulty account.
NASA Astrophysics Data System (ADS)
Yao, Yao
2012-05-01
Hydraulic fracturing technology is widely used within the oil and gas industry for both waste injection and unconventional gas production wells. It is essential to predict the behavior of hydraulic fractures accurately based on an understanding of the fundamental mechanisms. The prevailing approach for hydraulic fracture modeling continues to rely on computational methods based on Linear Elastic Fracture Mechanics (LEFM). Generally, these methods give reasonable predictions for hard-rock hydraulic fracture processes, but they have inherent limitations, especially when fluid injection is performed in soft rock/sand or other non-conventional formations, where they typically give very conservative predictions of fracture geometry and inaccurate estimates of the required fracture pressure. One reason the LEFM-based methods fail to give accurate predictions for these materials is that the fracture process zone ahead of the crack tip and the softening effect should not be neglected in ductile rock fracture analysis. A 3D pore pressure cohesive zone model has been developed and applied to predict hydraulic fracturing under fluid injection. The cohesive zone method is a numerical tool developed to model crack initiation and growth in quasi-brittle materials, considering the material softening effect. The pore pressure cohesive zone model has been applied to investigate hydraulic fractures with different rock properties. The hydraulic fracture predictions for a three-layer water injection case have been compared among the pore pressure cohesive zone model with revised parameters, a LEFM-based pseudo-3D model, a Perkins-Kern-Nordgren (PKN) model, and an analytical solution. Based on the size of the fracture process zone and its effect on crack extension in ductile rock, the fundamental mechanical difference between LEFM and cohesive fracture mechanics-based methods is discussed. An effective fracture toughness method has been proposed to account for the effect of the fracture process zone on ductile rock fracture.
Brand, Matthias; Schiebener, Johannes; Pertl, Marie-Theres; Delazer, Margarete
2014-01-01
Recent models on decision making under risk conditions have suggested that numerical abilities are important ingredients of advantageous decision-making performance, but empirical evidence is still limited. The results of our first study show that logical reasoning and basic mental calculation capacities predict ratio processing and that ratio processing predicts decision making under risk. In the second study, logical reasoning together with executive functions predicted probability processing (numeracy and probability knowledge), and probability processing predicted decision making under risk. These findings suggest that increasing an individual's understanding of ratios and probabilities should lead to more advantageous decisions under risk conditions.
NASA Astrophysics Data System (ADS)
Neill, Aaron; Reaney, Sim
2015-04-01
Fully-distributed, physically-based rainfall-runoff models attempt to capture some of the complexity of the runoff processes that operate within a catchment, and have been used to address a variety of issues including water quality and the effect of climate change on flood frequency. Two key issues are prevalent, however, which call into question the predictive capability of such models. The first is parameter equifinality, which can be responsible for large amounts of uncertainty. The second is whether such models make the right predictions for the right reasons: are the processes operating within a catchment correctly represented, or do the predictive abilities of these models result only from the calibration process? The use of additional data sources, such as environmental tracers, has been shown to help address both of these issues, by allowing multi-criteria model calibration to be undertaken and by permitting a greater understanding of the processes operating in a catchment, and hence a more thorough evaluation of how well catchment processes are represented in a model. Using discharge and oxygen-18 data sets, the ability of the fully-distributed, physically-based CRUM3 model to represent the runoff processes in three sub-catchments in Cumbria, NW England has been evaluated. These catchments (Morland, Dacre and Pow) are part of the River Eden demonstration test catchment project. The oxygen-18 data set was first used to derive transit-time distributions and mean residence times of water for each of the catchments, to gain an integrated overview of the types of processes operating. A generalised likelihood uncertainty estimation procedure was then used to calibrate the CRUM3 model for each catchment based on a single discharge data set from each catchment. Transit-time distributions and mean residence times of water obtained from the model using the top 100 behavioural parameter sets for each catchment were then compared to those derived from the oxygen-18 data to see how well the model captured catchment dynamics. The value of incorporating the oxygen-18 data set, as well as discharge data sets from multiple rather than single gauging stations in each catchment, into the calibration process to improve the predictive capability of the model was then investigated. This was achieved by assessing how much the identifiability of the model parameters and the ability of the model to represent the runoff processes operating in each catchment improved with the inclusion of the additional data sets, relative to the likely costs of obtaining the data sets themselves.
NASA Astrophysics Data System (ADS)
Del Raye, G.; Weng, K.
2011-12-01
Ocean acidification affects organisms on a biochemical scale, yet its societal impacts manifest from changes that propagate through entire populations. Successful forecasting of the effects of ocean acidification therefore depends on at least two steps: (1) deducing systemic physiology based on subcellular stresses and (2) scaling individual physiology up to ecosystem processes. Predictions that are based on known biological processes (process-based models) may fare better than purely statistical models in both these steps because the latter are less robust to novel environmental conditions. Here we present a process-based model that uses temperature, pO2, and pCO2 to predict maximal aerobic scope in Atlantic cod. Using this model, we show that (i) experimentally-derived physiological parameters are sufficient to capture the response of cod aerobic scope to temperature and oxygen, and (ii) subcellular pH effects can be used to predict the systemic physiological response of cod to an acidified ocean. We predict that acute pH stress (on a scale of hours) could limit the mobility of Atlantic cod during diel vertical migration across a pCO2 gradient, promoting habitat compression. Finally, we use a global sensitivity analysis to identify opportunities for the improvement of model uncertainty as well as some physiological adaptations that could mitigate climate stresses on cod in the future.
Wang, Jie-Sheng; Han, Shuang
2015-01-01
For predicting the key technology indicators (concentrate grade and tailings recovery rate) of the flotation process, a feed-forward neural network (FNN) based soft-sensor model optimized by a hybrid algorithm combining particle swarm optimization (PSO) and the gravitational search algorithm (GSA) is proposed. Although GSA has strong optimization capability, it converges slowly and easily falls into local optima. In this paper, the velocity and position vectors of GSA are therefore adjusted by the PSO algorithm to improve its convergence speed and prediction accuracy. Finally, the proposed hybrid algorithm is adopted to optimize the parameters of the FNN soft-sensor model. Simulation results show that the model has better generalization and prediction accuracy for the concentrate grade and tailings recovery rate, meeting the online soft-sensor requirements of real-time control in the flotation process. PMID:26583034
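The hybrid update can be sketched as follows, in the spirit of published PSO-GSA hybrids: a GSA-style gravitational acceleration term is blended with a PSO-style pull toward the global best. The toy objective stands in for the FNN soft-sensor error, and all constants are placeholders.

```python
# Toy sketch of a hybrid PSO-GSA step: gravitational acceleration (GSA)
# plus a pull toward the global best (PSO). Objective and constants are
# placeholders; the paper minimizes the FNN soft-sensor error instead.
import numpy as np

rng = np.random.default_rng(6)
n, dim, w, c1, c2, G = 20, 5, 0.7, 1.5, 1.5, 1.0   # G would normally decay
X = rng.uniform(-5, 5, (n, dim))
V = np.zeros((n, dim))
fit = lambda X: (X ** 2).sum(axis=1)               # stand-in objective

for it in range(100):
    f = fit(X)
    gbest = X[f.argmin()]
    # GSA part: masses from normalized fitness, acceleration from pulls.
    m = (f.max() - f + 1e-12) / (f.max() - f.min() + 1e-12)
    M = m / m.sum()
    acc = np.zeros_like(X)
    for i in range(n):
        diff = X - X[i]
        dist = np.linalg.norm(diff, axis=1) + 1e-12
        acc[i] = (G * rng.random((n, 1)) * M[:, None] * diff
                  / dist[:, None]).sum(axis=0)
    # Hybrid velocity: inertia + gravitational term + pull toward gbest.
    V = (w * V + c1 * rng.random((n, dim)) * acc
         + c2 * rng.random((n, dim)) * (gbest - X))
    X = X + V

print(fit(X).min())   # should shrink toward 0 for this toy objective
```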
Chill Down Process of Hydrogen Transport Pipelines
NASA Technical Reports Server (NTRS)
Mei, Renwei; Klausner, James
2006-01-01
A pseudo-steady model has been developed to predict the chilldown history of pipe wall temperature in the horizontal transport pipeline for cryogenic fluids. A new film boiling heat transfer model is developed by incorporating the stratified flow structure for cryogenic chilldown. A modified nucleate boiling heat transfer correlation for cryogenic chilldown process inside a horizontal pipe is proposed. The efficacy of the correlations is assessed by comparing the model predictions with measured values of wall temperature in several azimuthal positions in a well controlled experiment by Chung et al. (2004). The computed pipe wall temperature histories match well with the measured results. The present model captures important features of thermal interaction between the pipe wall and the cryogenic fluid, provides a simple and robust platform for predicting pipe wall chilldown history in long horizontal pipe at relatively low computational cost, and builds a foundation to incorporate the two-phase hydrodynamic interaction in the chilldown process.
Thermoplastic matrix composite processing model
NASA Technical Reports Server (NTRS)
Dara, P. H.; Loos, A. C.
1985-01-01
The effects of the processing parameters pressure, temperature, and time on the quality of continuous graphite fiber reinforced thermoplastic matrix composites were quantitatively assessed by defining the extent to which intimate contact and bond formation occurred at successive ply interfaces. Two models are presented that predict the extents to which the ply interfaces have achieved intimate contact and cohesive strength. The models are based on experimental observation of compression molded laminates and neat resin conditions, respectively. The theory of autohesion (or self-diffusion) is identified as the mechanism by which the plies bond to themselves. Theoretical predictions from the Reptation Theory relating autohesive strength to contact time are used to explain the effects of the processing parameters on the observed experimental strengths. The application of a time-temperature relationship for autohesive strength predictions is evaluated. A viscoelastic compression molding model of a tow was developed to explain the phenomenon by which the prepreg ply interfaces develop intimate contact.
Lee, Sunghoon Ivan; Mortazavi, Bobak; Hoffman, Haydn A; Lu, Derek S; Li, Charles; Paak, Brian H; Garst, Jordan H; Razaghy, Mehrdad; Espinal, Marie; Park, Eunjeong; Lu, Daniel C; Sarrafzadeh, Majid
2016-01-01
Predicting the functional outcomes of spinal cord disorder patients after medical treatments, such as a surgical operation, has always been of great interest. Accurate posttreatment prediction is especially beneficial for clinicians, patients, caregivers, and therapists. This paper introduces a method for predicting postoperative functional outcomes through a novel use of Gaussian process regression. The proposed method specifically accounts for the restricted value range of the target variables by modeling the Gaussian process with a truncated Normal distribution, which significantly improves the prediction results. The prediction is made with the assistance of target tracking examinations using a highly portable and inexpensive handgrip device, which contributes greatly to the prediction performance. The proposed method has been validated on a dataset collected from a clinical pilot cohort of 15 patients with cervical spinal cord disorder. The results show that the proposed method can accurately predict postoperative functional outcomes, namely the Oswestry disability index and target tracking scores, from the patient's preoperative information, with mean absolute errors of 0.079 and 0.014 (out of 1.0), respectively.
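The key modeling step, truncating the Gaussian predictive distribution to the score's valid range, can be sketched as below with synthetic data; the feature set, ranges, and kernel settings are assumptions, not the paper's.

```python
# Sketch (synthetic data) of a GP prediction whose Normal predictive
# distribution is truncated to the valid score range [0, 1]; the
# truncated mean is used as the point prediction.
import numpy as np
from scipy.stats import truncnorm
from sklearn.gaussian_process import GaussianProcessRegressor

rng = np.random.default_rng(7)
X = rng.uniform(0, 1, (30, 3))       # assumed preoperative features
y = np.clip(0.2 + 0.6 * X[:, 0] + 0.05 * rng.normal(size=30), 0.0, 1.0)

gpr = GaussianProcessRegressor(alpha=1e-3).fit(X, y)
mu_arr, sd_arr = gpr.predict(rng.uniform(0, 1, (1, 3)), return_std=True)
mu, sd = mu_arr[0], sd_arr[0]

# Truncate Normal(mu, sd) to [0, 1]; near a boundary the truncated mean
# shifts away from the untruncated one.
a, b = (0.0 - mu) / sd, (1.0 - mu) / sd
print(f"untruncated mean {mu:.3f}, truncated mean "
      f"{truncnorm.mean(a, b, loc=mu, scale=sd):.3f}")
```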
Wang, Hai-Xia; Suo, Tong-Chuan; Yu, He-Shui; Li, Zheng
2016-10-01
The manufacture of traditional Chinese medicine (TCM) products always involves processing complex raw materials and requires real-time monitoring of the manufacturing process. In this study, we investigated different modeling strategies for the extraction process of licorice. Near-infrared spectra associated with the extraction time were used to determine the states of the extraction processes. Three modeling approaches, i.e., principal component analysis (PCA), partial least squares regression (PLSR) and parallel factor analysis-PLSR (PARAFAC-PLSR), were adopted for the prediction of the real-time status of the process. The overall results indicated that PCA, PLSR and PARAFAC-PLSR can effectively detect errors in the extraction procedure and predict the process trajectories, which is of practical significance for the monitoring and control of extraction processes. Copyright© by the Chinese Pharmaceutical Association.
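A brief sketch of the PLSR strategy on spectra, assuming a scikit-learn pipeline and synthetic spectra (the dimensions and drift term are invented for illustration; the PCA and PARAFAC-PLSR variants and the paper's calibration data are not reproduced):

```python
# Sketch: predicting extraction time (a process-state proxy) from NIR spectra
# with partial least squares regression, on synthetic placeholder data.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n_samples, n_wavelengths = 60, 200
spectra = rng.standard_normal((n_samples, n_wavelengths))  # NIR absorbances
time = np.linspace(0, 120, n_samples)                      # extraction time, min
spectra += 0.01 * time[:, None]                            # fake time-dependent drift

pls = PLSRegression(n_components=5)
print(cross_val_score(pls, spectra, time, cv=5, scoring="r2"))
```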
NASA Astrophysics Data System (ADS)
Cao, Qing; Nastac, Laurentiu; Pitts-Baggett, April; Yu, Qiulin
2018-03-01
A quick modeling analysis approach for predicting the slag-steel reaction and desulfurization kinetics in argon gas-stirred ladles has been developed in this study. The model consists of two uncoupled components: (i) a computational fluid dynamics (CFD) model for predicting the fluid flow and the characteristics of slag-steel interface, and (ii) a multicomponent reaction kinetics model for calculating the desulfurization evolution. The steel-slag interfacial area and mass transfer coefficients predicted by the CFD simulation are used as the processing data for the reaction model. Since the desulfurization predictions are uncoupled from the CFD simulation, the computational time of this uncoupled predictive approach is decreased by at least 100 times for each case study when compared with the CFD-reaction kinetics fully coupled model. The uncoupled modeling approach was validated by comparing the evolution of steel and slag compositions with the experimentally measured data during ladle metallurgical furnace (LMF) processing at Nucor Steel Tuscaloosa, Inc. Then, the validated approach was applied to investigate the effects of the initial steel and slag compositions, as well as different types of additions during the refining process on the desulfurization efficiency. The results revealed that the sulfur distribution ratio and the desulfurization reaction can be promoted by making Al and CaO additions during the refining process. It was also shown that by increasing the initial Al content in liquid steel, both Al oxidation and desulfurization rates rapidly increase. In addition, it was found that the variation of the initial Si content in steel has no significant influence on the desulfurization rate. Lastly, if the initial CaO content in slag is increased or the initial Al2O3 content is decreased in the fluid-slag compositional range, the desulfurization rate can be improved significantly during the LMF process.
On the Performance of Alternate Conceptual Ecohydrological Models for Streamflow Prediction
NASA Astrophysics Data System (ADS)
Naseem, Bushra; Ajami, Hoori; Cordery, Ian; Sharma, Ashish
2016-04-01
A merging of a lumped conceptual hydrological model with two conceptual dynamic vegetation models is presented to assess the performance of these models for simultaneous simulations of streamflow and leaf area index (LAI). Two conceptual dynamic vegetation models with differing representations of ecological processes are merged with a lumped conceptual hydrological model (HYMOD) to predict catchment-scale streamflow and LAI. The merged RR-LAI-I model computes relative leaf biomass based on transpiration rates, while the RR-LAI-II model computes above-ground green and dead biomass based on net primary productivity and water use efficiency in response to soil moisture dynamics. To assess the performance of these models, daily discharge and the 8-day MODIS LAI product for 27 catchments of 90-1600 km2 located in the Murray-Darling Basin in Australia are used. Our results illustrate that when single-objective optimisation focussed on maximizing the objective function for streamflow or LAI, the other, un-calibrated predicted output (LAI if streamflow is the focus) was consistently compromised. Thus, single-objective optimisation cannot do justice to all the processes represented in conceptual ecohydrological models. However, multi-objective optimisation showed great strength for streamflow and LAI predictions. Both response outputs were better simulated by RR-LAI-II than RR-LAI-I due to the better representation of physical processes such as net primary productivity (NPP) in RR-LAI-II. Our results highlight that simultaneous calibration of streamflow and LAI using a multi-objective algorithm proves to be an attractive tool for improved streamflow predictions.
Zhang, Xingyu; Kim, Joyce; Patzer, Rachel E; Pitts, Stephen R; Patzer, Aaron; Schrager, Justin D
2017-10-26
To describe and compare logistic regression and neural network modeling strategies for predicting hospital admission or transfer following initial presentation to Emergency Department (ED) triage, with and without the addition of natural language processing elements. Using data from the National Hospital Ambulatory Medical Care Survey (NHAMCS), a cross-sectional probability sample of United States EDs from the 2012 and 2013 survey years, we developed several predictive models with the outcome being admission to the hospital or transfer vs. discharge home. We included patient characteristics immediately available after the patient has presented to the ED and undergone a triage process. We used this information to construct logistic regression (LR) and multilayer neural network (MLNN) models which included natural language processing (NLP) and principal component analysis of the patient's reason for visit. Ten-fold cross-validation was used to test the predictive capacity of each model, and the area under the receiver operating characteristic curve (AUC) was then calculated for each model. Of the 47,200 ED visits from 642 hospitals, 6,335 (13.42%) resulted in hospital admission (or transfer). A total of 48 principal components were extracted by NLP from the reason-for-visit fields, which explained 75% of the overall variance for hospitalization. In the model including only structured variables, the AUC was 0.824 (95% CI 0.818-0.830) for logistic regression and 0.823 (95% CI 0.817-0.829) for MLNN. Models including only free-text information generated AUCs of 0.742 (95% CI 0.731-0.753) for logistic regression and 0.753 (95% CI 0.742-0.764) for MLNN. When both structured and free-text variables were included, the AUC reached 0.846 (95% CI 0.839-0.853) for logistic regression and 0.844 (95% CI 0.836-0.852) for MLNN. The predictive accuracy of hospital admission or transfer for patients who presented to ED triage was good overall, and was improved by the inclusion of free-text data from the patient's reason for visit regardless of modeling approach. Natural language processing and neural networks that incorporate patient-reported free text may increase predictive accuracy for hospital admission.
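A compact sketch of this modeling setup, assuming scikit-learn and toy stand-ins for the NHAMCS data (the reasons, labels, structured features, and component counts are all hypothetical): free text is vectorized, reduced to a few components, concatenated with structured variables, and scored by LR and an MLP under cross-validated AUC.

```python
# Sketch: structured triage variables plus a reduced free-text representation,
# compared across logistic regression and a small neural network. Toy data only.
import numpy as np
from sklearn.decomposition import TruncatedSVD
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import cross_val_score

reasons = ["chest pain", "ankle sprain", "shortness of breath", "fever and cough"] * 25
admitted = np.tile([1, 0, 1, 0], 25)                          # hypothetical outcomes
structured = np.random.default_rng(2).standard_normal((100, 5))  # age, vitals, ...

text_features = TruncatedSVD(n_components=3).fit_transform(
    TfidfVectorizer().fit_transform(reasons))                 # free-text components
X = np.hstack([structured, text_features])

for model in (LogisticRegression(max_iter=1000),
              MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000)):
    print(type(model).__name__,
          cross_val_score(model, X, admitted, cv=10, scoring="roc_auc").mean())
```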
Cerebellar tDCS Modulates Neural Circuits during Semantic Prediction: A Combined tDCS-fMRI Study.
D'Mello, Anila M; Turkeltaub, Peter E; Stoodley, Catherine J
2017-02-08
It has been proposed that the cerebellum acquires internal models of mental processes that enable prediction, allowing for the optimization of behavior. In language, semantic prediction speeds speech production and comprehension. Right cerebellar lobules VI and VII (including Crus I/II) are engaged during a variety of language processes and are functionally connected with cerebral cortical language networks. Further, right posterolateral cerebellar neuromodulation modifies behavior during predictive language processing. These data are consistent with a role for the cerebellum in semantic processing and semantic prediction. We combined transcranial direct current stimulation (tDCS) and fMRI to assess the behavioral and neural consequences of cerebellar tDCS during a sentence completion task. Task-based and resting-state fMRI data were acquired in healthy human adults (n = 32; mean age 23.1 years) both before and after 20 min of 1.5 mA anodal (n = 18) or sham (n = 14) tDCS applied to the right posterolateral cerebellum. In the sentence completion task, the first four words of the sentence modulated the predictability of the final target word. In some sentences, the preceding context strongly predicted the target word, whereas other sentences were nonpredictive. Completion of predictive sentences increased activation in right Crus I/II of the cerebellum. Relative to sham tDCS, anodal tDCS increased activation in right Crus I/II during semantic prediction and enhanced resting-state functional connectivity between hubs of the reading/language networks. These results are consistent with a role for the right posterolateral cerebellum beyond motor aspects of language, and suggest that cerebellar internal models of linguistic stimuli support semantic prediction. SIGNIFICANCE STATEMENT Cerebellar involvement in language tasks and language networks is now well established, yet the specific cerebellar contribution to language processing remains unclear. It is thought that the cerebellum acquires internal models of mental processes that enable prediction, allowing for the optimization of behavior. Here we combined neuroimaging and neuromodulation to provide evidence that the cerebellum is specifically involved in semantic prediction during sentence processing. We found that activation within right Crus I/II was enhanced when semantic predictions were made, and we show that modulation of this region with transcranial direct current stimulation alters both activation patterns and functional connectivity within whole-brain language networks. For the first time, these data show that cerebellar neuromodulation impacts activation patterns specifically during predictive language processing. Copyright © 2017 the authors.
NASA Astrophysics Data System (ADS)
Zheng, Fei; Zhu, Jiang
2017-04-01
How to design a reliable ensemble prediction strategy that accounts for the major uncertainties of a forecasting system is a crucial issue in ensemble forecasting. In this study, a new stochastic perturbation technique is developed to improve the prediction skill for the El Niño-Southern Oscillation (ENSO) using an intermediate coupled model. We first estimate and analyze the model uncertainties from ensemble Kalman filter analysis results obtained by assimilating the observed sea surface temperatures. Then, based on the pre-analyzed properties of the model errors, we develop a zero-mean stochastic model-error model to characterize the model uncertainties, mainly those induced by physical processes missing from the original model (e.g., stochastic atmospheric forcing, extra-tropical effects, the Indian Ocean Dipole). Finally, we perturb each member of an ensemble forecast at each step with the developed stochastic model-error model during the 12-month forecasting process, adding the zero-mean perturbations to the physical fields to mimic the presence of missing processes and high-frequency stochastic noise. The impacts of stochastic model-error perturbations on ENSO deterministic predictions are examined by performing two sets of 21-yr hindcast experiments, which are initialized from the same initial conditions and differ only in whether they include the stochastic perturbations. The comparison shows that the stochastic perturbations significantly improve the ensemble-mean prediction skill over the entire 12-month forecasting process. This improvement occurs mainly because the nonlinear terms in the model can form a positive ensemble mean from a series of zero-mean perturbations, which reduces the forecasting biases and corrects the forecast through this nonlinear heating mechanism.
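A toy sketch of this rectification mechanism, with made-up dynamics and noise amplitude (nothing here reproduces the intermediate coupled model): each member is stepped through a nonlinear map and nudged by zero-mean noise, and the ensemble mean drifts away from zero because the nonlinearity does not average the noise out.

```python
# Sketch: zero-mean stochastic perturbations applied to every ensemble member
# at every step; a quadratic (nonlinear heating-like) term rectifies the noise.
# All coefficients and amplitudes are illustrative.
import numpy as np

rng = np.random.default_rng(3)
n_members, n_steps, dt = 32, 120, 0.1
state = np.zeros(n_members)            # e.g., an SST anomaly per member
sigma = 0.1                            # noise amplitude from pre-analyzed model errors

for _ in range(n_steps):
    state = state + dt * (-0.5 * state + 0.3 * state**2)     # toy nonlinear dynamics
    state += rng.normal(0.0, sigma, n_members)               # zero-mean perturbation

print(state.mean())   # typically nonzero: the quadratic term rectifies the noise
```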
NASA Astrophysics Data System (ADS)
Aljoaba, Sharif; Dillon, Oscar; Khraisheh, Marwan; Jawahir, I. S.
2012-07-01
The ability to generate nano-sized grains is one of the advantages of friction stir processing (FSP). However, the high temperatures generated during the stirring process within the processing zone stimulate the grains to grow after recrystallization. Therefore, maintaining the small grains becomes a critical issue when using FSP. In reported studies, coolants are applied to the fixture and/or processed material in order to reduce the temperature and hence grain growth. Most of the reported data in the literature concerning cooling techniques are experimental; we have seen no reports that attempt to predict these quantities when coolants are used while the material is undergoing FSP. There is therefore a need for a model that predicts the resulting grain size when using coolants, which is an important step toward designing the material microstructure. In this study, two three-dimensional computational fluid dynamics (CFD) models that simulate FSP with and without coolant application are reported, built using the STAR CCM+ commercial CFD software. In the model with coolant application, the fixture (backing plate) is modeled, while it is not in the other model. User-defined subroutines were incorporated in the software and implemented to investigate the effects of changing process parameters on the temperature, strain rate and material velocity fields in, and around, the processed nugget. In addition, a correlation between these parameters and the Zener-Hollomon parameter used in materials science was developed to predict the grain size distribution. Different stirring conditions were incorporated in this study to investigate their effects on material flow and microstructural modification. A comparison of the results obtained by using each of the models on the processed microstructure is also presented for the case of Mg AZ31B-O alloy. The predicted results are also compared with the available experimental data and generally show good agreement.
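A small sketch of the kind of correlation named above, assuming the standard Zener-Hollomon definition and a generic power-law grain-size fit (the activation energy and fit constants below are placeholders, not the paper's calibrated values):

```python
# Sketch: Zener-Hollomon parameter Z = eps_dot * exp(Q / (R * T)) computed from
# a CFD-predicted strain rate and temperature, mapped to grain size via an
# empirical power law d = a * Z**(-n). All constants are illustrative.
import math

def zener_hollomon(strain_rate, temperature_K, Q=135e3, R=8.314):
    """Z combines strain rate (1/s) and temperature (K); Q is an activation energy (J/mol)."""
    return strain_rate * math.exp(Q / (R * temperature_K))

def grain_size_um(Z, a=1.0e4, n=0.3):
    """Hypothetical power-law fit: higher Z (colder, faster deformation) gives finer grains."""
    return a * Z ** (-n)

Z = zener_hollomon(strain_rate=50.0, temperature_K=600.0)
print(grain_size_um(Z))   # coolant lowers T, raising Z and refining the grain size
```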
Putting mechanisms into crop production models
USDA-ARS?s Scientific Manuscript database
Crop simulation models dynamically predict processes of carbon, nitrogen, and water balance on daily or hourly time-steps to the point of predicting yield and production at crop maturity. A brief history of these models is reviewed, and their level of mechanism for assimilation and respiration, ran...
NASA Astrophysics Data System (ADS)
Okabe, Tomonaga; Yashiro, Shigeki
This study proposes a cohesive zone model (CZM) for predicting fatigue damage growth in notched carbon-fiber-reinforced plastic (CFRP) cross-ply laminates. In this model, damage growth in the fracture process of cohesive elements due to cyclic loading is represented by a conventional damage mechanics model. We preliminarily investigated whether this model can appropriately express fatigue damage growth for a circular crack embedded in an isotropic solid material. This investigation demonstrated that the model could reproduce the results of the well-established fracture mechanics model plus Paris' law by tuning adjustable parameters. We then numerically investigated the damage process in notched CFRP cross-ply laminates under tensile cyclic loading and compared the predicted damage patterns with those observed in experiments reported by Spearing et al. (Compos. Sci. Technol. 1992). The predicted damage patterns agreed with the experimental results, which exhibited the extension of multiple types of damage (e.g., splits, transverse cracks and delaminations) near the notches.
Ferguson, Jake M; Ponciano, José M
2014-02-01
Predicting population extinction risk is a fundamental application of ecological theory to the practice of conservation biology. Here, we compared the prediction performance of a wide array of stochastic population dynamics models against direct observations of the extinction process from an extensive experimental data set. By varying a series of biological and statistical assumptions in the proposed models, we were able to identify the assumptions that affected predictions about population extinction. We also show how certain autocorrelation structures can emerge due to interspecific interactions, and that accounting for the stochastic effect of these interactions can improve predictions of the extinction process. We conclude that it is possible to account for the stochastic effects of community interactions on extinction when using single-species time series. © 2013 The Authors. Ecology Letters published by John Wiley & Sons Ltd and CNRS.
A Theoretical and Experimental Analysis of the Outside World Perception Process
NASA Technical Reports Server (NTRS)
Wewerinke, P. H.
1978-01-01
The outside scene is often an important source of information for manual control tasks; important examples are car driving and aircraft control. This paper deals with modelling this visual scene perception process on the basis of linear perspective geometry and relative motion cues. Model predictions, utilizing psychophysical threshold data from baseline experiments and the literature, are compared with experimental data for a variety of visual approach tasks. Both the performance and workload results illustrate that the model provides a meaningful description of the outside world perception process, with a useful predictive capability.
Computer Models of Personality: Implications for Measurement
ERIC Educational Resources Information Center
Cranton, P. A.
1976-01-01
Current research on computer models of personality is reviewed and categorized under five headings: (1) models of belief systems; (2) models of interpersonal behavior; (3) models of decision-making processes; (4) prediction models; and (5) theory-based simulations of specific processes. The use of computer models in personality measurement is…
Runoff as a factor in USLE/RUSLE technology
NASA Astrophysics Data System (ADS)
Kinnell, Peter
2014-05-01
Modelling erosion for prediction purposes started with the development of the Universal Soil Loss Equation (USLE), the focus of which was the prediction of long-term (~20-year) average annual soil loss from field-sized areas. That purpose has been maintained in the subsequent revision, RUSLE, the most widely used erosion prediction model in the world. The inability to predict short-term soil loss saw the development of so-called process-based models like WEPP and EUROSEM, which focussed on predicting event erosion but failed to improve the prediction of long-term erosion, where the RUSLE worked well. One of the features of erosion recognised in the so-called process-based models is the fact that runoff is a primary factor in rainfall erosion, and some proposed modifications of the USLE/RUSLE model have included runoff as an independent factor in determining event erosivity. However, these models have ignored fundamental mathematical rules. The USLE-M, which replaces the EI30 index by the product of the runoff ratio and EI30, was developed from the concept that soil loss is the product of runoff and sediment concentration, and it operates in a way that obeys the mathematical rules upon which the USLE/RUSLE model was based. It accounts for event soil loss better than the EI30 index where runoff values are known or predicted adequately. RUSLE2 now includes a capacity to model runoff-driven erosion.
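A worked example of the USLE-M event form described above, assuming the usual multiplicative factor structure; every factor value below is invented for illustration:

```python
# Sketch: USLE-M event erosivity is Q_R * EI30 (runoff ratio times storm EI30),
# so event soil loss becomes A_e = Q_R * EI30 * K_UM * L * S * C * P.
runoff_ratio = 0.35       # Q_R: event runoff depth / event rainfall depth
EI30 = 120.0              # storm erosivity (MJ mm ha^-1 h^-1), hypothetical
K_UM, L, S, C, P = 0.03, 1.2, 1.1, 0.2, 1.0   # USLE-M factors (hypothetical)

event_soil_loss = runoff_ratio * EI30 * K_UM * L * S * C * P
print(event_soil_loss)    # tonnes/ha for this event, under the stated assumptions
```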
REVIEW: Widespread access to predictive models in the motor system: a short review
NASA Astrophysics Data System (ADS)
Davidson, Paul R.; Wolpert, Daniel M.
2005-09-01
Recent behavioural and computational studies suggest that access to internal predictive models of arm and object dynamics is widespread in the sensorimotor system. Several systems, including those responsible for oculomotor and skeletomotor control, perceptual processing, postural control and mental imagery, are able to access predictions of the motion of the arm. A capacity to make and use predictions of object dynamics is similarly widespread. Here, we review recent studies of the predictive capacity of the central nervous system, which reveal pervasive access to forward models of the environment.
Assessment of traffic noise levels in urban areas using different soft computing techniques.
Tomić, J; Bogojević, N; Pljakić, M; Šumarac-Pavlović, D
2016-10-01
Available traffic noise prediction models are usually based on regression analysis of experimental data; this paper presents the application of soft computing techniques to traffic noise prediction. Two mathematical models are proposed, and their predictions are compared to data collected by traffic noise monitoring in urban areas, as well as to the predictions of commonly used traffic noise models. The results show that the application of evolutionary algorithms and neural networks may improve both the development process and the accuracy of traffic noise prediction.
Risk prediction model: Statistical and artificial neural network approach
NASA Astrophysics Data System (ADS)
Paiman, Nuur Azreen; Hariri, Azian; Masood, Ibrahim
2017-04-01
Prediction models are increasingly gaining popularity and have been used in numerous areas of study to complement and support clinical reasoning and decision making. The adoption of such models assists physicians' decision making and individuals' behavior, and consequently improves individual outcomes and the cost-effectiveness of care. The objective of this paper is to review articles related to risk prediction models in order to understand the suitable approach, development and validation process for such models. A qualitative review of the aims, methods and significant main outcomes of nineteen published articles that developed risk prediction models in numerous fields was done. This paper also reviews how researchers develop and validate risk prediction models based on statistical and artificial neural network approaches. From the review, some methodological recommendations for developing and validating prediction models are highlighted. According to the studies reviewed, artificial neural network approaches to developing prediction models were more accurate than statistical approaches. However, currently only limited published literature discusses which approach is more accurate for risk prediction model development.
Modeling and prediction of human word search behavior in interactive machine translation
NASA Astrophysics Data System (ADS)
Ji, Duo; Yu, Bai; Ma, Bin; Ye, Na
2017-12-01
As a computer-aided translation method, interactive machine translation technology reduces the repetitive and mechanical operations of manual translation through a variety of methods, improving translation efficiency, and plays an important role in practical translation work. In this paper, we take the behavior of users frequently searching for words during the translation process as the research object, and recast this behavior as a translation selection problem under the current translation. The paper presents a prediction model that makes comprehensive use of an alignment model, a translation model and a language model of word-search behavior. It achieves highly accurate prediction of word-search behavior and reduces the switching between mouse and keyboard operations in the user's translation process.
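A sketch of how the three component models named above might be combined; the candidate words, probabilities, and weights are all made up for illustration, and the paper's actual scoring function is not reproduced:

```python
# Sketch: candidate target words scored by a weighted sum of log-probabilities
# from an alignment model, a translation model, and a language model.
import math

candidates = {
    # word: (p_alignment, p_translation, p_language_model), all hypothetical
    "contract": (0.30, 0.25, 0.10),
    "treaty":   (0.20, 0.30, 0.05),
    "deal":     (0.10, 0.15, 0.20),
}
w = (1.0, 1.0, 0.5)   # interpolation weights, normally tuned on held-out data

def score(probs):
    return sum(wi * math.log(pi) for wi, pi in zip(w, probs))

print(max(candidates, key=lambda c: score(candidates[c])))  # predicted search word
```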
Mathematical modelling and numerical simulation of forces in milling process
NASA Astrophysics Data System (ADS)
Turai, Bhanu Murthy; Satish, Cherukuvada; Prakash Marimuthu, K.
2018-04-01
Machining of material by milling induces forces, which act on the workpiece material and the tool, and which in turn act on the machine tool. The forces involved in the milling process can be quantified, and mathematical models help to predict these forces. A lot of research has been carried out in this area in the past few decades. The current research aims at developing a mathematical model to predict the cutting forces that arise at different levels during machining of Aluminium 6061 alloy. Finite element analysis was used to develop an FE model to predict the cutting forces. Simulation was done for varying cutting conditions. The experiments were designed using the Taguchi method: an L9 orthogonal array was designed and the output was measured for each experiment. The same was used to develop the mathematical model.
Smith, Alison; Ntoumanis, Nikos; Duda, Joan
2007-12-01
Grounded in self-determination theory (Deci & Ryan, 1985) and the self-concordance model (Sheldon & Elliot, 1999), this study examined the motivational processes underlying goal striving in sport as well as the role of perceived coach autonomy support in the goal process. Structural equation modeling with a sample of 210 British athletes showed that autonomous goal motives positively predicted effort, which, in turn, predicted goal attainment. Goal attainment was positively linked to need satisfaction, which, in turn, predicted psychological well-being. Effort and need satisfaction were found to mediate the associations between autonomous motives and goal attainment and between attainment and well-being, respectively. Controlled motives negatively predicted well-being, and coach autonomy support positively predicted both autonomous motives and need satisfaction. Associations of autonomous motives with effort were not reducible to goal difficulty, goal specificity, or goal efficacy. These findings support the self-concordance model as a framework for further research on goal setting in sport.
NASA Astrophysics Data System (ADS)
Suzuki, Kazuyoshi; Zupanski, Milija
2018-01-01
In this study, we investigate the uncertainties associated with land surface processes in an ensemble prediction context. Specifically, we compare the uncertainties produced by a coupled atmosphere-land modeling system with two different land surface models, the Noah-MP land surface model (LSM) and the Noah LSM, by using the Maximum Likelihood Ensemble Filter (MLEF) data assimilation system as a platform for ensemble prediction. We carried out 24-hour prediction simulations in Siberia with 32 ensemble members beginning at 00:00 UTC on 5 March 2013. We then compared the model prediction uncertainty of snow depth and solid precipitation with observation-based research products and evaluated the standard deviation of the ensemble spread. The prediction skill and ensemble spread exhibited high positive correlation for both LSMs, indicating a realistic uncertainty estimation. The inclusion of a multi-layer snow model in the Noah-MP LSM was beneficial for reducing the uncertainties of snow depth and snow depth change compared to the Noah LSM, but the uncertainty in daily solid precipitation showed minimal difference between the two LSMs. The impact of LSM choice on reducing temperature uncertainty was limited to the surface layers of the atmosphere. In summary, we found that the more sophisticated Noah-MP LSM reduces the uncertainties associated with land surface processes compared to the Noah LSM. Thus, using prediction models with improved skill implies improved predictability and greater certainty of prediction.
NASA Astrophysics Data System (ADS)
Fu, Y.; Yang, W.; Xu, O.; Zhou, L.; Wang, J.
2017-04-01
To handle time-variant and nonlinear characteristics in industrial processes, a soft sensor modelling method based on time differences, moving-window recursive partial least squares (PLS) and adaptive model updating is proposed. In this method, time-difference values of the input and output variables are used as training samples to construct the model, which reduces the effect of nonlinear characteristics on modelling accuracy while retaining the advantages of the recursive PLS algorithm. To avoid excessively frequent model updating, a confidence value is introduced and updated adaptively according to the results of the model performance assessment; once the confidence value is updated, the model can be updated. The proposed method has been used to predict the 4-carboxybenzaldehyde (CBA) content in the purified terephthalic acid (PTA) oxidation reaction process. The results show that the proposed soft sensor modelling method reduces computation effectively, improves prediction accuracy by making use of process information, and reflects the process characteristics accurately.
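A minimal sketch of the time-difference idea on synthetic drifting data, assuming scikit-learn PLS (the moving-window recursion and confidence-triggered updating are omitted, and all dimensions and coefficients are invented): the model is trained on first differences, which removes slow drift, and a predicted output difference is added back to the last known output.

```python
# Sketch: time-difference PLS soft sensor on synthetic drifting process data.
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(4)
t = np.arange(300)
X = rng.standard_normal((300, 6)) + 0.01 * t[:, None]        # drifting inputs
y = X @ np.array([0.4, -0.2, 0.1, 0.3, 0.0, 0.2]) + 0.02 * t # drifting CBA-like output

dX, dy = np.diff(X, axis=0), np.diff(y)                      # time-difference samples
pls = PLSRegression(n_components=3).fit(dX[:200], dy[:200])

# Predict y at step k as the last known output plus the predicted difference.
k = 250
y_hat = y[k - 1] + pls.predict(dX[[k - 1]])[0, 0]
print(y_hat, y[k])
```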
A class-based link prediction using Distance Dependent Chinese Restaurant Process
NASA Astrophysics Data System (ADS)
Andalib, Azam; Babamir, Seyed Morteza
2016-08-01
One of the important tasks in relational data analysis is link prediction, which has been successfully applied in many domains such as bioinformatics and information retrieval. Link prediction is defined as predicting the existence or absence of edges between nodes of a network. In this paper, we propose a novel method for link prediction based on the Distance Dependent Chinese Restaurant Process (DDCRP) model, which enables us to utilize information about the topological structure of the network, such as shortest paths and the connectivity of the nodes. We also propose a new Gibbs sampling algorithm for computing the posterior distribution of the hidden variables based on the training data. Experimental results on three real-world datasets show the superiority of the proposed method over other probabilistic models for the link prediction problem.
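A small sketch of the DDCRP prior itself, under a common exponential-decay choice for the distance function (the distances, decay rate, and concentration alpha are illustrative; the paper's Gibbs sampler and likelihood are not reproduced): each node links to another node with probability proportional to a decay of their distance, or to itself with probability proportional to alpha, and clusters emerge as connected components of the links.

```python
# Sketch: one draw of link assignments under a DDCRP prior.
import numpy as np

def ddcrp_sample_links(D, alpha=1.0, decay=1.0, rng=None):
    """Draw one link per node. D: (n, n) pairwise distances (e.g., shortest paths)."""
    rng = rng or np.random.default_rng()
    n = D.shape[0]
    links = np.empty(n, dtype=int)
    for i in range(n):
        weights = np.exp(-decay * D[i])   # f(d) = exp(-a * d), a common decay choice
        weights[i] = alpha                # self-link weight
        links[i] = rng.choice(n, p=weights / weights.sum())
    return links

D = np.array([[0, 1, 2], [1, 0, 1], [2, 1, 0]], dtype=float)
print(ddcrp_sample_links(D, rng=np.random.default_rng(5)))
```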
Mining manufacturing data for discovery of high productivity process characteristics.
Charaniya, Salim; Le, Huong; Rangwala, Huzefa; Mills, Keri; Johnson, Kevin; Karypis, George; Hu, Wei-Shou
2010-06-01
Modern manufacturing facilities for bioproducts are highly automated, with advanced process monitoring and data archiving systems. The time dynamics of hundreds of process parameters and outcome variables over a large number of production runs are archived in the data warehouse. This vast amount of data is a vital resource for comprehending the complex characteristics of bioprocesses and enhancing production robustness. Cell culture process data from 108 'trains', comprising production as well as inoculum bioreactors from Genentech's manufacturing facility, were investigated. Each run comprises over one hundred on-line and off-line temporal parameters. A kernel-based approach combined with a maximum-margin support vector regression algorithm was used to integrate all the process parameters and develop predictive models for a key cell culture performance parameter. The model was also used to identify and rank process parameters according to their relevance in predicting process outcome. Evaluation of cell culture stage-specific models indicates that production performance can be reliably predicted days prior to harvest. Strong associations between several temporal parameters at various manufacturing stages and final process outcome were uncovered. This model-based data mining represents an important step forward in establishing process data-driven knowledge discovery in bioprocesses. Implementation of this methodology on the manufacturing floor can facilitate real-time decision making and thereby improve the robustness of large-scale bioprocesses. 2010 Elsevier B.V. All rights reserved.
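A sketch in the same spirit, assuming scikit-learn SVR on per-run summary features with a crude permutation-based relevance ranking (the data, feature count, and kernel are stand-ins; the paper's kernels over run time-series are not reproduced):

```python
# Sketch: support vector regression on summarized process parameters, plus a
# permutation-based ranking of parameter relevance. Synthetic data only.
import numpy as np
from sklearn.svm import SVR
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(6)
X = rng.standard_normal((108, 10))   # e.g., summarized on-line parameters per run
y = 0.8 * X[:, 0] + 0.4 * X[:, 3] + 0.1 * rng.standard_normal(108)  # titer-like outcome

model = SVR(kernel="rbf", C=10.0).fit(X, y)
imp = permutation_importance(model, X, y, n_repeats=20, random_state=0)
print(np.argsort(imp.importances_mean)[::-1])   # parameters ranked by relevance
```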
Limits of Risk Predictability in a Cascading Alternating Renewal Process Model.
Lin, Xin; Moussawi, Alaa; Korniss, Gyorgy; Bakdash, Jonathan Z; Szymanski, Boleslaw K
2017-07-27
Most risk analysis models systematically underestimate the probability and impact of catastrophic events (e.g., economic crises, natural disasters, and terrorism) by not taking into account the interconnectivity and interdependence of risks. To address this weakness, we propose the Cascading Alternating Renewal Process (CARP) to forecast interconnected global risks. However, assessments of the model's prediction precision are limited by a lack of sufficient ground truth data. Here, we establish prediction precision as a function of input data size by using alternative long ground truth data generated by simulations of the CARP model with known parameters. We illustrate the approach on a model of fires in artificial cities assembled from basic city blocks with diverse housing. The results confirm that parameter recovery variance exhibits power-law decay as a function of the length of the available ground truth data. Using CARP, we also demonstrate estimation on a disparate dataset that also has dependencies: real-world prediction precision for the global risk model based on the World Economic Forum Global Risk Report. We conclude that the CARP model is an efficient method for predicting catastrophic cascading events, with potential applications to emerging local and global interconnected risks.
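A toy sketch of the ground-truth-length experiment, assuming a single (non-cascading) alternating renewal process with exponential active spells; everything below is illustrative rather than the CARP model itself:

```python
# Sketch: estimate the mean active-spell duration from ever longer records and
# watch the parameter-recovery variance shrink with record length.
import numpy as np

rng = np.random.default_rng(7)
true_mean_active = 2.0

def estimate(n_events):
    durations = rng.exponential(true_mean_active, n_events)  # observed active spells
    return durations.mean()

for n in (10, 100, 1000, 10000):
    errs = [estimate(n) - true_mean_active for _ in range(200)]
    print(n, np.var(errs))   # variance decays roughly like 1/n, a power law
```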
Anderson-Cook, Christine M.; Morzinski, Jerome; Blecker, Kenneth D.
2015-08-19
Understanding the impact of production, environmental exposure and age characteristics on the reliability of a population is frequently based on underlying science and empirical assessment. When there is incomplete science to prescribe which inputs should be included in a model of reliability to predict future trends, statistical model/variable selection techniques can be leveraged on a stockpile or population of units to improve reliability predictions as well as suggest new mechanisms affecting reliability to explore. We describe a five-step process for exploring relationships between available summaries of age, usage and environmental exposure and reliability. The process involves first identifying potential candidate inputs, then second organizing the data for the analysis. Third, a variety of models with different combinations of the inputs are estimated, and fourth, flexible metrics are used to compare them. Fifth, plots of the predicted relationships are examined to distill leading model contenders into a prioritized list for subject matter experts to understand and compare. The complexity of the model, quality of prediction and cost of future data collection are all factors to be considered by the subject matter experts when selecting a final model.
NASA Technical Reports Server (NTRS)
Karakas, Amanda I.; vanRaai, Mark A.; Lugaro, Maria; Sterling, Nicholas C.; Dinerstein, Harriet L.
2008-01-01
Type I planetary nebulae (PNe) have high He/H and N/O ratios and are thought to be descendants of stars with initial masses of ~3-8 M☉. These characteristics indicate that the progenitor stars experienced proton-capture nucleosynthesis at the base of the convective envelope, in addition to the slow neutron capture process operating in the He-shell (the s-process). We compare the predicted abundances of elements up to Sr from models of intermediate-mass asymptotic giant branch (AGB) stars to measured abundances in Type I PNe. In particular, we compare predictions and observations for the light trans-iron elements Se and Kr, in order to constrain convective mixing and the s-process in these stars. A partial mixing zone is included in selected models to explore the effect of a ¹³C pocket on the s-process yields. The solar-metallicity models produce enrichments of [(Se, Kr)/Fe] ≲ 0.6, consistent with Galactic Type I PNe where the observed enhancements are typically ≲ 0.3 dex, while lower metallicity models predict larger enrichments of C, N, Se, and Kr. O destruction occurs in the most massive models but it is not efficient enough to account for the ≳ 0.3 dex O depletions observed in some Type I PNe. It is not possible to reach firm conclusions regarding the neutron source operating in massive AGB stars from Se and Kr abundances in Type I PNe; abundances for more s-process elements may help to distinguish between the two neutron sources. We predict that only the most massive (M ≳ 5 M☉) models would evolve into Type I PNe, indicating that extra-mixing processes are active in lower-mass stars (3-4 M☉), if these stars are to evolve into Type I PNe.
NASA Astrophysics Data System (ADS)
Karakas, Amanda I.; van Raai, Mark A.; Lugaro, Maria; Sterling, N. C.; Dinerstein, Harriet L.
2009-01-01
Type I planetary nebulae (PNe) have high He/H and N/O ratios and are thought to be descendants of stars with initial masses of ~3-8 M☉. These characteristics indicate that the progenitor stars experienced proton-capture nucleosynthesis at the base of the convective envelope, in addition to the slow neutron capture process operating in the He-shell (the s-process). We compare the predicted abundances of elements up to Sr from models of intermediate-mass asymptotic giant branch (AGB) stars to measured abundances in Type I PNe. In particular, we compare predictions and observations for the light trans-iron elements Se and Kr, in order to constrain convective mixing and the s-process in these stars. A partial mixing zone is included in selected models to explore the effect of a ¹³C pocket on the s-process yields. The solar-metallicity models produce enrichments of [(Se, Kr)/Fe] ≲ 0.6, consistent with Galactic Type I PNe where the observed enhancements are typically ≲ 0.3 dex, while lower metallicity models predict larger enrichments of C, N, Se, and Kr. O destruction occurs in the most massive models but it is not efficient enough to account for the ≳ 0.3 dex O depletions observed in some Type I PNe. It is not possible to reach firm conclusions regarding the neutron source operating in massive AGB stars from Se and Kr abundances in Type I PNe; abundances for more s-process elements may help to distinguish between the two neutron sources. We predict that only the most massive (M ≳ 5 M☉) models would evolve into Type I PNe, indicating that extra-mixing processes are active in lower-mass stars (3-4 M☉), if these stars are to evolve into Type I PNe. This paper includes data taken at The McDonald Observatory of The University of Texas at Austin.
Drift-Scale Coupled Processes (DST and THC Seepage) Models
DOE Office of Scientific and Technical Information (OSTI.GOV)
P. Dixon
The purpose of this Model Report (REV02) is to document the unsaturated zone (UZ) models used to evaluate the potential effects of coupled thermal-hydrological-chemical (THC) processes on UZ flow and transport. This Model Report has been developed in accordance with the ''Technical Work Plan for: Performance Assessment Unsaturated Zone'' (Bechtel SAIC Company, LLC (BSC) 2002 [160819]). The technical work plan (TWP) describes planning information pertaining to the technical scope, content, and management of this Model Report in Section 1.12, Work Package AUZM08, ''Coupled Effects on Flow and Seepage''. The plan for validation of the models documented in this Model Report is given in Attachment I, Model Validation Plans, Section I-3-4, of the TWP. Except for variations in acceptance criteria (Section 4.2), there were no deviations from this TWP. This report was developed in accordance with AP-SIII.10Q, ''Models''. This Model Report documents the THC Seepage Model and the Drift Scale Test (DST) THC Model. The THC Seepage Model is a drift-scale process model for predicting the composition of gas and water that could enter waste emplacement drifts and the effects of mineral alteration on flow in rocks surrounding drifts. The DST THC Model is a drift-scale process model relying on the same conceptual model and much of the same input data (i.e., physical, hydrological, thermodynamic, and kinetic) as the THC Seepage Model. The DST THC Model is the primary method for validating the THC Seepage Model. The DST THC Model compares predicted water and gas compositions, as well as mineral alteration patterns, with observed data from the DST. These models provide the framework to evaluate THC coupled processes at the drift scale, predict flow and transport behavior for specified thermal-loading conditions, and predict the evolution of mineral alteration and fluid chemistry around potential waste emplacement drifts. The DST THC Model is used solely for the validation of the THC Seepage Model and is not used for calibration to measured data.
NASA Astrophysics Data System (ADS)
Cowdery, E.; Dietze, M.
2017-12-01
As atmospheric carbon dioxide levels continue to increase, it is critical that terrestrial ecosystem models can accurately predict ecological responses to the changing environment. Current predictions of net primary productivity (NPP) in response to elevated atmospheric CO2 concentration are highly variable and contain a considerable amount of uncertainty. Benchmarking model predictions against data is necessary to assess their ability to replicate observed patterns, but also to identify and evaluate the assumptions causing inter-model differences. We have implemented a novel benchmarking workflow as part of the Predictive Ecosystem Analyzer (PEcAn) that is automated, repeatable, and generalized to incorporate different sites and ecological models. Building on the recent Free-Air CO2 Enrichment Model Data Synthesis (FACE-MDS) project, we used observational data from the FACE experiments to test this flexible, extensible benchmarking approach, which aims to provide repeatable tests of model process representation that can be performed quickly and frequently. Model performance assessments are often limited to traditional residual error analysis; however, this can result in a loss of critical information. Models that fail tests of relative measures of fit may still perform well under measures of absolute fit and mathematical similarity. This implies that models discounted as poor predictors of ecological productivity may still be capturing important patterns. Conversely, models that have been found to be good predictors of productivity may be hiding errors in their sub-processes that produce the right answers for the wrong reasons. Our suite of tests has not only highlighted process-based sources of uncertainty in model productivity calculations, it has also quantified the patterns and scale of this error. Combining these findings with PEcAn's model sensitivity analysis and variance decomposition strengthens our ability to identify which processes need further study and additional data constraints. This can be used to inform future experimental design and, in turn, provide an informative starting point for data assimilation.
Jian Yang; Peter J. Weisberg; Thomas E. Dilts; E. Louise Loudermilk; Robert M. Scheller; Alison Stanton; Carl Skinner
2015-01-01
Strategic fire and fuel management planning benefits from detailed understanding of how wildfire occurrences are distributed spatially under current climate, and from predictive models of future wildfire occurrence given climate change scenarios. In this study, we fitted historical wildfire occurrence data from 1986 to 2009 to a suite of spatial point process (SPP)...
Modeling of Fume Formation from Shielded Metal Arc Welding Process
NASA Astrophysics Data System (ADS)
Sivapirakasam, S. P.; Mohan, Sreejith; Santhosh Kumar, M. C.; Surianarayanan, M.
2017-04-01
In this study, a semi-empirical model of the fume formation rate (FFR) from a shielded metal arc welding (SMAW) process has been developed. The model was developed for DC electrode positive (DCEP) operation and involves calculating the droplet temperature, the surface area of the droplet, and the partial vapor pressures of the constituents of the droplet to predict the FFR. The model was further extended to predict the FFR from nano-coated electrodes. The model estimates the FFR for Fe and Mn, assuming a constant proportion of the other elements in the electrode. The Fe FFR was overestimated, while the Mn FFR was underestimated. The contributions of spatter and other fume-forming mechanisms in the arc were neglected. A good positive correlation was obtained between the predicted and experimental FFR values, which highlights the usefulness of the model.
Testing the Predictions of the Central Capacity Sharing Model
ERIC Educational Resources Information Center
Tombu, Michael; Jolicoeur, Pierre
2005-01-01
The divergent predictions of 2 models of dual-task performance are investigated. The central bottleneck and central capacity sharing models argue that a central stage of information processing is capacity limited, whereas stages before and after are capacity free. The models disagree about the nature of this central capacity limitation. The…
Testing DRAINMOD-FOREST for predicting evapotranspiration in a mid-rotation pine plantation
Shiying Tian; Mohamed A. Youssef; Ge Sun; George M. Chescheir; Asko Noormets; Devendra M. Amatya; R. Wayne Skaggs; John S. King; Steve McNulty; Michael Gavazzi; Guofang Miao; Jean-Christophe Domec
2015-01-01
Evapotranspiration (ET) is a key component of the hydrologic cycle in terrestrial ecosystems and accurate description of ET processes is essential for developing reliable ecohydrological models. This study investigated the accuracy of ET prediction by the DRAINMOD-FOREST after its calibration/validation for predicting commonly measured hydrological variables. The model...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wood, Brian D.
2013-11-04
Biogeochemical reactive transport processes in the subsurface environment are important to many contemporary environmental issues of significance to DOE. Quantification of risks and impacts associated with environmental management options, and design of remediation systems where needed, require that we have at our disposal reliable predictive tools (usually in the form of numerical simulation models). However, it is well known that even the most sophisticated reactive transport models available today have poor predictive power, particularly when applied at the field scale. Although the lack of predictive ability is associated in part with our inability to characterize the subsurface and limitations in computational power, significant advances have been made in both of these areas in recent decades and can be expected to continue. In this research, we examined the upscaling (pore to Darcy, and Darcy to field) of bioremediation via biofilms in porous media. The principal idea was to start with a conceptual description of the bioremediation process at the pore scale, and apply upscaling methods to formally develop the appropriate upscaled model at the so-called Darcy scale. The purpose was to determine (1) what forms the upscaled models would take, and (2) how one might parameterize such upscaled models for application to bioremediation in the field. We were able to effectively upscale the bioremediation process to explain how the pore-scale phenomena are linked to the field scale. The end product of this research was a set of upscaled models that could be used to help predict field-scale bioremediation. These models were mechanistic, in the sense that they directly incorporated pore-scale information, but upscaled so that only the essential features of the process were needed to predict the effective parameters that appear in the model. In this way, a direct link between the microscale and the field scale was made, and the upscaling process helped inform potential users of the model what kinds of information would be needed to accurately characterize the system.
Wenchi Jin; Hong S. He; Frank R. Thompson
2016-01-01
Process-based forest ecosystem models vary from simple physiological to complex physiological to hybrid empirical-physiological models. Previous studies indicate that complex models provide the best prediction at the plot scale over temporal extents of less than 10 years; however, it is largely untested whether complex models outperform the other two types of models...
Microbial burden prediction model for unmanned planetary spacecraft
NASA Technical Reports Server (NTRS)
Hoffman, A. R.; Winterburn, D. A.
1972-01-01
The technical development of a computer program for predicting microbial burden on unmanned planetary spacecraft is outlined. The discussion includes the derivation of the basic analytical equations, the selection of a method for handling several random variables, the macrologic of the computer programs and the validation and verification of the model. The prediction model was developed to (1) supplement the biological assays of a spacecraft by simulating the microbial accretion during periods when assays are not taken; (2) minimize the necessity for a large number of microbiological assays; and (3) predict the microbial loading on a lander immediately prior to sterilization and other non-lander equipment prior to launch. It is shown that these purposes not only were achieved but also that the prediction results compare favorably to the estimates derived from the direct assays. The computer program can be applied not only as a prediction instrument but also as a management and control tool. The basic logic of the model is shown to have possible applicability to other sequential flow processes, such as food processing.
Johnson, Douglas H.; Cook, R.D.
2013-01-01
In her AAAS News & Notes piece "Can the Southwest manage its thirst?" (26 July, p. 362), K. Wren quotes Ajay Kalra, who advocates a particular method for predicting Colorado River streamflow "because it eschews complex physical climate models for a statistical data-driven modeling approach." A preference for data-driven models may be appropriate in this individual situation, but it is not so generally. Data-driven models often come with a warning against extrapolating beyond the range of the data used to develop them. When the future is like the past, data-driven models can work well for prediction, but it is easy to over-model local or transient phenomena, often leading to predictive inaccuracy (1). Mechanistic models are built on established knowledge of the process that connects the response variables with the predictors, using information obtained outside of an extant data set. One may shy away from a mechanistic approach when the underlying process is judged to be too complicated, but good predictive models can be constructed with statistical components that account for ingredients missing from the mechanistic analysis. Models with sound mechanistic components are more generally applicable and robust than data-driven models.
Fluorescence Spectroscopy and Chemometric Modeling for Bioprocess Monitoring
Faassen, Saskia M.; Hitzmann, Bernd
2015-01-01
On-line sensors for the detection of crucial process parameters are desirable for the monitoring, control and automation of processes in the biotechnology, food and pharma industries. Fluorescence spectroscopy is a highly developed, non-invasive technique that enables on-line measurement of substrate and product concentrations and the identification of characteristic process states. During a cultivation process, significant changes occur in the fluorescence spectra. By means of chemometric modeling, prediction models can be calculated and applied for process supervision and control, increasing the quality and productivity of bioprocesses. A range of applications for different microorganisms and analytes has been proposed in recent years. This contribution provides an overview of different analysis methods for the measured fluorescence spectra and the model-building chemometric methods used for various microbial cultivations. Most of these processes are observed using the BioView® Sensor, thanks to its robustness and insensitivity to adverse process conditions. Beyond that, the PLS method is the most frequently used chemometric method for the calculation of process models and the prediction of process variables. PMID:25942644
A Microstructure-Based Constitutive Model for Superplastic Forming
NASA Astrophysics Data System (ADS)
Jafari Nedoushan, Reza; Farzin, Mahmoud; Mashayekhi, Mohammad; Banabic, Dorel
2012-11-01
A constitutive model is proposed for simulations of hot metal forming processes. The model is constructed from the dominant mechanisms that take part in hot forming: intergranular deformation, grain boundary sliding, and grain boundary diffusion. A Taylor-type polycrystalline model is used to predict intergranular deformation. Previous work on grain boundary sliding and grain boundary diffusion is extended to derive three-dimensional macroscopic stress-strain rate relationships for each mechanism. In these relationships, the effect of grain size is also taken into account. The proposed model is first used to simulate step strain-rate tests, and the results are compared with experimental data. It is shown that the model can be used to predict flow stresses for various grain sizes and strain rates. The yield locus is then predicted for multiaxial stress states, and it is observed to be very close to the von Mises yield criterion. It is also shown that the proposed model can be directly used to simulate hot forming processes. The bulge forming and gas-pressure tray forming processes are simulated, and the results are compared with experimental data.
Kosiba, Graham D.; Wixom, Ryan R.; Oehlschlaeger, Matthew A.
2017-10-27
Image processing and stereological techniques were used to characterize the heterogeneity of composite propellant and inform a predictive burn rate model. Composite propellant samples made up of ammonium perchlorate (AP), hydroxyl-terminated polybutadiene (HTPB), and aluminum (Al) were faced with an ion mill and imaged with a scanning electron microscope (SEM) and x-ray tomography (micro-CT). Properties of both the bulk and individual components of the composite propellant were determined from a variety of image processing tools. An algebraic model, based on the improved Beckstead-Derr-Price model developed by Cohen and Strand, was used to predict the steady-state burning of the aluminized composite propellant. In the presented model the presence of aluminum particles within the propellant was introduced. The thermal effects of aluminum particles are accounted for at the solid-gas propellant surface interface, and aluminum combustion is considered in the gas phase using a single global reaction. Properties derived from image processing were used directly as model inputs, leading to a sample-specific predictive combustion model.
Cockpit System Situational Awareness Modeling Tool
NASA Technical Reports Server (NTRS)
Keller, John; Lebiere, Christian; Shay, Rick; Latorella, Kara
2004-01-01
This project explored the possibility of predicting pilot situational awareness (SA) using human performance modeling techniques for the purpose of evaluating developing cockpit systems. The Improved Performance Research Integration Tool (IMPRINT) was combined with the Adaptive Control of Thought-Rational (ACT-R) cognitive modeling architecture to produce a tool that can model both the discrete tasks of pilots and the cognitive processes associated with SA. The techniques for using this tool to predict SA were demonstrated using the newly developed Aviation Weather Information (AWIN) system. By providing an SA prediction tool to cockpit system designers, cockpit concepts can be assessed early in the design process, offering a cost-effective complement to traditional pilot-in-the-loop experiments and data collection techniques.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Soulami, Ayoub; Lavender, Curt A.; Paxton, Dean M.
2014-04-23
Pacific Northwest National Laboratory (PNNL) has been investigating manufacturing processes for the uranium-10% molybdenum (U-10Mo) alloy plate-type fuel for the U.S. high-performance research reactors. This work supports the Convert Program of the U.S. Department of Energy’s National Nuclear Security Administration (DOE/NNSA) Global Threat Reduction Initiative. This report documents modeling results of PNNL’s efforts to perform finite-element simulations to predict roll separating forces and rolling defects. Simulations were performed using a finite-element model developed using the commercial code LS-Dyna. Simulations of the hot rolling of U-10Mo coupons encapsulated in low-carbon steel have been conducted following two different schedules. Model predictions of the roll-separation force and roll-pack thicknesses at different stages of the rolling process were compared with experimental measurements. This report discusses various attributes of the rolled coupons revealed by the model (e.g., dog-boning and thickness non-uniformity).
Eric J. Gustafson
2013-01-01
Researchers and natural resource managers need predictions of how multiple global changes (e.g., climate change, rising levels of air pollutants, exotic invasions) will affect landscape composition and ecosystem function. Ecological predictive models used for this purpose are constructed using either a mechanistic (process-based) or a phenomenological (empirical)...
Swanson, H L
1987-01-01
Three theoretical models (additive, independence, maximum rule) that characterize and predict the influence of independent hemispheric resources on learning-disabled and skilled readers' simultaneous processing were tested. Predictions related to word recall performance during simultaneous encoding conditions (dichotic listening task) were made from unilateral (dichotic listening task) presentations. The maximum rule model best characterized both ability groups in that simultaneous encoding produced no better recall than unilateral presentations. While the results support the hypothesis that both ability groups use similar processes in the combining of hemispheric resources (i.e., weak/dominant processing), ability group differences do occur in the coordination of such resources.
Prakash, J; Srinivasan, K
2009-07-01
In this paper, the authors represent the nonlinear system as a family of local linear state-space models, design local PID controllers on the basis of those linear models, and use the weighted sum of the outputs from the local PID controllers (a nonlinear PID controller) to control the nonlinear process. Further, a Nonlinear Model Predictive Controller using the family of local linear state-space models (F-NMPC) has been developed. The effectiveness of the proposed control schemes is demonstrated on a CSTR process, which exhibits dynamic nonlinearity.
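The weighted-sum scheme described above can be sketched in a few lines of Python. The centers, widths, and local gains below are invented for illustration (a single proportional term stands in for a full PID), not taken from the paper:

```python
import numpy as np

# Hypothetical operating points and local gains for a CSTR-like process;
# all numbers are illustrative, not taken from the paper.
centers = np.array([0.2, 0.5, 0.8])   # scheduling variable (e.g., conversion)
Kp_local = np.array([1.2, 2.5, 4.0])  # local proportional gains
sigma = 0.15                          # width of the validity functions

def weighted_pid_output(y, error):
    """Blend local controller outputs with normalized Gaussian weights."""
    w = np.exp(-0.5 * ((y - centers) / sigma) ** 2)
    w = w / w.sum()                   # normalize so the weights sum to 1
    u_local = Kp_local * error        # each local controller's action
    return float(np.dot(w, u_local))  # weighted sum = nonlinear controller

print(weighted_pid_output(y=0.35, error=0.1))
```

Near a center, one local controller dominates; between centers, the output interpolates smoothly, which is what lets a bank of linear controllers track a nonlinear plant.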
Implementation of new pavement performance prediction models in PMIS : report
DOT National Transportation Integrated Search
2012-08-01
Pavement performance prediction models and maintenance and rehabilitation (M&R) optimization processes enable managers and engineers to plan and prioritize pavement M&R activities in a cost-effective manner. This report describes TxDOT's effort...
Twist Model Development and Results from the Active Aeroelastic Wing F/A-18 Aircraft
NASA Technical Reports Server (NTRS)
Lizotte, Andrew M.; Allen, Michael J.
2007-01-01
Understanding the wing twist of the active aeroelastic wing (AAW) F/A-18 aircraft is a fundamental research objective for the program and offers numerous benefits. In order to clearly understand the wing flexibility characteristics, a model was created to predict real-time wing twist. A reliable twist model allows the prediction of twist for flight simulation, provides insight into aircraft performance uncertainties, and assists with computational fluid dynamic and aeroelastic issues. The left wing of the aircraft was heavily instrumented during the first phase of the active aeroelastic wing program allowing deflection data collection. Traditional data processing steps were taken to reduce flight data, and twist predictions were made using linear regression techniques. The model predictions determined a consistent linear relationship between the measured twist and aircraft parameters, such as surface positions and aircraft state variables. Error in the original model was reduced in some cases by using a dynamic pressure-based assumption. This technique produced excellent predictions for flight between the standard test points and accounted for nonlinearities in the data. This report discusses data processing techniques and twist prediction validation, and provides illustrative and quantitative results.
Twist Model Development and Results From the Active Aeroelastic Wing F/A-18 Aircraft
NASA Technical Reports Server (NTRS)
Lizotte, Andrew; Allen, Michael J.
2005-01-01
Understanding the wing twist of the active aeroelastic wing F/A-18 aircraft is a fundamental research objective for the program and offers numerous benefits. In order to clearly understand the wing flexibility characteristics, a model was created to predict real-time wing twist. A reliable twist model allows the prediction of twist for flight simulation, provides insight into aircraft performance uncertainties, and assists with computational fluid dynamic and aeroelastic issues. The left wing of the aircraft was heavily instrumented during the first phase of the active aeroelastic wing program allowing deflection data collection. Traditional data processing steps were taken to reduce flight data, and twist predictions were made using linear regression techniques. The model predictions determined a consistent linear relationship between the measured twist and aircraft parameters, such as surface positions and aircraft state variables. Error in the original model was reduced in some cases by using a dynamic pressure-based assumption and by using neural networks. These techniques produced excellent predictions for flight between the standard test points and accounted for nonlinearities in the data. This report discusses data processing techniques and twist prediction validation, and provides illustrative and quantitative results.
NASA Astrophysics Data System (ADS)
Grubbs, Guy; Michell, Robert; Samara, Marilia; Hampton, Donald; Hecht, James; Solomon, Stanley; Jahn, Jorg-Micha
2018-01-01
It is important to routinely examine and update models used to predict auroral emissions resulting from precipitating electrons in Earth's magnetotail. These models are commonly used to invert spectral auroral ground-based images to infer characteristics about incident electron populations when in situ measurements are unavailable. In this work, we examine and compare auroral emission intensities predicted by three commonly used electron transport models using varying electron population characteristics. We then compare model predictions to same-volume in situ electron measurements and ground-based imaging to qualitatively examine modeling prediction error. Initial comparisons showed differences in predictions by the GLobal airglOW (GLOW) model and the other transport models examined. Chemical reaction rates and radiative rates in GLOW were updated using recent publications, and predictions showed better agreement with the other models and the same-volume data, stressing that these rates are important to consider when modeling auroral processes. Predictions by each model exhibit similar behavior for varying atmospheric constants, energies, and energy fluxes. Same-volume electron data and images are highly correlated with predictions by each model, showing that these models can be used to accurately derive electron characteristics and ionospheric parameters based solely on multispectral optical imaging data.
Prediction model of sinoatrial node field potential using high order partial least squares.
Feng, Yu; Cao, Hui; Zhang, Yanbin
2015-01-01
High order partial least squares (HOPLS) is a novel data processing method. It is highly suitable for building prediction models that have tensor inputs and outputs. The objective of this study is to build a prediction model of the relationship between sinoatrial node field potential and high glucose using HOPLS. The three sub-signals of the sinoatrial node field potential made up the model's input. The concentration and the actuation duration of high glucose made up the model's output. The results showed that, when predicting two-dimensional variables, HOPLS had the same predictive ability as and a lower dispersion degree than partial least squares (PLS).
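HOPLS itself is not available in mainstream Python libraries, but the overall workflow (a regression with a two-dimensional output) can be sketched with ordinary PLS in scikit-learn. All data below are synthetic stand-ins:

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(0)
# Synthetic stand-in: 30 flattened signal features -> 2 outputs
# (e.g., glucose concentration and actuation duration).
X = rng.normal(size=(60, 30))                    # 60 trials, 30 features
true_W = rng.normal(size=(30, 2))
Y = X @ true_W + 0.1 * rng.normal(size=(60, 2))  # two-dimensional response

pls = PLSRegression(n_components=5)
pls.fit(X[:45], Y[:45])                          # train on the first 45 trials
Y_hat = pls.predict(X[45:])                      # predict both outputs at once
rmse = np.sqrt(((Y_hat - Y[45:]) ** 2).mean(axis=0))
print("per-output RMSE:", rmse)
```

HOPLS would additionally exploit the tensor structure of the sub-signals rather than flattening them, which is where the reported reduction in dispersion comes from.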
NASA Technical Reports Server (NTRS)
Sasmal, G. P.; Hochstein, J. I.; Wendl, M. C.; Hardy, T. L.
1991-01-01
A multidimensional computational model of the pressurization process in a slush hydrogen propellant storage tank was developed and its accuracy evaluated by comparison to experimental data measured for a 5 ft diameter spherical tank. The fluid mechanic, thermodynamic, and heat transfer processes within the ullage are represented by a finite-volume model. The model was shown to be in reasonable agreement with the experimental data. A parameter study was undertaken to examine the dependence of the pressurization process on initial ullage temperature distribution and pressurant mass flow rate. It is shown that for a given heat flux rate at the ullage boundary, the pressurization process is nearly independent of initial temperature distribution. Significant differences were identified between the ullage temperature and velocity fields predicted for pressurization of slush and those predicted for pressurization of liquid hydrogen. A simplified model of the pressurization process was constructed in search of a dimensionless characterization of the pressurization process. It is shown that the relationship derived from this simplified model collapses all of the pressure history data generated during this study into a single curve.
NASA Astrophysics Data System (ADS)
Denissenkov, Pavel; Perdikakis, Georgios; Herwig, Falk; Schatz, Hendrik; Ritter, Christian; Pignatari, Marco; Jones, Samuel; Nikas, Stylianos; Spyrou, Artemis
2018-05-01
The first-peak s-process elements Rb, Sr, Y and Zr in the post-AGB star Sakurai's object (V4334 Sagittarii) have been proposed to be the result of i-process nucleosynthesis in a post-AGB very-late thermal pulse event. We estimate the nuclear physics uncertainties in the i-process model predictions to determine whether the remaining discrepancies with observations are significant and point to potential issues with the underlying astrophysical model. We find that the dominant source of nuclear physics uncertainty is the prediction of neutron capture rates on unstable neutron-rich nuclei, which can have uncertainties of more than a factor of 20 in the band of the i-process. We use a Monte Carlo variation of 52 neutron capture rates and a 1D multi-zone post-processing model for the i-process in Sakurai's object to determine the cumulative effect of these uncertainties on the final elemental abundance predictions. We find that the nuclear physics uncertainties are large and comparable to observational errors. Within these uncertainties the model predictions are consistent with observations. A correlation analysis of the results of our MC simulations reveals that the strongest impact on the predicted abundances of Rb, Sr, Y and Zr is made by the uncertainties in the (n, γ) reaction rates of 85Br, 86Br, 87Kr, 88Kr, 89Kr, 89Rb, 89Sr, and 92Sr. This conclusion is supported by a series of multi-zone simulations in which we increased and decreased one or two reaction rates per run to their maximum and minimum limits. We also show that simple and fast one-zone simulations should not be used instead of more realistic multi-zone stellar simulations for nuclear sensitivity and uncertainty studies of convective–reactive processes. Our findings apply more generally to any i-process site with similar neutron exposure, such as rapidly accreting white dwarfs with near-solar metallicities.
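A toy illustration of this kind of Monte Carlo rate-uncertainty propagation, using an invented two-step capture chain in place of the paper's 52-rate network and multi-zone model:

```python
import numpy as np

rng = np.random.default_rng(7)

# Toy chain A --(rate r1*f1)--> B --(rate r2*f2)--> C over a fixed exposure tau.
# The factors f1, f2 vary log-uniformly by up to x20 in each direction,
# mimicking the rate variation described above; all values are invented.
def final_abundances(f1, f2, tau=1.0, r1=1.0, r2=0.5):
    k1, k2 = r1 * f1, r2 * f2
    A = np.exp(-k1 * tau)                                    # survivors of A
    B = k1 / (k2 - k1) * (np.exp(-k1 * tau) - np.exp(-k2 * tau))
    return A, B, 1.0 - A - B                                 # mass conservation

factors = 20.0 ** rng.uniform(-1, 1, size=(5000, 2))         # 1/20x to 20x
out = np.array([final_abundances(f1, f2) for f1, f2 in factors])
band = np.percentile(out, [16, 84], axis=0)                  # ~1-sigma bands
print("abundance bands (A, B, C):\n", band)
# Correlation of each varied rate with the abundance of B, in the spirit of
# the paper's correlation analysis:
print("corr(f1, B) =", np.corrcoef(factors[:, 0], out[:, 1])[0, 1])
```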
DOE Office of Scientific and Technical Information (OSTI.GOV)
Simpson, L.; Britt, J.; Birkmire, R.
ITN Energy Systems, Inc., and Global Solar Energy, Inc., assisted by NREL's PV Manufacturing R&D program, have continued to advance CIGS production technology by developing trajectory-oriented predictive/control models, fault-tolerance control, control platform development, in-situ sensors, and process improvements. Modeling activities included developing physics-based and empirical models for CIGS and sputter-deposition processing, implementing model-based control, and applying predictive models to the construction of new evaporation sources and for control. Model-based control is enabled by implementing reduced or empirical models into a control platform. Reliability improvement activities include implementing preventive maintenance schedules; detecting failed sensors/equipment and reconfiguring to continue processing; and systematic development of fault prevention and reconfiguration strategies for the full range of CIGS PV production deposition processes. In-situ sensor development activities have resulted in improved control and indicated the potential for enhanced process status monitoring and control of the deposition processes. Substantial process improvements have been made, including significant improvement in CIGS uniformity, thickness control, efficiency, yield, and throughput. In large measure, these gains have been driven by process optimization, which in turn has been enabled by control and reliability improvements due to this PV Manufacturing R&D program.
A Gaussian Processes Technique for Short-term Load Forecasting with Considerations of Uncertainty
NASA Astrophysics Data System (ADS)
Ohmi, Masataro; Mori, Hiroyuki
In this paper, an efficient method is proposed to deal with short-term load forecasting with Gaussian Processes. Short-term load forecasting plays a key role in smooth power system operation such as economic load dispatching, unit commitment, etc. Recently, the deregulated and competitive power market has increased the degree of uncertainty. As a result, it is more important to obtain better prediction results to save cost. One of the most important aspects is that power system operators need the upper and lower bounds of the predicted load to deal with the uncertainty while also requiring more accurate predicted values. The proposed method is based on the Bayes model, in which the output is expressed as a distribution rather than a point. To realize the model efficiently, this paper proposes a Gaussian Process that consists of the Bayes linear model and a kernel machine to obtain the distribution of the predicted value. The proposed method is successfully applied to real data of daily maximum load forecasting.
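A compact sketch of such a probabilistic load forecast, using scikit-learn's Gaussian process regressor as a stand-in for the paper's Bayes linear model plus kernel machine; the load series and kernel choices are illustrative:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(1)
t = np.arange(96).reshape(-1, 1)   # 15-minute steps over one synthetic day
load = 50 + 20 * np.sin(2 * np.pi * t.ravel() / 96) + rng.normal(0, 2, 96)

gpr = GaussianProcessRegressor(kernel=RBF(10.0) + WhiteKernel(1.0),
                               normalize_y=True)
gpr.fit(t[:72], load[:72])                           # train on the first 18 h
mean, std = gpr.predict(t[72:], return_std=True)     # predictive distribution
upper, lower = mean + 1.96 * std, mean - 1.96 * std  # ~95% operator bounds
print(lower[:4], upper[:4])
```

The point here is exactly what the abstract emphasizes: the prediction is a distribution, so the upper and lower bounds that operators need come directly from the posterior standard deviation.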
Ireland, Jane L; Adams, Christine
2015-01-01
The current study explores associations between implicit and explicit aggression in young adult male prisoners, seeking to apply the Reflection-Impulsive Model and indicate parity with elements of the General Aggression Model and social cognition. Implicit cognitive aggressive processing is not an area that has been examined among prisoners. Two hundred and sixty-two prisoners completed an implicit cognitive aggression measure (Puzzle Test) and explicit aggression measures, covering current behaviour (DIPC-R) and aggression disposition (AQ). It was predicted that dispositional aggression would be predicted by implicit cognitive aggression, and that implicit cognitive aggression would predict current engagement in aggressive behaviour. It was also predicted that more impulsive implicit cognitive processing would associate with aggressive behaviour whereas cognitively effortful implicit cognitive processing would not. Implicit aggressive cognitive processing was associated with increased dispositional aggression but not current reports of aggressive behaviour. Impulsive implicit cognitive processing of an aggressive nature predicted increased dispositional aggression whereas more cognitively effortful implicit cognitive aggression did not. The article concludes by outlining the importance of accounting for implicit cognitive processing among prisoners and the need to separate such processing into facets (i.e. impulsive vs. cognitively effortful). Implications for future research and practice in this novel area of study are indicated. Copyright © 2015 Elsevier Ltd. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Puskar, Joseph David; Quintana, Michael A.; Sorensen, Neil Robert
A program is underway at Sandia National Laboratories to predict long-term reliability of photovoltaic (PV) systems. The vehicle for the reliability predictions is a Reliability Block Diagram (RBD), which models system behavior. Because this model is based mainly on field failure and repair times, it can be used to predict current reliability, but it cannot currently be used to accurately predict lifetime. In order to be truly predictive, physics-informed degradation processes and failure mechanisms need to be included in the model. This paper describes accelerated life testing of metal foil tapes used in thin-film PV modules, and how tape joint degradation, a possible failure mode, can be incorporated into the model.
Rodriguez, Christina M.; Smith, Tamika L.; Silvia, Paul J.
2015-01-01
The Social Information Processing (SIP) model postulates that parents undergo a series of stages in implementing physical discipline that can escalate into physical child abuse. The current study utilized a multimethod approach to investigate whether SIP factors can predict risk of parent-child aggression (PCA) in a diverse sample of expectant mothers and fathers. SIP factors of PCA attitudes, negative child attributions, reactivity, and empathy were considered as potential predictors of PCA risk; additionally, analyses considered whether personal history of PCA predicted participants’ own PCA risk through its influence on their attitudes and attributions. Findings indicate that, for both mothers and fathers, history influenced attitudes but not attributions in predicting PCA risk, and attitudes and attributions predicted PCA risk; empathy and reactivity predicted negative child attributions for expectant mothers, but only reactivity significantly predicted attributions for expectant fathers. Path models for expectant mothers and fathers were remarkably similar. Overall, the findings provide support for major aspects of the SIP model. Continued work is needed in studying the progression of these factors across time for both mothers and fathers as well as the inclusion of other relevant ecological factors to the SIP model. PMID:26631420
Genetic and phylogenetic consequences of island biogeography.
Johnson, K P; Adler, F R; Cherry, J L
2000-04-01
Island biogeography theory predicts that the number of species on an island should increase with island size and decrease with island distance to the mainland. These predictions are generally well supported in comparative and experimental studies. These ecological, equilibrium predictions arise as a result of colonization and extinction processes. Because colonization and extinction are also important processes in evolution, we develop methods to test evolutionary predictions of island biogeography. We derive a population genetic model of island biogeography that incorporates island colonization, migration of individuals from the mainland, and extinction of island populations. The model provides a means of estimating the rates of migration and extinction from population genetic data. This model predicts that within an island population the distribution of genetic divergences with respect to the mainland source population should be bimodal, with much of the divergence dating to the colonization event. Across islands, this model predicts that populations on large islands should be on average more genetically divergent from mainland source populations than those on small islands. Likewise, populations on distant islands should be more divergent than those on close islands. Published observations of a larger proportion of endemic species on large and distant islands support these predictions.
NASA Astrophysics Data System (ADS)
Huda, Nazmul; Naser, Jamal; Brooks, Geoffrey; Reuter, Markus A.; Matusewicz, Robert W.
2012-02-01
Slag fuming is a reductive treatment process for molten zinciferous slags for extracting zinc in the form of metal vapor by injecting or adding a reductant source such as pulverized coal or lump coal and natural gas. A computational fluid dynamic (CFD) model was developed to study the zinc slag fuming process from imperial smelting furnace (ISF) slag in a top-submerged lance furnace and to investigate the details of fluid flow, reaction kinetics, and heat transfer in the furnace. The model integrates combustion phenomena and chemical reactions with the heat, mass, and momentum interfacial interaction between the phases present in the system. A commercial CFD package AVL Fire 2009.2 (AVL, Graz, Austria) coupled with a number of user-defined subroutines in FORTRAN programming language were used to develop the model. The model is based on a three-dimensional (3-D) Eulerian multiphase flow approach, and it predicts the velocity and temperature field of the molten slag bath, generated turbulence, and vortex and plume shape at the lance tip. The model also predicts the mass fractions of slag and gaseous components inside the furnace. The model predicted that the percent of ZnO in the slag bath decreases linearly with time, which is broadly consistent with the experimental data. The zinc fuming rate from the slag bath predicted by the model was validated through a macrostep validation process against the experimental study of Waladan et al. The model results predicted that the rate of ZnO reduction is controlled by the mass transfer of ZnO from the bulk slag to the slag-gas interface and the rate of the gas-carbon reaction for the simulation times studied. Although the model is based on zinc slag fuming, the basic approach could be expanded or applied to the CFD analysis of analogous systems.
Kumarapeli, P; De Lusignan, S; Ellis, T; Jones, B
2007-03-01
The Primary Care Data Quality programme (PCDQ) is a quality-improvement programme which processes routinely collected general practice computer data. Patient data collected from a wide range of different brands of clinical computer systems are aggregated, processed, and fed back to practices in an educational context to improve the quality of care. Process modelling is a well-established approach used to gain understanding and systematic appraisal, and identify areas of improvement of a business process. Unified modelling language (UML) is a general-purpose modelling technique used for this purpose. We used UML to appraise the PCDQ process to see if the efficiency and predictability of the process could be improved. Activity analysis and thinking-aloud sessions were used to collect data to generate UML diagrams. The UML model highlighted the sequential nature of the current process as a barrier for efficiency gains. It also identified the uneven distribution of process controls, lack of symmetric communication channels, critical dependencies among processing stages, and failure to implement all the lessons learned in the piloting phase. It also suggested that improved structured reporting at each stage (especially from the pilot phase), parallel processing of data, and correctly positioned process controls should improve the efficiency and predictability of research projects. Process modelling provided a rational basis for the critical appraisal of a clinical data processing system; its potential may be underutilized within health care.
Advanced modelling, monitoring, and process control of bioconversion systems
NASA Astrophysics Data System (ADS)
Schmitt, Elliott C.
Production of fuels and chemicals from lignocellulosic biomass is an increasingly important area of research and industrialization throughout the world. In order to be competitive with fossil-based fuels and chemicals, maintaining cost-effectiveness is critical. Advanced process control (APC) and optimization methods could significantly reduce operating costs in the biorefining industry. Two reasons APC has previously proven challenging to implement for bioprocesses include: lack of suitable online sensor technology for key system components, and the strongly nonlinear first-principles models required to predict bioconversion behavior. To overcome these challenges, batch fermentations with the acetogen Moorella thermoacetica were monitored with Raman spectroscopy for the conversion of real lignocellulosic hydrolysates, and a kinetic model for the conversion of synthetic sugars was developed. Raman spectroscopy was shown to be effective in monitoring the fermentation of sugarcane bagasse and sugarcane straw hydrolysate, where univariate models predicted acetate concentrations with a root mean square error of prediction (RMSEP) of 1.9 and 1.0 g L-1 for bagasse and straw, respectively. Multivariate partial least squares (PLS) models were employed to predict acetate, xylose, glucose, and total sugar concentrations for both hydrolysate fermentations. The PLS models were more robust than the univariate models, and yielded a percent error of approximately 5% for both sugarcane bagasse and sugarcane straw. In addition, a screening technique was discussed for improving Raman spectra of hydrolysate samples prior to collecting fermentation data. Furthermore, a mechanistic model was developed to predict batch fermentation of synthetic glucose, xylose, and a mixture of the two sugars to acetate. The models accurately described the bioconversion process with an RMSEP of approximately 1 g L-1 for each model and provided insights into how kinetic parameters changed during dual substrate fermentation with diauxic growth. Model predictive control (MPC), an advanced process control strategy, is capable of utilizing nonlinear models and sensor feedback to provide optimal input while ensuring critical process constraints are met. Building on the work performed with M. thermoacetica and using Saccharomyces cerevisiae, a microorganism commonly used for biofuel production, a nonlinear MPC was implemented on a continuous membrane cell-recycle bioreactor (MCRB) for the conversion of glucose to ethanol. The dilution rate was used to control the ethanol productivity of the system while maintaining total substrate conversion above the constraint of 98%. PLS multivariate models for glucose (RMSEP 1.5 g L-1) and ethanol (RMSEP 0.4 g L-1) were robust in predicting concentrations, and the mechanistic kinetic model that was built accurately predicted continuous fermentation behavior. A setpoint trajectory, ranging from 2 - 4.5 g L-1 h-1 for productivity, was closely tracked by the fermentation system using Raman measurements and an extended Kalman filter to estimate biomass concentrations. Overall, this work was able to demonstrate an effective approach for real-time monitoring and control of a complex fermentation system.
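A minimal receding-horizon sketch of the control idea, assuming a toy Monod chemostat and a quadratic penalty on violating the 98% conversion constraint; the kinetic constants, horizon length, and set point are all invented:

```python
import numpy as np
from scipy.optimize import minimize

# Toy chemostat: biomass X, substrate S, product P; all constants illustrative.
mu_max, Ks, Y_xs, Y_ps, S_in = 0.4, 0.5, 0.5, 0.45, 100.0

def step(state, D, dt=0.5):
    X, S, P = state
    mu = mu_max * S / (Ks + S)                     # Monod growth rate
    X += dt * (mu - D) * X
    S += dt * (D * (S_in - S) - mu * X / Y_xs)
    P += dt * (Y_ps * mu * X - D * P)
    return np.array([X, S, P])

def cost(D_seq, state, target=3.0):
    """Track a productivity set point D*P; penalize conversion below 98%."""
    J = 0.0
    for D in D_seq:
        state = step(state, D)
        conversion = 1.0 - state[1] / S_in
        J += (D * state[2] - target) ** 2 + 1e3 * max(0.0, 0.98 - conversion) ** 2
    return J

state = np.array([5.0, 1.0, 40.0])
for _ in range(5):   # receding horizon: apply the first move, then re-optimize
    res = minimize(cost, x0=np.full(4, 0.05), args=(state,),
                   bounds=[(0.0, 0.3)] * 4, method="L-BFGS-B")
    state = step(state, res.x[0])
    print(f"D={res.x[0]:.3f}, productivity={res.x[0] * state[2]:.2f}")
```

In the thesis setting, the state fed to the optimizer would come from the Raman-based PLS estimates and the extended Kalman filter rather than from the model itself.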
The role of bias in simulation of the Indian monsoon and its relationship to predictability
NASA Astrophysics Data System (ADS)
Kelly, P.
2016-12-01
Confidence in future projections of how climate change will affect the Indian monsoon is currently limited by, among other things, model biases, that is, the systematic error in simulating the mean present-day climate. An important priority question in seamless prediction involves the role of the mean state. How much of the prediction error in imperfect models stems from a biased mean state (itself a result of many interacting process errors), and how much stems from the flow dependence of processes during an oscillation or variation we are trying to predict? Using simple but effective nudging techniques, we are able to address this question in a clean and incisive framework that teases apart the roles of the mean state vs. transient flow dependence in constraining predictability. The role of bias in model fidelity of simulations of the Indian monsoon is investigated in CAM5, and the relationship to predictability in remote regions in the "free" (non-nudged) domain is explored.
Wen J. Wang; Hong S. He; Jacob S. Fraser; Frank R. Thompson; Stephen R. Shifley; Martin A. Spetich
2014-01-01
LANDIS PRO predicts forest composition and structure changes incorporating species-, stand-, and landscape-scale processes at regional scales. Species-scale processes include tree growth, establishment, and mortality. Stand-scale processes contain density- and size-related resource competition that regulates self-thinning and seedling establishment. Landscape-scale...
NASA Astrophysics Data System (ADS)
Fazli Shahri, Hamid Reza; Mahdavinejad, Ramezanali
2018-02-01
Thermal-based processes with a Gaussian heat source often produce excessive temperatures which can impose thermally-affected layers in specimens. Therefore, the temperature distribution and Heat Affected Zone (HAZ) of materials are two critical factors which are influenced by different process parameters. Measurement of the HAZ thickness and temperature distribution within the processes is not only difficult but also expensive. This research aims at finding valuable knowledge on these factors by prediction of the process through a novel combinatory model. In this study, an integrated Artificial Neural Network (ANN) and genetic algorithm (GA) model was used to predict the HAZ and temperature distribution of the specimens. To this end, a series of full factorial design of experiments was first conducted by applying a Gaussian heat flux on Ti-6Al-4V; then the temperature of the specimen was measured by infrared thermography. The HAZ width of each sample was investigated through measuring the microhardness. Secondly, the experimental data were used to create a GA-ANN model. The efficiency of the GA in the design and optimization of the architecture of the ANN was investigated. The GA was used to determine the optimal number of neurons in the hidden layer, and the learning rate and momentum coefficient of both the output and hidden layers of the ANN. Finally, the reliability of the models was assessed according to the experimental results and statistical indicators. The results demonstrated that the combinatory model predicted the HAZ and temperature more effectively than a trial-and-error ANN model.
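A rough sketch of the GA-over-ANN idea, assuming scikit-learn's MLPRegressor as the network and a toy genetic loop over the three tuned hyperparameters (hidden neurons, learning rate, momentum); the data and GA settings are illustrative:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(2)
# Synthetic stand-in data: 3 process inputs -> one thermal response.
X = rng.uniform(size=(200, 3))
y = 0.5 * X[:, 0] + np.sin(3 * X[:, 1]) + X[:, 2] ** 2 + 0.05 * rng.normal(size=200)
Xtr, Xva, ytr, yva = train_test_split(X, y, random_state=0)

def fitness(genome):
    n, lr, mom = int(genome[0]), float(genome[1]), float(genome[2])
    net = MLPRegressor(hidden_layer_sizes=(n,), solver="sgd", momentum=mom,
                       learning_rate_init=lr, max_iter=400, random_state=0)
    net.fit(Xtr, ytr)
    return -np.mean((net.predict(Xva) - yva) ** 2)    # negative validation MSE

pop = np.column_stack([rng.integers(2, 30, 12).astype(float),  # hidden neurons
                       rng.uniform(1e-3, 0.1, 12),             # learning rate
                       rng.uniform(0.1, 0.95, 12)])            # momentum
for generation in range(5):
    scores = np.array([fitness(g) for g in pop])
    parents = pop[np.argsort(scores)[-4:]]             # keep the best four
    children = parents[rng.integers(0, 4, 8)].copy()   # clone, then mutate
    children[:, 0] = np.clip(children[:, 0] + rng.integers(-3, 4, 8), 2, 40)
    children[:, 1] = np.clip(children[:, 1] * rng.normal(1, 0.2, 8), 1e-4, 0.2)
    children[:, 2] = np.clip(children[:, 2] * rng.normal(1, 0.1, 8), 0.0, 0.95)
    pop = np.vstack([parents, children])
best = pop[np.argmax([fitness(g) for g in pop])]
print("best (neurons, learning rate, momentum):", best)
```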
Black, Stephanie Winkeljohn; Pössel, Patrick
2013-08-01
Adolescents who develop depression have worse interpersonal and affective experiences and are more likely to develop substance problems and/or suicidal ideation compared to adolescents who do not develop depression. This study examined the combined effects of negative self-referent information processing and rumination (i.e., brooding and reflection) on adolescent depressive symptoms. It was hypothesized that the interaction of negative self-referent information processing and brooding would significantly predict depressive symptoms, while the interaction of negative self-referent information processing and reflection would not predict depressive symptoms. Adolescents (n = 92; 13-15 years; 34.7% female) participated in a 6-month longitudinal study. Self-report instruments measured depressive symptoms and rumination; a cognitive task measured information processing. Path modelling in Amos 19.0 analyzed the data. The interaction of negative information processing and brooding significantly predicted an increase in depressive symptoms 6 months later. The interaction of negative information processing and reflection did not significantly predict depression; however, the model did not meet a priori standards for accepting the null hypothesis. Results suggest clinicians working with adolescents at risk for depression should consider focusing on the reduction of brooding and negative information processing to reduce long-term depressive symptoms.
Scalable Predictive Analysis in Critically Ill Patients Using a Visual Open Data Analysis Platform
Poucke, Sven Van; Zhang, Zhongheng; Schmitz, Martin; Vukicevic, Milan; Laenen, Margot Vander; Celi, Leo Anthony; Deyne, Cathy De
2016-01-01
With the accumulation of large amounts of health related data, predictive analytics could stimulate the transformation of reactive medicine towards Predictive, Preventive and Personalized (PPPM) Medicine, ultimately affecting both cost and quality of care. However, high-dimensionality and high-complexity of the data involved, prevents data-driven methods from easy translation into clinically relevant models. Additionally, the application of cutting edge predictive methods and data manipulation require substantial programming skills, limiting its direct exploitation by medical domain experts. This leaves a gap between potential and actual data usage. In this study, the authors address this problem by focusing on open, visual environments, suited to be applied by the medical community. Moreover, we review code free applications of big data technologies. As a showcase, a framework was developed for the meaningful use of data from critical care patients by integrating the MIMIC-II database in a data mining environment (RapidMiner) supporting scalable predictive analytics using visual tools (RapidMiner’s Radoop extension). Guided by the CRoss-Industry Standard Process for Data Mining (CRISP-DM), the ETL process (Extract, Transform, Load) was initiated by retrieving data from the MIMIC-II tables of interest. As use case, correlation of platelet count and ICU survival was quantitatively assessed. Using visual tools for ETL on Hadoop and predictive modeling in RapidMiner, we developed robust processes for automatic building, parameter optimization and evaluation of various predictive models, under different feature selection schemes. Because these processes can be easily adopted in other projects, this environment is attractive for scalable predictive analytics in health research. PMID:26731286
Enzyme clustering accelerates processing of intermediates through metabolic channeling
Castellana, Michele; Wilson, Maxwell Z.; Xu, Yifan; Joshi, Preeti; Cristea, Ileana M.; Rabinowitz, Joshua D.; Gitai, Zemer; Wingreen, Ned S.
2015-01-01
We present a quantitative model to demonstrate that coclustering multiple enzymes into compact agglomerates accelerates the processing of intermediates, yielding the same efficiency benefits as direct channeling, a well-known mechanism in which enzymes are funneled between enzyme active sites through a physical tunnel. The model predicts the separation and size of coclusters that maximize metabolic efficiency, and this prediction is in agreement with previously reported spacings between coclusters in mammalian cells. For direct validation, we study a metabolic branch point in Escherichia coli and experimentally confirm the model prediction that enzyme agglomerates can accelerate the processing of a shared intermediate by one branch, and thus regulate steady-state flux division. Our studies establish a quantitative framework to understand coclustering-mediated metabolic channeling and its application to both efficiency improvement and metabolic regulation. PMID:25262299
Runoff prediction is a cornerstone of water resources planning, and therefore modeling performance is a key issue. This paper investigates the comparative advantages of conceptual versus process-based models in predicting warm season runoff for upland, low-yield micro-catchments...
NASA Astrophysics Data System (ADS)
Yang, Xue-Min; Li, Jin-Yan; Chai, Guo-Ming; Duan, Dong-Ping; Zhang, Jian
2016-08-01
According to the experimental results of hot metal dephosphorization by CaO-based slags at a commercial-scale hot metal pretreatment station, the 16 models collected from the literature for the equilibrium quotient k_P or phosphorus partition L_P between CaO-based slags and iron-based melts have been evaluated. The collected 16 models for predicting the equilibrium quotient k_P can be transferred to predict the phosphorus partition L_P. The predicted results by the collected 16 models cannot be applied as criteria for evaluating k_P or L_P due to the various forms or definitions of k_P or L_P. Thus, the measured phosphorus content [pct P] in the hot metal bath at the end point of the dephosphorization pretreatment process is applied as the fixed criterion for evaluating the collected 16 models. The collected 16 models can be described in the form of linear functions as y = c_0 + c_1 x, in which the independent variable x represents the chemical composition of slags, the intercept c_0, including the constant term, describes the temperature effect and other unmentioned or acquiescent thermodynamic factors, and the slope c_1 is regressed from the experimental results of k_P or L_P. Thus, a general approach to developing a thermodynamic model for predicting the equilibrium quotient k_P, the phosphorus partition L_P, or [pct P] in iron-based melts during the dephosphorization process is proposed by revising the constant term in the intercept c_0 for the summarized 15 models except Suito's model (M3). The better models with an ideal revising possibility or flexibility among the collected 16 models have been selected and recommended. Compared with the predicted results by the revised 15 models and Suito's model (M3), the developed IMCT-L_P model coupled with the proposed dephosphorization mechanism by the present authors can be applied to accurately predict the phosphorus partition L_P with the lowest mean deviation δ_{L_P} of log L_P as 2.33, as well as to predict [pct P] in the hot metal bath with the smallest mean deviation δ_{[pct P]} of [pct P] as 12.31.
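A minimal sketch of the paper's linear form and the intercept-revision step, with invented slag-composition data standing in for both the plant measurements and the literature model:

```python
import numpy as np

# Illustrative data in the paper's linear form y = c0 + c1*x, where y = log L_P
# and x is a slag-composition variable; all numbers are invented, not measured.
x = np.array([0.58, 0.60, 0.63, 0.65, 0.68, 0.70])
logLp_measured = np.array([1.1, 1.3, 1.7, 1.9, 2.3, 2.5])

# Suppose a literature model supplies the slope c1 but its constant term was
# fitted to other slags; "revising the constant term" refits only c0 so that
# the model reproduces the plant's end-point measurements on average.
c1_literature = 11.0                 # hypothetical published slope
c0_revised = np.mean(logLp_measured - c1_literature * x)
pred = c0_revised + c1_literature * x
print("revised c0 = %.3f, mean |error| = %.3f"
      % (c0_revised, np.mean(np.abs(pred - logLp_measured))))
```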
Uncertainty prediction for PUB
NASA Astrophysics Data System (ADS)
Mendiondo, E. M.; Tucci, C. M.; Clarke, R. T.; Castro, N. M.; Goldenfum, J. A.; Chevallier, P.
2003-04-01
IAHS’ initiative of Predictions in Ungauged Basins (PUB) attempts to integrate monitoring needs and uncertainty prediction for river basins. This paper outlines alternative ways of uncertainty prediction which could be linked with new blueprints for PUB, thereby showing how equifinality-based models should be grasped using practical strategies of gauging like the Nested Catchment Experiment (NCE). Uncertainty prediction is discussed from observations of the Potiribu Project, which is an NCE layout at representative basins of a subtropical biome of 300,000 km2 in South America. Uncertainty prediction is assessed at the microscale (1 m2 plots), at the hillslope (0.125 km2) and at the mesoscale (0.125-560 km2). At the microscale, uncertainty-based models are constrained by temporal variations of state variables with changing likelihood surfaces of experiments using the Green-Ampt model. Two new blueprints emerged from this NCE for PUB: (1) the Scale Transferability Scheme (STS) at the hillslope scale and (2) the Integrating Process Hypothesis (IPH) at the mesoscale. The STS integrates multi-dimensional scaling with similarity thresholds, as a generalization of the Representative Elementary Area (REA), using spatial correlation from point (distributed) to area (lumped) processes. In this way, STS addresses uncertainty bounds of model parameters in an upscaling process at the hillslope. On the other hand, the IPH approach regionalizes synthetic hydrographs, thereby interpreting the uncertainty bounds of streamflow variables. Multiscale evidence from the Potiribu NCE layout shows novel pathways of uncertainty prediction under a PUB perspective in representative basins of world biomes.
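For reference, the Green-Ampt model mentioned above can be evaluated with a short fixed-point iteration; the soil parameters below are illustrative, not values from the Potiribu plots:

```python
import math

K, PSI, DTHETA = 1.0, 11.0, 0.3   # illustrative: K in cm/h, suction head in cm

def green_ampt_F(t_hours, iters=50):
    """Cumulative infiltration F (cm) from the implicit Green-Ampt relation
    F = K*t + PSI*DTHETA*ln(1 + F/(PSI*DTHETA)), via fixed-point iteration."""
    pd = PSI * DTHETA
    F = max(K * t_hours, 1e-6)    # initial guess
    for _ in range(iters):
        F = K * t_hours + pd * math.log(1.0 + F / pd)
    return F

for t in (0.5, 1.0, 2.0):
    F = green_ampt_F(t)
    f = K * (1.0 + PSI * DTHETA / F)   # infiltration capacity f = K(1 + psi*dtheta/F)
    print(f"t={t} h: cumulative F={F:.2f} cm, rate f={f:.2f} cm/h")
```

Likelihood surfaces of the kind described in the abstract would then be built by scoring many (K, PSI, DTHETA) parameter sets against observed infiltration.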
Analytical Modeling and Performance Prediction of Remanufactured Gearbox Components
NASA Astrophysics Data System (ADS)
Pulikollu, Raja V.; Bolander, Nathan; Vijayakar, Sandeep; Spies, Matthew D.
Gearbox components operate in extreme environments, often leading to premature removal or overhaul. Though worn or damaged, these components can still function if the appropriate remanufacturing processes are deployed. Doing so saves a significant amount of resources (time, materials, energy, manpower) otherwise required to produce a replacement part. Unfortunately, current design and analysis approaches require extensive testing and evaluation to validate the effectiveness and safety of a component that has been used in the field and then processed outside of original OEM specification. To test every possible combination of component, level of potential damage, and processing option would be an expensive and time-consuming feat, thus prohibiting broad deployment of remanufacturing processes across industry. However, such evaluation and validation can occur through Integrated Computational Materials Engineering (ICME) modeling and simulation. Sentient developed a microstructure-based component life prediction (CLP) tool to quantify and assist the remanufacturing of gearbox components. This was achieved by modeling the design-manufacturing-microstructure-property relationship. The CLP tool assists in the remanufacturing of high-value, high-demand rotorcraft, automotive, and wind turbine gears and bearings. This paper summarizes the CLP model development and validation efforts by comparing the simulation results with rotorcraft spiral bevel gear physical test data. CLP analyzes gear components and systems for safety, longevity, reliability, and cost by predicting (1) new gearbox component performance and the optimal time to remanufacture, (2) qualification of used gearbox components for the remanufacturing process, and (3) remanufactured component performance.
NASA Astrophysics Data System (ADS)
Bardant, Teuku Beuna; Dahnum, Deliana; Amaliyah, Nur
2017-11-01
Simultaneous saccharification and fermentation (SSF) of palm oil (Elaeis guineensis) empty fruit bunch (EFB) pulp was investigated as part of an ethanol production process. SSF was investigated by observing the effect of substrate loading variation in the range 10-20%w, cellulase loading of 5-30 FPU/g substrate, and yeast addition of 1-2%v on the ethanol yield. A mathematical model describing the effects of these three variables on the ethanol yield was developed using Response Surface Methodology-Cheminformatics (RSM-CI). The model gave acceptable accuracy in predicting ethanol yield for SSF, with a coefficient of determination (R2) of 0.8899. Model validation against data from a previous study gave R2 = 0.7942, which was acceptable for using this model for trend prediction analysis. Trend prediction analysis based on the model showed that SSF trends toward higher yield when the process is operated at high enzyme concentration and low substrate concentration. On the other hand, even though the SHF model showed that better yield would be obtained at lower substrate concentration, it is still possible to operate at higher substrate concentration with only slightly lower yield. The opportunity provided by SHF to operate at high substrate loading makes it a preferable option for application at commercial scale.
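A sketch of the response-surface step under stated assumptions: a full quadratic surface fitted by least squares stands in for the paper's RSM-CI model, the factor ranges match the abstract, and the response values are synthetic:

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures

rng = np.random.default_rng(3)
# Synthetic stand-ins for the three factors: substrate loading (%w),
# cellulase loading (FPU/g substrate), yeast addition (%v).
X = np.column_stack([rng.uniform(10, 20, 30),
                     rng.uniform(5, 30, 30),
                     rng.uniform(1, 2, 30)])
# Invented yield surface, used only to exercise the fitting workflow.
yield_pct = (40 - 0.8 * X[:, 0] + 1.2 * X[:, 1] - 0.02 * X[:, 1] ** 2
             + 3 * X[:, 2] + rng.normal(0, 1.5, 30))

rsm = make_pipeline(PolynomialFeatures(degree=2), LinearRegression())
rsm.fit(X, yield_pct)
print("R^2:", round(rsm.score(X, yield_pct), 3))
print("predicted yield at (12%w, 25 FPU/g, 1.5%v):",
      rsm.predict([[12.0, 25.0, 1.5]])[0])
```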
Lyashevska, Olga; Brus, Dick J; van der Meer, Jaap
2016-01-01
The objective of the study was to provide a general procedure for mapping species abundance when data are zero-inflated and spatially correlated counts. The bivalve species Macoma balthica was observed on a 500×500 m grid in the Dutch part of the Wadden Sea. In total, 66% of the 3451 counts were zeros. A zero-inflated Poisson mixture model was used to relate counts to environmental covariates. Two models were considered, one with relatively fewer covariates (model "small") than the other (model "large"). The models contained two processes: a Bernoulli process (species prevalence) and a Poisson process (species intensity, when the Bernoulli process predicts presence). The model was used to make predictions for sites where only environmental data are available. Predicted prevalences and intensities show that the model "small" predicts lower mean prevalence and higher mean intensity than the model "large". Yet, the product of prevalence and intensity, which might be called the unconditional intensity, is very similar. Cross-validation showed that the model "small" performed slightly better, but the difference was small. The proposed methodology might be generally applicable, but is computer intensive.
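The two-process structure maps directly onto statsmodels' zero-inflated Poisson model; a minimal sketch with one invented covariate:

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.discrete.count_model import ZeroInflatedPoisson

rng = np.random.default_rng(4)
n = 500
depth = rng.uniform(0, 5, n)                     # illustrative covariate
X = sm.add_constant(depth)

# Simulate the two processes: Bernoulli prevalence, then Poisson intensity.
present = rng.random(n) < 0.4                    # ~60% structural zeros
lam = np.exp(0.3 + 0.4 * depth)
counts = np.where(present, rng.poisson(lam), 0)

# exog drives the count (intensity) part; exog_infl drives the zero inflation.
zip_model = ZeroInflatedPoisson(counts, X, exog_infl=X, inflation="logit")
res = zip_model.fit(maxiter=200, disp=False)
print(res.params)                                # inflation and count coefs
pred_mean = res.predict(exog=X, exog_infl=X)     # unconditional intensity
print(pred_mean[:5])
```

The predicted mean here is the product of prevalence and conditional intensity, the "unconditional intensity" the abstract refers to.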
NASA Astrophysics Data System (ADS)
Chu, Xingrong; Leotoing, Lionel; Guines, Dominique; Ragneau, Eric
2015-09-01
A solution to improve the formability of aluminum alloy sheets consists of investigating warm forming processes. The optimization of forming process parameters needs a precise evaluation of material properties and sheet metal formability for the actual operating environment. Based on the analytical M-K theory, a finite element (FE) M-K model was proposed to predict forming limit curves (FLCs) at different temperatures and strain rates. The influences of the initial imperfection value (f0) and the material thermo-viscoplastic model on the FLCs are discussed in this work. The flow stresses of AA5086 were characterized by uniaxial tensile tests at different temperatures (20, 150, and 200 °C) and equivalent strain rates (0.0125, 0.125, and 1.25 s-1). Three types of hardening models (power law model, saturation model, and mixed model) were proposed and adapted to correlate the experimental flow stresses. The three hardening models were implemented into the FE M-K model in order to predict FLCs for different forming conditions. The predicted limit strains are very sensitive to the thermo-viscoplastic modeling of AA5086 and to the calibration of the initial geometrical imperfection which controls the onset of necking.
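A sketch of calibrating one of the hardening laws (a power-law form with a strain-rate sensitivity term) against flow-stress data; the stress values are synthetic, while the strain rates match those tested in the abstract:

```python
import numpy as np
from scipy.optimize import curve_fit

def power_law(X, K, n, m):
    """Hardening law sigma = K * eps^n * epsdot^m (power-law form)."""
    eps, epsdot = X
    return K * eps ** n * epsdot ** m

rng = np.random.default_rng(5)
eps = np.tile(np.linspace(0.02, 0.3, 15), 3)           # plastic strains
epsdot = np.repeat([0.0125, 0.125, 1.25], 15)          # tested strain rates
# Synthetic flow stress (MPa) with 1% noise, standing in for tensile data.
sigma = 450 * eps ** 0.25 * epsdot ** 0.05 * (1 + 0.01 * rng.normal(size=45))

popt, _ = curve_fit(power_law, (eps, epsdot), sigma, p0=(300, 0.2, 0.02))
print("K=%.1f MPa, n=%.3f, m=%.3f" % tuple(popt))
```

In the paper's workflow, parameters fitted this way would then feed the FE M-K model, whose predicted limit strains are reported to be sensitive to exactly this calibration.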
Emerging approaches in predictive toxicology.
Zhang, Luoping; McHale, Cliona M; Greene, Nigel; Snyder, Ronald D; Rich, Ivan N; Aardema, Marilyn J; Roy, Shambhu; Pfuhler, Stefan; Venkatactahalam, Sundaresan
2014-12-01
Predictive toxicology plays an important role in the assessment of toxicity of chemicals and the drug development process. While there are several well-established in vitro and in vivo assays that are suitable for predictive toxicology, recent advances in high-throughput analytical technologies and model systems are expected to have a major impact on the field of predictive toxicology. This commentary provides an overview of the state of the current science and a brief discussion on future perspectives for the field of predictive toxicology for human toxicity. Computational models for predictive toxicology, needs for further refinement and obstacles to expand computational models to include additional classes of chemical compounds are highlighted. Functional and comparative genomics approaches in predictive toxicology are discussed with an emphasis on successful utilization of recently developed model systems for high-throughput analysis. The advantages of three-dimensional model systems and stem cells and their use in predictive toxicology testing are also described. © 2014 Wiley Periodicals, Inc.
Managing Analysis Models in the Design Process
NASA Technical Reports Server (NTRS)
Briggs, Clark
2006-01-01
Design of large, complex space systems depends on significant model-based support for exploration of the design space. Integrated models predict system performance in mission-relevant terms given design descriptions and multiple physics-based numerical models. Both the design activities and the modeling activities warrant explicit process definitions and active process management to protect the project from excessive risk. Software and systems engineering processes have been formalized and similar formal process activities are under development for design engineering and integrated modeling. JPL is establishing a modeling process to define development and application of such system-level models.
Comparing Binaural Pre-processing Strategies III
Warzybok, Anna; Ernst, Stephan M. A.
2015-01-01
A comprehensive evaluation of eight signal pre-processing strategies, including directional microphones, coherence filters, single-channel noise reduction, binaural beamformers, and their combinations, was undertaken with normal-hearing (NH) and hearing-impaired (HI) listeners. Speech reception thresholds (SRTs) were measured in three noise scenarios (multitalker babble, cafeteria noise, and single competing talker). Predictions of three common instrumental measures were compared with the general perceptual benefit caused by the algorithms. The individual SRTs measured without pre-processing and individual benefits were objectively estimated using the binaural speech intelligibility model. Ten listeners with NH and 12 HI listeners participated. The participants varied in age and pure-tone threshold levels. Although HI listeners required a better signal-to-noise ratio to obtain 50% intelligibility than listeners with NH, no differences in SRT benefit from the different algorithms were found between the two groups. With the exception of single-channel noise reduction, all algorithms showed an improvement in SRT of between 2.1 dB (in cafeteria noise) and 4.8 dB (in single competing talker condition). Model predictions with binaural speech intelligibility model explained 83% of the measured variance of the individual SRTs in the no pre-processing condition. Regarding the benefit from the algorithms, the instrumental measures were not able to predict the perceptual data in all tested noise conditions. The comparable benefit observed for both groups suggests a possible application of noise reduction schemes for listeners with different hearing status. Although the model can predict the individual SRTs without pre-processing, further development is necessary to predict the benefits obtained from the algorithms at an individual level. PMID:26721922
Abrasive slurry jet cutting model based on fuzzy relations
NASA Astrophysics Data System (ADS)
Qiang, C. H.; Guo, C. W.
2017-12-01
The cutting process of a pre-mixed abrasive slurry or suspension jet (ASJ) is a complex process affected by many factors, and there is a highly nonlinear relationship between the cutting parameters and cutting quality. In this paper, guided by fuzzy theory, a fuzzy cutting model of ASJ was developed. In the modeling of surface roughness, an upper surface roughness prediction model and a lower surface roughness prediction model were established separately. The adaptive fuzzy inference system combines the learning mechanism of neural networks with the linguistic reasoning ability of fuzzy systems; membership functions and fuzzy rules are obtained by adaptive adjustment. Therefore, the modeling process is fast and effective. In this paper, the ANFIS module of the MATLAB fuzzy logic toolbox was used to establish the fuzzy cutting model of ASJ, which is found to be quite instrumental to ASJ cutting applications.
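A zero-order Takagi-Sugeno inference, the scheme ANFIS trains, can be sketched by hand; the rule centers and consequents below are invented for illustration, not the paper's trained parameters:

```python
import numpy as np

# Minimal zero-order Takagi-Sugeno inference for surface roughness Ra as a
# function of one cutting parameter, jet pressure (MPa); all values invented.
centers = np.array([10.0, 20.0, 30.0])   # "low", "medium", "high" pressure
sigma = 5.0                              # Gaussian membership width
ra_rules = np.array([6.5, 4.0, 2.8])     # rule consequents (um)

def predict_ra(pressure):
    mu = np.exp(-0.5 * ((pressure - centers) / sigma) ** 2)  # rule firing
    return float(np.dot(mu, ra_rules) / mu.sum())            # weighted average

for p in (12.0, 18.0, 26.0):
    print(f"pressure {p} MPa -> predicted Ra {predict_ra(p):.2f} um")
```

ANFIS learns the membership parameters and consequents from data instead of fixing them by hand, which is what makes the approach suitable for the highly nonlinear parameter-quality relationship described above.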
A process-based model for cattle manure compost windrows: Model performance and application
USDA-ARS?s Scientific Manuscript database
A model was developed and incorporated in the Integrated Farm System Model (IFSM, v.4.3) that simulates important processes occurring during windrow composting of manure. The model, documented in an accompanying paper, predicts changes in windrow properties and conditions and the resulting emissions...
The P-chain: relating sentence production and its disorders to comprehension and acquisition
Dell, Gary S.; Chang, Franklin
2014-01-01
This article introduces the P-chain, an emerging framework for theory in psycholinguistics that unifies research on comprehension, production and acquisition. The framework proposes that language processing involves incremental prediction, which is carried out by the production system. Prediction necessarily leads to prediction error, which drives learning, including both adaptive adjustment to the mature language processing system as well as language acquisition. To illustrate the P-chain, we review the Dual-path model of sentence production, a connectionist model that explains structural priming in production and a number of facts about language acquisition. The potential of this and related models for explaining acquired and developmental disorders of sentence production is discussed. PMID:24324238
Predictions of spray combustion interactions
NASA Technical Reports Server (NTRS)
Shuen, J. S.; Solomon, A. S. P.; Faeth, G. M.
1984-01-01
Mean and fluctuating phase velocities, mean particle mass flux, particle size, and mean gas-phase Reynolds stress, composition, and temperature were measured in stationary, turbulent, axisymmetric flows which conform to the boundary-layer approximations while having well-defined initial and boundary conditions: dilute particle-laden jets, nonevaporating sprays, and evaporating sprays injected into a still air environment. Three models of the processes, typical of current practice, were evaluated. The local homogeneous flow and deterministic separated flow models did not provide very satisfactory predictions over the present data base. In contrast, the stochastic separated flow model generally provided good predictions and appears to be an attractive approach for treating nonlinear interphase transport processes in turbulent flows containing particles (drops).
NASA Astrophysics Data System (ADS)
Bellugi, D. G.; Tennant, C.; Larsen, L.
2016-12-01
Catchment and climate heterogeneity complicate prediction of runoff across time and space, and resulting parameter uncertainty can lead to large accumulated errors in hydrologic models, particularly in ungauged basins. Recently, data-driven modeling approaches have been shown to avoid the accumulated uncertainty associated with many physically-based models, providing an appealing alternative for hydrologic prediction. However, the effectiveness of different methods in hydrologically and geomorphically distinct catchments, and the robustness of these methods to changing climate and changing hydrologic processes, remain to be tested. Here, we evaluate the use of machine learning techniques to predict daily runoff across time and space using only essential climatic forcing (e.g. precipitation, temperature, and potential evapotranspiration) time series as model input. Model training and testing were done using a high quality dataset of daily runoff and climate forcing data for 25+ years for 600+ minimally-disturbed catchments (drainage area range 5-25,000 km2, median size 336 km2) that cover a wide range of climatic and physical characteristics. Preliminary results using Support Vector Regression (SVR) suggest that in some catchments this nonlinear regression technique can accurately predict daily runoff, while the same approach fails in other catchments, indicating that the representation of climate inputs and/or catchment filter characteristics in the model structure needs further refinement to increase performance. We bolster this analysis by using Sparse Identification of Nonlinear Dynamics (a sparse symbolic regression technique) to uncover the governing equations that describe runoff processes in catchments where SVR performed well and for ones where it performed poorly, thereby enabling inference about governing processes. This provides a robust means of examining how catchment complexity influences runoff prediction skill, and represents a contribution towards the integration of data-driven inference and physically-based models.
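A minimal sketch of the kind of SVR setup described, using scikit-learn; the synthetic climate forcings, the toy runoff relationship, and the temporal 80/20 split are illustrative assumptions, not the study's dataset.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
n_days = 1000
X = np.column_stack([
    rng.gamma(2.0, 3.0, n_days),     # precipitation (mm/day), illustrative
    rng.normal(10.0, 8.0, n_days),   # air temperature (deg C), illustrative
    rng.normal(2.0, 1.0, n_days),    # potential ET (mm/day), illustrative
])
# Toy target standing in for observed daily runoff.
runoff = 0.3 * X[:, 0] - 0.05 * X[:, 2] + rng.normal(0, 0.5, n_days)

# Scaling matters for RBF-kernel SVR, hence the pipeline.
model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.1))
model.fit(X[:800], runoff[:800])           # train on the first ~80% of the record
print(model.score(X[800:], runoff[800:]))  # R^2 on the held-out period
```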
NASA Astrophysics Data System (ADS)
Oh, Jung Hun; Kerns, Sarah; Ostrer, Harry; Powell, Simon N.; Rosenstein, Barry; Deasy, Joseph O.
2017-02-01
The biological cause of clinically observed variability of normal tissue damage following radiotherapy is poorly understood. We hypothesized that machine/statistical learning methods using single nucleotide polymorphism (SNP)-based genome-wide association studies (GWAS) would identify groups of patients of differing complication risk, and furthermore could be used to identify key biological sources of variability. We developed a novel learning algorithm, called pre-conditioned random forest regression (PRFR), to construct polygenic risk models using hundreds of SNPs, thereby capturing genomic features that confer small differential risk. Predictive models were trained and validated on a cohort of 368 prostate cancer patients for two post-radiotherapy clinical endpoints: late rectal bleeding and erectile dysfunction. The proposed method results in better predictive performance compared with existing computational methods. Gene ontology enrichment analysis and protein-protein interaction network analysis were used to identify key biological processes and proteins that are plausible based on other published studies. In conclusion, we confirm that novel machine learning methods can produce large predictive models (hundreds of SNPs), yielding clinically useful risk stratification models, as well as identifying important underlying biological processes in the radiation damage and tissue repair process. The methods are generally applicable to GWAS data and are not specific to radiotherapy endpoints.
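As a hedged illustration of the general approach, the sketch below fits a plain random forest to a SNP genotype matrix with scikit-learn; this is a baseline, not the authors' pre-conditioning step, and the simulated genotypes and endpoint score are placeholders.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n_patients, n_snps = 368, 500
# Genotypes coded as 0/1/2 minor-allele counts (synthetic).
X = rng.integers(0, 3, size=(n_patients, n_snps)).astype(float)
# Toy endpoint: a handful of causal SNPs plus noise.
risk = X[:, :20].sum(axis=1) * 0.05 + rng.normal(0, 1, n_patients)

X_tr, X_te, y_tr, y_te = train_test_split(X, risk, test_size=0.3, random_state=0)
rf = RandomForestRegressor(n_estimators=500, max_features="sqrt", random_state=0)
rf.fit(X_tr, y_tr)
print(rf.score(X_te, y_te))  # held-out R^2

# Feature importances can flag SNPs to carry into enrichment analyses.
top_snps = np.argsort(rf.feature_importances_)[::-1][:10]
print(top_snps)
```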
NASA Astrophysics Data System (ADS)
Kim, S.; Seo, D. J.
2017-12-01
When water temperature (TW) increases due to changes in hydrometeorological conditions, the overall ecological conditions in the aquatic system change. The changes can be harmful to human health and potentially fatal to fish habitat. Therefore, it is important to assess the impacts of thermal disturbances on in-stream processes of water quality variables and to be able to predict the effectiveness of possible actions that may be taken for water quality protection. For skillful prediction of in-stream water quality processes, it is necessary for watershed water quality models to be able to reflect such changes. Most of the currently available models, however, assume static parameters for the biophysiochemical processes and hence are not able to capture nonstationarities seen in water quality observations. In this work, we assess the performance of the Hydrological Simulation Program-Fortran (HSPF) in predicting algal dynamics following TW increase. The study area is located in the Republic of Korea, where waterway change due to weir construction and drought concurrently occurred around 2012. In this work we use data assimilation (DA) techniques to update model parameters as well as the initial condition of selected state variables for in-stream processes relevant to algal growth. For assessment of model performance and characterization of temporal variability, various goodness-of-fit measures and wavelet analysis are used.
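To make joint parameter/state updating concrete, below is a generic stochastic ensemble Kalman analysis step in NumPy; it is not an HSPF-specific scheme, and the ensemble statistics, the observation, and the choice of variables are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(8)
# Ensemble of [growth parameter, algal state] pairs (synthetic prior).
ens = rng.normal([1.0, 20.0], [0.2, 5.0], size=(100, 2))
obs, obs_var = 28.0, 4.0                  # observed algal concentration and its variance

H = np.array([0.0, 1.0])                  # observation operator: only the state is observed
P = np.cov(ens.T)                         # ensemble covariance (2x2)
K = P @ H / (H @ P @ H + obs_var)         # Kalman gain for a single scalar observation

# Stochastic EnKF: each member assimilates a perturbed observation.
perturbed = obs + rng.normal(0, np.sqrt(obs_var), len(ens))
ens += np.outer(perturbed - ens @ H, K)   # parameter and state updated jointly
print(ens.mean(axis=0))                   # posterior ensemble mean
```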
NASA Astrophysics Data System (ADS)
Zhang, Fangkun; Liu, Tao; Wang, Xue Z.; Liu, Jingxiang; Jiang, Xiaobin
2017-02-01
In this paper, calibration model building based on ATR-FTIR spectroscopy is investigated for in-situ measurement of the solution concentration during a cooling crystallization process. The cooling crystallization of L-glutamic acid (LGA) is studied as a case. It was found that using the metastable zone (MSZ) data for model calibration can guarantee the prediction accuracy for monitoring the operating window of cooling crystallization, compared to the use of undersaturated zone (USZ) spectra for model building as traditionally practiced. Calibration experiments were made for LGA solutions at different concentrations. Four candidate calibration models were established using data from different zones for comparison, by applying a multivariate partial least-squares (PLS) regression algorithm to the collected spectra together with the corresponding temperature values. Experiments under different process conditions, including changes of solution concentration and operating temperature, were conducted. The results indicate that using the MSZ spectra for model calibration gives more accurate prediction of the solution concentration during the crystallization process, while maintaining accuracy under changes of the operating temperature. The primary source of prediction error was identified as spectral nonlinearity between the USZ and MSZ for in-situ measurement. In addition, an LGA cooling crystallization experiment was performed to verify the sensitivity of these calibration models for monitoring the crystal growth process.
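A minimal sketch of the PLS calibration step using scikit-learn, with spectra augmented by temperature as described; the synthetic spectra, the toy concentration relationship, and the choice of five latent variables are illustrative assumptions.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(2)
n_samples, n_wavenumbers = 60, 400
spectra = rng.normal(size=(n_samples, n_wavenumbers))    # stand-in ATR-FTIR absorbances
temperature = rng.uniform(20, 70, n_samples)             # deg C
# Toy concentration depending on one spectral band plus temperature.
concentration = spectra[:, 100] * 2.0 + 0.01 * temperature + rng.normal(0, 0.05, n_samples)

# Augment the spectral matrix with the temperature channel, as the paper describes.
X = np.column_stack([spectra, temperature])
pls = PLSRegression(n_components=5)
pls.fit(X, concentration)

pred = pls.predict(X).ravel()
print(np.sqrt(np.mean((pred - concentration) ** 2)))     # calibration RMSE
```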
Predictive codes of familiarity and context during the perceptual learning of facial identities
NASA Astrophysics Data System (ADS)
Apps, Matthew A. J.; Tsakiris, Manos
2013-11-01
Face recognition is a key component of successful social behaviour. However, the computational processes that underpin perceptual learning and recognition as faces transition from unfamiliar to familiar are poorly understood. In predictive coding, learning occurs through prediction errors that update stimulus familiarity, but recognition is a function of both stimulus and contextual familiarity. Here we show that behavioural responses on a two-option face recognition task can be predicted by the level of contextual and facial familiarity in a computational model derived from predictive-coding principles. Using fMRI, we show that activity in the superior temporal sulcus varies with the contextual familiarity in the model, whereas activity in the fusiform face area covaries with the prediction error parameter that updated facial familiarity. Our results characterize the key computations underpinning the perceptual learning of faces, highlighting that the functional properties of face-processing areas conform to the principles of predictive coding.
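A hedged sketch of the core computation: a delta-rule (prediction-error) update of stimulus familiarity combined multiplicatively with contextual familiarity. The learning rate and the two-factor recognition rule are illustrative assumptions, not the paper's fitted model.

```python
# Familiarity is updated by the prediction error between what was expected
# (current familiarity) and what was observed (the face appeared: 1.0).
def update_familiarity(familiarity, observed=1.0, lr=0.2):
    prediction_error = observed - familiarity   # mismatch drives learning
    return familiarity + lr * prediction_error, prediction_error

face_familiarity, context_familiarity = 0.0, 0.5
for trial in range(10):
    face_familiarity, pe = update_familiarity(face_familiarity)
    # Recognition depends on both stimulus and contextual familiarity.
    recognition = face_familiarity * context_familiarity
    print(f"trial {trial}: familiarity={face_familiarity:.3f}, "
          f"PE={pe:.3f}, recognition={recognition:.3f}")
```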
Predictability and Prediction for an Experimental Cultural Market
NASA Astrophysics Data System (ADS)
Colbaugh, Richard; Glass, Kristin; Ormerod, Paul
Individuals are often influenced by the behavior of others, for instance because they wish to obtain the benefits of coordinated actions or infer otherwise inaccessible information. In such situations this social influence decreases the ex ante predictability of the ensuing social dynamics. We claim that, interestingly, these same social forces can increase the extent to which the outcome of a social process can be predicted very early in the process. This paper explores this claim through a theoretical and empirical analysis of the experimental music market described and analyzed in [1]. We propose a very simple model for this music market, assess the predictability of market outcomes through formal analysis of the model, and use insights derived through this analysis to develop algorithms for predicting market share winners, and their ultimate market shares, in the very early stages of the market. The utility of these predictive algorithms is illustrated through analysis of the experimental music market data sets [2].
Anurag Srivastava; Joan Q. Wu; William J. Elliot; Erin S. Brooks
2015-01-01
The Water Erosion Prediction Project (WEPP) model, originally developed for hillslope and small watershed applications, simulates complex interactive processes influencing erosion. Recent incorporations to the model have improved the subsurface hydrology components for forest applications. Incorporation of channel routing has made the WEPP model well suited for large...
Bit selection using field drilling data and mathematical investigation
NASA Astrophysics Data System (ADS)
Momeni, M. S.; Ridha, S.; Hosseini, S. J.; Meyghani, B.; Emamian, S. S.
2018-03-01
A drilling process cannot be completed without a drill bit, so bit selection is an important task in the drilling optimization process. Selecting a bit is also an important issue in planning and designing a well, simply because the drilling bit accounts for a large share of the total cost. To perform this task, a back-propagation ANN model is developed and trained using drilling bit records from several offset wells. In this project, two ANN models are developed: one predicts the IADC bit code and the other predicts the rate of penetration (ROP). Stage 1 finds the IADC bit code using all the given field data; the output is the targeted IADC bit code. Stage 2 finds the predicted ROP values using the IADC bit code obtained in Stage 1. In Stage 3, the predicted ROP value is fed back into the data set to obtain the predicted IADC bit code. Thus, at the end, there are two models that give the predicted ROP values and predicted IADC bit code values.
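As a rough illustration of the back-propagation modeling step (shown here for the ROP stage only), a scikit-learn sketch might look like the following; the drilling features and the synthetic records are assumptions for illustration, not the study's field data.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(3)
n_records = 300
X = np.column_stack([
    rng.uniform(1000, 4000, n_records),  # depth (m), illustrative
    rng.uniform(50, 250, n_records),     # weight on bit (kN), illustrative
    rng.uniform(60, 180, n_records),     # rotary speed (rpm), illustrative
])
# Toy ROP relationship standing in for offset-well records.
rop = 0.01 * X[:, 1] + 0.05 * X[:, 2] - 0.002 * X[:, 0] + rng.normal(0, 1, n_records)

# Feed-forward network trained by back-propagation; scaling aids convergence.
net = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(16, 8), max_iter=2000, random_state=0),
)
net.fit(X[:240], rop[:240])
print(net.score(X[240:], rop[240:]))  # held-out R^2
```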
Determining the potential productivity of food crops in controlled environments
NASA Technical Reports Server (NTRS)
Bugbee, Bruce
1992-01-01
The quest to determine the maximum potential productivity of food crops is greatly benefitted by crop growth models. Many models have been developed to analyze and predict crop growth in the field, but it is difficult to predict biological responses to stress conditions. Crop growth models for the optimal environments of a Controlled Environment Life Support System (CELSS) can be highly predictive. This paper discusses the application of a crop growth model to CELSS; the model is used to evaluate factors limiting growth. The model separately evaluates the following four physiological processes: absorption of PPF by photosynthetic tissue, carbon fixation (photosynthesis), carbon use (respiration), and carbon partitioning (harvest index). These constituent processes determine potentially achievable productivity. An analysis of each process suggests that low harvest index is the factor most limiting to yield. PPF absorption by plant canopies and respiration efficiency are also of major importance. Research concerning productivity in a CELSS should emphasize: (1) the development of gas exchange techniques to continuously monitor plant growth rates and (2) environmental techniques to reduce plant height in communities.
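A worked toy version of the four-factor decomposition may help fix ideas: potential yield is the product of the constituent efficiencies. All numbers below are illustrative placeholders, not values from the paper.

```python
# Four constituent processes, expressed as fractions/efficiencies (illustrative).
ppf_absorbed_fraction = 0.90      # PPF absorbed by photosynthetic tissue
photosynthetic_efficiency = 0.10  # mol carbon fixed per mol absorbed photons (toy)
respiration_fraction = 0.35       # fixed carbon lost to respiration
harvest_index = 0.45              # fraction of biomass that is edible

incident_ppf = 60.0  # mol photons m^-2 day^-1 (illustrative)
edible_carbon = (incident_ppf * ppf_absorbed_fraction * photosynthetic_efficiency
                 * (1.0 - respiration_fraction) * harvest_index)
print(f"edible carbon gain: {edible_carbon:.2f} mol m^-2 day^-1")
# Raising harvest_index has the same proportional effect as raising any other
# factor, which is why a low harvest index dominates the yield limitation.
```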
Modeling of the flow stress for AISI H13 Tool Steel during Hard Machining Processes
NASA Astrophysics Data System (ADS)
Umbrello, Domenico; Rizzuti, Stefania; Outeiro, José C.; Shivpuri, Rajiv
2007-04-01
In general, the flow stress models used in computer simulation of machining processes are a function of effective strain, effective strain rate and temperature developed during the cutting process. However, these models do not adequately describe the material behavior in hard machining, where a range of material hardness between 45 and 60 HRC is used. Thus, depending on the specific material hardness, different material models must be used in modeling the cutting process. This paper describes the development of hardness-based flow stress and fracture models for the AISI H13 tool steel, which can be applied over the range of material hardness mentioned above. These models were implemented in a non-isothermal viscoplastic numerical model to simulate the machining process for AISI H13 with various hardness values and applying different cutting regime parameters. Predicted results are validated by comparing them with experimental results found in the literature. They are found to predict reasonably well the cutting forces as well as the change in chip morphology from continuous to segmented chip as the material hardness changes.
Neural-scaled entropy predicts the effects of nonlinear frequency compression on speech perception
Rallapalli, Varsha H.; Alexander, Joshua M.
2015-01-01
The Neural-Scaled Entropy (NSE) model quantifies information in the speech signal that has been altered beyond simple gain adjustments by sensorineural hearing loss (SNHL) and various signal processing. An extension of Cochlear-Scaled Entropy (CSE) [Stilp, Kiefte, Alexander, and Kluender (2010). J. Acoust. Soc. Am. 128(4), 2112–2126], NSE quantifies information as the change in 1-ms neural firing patterns across frequency. To evaluate the model, data from a study that examined nonlinear frequency compression (NFC) in listeners with SNHL were used because NFC can recode the same input information in multiple ways in the output, resulting in different outcomes for different speech classes. Overall, predictions were more accurate for NSE than CSE. The NSE model accurately described the observed degradation in recognition, and lack thereof, for consonants in a vowel-consonant-vowel context that had been processed in different ways by NFC. While NSE accurately predicted recognition of vowel stimuli processed with NFC, it underestimated them relative to a low-pass control condition without NFC. In addition, without modifications, it could not predict the observed improvement in recognition for word final /s/ and /z/. Findings suggest that model modifications that include information from slower modulations might improve predictions across a wider variety of conditions. PMID:26627780
Graham, Emily B.; Knelman, Joseph E.; Schindlbacher, Andreas; Siciliano, Steven; Breulmann, Marc; Yannarell, Anthony; Beman, J. M.; Abell, Guy; Philippot, Laurent; Prosser, James; Foulquier, Arnaud; Yuste, Jorge C.; Glanville, Helen C.; Jones, Davey L.; Angel, Roey; Salminen, Janne; Newton, Ryan J.; Bürgmann, Helmut; Ingram, Lachlan J.; Hamer, Ute; Siljanen, Henri M. P.; Peltoniemi, Krista; Potthast, Karin; Bañeras, Lluís; Hartmann, Martin; Banerjee, Samiran; Yu, Ri-Qing; Nogaro, Geraldine; Richter, Andreas; Koranda, Marianne; Castle, Sarah C.; Goberna, Marta; Song, Bongkeun; Chatterjee, Amitava; Nunes, Olga C.; Lopes, Ana R.; Cao, Yiping; Kaisermann, Aurore; Hallin, Sara; Strickland, Michael S.; Garcia-Pausas, Jordi; Barba, Josep; Kang, Hojeong; Isobe, Kazuo; Papaspyrou, Sokratis; Pastorelli, Roberta; Lagomarsino, Alessandra; Lindström, Eva S.; Basiliko, Nathan; Nemergut, Diana R.
2016-01-01
Microorganisms are vital in mediating the earth’s biogeochemical cycles; yet, despite our rapidly increasing ability to explore complex environmental microbial communities, the relationship between microbial community structure and ecosystem processes remains poorly understood. Here, we address a fundamental and unanswered question in microbial ecology: ‘When do we need to understand microbial community structure to accurately predict function?’ We present a statistical analysis investigating the value of environmental data and microbial community structure independently and in combination for explaining rates of carbon and nitrogen cycling processes within 82 global datasets. Environmental variables were the strongest predictors of process rates but left 44% of variation unexplained on average, suggesting the potential for microbial data to increase model accuracy. Although only 29% of our datasets were significantly improved by adding information on microbial community structure, we observed improvement in models of processes mediated by narrow phylogenetic guilds via functional gene data, and conversely, improvement in models of facultative microbial processes via community diversity metrics. Our results also suggest that microbial diversity can strengthen predictions of respiration rates beyond microbial biomass parameters, as 53% of models were improved by incorporating both sets of predictors compared to 35% by microbial biomass alone. Our analysis represents the first comprehensive analysis of research examining links between microbial community structure and ecosystem function. Taken together, our results indicate that a greater understanding of microbial communities informed by ecological principles may enhance our ability to predict ecosystem process rates relative to assessments based on environmental variables and microbial physiology. PMID:26941732
Interactions of timing and prediction error learning.
Kirkpatrick, Kimberly
2014-01-01
Timing and prediction error learning have historically been treated as independent processes, but growing evidence has indicated that they are not orthogonal. Timing emerges at the earliest time point when conditioned responses are observed, and temporal variables modulate prediction error learning in both simple conditioning and cue competition paradigms. In addition, prediction errors, through changes in reward magnitude or value, alter the timing of behavior. Thus, there appears to be a bi-directional interaction between timing and prediction error learning. Modern theories have attempted to integrate the two processes with mixed success. A neurocomputational approach to theory development is espoused, which draws on neurobiological evidence to guide and constrain computational model development. Heuristics for future model development are presented with the goal of sparking new approaches to theory development in the timing and prediction error fields. Copyright © 2013 Elsevier B.V. All rights reserved.
A method for grounding grid corrosion rate prediction
NASA Astrophysics Data System (ADS)
Han, Juan; Du, Jingyi
2017-06-01
Grounding grid corrosion involves a variety of factors, making its prediction complex, and the data acquisition process is uncertain. We therefore propose a grounding grid corrosion rate prediction model that combines EAHP (extended AHP) with the fuzzy nearness degree. EAHP is used to establish the judgment matrix and calculate the weight of each corrosion factor of the grounding grid; samples with different classification properties contribute differently to the corrosion rate, and the nearness principle is then combined with these weights to predict the corrosion rate. The application results show that the model captures data variation better, improving model validity and yielding higher prediction precision.
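To illustrate the AHP weighting step named above, the sketch below extracts factor weights as the principal eigenvector of a pairwise-comparison judgment matrix; the 3x3 matrix and the factor labels are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Pairwise-comparison judgment matrix for three hypothetical corrosion factors
# (e.g. soil resistivity, moisture, pH); A[i, j] says how much more important
# factor i is than factor j on the 1-9 AHP scale.
A = np.array([
    [1.0, 3.0, 5.0],
    [1/3, 1.0, 2.0],
    [1/5, 1/2, 1.0],
])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)                      # principal eigenvalue index
w = np.abs(eigvecs[:, k].real)
w /= w.sum()                                     # normalized factor weights

# Consistency index CI = (lambda_max - n) / (n - 1); small CI means the
# pairwise judgments are close to internally consistent.
ci = (eigvals[k].real - len(A)) / (len(A) - 1)
print(w, ci)
```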
When Sufficiently Processed, Semantically Related Distractor Pictures Hamper Picture Naming.
Matushanskaya, Asya; Mädebach, Andreas; Müller, Matthias M; Jescheniak, Jörg D
2016-11-01
Prominent speech production models view lexical access as a competitive process. According to these models, a semantically related distractor picture should interfere with target picture naming more strongly than an unrelated one. However, several studies failed to obtain such an effect. Here, we demonstrate that semantic interference is obtained when the distractor picture is sufficiently processed. Participants named one of two pictures presented in close temporal succession, with color cueing the target. Experiment 1 induced the prediction that the target appears first. When this prediction was violated (distractor first), semantic interference was observed. Experiment 2 ruled out that the time available for distractor processing was the driving force. These results show that semantically related distractor pictures interfere with the naming response when they are sufficiently processed. The data thus provide further support for models viewing lexical access as a competitive process.
Uncertainty analysis of a groundwater flow model in East-central Florida.
Sepúlveda, Nicasio; Doherty, John
2015-01-01
A groundwater flow model for east-central Florida has been developed to help water-resource managers assess the impact of increased groundwater withdrawals from the Floridan aquifer system on heads and spring flows originating from the Upper Floridan Aquifer. The model provides a probabilistic description of predictions of interest to water-resource managers, given the uncertainty associated with system heterogeneity, the large number of input parameters, and a nonunique groundwater flow solution. The uncertainty associated with these predictions can then be considered in decisions with which the model has been designed to assist. The "Null Space Monte Carlo" method is a stochastic probabilistic approach used to generate a suite of several hundred parameter field realizations, each maintaining the model in a calibrated state, and each considered to be hydrogeologically plausible. The results presented herein indicate that the model's capacity to predict changes in heads or spring flows that originate from increased groundwater withdrawals is considerably greater than its capacity to predict the absolute magnitudes of heads or spring flows. Furthermore, the capacity of the model to make predictions that are similar in location and in type to those in the calibration dataset exceeds its capacity to make predictions of different types at different locations. The quantification of these outcomes allows defensible use of the modeling process in support of future water-resources decisions. The model allows the decision-making process to recognize the uncertainties, and the spatial or temporal variability of uncertainties that are associated with predictions of future system behavior in a complex hydrogeological context. © 2014, National Ground Water Association.
Modeling methods for merging computational and experimental aerodynamic pressure data
NASA Astrophysics Data System (ADS)
Haderlie, Jacob C.
This research describes a process to model surface pressure data sets as a function of wing geometry from computational and wind tunnel sources and then merge them into a single predicted value. The described merging process will enable engineers to integrate these data sets with the goal of utilizing the advantages of each data source while overcoming the limitations of both; this provides a single, combined data set to support analysis and design. The main challenge with this process is accurately representing each data source everywhere on the wing. Additionally, this effort demonstrates methods to model wind tunnel (WT) pressure data as a function of angle of attack as an initial step towards a merging process that uses both location on the wing and flow conditions (e.g., angle of attack, flow velocity or Reynolds number) as independent variables. This surrogate model of pressure as a function of angle of attack can be useful for engineers that need to predict the location of zero-order discontinuities, e.g., flow separation or normal shocks. Because, to the author's best knowledge, there is no published, well-established merging method for aerodynamic pressure data (here, the coefficient of pressure Cp), this work identifies promising modeling and merging methods, and then makes a critical comparison of these methods. Surrogate models represent the pressure data for both data sets. Cubic B-spline surrogate models represent the computational simulation results. Machine learning and multi-fidelity surrogate models represent the experimental data. This research compares three surrogates for the experimental data (sequential--a.k.a. online--Gaussian processes, batch Gaussian processes, and multi-fidelity additive corrector) on the merits of accuracy and computational cost. The Gaussian process (GP) methods employ cubic B-spline CFD surrogates as a model basis function to build a surrogate model of the WT data, and this usage of the CFD surrogate in modeling the WT data could serve as a "merging" because the resulting WT pressure prediction uses information from both sources. In the GP approach, this model basis function concept seems to place more "weight" on the Cp values from the WT because the GP surrogate uses the CFD to approximate the WT data values. Conversely, the computationally inexpensive additive corrector method uses the CFD B-spline surrogate to define the shape of the spanwise distribution of the Cp while minimizing prediction error at all spanwise locations for a given arc length position; this, too, combines information from both sources to make a prediction of the 2-D WT-based Cp distribution, but the additive corrector approach gives more weight to the CFD prediction than to the WT data. Three surrogate models of the experimental data as a function of angle of attack are also compared for accuracy and computational cost. These surrogates are a single Gaussian process model (a single "expert"), product of experts, and generalized product of experts. The merging approach provides a single pressure distribution that combines experimental and computational data. The batch Gaussian process method provides a relatively accurate surrogate that is computationally acceptable, and can receive wind tunnel data from port locations that are not necessarily parallel to a variable direction.
On the other hand, the sequential Gaussian process and additive corrector methods must receive a sufficient number of data points aligned with one direction, e.g., from pressure port bands (tap rows) aligned with the freestream. The generalized product of experts best represents wind tunnel pressure as a function of angle of attack, but at higher computational cost than the single expert approach. The format of the application data from computational and experimental sources in this work precluded the merging process from including flow condition variables (e.g., angle of attack) in the independent variables, so the merging process is only conducted in the wing geometry variables of arc length and span. The merging process of Cp data allows a more "hands-off" approach to aircraft design and analysis (i.e., not as many engineers are needed to debate the Cp distribution shape) and generates Cp predictions at any location on the wing. However, the costs of these benefits are engineer time (learning how to build surrogates), computational time in constructing the surrogates, and surrogate accuracy (surrogates introduce error into data predictions). This dissertation effort used the Trap Wing / First AIAA CFD High-Lift Prediction Workshop as a relevant transonic wing with a multi-element high-lift system, and this work identified that the batch GP model for the WT data and the B-spline surrogate for the CFD might best be combined using expert belief weights to describe Cp as a function of location on the wing element surface. (Abstract shortened by ProQuest.)
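A minimal sketch of the batch-GP merging idea under stated assumptions: a smooth stand-in function plays the role of the CFD B-spline surrogate, and a GP fitted to sparse wind-tunnel taps corrects it. The cfd_cp function, tap locations, and kernel choices are all illustrative, not the dissertation's implementation.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

def cfd_cp(s):
    """Stand-in for the CFD B-spline surrogate of Cp along arc length s."""
    return -1.5 * np.exp(-((s - 0.3) / 0.15) ** 2) + 0.2

# Sparse wind-tunnel taps: the CFD shape plus a toy measured discrepancy.
s_taps = np.array([0.05, 0.2, 0.35, 0.5, 0.7, 0.9])
cp_wt = cfd_cp(s_taps) + np.array([0.05, -0.08, 0.1, 0.02, -0.03, 0.04])

# Model the WT-minus-CFD discrepancy with a GP, then add it back to the CFD
# mean; this uses the CFD surrogate as the "model basis function".
gp = GaussianProcessRegressor(kernel=RBF(0.2) + WhiteKernel(1e-4), normalize_y=True)
gp.fit(s_taps.reshape(-1, 1), cp_wt - cfd_cp(s_taps))

s_query = np.linspace(0, 1, 101).reshape(-1, 1)
cp_merged = cfd_cp(s_query.ravel()) + gp.predict(s_query)  # merged Cp anywhere on the span
print(cp_merged[:5])
```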
Using the domain identification model to study major and career decision-making processes
NASA Astrophysics Data System (ADS)
Tendhar, Chosang; Singh, Kusum; Jones, Brett D.
2018-03-01
The purpose of this study was to examine the extent to which (1) a domain identification model could be used to predict students' engineering major and career intentions and (2) the MUSIC Model of Motivation components could be used to predict domain identification. The data for this study were collected from first-year engineering students. We used a structural equation model to test the hypothesised relationship between variables in the partial domain identification model. The findings suggested that engineering identification significantly predicted engineering major intentions and career intentions and had the highest effect on those two variables compared to other motivational constructs. Furthermore, results suggested that success, interest, and caring are plausible contributors to students' engineering identification. Overall, there is strong evidence that the domain identification model can be used as a lens to study career decision-making processes in engineering, and potentially, in other fields as well.
Managing distribution changes in time series prediction
NASA Astrophysics Data System (ADS)
Matias, J. M.; Gonzalez-Manteiga, W.; Taboada, J.; Ordonez, C.
2006-07-01
When a problem is modeled statistically, a single distribution model is usually postulated that is assumed to be valid for the entire space. Nonetheless, this practice may be somewhat unrealistic in certain application areas, in which the conditions of the process that generates the data may change; as far as we are aware, however, no techniques have been developed to tackle this problem. This article proposes a technique for modeling and predicting this change in time series with a view to improving estimates and predictions. The technique is applied, among other models, to the hypernormal distribution recently proposed. When tested on real data from a range of stock market indices, the technique produces better results than when a single distribution model is assumed to be valid for the entire period of time studied. Moreover, when a global model is postulated, it is highly recommended to select the hypernormal distribution parameter in the same likelihood maximization process.
Schearer, Eric M.; Liao, Yu-Wei; Perreault, Eric J.; Tresch, Matthew C.; Memberg, William D.; Kirsch, Robert F.; Lynch, Kevin M.
2016-01-01
We present a method to identify the dynamics of a human arm controlled by an implanted functional electrical stimulation neuroprosthesis. The method uses Gaussian process regression to predict shoulder and elbow torques given the shoulder and elbow joint positions and velocities and the electrical stimulation inputs to muscles. We compare the accuracy of torque predictions of nonparametric, semiparametric, and parametric model types. The most accurate of the three model types is a semiparametric Gaussian process model that combines the flexibility of a black box function approximator with the generalization power of a parameterized model. The semiparametric model predicted torques during stimulation of multiple muscles with errors less than 20% of the total muscle torque and passive torque needed to drive the arm. The identified model allows us to define an arbitrary reaching trajectory and approximately determine the muscle stimulations required to drive the arm along that trajectory. PMID:26955041
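A hedged sketch of a semiparametric model in the sense described: a parametric linear model captures the dominant torque trend and a GP absorbs the unmodeled remainder. The synthetic joint data and kernel are illustrative assumptions, not the identified arm dynamics.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(4)
# Toy inputs standing in for joint positions/velocities and stimulation levels.
X = rng.uniform(-1, 1, size=(200, 4))
torque = (2.0 * X[:, 0] - 1.0 * X[:, 2]
          + 0.5 * np.sin(3 * X[:, 1])          # nonlinearity the linear part misses
          + rng.normal(0, 0.05, 200))

# Parametric part: linear model of torque.
lin = LinearRegression().fit(X[:150], torque[:150])
# Nonparametric part: GP fitted to the parametric model's residuals.
resid = torque[:150] - lin.predict(X[:150])
gp = GaussianProcessRegressor(kernel=RBF(0.5) + WhiteKernel(1e-3)).fit(X[:150], resid)

# Semiparametric prediction = parametric mean + GP correction.
pred = lin.predict(X[150:]) + gp.predict(X[150:])
print(np.sqrt(np.mean((pred - torque[150:]) ** 2)))  # held-out RMSE
```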
Evaluation of ceramics for stator application: Gas turbine engine report
NASA Technical Reports Server (NTRS)
Trela, W.; Havstad, P. H.
1978-01-01
Current ceramic materials, component fabrication processes, and reliability prediction capability for ceramic stators in an automotive gas turbine engine environment are assessed. Simulated engine duty cycle testing of stators conducted at temperatures up to 1093 C is discussed. Materials evaluated are SiC and Si3N4 fabricated from two near-net-shape processes: slip casting and injection molding. Stators for durability cycle evaluation, test specimens for material property characterization, and a reliability prediction model prepared to predict stator performance in the simulated engine environment are considered. The status and description of the work performed for the reliability prediction modeling, stator fabrication, material property characterization, and ceramic stator evaluation efforts are reported.
Gaussian mixture models as flux prediction method for central receivers
NASA Astrophysics Data System (ADS)
Grobler, Annemarie; Gauché, Paul; Smit, Willie
2016-05-01
Flux prediction methods are crucial to the design and operation of central receiver systems. Current methods such as the circular and elliptical (bivariate) Gaussian prediction methods are often used in field layout design and aiming strategies. For experimental or small central receiver systems, the flux profile of a single heliostat often deviates significantly from the circular and elliptical Gaussian models. Therefore a novel method of flux prediction was developed by incorporating the fitting of Gaussian mixture models onto flux profiles produced by flux measurement or ray tracing. A method was also developed to predict the Gaussian mixture model parameters of a single heliostat for a given time using image processing. Recording the predicted parameters in a database ensures that more accurate predictions are made in a shorter time frame.
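A minimal sketch of fitting a Gaussian mixture to a measured flux map by treating the normalized map as a probability density and sampling pixel locations from it; the two-lobed synthetic flux map and the component count are illustrative assumptions.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(5)
x, y = np.meshgrid(np.linspace(-1, 1, 80), np.linspace(-1, 1, 80))
# Synthetic flux map that deviates from a single circular Gaussian.
flux = (np.exp(-((x - 0.2) ** 2 + y ** 2) / 0.05)
        + 0.6 * np.exp(-((x + 0.3) ** 2 + (y - 0.1) ** 2) / 0.08))

# Draw pixel coordinates with probability proportional to local flux.
p = (flux / flux.sum()).ravel()
idx = rng.choice(p.size, size=20000, p=p)
pts = np.column_stack([x.ravel()[idx], y.ravel()[idx]])

gmm = GaussianMixture(n_components=2, covariance_type="full").fit(pts)
# gmm.means_, gmm.covariances_, gmm.weights_ parameterize the predicted profile;
# score_samples gives the (log) mixture density at arbitrary receiver points.
density = np.exp(gmm.score_samples(np.column_stack([x.ravel(), y.ravel()])))
print(gmm.means_)
```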
Wong, Aaron L; Shelhamer, Mark
2014-05-01
Adaptive processes are crucial in maintaining the accuracy of body movements and rely on error storage and processing mechanisms. Although classically studied with adaptation paradigms, evidence of these ongoing error-correction mechanisms should also be detectable in other movements. Despite this connection, current adaptation models are challenged when forecasting adaptation ability with measures of baseline behavior. On the other hand, we have previously identified an error-correction process present in a particular form of baseline behavior, the generation of predictive saccades. This process exhibits long-term intertrial correlations that decay gradually (as a power law) and are best characterized with the tools of fractal time series analysis. Since this baseline task and adaptation both involve error storage and processing, we sought to find a link between the intertrial correlations of the error-correction process in predictive saccades and the ability of subjects to alter their saccade amplitudes during an adaptation task. Here we find just such a relationship: the stronger the intertrial correlations during prediction, the more rapid the acquisition of adaptation. This reinforces the links found previously between prediction and adaptation in motor control and suggests that current adaptation models are inadequate to capture the complete dynamics of these error-correction processes. A better understanding of the similarities in error processing between prediction and adaptation might provide the means to forecast adaptation ability with a baseline task. This would have many potential uses in physical therapy and the general design of paradigms of motor adaptation. Copyright © 2014 the American Physiological Society.
Kinetic Modeling of a Silicon Refining Process in a Moist Hydrogen Atmosphere
NASA Astrophysics Data System (ADS)
Chen, Zhiyuan; Morita, Kazuki
2018-03-01
We developed a kinetic model that considers both silicon loss and boron removal in a metallurgical grade silicon refining process. This model was based on the hypothesis of reversible reactions. The reaction rate coefficient keeps the same form if irreversible reactions are assumed instead, but an error in the terminal boron concentration can then be introduced. Experimental data from published studies were used to develop a model that fit the existing data. At 1500 °C, our kinetic analysis suggested that refining silicon in a moist hydrogen atmosphere generates several primary volatile species, including SiO, SiH, HBO, and HBO2. Using the experimental data and the kinetic analysis of volatile species, we developed a model that predicts a linear relationship between the reaction rate coefficient k and both the quadratic function of p(H2O) and the square root of p(H2). Moreover, the model predicted the partial pressure values for the predominant volatile species, and the prediction was confirmed by thermodynamic calculations, indicating the reliability of the model. We believe this model provides a foundation for designing a silicon refining process with a fast boron removal rate and low silicon loss.
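The stated dependence of k on the square of p(H2O) and the square root of p(H2) can be fitted by ordinary linear least squares once those transformed quantities are used as regressors; the sketch below does this on synthetic data, with all pressure and rate values as placeholders.

```python
import numpy as np

rng = np.random.default_rng(6)
p_h2o = rng.uniform(0.01, 0.1, 30)   # atm (illustrative)
p_h2 = rng.uniform(0.2, 1.0, 30)     # atm (illustrative)
# Toy observations following k = a*p(H2O)^2 + b*sqrt(p(H2)) + c plus noise.
k_obs = 4.0 * p_h2o ** 2 + 0.8 * np.sqrt(p_h2) + rng.normal(0, 0.01, 30)

# Design matrix in the transformed regressors makes the fit linear.
A = np.column_stack([p_h2o ** 2, np.sqrt(p_h2), np.ones_like(p_h2o)])
coef, *_ = np.linalg.lstsq(A, k_obs, rcond=None)
print(coef)  # recovered [a, b, c]
```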
Orbit Determination for the Lunar Reconnaissance Orbiter Using an Extended Kalman Filter
NASA Technical Reports Server (NTRS)
Slojkowski, Steven; Lowe, Jonathan; Woodburn, James
2015-01-01
Orbit determination (OD) analysis results are presented for the Lunar Reconnaissance Orbiter (LRO) using a commercially available Extended Kalman Filter, Analytical Graphics' Orbit Determination Tool Kit (ODTK). Process noise models for lunar gravity and solar radiation pressure (SRP) are described and OD results employing the models are presented. Definitive accuracy using ODTK meets mission requirements and is better than that achieved using the operational LRO OD tool, the Goddard Trajectory Determination System (GTDS). Results demonstrate that a Vasicek stochastic model produces better estimates of the coefficient of solar radiation pressure than a Gauss-Markov model, and prediction accuracy using a Vasicek model meets mission requirements over the analysis span. Modeling the effect of antenna motion on range-rate tracking considerably improves residuals and filter-smoother consistency. Inclusion of off-axis SRP process noise and generalized process noise improves filter performance for both definitive and predicted accuracy. Definitive accuracy from the smoother is better than achieved using GTDS and is close to that achieved by precision OD methods used to generate definitive science orbits. Use of a multi-plate dynamic spacecraft area model with ODTK's force model plugin capability provides additional improvements in predicted accuracy.
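To make the distinction between the two stochastic models concrete, the sketch below simulates both with the same driving noise: a first-order Gauss-Markov process reverts to zero, while a Vasicek process reverts to a non-zero long-run mean, which is why the latter suits a solar radiation pressure coefficient that should hover near a physical value. All parameter values are illustrative assumptions, not ODTK settings.

```python
import numpy as np

rng = np.random.default_rng(7)
dt, n = 60.0, 1440                 # 60 s steps over one day
tau, sigma = 3600.0, 1e-3          # correlation time, driving noise level
b = 1.3                            # Vasicek long-run mean (e.g. an SRP coefficient)

gm = np.empty(n)
va = np.empty(n)
gm[0] = va[0] = 1.3
for i in range(1, n):
    w = rng.normal(0.0, sigma * np.sqrt(dt))
    gm[i] = gm[i - 1] + (-gm[i - 1] / tau) * dt + w       # Gauss-Markov: reverts to 0
    va[i] = va[i - 1] + ((b - va[i - 1]) / tau) * dt + w  # Vasicek: reverts to b

print(gm[-1], va[-1])  # GM drifts toward zero; Vasicek stays near b
```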
Simulation Modeling of Software Development Processes
NASA Technical Reports Server (NTRS)
Calavaro, G. F.; Basili, V. R.; Iazeolla, G.
1996-01-01
A simulation modeling approach is proposed for the prediction of software process productivity indices, such as cost and time-to-market, and the sensitivity analysis of such indices to changes in the organization parameters and user requirements. The approach uses a timed Petri Net and Object Oriented top-down model specification. Results demonstrate the model representativeness, and its usefulness in verifying process conformance to expectations, and in performing continuous process improvement and optimization.
Right Lateral Cerebellum Represents Linguistic Predictability.
Lesage, Elise; Hansen, Peter C; Miall, R Chris
2017-06-28
Mounting evidence indicates that posterolateral portions of the cerebellum (right Crus I/II) contribute to language processing, but the nature of this role remains unclear. Based on a well-supported theory of cerebellar motor function, which ascribes to the cerebellum a role in short-term prediction through internal modeling, we hypothesize that right cerebellar Crus I/II supports prediction of upcoming sentence content. We tested this hypothesis using event-related fMRI in male and female human subjects by manipulating the predictability of written sentences. Our design controlled for motor planning and execution, as well as for linguistic features and working memory load; it also allowed separation of the prediction interval from the presentation of the final sentence item. In addition, three further fMRI tasks captured semantic, phonological, and orthographic processing to shed light on the nature of the information processed. As hypothesized, activity in right posterolateral cerebellum correlated with the predictability of the upcoming target word. This cerebellar region also responded to prediction error during the outcome of the trial. Further, this region was engaged in phonological, but not semantic or orthographic, processing. This is the first imaging study to demonstrate a right cerebellar contribution in language comprehension independently from motor, cognitive, and linguistic confounds. These results complement our work using other methodologies showing cerebellar engagement in linguistic prediction and suggest that internal modeling of phonological representations aids language production and comprehension. SIGNIFICANCE STATEMENT The cerebellum is traditionally seen as a motor structure that allows for smooth movement by predicting upcoming signals. However, the cerebellum is also consistently implicated in nonmotor functions such as language and working memory. Using fMRI, we identify a cerebellar area that is active when words are predicted and when these predictions are violated. This area is active in a separate task that requires phonological processing, but not in tasks that require semantic or visuospatial processing. Our results support the idea of prediction as a unifying cerebellar function in motor and nonmotor domains. We provide new insights by linking the cerebellar role in prediction to its role in verbal working memory, suggesting that these predictions involve phonological processing. Copyright © 2017 Lesage et al.
Garitte, B.; Shao, H.; Wang, X. R.; ...
2017-01-09
Process understanding and parameter identification using numerical methods based on experimental findings are a key aspect of the international cooperative project DECOVALEX. Comparing the predictions from numerical models against experimental results increases confidence in the site selection and site evaluation process for a radioactive waste repository in deep geological formations. In the present phase of the project, DECOVALEX-2015, eight research teams have developed and applied models for simulating an in-situ heater experiment HE-E in the Opalinus Clay in the Mont Terri Rock Laboratory in Switzerland. The modelling task was divided into two study stages, related to prediction and interpretation of the experiment. A blind prediction of the HE-E experiment was performed based on calibrated parameter values for the Opalinus Clay, which were derived from the modelling of another in-situ experiment (HE-D), and on the modelling of laboratory column experiments on MX80 granular bentonite and a sand/bentonite mixture. After publication of the experimental data, additional coupling functions were analysed and considered in the different models. Moreover, parameter values were varied to interpret the measured temperature, relative humidity and pore pressure evolution. The analysis of the predictive and interpretative results reveals the current state of understanding and predictability of coupled THM behaviours associated with geologic nuclear waste disposal in clay formations.
A REVIEW AND COMPARISON OF MODELS FOR PREDICTING DYNAMIC CHEMICAL BIOCONCENTRATION IN FISH
Over the past 20 years, a variety of models have been developed to simulate the bioconcentration of hydrophobic organic chemicals by fish. These models differ not only in the processes they address but also in the way a given process is described. Processes described by these m...
The predictive consequences of parameterization
NASA Astrophysics Data System (ADS)
White, J.; Hughes, J. D.; Doherty, J. E.
2013-12-01
In numerical groundwater modeling, parameterization is the process of selecting the aspects of a computer model that will be allowed to vary during history matching. This selection process is dependent on professional judgment and is, therefore, inherently subjective. Ideally, a robust parameterization should be commensurate with the spatial and temporal resolution of the model and should include all uncertain aspects of the model. Limited computing resources typically require reducing the number of adjustable parameters so that only a subset of the uncertain model aspects are treated as estimable parameters; the remaining aspects are treated as fixed parameters during history matching. We use linear subspace theory to develop expressions for the predictive error incurred by fixing parameters. The predictive error is comprised of two terms. The first term arises directly from the sensitivity of a prediction to fixed parameters. The second term arises from prediction-sensitive adjustable parameters that are forced to compensate for fixed parameters during history matching. The compensation is accompanied by inappropriate adjustment of otherwise uninformed, null-space parameter components. Unwarranted adjustment of null-space components away from prior maximum likelihood values may produce bias if a prediction is sensitive to those components. The potential for subjective parameterization choices to corrupt predictions is examined using a synthetic model. Several strategies are evaluated, including use of piecewise constant zones, use of pilot points with Tikhonov regularization and use of the Karhunen-Loeve transformation. The best choice of parameterization (as defined by minimum error variance) is strongly dependent on the types of predictions to be made by the model.
BIG DATA ANALYTICS AND PRECISION ANIMAL AGRICULTURE SYMPOSIUM: Data to decisions.
White, B J; Amrine, D E; Larson, R L
2018-04-14
Big data are frequently used in many facets of business and agronomy to enhance knowledge needed to improve operational decisions. Livestock operations collect data of sufficient quantity to perform predictive analytics. Predictive analytics can be defined as a methodology and suite of data evaluation techniques to generate a prediction for specific target outcomes. The objective of this manuscript is to describe the process of using big data and the predictive analytic framework to create tools to drive decisions in livestock production, health, and welfare. The predictive analytic process involves selecting a target variable, managing the data, partitioning the data, then creating algorithms, refining algorithms, and finally comparing accuracy of the created classifiers. The partitioning of the datasets allows model building and refining to occur prior to testing the predictive accuracy of the model with naive data to evaluate overall accuracy. Many different classification algorithms are available for predictive use and testing multiple algorithms can lead to optimal results. Application of a systematic process for predictive analytics using data that is currently collected or that could be collected on livestock operations will facilitate precision animal management through enhanced livestock operational decisions.
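A minimal sketch of the partition-train-compare workflow described, using scikit-learn; the synthetic records standing in for livestock operational data and the two candidate algorithms are illustrative assumptions.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Synthetic records: features might represent intake, weight, weather, etc.
X, y = make_classification(n_samples=5000, n_features=20, random_state=0)

# Partition: build/refine on one subset, hold out naive data for final testing.
X_build, X_naive, y_build, y_naive = train_test_split(X, y, test_size=0.25, random_state=0)

for clf in (LogisticRegression(max_iter=1000),
            RandomForestClassifier(n_estimators=200, random_state=0)):
    clf.fit(X_build, y_build)                            # create/refine the classifier
    acc = accuracy_score(y_naive, clf.predict(X_naive))  # compare accuracy on naive data
    print(type(clf).__name__, round(acc, 3))
```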
NASA Astrophysics Data System (ADS)
Quinn, Niall; Freer, Jim; Coxon, Gemma; Dunne, Toby; Neal, Jeff; Bates, Paul; Sampson, Chris; Smith, Andy; Parkin, Geoff
2017-04-01
Computationally efficient flood inundation modelling systems capable of representing important hydrological and hydrodynamic flood generating processes over relatively large regions are vital for those interested in flood preparation, response, and real time forecasting. However, such systems are currently not readily available. This can be particularly important where flood predictions from intense rainfall are considered, as the processes leading to flooding often involve localised, non-linear, spatially connected hillslope-catchment responses. Therefore, this research introduces a novel hydrological-hydraulic modelling framework for the provision of probabilistic flood inundation predictions across catchment to regional scales that explicitly account for spatial variability in rainfall-runoff and routing processes. Approaches have been developed to automate the provision of required input datasets and estimate essential catchment characteristics from freely available, national datasets. This is an essential component of the framework, as when making predictions over multiple catchments or at relatively large scales, and where data are often scarce, obtaining local information and manually incorporating it into the model quickly becomes infeasible. An extreme flooding event in the town of Morpeth, NE England, in 2008 was used as a first case study evaluation of the modelling framework introduced. The results demonstrated a high degree of prediction accuracy when comparing modelled and reconstructed event characteristics for the event, while the efficiency of the modelling approach used enabled the generation of relatively large ensembles of realisations from which uncertainty within the prediction may be represented. This research supports previous literature highlighting the importance of probabilistic forecasting, particularly during extreme events, which can often be poorly characterised or even missed by deterministic predictions due to the inherent uncertainty in any model application. Future research will aim to further evaluate the robustness of the approaches introduced by applying the modelling framework to a variety of historical flood events across UK catchments. Furthermore, the flexibility and efficiency of the framework is ideally suited to the examination of the propagation of errors through the model, which will help gain a better understanding of the dominant sources of uncertainty currently impacting flood inundation predictions.
Predictive Validation of an Influenza Spread Model
Hyder, Ayaz; Buckeridge, David L.; Leung, Brian
2013-01-01
Background Modeling plays a critical role in mitigating impacts of seasonal influenza epidemics. Complex simulation models are currently at the forefront of evaluating optimal mitigation strategies at multiple scales and levels of organization. Given their evaluative role, these models remain limited in their ability to predict and forecast future epidemics, leading some researchers and public-health practitioners to question their usefulness. The objective of this study is to evaluate the predictive ability of an existing complex simulation model of influenza spread. Methods and Findings We used extensive data on past epidemics to demonstrate the process of predictive validation. This involved generalizing an individual-based model for influenza spread and fitting it to laboratory-confirmed influenza infection data from a single observed epidemic (1998–1999). Next, we used the fitted model and modified two of its parameters based on data on real-world perturbations (vaccination coverage by age group and strain type). Simulating epidemics under these changes allowed us to estimate the deviation/error between the expected epidemic curve under perturbation and observed epidemics taking place from 1999 to 2006. Our model was able to forecast absolute intensity and epidemic peak week several weeks in advance with reasonable reliability; performance depended on the method of forecasting (static or dynamic). Conclusions Good predictive ability of influenza epidemics is critical for implementing mitigation strategies in an effective and timely manner. Through the process of predictive validation applied to a current complex simulation model of influenza spread, we provided users of the model (e.g. public-health officials and policy-makers) with quantitative metrics and practical recommendations on mitigating impacts of seasonal influenza epidemics. This methodology may be applied to other models of communicable infectious diseases to test and potentially improve their predictive ability. PMID:23755236
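A brief sketch of the kind of deviation/error summary used in predictive validation, comparing a predicted epidemic curve against an observed one; the weekly case counts here are synthetic.

```python
import numpy as np

observed = np.array([2, 5, 11, 30, 55, 80, 62, 40, 21, 9, 4], dtype=float)
predicted = np.array([3, 6, 14, 35, 60, 74, 58, 35, 18, 8, 3], dtype=float)

peak_week_error = int(np.argmax(predicted)) - int(np.argmax(observed))  # weeks
intensity_error = (predicted.max() - observed.max()) / observed.max()   # relative
rmse = np.sqrt(np.mean((predicted - observed) ** 2))                    # curve error
print(peak_week_error, round(intensity_error, 3), round(rmse, 2))
```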
Dynamic predictive model for the growth of Salmonella spp. in liquid whole egg.
Singh, Aikansh; Korasapati, Nageswara R; Juneja, Vijay K; Subbiah, Jeyamkondan; Froning, Glenn; Thippareddi, Harshavardhan
2011-04-01
A dynamic model for the growth of Salmonella spp. in liquid whole egg (LWE) (approximately pH 7.8) under continuously varying temperature was developed. The model was validated using 2 (5 to 15 °C; 600 h and 10 to 40 °C; 52 h) sinusoidal, continuously varying temperature profiles. LWE adjusted to pH 7.8 was inoculated with approximately 2.5-3.0 log CFU/mL of Salmonella spp., and growth data at several isothermal conditions (5, 7, 10, 15, 20, 25, 30, 35, 37, 39, 41, 43, 45, and 47 °C) were collected. A primary model (the Baranyi model) was fitted to the growth data at each temperature, and the corresponding maximum growth rates were estimated. Pseudo-R2 values were greater than 0.97 for the primary models. A modified Ratkowsky model was used as the secondary model. The pseudo-R2 and root mean square error were 0.99 and 0.06 log CFU/mL, respectively, for the secondary model. A dynamic model for the prediction of Salmonella spp. growth under varying temperature conditions was developed using the 4th-order Runge-Kutta method. The developed dynamic model was validated for the 2 sinusoidal temperature profiles, 5 to 15 °C (for 600 h) and 10 to 40 °C (for 52 h), with corresponding root mean squared error values of 0.28 and 0.23 log CFU/mL, respectively, between predicted and observed Salmonella spp. populations. The developed dynamic model can be used to predict the growth of Salmonella spp. in LWE under varying temperature conditions. Liquid egg and egg products are widely used in food processing and in restaurant operations. These products can be contaminated with Salmonella spp. during breaking and other unit operations during processing. The raw, liquid egg products are stored under refrigeration prior to pasteurization. However, process deviations can occur, such as refrigeration failure, leading to temperature fluctuations above the required temperatures as specified in the critical limits within hazard analysis and critical control point plans for the operations. The processors are required to evaluate the potential growth of Salmonella spp. in such products before the product can be used or further processed. Dynamic predictive models are excellent tools for regulators as well as processing plant personnel to evaluate the microbiological safety of the product under such conditions.
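A hedged sketch of the dynamic-model construction: a Ratkowsky-type square-root secondary model supplies the maximum growth rate as a function of temperature, and growth is integrated with the 4th-order Runge-Kutta method under a sinusoidal temperature profile. A simplified logistic primary model stands in for the full Baranyi model, and all parameter values are hypothetical.

```python
import numpy as np

b, Tmin, Nmax = 0.025, 4.0, 9.0           # assumed Ratkowsky slope, notional Tmin, max log count

def mu_max(T):                            # secondary model: sqrt(mu_max) = b (T - Tmin)
    return (b * max(T - Tmin, 0.0)) ** 2  # log CFU/mL per h

def dNdt(t, N):                           # simplified logistic primary model (stand-in for Baranyi)
    return mu_max(temp(t)) * (1.0 - N / Nmax)

temp = lambda t: 25.0 + 15.0 * np.sin(2 * np.pi * t / 52.0)  # 10-40 degC sinusoid over 52 h

N, t, dt = 2.7, 0.0, 0.1                  # ~2.7 log CFU/mL initial population
while t < 52.0:                           # classic 4th-order Runge-Kutta stepping
    k1 = dNdt(t, N)
    k2 = dNdt(t + dt / 2, N + dt * k1 / 2)
    k3 = dNdt(t + dt / 2, N + dt * k2 / 2)
    k4 = dNdt(t + dt, N + dt * k3)
    N += dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6
    t += dt
print(round(N, 2))                        # predicted log CFU/mL after 52 h
```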
Building a Framework in Improving Drought Monitoring and Early Warning Systems in Africa
NASA Astrophysics Data System (ADS)
Tadesse, T.; Wall, N.; Haigh, T.; Shiferaw, A. S.; Beyene, S.; Demisse, G. B.; Zaitchik, B.
2015-12-01
Decision makers need a basic understanding of the prediction models and products for hydro-climatic extremes, and their suitability in time and space, to carry out strategic resource and development planning and to develop mitigation and adaptation strategies. Advances in our ability to assess and predict climate extremes (e.g., droughts and floods) under evolving climate change suggest an opportunity to improve management of climatic/hydrologic risk in agriculture and water resources. In the NASA-funded project entitled "Seasonal Prediction of Hydro-Climatic Extremes in the Greater Horn of Africa (GHA) under Evolving Climate Conditions to Support Adaptation Strategies," we are attempting to develop a framework that uses dialogue between managers and scientists to enhance the use of model outputs and prediction products in the GHA, as well as to improve the delivery of this information in ways that can be easily utilized by managers. This process is expected to help our multidisciplinary research team obtain feedback on the models and forecast products. In addition, engaging decision makers is essential in evaluating the use of drought and flood prediction models and products for decision-making processes in drought and flood management. Through this study, we plan to assess the information requirements for implementing robust Early Warning Systems (EWSs) by engaging decision makers in the process. This participatory process could also strengthen existing EWSs in Africa and support the development of new local and regional EWSs. In this presentation, we report the progress made in the past two years of the NASA project.
NASA Astrophysics Data System (ADS)
Hughes, J. D.; White, J.; Doherty, J.
2011-12-01
Linear prediction uncertainty analysis in a Bayesian framework was applied to guide the conditioning of an integrated surface water/groundwater model that will be used to predict the effects of groundwater withdrawals on surface-water and groundwater flows. Linear prediction uncertainty analysis is an effective approach for identifying (1) raw and processed data most effective for model conditioning prior to inversion, (2) specific observations and periods of time critically sensitive to specific predictions, and (3) additional observation data that would reduce model uncertainty relative to specific predictions. We present results for a two-dimensional groundwater model of a 2,186 km2 area of the Biscayne aquifer in south Florida implicitly coupled to a surface-water routing model of the actively managed canal system. The model domain includes 5 municipal well fields withdrawing more than 1 Mm3/day and 17 operable surface-water control structures that control freshwater releases from the Everglades and freshwater discharges to Biscayne Bay. More than 10 years of daily observation data from 35 groundwater wells and 24 surface water gages are available to condition model parameters. A dense parameterization was used to fully characterize the contribution of the inversion null space to predictive uncertainty and included bias-correction parameters. This approach allows better resolution of the boundary between the inversion null space and solution space. Bias-correction parameters (e.g., rainfall, potential evapotranspiration, and structure flow multipliers) absorb information that is present in structural noise that may otherwise contaminate the estimation of more physically-based model parameters. This allows greater precision in predictions that are entirely solution-space dependent, and reduces the propensity for bias in predictions that are not. Results show that application of this analysis is an effective means of identifying those surface-water and groundwater data, both raw and processed, that minimize predictive uncertainty, while simultaneously identifying the maximum solution-space dimensionality of the inverse problem supported by the data.
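The solution-space/null-space decomposition at the heart of this analysis can be sketched with a singular value decomposition of the model's Jacobian; the Jacobian and prediction sensitivity vector below are random stand-ins, not the Biscayne model.

```python
import numpy as np

rng = np.random.default_rng(3)
J = rng.normal(size=(60, 25)) @ rng.normal(size=(25, 40))  # rank-deficient Jacobian
y_sens = rng.normal(size=40)              # d(prediction)/d(parameters), a stand-in

U, s, Vt = np.linalg.svd(J)
k = int(np.sum(s > 1e-8 * s[0]))          # solution-space dimensionality
V_null = Vt[k:].T                         # parameter combinations unseen by the data

null_part = V_null @ (V_null.T @ y_sens)  # prediction sensitivity lost to the null space
frac = np.linalg.norm(null_part) / np.linalg.norm(y_sens)
print(f"solution-space dim: {k}, null-space fraction of prediction sensitivity: {frac:.2f}")
```

A prediction whose sensitivity lies mostly in the solution space is well constrained by the calibration data; a large null-space fraction flags a prediction whose uncertainty cannot be reduced by the available observations.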
NASA Astrophysics Data System (ADS)
Johnston, J. M.
2013-12-01
Freshwater habitats provide fishable, swimmable and drinkable resources and are a nexus of geophysical and biological processes. These processes in turn influence the persistence and sustainability of populations, communities and ecosystems. Climate change and land-use change introduce numerous potential stressors, including toxic contaminants, invasive species, and disease, in addition to physical drivers such as temperature and hydrologic regime. A systems approach that includes the scientific and technologic basis of assessing the health of ecosystems is needed to effectively protect human health and the environment. The Integrated Environmental Modeling Framework 'iemWatersheds' has been developed as a consistent and coherent means of forecasting the cumulative impact of co-occurring stressors. The Framework consists of three facilitating technologies: Data for Environmental Modeling (D4EM) that automates the collection and standardization of input data; the Framework for Risk Assessment of Multimedia Environmental Systems (FRAMES) that manages the flow of information between linked models; and the Supercomputer for Model Uncertainty and Sensitivity Evaluation (SuperMUSE) that provides post-processing and analysis of model outputs, including uncertainty and sensitivity analysis. Five models are linked within the Framework to provide multimedia simulation capabilities for hydrology and water quality processes: the Soil Water Assessment Tool (SWAT) predicts surface water and sediment runoff and associated contaminants; the Watershed Mercury Model (WMM) predicts mercury runoff and loading to streams; the Water quality Analysis and Simulation Program (WASP) predicts water quality within the stream channel; the Habitat Suitability Index (HSI) model scores physicochemical habitat quality for individual fish species; and the Bioaccumulation and Aquatic System Simulator (BASS) predicts fish growth, population dynamics and bioaccumulation of toxic substances. The capability of the Framework to address cumulative impacts will be demonstrated for freshwater ecosystem services and mountaintop mining.
Developing a Data Driven Process-Based Model for Remote Sensing of Ecosystem Production
NASA Astrophysics Data System (ADS)
Elmasri, B.; Rahman, A. F.
2010-12-01
Estimating ecosystem carbon fluxes at various spatial and temporal scales is essential for quantifying the global carbon cycle. Numerous models have been developed for this purpose using several environmental variables as well as vegetation indices derived from remotely sensed data. Here we present a data-driven modeling approach for gross primary production (GPP) that is based on the process-based model BIOME-BGC. The proposed model was run using available remote sensing data and does not depend on look-up tables. Furthermore, this approach combines the merits of both empirical and process models; empirical models were used to estimate certain input variables such as light use efficiency (LUE). This was achieved by applying remotely sensed data to the mathematical equations that represent biophysical photosynthesis processes in the BIOME-BGC model. Moreover, a new spectral index for estimating maximum photosynthetic activity, the maximum photosynthetic rate index (MPRI), is also developed and presented here. This new index is based on the ratio between the near-infrared and green bands (ρ858.5/ρ555). The model was tested and validated against the MODIS GPP product and flux measurements from two eddy covariance flux towers located at Morgan Monroe State Forest (MMSF) in Indiana and Harvard Forest in Massachusetts. Satellite data acquired by the Advanced Microwave Scanning Radiometer (AMSR-E) and MODIS were used. The data-driven model showed a strong correlation between predicted and measured GPP at the two eddy covariance flux tower sites. This methodology produced better predictions of GPP than did the MODIS GPP product. Moreover, the proportion of error in the predicted GPP for MMSF and Harvard Forest was dominated by unsystematic errors, suggesting that the results are unbiased. The analysis indicated that maintenance respiration is one of the main factors dominating the overall model outcome errors, and improvement in maintenance respiration estimation will result in improved GPP predictions. Although there may be room for improvement in our model outcomes through improved parameterization, our results suggest that such a methodology for running the BIOME-BGC model based entirely on routinely available data can produce good predictions of GPP.
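The proposed index in code form, assuming per-pixel reflectance arrays; the values shown are synthetic.

```python
# MPRI = rho_858.5 / rho_555 (near-infrared over green), computed per pixel.
import numpy as np

nir = np.array([[0.42, 0.38], [0.45, 0.40]])    # reflectance at 858.5 nm
green = np.array([[0.08, 0.07], [0.10, 0.09]])  # reflectance at 555 nm
mpri = nir / np.where(green > 0, green, np.nan)  # guard against divide-by-zero
print(mpri.round(2))
```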
NASA Astrophysics Data System (ADS)
Weres, Jerzy; Kujawa, Sebastian; Olek, Wiesław; Czajkowski, Łukasz
2016-04-01
Knowledge of the physical properties of biomaterials is important in understanding and designing agri-food and wood processing operations. In the study presented in this paper, computational methods were developed and combined with experiments to enhance the identification of agri-food and forest product properties, and to predict heat and water transport in such products. They were based on a finite element model of heat and water transport and supplemented with experimental data. Algorithms were proposed for image processing, geometry meshing, and inverse/direct finite element modelling. The resulting software system was composed of integrated subsystems for 3D geometry data acquisition and mesh generation, for 3D geometry modelling and visualization, and for inverse/direct problem computations for the heat and water transport processes. Auxiliary packages were developed to assess performance, accuracy and unification of data access. The software was validated by identifying selected properties and using the estimated values to predict the examined processes, and then comparing predictions to experimental data. The geometry, thermal conductivity, specific heat, coefficient of water diffusion, equilibrium water content and convective heat and water transfer coefficients in the boundary layer were analysed. The estimated values, used as an input for simulation of the examined processes, enabled a reduction in the uncertainty associated with predictions.
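A much-simplified 1D finite-difference analogue of the transient heat transport problem described (the paper itself uses finite elements), with convective boundaries; all material properties are hypothetical.

```python
import numpy as np

L, n, dt = 0.02, 41, 0.01                  # slab thickness (m), nodes, time step (s)
alpha, h, k = 1.4e-7, 25.0, 0.5            # diffusivity (m2/s), HTC (W/m2K), conductivity (W/mK)
dx = L / (n - 1)
T = np.full(n, 20.0)                       # initial temperature (degC)
T_air = 80.0                               # ambient drying temperature

for _ in range(60000):                     # explicit time stepping (~10 simulated minutes)
    Tn = T.copy()
    T[1:-1] = Tn[1:-1] + alpha * dt / dx**2 * (Tn[2:] - 2 * Tn[1:-1] + Tn[:-2])
    # convective boundary at both surfaces: -k dT/dx = h (T - T_air)
    T[0] = (Tn[1] + dx * h / k * T_air) / (1 + dx * h / k)
    T[-1] = (Tn[-2] + dx * h / k * T_air) / (1 + dx * h / k)
print(T.min(), T.max())                    # core still heating, surfaces near ambient
```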
NASA Astrophysics Data System (ADS)
Dolipski, Marian; Cheluszka, Piotr; Sobota, Piotr; Remiorz, Eryk
2017-03-01
The key working process carried out by roadheaders is rock mining. For this reason, the mathematical modelling of the mining process underlies the prediction of the dynamic load on the main components of a roadheader, the prediction of power demand for cutting rock with given properties, and the prediction of the energy consumption of this process. The theoretical and experimental investigations conducted indicate that, especially in relation to the technical parameters of roadheaders used today in underground mining and their operating conditions, the mathematical models of the process employed to date have many limitations, and in many cases the results obtained using such models deviate considerably from reality. This is because certain factors strongly influencing the progress of the cutting process have not been considered at the modelling stage, or have been approached in an oversimplified fashion. The article presents a new model of the rock cutting process using conical picks on the cutting heads of boom-type roadheaders. An important novelty with respect to the models applied to date is, firstly, that the actual shape of cuts has been modelled, with this shape resulting from the geometry of the currently used conical picks; and, secondly, that variations in the depth of cuts along the cutting path of individual picks have been considered, with these variations resulting from the picks' kinematics during the advancement of transverse cutting heads parallel to the floor surface. The work presents examples of simulation results for mining with a roadheader's transverse head equipped with 80 conical picks and compares them with the outcomes obtained using the existing model.
Crevillén-García, D
2018-04-01
Time-consuming numerical simulators for solving groundwater flow and dissolution models of physico-chemical processes in deep aquifers normally require some of the model inputs to be defined in high-dimensional spaces in order to return realistic results. Sometimes, the outputs of interest are spatial fields leading to high-dimensional output spaces. Although Gaussian process emulation has been satisfactorily used for computing faithful and inexpensive approximations of complex simulators, such emulators have mostly been applied to problems defined in low-dimensional input spaces. In this paper, we propose a method for simultaneously reducing the dimensionality of very high-dimensional input and output spaces in Gaussian process emulators for stochastic partial differential equation models while retaining the qualitative features of the original models. This allows us to build a surrogate model for the prediction of spatial fields in such time-consuming simulators. We apply the methodology to a model of convection and dissolution processes occurring during carbon capture and storage.
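A hedged sketch of the input/output dimension-reduction idea: project high-dimensional inputs and output fields onto a few principal components, emulate the retained output components with a Gaussian process, and map predictions back to field space. The "simulator" below is a random smooth stand-in, and the component counts are arbitrary.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(7)
X = rng.normal(size=(200, 500))                   # high-dimensional inputs (e.g. property fields)
W = rng.normal(size=(500, 300)) / 500.0
Y = np.tanh(X @ W) + 0.01 * rng.normal(size=(200, 300))  # high-dimensional output fields

pca_in, pca_out = PCA(n_components=10), PCA(n_components=5)
Z, C = pca_in.fit_transform(X), pca_out.fit_transform(Y)  # reduced inputs and outputs

gp = GaussianProcessRegressor(kernel=RBF(length_scale=np.ones(10)), normalize_y=True)
gp.fit(Z, C)                                      # emulate the retained output components

# Predict in reduced space, then reconstruct the spatial fields.
Y_pred = pca_out.inverse_transform(gp.predict(pca_in.transform(X[:5])))
print(Y_pred.shape)                               # (5, 300): emulated output fields
```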
SVM-Based System for Prediction of Epileptic Seizures from iEEG Signal
Cherkassky, Vladimir; Lee, Jieun; Veber, Brandon; Patterson, Edward E.; Brinkmann, Benjamin H.; Worrell, Gregory A.
2017-01-01
Objective This paper describes a data-analytic modeling approach for prediction of epileptic seizures from intracranial electroencephalogram (iEEG) recording of brain activity. Even though it is widely accepted that statistical characteristics of iEEG signal change prior to seizures, robust seizure prediction remains a challenging problem due to subject-specific nature of data-analytic modeling. Methods Our work emphasizes understanding of clinical considerations important for iEEG-based seizure prediction, and proper translation of these clinical considerations into data-analytic modeling assumptions. Several design choices during pre-processing and post-processing are considered and investigated for their effect on seizure prediction accuracy. Results Our empirical results show that the proposed SVM-based seizure prediction system can achieve robust prediction of preictal and interictal iEEG segments from dogs with epilepsy. The sensitivity is about 90–100%, and the false-positive rate is about 0–0.3 times per day. The results also suggest good prediction is subject-specific (dog or human), in agreement with earlier studies. Conclusion Good prediction performance is possible only if the training data contain sufficiently many seizure episodes, i.e., at least 5–7 seizures. Significance The proposed system uses subject-specific modeling and unbalanced training data. This system also utilizes three different time scales during training and testing stages. PMID:27362758
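A minimal sketch of the classification core only (not the pre- and post-processing pipeline the paper investigates): an SVM separating preictal from interictal feature vectors, with class weighting for the unbalanced training data; the feature vectors here are synthetic.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import recall_score

rng = np.random.default_rng(11)
interictal = rng.normal(0.0, 1.0, size=(900, 24))   # assumed spectral-power features
preictal = rng.normal(0.8, 1.0, size=(60, 24))      # rare positive class
X = np.vstack([interictal, preictal])
y = np.array([0] * 900 + [1] * 60)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
clf = SVC(kernel="rbf", class_weight="balanced").fit(X_tr, y_tr)  # weight rare class up

y_hat = clf.predict(X_te)
sensitivity = recall_score(y_te, y_hat)              # preictal hit rate
fp_rate = np.mean(y_hat[y_te == 0])                  # interictal false-alarm fraction
print(round(sensitivity, 2), round(fp_rate, 3))
```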
Necpálová, Magdalena; Anex, Robert P.; Fienen, Michael N.; Del Grosso, Stephen J.; Castellano, Michael J.; Sawyer, John E.; Iqbal, Javed; Pantoja, Jose L.; Barker, Daniel W.
2015-01-01
The ability of biogeochemical ecosystem models to represent agro-ecosystems depends on their correct integration with field observations. We report simultaneous calibration of 67 DayCent model parameters using multiple observation types through inverse modeling using the PEST parameter estimation software. Parameter estimation reduced the total sum of weighted squared residuals by 56% and improved model fit to crop productivity, soil carbon, volumetric soil water content, soil temperature, N2O, and soil NO3− compared to the default simulation. Inverse modeling substantially reduced predictive model error relative to the default model for all model predictions, except for soil NO3− and NH4+. Post-processing analyses provided insights into parameter–observation relationships based on parameter correlations, sensitivity and identifiability. Inverse modeling tools are shown to be a powerful way to systematize and accelerate the process of biogeochemical model interrogation, improving our understanding of model function and the underlying ecosystem biogeochemical processes that they represent.
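A hedged analogue of the PEST-style calibration using SciPy's Levenberg-Marquardt solver to minimise weighted squared residuals; the two-parameter forward model below is a toy, not DayCent.

```python
import numpy as np
from scipy.optimize import least_squares

t = np.linspace(0, 10, 50)
true = np.array([1.3, 0.4])
obs = true[0] * np.exp(-true[1] * t) + 0.02 * np.random.default_rng(5).normal(size=t.size)
weights = np.full(t.size, 1 / 0.02)            # inverse observation error (one obs group)

def residuals(p):
    # Weighted residuals between the forward model and the observations.
    return weights * (p[0] * np.exp(-p[1] * t) - obs)

fit = least_squares(residuals, x0=[1.0, 0.1], method="lm")
print(fit.x, 0.5 * np.sum(fit.fun ** 2))       # estimated parameters, weighted objective
```

With multiple observation types, each group would get its own weight vector so that no single data type dominates the objective, mirroring the multi-observation calibration described above.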
Using simplifications of reality in the real world: Robust benefits of models for decision making
NASA Astrophysics Data System (ADS)
Hunt, R. J.
2008-12-01
Models are by definition simplifications of reality; the degree and nature of simplification, however, is debated. One view is "the world is 3D, heterogeneous, and transient, thus good models are too" - the more a model directly simulates the complexity of the real world, the better it is considered to be. An alternative view is to use only simple models up front, because real-world complexity can never be truly known. A third view is to construct and calibrate as many models as there are predictions. A fourth is to build highly parameterized models and either look at an ensemble of results or use mathematical regularization to identify an optimal, most reasonable parameter set and fit. Although each view may have utility for a given decision-making process, there are common threads that perhaps run through all views. First, the model-construction process itself can help the decision-making process because it raises the discussion of opposing parties from one of contrasting professional opinions to discussion of reasonable types and ranges of model inputs and processes. Secondly, no matter which view is used to guide the model building, model predictions might be expected to perform poorly in the future due to unanticipated changes and stressors in the underlying system simulated. Although this does not reduce the obligation of the modeler to build representative tools for the system, it should serve to temper expectations of model performance. Finally, perhaps the most under-appreciated utility of models is for calculating the reduction in prediction uncertainty resulting from different data collection strategies - an attractive feature separate from the calculation and minimization of absolute prediction uncertainty itself. This type of model output facilitates focusing on efficient use of current and future monitoring resources - something valued by many decision-makers regardless of background, system managed, and societal context.
Exercise habit formation in new gym members: a longitudinal study.
Kaushal, Navin; Rhodes, Ryan E
2015-08-01
Reasoned action approaches have primarily been applied to understand exercise behaviour for the past three decades, yet emerging findings in research on unconscious and dual-process mechanisms show that behaviour may also be predicted by automatic processes such as habit. The purpose of this study was to: (1) investigate the behavioural requirements for exercise habit formation, (2) examine how a dual-process approach predicts behaviour, and (3) test what predicts habit using the model of Lally and Gardner (Health Psychol Rev 7:S137-S158, 2013). Participants (n = 111) were new gym members who completed surveys across 12 weeks. It was found that exercising for at least four bouts per week for 6 weeks was the minimum requirement to establish an exercise habit. Dual-process analysis using Linear Mixed Models (LMM) revealed habit and intention to be parallel predictors of exercise behaviour in the trajectory analysis. Finally, the habit antecedent model in LMM showed that consistency (β = .21), low behavioural complexity (β = .19), environment (β = .17) and affective judgments (β = .13) all significantly (p < .05) predicted changes in habit formation over time. Trainers should keep exercises fun and simple for new clients and focus on consistency, which could lead to habit formation in nearly 6 weeks.
White, J.D.; Running, S.W.; Thornton, P.E.; Keane, R.E.; Ryan, K.C.; Fagre, D.B.; Key, C.H.
1998-01-01
Glacier National Park served as a test site for ecosystem analyses that involved a suite of integrated models embedded within a geographic information system. The goal of the exercise was to provide managers with maps that could illustrate probable shifts in vegetation, net primary production (NPP), and hydrologic responses associated with two selected climatic scenarios. The climatic scenarios were (a) a recent 12-yr record of weather data, and (b) a reconstituted set that sequentially introduced in repeated 3-yr intervals wetter-cooler, drier-warmer, and typical conditions. To extrapolate the implications of changes in ecosystem processes and the resulting growth and distribution of vegetation and snowpack, the model incorporated geographic data. With underlying digital elevation maps, soil depth and texture, extrapolated climate, and current information on vegetation types and satellite-derived estimates of leaf area index, simulations were extended to envision how the park might look after 120 yr. The predictions of change included underlying processes affecting the availability of water and nitrogen. Considerable field data were acquired to compare with model predictions under current climatic conditions. In general, the integrated landscape models of ecosystem processes had good agreement with measured NPP, snowpack, and streamflow, but the exercise revealed the difficulty and necessity of averaging point measurements across landscapes to achieve comparable results with modeled values. Under the extremely variable climate scenario, significant changes in vegetation composition and growth as well as hydrologic responses were predicted across the park. In particular, a general rise in both the upper and lower limits of treeline was predicted. These shifts would probably occur along with a variety of disturbances (fire, insect, and disease outbreaks) as predictions of physiological stress (water, nutrients, light) altered competitive relations and hydrologic responses. The use of integrated landscape models applied in this exercise should provide managers with insights into the underlying processes important in maintaining community structure, and at the same time, locate where changes on the landscape are most likely to occur.
Predictive and Prognostic Models: Implications for Healthcare Decision-Making in a Modern Recession
Vogenberg, F. Randy
2009-01-01
Various modeling tools have been developed to address the lack of standardized processes that incorporate the perspectives of all healthcare stakeholders. Such models can assist in decision-making aimed at achieving specific clinical outcomes, as well as guide the allocation of healthcare resources and reduce costs. The current efforts in Congress to change the way healthcare is financed, reimbursed, and delivered have made the incorporation of modeling tools into clinical decision-making all the more important. Prognostic and predictive models are particularly relevant to clinical decision-making in healthcare, with implications for payers, patients, and providers. The use of these models is likely to increase as providers and patients seek to improve their clinical decision processes to achieve better outcomes while reducing overall healthcare costs. PMID:25126292
Modeling of weld bead geometry for rapid manufacturing by robotic GMAW
NASA Astrophysics Data System (ADS)
Yang, Tao; Xiong, Jun; Chen, Hui; Chen, Yong
2015-03-01
Weld-based rapid prototyping (RP) has shown great promise for fabricating 3D complex parts. During the layered deposition of forming metallic parts with robotic gas metal arc welding, the geometry of a single weld bead has an important influence on the surface finish quality, layer thickness and dimensional accuracy of the deposited layer. In order to obtain accurate, predictable and controllable bead geometry, it is essential to understand the relationships between the process variables and the bead geometry (bead width, bead height and ratio of bead width to bead height). This paper highlights an experimental study carried out to develop mathematical models to predict deposited bead geometry through the quadratic general rotary unitized design. The adequacy and significance of the models were verified via analysis of variance. Complicated cause-effect relationships between the process parameters and the bead geometry were revealed. Results show that the developed models can be applied to predict the desired bead geometry with great accuracy in layered deposition, in accordance with the slicing process of RP.
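A sketch of a second-order response-surface fit of bead geometry to welding parameters, in the spirit of the quadratic design described; the variable names, ranges, and data are hypothetical stand-ins for the paper's designed experiment.

```python
import numpy as np
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(2)
# Assumed process variables: wire feed rate, arc voltage, travel speed.
X = rng.uniform([4, 18, 0.3], [10, 26, 0.7], size=(30, 3))
width = 2 + 0.8 * X[:, 0] + 0.3 * X[:, 1] - 3 * X[:, 2] \
        - 0.04 * X[:, 0] ** 2 + rng.normal(0, 0.1, 30)       # synthetic bead width (mm)

# Quadratic response surface: all linear, squared, and interaction terms.
model = make_pipeline(PolynomialFeatures(degree=2), LinearRegression())
model.fit(X, width)
print(model.predict([[7.0, 22.0, 0.5]]))  # predicted bead width at a new setting
```

Bead height and the width-to-height ratio would each get an analogous quadratic fit, and analysis of variance on the fitted terms would check model adequacy as the paper describes.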
The treatment of uncertainties in reactive pollution dispersion models at urban scales.
Tomlin, A S; Ziehn, T; Goodman, P; Tate, J E; Dixon, N S
2016-07-18
The ability to predict NO2 concentrations ([NO2]) within urban street networks is important for the evaluation of strategies to reduce exposure to NO2. However, models aiming to make such predictions involve the coupling of several complex processes: traffic emissions under different levels of congestion; dispersion via turbulent mixing; chemical processes of relevance at the street-scale. Parameterisations of these processes are challenging to quantify with precision. Predictions are therefore subject to uncertainties which should be taken into account when using models within decision making. This paper presents an analysis of mean [NO2] predictions from such a complex modelling system applied to a street canyon within the city of York, UK including the treatment of model uncertainties and their causes. The model system consists of a micro-scale traffic simulation and emissions model, and a Reynolds averaged turbulent flow model coupled to a reactive Lagrangian particle dispersion model. The analysis focuses on the sensitivity of predicted in-street increments of [NO2] at different locations in the street to uncertainties in the model inputs. These include physical characteristics such as background wind direction, temperature and background ozone concentrations; traffic parameters such as overall demand and primary NO2 fraction; as well as model parameterisations such as roughness lengths, turbulent time- and length-scales and chemical reaction rate coefficients. Predicted [NO2] is shown to be relatively robust with respect to model parameterisations, although there are significant sensitivities to the activation energy for the reaction NO + O3 as well as the canyon wall roughness length. Under off-peak traffic conditions, demand is the key traffic parameter. Under peak conditions where the network saturates, road-side [NO2] is relatively insensitive to changes in demand and more sensitive to the primary NO2 fraction. The most important physical parameter was found to be the background wind direction. The study highlights the key parameters required for reliable [NO2] estimations suggesting that accurate reference measurements for wind direction should be a critical part of air quality assessments for in-street locations. It also highlights the importance of street scale chemical processes in forming road-side [NO2], particularly for regions of high NOx emissions such as close to traffic queues.
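A generic Monte Carlo sensitivity sketch (not the authors' coupled traffic-flow-chemistry model): sample the uncertain inputs, evaluate a stand-in street-canyon [NO2] response, and rank the inputs by correlation with the output.

```python
import numpy as np

rng = np.random.default_rng(8)
n = 5000
inputs = {
    "wind_dir_deg": rng.uniform(0, 360, n),
    "f_NO2": rng.uniform(0.05, 0.35, n),       # primary NO2 fraction
    "bg_O3_ppb": rng.uniform(10, 60, n),       # background ozone
    "demand": rng.uniform(0.5, 1.5, n),        # relative traffic demand
}

def no2_toy(w, f, o3, d):                      # placeholder response surface
    return d * (20 + 80 * f) + 0.4 * o3 + 5 * np.cos(np.radians(w - 270))

out = no2_toy(*inputs.values())
for name, x in inputs.items():                 # simple correlation-based ranking
    r = np.corrcoef(x, out)[0, 1]
    print(f"{name:>14s}: r = {r:+.2f}")
```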
Lee, Jaebeom; Lee, Young-Joo
2018-01-01
Management of the vertical long-term deflection of a high-speed railway bridge is a crucial factor to guarantee traffic safety and passenger comfort. Therefore, there have been efforts to predict the vertical deflection of a railway bridge based on physics-based models representing various influential factors to vertical deflection such as concrete creep and shrinkage. However, it is not an easy task because the vertical deflection of a railway bridge generally involves several sources of uncertainty. This paper proposes a probabilistic method that employs a Gaussian process to construct a model to predict the vertical deflection of a railway bridge based on actual vision-based measurement and temperature. To deal with the sources of uncertainty which may cause prediction errors, a Gaussian process is modeled with multiple kernels and hyperparameters. Once the hyperparameters are identified through the Gaussian process regression using training data, the proposed method provides a 95% prediction interval as well as a predictive mean about the vertical deflection of the bridge. The proposed method is applied to an arch bridge under operation for high-speed trains in South Korea. The analysis results obtained from the proposed method show good agreement with the actual measurement data on the vertical deflection of the example bridge, and the prediction results can be utilized for decision-making on railway bridge maintenance. PMID:29747421
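A hedged sketch of the multi-kernel Gaussian process idea: a smooth trend kernel plus a periodic (temperature-like) kernel plus a noise term, yielding a predictive mean and a 95% interval; the deflection series below is synthetic, and the kernel choice is an assumption rather than the paper's exact configuration.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ExpSineSquared, WhiteKernel

rng = np.random.default_rng(4)
t = np.linspace(0, 3, 150)[:, None]            # time (years)
defl = -2 * t.ravel() + 0.8 * np.sin(2 * np.pi * t.ravel()) + 0.2 * rng.normal(size=150)

# Sum of kernels: long-term creep/shrinkage trend + seasonal cycle + noise.
kernel = RBF(length_scale=2.0) + ExpSineSquared(periodicity=1.0) + WhiteKernel(0.05)
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(t, defl)

mean, std = gp.predict(t, return_std=True)
lower, upper = mean - 1.96 * std, mean + 1.96 * std   # 95% prediction interval
```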
NASA Astrophysics Data System (ADS)
Kim, Chan Moon; Parnichkun, Manukid
2017-11-01
Coagulation is an important process in drinking water treatment to attain acceptable treated water quality. However, the determination of coagulant dosage is still a challenging task for operators, because coagulation is a nonlinear and complicated process. Feedback control to achieve the desired treated water quality is difficult due to the lengthy process time. In this research, a hybrid of k-means clustering and adaptive neuro-fuzzy inference system (k-means-ANFIS) is proposed for settled water turbidity prediction and optimal coagulant dosage determination using full-scale historical data. To build a model that adapts well to the different process states arising from the influent water, raw water quality data are classified into four clusters according to their properties by a k-means clustering technique. The sub-models are developed individually on the basis of each clustered data set. Results reveal that the sub-models constructed by the hybrid k-means-ANFIS perform better than not only a single ANFIS model but also seasonal models built with an artificial neural network (ANN). The final model, consisting of sub-models, shows more accurate and consistent prediction ability than a single ANFIS model and a single ANN model on all five evaluation indices. Therefore, the hybrid k-means-ANFIS model can be employed as a robust tool for managing both treated water quality and production costs simultaneously.
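A sketch of the cluster-then-submodel structure, with a gradient-boosting regressor deliberately substituted for each ANFIS sub-model (ANFIS itself is not available in scikit-learn); the raw-water records are synthetic.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(6)
X = rng.normal(size=(1200, 4))              # assumed raw-water quality features
y = X[:, 0] ** 2 + np.sin(X[:, 1]) + 0.1 * rng.normal(size=1200)  # turbidity proxy

# Cluster the influent states, then fit one sub-model per cluster.
km = KMeans(n_clusters=4, n_init=10, random_state=0).fit(X)
submodels = {c: GradientBoostingRegressor().fit(X[km.labels_ == c], y[km.labels_ == c])
             for c in range(4)}

def predict(x_new):
    c = int(km.predict(x_new.reshape(1, -1))[0])   # route to the matching sub-model
    return submodels[c].predict(x_new.reshape(1, -1))[0]

print(predict(np.zeros(4)))
```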
Uncertainty analysis of a groundwater flow model in east-central Florida
Sepúlveda, Nicasio; Doherty, John E.
2014-01-01
A groundwater flow model for east-central Florida has been developed to help water-resource managers assess the impact of increased groundwater withdrawals from the Floridan aquifer system on heads and spring flows originating from the Upper Floridan aquifer. The model provides a probabilistic description of predictions of interest to water-resource managers, given the uncertainty associated with system heterogeneity, the large number of input parameters, and a nonunique groundwater flow solution. The uncertainty associated with these predictions can then be considered in decisions with which the model has been designed to assist. The “Null Space Monte Carlo” method is a stochastic approach used to generate a suite of several hundred parameter field realizations, each maintaining the model in a calibrated state, and each considered to be hydrogeologically plausible. The results presented herein indicate that the model’s capacity to predict changes in heads or spring flows that originate from increased groundwater withdrawals is considerably greater than its capacity to predict the absolute magnitudes of heads or spring flows. Furthermore, the capacity of the model to make predictions that are similar in location and in type to those in the calibration dataset exceeds its capacity to make predictions of different types at different locations. The quantification of these outcomes allows defensible use of the modeling process in support of future water-resources decisions. The model allows the decision-making process to recognize the uncertainties, and the spatial/temporal variability of uncertainties, that are associated with predictions of future system behavior in a complex hydrogeological context.
Brain mechanisms in religion and spirituality: An integrative predictive processing framework.
van Elk, Michiel; Aleman, André
2017-02-01
We present the theory of predictive processing as a unifying framework to account for the neurocognitive basis of religion and spirituality. Our model is substantiated by discussing four different brain mechanisms that play a key role in religion and spirituality: temporal brain areas are associated with religious visions and ecstatic experiences; multisensory brain areas and the default mode network are involved in self-transcendent experiences; the Theory-of-Mind network is associated with prayer experiences and over-attribution of intentionality; top-down mechanisms instantiated in the anterior cingulate cortex and the medial prefrontal cortex could be involved in acquiring and maintaining intuitive supernatural beliefs. We compare the predictive processing model with two-systems accounts of religion and spirituality, by highlighting the central role of prediction error monitoring. We conclude by presenting novel predictions for future research and by discussing the philosophical and theological implications of neuroscientific research on religion and spirituality. Copyright © 2016 Elsevier Ltd. All rights reserved.
González-Domínguez, Elisa; Armengol, Josep; Rossi, Vittorio
2014-01-01
A mechanistic, dynamic model was developed to predict infection of loquat fruit by conidia of Fusicladium eriobotryae, the causal agent of loquat scab. The model simulates scab infection periods and their severity through the sub-processes of spore dispersal, infection, and latency (i.e., the state variables); change from one state to the following one depends on environmental conditions and on processes described by mathematical equations. Equations were developed using published data on F. eriobotryae mycelium growth, conidial germination, infection, and conidial dispersion pattern. The model was then validated by comparing model output with three independent data sets. The model accurately predicts the occurrence and severity of infection periods as well as the progress of loquat scab incidence on fruit (with concordance correlation coefficients >0.95). Model output agreed with expert assessment of the disease severity in seven loquat-growing seasons. Use of the model for scheduling fungicide applications in loquat orchards may help optimise scab management and reduce fungicide applications. PMID:25233340
Sampling design optimisation for rainfall prediction using a non-stationary geostatistical model
NASA Astrophysics Data System (ADS)
Wadoux, Alexandre M. J.-C.; Brus, Dick J.; Rico-Ramirez, Miguel A.; Heuvelink, Gerard B. M.
2017-09-01
The accuracy of spatial predictions of rainfall by merging rain-gauge and radar data is partly determined by the sampling design of the rain-gauge network. Optimising the locations of the rain-gauges may increase the accuracy of the predictions. Existing spatial sampling design optimisation methods are based on minimisation of the spatially averaged prediction error variance under the assumption of intrinsic stationarity. Over the past years, substantial progress has been made to deal with non-stationary spatial processes in kriging. Various well-documented geostatistical models relax the assumption of stationarity in the mean, while recent studies show the importance of considering non-stationarity in the variance for environmental processes occurring in complex landscapes. We optimised the sampling locations of rain-gauges using an extension of the Kriging with External Drift (KED) model for prediction of rainfall fields. The model incorporates both non-stationarity in the mean and in the variance, which are modelled as functions of external covariates such as radar imagery, distance to radar station and radar beam blockage. Spatial predictions are made repeatedly over time, each time recalibrating the model. The space-time averaged KED variance was minimised by Spatial Simulated Annealing (SSA). The methodology was tested using a case study predicting daily rainfall in the north of England for a one-year period. Results show that (i) the proposed non-stationary variance model outperforms the stationary variance model, and (ii) a small but significant decrease of the rainfall prediction error variance is obtained with the optimised rain-gauge network. In particular, it pays off to place rain-gauges at locations where the radar imagery is inaccurate, while keeping the distribution over the study area sufficiently uniform.
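A toy spatial simulated annealing loop in the spirit of SSA: perturb one gauge location at a time and accept moves by the Metropolis rule. The mean distance-to-nearest-gauge objective is a crude stand-in for the space-time averaged KED variance used in the paper.

```python
import numpy as np

rng = np.random.default_rng(9)
grid = np.stack(np.meshgrid(np.linspace(0, 1, 25), np.linspace(0, 1, 25)), -1).reshape(-1, 2)
gauges = rng.uniform(0, 1, size=(15, 2))            # initial rain-gauge network

def objective(g):
    d = np.linalg.norm(grid[:, None, :] - g[None, :, :], axis=2)
    return d.min(axis=1).mean()                     # proxy for mean prediction error

temp, obj = 0.1, objective(gauges)
for it in range(2000):
    cand = gauges.copy()
    i = rng.integers(len(cand))
    cand[i] = np.clip(cand[i] + rng.normal(0, 0.05, 2), 0, 1)  # perturb one gauge
    new = objective(cand)
    if new < obj or rng.random() < np.exp((obj - new) / temp):
        gauges, obj = cand, new                     # Metropolis acceptance
    temp *= 0.998                                   # cooling schedule
print(round(obj, 4))
```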
Del Rio-Chanona, Ehecatl A; Liu, Jiao; Wagner, Jonathan L; Zhang, Dongda; Meng, Yingying; Xue, Song; Shah, Nilay
2018-02-01
Biodiesel produced from microalgae has been extensively studied due to its potentially outstanding advantages over traditional transportation fuels. In order to facilitate its industrialization and improve process profitability, it is vital to construct highly accurate models capable of predicting the complex behavior of the investigated biosystem for process optimization and control, which forms the current research goal. Three original contributions are described in this paper. Firstly, a dynamic model is constructed to simulate the complicated effects of light intensity, nutrient supply and light attenuation on both biomass growth and biolipid production. Secondly, chlorophyll fluorescence, an instantly measurable variable and indicator of photosynthetic activity, is embedded into the model to monitor and update model accuracy, especially for the purpose of future optimal process control, and its correlation with intracellular nitrogen content is quantified, which to the best of our knowledge has never been addressed so far. Thirdly, a thorough experimental verification is conducted under different scenarios, including both continuous illumination and light/dark cycle conditions, to test the model's predictive capability, particularly for long-term operation; it is concluded that the current model is characterized by a high level of predictive capability. Based on the model, the optimal light intensity for algal biomass growth and lipid synthesis is estimated. This work therefore paves the way for future process design and real-time optimization. © 2017 Wiley Periodicals, Inc.
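A hedged sketch of a light- and nitrogen-dependent growth/lipid model solved with SciPy's ODE integrator; the Monod-type rate forms and all parameter values are illustrative assumptions, not the paper's calibrated model.

```python
import numpy as np
from scipy.integrate import solve_ivp

mu_max, K_I, K_N, Y, k_L = 0.08, 80.0, 10.0, 2.0, 0.01   # assumed parameters

def rhs(t, s, I0=150.0):
    X, N, L = s                                  # biomass, nitrogen, lipid
    I = I0 * np.exp(-0.05 * X)                   # light attenuation by the culture itself
    mu = mu_max * I / (K_I + I) * N / (K_N + N)  # Monod terms in light and nitrogen
    return [mu * X,                              # biomass growth
            -Y * mu * X,                         # nitrogen uptake
            k_L * X * K_N / (K_N + N)]           # lipid accumulates as N depletes

sol = solve_ivp(rhs, (0, 120), [0.1, 40.0, 0.0], max_step=0.5)
X_end, N_end, L_end = sol.y[:, -1]
print(round(X_end, 2), round(N_end, 2), round(L_end, 2))
```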
Prediction of biodiversity hotspots in the Anthropocene: The case of veteran oaks.
Skarpaas, Olav; Blumentrath, Stefan; Evju, Marianne; Sverdrup-Thygeson, Anne
2017-10-01
Over the past centuries, humans have transformed large parts of the biosphere, and there is a growing need to understand and predict the distribution of biodiversity hotspots influenced by the presence of humans. Our basic hypothesis is that human influence in the Anthropocene is ubiquitous, and we predict that biodiversity hotspot modeling can be improved by addressing three challenges raised by the increasing ecological influence of humans: (i) anthropogenically modified responses to individual ecological factors, (ii) fundamentally different processes and predictors in landscape types shaped by different land use histories and (iii) a multitude and complexity of natural and anthropogenic processes that may require many predictors and even multiple models in different landscape types. We modeled the occurrence of veteran oaks in Norway, and found, in accordance with our basic hypothesis and predictions, that humans influence the distribution of veteran oaks throughout its range, but in different ways in forests and open landscapes. In forests, geographical and topographic variables related to the oak niche are still important, but the occurrence of veteran oaks is shifted toward steeper slopes, where logging is difficult. In open landscapes, land cover variables are more important, and veteran oaks are more common toward the north than expected from the fundamental oak niche. In both landscape types, multiple predictor variables representing ecological and human-influenced processes were needed to build a good model, and several models performed almost equally well. Models accounting for the different anthropogenic influences on landscape structure and processes consistently performed better than models based exclusively on natural biogeographical and ecological predictors. Thus, our results for veteran oaks clearly illustrate the challenges to distribution modeling raised by the ubiquitous influence of humans, even in a moderately populated region, but also show that predictions can be improved by explicitly addressing these anthropogenic complexities.
Multivariate regression model for predicting lumber grade volumes of northern red oak sawlogs
Daniel A. Yaussy; Robert L. Brisbin
1983-01-01
A multivariate regression model was developed to predict green board-foot yields for the seven common factory lumber grades processed from northern red oak (Quercus rubra L.) factory grade logs. The model uses the standard log measurements of grade, scaling diameter, length, and percent defect. It was validated with an independent data set. The model...
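A minimal multi-output regression sketch in the spirit of the model described: predict the board-foot volume of each lumber grade from the standard log measurements; all data values are synthetic placeholders.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(12)
# columns: log grade (coded), scaling diameter (in), length (ft), percent defect
X = np.column_stack([rng.integers(1, 4, 80), rng.uniform(10, 24, 80),
                     rng.uniform(8, 16, 80), rng.uniform(0, 30, 80)])
Y = rng.normal(size=(80, 7)) + X[:, [1]] * 0.5   # synthetic volumes, 7 lumber grades

model = LinearRegression().fit(X, Y)             # one coefficient set per grade
print(model.predict([[2, 16.0, 12.0, 10.0]]).round(1))
```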
Safiuddin, Md.; Raman, Sudharshan N.; Abdus Salam, Md.; Jumaat, Mohd. Zamin
2016-01-01
Modeling is a very useful method for the performance prediction of concrete. Most of the models available in the literature are related to compressive strength because it is a major mechanical property used in concrete design. Many attempts have been made to develop suitable mathematical models for the prediction of compressive strength of different concretes, but not for self-consolidating high-strength concrete (SCHSC) containing palm oil fuel ash (POFA). The present study has used artificial neural networks (ANN) to predict the compressive strength of SCHSC incorporating POFA. The ANN model has been developed and validated in this research using the mix proportioning and experimental strength data of 20 different SCHSC mixes. Seventy percent (70%) of the data were used to carry out the training of the ANN model. The remaining 30% of the data were used for testing the model. The training of the ANN model was stopped when the root mean square error (RMSE) reached 0.001 and the percentage of good patterns reached ≈100%. The predicted compressive strength values obtained from the trained ANN model were much closer to the experimental values of compressive strength. The coefficient of determination (R2) for the relationship between the predicted and experimental compressive strengths was 0.9486, which shows the high degree of accuracy of the network. Furthermore, the predicted compressive strength was found very close to the experimental compressive strength during the testing process of the ANN model. The absolute and percentage relative errors in the testing process were significantly low, with mean values of 1.74 MPa and 3.13%, respectively, which indicated that the compressive strength of SCHSC including POFA can be efficiently predicted by the ANN. PMID:28773520
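A sketch of the strength model with scikit-learn's MLPRegressor standing in for the paper's network; the five mix-proportion features and the 70/30 split mirror the description, but the data below are synthetic.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_squared_error, r2_score

rng = np.random.default_rng(13)
X = rng.uniform(size=(200, 5))                 # assumed mix proportions (cement, water, POFA, ...)
y = 40 + 30 * X[:, 0] - 20 * X[:, 1] + 5 * X[:, 2] + rng.normal(0, 1.5, 200)  # strength (MPa)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, train_size=0.7, random_state=0)  # 70/30 split
ann = MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000, random_state=0).fit(X_tr, y_tr)

rmse = mean_squared_error(y_te, ann.predict(X_te)) ** 0.5
print(round(rmse, 2), round(r2_score(y_te, ann.predict(X_te)), 3))
```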
Robust functional regression model for marginal mean and subject-specific inferences.
Cao, Chunzheng; Shi, Jian Qing; Lee, Youngjo
2017-01-01
We introduce flexible robust functional regression models, using various heavy-tailed processes, including a Student t-process. We propose efficient algorithms in estimating parameters for the marginal mean inferences and in predicting conditional means as well as interpolation and extrapolation for the subject-specific inferences. We develop bootstrap prediction intervals (PIs) for conditional mean curves. Numerical studies show that the proposed model provides a robust approach against data contamination or distribution misspecification, and the proposed PIs maintain the nominal confidence levels. A real data application is presented as an illustrative example.
Development of an automated energy audit protocol for office buildings
NASA Astrophysics Data System (ADS)
Deb, Chirag
This study aims to enhance the building energy audit process and reduce the time and cost required to conduct a full physical audit. For this, a total of 5 Energy Service Companies in Singapore collaborated and provided energy audit reports for 62 office buildings. Several statistical techniques were adopted to analyse these reports, comprising cluster analysis and the development of models to predict energy savings for buildings. The cluster analysis shows that there are 3 clusters of buildings experiencing different levels of energy savings. To understand the effect of building variables on the change in EUI, a robust iterative process for selecting the appropriate variables was developed. The results show that the 4 variables of GFA, non-air-conditioning energy consumption, average chiller plant efficiency and installed capacity of chillers should be taken for clustering. This analysis was extended to the development of prediction models using linear regression and artificial neural networks (ANN). An exhaustive variable selection algorithm was developed to select the input variables for the two energy saving prediction models. The results show that the ANN prediction model can predict the energy saving potential of a given building with an accuracy of +/-14.8%.
Model coupling intraparticle diffusion/sorption, nonlinear sorption, and biodegradation processes
Karapanagioti, Hrissi K.; Gossard, Chris M.; Strevett, Keith A.; Kolar, Randall L.; Sabatini, David A.
2001-01-01
Diffusion, sorption and biodegradation are key processes impacting the efficiency of natural attenuation. While each process has been studied individually, limited information exists on the kinetic coupling of these processes. In this paper, a model is presented that couples nonlinear and nonequilibrium sorption (intraparticle diffusion) with biodegradation kinetics. Initially, these processes are studied independently (i.e., intraparticle diffusion, nonlinear sorption and biodegradation), with appropriate parameters determined from these independent studies. Then, the coupled processes are studied, with an initial data set used to determine biodegradation constants that were subsequently used to successfully predict the behavior of a second data set. The validated model is then used to conduct a sensitivity analysis, which reveals conditions where biodegradation becomes desorption rate-limited. If the chemical is not pre-equilibrated with the soil prior to the onset of biodegradation, then fast sorption will reduce aqueous concentrations and thus biodegradation rates. Another sensitivity analysis demonstrates the importance of including nonlinear sorption in a coupled diffusion/sorption and biodegradation model. While predictions based on linear sorption isotherms agree well with solution concentrations, for the conditions evaluated this approach overestimates the percentage of contaminant biodegraded by as much as 50%. This research demonstrates that nonlinear sorption should be coupled with diffusion/sorption and biodegradation models in order to accurately predict bioremediation and natural attenuation processes. To our knowledge this study is unique in studying nonlinear sorption coupled with intraparticle diffusion and biodegradation kinetics with natural media.
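A hedged sketch of the coupling idea: the aqueous concentration evolves under rate-limited mass transfer toward a nonlinear (Freundlich) sorption equilibrium, while Monod biodegradation consumes the aqueous phase; all rate and isotherm parameters are illustrative, not the paper's fitted values.

```python
import numpy as np
from scipy.integrate import solve_ivp

kf, nf = 2.0, 0.7            # Freundlich K_F and exponent (nonlinear isotherm)
alpha = 0.05                 # sorption mass-transfer rate (1/h)
mu, Ks, Yb = 0.2, 1.0, 0.5   # Monod max rate, half-saturation, biomass yield
rho_b, theta = 1.5, 0.4      # bulk density (kg/L), porosity

def rhs(t, u):
    C, S, B = u                              # aqueous conc., sorbed conc., biomass
    S_eq = kf * max(C, 0.0) ** nf            # nonlinear sorption equilibrium
    sorb = alpha * (S_eq - S)                # rate-limited sorption flux
    bio = mu * B * C / (Ks + C)              # Monod biodegradation (aqueous phase only)
    return [-(rho_b / theta) * sorb - bio, sorb, Yb * bio]

sol = solve_ivp(rhs, (0, 200), [5.0, 0.0, 0.05], max_step=1.0)
print(sol.y[:, -1].round(3))                 # final aqueous, sorbed, biomass
```

Slowing `alpha` in this toy shows the desorption-rate-limited regime described above: biodegradation stalls once the readily available aqueous mass is consumed.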
Machine learning for the New York City power grid.
Rudin, Cynthia; Waltz, David; Anderson, Roger N; Boulanger, Albert; Salleb-Aouissi, Ansaf; Chow, Maggie; Dutta, Haimonti; Gross, Philip N; Huang, Bert; Ierome, Steve; Isaac, Delfina F; Kressner, Arthur; Passonneau, Rebecca J; Radeva, Axinia; Wu, Leon
2012-02-01
Power companies can benefit from the use of knowledge discovery methods and statistical machine learning for preventive maintenance. We introduce a general process for transforming historical electrical grid data into models that aim to predict the risk of failures for components and systems. These models can be used directly by power companies to assist with prioritization of maintenance and repair work. Specialized versions of this process are used to produce 1) feeder failure rankings, 2) cable, joint, terminator, and transformer rankings, 3) feeder Mean Time Between Failure (MTBF) estimates, and 4) manhole events vulnerability rankings. The process in its most general form can handle diverse, noisy sources that are historical (static), semi-real-time, or real-time, incorporates state-of-the-art machine learning algorithms for prioritization (supervised ranking or MTBF), and includes an evaluation of results via cross-validation and blind test. Above and beyond the ranked lists and MTBF estimates are business management interfaces that allow the prediction capability to be integrated directly into corporate planning and decision support; such interfaces rely on several important properties of our general modeling approach: that machine learning features are meaningful to domain experts, that the processing of data is transparent, and that prediction results are accurate enough to support sound decision making. We discuss the challenges in working with historical electrical grid data that were not designed for predictive purposes. The “rawness” of these data contrasts with the accuracy of the statistical models that can be obtained from the process; these models are sufficiently accurate to assist in maintaining New York City’s electrical grid.
Hydrologic modeling strategy for the Islamic Republic of Mauritania, Africa
Friedel, Michael J.
2008-01-01
The government of Mauritania is interested in how to maintain hydrologic balance to ensure a long-term stable water supply for minerals-related, domestic, and other purposes. Because of the many complicating and competing natural and anthropogenic factors, hydrologists will perform quantitative analysis with specific objectives and relevant computer models in mind. Whereas various computer models are available for studying water-resource priorities, the success of these models in providing reliable predictions depends largely on the adequacy of the model-calibration process. Predictive analysis helps evaluate the accuracy and uncertainty associated with the simulated dependent variables of a calibrated model. In this report, the hydrologic modeling process is reviewed and a strategy is summarized for future Mauritanian hydrologic modeling studies.
NASA Astrophysics Data System (ADS)
Nourani, Vahid; Andalib, Gholamreza; Dąbrowska, Dominika
2017-05-01
Accurate nitrate load predictions can improve decision-making for watershed water quality management, which affects both the environment and drinking water. In this paper, two scenarios were considered for Multi-Station (MS) nitrate load modeling of the Little River watershed. In the first scenario, Markovian characteristics of streamflow-nitrate time series were proposed for the MS modeling. For this purpose, the feature extraction criterion of Mutual Information (MI) was employed for input selection of artificial intelligence models (Feed Forward Neural Network, FFNN, and least square support vector machine). In the second scenario, to consider seasonality-based characteristics of the time series, the wavelet transform was used to extract multi-scale features of the streamflow-nitrate time series of the watershed's sub-basins to model MS nitrate loads. The Self-Organizing Map (SOM) clustering technique, which finds homogeneous sub-series clusters, was also linked to MI to choose a representative agent from each cluster as input to the models for predicting the nitrate loads of the watershed's sub-basins. The proposed MS method not only considers the prediction of the outlet nitrate but also covers predictions of interior sub-basin nitrate load values. The results indicated that the proposed FFNN model coupled with SOM-MI improved the performance of MS nitrate predictions compared to the Markovian-based models by up to 39%. Overall, accurate selection of dominant inputs that consider the seasonality-based characteristics of the streamflow-nitrate process could enhance the efficiency of nitrate load predictions.
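A minimal sketch of MI-based input selection, the feature extraction criterion named above, using synthetic candidate inputs (the data and lag interpretation are hypothetical):

```python
import numpy as np
from sklearn.feature_selection import mutual_info_regression

rng = np.random.default_rng(1)
n = 300
# Synthetic candidate inputs standing in for lagged streamflow/nitrate series
candidates = rng.normal(size=(n, 5))                 # e.g., Q(t-1) ... Q(t-5)
target = 0.8 * candidates[:, 0] + 0.3 * candidates[:, 2] \
         + 0.1 * rng.normal(size=n)

mi = mutual_info_regression(candidates, target, random_state=0)
selected = np.argsort(-mi)[:2]          # keep the most informative inputs
print("MI scores:", mi.round(3), "-> selected input columns:", selected)
```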
Manne, Sharon; Ostroff, Jamie; Fox, Kevin; Grana, Generosa; Winkel, Gary
2009-01-01
Introduction The diagnosis and subsequent treatment for early stage breast cancer is stressful for partners. Little is known about the role of cognitive and social processes predicting the longitudinal course of partners’ psychosocial adaptation. This study evaluated the role of cognitive and social processing in partner psychological adaptation to early stage breast cancer, evaluating both main and moderator effect models. Moderating effects for meaning-making, acceptance, and positive reappraisal on the predictive association of searching for meaning, emotional processing, and emotional expression on partner psychological distress were examined. Materials and Methods Partners of women diagnosed with early stage breast cancer were evaluated shortly after the ill partner’s diagnosis (n = 253), nine (n = 167), and 18 months (n = 149) later. Partners completed measures of emotional expression, emotional processing, acceptance, meaning-making, and general and cancer-specific distress at all time points. Results Lower satisfaction with partner support predicted greater global distress, and greater use of positive reappraisal was associated with greater distress. The predicted moderating effect of found meaning on the association between the search for meaning and cancer-specific distress was observed, and similar moderating effects were found for positive reappraisal on the association between emotional expression and global distress and for acceptance on the association between emotional processing and cancer-specific distress. Conclusions Results indicate several cognitive-social processes directly predict partner distress. However, moderator effect models, in which the effects of partners’ processing depend upon whether these efforts result in changes in perceptions of the cancer experience, may add to the understanding of partners’ adaptation to cancer. PMID:18435865
Recurrent connectivity can account for the dynamics of disparity processing in V1
Samonds, Jason M.; Potetz, Brian R.; Tyler, Christopher W.; Lee, Tai Sing
2013-01-01
Disparity tuning measured in the primary visual cortex (V1) is described well by the disparity energy model, but not all aspects of disparity tuning are fully explained by the model. Such deviations from the disparity energy model provide us with insight into how network interactions may play a role in disparity processing and help to solve the stereo correspondence problem. Here, we propose a neuronal circuit model with recurrent connections that provides a simple account of the observed deviations. The model is based on recurrent connections inferred from neurophysiological observations on spike timing correlations, and is in good accord with existing data on disparity tuning dynamics. We further performed two additional experiments to test predictions of the model. First, we increased the size of stimuli to drive more neurons and provide a stronger recurrent input. Our model predicted sharper disparity tuning for larger stimuli. Second, we displayed anti-correlated stereograms, where dots of opposite luminance polarity are matched between the left- and right-eye images and result in inverted disparity tuning in the disparity energy model. In this case, our model predicted reduced sharpening and strength of inverted disparity tuning. For both experiments, the dynamics of disparity tuning observed from the neurophysiological recordings in macaque V1 matched model simulation predictions. Overall, the results of this study support the notion that, while the disparity energy model provides a primary account of disparity tuning in V1 neurons, neural disparity processing in V1 neurons is refined by recurrent interactions among elements in the neural circuit. PMID:23407952
Drift-Scale Coupled Processes (DST and THC Seepage) Models
DOE Office of Scientific and Technical Information (OSTI.GOV)
E. Sonnenthal; N. Spycher
The purpose of this Analysis/Model Report (AMR) is to document the Near-Field Environment (NFE) and Unsaturated Zone (UZ) models used to evaluate the potential effects of coupled thermal-hydrologic-chemical (THC) processes on unsaturated zone flow and transport. This is in accordance with the ''Technical Work Plan (TWP) for Unsaturated Zone Flow and Transport Process Model Report'', Addendum D, Attachment D-4 (Civilian Radioactive Waste Management System (CRWMS) Management and Operating Contractor (M and O) 2000 [153447]) and ''Technical Work Plan for Nearfield Environment Thermal Analyses and Testing'' (CRWMS M and O 2000 [153309]). These models include the Drift Scale Test (DST) THC Model and several THC seepage models. These models provide the framework to evaluate THC coupled processes at the drift scale, predict flow and transport behavior for specified thermal loading conditions, and predict the chemistry of waters and gases entering potential waste-emplacement drifts. The intended use of this AMR is to provide input for the following: (1) Performance Assessment (PA); (2) Abstraction of Drift-Scale Coupled Processes AMR (ANL-NBS-HS-000029); (3) UZ Flow and Transport Process Model Report (PMR); and (4) Near-Field Environment (NFE) PMR. The work scope for this activity is presented in the TWPs cited above, and summarized as follows: continue development of the repository drift-scale THC seepage model used in support of the TSPA in-drift geochemical model; incorporate heterogeneous fracture property realizations; study sensitivity of results to changes in input data and mineral assemblage; validate the DST model by comparison with field data; perform simulations to predict mineral dissolution and precipitation and their effects on fracture properties and chemistry of water (but not flow rates) that may seep into drifts; submit modeling results to the TDMS and document the models. The model development, input data, sensitivity and validation studies described in this AMR are required to fully document and address the requirements of the TWPs.
Rodriguez, Christina M; Smith, Tamika L; Silvia, Paul J
2016-01-01
The Social Information Processing (SIP) model postulates that parents undergo a series of stages in implementing physical discipline that can escalate into physical child abuse. The current study utilized a multimethod approach to investigate whether SIP factors can predict risk of parent-child aggression (PCA) in a diverse sample of expectant mothers and fathers. SIP factors of PCA attitudes, negative child attributions, reactivity, and empathy were considered as potential predictors of PCA risk; additionally, analyses considered whether personal history of PCA predicted participants' own PCA risk through its influence on their attitudes and attributions. Findings indicate that, for both mothers and fathers, history influenced attitudes but not attributions in predicting PCA risk, and attitudes and attributions predicted PCA risk; empathy and reactivity predicted negative child attributions for expectant mothers, but only reactivity significantly predicted attributions for expectant fathers. Path models for expectant mothers and fathers were remarkably similar. Overall, the findings provide support for major aspects of the SIP model. Continued work is needed in studying the progression of these factors across time for both mothers and fathers as well as the inclusion of other relevant ecological factors to the SIP model. Copyright © 2015 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Caldararu, S.; Kern, M.; Engel, J.; Zaehle, S.
2016-12-01
Despite recent advances in global vegetation models, we still lack the capacity to predict observed vegetation responses to experimental environmental changes such as elevated CO2, increased temperature or nutrient additions. In particular for elevated CO2 (FACE) experiments, studies have shown that this is related in part to the models' inability to represent plastic changes in nutrient use and biomass allocation. We present a newly developed vegetation model which aims to overcome these problems by including optimality processes to describe nitrogen (N) and carbon allocation within the plant. We represent nitrogen allocation to the canopy, and within the canopy between photosynthetic components, as an optimal process which aims to maximize net primary production (NPP) of the plant. We also represent biomass investment into aboveground and belowground components (root nitrogen uptake, biological N fixation) as an optimal process that maximizes plant growth by considering plant carbon and nutrient demands as well as acquisition costs. The model can now represent plastic changes in canopy N content and chlorophyll and Rubisco concentrations, as well as in belowground allocation, both on seasonal and inter-annual time scales. Specifically, we show that under elevated CO2 conditions, the model predicts a lower optimal leaf N concentration, which, combined with a redistribution of leaf N between the Rubisco and chlorophyll components, leads to a continued NPP response under high CO2, where models with a fixed canopy stoichiometry would predict a quick onset of N limitation. In general, our model aims to include physiologically-based plant processes and avoid arbitrarily imposed parameters and thresholds in order to improve our predictive capability of vegetation responses under changing environmental conditions.
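The optimality idea can be sketched with a toy objective in which photosynthesis saturates in leaf N while costs rise linearly; the functional forms and parameters are illustrative assumptions, not the model's equations, but they reproduce the qualitative result of a lower optimal leaf N under elevated CO2:

```python
from scipy.optimize import minimize_scalar

# Hypothetical response forms (illustrative only, not the model's equations)
def npp(leaf_N, co2_factor=1.0):
    # At elevated CO2, Rubisco fixes more carbon per unit N, so the N
    # half-saturation of photosynthesis drops in this toy formulation.
    K_n = 1.5 / co2_factor
    gross = 30.0 * leaf_N / (leaf_N + K_n)   # saturating in leaf N
    cost = 4.0 * leaf_N                      # respiration + acquisition cost
    return gross - cost

for co2 in (1.0, 1.4):                       # ambient vs elevated CO2
    res = minimize_scalar(lambda N: -npp(N, co2), bounds=(0.01, 5.0),
                          method="bounded")
    print(f"CO2 factor {co2}: optimal leaf N = {res.x:.2f}, "
          f"NPP = {-res.fun:.1f}")
```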
USDA-ARS?s Scientific Manuscript database
Streambank retreat is a complex cyclical process involving subaerial processes, fluvial erosion, seepage erosion, and geotechnical failures and is driven by several soil properties that themselves are temporally and spatially variable. Therefore, it can be extremely challenging to predict and model ...
Process-based models are required to manage ecological systems in a changing world
K. Cuddington; M.-J. Fortin; L.R. Gerber; A. Hastings; A. Liebhold; M. O'Connor; C. Ray
2013-01-01
Several modeling approaches can be used to guide management decisions. However, some approaches are better fitted than others to address the problem of prediction under global change. Process-based models, which are based on a theoretical understanding of relevant ecological processes, provide a useful framework to incorporate specific responses to altered...
Automated image processing of LANDSAT 2 digital data for watershed runoff prediction
NASA Technical Reports Server (NTRS)
Sasso, R. R.; Jensen, J. R.; Estes, J. E.
1977-01-01
The U.S. Soil Conservation Service (SCS) model for watershed runoff prediction uses soil and land cover information as its major drivers. Kern County Water Agency (KCWA) is implementing the SCS model to predict runoff for 10,400 sq km of mountainous watershed in Kern County, California. The Remote Sensing Unit, University of California, Santa Barbara, was commissioned by KCWA to conduct a 230 sq km feasibility study in the Lake Isabella, California region to evaluate remote sensing methodologies which could ultimately be extrapolated to the entire 10,400 sq km Kern County watershed. Results indicate that digital image processing of Landsat 2 data will provide usable land cover required by KCWA for input to the SCS runoff model.
NASA Astrophysics Data System (ADS)
Combeau, Hervé; Založnik, Miha; Bedel, Marie
2016-08-01
Prediction of solidification defects, such as macrosegregation and inhomogeneous microstructures, constitutes a key issue for industry. The development of models of casting processes needs to account for several imbricated length scales and different physical phenomena. For example, the kinetics of the growth of microstructures needs to be coupled with the multiphase flow at the process scale. We introduce such a state-of-the-art model and outline its principles. We present the most recent applications of the model to casting of a heavy steel ingot and to direct chill casting of a large Al alloy sheet ingot. Their ability to help in the understanding of complex phenomena, such as the competition between nucleation and growth of grains in the presence of convection of the liquid and of grain motion is shown, and its predictive capabilities are discussed. Key issues for future developments and research are addressed.
Chira, Camelia; Horvath, Dragos; Dumitrescu, D
2011-07-30
Proteins are complex structures made of amino acids that play a fundamental role in the correct functioning of living cells. The structure of a protein is the result of the protein folding process. However, the general principles that govern the folding of natural proteins into a native structure are unknown. The problem of predicting a minimum-energy protein structure starting from the unfolded amino acid sequence is a highly complex and important task in molecular and computational biology. Protein structure prediction has important applications in fields such as drug design and disease prediction. The protein structure prediction problem is NP-hard even in simplified lattice protein models. An evolutionary model based on hill-climbing genetic operators is proposed for protein structure prediction in the hydrophobic-polar (HP) model. Problem-specific search operators are implemented and applied using a steepest-ascent hill-climbing approach. Furthermore, the proposed model enforces an explicit diversification stage during the evolution in order to avoid local optima. The main features of the resulting evolutionary algorithm - the hill-climbing mechanism and the diversification strategy - are evaluated in a set of numerical experiments for the protein structure prediction problem to assess their impact on the efficiency of the search process. Furthermore, the emerging consolidated model is compared to relevant algorithms from the literature on a set of difficult bidimensional instances from lattice protein models. The results obtained by the proposed algorithm are promising and competitive with those of related methods.
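A stochastic hill-climbing sketch of the 2D HP lattice model follows (the paper's algorithm uses steepest-ascent operators and an explicit diversification stage within an evolutionary framework; the example sequence and the simple search loop below are illustrative simplifications):

```python
import random

SEQ = "HPHPPHHPHPPHPHHPPHPH"   # example 20-residue H/P sequence
MOVES = {"U": (0, 1), "D": (0, -1), "L": (-1, 0), "R": (1, 0)}

def coords(conf):
    """Fold a move string into lattice points; None if self-intersecting."""
    pos, seen = (0, 0), [(0, 0)]
    for m in conf:
        dx, dy = MOVES[m]
        pos = (pos[0] + dx, pos[1] + dy)
        if pos in seen:
            return None
        seen.append(pos)
    return seen

def energy(conf):
    """Energy = -(number of non-bonded H-H lattice contacts)."""
    pts = coords(conf)
    if pts is None:
        return float("inf")           # reject invalid conformations
    index = {p: i for i, p in enumerate(pts)}
    e = 0
    for i, p in enumerate(pts):
        if SEQ[i] != "H":
            continue
        for dx, dy in MOVES.values():
            j = index.get((p[0] + dx, p[1] + dy))
            if j is not None and j > i + 1 and SEQ[j] == "H":
                e -= 1                # count each H-H contact once
    return e

random.seed(0)
conf = list("R" * (len(SEQ) - 1))     # fully extended starting conformation
best = energy(conf)
for _ in range(20000):
    i = random.randrange(len(conf))
    old = conf[i]
    conf[i] = random.choice("UDLR")   # mutate one move
    e = energy(conf)
    if e <= best:                     # accept downhill and sideways moves
        best = e
    else:
        conf[i] = old                 # revert uphill moves
print("best energy found:", best)
```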
Modeling of heat transfer in compacted machining chips during friction consolidation process
NASA Astrophysics Data System (ADS)
Abbas, Naseer; Deng, Xiaomin; Li, Xiao; Reynolds, Anthony
2018-04-01
The current study aims to provide an understanding of the heat transfer process in compacted aluminum alloy AA6061 machining chips during the friction consolidation process (FCP) through experimental investigation, mathematical modelling and numerical simulation. Compaction and friction consolidation of machining chips is the first stage of the Friction Extrusion Process (FEP), a novel method for recycling machining chips into useful products such as wires. In this study, compacted machining chips are modelled as a continuum whose material properties vary with density during friction consolidation. Based on density- and temperature-dependent thermal properties, the temperature field in the chip material and process chamber caused by frictional heating during the friction consolidation process is predicted. The predicted temperature field is found to compare well with temperature measurements at select points where such measurements can be made using thermocouples.
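A rough one-dimensional finite-difference sketch of conduction through a consolidating chip column; the property laws, boundary conditions and all numbers are assumed placeholders rather than the study's model:

```python
import numpy as np

# Illustrative 1-D sketch (not the study's model): heat conduction through a
# chip column whose effective properties stiffen as the chips consolidate.
L, nx, dt, t_end = 0.05, 51, 0.005, 30.0     # m, nodes, s, s
x = np.linspace(0.0, L, nx)
dx = x[1] - x[0]
T = np.full(nx, 25.0)                        # initial temperature (C)
rho_cp = 2700.0 * 900.0                      # J/m^3/K, solid AA6061 (rough)
rel_density = 0.6                            # packed chips start ~60% dense

def k_eff(T, rel_density):
    # Effective conductivity rises with density and temperature (assumed form)
    return rel_density * (120.0 + 0.05 * T)  # W/m/K

t = 0.0
while t < t_end:
    rel_density = min(1.0, rel_density + 1e-4)   # consolidation proceeds
    k = k_eff(T, rel_density)
    T[0] = 400.0                             # frictional heating at tool face
    k_half = 0.5 * (k[1:] + k[:-1])          # conductivity at cell faces
    q = -k_half * np.diff(T) / dx            # heat flux between nodes
    T[1:-1] -= dt * np.diff(q) / (dx * rho_cp * rel_density)
    t += dt

print("temperature profile (C):", T[::10].round(1))
```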
D'Andrade, Roy G; Romney, A Kimball
2003-05-13
This article presents a computational model of the process through which the human visual system transforms reflectance spectra into perceptions of color. Using physical reflectance spectra data and standard human cone sensitivity functions we describe the transformations necessary for predicting the location of colors in the Munsell color space. These transformations include quantitative estimates of the opponent process weights needed to transform cone activations into Munsell color space coordinates. Using these opponent process weights, the Munsell position of specific colors can be predicted from their physical spectra with a mean correlation of 0.989.
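The pipeline can be sketched as two linear stages, cone integration followed by an opponent transform; the sensitivity curves, test spectrum and weight matrix below are hypothetical stand-ins, not the fitted values reported in the article:

```python
import numpy as np

# Hypothetical stand-ins: cone sensitivities as Gaussians, a fake surface
# spectrum, and a made-up opponent weight matrix (not the fitted values).
wl = np.arange(400, 701, 10)                         # wavelength (nm)
reflectance = 0.4 + 0.3 * np.sin((wl - 400) / 60.0)  # example spectrum

def gauss(mu, sigma):
    return np.exp(-0.5 * ((wl - mu) / sigma) ** 2)

cones = np.vstack([gauss(565, 40), gauss(540, 35), gauss(445, 25)])  # L, M, S
activation = cones @ reflectance            # integrate sensitivity * spectrum

W = np.array([[0.6,  0.4,  0.0],    # lightness-like channel
              [1.0, -1.0,  0.0],    # red-green opponent channel
              [0.5,  0.5, -1.0]])   # yellow-blue opponent channel
print("opponent-space coordinates:", (W @ activation).round(2))
```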
NASA Astrophysics Data System (ADS)
Lombardozzi, D.; Levis, S.; Bonan, G.; Sparks, J. P.
2012-08-01
Plants exchange the greenhouse gases carbon dioxide and water vapor with the atmosphere through the processes of photosynthesis and transpiration, making them essential in climate regulation. Carbon dioxide and water exchange are typically coupled through the control of stomatal conductance, and the parameterizations in many models predict conductance based on photosynthesis values. Some environmental conditions, like exposure to high ozone (O3) concentrations, alter photosynthesis independent of stomatal conductance, so models that couple these processes cannot accurately predict both. The goals of this study were to test direct and indirect photosynthesis and stomatal conductance modifications based on O3 damage to tulip poplar (Liriodendron tulipifera) in a coupled Farquhar/Ball-Berry model. The same modifications were then tested in the Community Land Model (CLM) to determine the impacts on gross primary productivity (GPP) and transpiration at a constant O3 concentration of 100 parts per billion (ppb). Modifying the Vcmax parameter and directly modifying stomatal conductance best predicts photosynthesis and stomatal conductance responses to chronic O3 over a range of environmental conditions. On a global scale, directly modifying conductance reduces the effect of O3 on both transpiration and GPP compared to indirectly modifying conductance, particularly in the tropics. The results of this study suggest that independently modifying stomatal conductance can improve the ability of models to predict hydrologic cycling, and therefore improve future climate predictions.
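The distinction between indirect and direct modification can be sketched with the standard Ball-Berry relation, gs = m*A*h/cs + b; the parameter values and damage factors below are illustrative, not the fitted tulip-poplar values:

```python
# Illustrative parameters and damage factors, not fitted tulip-poplar values
def ball_berry(A, rh=0.7, cs=400.0, m=9.0, b=0.01):
    """Ball-Berry: conductance slaved to net assimilation A."""
    return m * A * rh / cs + b        # stomatal conductance (mol m-2 s-1)

A_healthy = 12.0                      # net assimilation (umol m-2 s-1)

# Indirect scheme: O3 lowers photosynthesis (e.g., via Vcmax) and the
# conductance reduction follows automatically through the coupling.
A_o3 = 0.8 * A_healthy
gs_indirect = ball_berry(A_o3)

# Direct scheme: a separate O3 factor is applied to conductance itself,
# decoupling the two responses.
gs_direct = 0.9 * ball_berry(A_o3)

print(f"indirect gs = {gs_indirect:.3f}, direct gs = {gs_direct:.3f}")
```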
Graham, Emily B.; Knelman, Joseph E.; Schindlbacher, Andreas; ...
2016-02-24
Microorganisms are vital in mediating the earth’s biogeochemical cycles; yet, despite our rapidly increasing ability to explore complex environmental microbial communities, the relationship between microbial community structure and ecosystem processes remains poorly understood. Here, we address a fundamental and unanswered question in microbial ecology: ‘When do we need to understand microbial community structure to accurately predict function?’ We present a statistical analysis investigating the value of environmental data and microbial community structure independently and in combination for explaining rates of carbon and nitrogen cycling processes within 82 global datasets. Environmental variables were the strongest predictors of process rates but left 44% of variation unexplained on average, suggesting the potential for microbial data to increase model accuracy. Although only 29% of our datasets were significantly improved by adding information on microbial community structure, we observed improvement in models of processes mediated by narrow phylogenetic guilds via functional gene data, and conversely, improvement in models of facultative microbial processes via community diversity metrics. Our results also suggest that microbial diversity can strengthen predictions of respiration rates beyond microbial biomass parameters, as 53% of models were improved by incorporating both sets of predictors compared to 35% by microbial biomass alone. Our analysis represents the first comprehensive analysis of research examining links between microbial community structure and ecosystem function. Taken together, our results indicate that a greater understanding of microbial communities informed by ecological principles may enhance our ability to predict ecosystem process rates relative to assessments based on environmental variables and microbial physiology.
Dinov, Ivo D; Heavner, Ben; Tang, Ming; Glusman, Gustavo; Chard, Kyle; Darcy, Mike; Madduri, Ravi; Pa, Judy; Spino, Cathie; Kesselman, Carl; Foster, Ian; Deutsch, Eric W; Price, Nathan D; Van Horn, John D; Ames, Joseph; Clark, Kristi; Hood, Leroy; Hampstead, Benjamin M; Dauer, William; Toga, Arthur W
2016-01-01
A unique archive of Big Data on Parkinson's Disease is collected, managed and disseminated by the Parkinson's Progression Markers Initiative (PPMI). The integration of such complex and heterogeneous Big Data from multiple sources offers unparalleled opportunities to study the early stages of prevalent neurodegenerative processes, track their progression and quickly identify the efficacies of alternative treatments. Many previous human and animal studies have examined the relationship of Parkinson's disease (PD) risk to trauma, genetics, environment, co-morbidities, or life style. The defining characteristics of Big Data (large size, incongruency, incompleteness, complexity, multiplicity of scales, and heterogeneity of information-generating sources) all pose challenges to the classical techniques for data management, processing, visualization and interpretation. We propose, implement, test and validate complementary model-based and model-free approaches for PD classification and prediction. To explore PD risk using Big Data methodology, we jointly processed complex PPMI imaging, genetics, clinical and demographic data. Collective representation of the multi-source data facilitates the aggregation and harmonization of complex data elements. This enables joint modeling of the complete data, leading to the development of Big Data analytics, predictive synthesis, and statistical validation. Using heterogeneous PPMI data, we developed a comprehensive protocol for end-to-end data characterization, manipulation, processing, cleaning, analysis and validation. Specifically, we (i) introduce methods for rebalancing imbalanced cohorts, (ii) utilize a wide spectrum of classification methods to generate consistent and powerful phenotypic predictions, and (iii) generate reproducible machine-learning based classification that enables the reporting of model parameters and diagnostic forecasting based on new data. We evaluated several complementary model-based predictive approaches, which failed to generate accurate and reliable diagnostic predictions. However, the results of several machine-learning based classification methods indicated significant power to predict Parkinson's disease in the PPMI subjects (consistent accuracy, sensitivity, and specificity exceeding 96%, confirmed using statistical n-fold cross-validation). Clinical (e.g., Unified Parkinson's Disease Rating Scale (UPDRS) scores), demographic (e.g., age), genetics (e.g., rs34637584, chr12), and derived neuroimaging biomarker (e.g., cerebellum shape index) data all contributed to the predictive analytics and diagnostic forecasting. Model-free Big Data machine learning-based classification methods (e.g., adaptive boosting, support vector machines) can outperform model-based techniques in terms of predictive precision and reliability (e.g., forecasting patient diagnosis). We observed that statistical rebalancing of cohort sizes yields better discrimination of group differences, specifically for predictive analytics based on heterogeneous and incomplete PPMI data. UPDRS scores play a critical role in predicting diagnosis, which is expected based on the clinical definition of Parkinson's disease. Even without longitudinal UPDRS data, however, the accuracy of model-free machine learning based classification is over 80%.
The methods, software and protocols developed here are openly shared and can be employed to study other neurodegenerative disorders (e.g., Alzheimer's, Huntington's, amyotrophic lateral sclerosis), as well as for other predictive Big Data analytics applications.
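A compact sketch of the rebalancing-plus-classification workflow using synthetic data (cohort sizes, features and model settings are placeholders; note that resampling before cross-validation, done here for brevity, can leak information in practice):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.utils import resample

# Synthetic imbalanced cohorts standing in for patient/control groups
X, y = make_classification(n_samples=600, weights=[0.85], random_state=0)

# Rebalance by upsampling the minority class (one simple strategy)
minority = np.flatnonzero(y == 1)
boost = resample(minority, n_samples=(y == 0).sum() - minority.size,
                 random_state=0)
Xb, yb = np.vstack([X, X[boost]]), np.concatenate([y, y[boost]])

clf = AdaBoostClassifier(random_state=0)
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
print("balanced-cohort CV accuracy:",
      cross_val_score(clf, Xb, yb, cv=cv).mean().round(3))
```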
Integrating WEPP into the WEPS infrastructure
USDA-ARS?s Scientific Manuscript database
The Wind Erosion Prediction System (WEPS) and the Water Erosion Prediction Project (WEPP) share a common modeling philosophy, that of moving away from primarily empirically based models based on indices or "average conditions", and toward a more process based approach which can be evaluated using ac...
Kell, Alexander J E; Yamins, Daniel L K; Shook, Erica N; Norman-Haignere, Sam V; McDermott, Josh H
2018-05-02
A core goal of auditory neuroscience is to build quantitative models that predict cortical responses to natural sounds. Reasoning that a complete model of auditory cortex must solve ecologically relevant tasks, we optimized hierarchical neural networks for speech and music recognition. The best-performing network contained separate music and speech pathways following early shared processing, potentially replicating human cortical organization. The network performed both tasks as well as humans and exhibited human-like errors despite not being optimized to do so, suggesting common constraints on network and human performance. The network predicted fMRI voxel responses substantially better than traditional spectrotemporal filter models throughout auditory cortex. It also provided a quantitative signature of cortical representational hierarchy: primary and non-primary responses were best predicted by intermediate and late network layers, respectively. The results suggest that task optimization provides a powerful set of tools for modeling sensory systems. Copyright © 2018 Elsevier Inc. All rights reserved.
Chen, Yingyi; Yu, Huihui; Cheng, Yanjun; Cheng, Qianqian; Li, Daoliang
2018-01-01
A precise predictive model is important for obtaining a clear understanding of the changes in dissolved oxygen content in crab ponds. Highly accurate interval forecasting of dissolved oxygen content is fundamental to reduce risk, and three-dimensional prediction can provide more accurate results and overall guidance. In this study, a hybrid three-dimensional (3D) dissolved oxygen content prediction model based on a radial basis function (RBF) neural network, K-means and subtractive clustering was developed and named the subtractive clustering (SC)-K-means-RBF model. In this modeling process, K-means and subtractive clustering methods were employed to enhance the hyperparameters required in the RBF neural network model. The comparison of the predicted results of different traditional models validated the effectiveness and accuracy of the proposed hybrid SC-K-means-RBF model for three-dimensional prediction of dissolved oxygen content. Consequently, the proposed model can effectively display the three-dimensional distribution of dissolved oxygen content and serve as a guide for feeding and future studies.
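A minimal sketch of an RBF network with K-means-placed centers (the paper additionally uses subtractive clustering to set hyperparameters); the data and all settings below are synthetic placeholders:

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Synthetic stand-in for (temperature, pH, time) -> dissolved oxygen data
X = rng.uniform(size=(400, 3))
y = np.sin(3 * X[:, 0]) + 0.5 * X[:, 1] + 0.1 * rng.normal(size=400)

# 1) place RBF centers with K-means
km = KMeans(n_clusters=12, n_init=10, random_state=0).fit(X)
centers = km.cluster_centers_
sigma = np.mean(np.linalg.norm(X - centers[km.labels_], axis=1)) + 1e-9

def design(X):
    # Gaussian basis activations for every (sample, center) pair
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma**2))

# 2) solve the linear output weights by least squares
w, *_ = np.linalg.lstsq(design(X), y, rcond=None)
pred = design(X) @ w
print("train RMSE:", np.sqrt(np.mean((pred - y) ** 2)).round(4))
```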
Chouinard, Maud-Christine; Robichaud-Ekstrand, Sylvie
2007-02-01
Several authors have questioned the transtheoretical model. Determining the predictive value of each cognitive-behavioural element within this model could explain the multiple successes reported in smoking cessation programmes. The purpose of this study was to predict point-prevalent smoking abstinence at 2 and 6 months, using the constructs of the transtheoretical model, when applied to a pooled sample of individuals who were hospitalized for a cardiovascular event. The study follows a predictive correlational design. Recently hospitalized patients (n=168) with cardiovascular disease were pooled from a randomized, controlled trial. Independent variables of the predictive transtheoretical model comprise stages and processes of change, pros and cons to quit smoking (decisional balance), self-efficacy, and social support. These were evaluated at baseline, 2 and 6 months. Compared to smokers, individuals who abstained from smoking at 2 and 6 months were more confident at baseline that they would remain non-smokers, perceived fewer pros and cons to continue smoking, made less use of the consciousness-raising and self-re-evaluation experiential processes of change, and received more positive reinforcement from their social network with regard to their smoke-free behaviour. Self-efficacy and stages of change at baseline were predictive of smoking abstinence after 6 months. Other variables found to be predictive of smoking abstinence at 6 months were an increase in self-efficacy, an increase in positive social support behaviour, and a decrease in the pros within the decisional balance. The results partially support the predictive value of the transtheoretical model constructs in smoking cessation for cardiovascular disease patients.
Li, Xiaoqing; Zhang, Yuping; Xia, Jinyan; Swaab, Tamara Y
2017-07-28
Although numerous studies have demonstrated that the language processing system can predict upcoming content during comprehension, there is still no clear picture of the anticipatory stage of predictive processing. This electroencephalography study examined the cognitive and neural oscillatory mechanisms underlying anticipatory processing during language comprehension, and the consequences of this prediction for bottom-up processing of predicted/unpredicted content. Participants read Mandarin Chinese sentences that were either strongly or weakly constraining and that contained critical nouns that were congruent or incongruent with the sentence contexts. We examined the effects of semantic predictability on anticipatory processing prior to the onset of the critical nouns and on integration of the critical nouns. The results revealed that, at the integration stage, the strong-constraint condition (compared to the weak-constraint condition) elicited a reduced N400 and reduced theta activity (4-7 Hz) for the congruent nouns, but induced beta (13-18 Hz) and theta (4-7 Hz) power decreases for the incongruent nouns, indicating benefits of confirmed predictions and potential costs of disconfirmed predictions. More importantly, at the anticipatory stage, the strongly constraining context elicited an enhanced sustained anterior negativity and a beta power decrease (19-25 Hz), which indicates that strong prediction places a higher processing load on the anticipatory stage of processing. The differences (in the ease of processing and the underlying neural oscillatory activities) between the anticipatory and integration stages of lexical processing are discussed with regard to predictive processing models. Copyright © 2017 Elsevier Ltd. All rights reserved.
Development of a multi-ensemble Prediction Model for China
NASA Astrophysics Data System (ADS)
Brasseur, G. P.; Bouarar, I.; Petersen, A. K.
2016-12-01
As part of the EU-sponsored Panda and MarcoPolo Projects, a multi-model prediction system including 7 models has been developed. Most regional models use global air quality predictions provided by the Copernicus Atmospheric Monitoring Service and downscale the forecast at relatively high spatial resolution in eastern China. The paper will describe the forecast system and show examples of forecasts produced for several Chinese urban areas and displayed on a web site developed by the Dutch Meteorological service. A discussion on the accuracy of the predictions based on a detailed validation process using surface measurements from the Chinese monitoring network will be presented.
NASA Astrophysics Data System (ADS)
Hartmann, A. J.; Ireson, A. M.
2017-12-01
Chalk aquifers represent an important source of drinking water in the UK. Due to their fractured-porous structure, Chalk aquifers are characterized by highly dynamic groundwater fluctuations that enhance the risk of groundwater flooding. The risk of groundwater flooding can be assessed by physically-based groundwater models, but reliable results require a priori information about the distribution of hydraulic conductivities and porosities, which is often not available. For that reason, conceptual simulation models are often used to predict groundwater behaviour. They commonly require calibration against historic groundwater observations. Consequently, their prediction performance may degrade significantly for system states that did not occur within the calibration time series. In this study, we calibrate a conceptual model to the observed groundwater levels at several locations within a Chalk system in Southern England. During the calibration period, no groundwater flooding occurred. We then apply our model to predict the groundwater dynamics of the system over a period that includes a groundwater flooding event. We show that the calibrated model provides reasonable predictions before and after the flooding event but over-estimates groundwater levels during the event. After modifying the model structure to include topographic information, the model is capable of predicting the groundwater flooding event even though groundwater flooding never occurred in the calibration period. Although straightforward, our approach shows how conceptual process-based models can be applied to predict system states and dynamics that did not occur in the calibration period. We believe such an approach can be transferred to similar cases, especially to regions where rainfall intensities are expected to trigger processes and system states that have not yet been observed.
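The general workflow, calibrating a conceptual storage model against observed levels, can be sketched with a toy linear reservoir; the model structure, parameters and data are illustrative assumptions, not the authors' model:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
recharge = np.clip(rng.normal(1.0, 0.8, 365), 0.0, None)   # mm/day, synthetic

def simulate(params, recharge):
    k, s_y = params                    # outflow constant, storage coefficient
    h, level = np.empty_like(recharge), 10.0
    for i, r in enumerate(recharge):
        level += r / s_y - k * level   # daily storage balance
        h[i] = level
    return h

# "Observed" levels come from hidden true parameters plus noise
obs = simulate((0.02, 50.0), recharge) + rng.normal(0.0, 0.05, 365)

res = minimize(lambda p: np.mean((simulate(p, recharge) - obs) ** 2),
               x0=(0.05, 30.0), bounds=[(1e-3, 0.5), (1.0, 200.0)])
print("calibrated (k, s_y):", res.x.round(3))
```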
First-Principles Prediction of Liquid/Liquid Interfacial Tension.
Andersson, M P; Bennetzen, M V; Klamt, A; Stipp, S L S
2014-08-12
The interfacial tension between two liquids is the free energy per unit surface area required to create that interface. Interfacial tension is a determining factor for two-phase liquid behavior in a wide variety of systems ranging from water flooding in oil recovery processes and remediation of groundwater aquifers contaminated by chlorinated solvents to drug delivery and a host of industrial processes. Here, we present a model for predicting interfacial tension from first principles using density functional theory calculations. Our model requires no experimental input and is applicable to liquid/liquid systems of arbitrary compositions. The consistency of the predictions with experimental data is significant for binary, ternary, and multicomponent water/organic compound systems, which offers confidence in using the model to predict behavior where no data exists. The method is fast and can be used as a screening technique as well as to extend experimental data into conditions where measurements are technically too difficult, time consuming, or impossible.
A mechanistic Individual-based Model of microbial communities.
Jayathilake, Pahala Gedara; Gupta, Prashant; Li, Bowen; Madsen, Curtis; Oyebamiji, Oluwole; González-Cabaleiro, Rebeca; Rushton, Steve; Bridgens, Ben; Swailes, David; Allen, Ben; McGough, A Stephen; Zuliani, Paolo; Ofiteru, Irina Dana; Wilkinson, Darren; Chen, Jinju; Curtis, Tom
2017-01-01
Accurate predictive modelling of the growth of microbial communities requires the credible representation of the interactions of biological, chemical and mechanical processes. However, although biological and chemical processes are represented in a number of Individual-based Models (IbMs), the interaction of growth and mechanics is limited. Conversely, there are mechanically sophisticated IbMs with only elementary biology and chemistry. This study focuses on addressing these limitations by developing a flexible IbM that can robustly combine the biological, chemical and physical processes that dictate the emergent properties of a wide range of bacterial communities. This IbM is developed by creating a microbiological adaptation of the open source Large-scale Atomic/Molecular Massively Parallel Simulator (LAMMPS). This innovation should provide the basis for "bottom-up" prediction of the emergent behaviour of entire microbial systems. In the model presented here, bacterial growth, division, decay, mechanical contact among bacterial cells, and adhesion between the bacteria and extracellular polymeric substances are incorporated. In addition, fluid-bacteria interaction is implemented to simulate biofilm deformation and erosion. The model predicts that the surface morphology of biofilms becomes smoother with increased nutrient concentration, which agrees well with previous literature. In addition, the results show that increased shear rate results in smoother and more compact biofilms. The model can also predict shear-rate-dependent biofilm deformation, erosion, streamer formation and breakup.
Park, Minkyu; Anumol, Tarun; Daniels, Kevin D; Wu, Shimin; Ziska, Austin D; Snyder, Shane A
2017-08-01
Ozone oxidation has been demonstrated to be an effective treatment process for the attenuation of trace organic compounds (TOrCs); however, predicting TOrC attenuation by ozone processes is challenging in wastewaters. Since ozone is rapidly consumed, determining the exposure times of ozone and hydroxyl radical proves to be difficult. As direct potable reuse schemes continue to gain traction, there is an increasing need for the development of real-time monitoring strategies for TOrC abatement in ozone oxidation processes. Hence, this study is primarily aimed at developing indicator and surrogate models for the prediction of TOrC attenuation by ozone oxidation. To this end, the second-order kinetic equations with a second-phase Rct value (ratio of hydroxyl radical exposure to molecular ozone exposure) were used to calculate comparative kinetics of TOrC attenuation and the reduction of indicator and spectroscopic surrogate parameters, including UV absorbance at 254 nm (UVA254) and total fluorescence (TF). The developed indicator model using meprobamate as an indicator compound and the surrogate models with UVA254 and TF exhibited good predictive power for the attenuation of 13 kinetically distinct TOrCs in five filtered and unfiltered wastewater effluents (R² values > 0.8). This study is intended to help provide a guideline for the implementation of indicator/surrogate models for real-time monitoring of TOrC abatement with ozone processes and integrate them into a regulatory framework in water reuse. Copyright © 2017 Elsevier Ltd. All rights reserved.
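The comparative-kinetics idea reduces to ln(C/C0) = -(kO3 + kOH·Rct)·CT, where CT is the ozone exposure (the time integral of the ozone concentration); a short sketch with placeholder rate constants:

```python
import numpy as np

def attenuation(k_o3, k_oh, rct, ct_o3):
    """Fraction of a TOrC removed for a given ozone exposure ct_o3."""
    return 1.0 - np.exp(-(k_o3 + k_oh * rct) * ct_o3)

ct_o3 = 2.0e-4     # mol*s/L ozone exposure (hypothetical)
rct = 1.0e-8       # second-phase [OH]/[O3] ratio (hypothetical)
# Generic placeholder rate constants (L/mol/s), not the study's values
for name, k_o3, k_oh in [("fast-reacting TOrC", 1e5, 5e9),
                         ("slow-reacting TOrC", 1e1, 3e9)]:
    print(name, f"-> {100 * attenuation(k_o3, k_oh, rct, ct_o3):.1f}% removed")
```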
Early prediction of extreme stratospheric polar vortex states based on causal precursors
NASA Astrophysics Data System (ADS)
Kretschmer, Marlene; Runge, Jakob; Coumou, Dim
2017-08-01
Variability in the stratospheric polar vortex (SPV) can influence the tropospheric circulation and thereby winter weather. Early predictions of extreme SPV states are thus important to improve forecasts of winter weather including cold spells. However, dynamical models are usually restricted in lead time because they poorly capture low-frequency processes. Empirical models often suffer from overfitting problems as the relevant physical processes and time lags are often not well understood. Here we introduce a novel empirical prediction method by uniting a response-guided community detection scheme with a causal discovery algorithm. This way, we objectively identify causal precursors of the SPV at subseasonal lead times and find them to be in good agreement with known physical drivers. A linear regression prediction model based on the causal precursors can explain most SPV variability (r² = 0.58), and our scheme correctly predicts 58% (46%) of extremely weak SPV states for lead times of 1-15 (16-30) days with false-alarm rates of only approximately 5%. Our method can be applied to any variable relevant for (sub)seasonal weather forecasts and could thus help improving long-lead predictions.
Analytical Modeling of Plasma Arc Cutting of Steel Plate
NASA Astrophysics Data System (ADS)
Cimbala, John; Fisher, Lance; Settles, Gary; Lillis, Milan
2000-11-01
A transferred-arc plasma torch cuts steel plate, and in the process ejects a molten stream of iron and ferrous oxides ("ejecta"). Under non-optimum conditions - especially during low speed cuts and/or small-radius corner cuts - "dross" is formed. Dross is re-solidified molten metal that sticks to the underside of the cut and renders it rough. The present research is an attempt to analytically model this process, with the goal of predicting dross formation. With the aid of experimental data, a control volume formulation is used in a steady frame of reference to predict the mass flow of molten material inside the cut. Although simple, the model is three-dimensional, can predict the shear stress driving the molten material in the direction of the plasma jet, and can predict the velocity of molten material exiting the bottom of the plate. In order to predict formation of dross, a momentum balance is performed on the flowing melt, considering the resisting viscous and surface tension forces. Preliminary results are promising, and provide a potential means of predicting dross formation without resorting to detailed computational analyses.
NASA Astrophysics Data System (ADS)
Karl, Matthias; Kukkonen, Jaakko; Keuken, Menno P.; Lützenkirchen, Susanne; Pirjola, Liisa; Hussein, Tareq
2016-04-01
This study evaluates the influence of aerosol processes on the particle number (PN) concentrations in three major European cities on the temporal scale of 1 h, i.e., on the neighborhood and city scales. We have used selected measured data of particle size distributions from previous campaigns in the cities of Helsinki, Oslo and Rotterdam. The aerosol transformation processes were evaluated using the aerosol dynamics model MAFOR, combined with a simplified treatment of roadside and urban atmospheric dispersion. We have compared the model predictions of particle number size distributions with the measured data, and conducted sensitivity analyses regarding the influence of various model input variables. We also present a simplified parameterization for aerosol processes, which is based on the more complex aerosol process computations; this simple model can easily be implemented in both Gaussian and Eulerian urban dispersion models. Aerosol processes considered in this study were (i) the coagulation of particles, (ii) the condensation and evaporation of two organic vapors, and (iii) dry deposition. The chemical transformation of gas-phase compounds was not taken into account. By choosing concentrations and particle size distributions at roadside as the starting point of the computations, nucleation of gas-phase vapors from the exhaust has been regarded as a post-tail-pipe emission, avoiding the need to include nucleation in the process analysis. Dry deposition and coagulation of particles were identified as the most important aerosol dynamic processes that control the evolution and removal of particles. The error of the contribution from dry deposition to PN losses due to the uncertainty of measured deposition velocities ranges from -76 to +64%. The removal of nanoparticles by coagulation was enhanced considerably when the fractal nature of soot aggregates and the combined effect of van der Waals and viscous interactions were taken into account. The effect of condensation and evaporation of organic vapors emitted by vehicles on particle numbers and on particle size distributions was examined. Under inefficient dispersion conditions, the model predicts that condensational growth contributes to the evolution of PN from roadside to the neighborhood scale. The simplified parameterization of aerosol processes predicts the change in particle number concentrations between roadside and urban background within 10% of that predicted by the fully size-resolved MAFOR model.
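The two dominant sinks identified above can be caricatured with a zero-dimensional number balance, dN/dt = -K N² - (vd/H) N; the coefficients below are order-of-magnitude placeholders, not MAFOR values:

```python
from scipy.integrate import solve_ivp

# Order-of-magnitude placeholder coefficients, not MAFOR values
K = 1e-9          # effective coagulation coefficient (cm^3/s)
v_d = 0.5         # dry deposition velocity (cm/s)
H = 2.0e4         # mixing height over the neighborhood scale (cm)

def dNdt(t, N):
    # number concentration decays by coagulation (~N^2) and deposition (~N)
    return -K * N**2 - (v_d / H) * N

sol = solve_ivp(dNdt, (0.0, 3600.0), [1.0e5], t_eval=[0, 900, 1800, 3600])
print("PN (cm^-3) over one hour:", sol.y[0].round(0))
```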
On the application of multilevel modeling in environmental and ecological studies
Qian, Song S.; Cuffney, Thomas F.; Alameddine, Ibrahim; McMahon, Gerard; Reckhow, Kenneth H.
2010-01-01
This paper illustrates the advantages of a multilevel/hierarchical approach for predictive modeling, including flexibility of model formulation, explicitly accounting for hierarchical structure in the data, and the ability to predict the outcome of new cases. As a generalization of the classical approach, the multilevel modeling approach explicitly models the hierarchical structure in the data by considering both the within- and between-group variances leading to a partial pooling of data across all levels in the hierarchy. The modeling framework provides means for incorporating variables at different spatiotemporal scales. The examples used in this paper illustrate the iterative process of model fitting and evaluation, a process that can lead to improved understanding of the system being studied.
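A minimal partial-pooling sketch with a random intercept per group, using synthetic hierarchical data (the variable names and values are placeholders):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
# Synthetic hierarchical data: sites nested within basins (illustrative)
basins = np.repeat(np.arange(8), 25)
basin_effect = rng.normal(0.0, 1.0, 8)[basins]    # between-group variation
x = rng.normal(size=200)                          # site-level predictor
y = 2.0 + 1.5 * x + basin_effect + rng.normal(0.0, 0.5, 200)
df = pd.DataFrame({"y": y, "x": x, "basin": basins})

# A random intercept per basin partially pools estimates across groups,
# capturing both within- and between-group variance components
fit = smf.mixedlm("y ~ x", df, groups=df["basin"]).fit()
print(fit.params)
```

Predicting a new site in an already-observed basin uses that basin's estimated intercept, while a site in an unseen basin falls back on the pooled population-level estimate.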
Evaluating models of climate and forest vegetation
NASA Technical Reports Server (NTRS)
Clark, James S.
1992-01-01
Understanding how the biosphere may respond to increasing trace gas concentrations in the atmosphere requires models that contain vegetation responses to regional climate. Most of the processes ecologists study in forests, including trophic interactions, nutrient cycling, and disturbance regimes, and vital components of the world economy, such as forest products and agriculture, will be influenced in potentially unexpected ways by changing climate. These vegetation changes affect climate in the following ways: changing C, N, and S pools; trace gases; albedo; and water balance. The complexity of the indirect interactions among variables that depend on climate, together with the range of different space/time scales that best describe these processes, make the problems of modeling and prediction enormously difficult. These problems of predicting vegetation response to climate warming and potential ways of testing model predictions are the subjects of this chapter.
Strategies for concurrent processing of complex algorithms in data driven architectures
NASA Technical Reports Server (NTRS)
Stoughton, John W.; Mielke, Roland R.; Som, Sukhamoy
1990-01-01
The performance modeling and enhancement for periodic execution of large-grain, decision-free algorithms in data flow architectures is examined. Applications include real-time implementation of control and signal processing algorithms where performance is required to be highly predictable. The mapping of algorithms onto the specified class of data flow architectures is realized by a marked graph model called ATAMM (Algorithm To Architecture Mapping Model). Performance measures and bounds are established. Algorithm transformation techniques are identified for performance enhancement and reduction of resource (computing element) requirements. A systematic design procedure is described for generating operating conditions for predictable performance both with and without resource constraints. An ATAMM simulator is used to test and validate the performance prediction by the design procedure. Experiments on a three resource testbed provide verification of the ATAMM model and the design procedure.
Strategies for concurrent processing of complex algorithms in data driven architectures
NASA Technical Reports Server (NTRS)
Som, Sukhamoy; Stoughton, John W.; Mielke, Roland R.
1990-01-01
Performance modeling and performance enhancement for periodic execution of large-grain, decision-free algorithms in data flow architectures are discussed. Applications include real-time implementation of control and signal processing algorithms where performance is required to be highly predictable. The mapping of algorithms onto the specified class of data flow architectures is realized by a marked graph model called algorithm to architecture mapping model (ATAMM). Performance measures and bounds are established. Algorithm transformation techniques are identified for performance enhancement and reduction of resource (computing element) requirements. A systematic design procedure is described for generating operating conditions for predictable performance both with and without resource constraints. An ATAMM simulator is used to test and validate the performance prediction by the design procedure. Experiments on a three resource testbed provide verification of the ATAMM model and the design procedure.
SIM_ADJUST -- A computer code that adjusts simulated equivalents for observations or predictions
Poeter, Eileen P.; Hill, Mary C.
2008-01-01
This report documents the SIM_ADJUST computer code. SIM_ADJUST surmounts an obstacle that is sometimes encountered when using universal model analysis computer codes such as UCODE_2005 (Poeter and others, 2005), PEST (Doherty, 2004), and OSTRICH (Matott, 2005; Fredrick and others, 2007). These codes often read simulated equivalents from a list in a file produced by a process model such as MODFLOW that represents a system of interest. At times, values needed by the universal code are missing or assigned default values because the process model could not produce a useful solution. SIM_ADJUST can be used to (1) read a file that lists expected observation or prediction names and possible alternatives for the simulated values; (2) read a file produced by a process model that contains space- or tab-delimited columns, including a column of simulated values and a column of related observation or prediction names; (3) identify observations or predictions that have been omitted or assigned a default value by the process model; and (4) produce an adjusted file that contains a column of simulated values and a column of associated observation or prediction names. The user may provide alternatives that are constant values or alternative simulated values, and may also provide a sequence of alternatives. For example, the heads from a series of cells may be specified to ensure that a meaningful value is available to compare with an observation located in a cell that may become dry. SIM_ADJUST is constructed using modules from the JUPITER API and is intended for use on any computer operating system. SIM_ADJUST consists of algorithms programmed in Fortran90, which perform numerical calculations efficiently.
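The substitution logic described in steps (1)-(4) can be sketched in a few lines; the following Python stand-in (not the Fortran90 code, and with an assumed file format and sentinel value) shows the core idea of walking an ordered list of alternative names or constants:

```python
# Sketch of the SIM_ADJUST substitution logic. The sentinel value and the
# 'name value' file layout are illustrative assumptions.
DEFAULT = -999.0  # value a process model might emit for a failed solution (assumed)

def read_simulated(path):
    """Read 'name value' pairs from a whitespace-delimited model output file."""
    sim = {}
    with open(path) as f:
        for line in f:
            name, value = line.split()[:2]
            sim[name] = float(value)
    return sim

def adjust(expected, sim):
    """For each expected observation, keep the simulated value if usable;
    otherwise walk the ordered alternatives (names or constants)."""
    out = {}
    for name, alternatives in expected.items():
        value = sim.get(name, DEFAULT)
        for alt in alternatives:
            if value != DEFAULT:
                break
            value = sim.get(alt, DEFAULT) if isinstance(alt, str) else alt
        out[name] = value
    return out

# e.g. if the head observation's cell goes dry, fall back to neighboring cells
expected = {"head_12": ["head_11", "head_13", 0.0]}
print(adjust(expected, {"head_12": DEFAULT, "head_13": 101.3}))  # {'head_12': 101.3}
```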
Predicting bending stiffness of randomly oriented hybrid panels
Laura Moya; William T.Y. Tze; Jerrold E. Winandy
2010-01-01
This study was conducted to develop a simple model to predict the bending modulus of elasticity (MOE) of randomly oriented hybrid panels. The modeling process involved three modules: the behavior of a single layer was computed by applying micromechanics equations, layer properties were adjusted for densification effects, and the entire panel was modeled as a three-...
USDA-ARS?s Scientific Manuscript database
Representing the performance of cattle finished on an all forage diet in process-based whole farm system models has presented a challenge. To address this challenge, a study was done to evaluate average daily gain (ADG) predictions of the Integrated Farm System Model (IFSM) for steers consuming all-...
Implementation of channel-routing routines in the Water Erosion Prediction Project (WEPP) model
Li Wang; Joan Q. Wu; William J. Elliott; Shuhui Dun; Sergey Lapin; Fritz R. Fiedler; Dennis C. Flanagan
2010-01-01
The Water Erosion Prediction Project (WEPP) model is a process-based, continuous-simulation, watershed hydrology and erosion model. It is an important tool for water erosion simulation owing to its unique functionality in representing diverse landuse and management conditions. Its applicability is limited to relatively small watersheds since its current version does...
Same day prediction of fecal indicator bacteria (FIB) concentrations and bather protection from the risk of exposure to pathogens are two important goals of implementing a modeling program at recreational beaches. Sampling efforts for modelling applications can be expensive and t...
Investigation of Periodic Pitching through the Static Stall Angle of Attack.
1987-03-01
been completed to characterize and predict the dynamic stall process. In 1968 Ham (Ref 11) completed a study to explain the torsional oscillation of...peak values of lift and moment could be predicted accurately, but the model did not predict when the peaks would occur. Another problem with the...model was that it required input from experimental results to tell when leading edge vortex separation occurred. The prediction of when vortex shedding
DOE Office of Scientific and Technical Information (OSTI.GOV)
Daly, Don S.; Anderson, Kevin K.; White, Amanda M.
Background: A microarray of enzyme-linked immunosorbent assays, or ELISA microarray, predicts simultaneously the concentrations of numerous proteins in a small sample. These predictions, however, are uncertain due to processing error and biological variability. Making sound biological inferences as well as improving the ELISA microarray process require both concentration predictions and credible estimates of their errors. Methods: We present a statistical method based on monotonic spline statistical models, penalized constrained least squares fitting (PCLS) and Monte Carlo simulation (MC) to predict concentrations and estimate prediction errors in ELISA microarray. PCLS restrains the flexible spline to a fit of assay intensity that is a monotone function of protein concentration. With MC, both modeling and measurement errors are combined to estimate prediction error. The spline/PCLS/MC method is compared to a common method using simulated and real ELISA microarray data sets. Results: In contrast to the rigid logistic model, the flexible spline model gave credible fits in almost all test cases, including troublesome cases with left and/or right censoring, or other asymmetries. For the real data sets, 61% of the spline predictions were more accurate than their comparable logistic predictions, especially at the extremes of the prediction curve. The relative errors of 50% of comparable spline and logistic predictions differed by less than 20%. Monte Carlo simulation rendered acceptable asymmetric prediction intervals for both spline and logistic models, while propagation of error produced symmetric intervals that diverged unrealistically as the standard curves approached horizontal asymptotes. Conclusions: The spline/PCLS/MC method is a flexible, robust alternative to a logistic/NLS/propagation-of-error method to reliably predict protein concentrations and estimate their errors. The spline method simplifies model selection and fitting, and reliably estimates believable prediction errors. For the 50% of the real data sets fit well by both methods, spline and logistic predictions are practically indistinguishable, varying in accuracy by less than 15%. The spline method may be useful when automated prediction across simultaneous assays of numerous proteins must be applied routinely with minimal user intervention.
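A minimal numerical sketch of the monotone-fit plus Monte Carlo idea described above, substituting scikit-learn's isotonic regression for the paper's penalized constrained least-squares spline; the standard-curve data, the additive Gaussian noise model, and the measurement standard deviation are all illustrative assumptions:

```python
import numpy as np
from sklearn.isotonic import IsotonicRegression

rng = np.random.default_rng(0)
conc = np.array([0.1, 0.3, 1.0, 3.0, 10.0, 30.0, 100.0])   # standards (assumed)
intensity = 5 + 90 * conc / (conc + 10) + rng.normal(0, 2, conc.size)

iso = IsotonicRegression(increasing=True)      # monotone standard curve
fit = iso.fit_transform(conc, intensity)       # fitted intensities, non-decreasing

def predict_conc(y):
    """Invert the monotone curve by interpolation: intensity -> concentration."""
    return np.interp(y, fit, conc)

# Monte Carlo prediction interval for a new sample's intensity reading
y_new = 60.0
sigma_meas = 2.0   # assumed intensity measurement standard deviation
draws = predict_conc(y_new + rng.normal(0, sigma_meas, 5000))
lo, mid, hi = np.percentile(draws, [2.5, 50, 97.5])
print(f"concentration ~ {mid:.2f} (95% MC interval {lo:.2f}-{hi:.2f})")
```

Because the concentration is recovered by inverting a nonlinear monotone curve, the Monte Carlo interval is naturally asymmetric, mirroring the behavior the abstract reports.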
NASA Astrophysics Data System (ADS)
Fang, Jun
Thermotropic liquid crystalline polymers (TLCPs) are a class of promising engineering materials for demanding structural applications. Their excellent mechanical properties are highly correlated to the underlying molecular orientation states, which may be affected by complex flow fields during melt processing. Thus, understanding and eventually predicting how processing flows impact molecular orientation is a critical step towards rational design work in order to achieve favorable, balanced physical properties in finished products. This thesis aims to develop a deeper understanding of orientation development in commercial TLCPs during processing by coordinating extensive experimental measurements with numerical computations. In situ measurements of orientation development of LCPs during processing are a focal point of this thesis. An x-ray-capable injection molding apparatus is enhanced and utilized for time-resolved measurements of orientation development in multiple commercial TLCPs during injection molding. Ex situ wide-angle x-ray scattering is also employed for more thorough characterization of molecular orientation distributions in molded plaques. Incompletely injection molded plaques ("short shots") are studied to gain further insights into the intermediate orientation states during mold filling. Finally, two surface orientation characterization techniques, near-edge x-ray absorption fine structure (NEXAFS) and infrared attenuated total reflectance (FTIR-ATR), are combined to investigate the surface orientation distribution of injection molded plaques. Surface orientation states are found to be vastly different from their bulk counterparts due to the different kinematics involved in mold filling. In general, complex distributions of orientation in molded plaques reflect the spatially varying competition between shear and extension during mold filling. To complement these experimental measurements, numerical calculations based on the Larson-Doi polydomain model are performed. The implementation of the Larson-Doi model in complex processing flows uses a commercial process modeling software suite (MOLDFLOW®), exploiting a nearly exact analogy between the Larson-Doi model and a fiber orientation model that has been widely used in composites processing simulations. The modeling scheme is first verified by predicting many qualitative and quantitative features of molecular orientation distributions in isothermal extrusion-fed channel flows. In coordination with experiments, the model predictions are found to capture many qualitative features observed in injection molded plaques (including short shots). The final, stringent test of Larson-Doi model performance is the prediction of in situ transient orientation data collected during mold filling. The model yields satisfactory results, though certain numerical approximations limit performance near the mold front.
CABS-fold: Server for the de novo and consensus-based prediction of protein structure.
Blaszczyk, Maciej; Jamroz, Michal; Kmiecik, Sebastian; Kolinski, Andrzej
2013-07-01
The CABS-fold web server provides tools for protein structure prediction from sequence only (de novo modeling) and also using alternative templates (consensus modeling). The web server is based on the CABS modeling procedures ranked in previous Critical Assessment of techniques for protein Structure Prediction competitions as one of the leading approaches for de novo and template-based modeling. Except for template data, fragmentary distance restraints can also be incorporated into the modeling process. The web server output is a coarse-grained trajectory of generated conformations, its Jmol representation and predicted models in all-atom resolution (together with accompanying analysis). CABS-fold can be freely accessed at http://biocomp.chem.uw.edu.pl/CABSfold.
CABS-fold: server for the de novo and consensus-based prediction of protein structure
Blaszczyk, Maciej; Jamroz, Michal; Kmiecik, Sebastian; Kolinski, Andrzej
2013-01-01
The CABS-fold web server provides tools for protein structure prediction from sequence only (de novo modeling) and also using alternative templates (consensus modeling). The web server is based on the CABS modeling procedures ranked in previous Critical Assessment of techniques for protein Structure Prediction competitions as one of the leading approaches for de novo and template-based modeling. Except for template data, fragmentary distance restraints can also be incorporated into the modeling process. The web server output is a coarse-grained trajectory of generated conformations, its Jmol representation and predicted models in all-atom resolution (together with accompanying analysis). CABS-fold can be freely accessed at http://biocomp.chem.uw.edu.pl/CABSfold. PMID:23748950
Dynamic modeling of Tampa Bay urban development using parallel computing
Xian, G.; Crane, M.; Steinwand, D.
2005-01-01
Urban land use and land cover has changed significantly in the environs of Tampa Bay, Florida, over the past 50 years. Extensive urbanization has created substantial change to the region's landscape and ecosystems. This paper uses a dynamic urban-growth model, SLEUTH, which applies six geospatial data themes (slope, land use, exclusion, urban extent, transportation, hillshade), to study the process of urbanization and associated land use and land cover change in the Tampa Bay area. To reduce processing time and complete the modeling process within an acceptable period, the model was recoded and ported to a Beowulf cluster. The parallel-processing computer system accomplishes the massive amount of computation the modeling simulation requires; the SLEUTH calibration process for the Tampa Bay urban growth simulation required only 10 h of CPU time. The model predicts future land use/cover change trends for Tampa Bay from 1992 to 2025. Urban extent is predicted to double in the Tampa Bay watershed between 1992 and 2025. Results show an upward trend of urbanization at the expense of declines of 58% and 80% in agricultural and forested lands, respectively.
ERIC Educational Resources Information Center
Fauth, Elizabeth Braungart; Zarit, Steven H.; Malmberg, Bo; Johansson, Boo
2007-01-01
Purpose: This study used the Disablement Process Model to predict whether a sample of the oldest-old maintained their disability or disability-free status over a 2- and 4-year follow-up, or whether they transitioned into a state of disability during this time. Design and Methods: We followed a sample of 149 Swedish adults who were 86 years of age…
Explicit Processing Demands Reveal Language Modality-Specific Organization of Working Memory
ERIC Educational Resources Information Center
Rudner, Mary; Ronnberg, Jerker
2008-01-01
The working memory model for Ease of Language Understanding (ELU) predicts that processing differences between language modalities emerge when cognitive demands are explicit. This prediction was tested in three working memory experiments with participants who were Deaf Signers (DS), Hearing Signers (HS), or Hearing Nonsigners (HN). Easily nameable…
Palanichamy, A; Jayas, D S; Holley, R A
2008-01-01
The Canadian Food Inspection Agency required the meat industry to ensure that Escherichia coli O157:H7 does not survive (i.e., experiences a ≥5 log CFU/g reduction) in dry fermented sausage (salami) during processing, after a series of foodborne illness outbreaks resulting from this pathogenic bacterium occurred. The industry needs an effective technique such as predictive modeling for estimating bacterial viability, because traditional microbiological enumeration is a time-consuming and laborious method. The accuracy and speed of artificial neural networks (ANNs), an approach developed from predictive microbiology, make them an attractive alternative, especially for on-line processing in industry. Data were collected from a study of the interactive effects of pH, water activity, and allyl isothiocyanate concentration, at various times during sausage manufacture, on reducing numbers of E. coli O157:H7. The data were used to develop predictive models using a general regression neural network (GRNN), a form of ANN, and a statistical linear polynomial regression technique. Both models were compared for their predictive error using various statistical indices. GRNN predictions for training and test data sets had less serious errors than the statistical model predictions: the GRNN models performed better on the training set and slightly better on the test set. The GRNN also accurately predicted the level of allyl isothiocyanate required to ensure a 5-log reduction when an appropriate production set was created by interpolation. Because they are simple to generate, fast, and accurate, ANN models may be of value for industrial use in dry fermented sausage manufacture to reduce the hazard associated with E. coli O157:H7 in fresh beef and permit production of consistently safe products from this raw material.
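A GRNN is equivalent to Nadaraya-Watson kernel regression: the prediction is a Gaussian-weighted average of the training targets. A minimal sketch with made-up placeholder data (not the study's measurements):

```python
import numpy as np

# GRNN as kernel regression. Inputs: pH, water activity, allyl isothiocyanate
# level (ppm), time (days); target: log CFU/g reduction. All values are
# placeholders for illustration.
def grnn_predict(X_train, y_train, X_new, sigma=0.5):
    # standardize features so no single input dominates the distance
    mu, sd = X_train.mean(axis=0), X_train.std(axis=0)
    Xt, Xn = (X_train - mu) / sd, (X_new - mu) / sd
    preds = []
    for x in Xn:
        d2 = np.sum((Xt - x) ** 2, axis=1)        # squared distances
        w = np.exp(-d2 / (2 * sigma ** 2))        # Gaussian kernel weights
        preds.append(np.dot(w, y_train) / np.sum(w))  # weighted mean of targets
    return np.array(preds)

X_train = np.array([[4.8, 0.95, 25.0, 7.0],
                    [5.0, 0.92, 50.0, 14.0],
                    [4.6, 0.90, 75.0, 21.0]])
y_train = np.array([2.1, 3.8, 5.2])               # log CFU/g reductions (assumed)
print(grnn_predict(X_train, y_train, np.array([[4.9, 0.93, 50.0, 10.0]])))
```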
Application of linear regression analysis in accuracy assessment of rolling force calculations
NASA Astrophysics Data System (ADS)
Poliak, E. I.; Shim, M. K.; Kim, G. S.; Choo, W. Y.
1998-10-01
Efficient operation of the computational models employed in process control systems requires periodic assessment of the accuracy of their predictions. Linear regression is proposed as a tool that allows systematic and random prediction errors to be separated from those related to measurements. A quantitative characteristic of the model's predictive ability is introduced in addition to standard statistical tests for model adequacy. Rolling force calculations are considered as an example application; however, the outlined approach can be used to assess the performance of any computational model.
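The proposed check amounts to regressing measured values on model predictions: a slope near 1 and an intercept near 0 indicate the absence of systematic error, while the residual scatter quantifies the random error. A minimal sketch with illustrative numbers:

```python
import numpy as np

# Regression-based accuracy assessment of a process-control model.
# Force values below are illustrative placeholders, not mill data.
predicted = np.array([10.2, 12.5, 15.1, 18.4, 21.0])   # model output, MN
measured  = np.array([10.6, 12.3, 15.8, 18.9, 21.9])   # load-cell readings, MN

slope, intercept = np.polyfit(predicted, measured, 1)
resid = measured - (slope * predicted + intercept)
print(f"slope={slope:.3f} intercept={intercept:.3f} "
      f"random-error sd={resid.std(ddof=2):.3f} MN")
```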
Seasonal Drought Prediction: Advances, Challenges, and Future Prospects
NASA Astrophysics Data System (ADS)
Hao, Zengchao; Singh, Vijay P.; Xia, Youlong
2018-03-01
Drought prediction is of critical importance to early warning for drought management. This review provides a synthesis of drought prediction based on statistical, dynamical, and hybrid methods. Statistical drought prediction is achieved by modeling the relationship between drought indices of interest and a suite of potential predictors, including large-scale climate indices, local climate variables, and land initial conditions. Dynamical meteorological drought prediction relies on seasonal climate forecasts from general circulation models (GCMs), which can be employed to drive hydrological models for agricultural and hydrological drought prediction, with the predictability determined by both climate forcings and initial conditions. Challenges still exist in drought prediction at long lead times and under a changing environment resulting from natural and anthropogenic factors. Future research prospects to improve drought prediction include, but are not limited to, high-quality data assimilation, improved model development with key processes related to drought occurrence, optimal ensemble forecasting to select or weight ensembles, and hybrid drought prediction to merge statistical and dynamical forecasts.
A model of the human in a cognitive prediction task.
NASA Technical Reports Server (NTRS)
Rouse, W. B.
1973-01-01
The human decision maker's behavior when predicting future states of discrete linear dynamic systems driven by zero-mean Gaussian processes is modeled. The task is on a slow enough time scale that physiological constraints are insignificant compared with cognitive limitations. The model is basically a linear regression system identifier with a limited memory and noisy observations. Experimental data are presented and compared to the model.
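A minimal sketch of the model class the abstract describes (a linear-regression identifier restricted to a limited memory of noisy observations), with the system, noise, and memory parameters all assumed for illustration:

```python
import numpy as np

# Limited-memory linear-regression predictor: fit an AR model to only the
# last `memory` noisy observations, then predict the next state.
rng = np.random.default_rng(1)

def predict_next(observations, memory=10, order=2):
    y = np.asarray(observations[-memory:])            # limited memory
    X = np.column_stack([y[i:len(y) - order + i] for i in range(order)])
    coef, *_ = np.linalg.lstsq(X, y[order:], rcond=None)
    return np.dot(y[-order:], coef)                   # one-step prediction

# discrete linear system x_{k+1} = 0.9 x_k + w_k, observed with noise
x, obs = 1.0, []
for _ in range(50):
    x = 0.9 * x + rng.normal(0, 0.05)                 # process noise
    obs.append(x + rng.normal(0, 0.1))                # noisy observation
print("predicted next state:", predict_next(obs))
```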
ERIC Educational Resources Information Center
Nakamura, Yasuyuki; Nishi, Shinnosuke; Muramatsu, Yuta; Yasutake, Koichi; Yamakawa, Osamu; Tagawa, Takahiro
2014-01-01
In this paper, we introduce a mathematical model for collaborative learning and the answering process for multiple-choice questions. The collaborative learning model is inspired by the Ising spin model and the model for answering multiple-choice questions is based on their difficulty level. An intensive simulation study predicts the possibility of…
Real-Time Prediction of Temperature Elevation During Robotic Bone Drilling Using the Torque Signal.
Feldmann, Arne; Gavaghan, Kate; Stebinger, Manuel; Williamson, Tom; Weber, Stefan; Zysset, Philippe
2017-09-01
Bone drilling is a surgical procedure commonly required in many surgical fields, particularly orthopedics, dentistry, and head and neck surgery. While the long-term effects of thermal bone necrosis are unknown, thermal damage to nerves in spinal or otolaryngological surgeries might lead to partial paralysis. Previous models to predict the temperature elevation have been suggested, but were either not validated or suffer from computation times and complexity that do not allow real-time prediction. Within this study, an analytical temperature prediction model is proposed which uses the torque signal of the drilling process to model the heat production of the drill bit. A simple Green's disk source function is used to solve the three-dimensional heat equation along the drilling axis. Additionally, an extensive experimental study was carried out to validate the model. A custom CNC setup with a load cell and a thermal camera was used to measure the axial drilling torque and force as well as temperature elevations. Bones spanning a range of bone volume fractions were drilled with two drill bits (Ø1.8 mm and Ø2.5 mm), with eight repetitions each. The model was calibrated with 5 of 40 measurements and successfully validated with the rest of the data ([Formula: see text]C). It was also found that the temperature elevation can be predicted using only the torque signal of the drilling process. In the future, the model could be used to monitor and control the drilling process in surgeries close to vulnerable structures.
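A sketch of the torque-driven superposition idea under stated assumptions; the paper's exact Green's function, calibration, and treatment of heat partition may differ:

```python
import numpy as np

# Heat rate at the tip: P = eta * torque * omega, treated as a disk source in
# an infinite medium; the on-axis temperature rise a distance z ahead of the
# tip is a superposition of instantaneous disk-source Green's functions.
# eta, the material properties, and the torque signal are all assumptions.
rho_c = 2.2e6    # J/(m^3 K), volumetric heat capacity of cortical bone (assumed)
kappa = 3.0e-7   # m^2/s, thermal diffusivity (assumed)
R = 0.9e-3       # m, drill radius (for a 1.8 mm bit)
omega = 2 * np.pi * 1000 / 60   # rad/s at 1000 rpm (assumed)
eta = 0.1        # fraction of mechanical power entering the bone (assumed)

def disk_green(z, t):
    """On-axis temperature rise (K) per joule for an instantaneous disk source."""
    s = 4 * kappa * t
    return (np.exp(-z**2 / s) * (1 - np.exp(-R**2 / s))
            / (rho_c * np.pi * R**2 * np.sqrt(np.pi * s)))

dt = 0.05                                    # s, torque sampling interval
torque = np.full(60, 0.08)                   # N*m, measured torque signal (assumed)
t_end = dt * torque.size
ages = t_end - dt * np.arange(torque.size)   # age of each heat pulse at t_end
z = 1.0e-3                                   # m, point of interest ahead of the tip
T_rise = np.sum(eta * torque * omega * dt * disk_green(z, ages))
print(f"predicted temperature rise at z = 1 mm after {t_end:.1f} s: {T_rise:.1f} K")
```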
NASA Astrophysics Data System (ADS)
Kugele, Daniel; Dörr, Dominik; Wittemann, Florian; Hangs, Benjamin; Rausch, Julius; Kärger, Luise; Henning, Frank
2017-10-01
The combination of thermoforming processes of continuous-fiber-reinforced thermoplastics and injection molding offers high potential for cost-effective use in automobile mass production. During manufacturing, the thermoplastic laminates are initially heated to a temperature above the melting point. This is followed by continuous cooling of the material during the forming process, which leads to crystallization under non-isothermal conditions. To account for phase change effects in thermoforming simulation, accurate modeling of the crystallization kinetics is required. In this context, it is important to consider the wide range of cooling rates observed during processing. Consequently, this paper deals with the experimental investigation of crystallization at cooling rates varying from 0.16 K/s to 100 K/s using standard differential scanning calorimetry (DSC) and fast scanning calorimetry (Flash DSC). Two different modeling approaches (the Nakamura model and the modified Nakamura-Ziabicki model) for predicting crystallization kinetics are parameterized according to the DSC measurements. It turns out that only the modified Nakamura-Ziabicki model is capable of predicting crystallization kinetics for all investigated cooling rates. Finally, the modified Nakamura-Ziabicki model is validated by cooling experiments using PA6-CF laminates with embedded temperature sensors. It is shown that the modified Nakamura-Ziabicki model predicts crystallization at non-isothermal conditions and varying cooling rates with good accuracy. Thus, the study contributes to a deeper understanding of non-isothermal crystallization and presents an overall method for modeling crystallization under process conditions.
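For reference, the Nakamura form that both modeling approaches build on expresses relative crystallinity as X(t) = 1 − exp(−[∫K(T(t′))dt′]^n). The sketch below pairs it with a Ziabicki-type Gaussian rate function K(T); all parameter values are assumed rather than taken from the paper's PA6-CF calibration:

```python
import numpy as np

# Nakamura model with a Ziabicki-type Gaussian rate function K(T).
n = 3.0        # Avrami index (assumed)
K_max = 0.8    # 1/s, peak crystallization rate (assumed)
T_max = 145.0  # deg C, temperature of fastest crystallization (assumed)
D = 60.0       # deg C, half-width of the rate curve (assumed)

def K(T):
    return K_max * np.exp(-4 * np.log(2) * (T - T_max) ** 2 / D ** 2)

def crystallinity(T_of_t, t):
    """Relative crystallinity X(t) = 1 - exp(-[integral of K(T) dt]^n)."""
    integral = np.concatenate(([0.0], np.cumsum(np.diff(t) * K(T_of_t[1:]))))
    return 1.0 - np.exp(-integral ** n)

t = np.linspace(0, 60, 601)       # s
cooling_rate = 2.0                # K/s, within the studied 0.16-100 K/s range
T = 250.0 - cooling_rate * t      # linear cooling from the melt (assumed)
X = crystallinity(T, t)
print(f"relative crystallinity after cooling to {T[-1]:.0f} C: {X[-1]:.2f}")
```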
Revisiting low-fidelity two-fluid models for gas-solids transport
NASA Astrophysics Data System (ADS)
Adeleke, Najeem; Adewumi, Michael; Ityokumbul, Thaddeus
2016-08-01
Two-phase gas-solids transport models are widely utilized for process design and automation in a broad range of industrial applications. Some of these applications include proppant transport in gaseous fracking fluids, air/gas drilling hydraulics, coal-gasification reactors, and food processing units. Systems automation and real-time process optimization stand to benefit a great deal from the availability of efficient and accurate theoretical models for operations data processing. However, modeling two-phase pneumatic transport systems accurately requires a comprehensive understanding of gas-solids flow behavior. In this study we discuss the prevailing flow conditions and present a low-fidelity two-fluid model equation for particulate transport. The model equations are formulated in a manner that ensures the physical flux term remains conservative despite the inclusion of solids normal stress through the empirical formula for the modulus of elasticity. A new set of Roe-Pike averages is presented for the resulting strictly hyperbolic flux term in the system of equations, which was used to develop a Roe-type approximate Riemann solver. The resulting scheme is stable regardless of the choice of flux limiter. The model is evaluated by comparing its predictions with experimental results from both pneumatic riser and air-drilling hydraulics systems. We demonstrate the effect and impact of the numerical formulation and choice of numerical scheme on model predictions. We illustrate the capability of a low-fidelity one-dimensional two-fluid model in predicting relevant flow parameters in two-phase particulate systems accurately, even under flow regimes involving counter-current flow.
Alterations in choice behavior by manipulations of world model.
Green, C S; Benson, C; Kersten, D; Schrater, P
2010-09-14
How to compute initially unknown reward values makes up one of the key problems in reinforcement learning theory, with two basic approaches being used. Model-free algorithms rely on the accumulation of substantial amounts of experience to compute the value of actions, whereas in model-based learning, the agent seeks to learn the generative process for outcomes from which the value of actions can be predicted. Here we show that (i) "probability matching"-a consistent example of suboptimal choice behavior seen in humans-occurs in an optimal Bayesian model-based learner using a max decision rule that is initialized with ecologically plausible, but incorrect beliefs about the generative process for outcomes and (ii) human behavior can be strongly and predictably altered by the presence of cues suggestive of various generative processes, despite statistically identical outcome generation. These results suggest human decision making is rational and model based and not consistent with model-free learning.
Alterations in choice behavior by manipulations of world model
Green, C. S.; Benson, C.; Kersten, D.; Schrater, P.
2010-01-01
How to compute initially unknown reward values makes up one of the key problems in reinforcement learning theory, with two basic approaches being used. Model-free algorithms rely on the accumulation of substantial amounts of experience to compute the value of actions, whereas in model-based learning, the agent seeks to learn the generative process for outcomes from which the value of actions can be predicted. Here we show that (i) “probability matching”—a consistent example of suboptimal choice behavior seen in humans—occurs in an optimal Bayesian model-based learner using a max decision rule that is initialized with ecologically plausible, but incorrect beliefs about the generative process for outcomes and (ii) human behavior can be strongly and predictably altered by the presence of cues suggestive of various generative processes, despite statistically identical outcome generation. These results suggest human decision making is rational and model based and not consistent with model-free learning. PMID:20805507
Mechanistic modelling of drug release from a polymer matrix using magnetic resonance microimaging.
Kaunisto, Erik; Tajarobi, Farhad; Abrahmsen-Alami, Susanna; Larsson, Anette; Nilsson, Bernt; Axelsson, Anders
2013-03-12
In this paper a new model describing drug release from a polymer matrix tablet is presented. The utilization of the model is described as a two-step process: initially, polymer parameters are obtained from a previously published pure-polymer dissolution model; these results are then combined with drug parameters obtained from literature data in the new model to predict solvent and drug concentration profiles and polymer and drug release profiles. The modelling approach was applied to the case of an HPMC matrix highly loaded with mannitol (model drug). The results showed that the drug release rate can be successfully predicted using the suggested modelling approach. However, the model was not able to accurately predict the polymer release profile, possibly due to the sparse amount of usable pure-polymer dissolution data. In addition to the case study, a sensitivity analysis of model parameters relevant to drug release was performed. The analysis revealed important information that can be useful in the drug formulation process. Copyright © 2013 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Larson, David J., Jr.; Casagrande, Louis G.; Di Marzio, Don; Levy, Alan; Carlson, Frederick M.; Lee, Taipao; Black, David R.; Wu, Jun; Dudley, Michael
1994-07-01
We have successfully validated theoretical models of seeded vertical Bridgman-Stockbarger CdZnTe crystal growth and post-solidification processing, using in-situ thermal monitoring and innovative material characterization techniques. The models predict the thermal gradients, interface shape, fluid flow and solute redistribution during solidification, as well as the distributions of accumulated excess stress that causes defect generation and redistribution. Data from the furnace and ampoule wall have validated predictions from the thermal model. Results are compared to predictions of the thermal and thermo-solutal models. We explain the measured initial, change-of-rate, and terminal compositional transients as well as the macrosegregation. Macro and micro-defect distributions have been imaged on CdZnTe wafers from 40 mm diameter boules. Superposition of topographic defect images and predicted excess stress patterns suggests the origin of some frequently encountered defects, particularly on a macro scale, to result from the applied and accumulated stress fields and the anisotropic nature of the CdZnTe crystal. Implications of these findings with respect to producibility are discussed.
Mueller, Martina; Wagner, Carol L; Annibale, David J; Knapp, Rebecca G; Hulsey, Thomas C; Almeida, Jonas S
2006-03-01
Approximately 30% of intubated preterm infants with respiratory distress syndrome (RDS) will fail attempted extubation, requiring reintubation and mechanical ventilation. Although ventilator technology and monitoring of premature infants have improved over time, optimal extubation remains challenging. Furthermore, extubation decisions for premature infants require complex information processing, involving techniques implicitly learned through clinical practice. Computer-aided decision-support tools would benefit inexperienced clinicians, especially during peak neonatal intensive care unit (NICU) census. A five-step procedure was developed to identify predictive variables. Clinical expert (CE) thought processes comprised one model. Variables from that model were used to develop two mathematical models for the decision-support tool: an artificial neural network (ANN) and a multivariate logistic regression model (MLR). The ranking of the variables in the three models was compared using the Wilcoxon Signed Rank Test. The best performing model was used in a web-based decision-support tool with a user interface implemented in Hypertext Markup Language (HTML) and a mathematical model employing the ANN. CEs identified 51 potentially predictive variables for extubation decisions for an infant on mechanical ventilation. Comparisons of the three models showed a significant difference between the ANN and the CE (p = 0.0006). Of the original 51 potentially predictive variables, the 13 most predictive variables were used to develop an ANN as a web-based decision tool. The ANN processes user-provided data and returns a prediction score between 0 and 1 together with a novelty index. The user then selects the most appropriate threshold for categorizing the prediction as a success or failure. Furthermore, the novelty index, indicating the similarity of the test case to the training cases, allows the user to assess the confidence level of the prediction with regard to how much the new data differ from the data originally used for the development of the prediction tool. State-of-the-art machine-learning methods can be employed for the development of sophisticated tools to aid clinicians' decisions. We identified numerous variables considered relevant for extubation decisions for mechanically ventilated premature infants with RDS. We then developed a web-based decision-support tool for clinicians, which can be made widely available and potentially improve patient care worldwide.
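The abstract does not define the novelty index; one plausible construction is the distance from a new case to the training data, scaled by the typical nearest-neighbor distance within the training set. A hypothetical sketch (not the paper's actual definition):

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

# Hypothetical novelty index: ~1 means the new case looks like training data,
# >>1 means the prediction extrapolates. Training data are random placeholders.
rng = np.random.default_rng(0)
X_train = rng.normal(size=(200, 13))      # 13 predictive variables, per the paper

nn = NearestNeighbors(n_neighbors=2).fit(X_train)
d_train, _ = nn.kneighbors(X_train)       # column 0 is each point itself
typical = d_train[:, 1].mean()            # typical nearest-neighbor distance

def novelty(x_new):
    d, _ = nn.kneighbors(x_new.reshape(1, -1), n_neighbors=1)
    return float(d[0, 0] / typical)

print(novelty(rng.normal(size=13)))       # in-distribution case
print(novelty(10 + rng.normal(size=13)))  # clearly novel case
```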
Molecular Modeling of Environmentally Important Processes: Reduction Potentials
ERIC Educational Resources Information Center
Lewis, Anne; Bumpus, John A.; Truhlar, Donald G.; Cramer, Christopher J.
2004-01-01
The increasing use of computational quantum chemistry in the modeling of environmentally important processes is described. The employment of computational quantum mechanics for the prediction of oxidation-reduction potentials of solutes in aqueous media is discussed.
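The central relation behind such predictions converts a computed free-energy change (which, for solutes, includes solvation contributions) into a potential. A worked example using the textbook Daniell-cell value:

```latex
E^\circ = -\frac{\Delta G^\circ}{nF}, \qquad
\text{e.g. } \Delta G^\circ = -212.3\ \text{kJ/mol},\; n = 2:\quad
E^\circ = \frac{212300\ \text{J/mol}}{2 \times 96485\ \text{C/mol}} \approx 1.10\ \text{V}
```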
Global sensitivity analysis of DRAINMOD-FOREST, an integrated forest ecosystem model
Shiying Tian; Mohamed A. Youssef; Devendra M. Amatya; Eric D. Vance
2014-01-01
Global sensitivity analysis is a useful tool to understand process-based ecosystem models by identifying key parameters and processes controlling model predictions. This study reported a comprehensive global sensitivity analysis for DRAINMOD-FOREST, an integrated model for simulating water, carbon (C), and nitrogen (N) cycles and plant growth in lowland forests. The...
Adler, Philipp; Hugen, Thorsten; Wiewiora, Marzena; Kunz, Benno
2011-03-07
An unstructured model for an integrated fermentation/membrane extraction process for the production of the aroma compounds 2-phenylethanol and 2-phenylethylacetate by Kluyveromyces marxianus CBS 600 was developed. The extent to which this model, based only on data from the conventional fermentation and separation processes, provided an estimation of the integrated process was evaluated. The effect of product inhibition by both aroma compounds on specific growth rate and on biomass yield was approximated by multivariate regression. Simulations of the respective submodels for fermentation and the separation process matched well with experimental results. With respect to the in situ product removal (ISPR) process, the effect of reduced product inhibition due to product removal on specific growth rate and biomass yield was predicted adequately by the model simulations. Overall product yields were increased considerably in this process (4.0 g/L 2-PE+2-PEA vs. 1.4 g/L in conventional fermentation) and were even higher than predicted by the model. To describe the effect of product concentration on product formation itself, the model was extended using results from the conventional and the ISPR processes, after which agreement between model and experimental data improved notably. This model can therefore be a useful tool for the development and optimization of an efficient integrated bioprocess. Copyright © 2010 Elsevier Inc. All rights reserved.
Development and application of an acceptance testing model
NASA Technical Reports Server (NTRS)
Pendley, Rex D.; Noonan, Caroline H.; Hall, Kenneth R.
1992-01-01
The process of acceptance testing large software systems for NASA has been analyzed, and an empirical planning model of the process constructed. This model gives managers accurate predictions of the staffing needed, the productivity of a test team, and the rate at which the system will pass. Applying the model to a new system shows a high level of agreement between the model and actual performance. The model also gives managers an objective measure of process improvement.
Real Time Land-Surface Hydrologic Modeling Over Continental US
NASA Technical Reports Server (NTRS)
Houser, Paul R.
1998-01-01
The land surface component of the hydrological cycle is fundamental to the overall functioning of atmospheric and climate processes. Spatially and temporally variable rainfall and available energy, combined with land surface heterogeneity, cause complex variations in all processes related to surface hydrology. The characterization of the spatial and temporal variability of water and energy cycles is critical to improving our understanding of land surface-atmosphere interaction and the impact of land surface processes on climate extremes. Because accurate knowledge of these processes and their variability is important for climate predictions, most Numerical Weather Prediction (NWP) centers have incorporated land surface schemes in their models. However, errors in the NWP forcing accumulate in the surface water and energy stores, leading to incorrect surface water and energy partitioning and related processes. This has motivated NWP centers to impose ad hoc corrections to the land surface states to prevent this drift. A proposed methodology is to develop Land Data Assimilation Schemes (LDAS), which are uncoupled models forced with observations and therefore not affected by NWP forcing biases. The proposed research is being implemented as a real-time operation using an existing Surface Vegetation Atmosphere Transfer Scheme (SVATS) model at a 40-km resolution across the United States to address these critical science questions. The model will be forced with real-time output from numerical prediction models, satellite data, and radar precipitation measurements. Model parameters will be derived from the existing GIS vegetation and soil coverages. The model results will be aggregated to various scales to assess water and energy balances, and these will be validated with various in-situ observations.
Issues of upscaling in space and time with soil erosion models
NASA Astrophysics Data System (ADS)
Brazier, R. E.; Parsons, A. J.; Wainwright, J.; Hutton, C.
2009-04-01
Soil erosion - the entrainment, transport and deposition of soil particles - is an important phenomenon to understand; the quantity of soil loss determines the long-term on-site sustainability of agricultural production (Pimentel et al., 1995), and has potentially important off-site impacts on water quality (Bilotta and Brazier, 2008). The fundamental mechanisms of the soil erosion process have been studied at the laboratory scale, the plot scale (Wainwright et al., 2000), the small catchment scale (refs here) and the river basin scale through sediment yield and budgeting work. Subsequently, soil erosion models have developed alongside and directly from this empirical work, from data-based models such as the USLE (Wischmeier and Smith, 1978) to 'physics- or process-based' models such as EUROSEM (Morgan et al., 1998) and WEPP (Nearing et al., 1989). Model development has helped to structure our understanding of the fundamental factors that control the soil erosion process at the plot and field scale. Despite these advances, however, our understanding of, and ability to predict, erosion and sediment yield at the same plot and field scales, and also at larger catchment scales, remains poor. Sediment yield has been shown to both increase and decrease as a function of drainage area (de Vente et al., 2006); the lack of a simple relationship demonstrates complex and scale-dependent process domination throughout a catchment, and emphasises our uncertainty and poor conceptual basis for predicting plot- to catchment-scale erosion rates and sediment yields (Parsons et al., 2006b). Therefore, this paper presents a review of the problems associated with modelling soil erosion across spatial and temporal scales and suggests some potential solutions to address these problems. The transport-distance approach to scaling erosion rates (Wainwright et al., 2008) is assessed and discussed in light of alternative techniques for predicting erosion across spatial and temporal scales. References Bilotta, G.S. and Brazier, R.E., 2008. Understanding the influence of suspended solids on water quality and aquatic biota. Water Research, 42(12): 2849-2861. de Vente, J., Poesen, J., Bazzoffi, P., Van Rompaey, A.V. and Verstraeten, G., 2006. Predicting catchment sediment yield in Mediterranean environments: the importance of sediment sources and connectivity in Italian drainage basins. Earth Surface Processes and Landforms, 31: 1017-1034. Morgan, R.P.C. et al., 1998. The European soil erosion model (EUROSEM): a dynamic approach for predicting sediment transport from fields to small catchments. Earth Surface Processes and Landforms, 23: 527-544. Nearing, M.A., Foster, G.R., Lane, L.J. and Finkner, S.C., 1989. A process-based soil erosion model for USDA Water Erosion Prediction Project technology. Trans. ASAE, 32(5): 1587-1593. Parsons, A.J., Brazier, R.E., Wainwright, J. and Powell, D.M., 2006a. Scale relationships in hillslope runoff and erosion. Earth Surface Processes and Landforms, 31(11): 1384-1393. Parsons, A.J., Wainwright, J., Brazier, R.E. and Powell, D.M., 2006b. Is sediment delivery a fallacy? Earth Surface Processes and Landforms, 31(10): 1325-1328. Pimentel, D. et al., 1995. Environmental and economic costs of soil erosion and conservation benefits. Science, 267: 1117-1122. Wainwright, J., Parsons, A.J. and Abrahams, A.D., 2000. Plot-scale studies of vegetation, overland flow and erosion interactions: case studies from Arizona and New Mexico. Hydrological Processes, 14(16-17): 2921-2943. Wischmeier, W.H. and Smith, D.D., 1978.
Predicting rainfall erosion losses - a guide for conservation planning. USDA Agriculture Handbook No. 537.
Motivation and justification: a dual-process model of culture in action.
Vaisey, Stephen
2009-05-01
This article presents a new model of culture in action. Although most sociologists who study culture emphasize its role in post hoc sense making, sociologists of religion and social psychologists tend to focus on the role beliefs play in motivation. The dual-process model integrates justificatory and motivational approaches by distinguishing between "discursive" and "practical" modes of culture and cognition. The author uses panel data from the National Study of Youth and Religion to illustrate the model's usefulness. Consistent with its predictions, he finds that though respondents cannot articulate clear principles of moral judgment, their choice from a list of moral-cultural scripts strongly predicts later behavior.
Modelling the influence of total suspended solids on E. coli removal in river water.
Qian, Jueying; Walters, Evelyn; Rutschmann, Peter; Wagner, Michael; Horn, Harald
2016-01-01
Following sewer overflows, fecal indicator bacteria enter surface waters and may experience different lysis or growth processes. A 1D mathematical model was developed to predict total suspended solids (TSS) and Escherichia coli concentrations based on field measurements in a large-scale flume system simulating a combined sewer overflow. The removal mechanisms of natural inactivation, UV inactivation, and sedimentation were modelled. For the sedimentation process, one, two or three particle size classes were incorporated separately into the model. Moreover, the UV sensitivity coefficient α and natural inactivation coefficient kd were both formulated as functions of TSS concentration. It was observed that the E. coli removal was predicted more accurately by incorporating two particle size classes. However, addition of a third particle size class only improved the model slightly. When α and kd were allowed to vary with the TSS concentration, the model was able to predict E. coli fate and transport at different TSS concentrations accurately and flexibly. A sensitivity analysis revealed that the mechanisms of UV and natural inactivation were more influential at low TSS concentrations, whereas the sedimentation process became more important at elevated TSS concentrations.
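A minimal sketch of the removal structure the abstract describes (TSS-dependent natural and UV inactivation plus sedimentation with two particle size classes); the functional forms of kd(TSS) and α(TSS) and every parameter value are illustrative assumptions, not the fitted model:

```python
import numpy as np

# First-order E. coli losses: natural inactivation kd(TSS), UV inactivation
# alpha(TSS) * uv, and settling of particle-attached bacteria in two classes.
def ecoli_remaining(E0, TSS, uv, depth=1.0, hours=12.0):
    v_settle = np.array([0.02, 0.2])   # m/h, settling velocities, two classes
    frac = np.array([0.6, 0.4])        # fraction of E. coli in each class (assumed)
    kd = 0.05 * (1 + 50.0 / (TSS + 50.0))    # 1/h, natural inactivation (assumed)
    alpha = 0.5 * np.exp(-TSS / 100.0)       # UV sensitivity, shaded by solids
    k = kd + alpha * uv + v_settle / depth   # total first-order loss per class
    return np.sum(E0 * frac * np.exp(-k * hours))

print(ecoli_remaining(E0=1e5, TSS=30.0, uv=1.0), "CFU/100 mL after 12 h")
```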
Characterization of Used Nuclear Fuel with Multivariate Analysis for Process Monitoring
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dayman, Kenneth J.; Coble, Jamie B.; Orton, Christopher R.
2014-01-01
The Multi-Isotope Process (MIP) Monitor combines gamma spectroscopy and multivariate analysis to detect anomalies in various process streams in a nuclear fuel reprocessing system. Measured spectra are compared to models of nominal behavior at each measurement location to detect unexpected changes in system behavior. In order to improve the accuracy and specificity of process monitoring, fuel characterization may be used to more accurately train subsequent models in a full analysis scheme. This paper presents initial development of a reactor-type classifier that is used to select a reactor-specific partial least squares model to predict fuel burnup. Nuclide activities for prototypic used fuel samples were generated in ORIGEN-ARP and used to investigate techniques to characterize used nuclear fuel in terms of reactor type (pressurized or boiling water reactor) and burnup. A variety of reactor type classification algorithms, including k-nearest neighbors, linear and quadratic discriminant analyses, and support vector machines, were evaluated to differentiate used fuel from pressurized and boiling water reactors. Then, reactor type-specific partial least squares models were developed to predict the burnup of the fuel. Using these reactor type-specific models instead of a model trained for all light water reactors improved the accuracy of burnup predictions. The developed classification and prediction models were combined and applied to a large dataset that included eight fuel assembly designs, two of which were not used in training the models, and spanned the range of the initial 235U enrichment, cooling time, and burnup values expected of future commercial used fuel for reprocessing. Error rates were consistent across the range of considered enrichment, cooling time, and burnup values. Average absolute relative errors in burnup predictions for validation data both within and outside the training space were 0.0574% and 0.0597%, respectively. The errors seen in this work are artificially low, because the models were trained, optimized, and tested on simulated, noise-free data. However, these results indicate that the developed models may generalize well to new data and that the proposed approach constitutes a viable first step in developing a fuel characterization algorithm based on gamma spectra.
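The two-stage scheme (classify reactor type, then apply a type-specific PLS burnup model) can be sketched with scikit-learn; the random placeholder data below stand in for the ORIGEN-ARP nuclide activities:

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.cross_decomposition import PLSRegression

# Stage 1: classify reactor type; stage 2: reactor-specific PLS burnup model.
# All data are synthetic placeholders, not ORIGEN-ARP output.
rng = np.random.default_rng(0)
X = rng.normal(size=(400, 50))                       # spectral features
reactor = rng.integers(0, 2, size=400)               # 0 = PWR, 1 = BWR
burnup = 20 + 30 * rng.random(400) + 5 * reactor + X[:, 0]   # GWd/MTU, synthetic

clf = KNeighborsClassifier(n_neighbors=5).fit(X, reactor)
pls = {r: PLSRegression(n_components=5).fit(X[reactor == r], burnup[reactor == r])
       for r in (0, 1)}

x_new = X[:1]                                        # a "new" spectrum
r_hat = int(clf.predict(x_new)[0])                   # stage 1: reactor type
print("burnup estimate:", float(pls[r_hat].predict(x_new)[0, 0]))
```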
Fordham, Damien A; Mellin, Camille; Russell, Bayden D; Akçakaya, Reşit H; Bradshaw, Corey J A; Aiello-Lammens, Matthew E; Caley, Julian M; Connell, Sean D; Mayfield, Stephen; Shepherd, Scoresby A; Brook, Barry W
2013-10-01
Evidence is accumulating that species' responses to climate changes are best predicted by modelling the interaction of physiological limits, biotic processes and the effects of dispersal-limitation. Using commercially harvested blacklip (Haliotis rubra) and greenlip abalone (Haliotis laevigata) as case studies, we determine the relative importance of accounting for interactions among physiology, metapopulation dynamics and exploitation in predictions of range (geographical occupancy) and abundance (spatially explicit density) under various climate change scenarios. Traditional correlative ecological niche models (ENM) predict that climate change will benefit the commercial exploitation of abalone by promoting increased abundances without any reduction in range size. However, models that account simultaneously for demographic processes and physiological responses to climate-related factors result in future (and present) estimates of area of occupancy (AOO) and abundance that differ from those generated by ENMs alone. Range expansion and population growth are unlikely for blacklip abalone because of important interactions between climate-dependent mortality and metapopulation processes; in contrast, greenlip abalone should increase in abundance despite a contraction in AOO. The strongly non-linear relationship between abalone population size and AOO has important ramifications for the use of ENM predictions that rely on metrics describing change in habitat area as proxies for extinction risk. These results show that predicting species' responses to climate change often require physiological information to understand climatic range determinants, and a metapopulation model that can make full use of this data to more realistically account for processes such as local extirpation, demographic rescue, source-sink dynamics and dispersal-limitation. © 2013 John Wiley & Sons Ltd.
Some considerations on the use of ecological models to predict species' geographic distributions
Peterjohn, B.G.
2001-01-01
Peterson (2001) used Genetic Algorithm for Rule-set Prediction (GARP) models to predict distribution patterns from Breeding Bird Survey (BBS) data. Evaluations of these models should consider inherent limitations of BBS data: (1) BBS methods may not sample species and habitats equally; (2) using BBS data for both model development and testing may overlook poor fit of some models; and (3) BBS data may not provide the desired spatial resolution or capture temporal changes in species distributions. The predictive value of GARP models requires additional study, especially comparisons with distribution patterns from independent data sets. When employed at appropriate temporal and geographic scales, GARP models show considerable promise for conservation biology applications but provide limited inferences concerning processes responsible for the observed patterns.
NASA Astrophysics Data System (ADS)
Samadian, Pedram; Parsa, Mohammad Habibi; Ahmadabadi, M. Nili; Mirzadeh, Hamed
2014-10-01
Knowledge of the transformation temperatures is crucial in the processing of steels, especially in thermomechanical processes, because the microstructures and mechanical properties after processing are closely related to the extent and type of transformations. The experimental determination of critical temperatures is costly, and therefore it is preferred to predict them by mathematical methods. In the current work, new thermodynamically based models were developed for computing the Ae3 and Acm temperatures under equilibrium cooling conditions when austenite is deformed at elevated temperatures. The main advantage of the proposed models is their capability to predict the temperatures of austenite equilibrium transformations, in steels with total alloying elements (Mn + Si + Ni + Cr + Mo + Cu) less than 5 wt.% and Si less than 1 wt.%, under deformation conditions using only the chemical potentials of the constituents, without the need to determine the total Gibbs free energy of the steel, which requires many experiments and computations.
On-Line Robust Modal Stability Prediction using Wavelet Processing
NASA Technical Reports Server (NTRS)
Brenner, Martin J.; Lind, Rick
1998-01-01
Wavelet analysis for filtering and system identification has been used to improve the estimation of aeroservoelastic stability margins. The conservatism of the robust stability margins is reduced with parametric and nonparametric time-frequency analysis of flight data in the model validation process. Nonparametric wavelet processing of data is used to reduce the effects of external disturbances and unmodeled dynamics. Parametric estimates of modal stability are also extracted using the wavelet transform. Computation of robust stability margins for stability boundary prediction depends on uncertainty descriptions derived from the data for model validation. The F-18 High Alpha Research Vehicle aeroservoelastic flight test data demonstrates improved robust stability prediction by extension of the stability boundary beyond the flight regime. Guidelines and computation times are presented to show the efficiency and practical aspects of these procedures for on-line implementation. Feasibility of the method is shown for processing flight data from time-varying nonstationary test points.
Emotional arousal amplifies the effects of biased competition in the brain
Lee, Tae-Ho; Sakaki, Michiko; Cheng, Ruth; Velasco, Ricardo
2014-01-01
The arousal-biased competition model predicts that arousal increases the gain on neural competition between stimulus representations. Thus, the model predicts that arousal simultaneously enhances processing of salient stimuli and impairs processing of relatively less-salient stimuli. We tested this model with a simple dot-probe task. On each trial, participants were simultaneously exposed to one face image as a salient cue stimulus and one place image as a non-salient stimulus. A border around the face cue location further increased its bottom-up saliency. Before these visual stimuli were shown, one of two tones played: one that predicted a shock (increasing arousal) or one that did not. An arousal-by-saliency interaction in category-specific brain regions (fusiform face area for salient faces and parahippocampal place area for non-salient places) indicated that brain activation associated with processing the salient stimulus was enhanced under arousal whereas activation associated with processing the non-salient stimulus was suppressed under arousal. This is the first functional magnetic resonance imaging study to demonstrate that arousal can enhance information processing for prioritized stimuli while simultaneously impairing processing of non-prioritized stimuli. Thus, it goes beyond previous research to show that arousal does not uniformly enhance perceptual processing, but instead does so selectively in ways that optimize attention to highly salient stimuli. PMID:24532703
Pustozerov, Evgenii; Popova, Polina; Tkachuk, Aleksandra; Bolotko, Yana; Yuldashev, Zafar; Grineva, Elena
2018-01-09
Personalized blood glucose (BG) prediction for diabetes patients is an important goal that is pursued by many researchers worldwide. Despite many proposals, only a few projects are dedicated to the development of complete recommender system infrastructures that incorporate BG prediction algorithms for diabetes patients. The development and implementation of such a system aided by mobile technology is of particular interest to patients with gestational diabetes mellitus (GDM), especially considering the significant importance of quickly achieving adequate BG control for these patients in a short period (ie, during pregnancy) and a typically higher acceptance rate for mobile health (mHealth) solutions for short- to midterm usage. This study was conducted with the objective of developing infrastructure comprising data processing algorithms, BG prediction models, and an appropriate mobile app for patients' electronic record management to guide BG prediction-based personalized recommendations for patients with GDM. A mobile app for electronic diary management was developed along with data exchange and continuous BG signal processing software. Both components were coupled to obtain the necessary data for use in the personalized BG prediction system. Necessary data on meals, BG measurements, and other events were collected via the implemented mobile app and continuous glucose monitoring (CGM) system processing software. These data were used to tune and evaluate the BG prediction model, which included an algorithm for dynamic coefficient tuning. In the clinical study, 62 participants (GDM: n=49; control: n=13) took part in a 1-week monitoring trial during which they used the mobile app to track their meals and BG self-measurements, and a CGM system for continuous BG monitoring. The data on 909 food intakes and corresponding postprandial BG curves, as well as a set of patient characteristics (eg, glycated hemoglobin, body mass index [BMI], age, and lifestyle parameters), were selected as inputs for the BG prediction models. The models predicted BG levels 1 hour after food intake with a root mean square error of 0.87 mmol/L, a mean absolute error of 0.69 mmol/L, and a mean absolute percentage error of 12.8%, which corresponds to adequate prediction accuracy for BG control decisions. The mobile app for the collection and processing of relevant data, appropriate software for CGM system signal processing, and BG prediction models were developed for a recommender system. The developed system may help improve BG control in patients with GDM; this will be the subject of evaluation in a subsequent study. ©Evgenii Pustozerov, Polina Popova, Aleksandra Tkachuk, Yana Bolotko, Zafar Yuldashev, Elena Grineva. Originally published in JMIR Mhealth and Uhealth (http://mhealth.jmir.org), 09.01.2018.
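The three accuracy measures reported above are standard; a minimal computation on placeholder postprandial BG values (mmol/L):

```python
import numpy as np

# RMSE, MAE, and MAPE on made-up measured vs. predicted BG values.
y_true = np.array([6.1, 7.4, 5.8, 8.0])
y_pred = np.array([5.6, 8.1, 6.3, 7.2])

rmse = np.sqrt(np.mean((y_true - y_pred) ** 2))
mae = np.mean(np.abs(y_true - y_pred))
mape = 100 * np.mean(np.abs((y_true - y_pred) / y_true))
print(f"RMSE={rmse:.2f} mmol/L  MAE={mae:.2f} mmol/L  MAPE={mape:.1f}%")
```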
Orthogonal Gaussian process models
Plumlee, Matthew; Joseph, V. Roshan
2017-01-01
Gaussian process models are widely adopted for nonparametric/semi-parametric modeling. Identifiability issues occur when the mean model contains polynomials with unknown coefficients. Though the resulting prediction is unaffected, this leads to poor estimation of the coefficients in the mean model, and thus the estimated mean model loses interpretability. This paper introduces a new Gaussian process model whose stochastic part is orthogonal to the mean part to address this issue. The paper also discusses applications to multi-fidelity simulations using data examples.
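For context, the conventional (non-orthogonal) formulation estimates the mean coefficients by generalized least squares; the sketch below illustrates that baseline setup, whose coefficient estimates the paper argues are poorly identified, rather than the paper's orthogonal construction. All data and settings are illustrative.

```python
import numpy as np

# GP regression with a polynomial mean (universal kriging): the GLS
# estimate beta = (F' K^-1 F)^-1 F' K^-1 y of the mean coefficients.
def gp_poly_mean_coefficients(x, y, length_scale=0.5, nugget=1e-6):
    F = np.vander(x, N=2, increasing=True)                  # mean basis [1, x]
    K = np.exp(-0.5 * (x[:, None] - x[None, :]) ** 2 / length_scale**2)
    K += nugget * np.eye(len(x))                            # squared-exp cov
    Ki_F = np.linalg.solve(K, F)
    Ki_y = np.linalg.solve(K, y)
    return np.linalg.solve(F.T @ Ki_F, F.T @ Ki_y)          # GLS coefficients

x = np.linspace(0.0, 1.0, 20)
y = 1.0 + 2.0 * x + 0.1 * np.sin(8.0 * x)                   # toy data
print(gp_poly_mean_coefficients(x, y))
```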
Predicting indoor pollutant concentrations, and applications to air quality management
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lorenzetti, David M.
Because most people spend more than 90% of their time indoors, predicting exposure to airborne pollutants requires models that incorporate the effect of buildings. Buildings affect the exposure of their occupants in a number of ways, both by design (for example, filters in ventilation systems remove particles) and incidentally (for example, sorption on walls can reduce peak concentrations, but prolong exposure to semivolatile organic compounds). Furthermore, building materials and occupant activities can generate pollutants. Indoor air quality depends not only on outdoor air quality, but also on the design, maintenance, and use of the building. For example, "sick building" symptoms such as respiratory problems and headaches have been related to the presence of air-conditioning systems, to carpeting, to low ventilation rates, and to high occupant density (1). The physical processes of interest apply even in simple structures such as homes. Indoor air quality models simulate the processes, such as ventilation and filtration, that control pollutant concentrations in a building. Section 2 describes the modeling approach, and the important transport processes in buildings. Because advection usually dominates among the transport processes, Sections 3 and 4 describe methods for predicting airflows. The concluding section summarizes the application of these models.
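At their core, single-zone indoor air quality models reduce to a well-mixed mass balance over the building volume; the sketch below shows the basic calculation, assuming first-order deposition and a constant indoor source, with purely illustrative parameter values.

```python
import numpy as np

def indoor_concentration(c_out, t, V=250.0, Q=125.0, k_dep=0.2, S=10.0, c0=0.0):
    # Well-mixed single-zone mass balance, integrated with forward Euler:
    #   V dC/dt = Q*(C_out - C) - k_dep*V*C + S
    # V [m^3] zone volume, Q [m^3/h] ventilation rate,
    # k_dep [1/h] deposition rate, S [ug/h] indoor source strength.
    c = np.empty_like(t)
    c[0] = c0
    for i in range(1, len(t)):
        dt = t[i] - t[i - 1]
        c[i] = c[i - 1] + dt * ((Q / V) * (c_out[i - 1] - c[i - 1])
                                - k_dep * c[i - 1] + S / V)
    return c

t = np.linspace(0.0, 24.0, 24 * 60)                 # one day, minute steps [h]
c_out = 20.0 + 10.0 * np.sin(2 * np.pi * t / 24)    # outdoor signal [ug/m^3]
print(indoor_concentration(c_out, t)[-1])           # indoor level at day's end
```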
Prediction of wastewater treatment plants performance based on artificial fish school neural network
NASA Astrophysics Data System (ADS)
Zhang, Ruicheng; Li, Chong
2011-10-01
A reliable model for a wastewater treatment plant is essential in providing a tool for predicting its performance and forming a basis for controlling the operation of the process. This would minimize operating costs and help assess the stability of the environmental balance. Given the multi-variable, uncertain, non-linear characteristics of the wastewater treatment system, an artificial fish school neural network prediction model is established based on actual operating data from the wastewater treatment system. The model overcomes several disadvantages of the conventional BP neural network. The calculation results show that the predicted values match the measured values well, demonstrating that the model is effective for simulation and prediction and can be used to optimize the operating status. The prediction model provides a simple and practical way to support operation and management of a wastewater treatment plant, and has good research and engineering practical value.
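As a rough illustration of the input/output structure of such a plant-performance predictor, the sketch below trains a small feed-forward network on synthetic data. Note that it substitutes ordinary gradient training via scikit-learn for the paper's artificial fish school optimization of the network weights, and the feature names are hypothetical.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Hypothetical influent features -> effluent quality target (e.g., COD)
rng = np.random.default_rng(0)
X = rng.uniform(size=(200, 4))     # influent COD, flow, pH, temperature
y = X @ np.array([0.6, 0.2, 0.1, 0.1]) + 0.05 * rng.standard_normal(200)

model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(8,), max_iter=2000, random_state=0),
)
model.fit(X[:150], y[:150])        # train on the first 150 operating records
print("test R^2:", model.score(X[150:], y[150:]))
```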
Action perception as hypothesis testing.
Donnarumma, Francesco; Costantini, Marcello; Ambrosini, Ettore; Friston, Karl; Pezzulo, Giovanni
2017-04-01
We present a novel computational model that describes action perception as an active inferential process that combines motor prediction (the reuse of our own motor system to predict perceived movements) and hypothesis testing (the use of eye movements to disambiguate amongst hypotheses). The system uses a generative model of how (arm and hand) actions are performed to generate hypothesis-specific visual predictions, and directs saccades to the most informative places of the visual scene to test these predictions - and underlying hypotheses. We test the model using eye movement data from a human action observation study. In both the human study and our model, saccades are proactive whenever context affords accurate action prediction; but uncertainty induces a more reactive gaze strategy, via tracking the observed movements. Our model offers a novel perspective on action observation that highlights its active nature based on prediction dynamics and hypothesis testing. Copyright © 2017 The Authors. Published by Elsevier Ltd. All rights reserved.
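The hypothesis-testing idea can be caricatured in a few lines: keep a posterior over action hypotheses and fixate the location whose observation is expected to reduce posterior entropy the most. The toy sketch below (random likelihoods, not the authors' generative model of arm and hand actions) shows that selection rule.

```python
import numpy as np

def entropy(p):
    p = p[p > 0]
    return -np.sum(p * np.log(p))

# p_obs[h, l, o]: likelihood of seeing cue o at location l under hypothesis h
rng = np.random.default_rng(1)
n_hyp, n_loc, n_obs = 3, 4, 2
p_obs = rng.dirichlet(np.ones(n_obs), size=(n_hyp, n_loc))
posterior = np.ones(n_hyp) / n_hyp          # uniform prior over hypotheses

def expected_entropy_after_fixating(l):
    # Marginal over possible observations at l, then Bayes update per outcome
    p_o = posterior @ p_obs[:, l, :]
    h = 0.0
    for o in range(n_obs):
        post_o = posterior * p_obs[:, l, o]
        post_o /= post_o.sum()
        h += p_o[o] * entropy(post_o)
    return h

best = min(range(n_loc), key=expected_entropy_after_fixating)
print("most informative fixation target:", best)
```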
NASA Astrophysics Data System (ADS)
Lim, Yeerang; Jung, Youeyun; Bang, Hyochoong
2018-05-01
This study presents model predictive formation control based on an eccentricity/inclination vector separation strategy. Alternatively, collision avoidance can be accomplished by using eccentricity/inclination vectors and adding a simple goal-function term to the optimization process. Real-time control is also achievable with a model predictive controller based on a convex formulation. A constraint-tightening approach is addressed as well to improve the robustness of the controller, and simulation results are presented to verify the performance enhancement of the proposed approach.
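To make the convex MPC idea concrete, here is a minimal sketch using CVXPY on a double-integrator stand-in for the relative orbital dynamics; the shrinking state bound over the horizon mimics constraint tightening. Dynamics, weights, and bounds are all illustrative, not the paper's formulation.

```python
import cvxpy as cp
import numpy as np

dt, N = 1.0, 10
A = np.array([[1.0, dt], [0.0, 1.0]])       # double-integrator dynamics
B = np.array([[0.5 * dt**2], [dt]])

x = cp.Variable((2, N + 1))                  # relative position and velocity
u = cp.Variable((1, N))                      # control acceleration
x0 = np.array([1.0, 0.0])                    # initial relative state

cost, constr = 0, [x[:, 0] == x0]
for k in range(N):
    cost += cp.sum_squares(x[:, k + 1]) + 0.1 * cp.sum_squares(u[:, k])
    constr += [x[:, k + 1] == A @ x[:, k] + B @ u[:, k],
               cp.abs(u[0, k]) <= 0.5,
               cp.abs(x[0, k + 1]) <= 2.0 - 0.05 * k]   # tightened bound
prob = cp.Problem(cp.Minimize(cost), constr)
prob.solve()
print("first control move:", u.value[0, 0])  # apply, then re-solve next step
```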
Advances and Computational Tools towards Predictable Design in Biological Engineering
2014-01-01
The design process of complex systems in all fields of engineering requires a set of quantitatively characterized components and a method to predict the output of systems composed of such elements. This strategy relies on the modularity of the used components or on the prediction of their context-dependent behaviour, when the functioning of parts depends on the specific context. Mathematical models usually support the whole process by guiding the selection of parts and by predicting the output of interconnected systems. Such a bottom-up design process cannot be trivially adopted for biological systems engineering, since the function of parts is hard to predict when components are reused in different contexts. This issue and the intrinsic complexity of living systems limit the capability of synthetic biologists to predict the quantitative behaviour of biological systems. The high potential of synthetic biology strongly depends on the capability of mastering this issue. This review discusses the predictability issues of basic biological parts (promoters, ribosome binding sites, coding sequences, transcriptional terminators, and plasmids) when used to engineer simple and complex gene expression systems in Escherichia coli. A comparison between bottom-up and trial-and-error approaches is performed for all the discussed elements, and mathematical models supporting the prediction of part behaviour are illustrated. PMID:25161694
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zamecnik, J. R.; Edwards, T. B.
The conversions of nitrite to nitrate, the destruction of glycolate, and the conversion of glycolate to formate and oxalate were modeled for the Nitric-Glycolic flowsheet using data from Chemical Process Cell (CPC) simulant runs conducted by SRNL from 2011 to 2015. The goal of this work was to develop empirical correlations for these variables versus measurable variables from the chemical process so that these quantities could be predicted a priori from the sludge composition and measurable processing variables. The need for these predictions arises from the requirement to predict the REDuction/OXidation (REDOX) state of the glass from the Defense Waste Processing Facility (DWPF) melter. This report summarizes the initial work on these correlations based on the aforementioned data. Further refinement of the models as additional data are collected is recommended.
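Empirical correlations of this kind are typically fitted by ordinary least squares against the measurable process variables; the sketch below shows the pattern with entirely hypothetical predictors and synthetic data (the report's actual variables and coefficients are not reproduced here).

```python
import numpy as np

# Fitting an empirical correlation: a response (e.g., fraction of nitrite
# converted to nitrate) regressed on measurable process variables.
rng = np.random.default_rng(2)
acid_stoich = rng.uniform(0.8, 1.6, 40)      # hypothetical: acid stoichiometry
boil_time = rng.uniform(2.0, 10.0, 40)       # hypothetical: hours at boiling
frac_converted = (0.2 + 0.3 * acid_stoich + 0.02 * boil_time
                  + 0.02 * rng.standard_normal(40))   # synthetic response

# Design matrix with intercept; solve least squares for the coefficients
X = np.column_stack([np.ones_like(acid_stoich), acid_stoich, boil_time])
coef, *_ = np.linalg.lstsq(X, frac_converted, rcond=None)
print("intercept, acid, and time coefficients:", coef)
```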
A geomorphology-based ANFIS model for multi-station modeling of rainfall-runoff process
NASA Astrophysics Data System (ADS)
Nourani, Vahid; Komasi, Mehdi
2013-05-01
This paper demonstrates the potential use of Artificial Intelligence (AI) techniques for predicting daily runoff at multiple gauging stations. The uncertainty and complexity of the rainfall-runoff process, due to its variability in space and time on the one hand and the lack of historical data on the other, cause difficulties in spatiotemporal modeling of the process. In this paper, an Integrated Geomorphological Adaptive Neuro-Fuzzy Inference System (IGANFIS) model conjugated with the C-means clustering algorithm was used for rainfall-runoff modeling at multiple stations of the Eel River watershed, California. The proposed model can be used for predicting runoff at stations that lack data, or at any sub-basin within the watershed, because it employs the spatial and temporal variables of the sub-basins as model inputs. This ability of the integrated model for spatiotemporal modeling of the process was examined through a cross-validation technique for a station. In this way, different ANFIS structures were trained using the Sugeno algorithm in order to estimate daily discharge values at different stations. In order to improve the model efficiency, the input data were then classified into clusters by means of the fuzzy C-means (FCM) method. The goodness-of-fit measures support the beneficial use of the IGANFIS and FCM methods in spatiotemporal modeling of hydrological processes.
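Fuzzy C-means itself is compact enough to write out; this sketch implements the standard algorithm (soft memberships, membership-weighted centroids) on synthetic two-dimensional inputs, one plausible way to cluster the model inputs as described.

```python
import numpy as np

def fuzzy_c_means(X, c=3, m=2.0, tol=1e-5, max_iter=200, seed=0):
    # Plain fuzzy C-means: returns cluster centers and membership matrix U,
    # where U[i, k] is the degree to which sample i belongs to cluster k.
    rng = np.random.default_rng(seed)
    U = rng.dirichlet(np.ones(c), size=len(X))        # rows sum to 1
    for _ in range(max_iter):
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        # u_ik = 1 / sum_j (d_ik / d_ij)^(2/(m-1))
        U_new = 1.0 / np.sum((d[:, :, None] / d[:, None, :]) ** (2 / (m - 1)),
                             axis=2)
        if np.abs(U_new - U).max() < tol:
            return centers, U_new
        U = U_new
    return centers, U

X = np.random.default_rng(1).normal(size=(100, 2))    # hypothetical inputs
centers, U = fuzzy_c_means(X)
print(centers)
```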
NASA Astrophysics Data System (ADS)
Lowman, L.; Barros, A. P.
2017-12-01
Data assimilation (DA) is the widely accepted procedure for estimating parameters within predictive models because of the adaptability and uncertainty quantification offered by Bayesian methods. DA applications in phenology modeling offer critical insights into how extreme weather or changes in climate impact the vegetation life cycle. Changes in leaf onset and senescence, root phenology, and intermittent leaf shedding imply large changes in the surface radiative, water, and carbon budgets at multiple scales. Models of leaf phenology require concurrent atmospheric and soil conditions to determine how biophysical plant properties respond to changes in temperature, light, and water demand. Presently, climatological records for fraction of photosynthetically active radiation (FPAR) and leaf area index (LAI), the modeled states indicative of plant phenology, are not available. Further, DA models are typically trained on short periods of record (e.g., less than 10 years). Using limited records within a DA framework imposes non-stationarity on the estimated parameters and the resulting predicted model states. This talk discusses how uncertainty introduced by the inherent non-stationarity of the modeled processes propagates through a land-surface hydrology model coupled to a predictive phenology model. How water demand is accounted for in the upscaling of DA model inputs, together with the choice of analysis period, serves as a key source of uncertainty in the FPAR and LAI predictions. Parameters estimated from different DA periods effectively calibrate a plant water-use strategy within the land-surface hydrology model. For example, when extreme droughts are included in the DA period, the plants are trained to take up water, transpire, and assimilate carbon under favorable conditions and quickly shut down at the onset of water stress.
Assessing model sensitivity and uncertainty across multiple Free-Air CO2 Enrichment experiments.
NASA Astrophysics Data System (ADS)
Cowdery, E.; Dietze, M.
2015-12-01
As atmospheric carbon dioxide levels continue to increase, it is critical that terrestrial ecosystem models can accurately predict ecological responses to the changing environment. Current predictions of net primary productivity (NPP) in response to elevated atmospheric CO2 concentrations are highly variable and contain a considerable amount of uncertainty. It is necessary that we understand which factors are driving this uncertainty. The Free-Air CO2 Enrichment (FACE) experiments have equipped us with a rich data source that can be used to calibrate and validate these model predictions. To identify and evaluate the assumptions causing inter-model differences we performed model sensitivity and uncertainty analysis across ambient and elevated CO2 treatments using the Data Assimilation Linked Ecosystem Carbon (DALEC) model and the Ecosystem Demography Model (ED2), two process-based models ranging from low to high complexity, respectively. These modeled process responses were compared to experimental data from the Kennedy Space Center Open Top Chamber Experiment, the Nevada Desert Free Air CO2 Enrichment Facility, the Rhinelander FACE experiment, the Wyoming Prairie Heating and CO2 Enrichment Experiment, the Duke Forest FACE experiment, and the Oak Ridge Experiment on CO2 Enrichment. By leveraging the data access proxy and data tilling services provided by the BrownDog data curation project alongside analysis modules available in the Predictive Ecosystem Analyzer (PEcAn), we produced automated, repeatable benchmarking workflows that are generalized to incorporate different sites and ecological models. Combining the observed patterns of uncertainty between the two models with results of the recent FACE model-data synthesis project (FACE-MDS) can help identify which processes need further study and additional data constraints. These findings can be used to inform future experimental design and, in turn, provide an informative starting point for data assimilation.
ERIC Educational Resources Information Center
Zhang, Jinguang
2010-01-01
Research suggests that first- and third-person perceptions are driven by the motive to self-enhance and cognitive processes involving the perception of social norms. This article proposes and tests a dual-process model that predicts an interaction between cognition and motivation. Consistent with the model, Experiment 1 (N = 112) showed that…
Using Pareto points for model identification in predictive toxicology
2013-01-01
Predictive toxicology is concerned with the development of models that are able to predict the toxicity of chemicals. A reliable prediction of toxic effects of chemicals in living systems is highly desirable in cosmetics, drug design, and food protection to speed up the process of chemical compound discovery while reducing the need for lab tests. There is an extensive literature associated with the best practice of model generation and data integration, but management and automated identification of relevant models from available collections of models are still open problems. Currently, the decision on which model should be used for a new chemical compound is left to users. This paper intends to initiate the discussion on automated model identification. We present an algorithm, based on Pareto optimality, which mines model collections and identifies a model that offers a reliable prediction for a new chemical compound. The performance of this new approach is verified for two endpoints: IGC50 and LogP. The results show a great potential for automated model identification methods in predictive toxicology. PMID:23517649
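Pareto optimality over model scores is simple to compute; the sketch below finds the non-dominated models when each candidate is scored on two objectives to be minimized (the objective names here are hypothetical stand-ins, not the paper's criteria).

```python
import numpy as np

def pareto_front(points):
    # Indices of non-dominated points when every objective is minimized.
    # A point is dominated if another is <= in all objectives and < in one.
    pts = np.asarray(points, dtype=float)
    keep = []
    for i, p in enumerate(pts):
        dominated = np.any(np.all(pts <= p, axis=1) & np.any(pts < p, axis=1))
        if not dominated:
            keep.append(i)
    return keep

# Hypothetical model scores: (prediction error, distance of the query
# compound from the model's applicability domain)
scores = [(0.30, 1.2), (0.25, 1.5), (0.40, 0.8), (0.35, 1.6)]
print(pareto_front(scores))   # models worth considering for the new compound
```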
NASA Astrophysics Data System (ADS)
Vitousek, Sean; Barnard, Patrick L.; Limber, Patrick; Erikson, Li; Cole, Blake
2017-04-01
We present a shoreline change model for coastal hazard assessment and management planning. The model, CoSMoS-COAST (Coastal One-line Assimilated Simulation Tool), is a transect-based, one-line model that predicts short-term and long-term shoreline response to climate change in the 21st century. The proposed model represents a novel, modular synthesis of process-based models of coastline evolution due to longshore and cross-shore transport by waves and sea level rise. Additionally, the model uses an extended Kalman filter for data assimilation of historical shoreline positions to improve estimates of model parameters and thereby improve confidence in long-term predictions. We apply CoSMoS-COAST to simulate sandy shoreline evolution along 500 km of coastline in Southern California, which hosts complex mixtures of beach settings variably backed by dunes, bluffs, cliffs, estuaries, river mouths, and urban infrastructure, providing applicability of the model to virtually any coastal setting. Aided by data assimilation, the model is able to reproduce the observed signal of seasonal shoreline change for the hindcast period of 1995-2010, showing excellent agreement between modeled and observed beach states. The skill of the model during the hindcast period improves confidence in the model's predictive capability when applied to the forecast period (2010-2100) driven by GCM-projected wave and sea level conditions. Predictions of shoreline change with limited human intervention indicate that 31% to 67% of Southern California beaches may become completely eroded by 2100 under sea level rise scenarios of 0.93 to 2.0 m.
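The assimilation step can be illustrated with a toy linear Kalman filter on a scalar shoreline position with an uncertain trend, corrected whenever a historical shoreline survey is available. This is only a caricature of the model's extended Kalman filter, and every number below is illustrative.

```python
import numpy as np

x = np.array([0.0, -0.5])                # state: [position (m), trend (m/yr)]
P = np.diag([25.0, 1.0])                 # state covariance
F = np.array([[1.0, 1.0], [0.0, 1.0]])   # yearly step: position += trend
Q = np.diag([0.5, 0.01])                 # process noise
H = np.array([[1.0, 0.0]])               # surveys observe position only
R = np.array([[4.0]])                    # survey noise variance (m^2)

surveys = [-1.2, -2.0, None, -3.1]       # None = no survey that year
for z in surveys:
    x, P = F @ x, F @ P @ F.T + Q        # forecast step
    if z is not None:                    # assimilation step
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)   # Kalman gain
        x = x + K @ (np.array([z]) - H @ x)
        P = (np.eye(2) - K @ H) @ P
print("position and trend estimate:", x)
```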
Reliable probabilities through statistical post-processing of ensemble predictions
NASA Astrophysics Data System (ADS)
Van Schaeybroeck, Bert; Vannitsem, Stéphane
2013-04-01
We develop post-processing or calibration approaches based on linear regression that make ensemble forecasts more reliable. First, we enforce climatological reliability in the sense that the total variability of the prediction is equal to the variability of the observations. Second, we impose ensemble reliability such that the spread of the observation around the ensemble mean coincides with that of the ensemble members. In general the attractors of the model and of reality are inhomogeneous, so ensemble spread displays a variability not taken into account in standard post-processing methods. We overcome this by weighting the ensemble by a variable error. The approaches are tested in the context of the Lorenz 96 model (Lorenz, 1996). The forecasts become more reliable at short lead times, as reflected by a flatter rank histogram. Our best method turns out to be superior to well-established methods like EVMOS (Van Schaeybroeck and Vannitsem, 2011) and Nonhomogeneous Gaussian Regression (Gneiting et al., 2005). References [1] Gneiting, T., Raftery, A. E., Westveld, A., Goldman, T., 2005: Calibrated probabilistic forecasting using ensemble model output statistics and minimum CRPS estimation. Mon. Weather Rev. 133, 1098-1118. [2] Lorenz, E. N., 1996: Predictability - a problem partly solved. Proceedings, Seminar on Predictability, ECMWF. 1, 1-18. [3] Van Schaeybroeck, B., and S. Vannitsem, 2011: Post-processing through linear regression. Nonlin. Processes Geophys., 18, 147.
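A minimal sketch of the climatological-reliability idea: regress the observations on the ensemble mean, then inflate the spread around the corrected mean so the total variability of the calibrated forecast matches that of the observations. The data are synthetic and the scheme is illustrative, not the authors' EVMOS or variable-error weighting method.

```python
import numpy as np

rng = np.random.default_rng(3)
obs = rng.normal(0.0, 2.0, size=500)                  # verifying observations
ens = 0.5 + 0.6 * obs[:, None] + rng.normal(0.0, 0.5, size=(500, 20))

ens_mean = ens.mean(axis=1)
a = np.cov(obs, ens_mean)[0, 1] / ens_mean.var()      # regression slope
b = obs.mean() - a * ens_mean.mean()                  # bias correction
cal_mean = a * ens_mean + b

# Inflate member deviations so total calibrated variance matches obs variance
spread_var = ((ens - ens_mean[:, None]) ** 2).mean()
inflation = np.sqrt(max(obs.var() - cal_mean.var(), 0.0) / spread_var)
calibrated = cal_mean[:, None] + inflation * (ens - ens_mean[:, None])
print("obs variance:", obs.var().round(2),
      "calibrated variance:", calibrated.var().round(2))
```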
Velderraín, José Dávila; Martínez-García, Juan Carlos; Álvarez-Buylla, Elena R
2017-01-01
Mathematical models based on dynamical systems theory are well-suited tools for integrating available molecular experimental data into coherent frameworks in order to propose hypotheses about the cooperative regulatory mechanisms driving developmental processes. Computational analysis of the proposed models using well-established methods enables testing the hypotheses by contrasting predictions with observations. Within such a framework, Boolean gene regulatory network dynamical models have been extensively used in modeling plant development. Boolean models are simple and intuitively appealing, ideal tools for collaborative efforts between theorists and experimentalists. In this chapter we present protocols used in our group for the study of diverse plant developmental processes. We focus on conceptual clarity and practical implementation, providing directions to the corresponding technical literature.
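A Boolean gene regulatory network model is small enough to show in full: each gene is on or off, update rules encode regulation, and repeated synchronous updates settle into attractors that such models interpret as developmental fates. The three-gene network and its rules below are entirely hypothetical.

```python
from itertools import product

# Hypothetical three-gene Boolean network, updated synchronously.
def step(state):
    a, b, c = state
    return (not c,           # gene A is repressed by C
            a and not c,     # gene B requires A and is repressed by C
            b)               # gene C follows B

def attractor_from(state):
    seen = []
    while state not in seen:           # iterate until the trajectory repeats
        seen.append(state)
        state = step(state)
    cycle = tuple(seen[seen.index(state):])
    # Rotate to a canonical phase so identical cycles deduplicate in a set
    i = min(range(len(cycle)), key=lambda k: cycle[k:] + cycle[:k])
    return cycle[i:] + cycle[:i]

# Enumerate all 2^3 initial states and collect the distinct attractors
attractors = {attractor_from(s) for s in product((False, True), repeat=3)}
for a in sorted(attractors):
    print(a)
```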
Hybrid wavelet-support vector machine approach for modelling rainfall-runoff process.
Komasi, Mehdi; Sharghi, Soroush
2016-01-01
Because of the importance of water resources management, the need for accurate modeling of the rainfall-runoff process has grown rapidly in the past decades. Recently, the support vector machine (SVM) approach has been used by hydrologists for rainfall-runoff modeling and in other fields of hydrology. Similar to other artificial intelligence models, such as the artificial neural network (ANN) and the adaptive neural fuzzy inference system, the SVM model is based on autoregressive properties. In this paper, wavelet analysis was linked to the SVM model concept for modeling the rainfall-runoff process of the Aghchai and Eel River watersheds. In this way, the main time series of the two variables, rainfall and runoff, were decomposed into multiple frequency-based time series by wavelet theory; these time series were then used as input data to the SVM model in order to predict the runoff discharge one day ahead. The obtained results show that the wavelet-SVM model can predict both short- and long-term runoff discharges by considering the seasonality effects. Also, the proposed hybrid model is relatively more appropriate than classical autoregressive ones such as ANN and SVM because it uses the multi-scale time series of rainfall and runoff data in the modeling process.
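The hybrid scheme is straightforward to prototype: decompose the series with a discrete wavelet transform, reconstruct each sub-series, and feed their lagged values to a support vector regressor. The sketch below uses PyWavelets and scikit-learn on a synthetic runoff series; the wavelet choice, lags, and hyperparameters are illustrative.

```python
import numpy as np
import pywt
from sklearn.svm import SVR

# Synthetic daily runoff with seasonal and monthly components plus noise
rng = np.random.default_rng(4)
t = np.arange(400)
runoff = (5 + 2 * np.sin(2 * np.pi * t / 365)
          + 0.5 * np.sin(2 * np.pi * t / 30)
          + 0.3 * rng.standard_normal(400))

# Decompose, then reconstruct each scale separately by zeroing the others
wavelet, level = "db4", 2
coeffs = pywt.wavedec(runoff, wavelet, level=level)
subseries = []
for k in range(len(coeffs)):
    cs = [c if i == k else np.zeros_like(c) for i, c in enumerate(coeffs)]
    subseries.append(pywt.waverec(cs, wavelet)[:len(runoff)])

X = np.column_stack([s[:-1] for s in subseries])   # lag-1 multiscale inputs
y = runoff[1:]                                     # next-day discharge
model = SVR(C=10.0, epsilon=0.05).fit(X[:300], y[:300])
print("test R^2:", model.score(X[300:], y[300:]))
```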
Validating and Extending the Three Process Model of Alertness in Airline Operations
Ingre, Michael; Van Leeuwen, Wessel; Klemets, Tomas; Ullvetter, Christer; Hough, Stephen; Kecklund, Göran; Karlsson, David; Åkerstedt, Torbjörn
2014-01-01
Sleepiness and fatigue are important risk factors in the transport sector, and bio-mathematical sleepiness, sleep, and fatigue modeling is increasingly becoming a valuable tool for assessing the safety of work schedules and rosters in Fatigue Risk Management Systems (FRMS). The present study sought to validate the inner workings of one such model, the Three Process Model (TPM), on aircrews, and to extend the model with functions to model jetlag and to directly assess the risk of any sleepiness level in any shift schedule or roster, with and without knowledge of sleep timings. We collected sleep and sleepiness data from 136 aircrews in a real-life situation by means of an application running on a handheld touch-screen device (iPhone, iPod, or iPad) and used the TPM to predict sleepiness with varying levels of complexity of model equations and data. The results, based on multilevel linear and non-linear mixed effects models, showed that the TPM predictions correlated with observed ratings of sleepiness, but explorative analyses suggest that the default model can be improved and reduced to include only two processes (S+C), with adjusted phases of the circadian process based on a single question of circadian type. We also extended the model with a function to model jetlag acclimatization and with estimates of individual differences, including reference limits accounting for 50%, 75% and 90% of the population, as well as functions for predicting the probability of any level of sleepiness for ecological assessment of absolute and relative risk of sleepiness in shift systems for safety applications. PMID:25329575
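The reduced two-process (S+C) form is easy to sketch: a homeostatic process S that decays exponentially while awake and recovers during sleep, plus a 24-hour circadian sinusoid C whose phase could be shifted according to the circadian-type question. All constants below are illustrative placeholders, not the calibrated TPM parameters.

```python
import numpy as np

LOW, HIGH = 2.4, 14.3          # lower/upper asymptotes of S (illustrative)
TAU_W, TAU_S = 18.2, 4.2       # wake decay / sleep recovery constants (h)

def alertness(hours_awake, s_on_waking, clock_time, phase=16.8, amp=2.5):
    # S decays exponentially toward LOW while awake; C is a 24 h sinusoid
    S = LOW + (s_on_waking - LOW) * np.exp(-hours_awake / TAU_W)
    C = amp * np.cos(2 * np.pi * (clock_time - phase) / 24.0)
    return S + C

def sleep_recovery(s_at_sleep_onset, hours_asleep):
    # S recovers exponentially toward HIGH during sleep
    return HIGH - (HIGH - s_at_sleep_onset) * np.exp(-hours_asleep / TAU_S)

# Predicted alertness across a duty day after waking at 07:00 with S = 12
for h in range(0, 18, 3):
    clock = (7 + h) % 24
    print(f"{clock:02d}:00 -> {alertness(h, 12.0, clock):.1f}")
```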