Rubinstein, Justin L.; Ellsworth, William L.; Chen, Kate Huihsuan; Uchida, Naoki
2012-01-01
The behavior of individual events in repeating earthquake sequences in California, Taiwan, and Japan is better predicted by a model with fixed inter-event time or fixed slip than by the time- and slip-predictable models for earthquake occurrence. Given that repeating earthquakes are highly regular in both inter-event time and seismic moment, the time- and slip-predictable models seem ideally suited to explaining their behavior. Taken together with evidence from the companion manuscript showing similar results for laboratory experiments, we conclude that the short-term predictions of the time- and slip-predictable models should be rejected in favor of earthquake models that assume either fixed slip or a fixed recurrence interval. This implies that the elastic rebound model underlying the time- and slip-predictable models offers no additional value in describing earthquake behavior in an event-to-event sense, although its value in a long-term sense cannot be determined. These models likely fail because they rely on assumptions that oversimplify the earthquake cycle. We note that the time and slip of these events are predicted quite well by fixed-slip and fixed-recurrence models, so in some sense they are time- and slip-predictable. While fixed-recurrence and fixed-slip models better predict repeating earthquake behavior than the time- and slip-predictable models do, we observe a correlation between slip and the preceding recurrence time for many repeating earthquake sequences in Parkfield, California. This correlation is not found in other regions, and the sequences with the correlative slip-predictable behavior are not distinguishable from nearby earthquake sequences that do not exhibit this behavior.
NASA Astrophysics Data System (ADS)
Rubinstein, Justin L.; Ellsworth, William L.; Beeler, Nicholas M.; Kilgore, Brian D.; Lockner, David A.; Savage, Heather M.
2012-02-01
The behavior of individual stick-slip events observed in three different laboratory experimental configurations is better explained by a "memoryless" earthquake model with fixed inter-event time or fixed slip than by the time- and slip-predictable models for earthquake occurrence. We make similar findings in the companion manuscript for the behavior of natural repeating earthquakes. Taken together, these results allow us to conclude that the predictions of a characteristic earthquake model that assumes either fixed slip or a fixed recurrence interval should be preferred to the predictions of the time- and slip-predictable models for all earthquakes. Given that the fixed-slip and fixed-recurrence models are the preferred models for all of the experiments we examine, we infer that in an event-to-event sense the elastic rebound model underlying the time- and slip-predictable models does not explain earthquake behavior. This does not indicate that the elastic rebound model should be rejected in a long-term sense, but it should be rejected for short-term predictions. The time- and slip-predictable models likely offer worse predictions of earthquake behavior because they rely on assumptions that are too simple to explain the behavior of earthquakes. Specifically, the time-predictable model assumes a constant failure threshold, and the slip-predictable model assumes that there is a constant minimum stress. There is experimental and field evidence that these assumptions are not valid for all earthquakes.
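As a rough illustration of the comparison both abstracts describe, the sketch below scores a time-predictable forecast against a fixed-recurrence forecast on a synthetic catalog in which inter-event times and slips fluctuate independently about fixed means. The catalog, its parameters, and the loading-rate link are hypothetical, not the authors' data.

```python
import random
import math

random.seed(0)

# Synthetic repeating-earthquake catalog: inter-event times and slips that
# fluctuate independently about fixed means (the "memoryless" end member).
n = 200
mean_interval, mean_slip = 2.0, 1.0           # years, meters (arbitrary units)
intervals = [random.gauss(mean_interval, 0.3) for _ in range(n)]
slips = [random.gauss(mean_slip, 0.15) for _ in range(n)]
loading_rate = mean_slip / mean_interval      # ties slip to time in the
                                              # time-predictable model

def rms(errors):
    return math.sqrt(sum(e * e for e in errors) / len(errors))

# Time-predictable model: the interval AFTER event i is slip_i / loading_rate.
tp_err = [intervals[i + 1] - slips[i] / loading_rate for i in range(n - 1)]

# Fixed-recurrence model: every interval equals the running mean interval.
fr_err = []
for i in range(1, n):
    running_mean = sum(intervals[:i]) / i
    fr_err.append(intervals[i] - running_mean)

print(f"time-predictable RMS error: {rms(tp_err):.3f}")
print(f"fixed-recurrence RMS error: {rms(fr_err):.3f}")
```

On such a memoryless catalog the fixed-recurrence forecast wins, which mirrors the event-to-event result the two manuscripts report for real and laboratory sequences.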
Dynamic Bus Travel Time Prediction Models on Road with Multiple Bus Routes
Bai, Cong; Peng, Zhong-Ren; Lu, Qing-Chang; Sun, Jian
2015-01-01
Accurate and real-time travel time information for buses can help passengers better plan their trips and minimize waiting times. A dynamic travel time prediction model for buses, addressing the case of a road served by multiple bus routes, is proposed in this paper, based on support vector machines (SVMs) and a Kalman filter-based algorithm. In the proposed model, the well-trained SVM model predicts baseline bus travel times from historical bus trip data; the Kalman filter-based dynamic algorithm then adjusts the bus travel times using the latest bus operation information and the estimated baseline travel times. The performance of the proposed dynamic model is validated with real-world data from a road with multiple bus routes in Shenzhen, China. The results show that the proposed dynamic model is feasible and applicable for bus travel time prediction, and that it achieves the best prediction accuracy on roads with multiple bus routes among the five models considered in the study. PMID:26294903
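A minimal sketch of the dynamic adjustment step: a scalar Kalman filter nudges a baseline travel-time estimate toward the latest observed runs. A historical mean stands in for the trained SVM baseline, and all numbers (noise variances, observed times) are hypothetical.

```python
# Scalar Kalman filter that corrects a baseline travel-time estimate with the
# latest observed bus runs. A historical mean stands in for the trained SVM.
historical = [312, 298, 305, 330, 301, 295, 318]   # past travel times (s)
baseline = sum(historical) / len(historical)        # "SVM" baseline estimate

x, P = baseline, 100.0      # state estimate (s) and its variance
Q, R = 4.0, 25.0            # process and measurement noise variances

def kalman_update(x, P, z):
    """One predict/update cycle for the latest observed travel time z."""
    P_pred = P + Q                      # predict: variance grows
    K = P_pred / (P_pred + R)           # Kalman gain
    x_new = x + K * (z - x)             # correct toward the observation
    return x_new, (1 - K) * P_pred

for z in [325, 331, 340]:   # three most recent buses, traffic slowing down
    x, P = kalman_update(x, P, z)

print(f"baseline: {baseline:.1f} s, dynamic estimate: {x:.1f} s")
```

The estimate moves from the historical baseline (about 308 s) toward the recent, slower observations, which is the role the dynamic algorithm plays on top of the SVM baseline in the paper.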
Liu, Zitao; Hauskrecht, Milos
2017-11-01
Building an accurate predictive model of clinical time series for a patient is critical for understanding the patient's condition, its dynamics, and optimal patient management. Unfortunately, this process is not straightforward. First, patient-specific variations are typically large, and population-based models derived or learned from many different patients are often unable to support accurate predictions for each individual patient. Moreover, the time series observed for one patient at any point in time may be too short and insufficient to learn a high-quality patient-specific model from the patient's own data alone. To address these problems we propose, develop, and experiment with a new adaptive forecasting framework for building multivariate clinical time series models for a patient and for supporting patient-specific predictions. The framework relies on an adaptive model switching approach that at any point in time selects the most promising time series model from a pool of many possible models and consequently combines the advantages of population, patient-specific, and short-term individualized predictive models. We demonstrate that the adaptive model switching framework is a very promising approach to personalized time series prediction, and that it is able to outperform predictions based on pure population and patient-specific models, as well as other patient-specific model adaptation strategies.
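The switching idea can be sketched as follows: keep a pool of forecasters and, at each step, use whichever one was most accurate on the previous observation. The three models and the lab-value series below are illustrative stand-ins, not the paper's actual framework.

```python
# Adaptive model switching: at each step, forecast with whichever model in the
# pool had the lowest absolute error on the previous observation.
population_mean = 100.0                 # learned from many other patients

def pop_model(history):                 # population model ignores the patient
    return population_mean

def patient_model(history):             # patient-specific running mean
    return sum(history) / len(history)

def recent_model(history, k=3):         # short-term individualized model
    return sum(history[-k:]) / len(history[-k:])

models = [pop_model, patient_model, recent_model]

series = [100, 102, 110, 118, 125, 131, 138]   # hypothetical lab values
last_err = [0.0] * len(models)                 # start with no preference
preds = []
for t in range(1, len(series)):
    history = series[:t]
    best = min(range(len(models)), key=lambda i: last_err[i])
    preds.append(models[best](history))
    # score every model on the value that actually arrived
    last_err = [abs(m(history) - series[t]) for m in models]

print(preds)
```

On this trending series the switcher starts at the population model and migrates to the short-term model as patient-specific evidence accumulates, which is the behavior the framework is designed to exploit.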
Family-supportive work environments and psychological strain: a longitudinal test of two theories.
Odle-Dusseau, Heather N; Herleman, Hailey A; Britt, Thomas W; Moore, Dewayne D; Castro, Carl A; McGurk, Dennis
2013-01-01
Based on the Job Demands-Resources (JDR) model (E. Demerouti, A. B. Bakker, F. Nachreiner, & W. B. Schaufeli, 2001, The job demands-resources model of burnout. Journal of Applied Psychology, 86, 499-512) and Conservation of Resources (COR) theory (S. E. Hobfoll, 2002, Social and psychological resources and adaptation. Review of General Psychology, 6, 307-324), we tested three competing models that predict different directions of causation for relationships over time between family-supportive work environments (FSWE) and psychological strain, with two waves of data from a military sample. Results revealed support for both the JDR and COR theories, first in the static model where FSWE at Time 1 predicted psychological strain at Time 2 and when testing the opposite direction, where psychological strain at Time 1 predicted FSWE at Time 2. For change models, FSWE predicted changes in psychological strain across time, although the reverse causation model was not supported (psychological strain at Time 1 did not predict changes in FSWE). Also, changes in FSWE across time predicted psychological strain at Time 2, whereas changes in psychological strain did not predict FSWE at Time 2. Theoretically, these results are important for the work-family interface in that they demonstrate the application of a systems approach to studying work and family interactions, as support was obtained for both the JDR model with perceptions of FSWE predicting psychological strain (in both the static and change models), and for COR theory where psychological strain predicts FSWE across time.
Two States Mapping Based Time Series Neural Network Model for Compensation Prediction Residual Error
NASA Astrophysics Data System (ADS)
Jung, Insung; Koo, Lockjo; Wang, Gi-Nam
2008-11-01
The objective of this paper was to design a human bio-signal data prediction system that reduces prediction error by using a two-states-mapping-based time series neural network BP (back-propagation) model. Neural network models trained in a supervised manner with the error back-propagation algorithm are widely applied in industry to time series prediction; however, a residual error remains between the real value and the predicted result. We therefore designed a two-state neural network model to compensate for this residual error, which could be applied to the prevention of sudden death and of metabolic syndrome diseases such as hypertension and obesity. We found that most of the simulation cases were handled satisfactorily by the two-states-mapping-based time series prediction model. In particular, predictions for time series with small sample sizes were more accurate than those of the standard MLP model.
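The two-stage residual-compensation idea can be sketched without neural networks: fit a primary model, then fit a second model to the primary model's residual error. The least-squares fits below stand in for the two back-propagation networks, and the bio-signal series is synthetic.

```python
# Two-stage prediction in the spirit of the paper's two-state design: a primary
# model fits the series, and a second model fits the primary model's residual
# error. Simple least-squares fits stand in for the two BP networks.
import math

t = list(range(12))
# hypothetical bio-signal: linear trend plus a periodic component
y = [50 + 2 * ti + 4 * math.sin(ti) for ti in t]

# --- stage 1: primary linear model y ~ a*t + b ---
n = len(t)
mean_t, mean_y = sum(t) / n, sum(y) / n
a = sum((ti - mean_t) * (yi - mean_y) for ti, yi in zip(t, y)) / \
    sum((ti - mean_t) ** 2 for ti in t)
b = mean_y - a * mean_t
primary = [a * ti + b for ti in t]
residual = [yi - pi for yi, pi in zip(y, primary)]

# --- stage 2: fit the residual with an amplitude-only sinusoid model ---
# (least-squares projection of the residual onto sin(t); period assumed known)
amp = sum(r * math.sin(ti) for r, ti in zip(residual, t)) / \
      sum(math.sin(ti) ** 2 for ti in t)
compensated = [p + amp * math.sin(ti) for p, ti in zip(primary, t)]

def rmse(pred):
    return math.sqrt(sum((yi - pi) ** 2 for yi, pi in zip(y, pred)) / n)

print(f"primary RMSE: {rmse(primary):.3f}, "
      f"compensated RMSE: {rmse(compensated):.3f}")
```

The second stage absorbs most of the structure the primary model missed, which is the effect the paper's second network is meant to provide.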
Chi, Chih-Lin; Zeng, Wenjun; Oh, Wonsuk; Borson, Soo; Lenskaia, Tatiana; Shen, Xinpeng; Tonellato, Peter J
2017-12-01
Prediction of onset and progression of cognitive decline and dementia is important both for understanding the underlying disease processes and for planning health care for populations at risk. Predictors identified in research studies are typically assessed at one point in time. In this manuscript, we argue that an accurate model for predicting cognitive status over relatively long periods requires the inclusion of time-varying components that are sequentially assessed at multiple time points (e.g., in multiple follow-up visits). We developed a pilot model to test the feasibility of using either estimated or observed risk factors to predict cognitive status. We developed two models, the first using a sequential estimation of risk factors originally obtained from 8 years prior, then improved by optimization. This model can predict how cognition will change over relatively long time periods. The second model uses observed rather than estimated time-varying risk factors and, as expected, results in better prediction. This model can predict when newly observed data are acquired in a follow-up visit. The performance of both models, evaluated in 10-fold cross-validation and in various patient subgroups, provides supporting evidence for these pilot models. Each model consists of multiple base prediction units (BPUs), which were trained using the same set of data. The difference in usage and function between the two models is the source of the input data: either estimated or observed. In the next step of model refinement, we plan to integrate the two types of data together to flexibly predict dementia status and changes over time, when some time-varying predictors are measured only once and others are measured repeatedly. Computationally, the two data sources provide upper and lower bounds for predictive performance.
Evaluation of Fast-Time Wake Vortex Prediction Models
NASA Technical Reports Server (NTRS)
Proctor, Fred H.; Hamilton, David W.
2009-01-01
Current fast-time wake models are reviewed and three basic types are defined. Predictions from several of the fast-time models are compared. Previous statistical evaluations of the APA-Sarpkaya and D2P fast-time models are discussed. Root-mean-square errors between fast-time model predictions and lidar wake measurements are examined for a 24-hr period at Denver International Airport. Shortcomings in current methodology for evaluating wake errors are also discussed.
Development and Validation of a New Air Carrier Block Time Prediction Model and Methodology
NASA Astrophysics Data System (ADS)
Litvay, Robyn Olson
Commercial airline operations rely on predicted block times as the foundation for critical, successive decisions that include fuel purchasing, crew scheduling, and airport facility usage planning. Small inaccuracies in the predicted block times have the potential to result in huge financial losses and, with profit margins for airline operations currently almost nonexistent, can negate any possible profit. Although optimization techniques have produced many models targeting airline operations, the challenge of accurately predicting and quantifying variables months in advance remains. The objective of this work is the development of an airline block time prediction model and methodology that is practical, easily implemented, and easily updated. Actual U.S. domestic flight data from a major airline were used to develop a model that predicts airline block times with increased accuracy and smaller variance of the actual times from the predicted times. This reduction in variance represents tens of millions of dollars (U.S.) per year in operational cost savings for an individual airline. A new methodology for block time prediction is constructed using a regression model as the base, as it has both deterministic and probabilistic components, together with historic block time distributions. The estimation of block times for commercial domestic airline operations requires a probabilistic, general model that can be easily customized for a specific airline's network. As individual block times vary by season, by day, and by time of day, the challenge is to make general, long-term estimates that represent the average actual block times while minimizing the variation. Predictions of block times for the third-quarter months of July and August of 2011 were calculated using this new model.
The resulting, actual block times were obtained from the Research and Innovative Technology Administration, Bureau of Transportation Statistics (Airline On-time Performance Data, 2008-2011) for comparison and analysis. Future block times are shown to be predicted with greater accuracy, without exception and network-wide, for a major, U.S., domestic airline.
Time-dependent oral absorption models
NASA Technical Reports Server (NTRS)
Higaki, K.; Yamashita, S.; Amidon, G. L.
2001-01-01
The plasma concentration-time profiles following oral administration of drugs are often irregular and cannot be interpreted easily with conventional models based on first- or zero-order absorption kinetics and lag time. Six new models were developed using a time-dependent absorption rate coefficient, ka(t), wherein the time dependency was varied to account for dynamic processes in the gastrointestinal tract, such as changes over time in fluid absorption or secretion, in absorptive surface area, and in motility. In the present study, the plasma concentration profiles of propranolol obtained in human subjects following oral dosing were analyzed using the newly derived models based on mass balance and compared with the conventional models. Nonlinear regression analysis indicated that the conventional compartment model including lag time (CLAG model) could not predict the rapid initial increase in plasma concentration after dosing, and the predicted Cmax values were much lower than those observed. On the other hand, all models with the time-dependent absorption rate coefficient, ka(t), were superior to the CLAG model in predicting plasma concentration profiles. Based on Akaike's Information Criterion (AIC), the fluid absorption model without lag time (FA model) exhibited the best overall fit to the data. The two-phase model including lag time (TPLAG model) was also found to be a good model, judging from the sums of squares. This model also described the irregular profiles of plasma concentration with time and frequently predicted Cmax values satisfactorily. A comparison of the absorption rate profiles also suggested that the TPLAG model is better at predicting irregular absorption kinetics than the FA model. In conclusion, the incorporation of a time-dependent absorption rate coefficient ka(t) allows the prediction of nonlinear absorption characteristics in a more reliable manner.
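A sketch of the general mechanism, assuming a hypothetical exponential decay for ka(t); the paper's six models use specific physiological forms not reproduced here, and all rate constants below are illustrative.

```python
# One-compartment oral absorption with a time-dependent absorption rate
# coefficient ka(t), integrated by the Euler method. The exponential decay
# chosen for ka(t) is a hypothetical form standing in for the paper's models.
import math

ke = 0.2      # first-order elimination rate (1/h)
ka0 = 1.5     # initial absorption rate coefficient (1/h)
tau = 4.0     # assumed time constant of the decline in ka(t) (h)
dose = 100.0  # amount in the gut at t = 0 (arbitrary units)

def simulate(ka_of_t, dt=0.01, t_end=12.0):
    """Integrate dA_gut/dt = -ka(t)*A_gut and dA_c/dt = ka(t)*A_gut - ke*A_c."""
    a_gut, a_c = dose, 0.0
    profile, t = [], 0.0
    while t < t_end:
        ka = ka_of_t(t)
        a_gut, a_c = (a_gut - ka * a_gut * dt,
                      a_c + (ka * a_gut - ke * a_c) * dt)
        t += dt
        profile.append(a_c)
    return profile

cmax_td = max(simulate(lambda t: ka0 * math.exp(-t / tau)))   # ka(t) model
cmax_const = max(simulate(lambda t: ka0))                     # constant ka
print(f"Cmax with ka(t): {cmax_td:.1f}; with constant ka: {cmax_const:.1f}")
```

A declining ka(t) flattens and delays the peak relative to the constant-ka case, the kind of departure from classical first-order profiles that motivates the time-dependent formulation.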
Improved Short-Term Clock Prediction Method for Real-Time Positioning.
Lv, Yifei; Dai, Zhiqiang; Zhao, Qile; Yang, Sheng; Zhou, Jinning; Liu, Jingnan
2017-06-06
The application of real-time precise point positioning (PPP) requires real-time precise orbit and clock products that must be predicted over a short time to compensate for communication delays or data gaps. Unlike orbit corrections, clock corrections are difficult to model and predict. The widely used linear model hardly fits long-periodic trends with a small data set and exhibits significant accuracy degradation in real-time prediction when a large data set is used. This study proposes a new prediction model for maintaining short-term satellite clocks to meet the high-precision requirements of real-time clocks and to provide clock extrapolation without interrupting the real-time data stream. Fast Fourier transform (FFT) is used to analyze the linear prediction residuals of real-time clocks. The periodic terms obtained through FFT are adopted in a sliding-window prediction to achieve a significant improvement in short-term prediction accuracy. This study also analyzes and compares the accuracy of short-term forecasts (less than 3 h) made using observations of different lengths. Experimental results obtained from International GNSS Service (IGS) final products and our own real-time clocks show that the 3-h prediction accuracy is better than 0.85 ns. The new model can replace IGS ultra-rapid products in real-time PPP applications. It is also found that there is a positive correlation between the prediction accuracy and the short-term stability of on-board clocks. Compared with the traditional linear model, the accuracy of static PPP using the new model's 2-h prediction clock in the N, E, and U directions is improved by about 50%. Furthermore, the static PPP accuracy with 2-h clock products is better than 0.1 m. When an interruption occurs in the real-time data stream, the accuracy of the kinematic PPP solution using the 1-h clock prediction product is better than 0.2 m, without significant accuracy degradation.
This model is of practical significance because it solves the problems of interruption and delay in data broadcast in real-time clock estimation and can meet the requirements of real-time PPP.
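The residual-plus-FFT idea can be sketched end to end: fit a linear trend, Fourier-analyze the residuals, and carry the dominant periodic term into the extrapolation. The clock series, its drift, and its single periodic term below are synthetic, and a naive DFT stands in for the FFT.

```python
# Sketch: linear fit of clock offsets, DFT of the residuals, and extrapolation
# that includes the recovered dominant periodic term. Pure-stdlib DFT here.
import cmath
import math

N = 128
# synthetic clock offsets: linear drift plus one periodic term
series = [0.5 * n + 3.0 * math.sin(2 * math.pi * 8 * n / N) for n in range(N)]

# linear fit by least squares
mean_n = (N - 1) / 2
mean_y = sum(series) / N
slope = sum((n - mean_n) * (y - mean_y) for n, y in enumerate(series)) / \
        sum((n - mean_n) ** 2 for n in range(N))
intercept = mean_y - slope * mean_n
resid = [y - (slope * n + intercept) for n, y in enumerate(series)]

# naive DFT of the residuals; pick the strongest positive-frequency bin
def dft_bin(x, k):
    return sum(xn * cmath.exp(-2j * math.pi * k * n / len(x))
               for n, xn in enumerate(x))

k_best = max(range(1, N // 2), key=lambda k: abs(dft_bin(resid, k)))
X = dft_bin(resid, k_best)
amp = 2 * abs(X) / N
phase = cmath.phase(X)

def predict(n):
    """Linear trend plus the recovered periodic term, usable beyond n = N."""
    return slope * n + intercept + amp * math.cos(
        2 * math.pi * k_best * n / N + phase)

err = [abs(predict(n) - (0.5 * n + 3.0 * math.sin(2 * math.pi * 8 * n / N)))
       for n in range(N, N + 16)]      # extrapolate 16 steps ahead
print(f"dominant bin: {k_best}, max extrapolation error: {max(err):.3f}")
```

The linear model alone would carry the full amplitude of the periodic term into its forecast error; adding the recovered term keeps the extrapolation error small, which is the mechanism behind the paper's short-term improvement.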
Considerations of the Use of 3-D Geophysical Models to Predict Test Ban Monitoring Observables
2007-09-01
…predict first P arrival times. Since this is a 3-D model, the travel times are predicted with a 3-D finite-difference code solving the eikonal equations… …for the eikonal wave equation should provide more accurate predictions of travel time from 3-D models. These techniques and others are being…
Can We Predict Patient Wait Time?
Pianykh, Oleg S; Rosenthal, Daniel I
2015-10-01
The importance of patient wait-time management and predictability can hardly be overestimated: for most hospitals, it is the patient queues that drive and define every bit of clinical workflow. The objective of this work was to study the predictability of patient wait time and to identify its most influential predictors. To solve this problem, we developed a comprehensive list of 25 wait-related parameters, suggested in earlier work and observed in our own experiments. All parameters were chosen to be derivable from a typical Hospital Information System dataset. The parameters were fed into several time-predicting models, and the best parameter subsets, discovered through exhaustive model search, were applied to a large sample of actual patient wait data. We were able to discover the most efficient wait-time prediction factors and models, such as the line-size models introduced in this work; these models proved to be both accurate and computationally efficient. Finally, the selected models were implemented in our patient waiting areas, displaying predicted wait times on monitors located at the front desks. The limitations of these models are also discussed. Optimal regression models based on wait-line sizes can provide accurate and efficient predictions of patient wait time.
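A line-size model in its simplest form is a regression of wait time on queue length; the sketch below, with hypothetical front-desk data, shows the shape of such a predictor.

```python
# "Line-size" wait-time model: ordinary least squares of observed wait time on
# the number of patients already in the queue. Data below are hypothetical.
queue_len = [0, 1, 2, 3, 4, 5, 6, 8]
wait_min = [3, 9, 16, 22, 30, 34, 43, 55]   # observed waits (minutes)

n = len(queue_len)
mx = sum(queue_len) / n
my = sum(wait_min) / n
slope = sum((x - mx) * (y - my) for x, y in zip(queue_len, wait_min)) / \
        sum((x - mx) ** 2 for x in queue_len)
intercept = my - slope * mx

def predicted_wait(people_ahead):
    """Wait-time estimate to show on the front-desk monitor."""
    return intercept + slope * people_ahead

print(f"about {slope:.1f} min per person in line; "
      f"5 ahead -> {predicted_wait(5):.0f} min")
```

The appeal of this family of models, as the abstract notes, is that the single predictor (line size) is directly derivable from a typical Hospital Information System dataset and the fit is cheap enough to refresh continuously.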
Joint modeling of longitudinal data and discrete-time survival outcome.
Qiu, Feiyou; Stein, Catherine M; Elston, Robert C
2016-08-01
A predictive joint shared parameter model is proposed for discrete time-to-event and longitudinal data. A discrete survival model with frailty and a generalized linear mixed model for the longitudinal data are joined to predict the probability of events. This joint model focuses on predicting the discrete time-to-event outcome, taking advantage of repeated measurements. We show that the probability of an event in a time window can be more precisely predicted by incorporating the longitudinal measurements. The model was investigated by comparison with a two-step model and a discrete-time survival model. Results from both a study on the occurrence of tuberculosis and simulated data show that the joint model is superior to the other models in discrimination ability, especially as the latent variables related to both the survival times and the longitudinal measurements depart from 0.
Visentin, G; McDermott, A; McParland, S; Berry, D P; Kenny, O A; Brodkorb, A; Fenelon, M A; De Marchi, M
2015-09-01
Rapid, cost-effective monitoring of milk technological traits is a significant challenge for dairy industries specialized in cheese manufacturing. The objective of the present study was to investigate the ability of mid-infrared spectroscopy to predict rennet coagulation time, curd-firming time, curd firmness at 30 and 60 min after rennet addition, heat coagulation time, casein micelle size, and pH in cow milk samples, and to quantify associations between these milk technological traits and conventional milk quality traits. Samples (n = 713) were collected from 605 cows from multiple herds; the samples represented multiple breeds, stages of lactation, parities, and milking times. Reference analyses were undertaken in accordance with standardized methods, and mid-infrared spectra in the range of 900 to 5,000 cm(-1) were available for all samples. Prediction models were developed using partial least squares regression, and prediction accuracy was assessed by both cross and external validation. The proportion of variance explained by the prediction models in external validation was greatest for pH (71%), followed by rennet coagulation time (55%) and heat coagulation time (46%). Models to predict curd firmness at 60 min after rennet addition and casein micelle size, however, were poor, explaining only 25% and 13%, respectively, of the total variance in each trait in external validation. On average, all prediction models tended to be unbiased. The linear regression coefficient of the reference value on the predicted value varied from 0.17 (casein micelle size model) to 0.83 (pH model), but all differed from 1. The ratio performance deviation, which ranged from 1.07 (casein micelle size model) to 1.79 (pH model) in external validation, was <2 for all prediction models, suggesting that none could be used for analytical purposes.
With the exception of casein micelle size and curd firmness at 60 min after rennet addition, the developed prediction models may be useful as a screening method, because the concordance correlation coefficient ranged from 0.63 (heat coagulation time model) to 0.84 (pH model) in the external validation.
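The ratio performance deviation (RPD) criterion used above is easy to compute; the sketch below applies it to hypothetical reference/predicted pairs, with the <2 screening threshold being the rule of thumb the abstract relies on.

```python
# Ratio performance deviation: SD of the reference values divided by the
# standard error of prediction, with RPD < 2 ruling out analytical use.
# The reference and predicted values below are hypothetical.
import math

reference = [12.0, 15.5, 11.2, 18.3, 14.8, 16.1, 13.4, 17.0]   # measured trait
predicted = [13.8, 14.0, 13.0, 16.0, 16.2, 14.4, 15.0, 15.5]   # MIR prediction

n = len(reference)
mean_ref = sum(reference) / n
sd_ref = math.sqrt(sum((r - mean_ref) ** 2 for r in reference) / (n - 1))
sep = math.sqrt(sum((r - p) ** 2 for r, p in zip(reference, predicted)) / n)

rpd = sd_ref / sep
verdict = "analytical use" if rpd >= 2 else "screening at best"
print(f"RPD = {rpd:.2f} -> {verdict}")
```

An RPD below 2, as found for every trait in the study, means the prediction error is too large relative to the natural spread of the trait for the model to replace the reference assay.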
PSO-MISMO modeling strategy for multistep-ahead time series prediction.
Bao, Yukun; Xiong, Tao; Hu, Zhongyi
2014-05-01
Multistep-ahead time series prediction is one of the most challenging research topics in the field of time series modeling and prediction, and it remains under active research. Recently, the multiple-input several multiple-outputs (MISMO) modeling strategy has been proposed as a promising alternative for multistep-ahead time series prediction, exhibiting advantages over the two currently dominant strategies, the iterated and the direct strategies. Building on the established MISMO strategy, this paper proposes a particle swarm optimization (PSO)-based MISMO modeling strategy, which is capable of determining the number of sub-models in a self-adaptive mode for varying prediction horizons. Rather than deriving crisp divides with equal-sized prediction horizons as in the established MISMO, the proposed PSO-MISMO strategy, implemented with neural networks, employs a heuristic to create flexible divides with prediction horizons of varying sizes and to generate the corresponding sub-models, providing considerable flexibility in model construction. The strategy has been validated with simulated and real datasets.
Miranian, A; Abdollahzade, M
2013-02-01
Local modeling approaches, owing to their ability to model different operating regimes of nonlinear systems and processes by independent local models, seem appealing for modeling, identification, and prediction applications. In this paper, we propose a local neuro-fuzzy (LNF) approach based on the least-squares support vector machines (LSSVMs). The proposed LNF approach employs LSSVMs, which are powerful in modeling and predicting time series, as local models and uses hierarchical binary tree (HBT) learning algorithm for fast and efficient estimation of its parameters. The HBT algorithm heuristically partitions the input space into smaller subdomains by axis-orthogonal splits. In each partitioning, the validity functions automatically form a unity partition and therefore normalization side effects, e.g., reactivation, are prevented. Integration of LSSVMs into the LNF network as local models, along with the HBT learning algorithm, yield a high-performance approach for modeling and prediction of complex nonlinear time series. The proposed approach is applied to modeling and predictions of different nonlinear and chaotic real-world and hand-designed systems and time series. Analysis of the prediction results and comparisons with recent and old studies demonstrate the promising performance of the proposed LNF approach with the HBT learning algorithm for modeling and prediction of nonlinear and chaotic systems and time series.
2011-01-01
Background Thousands of children experience cardiac arrest events every year in pediatric intensive care units. Most of these children die. Cardiac arrest prediction tools are used as part of medical emergency team evaluations to identify patients in standard hospital beds who are at high risk of cardiac arrest. There are no models to predict cardiac arrest in pediatric intensive care units, however, where the risk of an arrest is 10 times higher than in standard hospital beds. Current tools are based on a multivariable approach that does not characterize the deterioration that often precedes cardiac arrests; characterizing deterioration requires a time series approach. The purpose of this study is to propose a method that allows time series data to be used in clinical prediction models. Successful implementation of these methods has the potential to bring arrest prediction to the pediatric intensive care environment, possibly allowing for interventions that can save lives and prevent disabilities. Methods We reviewed prediction models from nonclinical domains that employ time series data, and identified the steps that are necessary for building predictive models from time series clinical data. We illustrate the method by applying it to the specific case of building a predictive model for cardiac arrest in a pediatric intensive care unit. Results Time course analysis studies from genomic analysis provided a modeling template that was compatible with the steps required to develop a model from clinical time series data.
The steps include: 1) selecting candidate variables; 2) specifying measurement parameters; 3) defining data format; 4) defining time window duration and resolution; 5) calculating latent variables for candidate variables not directly measured; 6) calculating time series features as latent variables; 7) creating data subsets to measure model performance effects attributable to various classes of candidate variables; 8) reducing the number of candidate features; 9) training models for various data subsets; and 10) measuring model performance characteristics in unseen data to estimate their external validity. Conclusions We have proposed a ten step process that results in data sets that contain time series features and are suitable for predictive modeling by a number of methods. We illustrated the process through an example of cardiac arrest prediction in a pediatric intensive care setting. PMID:22023778
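Steps 4 through 6 of this process can be sketched on a toy vital-sign stream: fixed windows are cut from the series and each window is reduced to time-series features (mean, slope, range) that can feed a downstream prediction model. The heart-rate values and window size below are hypothetical.

```python
# Window a vital-sign stream (step 4) and compute time-series features as
# latent variables for each window (step 6). Data are hypothetical.
heart_rate = [110, 112, 111, 115, 118, 117, 121, 126, 125, 131, 136, 142]
window = 4   # samples per window (window duration/resolution)

def features(xs):
    """Mean, least-squares slope, and range of one window."""
    n = len(xs)
    mean = sum(xs) / n
    mt = (n - 1) / 2
    slope = sum((i - mt) * (x - mean) for i, x in enumerate(xs)) / \
            sum((i - mt) ** 2 for i in range(n))
    return {"mean": mean, "slope": slope, "range": max(xs) - min(xs)}

rows = [features(heart_rate[i:i + window])
        for i in range(0, len(heart_rate), window)]
for r in rows:
    print(r)
```

The rising slope from window to window is exactly the kind of deterioration signal that a single-time-point multivariable score cannot represent, which is the motivation for the time series approach.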
Kennedy, Curtis E; Turley, James P
2011-10-24
Clinical Predictive Modeling Development and Deployment through FHIR Web Services.
Khalilia, Mohammed; Choi, Myung; Henderson, Amelia; Iyengar, Sneha; Braunstein, Mark; Sun, Jimeng
2015-01-01
Clinical predictive modeling involves two challenging tasks: model development and model deployment. In this paper we demonstrate a software architecture for developing and deploying clinical predictive models using web services via the Health Level 7 (HL7) Fast Healthcare Interoperability Resources (FHIR) standard. The services enable model development using electronic health records (EHRs) stored in OMOP CDM databases and model deployment for scoring individual patients through FHIR resources. The MIMIC2 ICU dataset and a synthetic outpatient dataset were transformed into OMOP CDM databases for predictive model development. The resulting predictive models are deployed as FHIR resources, which receive requests of patient information, perform prediction against the deployed predictive model and respond with prediction scores. To assess the practicality of this approach we evaluated the response and prediction time of the FHIR modeling web services. We found the system to be reasonably fast with one second total response time per patient prediction.
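The scoring exchange described above can be sketched as a request/response pair. The resource fields and the stand-in risk model below are illustrative assumptions, not the paper's actual API; only the general FHIR `Parameters` shape is assumed.

```python
# Sketch of the request/response shape a FHIR-style scoring service might
# exchange. Field names and the toy risk model are assumptions for
# illustration.
import json

def score_request(patient_id, observations):
    """Package patient observations as a JSON payload for a prediction service."""
    return json.dumps({
        "resourceType": "Parameters",
        "parameter": [
            {"name": "patientId", "valueString": patient_id},
            {"name": "observations", "valueString": json.dumps(observations)},
        ],
    })

def score_response(payload, model=lambda obs: min(1.0, sum(obs.values()) / 300)):
    """Decode a request, 'score' it with a stand-in model, and reply."""
    req = json.loads(payload)
    obs = json.loads(req["parameter"][1]["valueString"])
    return {"resourceType": "Parameters",
            "parameter": [{"name": "riskScore", "valueDecimal": model(obs)}]}

reply = score_response(score_request("pat-001", {"hr": 95, "sbp": 110}))
print(reply["parameter"][0]["valueDecimal"])
```

In the paper's architecture, the model behind the endpoint would be trained on an OMOP CDM database rather than the toy lambda shown here.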
Park, Soo Hyun; Talebi, Mohammad; Amos, Ruth I J; Tyteca, Eva; Haddad, Paul R; Szucs, Roman; Pohl, Christopher A; Dolan, John W
2017-11-10
Quantitative Structure-Retention Relationships (QSRR) are used to predict retention times of compounds based only on their chemical structures encoded by molecular descriptors. The main concern in QSRR modelling is to build models with high predictive power, allowing reliable retention prediction for the unknown compounds across the chromatographic space. With the aim of enhancing the prediction power of the models, in this work, our previously proposed QSRR modelling approach called "federation of local models" is extended in ion chromatography to predict retention times of unknown ions, where a local model for each target ion (unknown) is created using only structurally similar ions from the dataset. A Tanimoto similarity (TS) score was utilised as a measure of structural similarity and training sets were developed by including ions that were similar to the target ion, as defined by a threshold value. The prediction of retention parameters (a- and b-values) in the linear solvent strength (LSS) model in ion chromatography, log k = a - b log[eluent], allows the prediction of retention times under all eluent concentrations. The QSRR models for a- and b-values were developed by a genetic algorithm-partial least squares method using the retention data of inorganic and small organic anions and larger organic cations (molecular mass up to 507) on four Thermo Fisher Scientific columns (AS20, AS19, AS11HC and CS17). The corresponding predicted retention times were calculated by fitting the predicted a- and b-values of the models into the LSS model equation. The predicted retention times were also plotted against the experimental values to evaluate the goodness of fit and the predictive power of the models. The application of a TS threshold of 0.6 was found to successfully produce predictive and reliable QSRR models (Q²(F2) > 0.8 and Mean Absolute Error < 0.1), and hence accurate retention time predictions with an average Mean Absolute Error of 0.2 min.
Crown Copyright © 2017. Published by Elsevier B.V. All rights reserved.
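The LSS relation log k = a - b log[eluent] in the abstract above lets predicted a- and b-values give retention times at any eluent concentration. A minimal sketch, with made-up parameter values and a hypothetical dead time t0:

```python
# Retention time from the linear solvent strength (LSS) model:
# log10(k) = a - b * log10([eluent]);  t_R = t0 * (1 + k).
import math

def retention_time(a, b, eluent_conc, t0=1.5):
    """Retention time (min) at a given eluent concentration (same units as fit)."""
    log_k = a - b * math.log10(eluent_conc)
    return t0 * (1 + 10 ** log_k)

a_pred, b_pred = 1.8, 1.2                      # hypothetical QSRR-predicted values
t_low = retention_time(a_pred, b_pred, 10.0)   # weaker eluent
t_high = retention_time(a_pred, b_pred, 40.0)  # stronger eluent
print(t_low > t_high)  # a stronger eluent elutes the ion sooner
```

This is the "fitting the predicted a- and b-values into the LSS model equation" step in compact form.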
Clinical time series prediction: Toward a hierarchical dynamical system framework.
Liu, Zitao; Hauskrecht, Milos
2015-09-01
Developing machine learning and data mining algorithms for building temporal models of clinical time series is important for understanding the patient condition, the dynamics of a disease, the effect of various patient management interventions, and clinical decision making. In this work, we propose and develop a novel hierarchical framework for modeling clinical time series data of varied length and with irregularly sampled observations. Our hierarchical dynamical system framework for modeling clinical time series combines advantages of the two temporal modeling approaches: the linear dynamical system and the Gaussian process. We model the irregularly sampled clinical time series by using multiple Gaussian process sequences in the lower level of our hierarchical framework and capture the transitions between Gaussian processes by utilizing the linear dynamical system. The experiments are conducted on the complete blood count (CBC) panel data of 1000 post-surgical cardiac patients during their hospitalization. Our framework is evaluated and compared to multiple baseline approaches in terms of the mean absolute prediction error and the absolute percentage error. We tested our framework by first learning the time series model from data for the patients in the training set, and then using it to predict future time series values for the patients in the test set. We show that our model outperforms multiple existing models in terms of its predictive accuracy. Our method achieved a 3.13% average prediction accuracy improvement on ten CBC lab time series when it was compared against the best performing baseline. A 5.25% average accuracy improvement was observed when only short-term predictions were considered. A new hierarchical dynamical system framework that lets us model irregularly sampled time series data is a promising new direction for modeling clinical time series and for improving their predictive performance. Copyright © 2014 Elsevier B.V. All rights reserved.
Rutter, Carolyn M; Knudsen, Amy B; Marsh, Tracey L; Doria-Rose, V Paul; Johnson, Eric; Pabiniak, Chester; Kuntz, Karen M; van Ballegooijen, Marjolein; Zauber, Ann G; Lansdorp-Vogelaar, Iris
2016-07-01
Microsimulation models synthesize evidence about disease processes and interventions, providing a method for predicting long-term benefits and harms of prevention, screening, and treatment strategies. Because models often require assumptions about unobservable processes, assessing a model's predictive accuracy is important. We validated 3 colorectal cancer (CRC) microsimulation models against outcomes from the United Kingdom Flexible Sigmoidoscopy Screening (UKFSS) Trial, a randomized controlled trial that examined the effectiveness of one-time flexible sigmoidoscopy screening to reduce CRC mortality. The models incorporate different assumptions about the time from adenoma initiation to development of preclinical and symptomatic CRC. Analyses compare model predictions to study estimates across a range of outcomes to provide insight into the accuracy of model assumptions. All 3 models accurately predicted the relative reduction in CRC mortality 10 years after screening (predicted hazard ratios, with 95% percentile intervals: 0.56 [0.44, 0.71], 0.63 [0.51, 0.75], 0.68 [0.53, 0.83]; estimated with 95% confidence interval: 0.56 [0.45, 0.69]). Two models with longer average preclinical duration accurately predicted the relative reduction in 10-year CRC incidence. Two models with longer mean sojourn time accurately predicted the number of screen-detected cancers. All 3 models predicted too many proximal adenomas among patients referred to colonoscopy. Model accuracy can only be established through external validation. Analyses such as these are therefore essential for any decision model. Results supported the assumptions that the average time from adenoma initiation to development of preclinical cancer is long (up to 25 years), and mean sojourn time is close to 4 years, suggesting the window for early detection and intervention by screening is relatively long. Variation in dwell time remains uncertain and could have important clinical and policy implications. 
© The Author(s) 2016.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ballard, Sanford; Hipp, James R.; Begnaud, Michael L.
2016-10-11
The task of monitoring the Earth for nuclear explosions relies heavily on seismic data to detect, locate, and characterize suspected nuclear tests. In this study, motivated by the need to locate suspected explosions as accurately and precisely as possible, we developed a tomographic model of the compressional wave slowness in the Earth's mantle with primary focus on the accuracy and precision of travel-time predictions for P and Pn ray paths through the model. Path-dependent travel-time prediction uncertainties are obtained by computing the full 3D model covariance matrix and then integrating slowness variance and covariance along ray paths from source to receiver. Path-dependent travel-time prediction uncertainties reflect the amount of seismic data that was used in tomography, with very low values for paths represented by abundant data in the tomographic data set and very high values for paths through portions of the model that were poorly sampled by the tomography data set. The pattern of travel-time prediction uncertainty is a direct result of the off-diagonal terms of the model covariance matrix and underscores the importance of incorporating the full model covariance matrix in the determination of travel-time prediction uncertainty. In addition, the computed pattern of uncertainty differs significantly from that of 1D distance-dependent travel-time uncertainties computed using traditional methods, which are only appropriate for use with travel times computed through 1D velocity models.
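Integrating slowness variance and covariance along a ray, as described above, amounts to the quadratic form w^T C w, where w holds the path lengths through each model cell and C is the model covariance matrix. A toy 3-cell sketch (all numbers invented):

```python
# Path-dependent travel-time variance from a model covariance matrix:
# variance = sum_ij  w_i * C_ij * w_j  over cells the ray traverses.
def travel_time_variance(path_lengths, cov):
    """Integrate slowness variance AND covariance along the ray path."""
    n = len(path_lengths)
    return sum(path_lengths[i] * cov[i][j] * path_lengths[j]
               for i in range(n) for j in range(n))

cov = [[0.04, 0.01, 0.00],   # well-sampled cells: small variance
       [0.01, 0.04, 0.01],
       [0.00, 0.01, 0.25]]   # poorly sampled cell: large variance
w = [10.0, 10.0, 10.0]       # km traversed in each cell

full = travel_time_variance(w, cov)
diag_only = sum(wi * wi * cov[i][i] for i, wi in enumerate(w))
print(full > diag_only)  # here the off-diagonal terms raise the uncertainty
```

The gap between `full` and `diag_only` is exactly the contribution of the off-diagonal covariance terms the abstract emphasizes.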
Multi-Step Time Series Forecasting with an Ensemble of Varied Length Mixture Models.
Ouyang, Yicun; Yin, Hujun
2018-05-01
Many real-world problems require modeling and forecasting of time series, such as weather temperature, electricity demand, stock prices and foreign exchange (FX) rates. Often, the tasks involve predicting over a long-term period, e.g. several weeks or months. Most existing time series models are inherently one-step predictors, that is, they predict one time point ahead. Multi-step or long-term prediction is difficult and challenging due to the lack of information and to uncertainty or error accumulation. The main existing approaches, iterative and independent, either apply a one-step model recursively or treat each horizon as an independent model. They generally perform poorly in practical applications. In this paper, as an extension of the self-organizing mixture autoregressive (AR) model, the varied length mixture (VLM) models are proposed to model and forecast time series over multiple steps. The key idea is to preserve the dependencies between the time points within the prediction horizon. Training data are segmented to various lengths corresponding to various forecasting horizons, and the VLM models are trained in a self-organizing fashion on these segments to capture these dependencies in their component AR models of various predicting horizons. The VLM models form a probabilistic mixture of these varied length models. A combination of short and long VLM models and an ensemble of them are proposed to further enhance the prediction performance. The effectiveness of the proposed methods and their marked improvements over the existing methods are demonstrated through a number of experiments on synthetic data, real-world FX rates and weather temperatures.
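The iterative-versus-independent contrast above can be made concrete with a synthetic AR(1) example (coefficients invented for illustration):

```python
# Two existing multi-step strategies the abstract contrasts:
# iterative (recurse a one-step model) vs direct (one model per horizon).
def iterate_ar1(phi, last, steps):
    """Feed one-step predictions back in; estimation errors would compound."""
    preds = []
    for _ in range(steps):
        last = phi * last
        preds.append(last)
    return preds

def direct_ar1(phi, last, steps):
    """One 'model' per horizon h -- here phi**h, fitted independently."""
    return [(phi ** h) * last for h in range(1, steps + 1)]

p_iter = iterate_ar1(0.9, 10.0, 3)
p_dir = direct_ar1(0.9, 10.0, 3)
print(p_iter, p_dir)  # identical for an exact linear model
```

With exact coefficients the two coincide; once coefficients are estimated from data, the recursion compounds estimation error while the direct models lose cross-horizon dependencies — the trade-off the VLM approach is designed to avoid.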
Predicting Time to Hospital Discharge for Extremely Preterm Infants
Hintz, Susan R.; Bann, Carla M.; Ambalavanan, Namasivayam; Cotten, C. Michael; Das, Abhik; Higgins, Rosemary D.
2010-01-01
As extremely preterm infant mortality rates have decreased, concerns regarding resource utilization have intensified. Accurate models to predict time to hospital discharge could aid in resource planning, family counseling, and perhaps stimulate quality improvement initiatives. Objectives: For infants <27 weeks estimated gestational age (EGA), to develop, validate and compare several models to predict time to hospital discharge based on time-dependent covariates, and based on the presence of 5 key risk factors as predictors. Patients and Methods: This was a retrospective analysis of infants <27 weeks EGA, born 7/2002-12/2005 and surviving to discharge from a NICHD Neonatal Research Network site. Time to discharge was modeled as a continuous variable (postmenstrual age at discharge, PMAD) and as categorical variables ("Early" and "Late" discharge). Three linear and logistic regression models with time-dependent covariate inclusion were developed (perinatal factors only, perinatal + early neonatal factors, perinatal + early + later factors). Models for Early and Late discharge using the cumulative presence of 5 key risk factors as predictors were also evaluated. Predictive capabilities were compared using the coefficient of determination (R2) for linear models, and the AUC of the ROC curve for logistic models. Results: Data from 2254 infants were included. Prediction of PMAD was poor, with only 38% of variation explained by linear models. However, models incorporating later clinical characteristics were more accurate in predicting "Early" or "Late" discharge (full models: AUC 0.76-0.83 vs. perinatal factor models: AUC 0.56-0.69). In simplified key risk factor models, predicted probabilities for Early and Late discharge compared favorably with observed rates. Furthermore, the AUCs (0.75-0.77) were similar to those of models including the full factor set.
Conclusions: Prediction of Early or Late discharge is poor if only perinatal factors are considered, but improves substantially with knowledge of later-occurring morbidities. Prediction using a few key risk factors is comparable to full models, and may offer a clinically applicable strategy. PMID:20008430
Comparison of time series models for predicting campylobacteriosis risk in New Zealand.
Al-Sakkaf, A; Jones, G
2014-05-01
Predicting campylobacteriosis cases is a matter of considerable concern in New Zealand, after the number of notified cases was the highest among developed countries in 2006. There is thus a need to develop a model or tool that accurately predicts the number of campylobacteriosis cases, as the Microbial Risk Assessment Model used for this purpose failed to predict the number of actual cases accurately. We explore the appropriateness of classical time series modelling approaches for predicting campylobacteriosis. Finding the most appropriate time series model for New Zealand data has additional practical considerations given a possible structural change, that is, a specific and sudden change in response to the implemented interventions. A univariate methodological approach was used to predict monthly disease cases using New Zealand surveillance data of campylobacteriosis incidence from 1998 to 2009. The data from the years 1998 to 2008 were used to model the time series, with the year 2009 held out of the data set for model validation. The best two models were then fitted to the full 1998-2009 data and used to predict for each month of 2010. The Holt-Winters (multiplicative) and ARIMA (additive) intervention models were considered the best models for predicting campylobacteriosis in New Zealand. The prediction by the additive ARIMA with intervention was slightly better than that by the Holt-Winters multiplicative method for the annual total in 2010, the former predicting only 23 cases fewer than the actual reported cases. It is confirmed that classical time series techniques such as ARIMA with intervention and Holt-Winters can provide good prediction performance for campylobacteriosis risk in New Zealand. The results reported by this study are useful to the New Zealand Health and Safety Authority's efforts in addressing the problem of the campylobacteriosis epidemic. © 2013 Blackwell Verlag GmbH.
FPGA implementation of predictive degradation model for engine oil lifetime
NASA Astrophysics Data System (ADS)
Idros, M. F. M.; Razak, A. H. A.; Junid, S. A. M. Al; Suliman, S. I.; Halim, A. K.
2018-03-01
This paper presents the implementation of a linear regression model for degradation prediction on Register Transfer Logic (RTL) using Quartus II. A stationary model was identified in the degradation trend of engine oil in a vehicle using a time series method. For the RTL implementation, the degradation model is written in Verilog HDL and the input data are sampled at fixed time intervals. A clock divider was designed to support the timing sequence of the input data. At every five data points, a regression analysis is applied to determine the slope variation and compute the prediction. Only negative slope values are considered for the prediction, which reduces the number of logic gates. The least squares method is adopted to obtain the best linear model based on the mean values of the time series data. The coded algorithm has been implemented on an FPGA for validation purposes. The result shows the predicted time to change the engine oil.
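The per-window regression step above (a least-squares slope over every five data points, keeping only negative slopes) can be sketched in software before committing it to RTL. The sensor values below are hypothetical:

```python
# Least-squares slope over a 5-sample window; a negative slope signals
# degradation, mirroring the regression step described in the abstract.
def lsq_slope(y):
    """Least-squares slope of y against the sample index 0..n-1."""
    n = len(y)
    xm = (n - 1) / 2
    ym = sum(y) / n
    num = sum((i - xm) * (v - ym) for i, v in enumerate(y))
    den = sum((i - xm) ** 2 for i in range(n))
    return num / den

oil_quality = [98.0, 97.2, 96.1, 94.8, 93.0]  # one hypothetical 5-sample window
slope = lsq_slope(oil_quality)
degrading = slope < 0  # only negative slopes feed the prediction
print(slope, degrading)
```

Restricting the hardware to the negative-slope case is what saves logic gates in the FPGA version.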
Nonstationary time series prediction combined with slow feature analysis
NASA Astrophysics Data System (ADS)
Wang, G.; Chen, X.
2015-07-01
Almost all climate time series have some degree of nonstationarity due to external driving forces perturbing the observed system. Therefore, these external driving forces should be taken into account when constructing the climate dynamics. This paper presents a new technique of obtaining the driving forces of a time series from the slow feature analysis (SFA) approach, and then introduces them into a predictive model to predict nonstationary time series. The basic theory of the technique is to consider the driving forces as state variables and to incorporate them into the predictive model. Experiments using a modified logistic time series and winter ozone data in Arosa, Switzerland, were conducted to test the model. The results showed improved prediction skills.
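SFA proper extracts the slowest-varying latent signal by an eigendecomposition; as a crude stand-in for that idea, the sketch below recovers a slow drift with a wide moving average and treats it as the driving-force state variable. The series, window, and drift are entirely synthetic, and this is not the paper's actual SFA algorithm:

```python
# Crude stand-in for SFA: a wide moving average as a proxy for the slow
# (driving-force) component of a nonstationary series.
import math

def slow_component(series, window=20):
    """Wide moving average approximating the slowly varying signal."""
    half = window // 2
    out = []
    for i in range(len(series)):
        w = series[max(0, i - half):i + half + 1]
        out.append(sum(w) / len(w))
    return out

t = [i / 10 for i in range(200)]
drift = [0.1 * ti for ti in t]                            # slow driving force
series = [d + 0.5 * math.sin(8 * ti) for d, ti in zip(drift, t)]  # + fast part

slow = slow_component(series)
err = sum(abs(s - d) for s, d in zip(slow, drift)) / len(series)
print(err)  # the proxy tracks the drift far better than the raw series does
```

In the paper's scheme, the recovered slow signal would then be fed into the predictive model as an extra state variable.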
Assessment of Arctic and Antarctic Sea Ice Predictability in CMIP5 Decadal Hindcasts
NASA Technical Reports Server (NTRS)
Yang, Chao-Yuan; Liu, Jiping (Inventor); Hu, Yongyun; Horton, Radley M.; Chen, Liqi; Cheng, Xiao
2016-01-01
This paper examines the ability of coupled global climate models to predict decadal variability of Arctic and Antarctic sea ice. We analyze decadal hindcasts/predictions of 11 Coupled Model Intercomparison Project Phase 5 (CMIP5) models. Decadal hindcasts exhibit a large multimodel spread in the simulated sea ice extent, with some models deviating significantly from the observations as the predicted ice extent quickly drifts away from the initial constraint. The anomaly correlation analysis between the decadal hindcast and observed sea ice suggests that in the Arctic, for most models, the areas showing significant predictive skill become broader associated with increasing lead times. This area expansion is largely because nearly all the models are capable of predicting the observed decreasing Arctic sea ice cover. Sea ice extent in the North Pacific has better predictive skill than that in the North Atlantic (particularly at a lead time of 3-7 years), but there is a reemerging predictive skill in the North Atlantic at a lead time of 6-8 years. In contrast to the Arctic, Antarctic sea ice decadal hindcasts do not show broad predictive skill at any timescales, and there is no obvious improvement linking the areal extent of significant predictive skill to lead time increase. This might be because nearly all the models predict a retreating Antarctic sea ice cover, opposite to the observations. For the Arctic, the predictive skill of the multi-model ensemble mean outperforms most models and the persistence prediction at longer timescales, which is not the case for the Antarctic. Overall, for the Arctic, initialized decadal hindcasts show improved predictive skill compared to uninitialized simulations, although this improvement is not present in the Antarctic.
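The anomaly correlation analysis used above to score hindcast skill correlates predicted and observed departures from a climatology. A minimal sketch with invented sea-ice-extent numbers:

```python
# Anomaly correlation: Pearson correlation of predicted vs observed
# departures from climatology.
def anomaly_correlation(pred, obs, clim):
    pa = [p - c for p, c in zip(pred, clim)]
    oa = [o - c for o, c in zip(obs, clim)]
    pm = sum(pa) / len(pa)
    om = sum(oa) / len(oa)
    num = sum((p - pm) * (o - om) for p, o in zip(pa, oa))
    den = (sum((p - pm) ** 2 for p in pa) *
           sum((o - om) ** 2 for o in oa)) ** 0.5
    return num / den

clim = [12.0, 12.0, 12.0, 12.0, 12.0]   # climatological extent (synthetic)
obs = [12.5, 12.1, 11.8, 11.4, 11.0]    # observed decline
pred = [12.4, 12.2, 11.9, 11.5, 11.2]   # hindcast tracking the decline
acc = anomaly_correlation(pred, obs, clim)
print(acc)
```

A hindcast that merely captures the observed downward trend, as most CMIP5 models do for the Arctic, already scores a high anomaly correlation, which is why the areas of significant skill broaden with lead time.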
Clinical time series prediction: towards a hierarchical dynamical system framework
Liu, Zitao; Hauskrecht, Milos
2014-01-01
Objective: Developing machine learning and data mining algorithms for building temporal models of clinical time series is important for understanding the patient condition, the dynamics of a disease, the effect of various patient management interventions, and clinical decision making. In this work, we propose and develop a novel hierarchical framework for modeling clinical time series data of varied length and with irregularly sampled observations. Materials and methods: Our hierarchical dynamical system framework for modeling clinical time series combines advantages of the two temporal modeling approaches: the linear dynamical system and the Gaussian process. We model the irregularly sampled clinical time series by using multiple Gaussian process sequences in the lower level of our hierarchical framework and capture the transitions between Gaussian processes by utilizing the linear dynamical system. The experiments are conducted on the complete blood count (CBC) panel data of 1000 post-surgical cardiac patients during their hospitalization. Our framework is evaluated and compared to multiple baseline approaches in terms of the mean absolute prediction error and the absolute percentage error. Results: We tested our framework by first learning the time series model from data for the patients in the training set, and then applying the model to predict future time series values for the patients in the test set. We show that our model outperforms multiple existing models in terms of its predictive accuracy. Our method achieved a 3.13% average prediction accuracy improvement on ten CBC lab time series when it was compared against the best performing baseline. A 5.25% average accuracy improvement was observed when only short-term predictions were considered.
Conclusion: A new hierarchical dynamical system framework that lets us model irregularly sampled time series data is a promising new direction for modeling clinical time series and for improving their predictive performance. PMID:25534671
Cure modeling in real-time prediction: How much does it help?
Ying, Gui-Shuang; Zhang, Qiang; Lan, Yu; Li, Yimei; Heitjan, Daniel F
2017-08-01
Various parametric and nonparametric modeling approaches exist for real-time prediction in time-to-event clinical trials. Recently, Chen (2016, BMC Medical Research Methodology 16) proposed a prediction method based on parametric cure-mixture modeling, intended to cover those situations where it appears that a non-negligible fraction of subjects is cured. In this article we apply a Weibull cure-mixture model to create predictions, demonstrating the approach in RTOG 0129, a randomized trial in head-and-neck cancer. We compare the ultimate realized data in RTOG 0129 to interim predictions from a Weibull cure-mixture model, a standard Weibull model without a cure component, and a nonparametric model based on the Bayesian bootstrap. The standard Weibull model predicted that events would occur earlier than the Weibull cure-mixture model did, but the difference was unremarkable until late in the trial, when evidence for a cure became clear. Nonparametric predictions often gave undefined predictions or infinite prediction intervals, particularly at early stages of the trial. Simulations suggest that cure modeling can yield better-calibrated prediction intervals when there is a cured component, or the appearance of a cured component, but at a substantial cost in the average width of the intervals. Copyright © 2017 Elsevier Inc. All rights reserved.
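The defining feature of the cure-mixture model above is a survival function that plateaus at the cured fraction instead of decaying to zero. A minimal sketch with illustrative parameter values (not fitted to RTOG 0129):

```python
# Weibull cure-mixture survival:
# S(t) = pi + (1 - pi) * exp(-(t / scale)**shape),
# where pi is the cured fraction that never experiences the event.
import math

def cure_mixture_survival(t, pi, shape, scale):
    return pi + (1 - pi) * math.exp(-((t / scale) ** shape))

pi = 0.3  # hypothetical cured fraction
s5 = cure_mixture_survival(5.0, pi, shape=1.2, scale=3.0)
s_late = cure_mixture_survival(50.0, pi, shape=1.2, scale=3.0)
print(s5, s_late)  # survival plateaus near the cured fraction
```

A standard Weibull model (pi = 0) would keep predicting events indefinitely, which is why it forecasts events too early once a cured component emerges.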
Using Time Series Analysis to Predict Cardiac Arrest in a PICU.
Kennedy, Curtis E; Aoki, Noriaki; Mariscalco, Michele; Turley, James P
2015-11-01
To build and test cardiac arrest prediction models in a PICU, using time series analysis as input, and to measure changes in prediction accuracy attributable to different classes of time series data. Retrospective cohort study. Thirty-one-bed academic PICU that provides care for medical and general surgical (not congenital heart surgery) patients. Patients experiencing a cardiac arrest in the PICU and requiring external cardiac massage for at least 2 minutes. None. One hundred three cases of cardiac arrest and 109 control cases were used to prepare a baseline dataset that consisted of 1,025 variables in four data classes: multivariate, raw time series, clinical calculations, and time series trend analysis. We trained 20 arrest prediction models using a matrix of five feature sets (combinations of data classes) with four modeling algorithms: linear regression, decision tree, neural network, and support vector machine. The reference model (multivariate data with regression algorithm) had an accuracy of 78% and 87% area under the receiver operating characteristic curve. The best model (multivariate + trend analysis data with support vector machine algorithm) had an accuracy of 94% and 98% area under the receiver operating characteristic curve. Cardiac arrest predictions based on a traditional model built with multivariate data and a regression algorithm misclassified cases 3.7 times more frequently than predictions that included time series trend analysis and were built with a support vector machine algorithm. Although the final model lacks the specificity necessary for clinical application, we have demonstrated how information from time series data can be used to increase the accuracy of clinical prediction models.
Real-Time Ensemble Forecasting of Coronal Mass Ejections Using the Wsa-Enlil+Cone Model
NASA Astrophysics Data System (ADS)
Mays, M. L.; Taktakishvili, A.; Pulkkinen, A. A.; Odstrcil, D.; MacNeice, P. J.; Rastaetter, L.; LaSota, J. A.
2014-12-01
Ensemble forecasting of coronal mass ejections (CMEs) provides significant information in that it provides an estimation of the spread or uncertainty in CME arrival time predictions. Real-time ensemble modeling of CME propagation is performed by forecasters at the Space Weather Research Center (SWRC) using the WSA-ENLIL+cone model available at the Community Coordinated Modeling Center (CCMC). To estimate the effect of uncertainties in determining CME input parameters on arrival time predictions, a distribution of n (routinely n=48) CME input parameter sets are generated using the CCMC Stereo CME Analysis Tool (StereoCAT) which employs geometrical triangulation techniques. These input parameters are used to perform n different simulations yielding an ensemble of solar wind parameters at various locations of interest, including a probability distribution of CME arrival times (for hits), and geomagnetic storm strength (for Earth-directed hits). We present the results of ensemble simulations for a total of 38 CME events in 2013-2014. For 28 of the ensemble runs containing hits, the observed CME arrival was within the range of ensemble arrival time predictions for 14 runs (half). The average arrival time prediction was computed for each of the 28 ensembles predicting hits and using the actual arrival time, an average absolute error of 10.0 hours (RMSE=11.4 hours) was found for all 28 ensembles, which is comparable to current forecasting errors. Some considerations for the accuracy of ensemble CME arrival time predictions include the importance of the initial distribution of CME input parameters, particularly the mean and spread. When the observed arrivals are not within the predicted range, this still allows the ruling out of prediction errors caused by tested CME input parameters. Prediction errors can also arise from ambient model parameters such as the accuracy of the solar wind background, and other limitations. 
Additionally, the ensemble modeling system was used to complete a parametric event case study of the sensitivity of the CME arrival-time prediction to free parameters of the ambient solar wind model and the CME. The parameter sensitivity study suggests future directions for the system, such as running ensembles using various magnetogram inputs to the WSA model.
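The ensemble verification described above reduces to a few summary statistics per event: the spread of the n member arrival times, whether the observed arrival falls inside it, and the absolute error of the ensemble mean. A sketch with hypothetical member arrival times:

```python
# Verify one CME event against an ensemble of arrival-time predictions.
def verify_ensemble(arrivals, observed):
    mean = sum(arrivals) / len(arrivals)
    return {"min": min(arrivals), "max": max(arrivals),
            "hit_in_range": min(arrivals) <= observed <= max(arrivals),
            "abs_error_of_mean": abs(mean - observed)}

member_arrivals = [38.0, 41.5, 44.0, 47.5, 52.0]  # hours after onset (synthetic)
report = verify_ensemble(member_arrivals, observed=45.0)
print(report)
```

Aggregating `abs_error_of_mean` over the 28 ensembles with hits is how the study arrives at its 10.0-hour average absolute error.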
NASA Astrophysics Data System (ADS)
Febrian Umbara, Rian; Tarwidi, Dede; Budi Setiawan, Erwin
2018-03-01
The paper discusses the prediction of the Jakarta Composite Index (JCI) on the Indonesia Stock Exchange. The study is based on JCI historical data for 1286 days, used to predict the value of JCI one day ahead. This paper proposes a two-stage prediction: the first stage uses Fuzzy Time Series (FTS) to predict the values of ten technical indicators, and the second stage uses Support Vector Regression (SVR) to predict the value of JCI one day ahead, resulting in a hybrid prediction model, FTS-SVR. The performance of this combined prediction model is compared with that of a single-stage prediction model using SVR only. Ten technical indicators are used as input for each model.
A real-time prediction model for post-irradiation malignant cervical lymph nodes.
Lo, W-C; Cheng, P-W; Shueng, P-W; Hsieh, C-H; Chang, Y-L; Liao, L-J
2018-04-01
To establish a real-time predictive scoring model based on sonographic characteristics for identifying malignant cervical lymph nodes (LNs) in cancer patients after neck irradiation. One-hundred forty-four irradiation-treated patients underwent ultrasonography and ultrasound-guided fine-needle aspirations (USgFNAs), and the resultant data were used to construct a real-time and computerised predictive scoring model. This scoring system was further compared with our previously proposed prediction model. A predictive scoring model, 1.35 × (L axis) + 2.03 × (S axis) + 2.27 × (margin) + 1.48 × (echogenic hilum) + 3.7, was generated by stepwise multivariate logistic regression analysis. Neck LNs were considered to be malignant when the score was ≥ 7, corresponding to a sensitivity of 85.5%, specificity of 79.4%, positive predictive value (PPV) of 82.3%, negative predictive value (NPV) of 83.1%, and overall accuracy of 82.6%. When this new model and the original model were compared, the areas under the receiver operating characteristic curve (c-statistic) were 0.89 and 0.81, respectively (P < .05). A real-time sonographic predictive scoring model was constructed to provide prompt and reliable guidance for USgFNA biopsies to manage cervical LNs after neck irradiation. © 2017 John Wiley & Sons Ltd.
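The published scoring rule above can be coded directly; a score of at least 7 flags the node as likely malignant. The abstract does not give the exact variable codings, so the inputs below (axes in cm, margin and hilum as 0/1 indicators) are assumptions:

```python
# The abstract's predictive scoring model for cervical lymph nodes:
# score = 1.35*L + 2.03*S + 2.27*margin + 1.48*hilum + 3.7;  score >= 7
# suggests malignancy. Input codings here are assumed, not from the paper.
def ln_score(l_axis, s_axis, irregular_margin, absent_hilum):
    return (1.35 * l_axis + 2.03 * s_axis
            + 2.27 * irregular_margin + 1.48 * absent_hilum + 3.7)

score = ln_score(l_axis=1.8, s_axis=1.0, irregular_margin=1, absent_hilum=1)
print(score, score >= 7)
```

At the reported cutoff this rule achieved 85.5% sensitivity and 79.4% specificity in the study cohort.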
State-space prediction model for chaotic time series
NASA Astrophysics Data System (ADS)
Alparslan, A. K.; Sayar, M.; Atilgan, A. R.
1998-08-01
A simple method for predicting the continuation of a scalar chaotic time series ahead in time is proposed. The false nearest neighbors technique, in connection with time-delayed embedding, is employed to reconstruct the state space. A local forecasting model based upon the time evolution of topological neighbors in the reconstructed phase space is suggested. A moving root-mean-square error is utilized to monitor the error along the prediction horizon. The model is tested on the convection amplitude of the Lorenz model. The results indicate that, given approximately 100 cycles of training data, the prediction follows the actual continuation very closely for about six cycles. The proposed model, like other state-space forecasting models, captures the long-term behavior of the system due to the use of spatial neighbors in the state space.
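A minimal sketch of the general approach (time-delay embedding plus nearest-neighbor local forecasting), tested here on a sine wave rather than the Lorenz convection amplitude; the embedding dimension, delay, and neighbor count are illustrative choices, not the paper's.

```python
import numpy as np

def delay_embed(x, dim, tau):
    """Time-delay embedding: row i is [x_i, x_{i+tau}, ..., x_{i+(dim-1)tau}]."""
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[i * tau : i * tau + n] for i in range(dim)])

def local_forecast(x, dim=3, tau=1, k=4):
    """Predict the next value as the mean successor of the k nearest
    state-space neighbors of the current embedded state."""
    emb = delay_embed(x, dim, tau)
    query = emb[-1]
    cand = emb[:-1]                      # only points with a known successor
    d = np.linalg.norm(cand - query, axis=1)
    idx = np.argsort(d)[:k]
    succ = x[idx + (dim - 1) * tau + 1]  # value following each neighbor
    return succ.mean()

t = np.linspace(0, 20 * np.pi, 2000)
x = np.sin(t)
pred = local_forecast(x[:-1])            # forecast the held-out last sample
print(round(pred, 3), round(x[-1], 3))
```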
Kawabe, Takefumi; Tomitsuka, Toshiaki; Kajiro, Toshi; Kishi, Naoyuki; Toyo'oka, Toshimasa
2013-01-18
An optimization procedure for the ternary isocratic mobile phase composition in HPLC methods, using a statistical prediction model and a visualization technique, is described. In this report, two prediction models were first evaluated to obtain reliable prediction results. The retention time prediction model was constructed by modifying established retention models for ternary solvent strength changes. An excellent correlation between observed and predicted retention times was obtained for various kinds of pharmaceutical compounds by multiple regression modeling of solvent strength parameters. The model for the peak width at half height employed polynomial fitting of the retention time, because a linear relationship between the peak width at half height and the retention time was not obtained even after taking into account the contribution of the extra-column effect based on a moment method. This model yielded accurate predictions, with correlation coefficients between observed and predicted peak widths at half height mostly above 0.99. A procedure to visualize a resolution Design Space was then developed as the second challenge. An artificial neural network method was used to link ternary solvent strength parameters directly to the predicted resolution, determined from the accurate predictions of retention time and peak width at half height, and to visualize appropriate ternary mobile phase compositions as the region with resolution above 1.5 on a contour profile. Using mixtures of similar pharmaceutical compounds in case studies, we verified that the procedure can find the optimal range of conditions. Observed chromatographic results under the optimal conditions mostly matched the predictions, and the average difference between observed and predicted resolution was approximately 0.3.
This means that sufficient prediction accuracy can be achieved by the proposed procedure. Consequently, a procedure to search the optimal range of ternary solvent strength achieving an appropriate separation is provided by using the resolution Design Space based on accurate prediction. Copyright © 2012 Elsevier B.V. All rights reserved.
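A toy version of the retention-time regression step might look as follows; the solvent fractions, coefficients, and noise level are invented for illustration, not the study's data or its exact model form.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical design: fractions of two organic modifiers in a ternary mobile
# phase (water making up the remainder) and measured retention times.
phi = rng.uniform(0.1, 0.5, size=(12, 2))        # e.g. [MeOH, MeCN] fractions
X = np.column_stack([np.ones(len(phi)), phi])    # intercept + solvent strengths
true_coef = np.array([12.0, -9.0, -14.0])        # illustrative values only
t_obs = X @ true_coef + 0.05 * rng.normal(size=len(phi))

# Multiple regression of retention time on solvent strength parameters.
coef, *_ = np.linalg.lstsq(X, t_obs, rcond=None)
t_pred = X @ coef
r = np.corrcoef(t_obs, t_pred)[0, 1]
print(np.round(coef, 2), round(r, 4))
```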
Improving Gastric Cancer Outcome Prediction Using Single Time-Point Artificial Neural Network Models
Nilsaz-Dezfouli, Hamid; Abu-Bakar, Mohd Rizam; Arasan, Jayanthi; Adam, Mohd Bakri; Pourhoseingholi, Mohamad Amin
2017-01-01
In cancer studies, the prediction of cancer outcome based on a set of prognostic variables has been a long-standing topic of interest. Current statistical methods for survival analysis offer the possibility of modelling cancer survivability but require unrealistic assumptions about the survival time distribution or proportionality of hazard. Therefore, attention must be paid in developing nonlinear models with less restrictive assumptions. Artificial neural network (ANN) models are primarily useful in prediction when nonlinear approaches are required to sift through the plethora of available information. The applications of ANN models for prognostic and diagnostic classification in medicine have attracted a lot of interest. The applications of ANN models in modelling the survival of patients with gastric cancer have been discussed in some studies without completely considering the censored data. This study proposes an ANN model for predicting gastric cancer survivability, considering the censored data. Five separate single time-point ANN models were developed to predict the outcome of patients after 1, 2, 3, 4, and 5 years. The performance of ANN model in predicting the probabilities of death is consistently high for all time points according to the accuracy and the area under the receiver operating characteristic curve. PMID:28469384
Weather models as virtual sensors to data-driven rainfall predictions in urban watersheds
NASA Astrophysics Data System (ADS)
Cozzi, Lorenzo; Galelli, Stefano; Pascal, Samuel Jolivet De Marc; Castelletti, Andrea
2013-04-01
Weather and climate predictions are a key element of urban hydrology, where they are used to inform water management and assist in flood warning delivery. Indeed, the modelling of the very fast dynamics of urbanized catchments can be substantially improved by the use of weather/rainfall predictions. For example, in the Singapore Marina Reservoir catchment, runoff processes have a very short time of concentration (roughly one hour), so observational data are nearly useless for runoff predictions and weather predictions are required. Unfortunately, radar nowcasting methods do not allow carrying out long-term weather predictions, whereas numerical models are limited by their coarse spatial scale. Moreover, numerical models are usually poorly reliable because of the fast motion and limited spatial extension of rainfall events. In this study we investigate the combined use of data-driven modelling techniques and weather variables observed/simulated with a numerical model as a way to improve rainfall prediction accuracy and lead time in the Singapore metropolitan area. To explore the feasibility of the approach, we use a Weather Research and Forecasting (WRF) model as a virtual sensor network for the input variables (the states of the WRF model) to a machine learning rainfall prediction model. More precisely, we combine an input variable selection method and a non-parametric tree-based model to characterize the empirical relation between the rainfall measured at the catchment level and all possible weather input variables provided by the WRF model. We explore different lead times to evaluate the model reliability for long-term predictions, as well as different time lags to see how past information could improve results. Results show that the proposed approach allows a significant improvement of the prediction accuracy of the WRF model over the Singapore urban area.
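The input variable selection step can be sketched with a simple filter-style ranking; the study's actual selection method and tree-based model are more elaborate, and the data below are synthetic stand-ins for WRF state variables.

```python
import numpy as np

rng = np.random.default_rng(5)

# Synthetic stand-ins: 8 candidate WRF state variables (columns) and rainfall.
n, p = 500, 8
X = rng.normal(size=(n, p))
rain = 2.0 * X[:, 2] - 1.0 * X[:, 5] + 0.5 * rng.normal(size=n)

# Filter-style input variable selection: rank candidate inputs by absolute
# correlation with the target, then keep the top-ranked ones.
corr = np.array([abs(np.corrcoef(X[:, j], rain)[0, 1]) for j in range(p)])
ranked = np.argsort(corr)[::-1]
print(ranked[:2])   # indices of the two most informative inputs
```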
A Seasonal Time-Series Model Based on Gene Expression Programming for Predicting Financial Distress
Cheng, Ching-Hsue; Chan, Chia-Pang; Yang, Jun-He
2018-01-01
Financial distress prediction is an important and challenging research topic in the financial field. Many methods exist for predicting firm bankruptcy and financial crisis, including artificial intelligence and traditional statistical methods, and past studies have shown that artificial intelligence methods outperform traditional statistical ones. Financial statements are quarterly reports; hence, corporate financial crisis data are seasonal time series, and the attribute data affecting the financial distress of companies are nonlinear and nonstationary time series with fluctuations. Therefore, this study employed a nonlinear attribute selection method to build a nonlinear financial distress prediction model: that is, this paper proposed a novel seasonal time-series gene expression programming model for predicting the financial distress of companies. The proposed model has several advantages, including the following: (i) the proposed model differs from previous models lacking the concept of time series; (ii) the proposed integrated attribute selection method can find the core attributes and reduce high-dimensional data; and (iii) the proposed model can generate the rules and mathematical formulas of financial distress, providing references to investors and decision makers. The results show that the proposed method is better than the listed classifiers under three criteria; hence, the proposed model has competitive advantages in predicting the financial distress of companies. PMID:29765399
Yamamoto, Yumi; Välitalo, Pyry A.; Huntjens, Dymphy R.; Proost, Johannes H.; Vermeulen, An; Krauwinkel, Walter; Beukers, Margot W.; van den Berg, Dirk‐Jan; Hartman, Robin; Wong, Yin Cheong; Danhof, Meindert; van Hasselt, John G. C.
2017-01-01
Drug development targeting the central nervous system (CNS) is challenging due to poor predictability of drug concentrations in various CNS compartments. We developed a generic physiologically based pharmacokinetic (PBPK) model for prediction of drug concentrations in physiologically relevant CNS compartments. System‐specific and drug‐specific model parameters were derived from literature and in silico predictions. The model was validated using detailed concentration‐time profiles from 10 drugs in rat plasma, brain extracellular fluid, 2 cerebrospinal fluid sites, and total brain tissue. These drugs, all small molecules, were selected to cover a wide range of physicochemical properties. The concentration‐time profiles for these drugs were adequately predicted across the CNS compartments (symmetric mean absolute percentage error for the model prediction was <91%). In conclusion, the developed PBPK model can be used to predict temporal concentration profiles of drugs in multiple relevant CNS compartments, which we consider valuable information for efficient CNS drug development. PMID:28891201
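The symmetric mean absolute percentage error (SMAPE) used to validate the PBPK predictions can be computed as below; the concentration values are hypothetical, not from the rat dataset.

```python
import numpy as np

def smape(obs, pred):
    """Symmetric mean absolute percentage error, in percent."""
    obs, pred = np.asarray(obs, float), np.asarray(pred, float)
    return 100.0 * np.mean(2.0 * np.abs(pred - obs) / (np.abs(obs) + np.abs(pred)))

# Hypothetical concentration-time points (e.g. ng/mL); illustrative only.
observed  = np.array([10.0, 8.0, 4.0, 2.0])
predicted = np.array([11.0, 7.5, 4.5, 1.8])
print(round(smape(observed, predicted), 2))
```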
Predicting mortality over different time horizons: which data elements are needed?
Goldstein, Benjamin A; Pencina, Michael J; Montez-Rath, Maria E; Winkelmayer, Wolfgang C
2017-01-01
Electronic health records (EHRs) are a resource for "big data" analytics, containing a variety of data elements. We investigate how different categories of information contribute to prediction of mortality over different time horizons among patients undergoing hemodialysis treatment. We derived prediction models for mortality over 7 time horizons using EHR data on older patients from a national chain of dialysis clinics linked with administrative data, using LASSO (least absolute shrinkage and selection operator) regression. We assessed how different categories of information relate to risk assessment and compared discrete-time models to time-to-event models. The best predictors used all the available data (c-statistics ranging from 0.72 to 0.76), with stronger models in the near term. While different variable groups showed different utility, exclusion of any particular group did not lead to a meaningfully different risk assessment. Discrete-time models performed better than time-to-event models. Different variable groups were predictive over different time horizons, with vital signs most predictive for near-term mortality and demographics and comorbidities more important for long-term mortality. © The Author 2016. Published by Oxford University Press on behalf of the American Medical Informatics Association. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
Li, Yi; Chen, Yuren
2016-12-30
To make driving assistance systems more humanized, this study focused on the prediction and assistance of drivers' perception-response time on mountain highway curves. Field tests were conducted to collect real-time driving data and driver vision information. A driver-vision lane model quantified curve elements in drivers' vision. A multinomial log-linear model was established to predict perception-response time from traffic/road environment information, the driver-vision lane model, and mechanical status (in the last second). A corresponding assistance model showed a positive impact on drivers' perception-response times on mountain highway curves. Model results revealed that the driver-vision lane model and visual elements had an important influence on drivers' perception-response time. Compared with roadside passive road safety infrastructure, proper visual geometry design, timely visual guidance, and visual information integrality of a curve are significant factors for drivers' perception-response time.
Blanche, Paul; Proust-Lima, Cécile; Loubère, Lucie; Berr, Claudine; Dartigues, Jean-François; Jacqmin-Gadda, Hélène
2015-03-01
Thanks to the growing interest in personalized medicine, joint modeling of longitudinal marker and time-to-event data has recently started to be used to derive dynamic individual risk predictions. Individual predictions are called dynamic because they are updated when information on the subject's health profile grows with time. We focus in this work on statistical methods for quantifying and comparing dynamic predictive accuracy of this kind of prognostic models, accounting for right censoring and possibly competing events. Dynamic area under the ROC curve (AUC) and Brier Score (BS) are used to quantify predictive accuracy. Nonparametric inverse probability of censoring weighting is used to estimate dynamic curves of AUC and BS as functions of the time at which predictions are made. Asymptotic results are established and both pointwise confidence intervals and simultaneous confidence bands are derived. Tests are also proposed to compare the dynamic prediction accuracy curves of two prognostic models. The finite sample behavior of the inference procedures is assessed via simulations. We apply the proposed methodology to compare various prediction models using repeated measures of two psychometric tests to predict dementia in the elderly, accounting for the competing risk of death. Models are estimated on the French Paquid cohort and predictive accuracies are evaluated and compared on the French Three-City cohort. © 2014, The International Biometric Society.
McGinitie, Teague M; Harynuk, James J
2012-09-14
A method was developed to accurately predict both the primary and secondary retention times for a series of alkanes, ketones and alcohols in a flow-modulated GC×GC system. This was accomplished through the use of a three-parameter thermodynamic model where ΔH, ΔS, and ΔCp for an analyte's interaction with the stationary phases in both dimensions are known. Coupling this thermodynamic model with a time summation calculation, it was possible to accurately predict both ¹tr and ²tr for all analytes. The model was able to predict retention times regardless of the temperature ramp used, with an average error of only 0.64% for ¹tr and an average error of only 2.22% for ²tr. The model shows promise for the accurate prediction of retention times in GC×GC for a wide range of compounds and is able to utilize data collected from 1D experiments. Copyright © 2012 Elsevier B.V. All rights reserved.
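A hedged sketch of a three-parameter thermodynamic retention model: the functional form (van 't Hoff with a constant ΔCp correction and a fixed phase ratio), all parameter values, and the assumption of a constant holdup time are illustrative simplifications, not the paper's fitted model.

```python
import numpy as np

R = 8.314  # gas constant, J/(mol K)

def k_factor(T, dH, dS, dCp, T0=373.15, beta=250.0):
    """Retention factor from a three-parameter thermodynamic model.
    dH, dS are taken at reference temperature T0; dCp is assumed constant;
    beta is the column phase ratio. All values here are illustrative."""
    dH_T = dH + dCp * (T - T0)
    dS_T = dS + dCp * np.log(T / T0)
    return np.exp(-dH_T / (R * T) + dS_T / R) / beta

def retention_time(ramp, dH, dS, dCp, t_hold=60.0, dt=0.01):
    """Integrate the analyte's fractional migration dx/dt = 1/(t_hold*(1+k(T(t))))
    until elution (x = 1); ramp(t) gives column temperature in K.
    A constant holdup time t_hold is an assumption."""
    x, t = 0.0, 0.0
    while x < 1.0:
        T = ramp(t)
        x += dt / (t_hold * (1.0 + k_factor(T, dH, dS, dCp)))
        t += dt
    return t

# Isothermal consistency check: t_r should equal t_hold * (1 + k).
Tiso = 400.0
k = k_factor(Tiso, dH=-40e3, dS=-90.0, dCp=50.0)
tr = retention_time(lambda t: Tiso, -40e3, -90.0, 50.0)
print(round(tr, 1), round(60.0 * (1 + k), 1))
```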
Joshi, Shuchi N; Srinivas, Nuggehally R; Parmar, Deven V
2018-03-01
Our aim was to develop and validate the extrapolative performance of a regression model using a limited sampling strategy for accurate estimation of the area under the plasma concentration versus time curve (AUC) for saroglitazar. Healthy subject pharmacokinetic data from a well-powered food-effect study (fasted vs fed treatments; n = 50) were used in this work. The first 25 subjects' serial plasma concentration data up to 72 hours and the corresponding AUC(0-t) (ie, 72 hours) from the fasting group comprised the training dataset used to develop the limited sampling model. The internal datasets for prediction included the remaining 25 subjects from the fasting group and all 50 subjects from the fed condition of the same study. The external datasets included pharmacokinetic data for saroglitazar from previous single-dose clinical studies. Limited sampling models were composed of 1-, 2-, and 3-concentration-time points' correlation with the AUC(0-t) of saroglitazar. Only models with regression coefficients (R²) >0.90 were screened for further evaluation. The best R² model was validated for its utility based on mean prediction error, mean absolute prediction error, and root mean square error. Both correlation between predicted and observed AUC(0-t) of saroglitazar and verification of precision and bias using a Bland-Altman plot were carried out. None of the evaluated 1- and 2-concentration-time point models achieved R² > 0.90. Among the various 3-concentration-time point models, only 4 equations passed the predefined criterion of R² > 0.90. Limited sampling models with time points 0.5, 2, and 8 hours (R² = 0.9323) and 0.75, 2, and 8 hours (R² = 0.9375) were validated. Mean prediction error, mean absolute prediction error, and root mean square error were <30% (predefined criterion) and the correlation (r) was at least 0.7950 for the consolidated internal and external datasets of 102 healthy subjects for the AUC(0-t) prediction of saroglitazar.
The same models, when applied to the AUC(0-t) prediction of saroglitazar sulfoxide, showed mean prediction error, mean absolute prediction error, and root mean square error <30%, and the correlation (r) was at least 0.9339 in the same pool of healthy subjects. A 3-concentration-time point limited sampling model predicts the exposure of saroglitazar (ie, AUC(0-t)) within predefined acceptable bias and imprecision limits. The same model was also used to predict AUC(0-∞). The same limited sampling model was found to predict the exposure of saroglitazar sulfoxide within the predefined criteria. This model can find utility during late-phase clinical development of saroglitazar in the patient population. Copyright © 2018 Elsevier HS Journals, Inc. All rights reserved.
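A limited sampling model of this kind is a linear regression of AUC on a few concentration-time points. The sketch below uses invented coefficients and synthetic subjects, not the saroglitazar data, and reports R² and mean prediction error as in the validation described above.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical training set: concentrations at 0.5, 2, and 8 h per subject,
# with AUC(0-t) generated from an invented linear relationship plus noise.
n = 25
C = rng.uniform(0.5, 5.0, size=(n, 3))
auc_obs = 1.2 * C[:, 0] + 3.5 * C[:, 1] + 9.0 * C[:, 2] + rng.normal(0, 0.1, n)

# Fit the 3-point limited sampling model by ordinary least squares.
X = np.column_stack([np.ones(n), C])
b, *_ = np.linalg.lstsq(X, auc_obs, rcond=None)
auc_fit = X @ b

ss_res = np.sum((auc_obs - auc_fit) ** 2)
ss_tot = np.sum((auc_obs - auc_obs.mean()) ** 2)
r2 = 1 - ss_res / ss_tot
mpe = 100 * np.mean((auc_fit - auc_obs) / auc_obs)   # mean prediction error, %
print(round(r2, 4), round(mpe, 2))
```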
Jomah, N D; Ojo, J F; Odigie, E A; Olugasa, B O
2014-12-01
The post-civil war records of dog bite injuries (DBI) and rabies-like illness (RLI) among humans in Liberia are a vital epidemiological resource for developing a predictive model to guide the allocation of resources towards human rabies control. Whereas DBI and RLI are high, they are largely under-reported. The objective of this study was to develop a time model of the case-pattern and apply it to derive predictors of the time-trend point distribution of DBI-RLI cases. Six years of retrospective data on the countrywide distribution of DBI among humans were converted to quarterly series using a transformation technique that minimizes the squared first difference statistic. The generated dataset was used to train a time-trend model of the DBI-RLI syndrome in Liberia. An additive deterministic time-trend model was selected due to its performance compared to a multiplicative model of trend and seasonal movement. Parameter predictors were estimated by the least squares method to predict DBI cases for a prospective 4-year period covering 2014-2017. The two-stage predictive model of the DBI case-pattern between 2014 and 2017 was characterised by a uniform upward trend within Liberia's coastal and hinterland counties over the forecast period. This paper describes a translational application of the time-trend distribution pattern of the DBI epidemics reported in Liberia during 2008-2013, on which a predictive model was developed. A computationally feasible two-stage time-trend permutation approach is proposed to estimate the time-trend parameters and conduct predictive inference on DBI-RLI in Liberia.
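An additive deterministic trend-plus-season model fitted by least squares can be sketched as follows; the synthetic quarterly counts stand in for the DBI series, and the forecast horizon mirrors the study's 4-year window.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical quarterly DBI counts: additive model y = trend + season + noise.
q = np.arange(24)                            # 6 years of quarters
season_true = np.array([5.0, -2.0, -1.0, -2.0])
y = 30 + 1.5 * q + season_true[q % 4] + rng.normal(0, 0.5, q.size)

# Least-squares fit of intercept, linear trend, and quarter effects
# (quarter 0 is absorbed into the intercept).
D = np.column_stack([np.ones(q.size), q]
                    + [(q % 4 == s).astype(float) for s in (1, 2, 3)])
b, *_ = np.linalg.lstsq(D, y, rcond=None)

# Forecast the next 16 quarters (a prospective 4-year period).
qf = np.arange(24, 40)
Df = np.column_stack([np.ones(qf.size), qf]
                     + [(qf % 4 == s).astype(float) for s in (1, 2, 3)])
forecast = Df @ b
print(round(b[1], 2))   # estimated quarterly trend slope
```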
The Prediction of Length-of-day Variations Based on Gaussian Processes
NASA Astrophysics Data System (ADS)
Lei, Y.; Zhao, D. N.; Gao, Y. P.; Cai, H. B.
2015-01-01
Due to the complicated time-varying characteristics of the length-of-day (LOD) variations, the accuracies of traditional strategies for the prediction of the LOD variations, such as the least squares extrapolation model and the time-series analysis model, have not met the requirements for real-time and high-precision applications. In this paper, a new machine learning algorithm, the Gaussian process (GP) model, is employed to forecast the LOD variations. Its prediction precision is analyzed and compared with those of the back propagation neural network (BPNN) and general regression neural network (GRNN) models, and with the results of the Earth Orientation Parameters Prediction Comparison Campaign (EOP PCC). The results demonstrate that the application of the GP model to the prediction of the LOD variations is efficient and feasible.
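A minimal zero-mean GP regressor with an RBF kernel, applied to a smooth synthetic series standing in for the LOD variations; the kernel hyperparameters are illustrative, not tuned, and the full GP machinery (variance, marginal likelihood) is omitted.

```python
import numpy as np

def rbf(a, b, length=1.0):
    """Squared-exponential (RBF) kernel between two 1-D input arrays."""
    d2 = (a[:, None] - b[None, :]) ** 2
    return np.exp(-0.5 * d2 / length**2)

def gp_predict(x_train, y_train, x_test, length=1.0, noise=1e-4):
    """Posterior mean of a zero-mean GP: k*^T (K + noise*I)^{-1} y."""
    K = rbf(x_train, x_train, length) + noise * np.eye(x_train.size)
    alpha = np.linalg.solve(K, y_train)
    return rbf(x_test, x_train, length) @ alpha

# Illustrative stand-in for an LOD series: smooth signal sampled in time.
t = np.linspace(0, 10, 50)
y = np.sin(t) + 0.3 * np.cos(2 * t)
t_new = np.array([10.2, 10.4])          # short extrapolation horizon
pred = gp_predict(t, y, t_new)
print(np.round(pred, 3))
```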
Real time wave forecasting using wind time history and numerical model
NASA Astrophysics Data System (ADS)
Jain, Pooja; Deo, M. C.; Latha, G.; Rajendran, V.
Operational activities in the ocean, such as planning structural repairs or fishing expeditions, require real-time prediction of waves over typical durations of, say, a few hours. Such predictions can be made by using a numerical model or a time series model employing continuously recorded waves. This paper presents another option, based on a different time series approach in which the input is in the form of preceding wind speed and wind direction observations. This would be useful for those stations where costly wave buoys are not deployed and instead only meteorological buoys measuring wind are moored. The technique employs the alternative artificial intelligence approaches of an artificial neural network (ANN), genetic programming (GP) and model tree (MT) to carry out the time series modeling of wind to obtain waves. Wind observations at four offshore sites along the east coast of India were used. For calibration purposes the wave data were generated using a numerical model. The waves predicted by the proposed time series models, when compared with the numerically generated waves, showed good resemblance in terms of the selected error criteria. Large differences across the chosen techniques (ANN, GP, MT) were not noticed. Wave hindcasting at the same time step and predictions over shorter lead times were better than predictions over longer lead times. The proposed method is a cost-effective and convenient option when site-specific information is desired.
An online spatio-temporal prediction model for dengue fever epidemic in Kaohsiung, Taiwan
NASA Astrophysics Data System (ADS)
Cheng, Ming-Hung; Yu, Hwa-Lung; Angulo, Jose; Christakos, George
2013-04-01
Dengue Fever (DF) is one of the most serious vector-borne infectious diseases in tropical and subtropical areas. DF epidemics occur in Taiwan annually, especially during the summer and fall seasons. Kaohsiung city has been one of the major DF hotspots for decades. The emergence and re-emergence of DF epidemics are complex and can be influenced by various factors, including the space-time dynamics of human and vector populations and virus serotypes, as well as the associated uncertainties. This study integrates a stochastic space-time "Susceptible-Infected-Recovered" model under the Bayesian maximum entropy framework (BME-SIR) to perform real-time prediction of disease diffusion across space-time. The proposed model is applied for spatiotemporal prediction of the DF epidemic in Kaohsiung city during 2002, when a historically high number of DF cases was recorded. The online prediction by the BME-SIR model updates the parameters of the SIR model and the infected cases across districts over time. Results show that the proposed model is robust to the initial guesses of the unknown model parameters, i.e. the transmission and recovery rates, which can depend upon the virus serotypes and various human interventions. This study shows that spatial diffusion can be well characterized by the BME-SIR model, especially in the districts surrounding the disease outbreak locations. The prediction performance at DF hotspots, i.e. Cianjhen and Sanmin, can be degraded by the implementation of various disease control strategies during the epidemics. The proposed online disease prediction BME-SIR model can provide governmental agencies with a valuable reference for timely identification, control, and efficient prevention of DF spread across space-time.
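The SIR backbone of the model can be sketched as a discrete-time update; the rates and population size below are illustrative, and the Bayesian maximum entropy spatial framework is not represented.

```python
import numpy as np

def sir_step(S, I, R, beta, gamma, N):
    """One (daily) update of a discrete-time SIR model."""
    new_inf = beta * S * I / N   # new infections
    new_rec = gamma * I          # new recoveries
    return S - new_inf, I + new_inf - new_rec, R + new_rec

N = 100000.0                     # illustrative population
S, I, R = N - 10, 10.0, 0.0      # seed the outbreak with 10 cases
traj = []
for _ in range(120):             # simulate 120 days
    S, I, R = sir_step(S, I, R, beta=0.4, gamma=0.2, N=N)
    traj.append(I)
print(round(max(traj)))          # epidemic peak of infected individuals
```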
A Four-Stage Hybrid Model for Hydrological Time Series Forecasting
Di, Chongli; Yang, Xiaohua; Wang, Xiaochao
2014-01-01
Hydrological time series forecasting remains a difficult task due to its complicated nonlinear, non-stationary and multi-scale characteristics. To solve this difficulty and improve the prediction accuracy, a novel four-stage hybrid model is proposed for hydrological time series forecasting based on the principle of ‘denoising, decomposition and ensemble’. The proposed model has four stages, i.e., denoising, decomposition, components prediction and ensemble. In the denoising stage, the empirical mode decomposition (EMD) method is utilized to reduce the noises in the hydrological time series. Then, an improved method of EMD, the ensemble empirical mode decomposition (EEMD), is applied to decompose the denoised series into a number of intrinsic mode function (IMF) components and one residual component. Next, the radial basis function neural network (RBFNN) is adopted to predict the trend of all of the components obtained in the decomposition stage. In the final ensemble prediction stage, the forecasting results of all of the IMF and residual components obtained in the third stage are combined to generate the final prediction results, using a linear neural network (LNN) model. For illustration and verification, six hydrological cases with different characteristics are used to test the effectiveness of the proposed model. The proposed hybrid model performs better than conventional single models, the hybrid models without denoising or decomposition and the hybrid models based on other methods, such as the wavelet analysis (WA)-based hybrid models. In addition, the denoising and decomposition strategies decrease the complexity of the series and reduce the difficulties of the forecasting. With its effective denoising and accurate decomposition ability, high prediction precision and wide applicability, the new model is very promising for complex time series forecasting. This new forecast model is an extension of nonlinear prediction models. PMID:25111782
Susan E. Meyer; Phil S. Allen
2009-01-01
A principal goal of seed germination modelling for wild species is to predict germination timing under fluctuating field conditions. We coupled our previously developed hydrothermal time, thermal and hydrothermal afterripening time, and hydration-dehydration models for dormancy loss and germination with field seed zone temperature and water potential measurements from...
Olugasa, Babasola O; Odigie, Eugene A; Lawani, Mike; Ojo, Johnson F
2015-01-01
The objective was to develop a case-pattern model for Lassa fever (LF) among humans and derive predictors of the time-trend point distribution of LF cases in Liberia, in view of the prevailing under-reporting and the public health challenge posed by the disease in the country. Five years of retrospective countrywide data on the distribution of LF among humans were used to train a time-trend model of the disease in Liberia. A time-trend quadratic model was selected due to its goodness-of-fit (R² = 0.89, P < 0.05) and best performance compared to linear and exponential models. Parameter predictors were estimated by the least squares method to predict LF cases for a prospective 5-year period covering 2013-2017. The two-stage predictive model of the LF case-pattern between 2013 and 2017 was characterized by a prospective decline within the south-coast county of Grand Bassa over the forecast period and an upward case-trend within the northern county of Nimba. A case-specific exponential increase was predicted for the first 2 years (2013-2014), with a geometric increase over the next 3 years (2015-2017) in Nimba County. This paper describes a translational application of the space-time distribution pattern of the LF epidemics reported in Liberia during 2008-2012, on which a predictive model was developed. We propose a computationally feasible two-stage space-time permutation approach to estimate the time-trend parameters and conduct predictive inference on LF in Liberia.
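A quadratic time-trend fit and forward extrapolation of the kind described can be sketched with numpy.polyfit; the yearly counts below are synthetic, not the Liberian LF data.

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical yearly LF case counts following a quadratic trend (illustrative).
years = np.arange(2008, 2013, dtype=float)
x = years - 2008
cases = 40 + 6 * x + 1.5 * x**2 + rng.normal(0, 1, x.size)

# Fit the quadratic time-trend model by least squares, then forecast 2013-2017.
coef = np.polyfit(x, cases, deg=2)
future = np.arange(2013, 2018, dtype=float)
forecast = np.polyval(coef, future - 2008)
print(np.round(forecast))
```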
Improved LTVMPC design for steering control of autonomous vehicle
NASA Astrophysics Data System (ADS)
Velhal, Shridhar; Thomas, Susy
2017-01-01
An improved linear time-varying model predictive control (LTV-MPC) scheme for steering control of an autonomous vehicle running on a slippery road is presented. The control strategy is designed so that the vehicle follows a predefined trajectory at the highest possible entry speed. In LTV-MPC, the nonlinear vehicle model is successively linearized at each sampling instant, and the resulting linear time-varying model is used by the MPC to predict over the future horizon. Incorporating the predicted input horizon into each successive linearization improves the effectiveness of the controller. Tracking performance using front-wheel steering and braking at all four wheels is presented to illustrate the effectiveness of the proposed method.
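The successive-linearization step can be illustrated with a toy kinematic single-track model: at each instant the nonlinear model is linearized (here by central finite differences) to obtain the time-varying A and B matrices that an MPC would hand to its QP solver. The model, wheelbase, speed and sample time are assumed values for illustration, not those of the paper.

```python
import math

DT, L_WB = 0.1, 2.7  # sample time [s] and wheelbase [m] (assumed values)

def f(x, u):
    """Nonlinear kinematic model: state (X, Y, heading), input = steer angle.
    Constant speed is assumed to keep the sketch small."""
    px, py, th = x
    v = 10.0
    return (px + DT * v * math.cos(th),
            py + DT * v * math.sin(th),
            th + DT * v * math.tan(u) / L_WB)

def linearize(x, u, eps=1e-6):
    """Jacobians A = df/dx and B = df/du by central finite differences."""
    n = len(x)
    A = [[0.0] * n for _ in range(n)]
    for j in range(n):
        xp, xm = list(x), list(x)
        xp[j] += eps
        xm[j] -= eps
        fp, fm = f(tuple(xp), u), f(tuple(xm), u)
        for i in range(n):
            A[i][j] = (fp[i] - fm[i]) / (2 * eps)
    fp, fm = f(x, u + eps), f(x, u - eps)
    B = [(fp[i] - fm[i]) / (2 * eps) for i in range(n)]
    return A, B

def predict_horizon(x0, inputs):
    """Propagate the horizon, relinearizing at every predicted step."""
    traj, x = [x0], x0
    for u in inputs:
        linearize(x, u)   # A, B would feed the QP in a real MPC
        x = f(x, u)       # here we simply roll the nonlinear model forward
        traj.append(x)
    return traj
```

Relinearizing along the predicted trajectory, rather than only at the current state, is the refinement the abstract credits for the improved controller.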
Real-time Ensemble Forecasting of Coronal Mass Ejections using the WSA-ENLIL+Cone Model
NASA Astrophysics Data System (ADS)
Mays, M. L.; Taktakishvili, A.; Pulkkinen, A. A.; MacNeice, P. J.; Rastaetter, L.; Kuznetsova, M. M.; Odstrcil, D.
2013-12-01
Ensemble forecasting of coronal mass ejections (CMEs) provides significant information in that it estimates the spread or uncertainty in CME arrival time predictions due to uncertainties in determining CME input parameters. Ensemble modeling of CME propagation in the heliosphere is performed by forecasters at the Space Weather Research Center (SWRC) using the WSA-ENLIL cone model available at the Community Coordinated Modeling Center (CCMC). SWRC is an in-house research-based operations team at the CCMC which provides interplanetary space weather forecasting for NASA's robotic missions and performs real-time model validation. A distribution of n (routinely n=48) CME input parameters is generated using the CCMC Stereo CME Analysis Tool (StereoCAT), which employs geometrical triangulation techniques. These input parameters are used to perform n different simulations, yielding an ensemble of solar wind parameters at various locations of interest (satellites or planets), including a probability distribution of CME shock arrival times (for hits) and geomagnetic storm strength (for Earth-directed hits). Ensemble simulations have been performed experimentally in real time at the CCMC since January 2013. We present the results of ensemble simulations for a total of 15 CME events, 10 of which were performed in real time. The observed CME arrival was within the range of ensemble arrival time predictions for 5 of the 12 ensemble runs containing hits. For each of the 12 ensembles predicting hits, the average arrival time prediction was compared with the actual arrival time, yielding a mean absolute error of 8.20 hours across the 12 ensembles, which is comparable to current forecasting errors. Considerations for the accuracy of ensemble CME arrival time predictions include the initial distribution of CME input parameters, particularly its mean and spread.
When the observed arrival falls outside the predicted range, the tested CME input parameters can still be ruled out as the source of the prediction error. Prediction errors can also arise from ambient model parameters, such as the accuracy of the solar wind background, and from other limitations. Additionally, the ensemble modeling setup was used to complete a parametric case study of the sensitivity of the CME arrival time prediction to free parameters of the ambient solar wind model and the CME.
Jennifer K. Rawlins; Bruce A. Roundy; Dennis Eggett; Nathan Cline
2011-01-01
Accurate prediction of germination for species used for semi-arid land revegetation would support selection of plant materials for specific climatic conditions and sites. Wet thermal-time models predict germination time by summing progress toward germination subpopulation percentages as a function of temperature across intermittent wet periods or within singular wet...
Jin, Haomiao; Wu, Shinyi; Di Capua, Paul
2015-09-03
Depression is a common but often undiagnosed comorbid condition of people with diabetes. Mass screening can detect undiagnosed depression but may require significant resources and time. The objectives of this study were 1) to develop a clinical forecasting model that predicts comorbid depression among patients with diabetes and 2) to evaluate a model-based screening policy that saves resources and time by screening only patients considered as depressed by the clinical forecasting model. We trained and validated 4 machine learning models by using data from 2 safety-net clinical trials; we chose the one with the best overall predictive ability as the ultimate model. We compared model-based policy with alternative policies, including mass screening and partial screening, on the basis of depression history or diabetes severity. Logistic regression had the best overall predictive ability of the 4 models evaluated and was chosen as the ultimate forecasting model. Compared with mass screening, the model-based policy can save approximately 50% to 60% of provider resources and time but will miss identifying about 30% of patients with depression. Partial-screening policy based on depression history alone found only a low rate of depression. Two other heuristic-based partial screening policies identified depression at rates similar to those of the model-based policy but cost more in resources and time. The depression prediction model developed in this study has compelling predictive ability. By adopting the model-based depression screening policy, health care providers can use their resources and time better and increase their efficiency in managing their patients with depression.
Trehan, Sumeet; Carlberg, Kevin T.; Durlofsky, Louis J.
2017-07-14
A machine learning–based framework for modeling the error introduced by surrogate models of parameterized dynamical systems is proposed. The framework entails the use of high-dimensional regression techniques (e.g., random forests and LASSO) to map a large set of inexpensively computed “error indicators” (i.e., features) produced by the surrogate model at a given time instance to a prediction of the surrogate-model error in a quantity of interest (QoI). This eliminates the need for the user to hand-select a small number of informative features. The methodology requires a training set of parameter instances at which the time-dependent surrogate-model error is computed by simulating both the high-fidelity and surrogate models. Using these training data, the method first determines regression-model locality (via classification or clustering) and subsequently constructs a “local” regression model to predict the time-instantaneous error within each identified region of feature space. We consider 2 uses for the resulting error model: (1) as a correction to the surrogate-model QoI prediction at each time instance and (2) as a way to statistically model arbitrary functions of the time-dependent surrogate-model error (e.g., time-integrated errors). We then apply the proposed framework to model errors in reduced-order models of nonlinear oil-water subsurface flow simulations, with time-varying well-control (bottom-hole pressure) parameters. The reduced-order models used in this work entail application of trajectory piecewise linearization in conjunction with proper orthogonal decomposition. Moreover, when the first use of the method is considered, numerical experiments demonstrate consistent improvement in accuracy in the time-instantaneous QoI prediction relative to the original surrogate model, across a large number of test cases.
When the second use is considered, results show that the proposed method provides accurate statistical predictions of the time- and well-averaged errors.
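The locality-then-regression structure can be sketched with a single scalar error indicator: assign each feature value to a region (standing in for the clustering/classification step), then fit a per-region least-squares model from indicator to error. The threshold and linear models are illustrative choices; the paper uses random forests/LASSO over many indicators.

```python
# Sketch of the two-step error model: region assignment followed by a
# "local" regression in each region. All data and the region threshold
# are invented for illustration.

def assign_region(feature, threshold=0.5):
    return 0 if feature < threshold else 1

def fit_local_models(features, errors):
    """Per-region least-squares fit of error = a + b * feature."""
    models = {}
    for r in (0, 1):
        xs = [f for f in features if assign_region(f) == r]
        ys = [e for f, e in zip(features, errors) if assign_region(f) == r]
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        sxx = sum((x - mx) ** 2 for x in xs) or 1.0
        b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sxx
        models[r] = (my - b * mx, b)  # (intercept, slope)
    return models

def predict_error(models, feature):
    """Apply the local model of whichever region the feature falls in."""
    a, b = models[assign_region(feature)]
    return a + b * feature
```

The predicted error can then be added to the surrogate's QoI prediction as a correction, which is the first of the two uses described above.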
The RAPIDD ebola forecasting challenge: Synthesis and lessons learnt.
Viboud, Cécile; Sun, Kaiyuan; Gaffey, Robert; Ajelli, Marco; Fumanelli, Laura; Merler, Stefano; Zhang, Qian; Chowell, Gerardo; Simonsen, Lone; Vespignani, Alessandro
2018-03-01
Infectious disease forecasting is gaining traction in the public health community; however, limited systematic comparisons of model performance exist. Here we present the results of a synthetic forecasting challenge inspired by the West African Ebola crisis in 2014-2015 and involving 16 international academic teams and US government agencies, and compare the predictive performance of 8 independent modeling approaches. Challenge participants were invited to predict 140 epidemiological targets across 5 different time points of 4 synthetic Ebola outbreaks, each involving different levels of interventions and "fog of war" in outbreak data made available for predictions. Prediction targets included 1-4 week-ahead case incidences, outbreak size, peak timing, and several natural history parameters. With respect to weekly case incidence targets, ensemble predictions based on a Bayesian average of the 8 participating models outperformed any individual model and did substantially better than a null auto-regressive model. There was no relationship between model complexity and prediction accuracy; however, the top performing models for short-term weekly incidence were reactive models with few parameters, fitted to a short and recent part of the outbreak. Individual model outputs and ensemble predictions improved with data accuracy and availability; by the second time point, just before the peak of the epidemic, estimates of final size were within 20% of the target. The 4th challenge scenario - mirroring an uncontrolled Ebola outbreak with substantial data reporting noise - was poorly predicted by all modeling teams. Overall, this synthetic forecasting challenge provided a deep understanding of model performance under controlled data and epidemiological conditions. 
We recommend such "peace time" forecasting challenges as key elements to improve coordination and inspire collaboration between modeling groups ahead of the next pandemic threat, and to assess model forecasting accuracy for a variety of known and hypothetical pathogens. Published by Elsevier B.V.
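The Bayesian model-averaging idea behind the winning ensemble can be sketched as weighting each model by its likelihood on past observations and averaging the next-step predictions with those weights. The likelihood form (exp of the negative squared error) and the data are assumptions for illustration, not the challenge's actual weighting scheme.

```python
import math

# Sketch of a likelihood-weighted ensemble forecast: models that fit the
# observed past well dominate the weighted average of next-step predictions.

def ensemble_predict(model_preds_past, observed_past, model_preds_next):
    weights = []
    for preds in model_preds_past:
        sse = sum((p - o) ** 2 for p, o in zip(preds, observed_past))
        weights.append(math.exp(-sse))  # assumed Gaussian-style likelihood
    total = sum(weights)
    return sum(w * p for w, p in zip(weights, model_preds_next)) / total
```

A model with a perfect track record receives essentially all the weight, so the ensemble output tracks its prediction; poorly fitting models contribute almost nothing.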
The prediction of speech intelligibility in classrooms using computer models
NASA Astrophysics Data System (ADS)
Dance, Stephen; Dentoni, Roger
2005-04-01
Two classrooms were measured and modeled using the industry-standard CATT model and the Web-based model CISM. Sound levels, reverberation times and speech intelligibility were predicted in these rooms using data for 7 octave bands. It was found that both models predicted overall sound levels to within 2 dB. Overall reverberation time, however, was predicted accurately by CATT (14% prediction error) but not by CISM (41% prediction error); classical theory gave a 30% prediction error. For the speech transmission index (STI), CATT predicted within 11%, CISM within 3% and Sabine within 28% of the measured value. It should be noted that CISM took approximately 15 seconds to calculate, while CATT took 15 minutes. CISM is freely available online at www.whyverne.co.uk/acoustics/Pages/cism/cism.html
Can air temperature be used to project influences of climate change on stream temperature?
Arismendi, Ivan; Safeeq, Mohammad; Dunham, Jason B.; Johnson, Sherri L.
2014-01-01
Worldwide, lack of data on stream temperature has motivated the use of regression-based statistical models to predict stream temperatures based on more widely available data on air temperatures. Such models have been widely applied to project responses of stream temperatures under climate change, but the performance of these models has not been fully evaluated. To address this knowledge gap, we examined the performance of two widely used linear and nonlinear regression models that predict stream temperatures based on air temperatures. We evaluated model performance and temporal stability of model parameters in a suite of regulated and unregulated streams with 11–44 years of stream temperature data. Although such models may have validity when predicting stream temperatures within the span of time that corresponds to the data used to develop them, model predictions did not transfer well to other time periods. Validation of model predictions of most recent stream temperatures, based on air temperature–stream temperature relationships from previous time periods often showed poor performance when compared with observed stream temperatures. Overall, model predictions were less robust in regulated streams and they frequently failed in detecting the coldest and warmest temperatures within all sites. In many cases, the magnitude of errors in these predictions falls within a range that equals or exceeds the magnitude of future projections of climate-related changes in stream temperatures reported for the region we studied (between 0.5 and 3.0 °C by 2080). The limited ability of regression-based statistical models to accurately project stream temperatures over time likely stems from the fact that underlying processes at play, namely the heat budgets of air and water, are distinctive in each medium and vary among localities and through time.
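The temporal-transfer test described above can be sketched directly: fit a linear air-to-stream temperature regression on an early period, then score it on a later period. The data below are invented to show how a shift in the underlying air-water relationship appears as validation error even when the training-period fit is perfect.

```python
# Sketch of temporal validation of an air->stream temperature regression.
# Hypothetical data: the later period has the same slope but a warmer
# baseline, so the early-period model is biased when transferred.

def fit_linear(xs, ys):
    """Ordinary least squares for stream_temp = a + b * air_temp."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    b = sxy / sxx
    return my - b * mx, b  # (intercept, slope)

def rmse(model, xs, ys):
    a, b = model
    return (sum((y - (a + b * x)) ** 2 for x, y in zip(xs, ys)) / len(xs)) ** 0.5

# Early period: stream = 0.5 * air + 2 (exactly linear, so RMSE is ~0).
early_air, early_stream = [0.0, 10.0, 20.0], [2.0, 7.0, 12.0]
model = fit_linear(early_air, early_stream)

# Later period: relationship shifted to stream = 0.5 * air + 3.
late_air, late_stream = [5.0, 15.0], [5.5, 10.5]
transfer_error = rmse(model, late_air, late_stream)
```

Here `transfer_error` is 1.0 °C even though the training fit is exact, which is the kind of error the authors note can rival projected climate-driven warming (0.5 to 3.0 °C by 2080).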
NASA Technical Reports Server (NTRS)
He, Yuning
2015-01-01
Safety of unmanned aerial systems (UAS) is paramount, but the large number of dynamically changing controller parameters makes it hard to determine if the system is currently stable, and the time before loss of control if not. We propose a hierarchical statistical model using Treed Gaussian Processes to predict (i) whether a flight will be stable (success) or become unstable (failure), (ii) the time-to-failure if unstable, and (iii) time series outputs for flight variables. We first classify the current flight input into success or failure types, and then use separate models for each class to predict the time-to-failure and time series outputs. As different inputs may cause failures at different times, we have to model variable length output curves. We use a basis representation for curves and learn the mappings from input to basis coefficients. We demonstrate the effectiveness of our prediction methods on a NASA neuro-adaptive flight control system.
Progress towards a more predictive model for hohlraum radiation drive and symmetry
NASA Astrophysics Data System (ADS)
Jones, O. S.; Suter, L. J.; Scott, H. A.; Barrios, M. A.; Farmer, W. A.; Hansen, S. B.; Liedahl, D. A.; Mauche, C. W.; Moore, A. S.; Rosen, M. D.; Salmonson, J. D.; Strozzi, D. J.; Thomas, C. A.; Turnbull, D. P.
2017-05-01
For several years, we have been calculating the radiation drive in laser-heated gold hohlraums using flux-limited heat transport with a limiter of 0.15, tabulated values of local thermodynamic equilibrium (LTE) gold opacity, and an approximate model for non-local-thermodynamic-equilibrium (NLTE) gold emissivity (DCA_2010). This model has been successful in predicting the radiation drive in vacuum hohlraums, but for gas-filled hohlraums used to drive capsule implosions, it consistently predicts too much drive and capsule bang times earlier than measured. In this work, we introduce a new model that brings the calculated bang time into better agreement with the measured bang time. The new model employs (1) a numerical grid that is fully converged in space, energy, and time, (2) a modified approximate NLTE model that includes more physics and is in better agreement with more detailed offline emissivity models, and (3) a reduced flux limiter value of 0.03. We applied this model to gas-filled hohlraum experiments using high density carbon and plastic ablator capsules that had hohlraum He fill gas densities ranging from 0.06 to 1.6 mg/cc and hohlraum diameters of 5.75 or 6.72 mm. The new model predicts bang times to within ±100 ps for most experiments with low to intermediate fill densities (up to 0.85 mg/cc). This model predicts higher temperatures in the plasma than the old model and also predicts that at higher gas fill densities, a significant amount of inner beam laser energy escapes the hohlraum through the opposite laser entrance hole.
Interactions of timing and prediction error learning.
Kirkpatrick, Kimberly
2014-01-01
Timing and prediction error learning have historically been treated as independent processes, but growing evidence indicates that they are not orthogonal. Timing emerges at the earliest time point when conditioned responses are observed, and temporal variables modulate prediction error learning in both simple conditioning and cue competition paradigms. In addition, prediction errors, through changes in reward magnitude or value, alter the timing of behavior. Thus, there appears to be a bi-directional interaction between timing and prediction error learning. Modern theories have attempted to integrate the two processes with mixed success. A neurocomputational approach to theory development is espoused, which draws on neurobiological evidence to guide and constrain computational model development. Heuristics for future model development are presented with the goal of sparking new approaches to theory development in the timing and prediction error fields. Copyright © 2013 Elsevier B.V. All rights reserved.
Verification of models for ballistic movement time and endpoint variability.
Lin, Ray F; Drury, Colin G
2013-01-01
A hand control movement is composed of several ballistic movements. The time required to perform a ballistic movement and its endpoint variability are two important properties in developing movement models. The purpose of this study was to test potential models for predicting these two properties. Twelve participants conducted ballistic movements of specific amplitudes using a drawing tablet. The measured movement times and endpoint variability were then used to verify the models. Hoffmann and Gan's movement time model (Hoffmann, 1981; Gan and Hoffmann, 1988) predicted more than 90.7% of data variance across 84 individual measurements. A new, theoretically developed ballistic movement variability model proved better than Howarth, Beggs, and Bowden's (1971) model, predicting on average 84.8% of stopping-variable error and 88.3% of aiming-variable error. These two validated models will help build solid theoretical movement models and evaluate input devices. This article provides improved models for predicting the end accuracy and movement time of ballistic movements, which are desirable in rapid aiming tasks such as keying in numbers on a smart phone. The models allow better design of aiming tasks, for example button sizes on mobile phones for different user populations.
Adaptive time-variant models for fuzzy-time-series forecasting.
Wong, Wai-Keung; Bai, Enjian; Chu, Alice Wai-Ching
2010-12-01
Fuzzy time series have been applied to the prediction of enrollment, temperature, stock indices, and other domains. Related studies mainly focus on three factors, namely, the partition of the universe of discourse, the content of the forecasting rules, and the methods of defuzzification, all of which greatly influence the prediction accuracy of forecasting models. These studies use fixed analysis window sizes for forecasting. In this paper, an adaptive time-variant fuzzy-time-series forecasting model (ATVF) is proposed to improve forecasting accuracy. The proposed model automatically adapts the analysis window size of the fuzzy time series based on the prediction accuracy in the training phase and uses heuristic rules to generate forecasting values in the testing phase. The performance of the ATVF model is tested using both simulated and actual time series, including the enrollments at the University of Alabama, Tuscaloosa, and the Taiwan Stock Exchange Capitalization Weighted Stock Index (TAIEX). The experimental results show that the proposed ATVF model achieves a significant improvement in forecasting accuracy compared to other fuzzy-time-series forecasting models.
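The window-adaptation idea can be sketched with a moving-average forecaster standing in for the fuzzy-time-series machinery: each candidate window size is scored by its one-step-ahead error on the training series, and the best-scoring size is kept for forecasting.

```python
# Sketch of training-phase window adaptation. The forecaster and the
# candidate window sizes are illustrative stand-ins; the paper adapts the
# analysis window of a fuzzy time series rather than a moving average.

def window_forecast(history, w):
    """One-step forecast: average of the last w observations."""
    return sum(history[-w:]) / w

def training_error(series, w):
    """Mean absolute one-step-ahead error of window size w on the series."""
    errs = [abs(series[t] - window_forecast(series[:t], w))
            for t in range(w, len(series))]
    return sum(errs) / len(errs)

def adapt_window(series, candidates=(1, 2, 3, 4)):
    """Pick the window size with the lowest training error."""
    return min(candidates, key=lambda w: training_error(series, w))
```

On a strongly trending series the shortest window wins, because averaging over a long window lags the trend; a noisier series would favor a longer window.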
A perturbative approach for enhancing the performance of time series forecasting.
de Mattos Neto, Paulo S G; Ferreira, Tiago A E; Lima, Aranildo R; Vasconcelos, Germano C; Cavalcanti, George D C
2017-04-01
This paper proposes a method for time series prediction based on perturbation theory. The approach continuously adjusts an initial forecasting model to asymptotically approximate a desired time series model. First, a predictive model generates an initial forecast for the time series. Second, a residual time series is calculated as the difference between the original time series and the initial forecast. If the residual series is not white noise, it can be used to improve the accuracy of the initial model, and a new predictive model is fitted to the residual series. The whole process is repeated until convergence or until the residual series becomes white noise. The output of the method is then given by summing the outputs of all trained predictive models, in a perturbative sense. To test the method, an experimental investigation was conducted on six real-world time series. A comparison was made with six other methods tested here and ten other results found in the literature. The results show that not only is the performance of the initial model significantly improved, but the proposed method also outperforms the previously published results. Copyright © 2017 Elsevier Ltd. All rights reserved.
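The residual-correction loop can be sketched as follows, with a trivial mean predictor standing in for the trained forecasting models; only the iterate-on-residuals structure follows the method described above.

```python
# Sketch of the perturbative scheme: fit a model, fit further models to
# the remaining residual series, and sum all model outputs. A constant
# mean predictor is used as the stand-in "model" for illustration.

def fit_mean_model(series):
    """Stand-in model: always predicts the series mean."""
    m = sum(series) / len(series)
    return lambda: m

def perturbative_fit(series, max_rounds=5, tol=1e-8):
    models, residual = [], list(series)
    for _ in range(max_rounds):
        model = fit_mean_model(residual)
        models.append(model)
        residual = [r - model() for r in residual]
        # Stop once the residual is (numerically) negligible; a real
        # implementation would test for white noise instead.
        if sum(abs(r) for r in residual) / len(residual) < tol:
            break
    return models

def predict(models):
    """Final output: sum of all trained models' predictions."""
    return sum(m() for m in models)
```

After the first round the residual is zero-mean, so subsequent mean models contribute nothing; with richer models (the paper's setting), each round captures structure the previous rounds missed.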
Epileptic Seizures Prediction Using Machine Learning Methods
Usman, Syed Muhammad
2017-01-01
Epileptic seizures occur due to disorder in brain functionality, which can affect a patient's health. Predicting epileptic seizures before they begin is quite useful for preventing them by medication. Machine learning techniques and computational methods are used to predict epileptic seizures from electroencephalogram (EEG) signals. However, preprocessing of EEG signals for noise removal and feature extraction are two major issues that adversely affect both anticipation time and the true-positive prediction rate. Therefore, we propose a model that provides reliable methods for both preprocessing and feature extraction. Our model predicts epileptic seizures sufficiently far in advance of seizure onset and provides a better true-positive rate. We applied empirical mode decomposition (EMD) for preprocessing and extracted time- and frequency-domain features to train a prediction model. The proposed model detects the start of the preictal state, the state that begins a few minutes before seizure onset, with a higher true-positive rate than traditional methods (92.23%), a maximum anticipation time of 33 minutes, and an average prediction time of 23.6 minutes on the scalp EEG CHB-MIT dataset of 22 subjects. PMID:29410700
Multivariable Time Series Prediction for the Icing Process on Overhead Power Transmission Line
Li, Peng; Zhao, Na; Zhou, Donghua; Cao, Min; Li, Jingjie; Shi, Xinling
2014-01-01
The design of monitoring and predictive alarm systems is necessary to manage icing on overhead power transmission lines. Given the complexity, nonlinearity, and fitfulness of the line icing process, a model based on multivariate time series is presented here to predict the icing load of a transmission line. In this model, the time effects of micrometeorological parameters on the icing process are analyzed. Phase-space reconstruction theory and a machine learning method are then applied to establish the prediction model, which fully utilizes the history of multivariate time-series data in local monitoring systems to represent the mapping relationship between icing load and micrometeorological factors. Reflecting the fitful character of line icing, simulations were carried out during the same icing process and during different processes to test the model's prediction precision and robustness. According to the simulation results for the Tao-Luo-Xiong Transmission Line, the model demonstrates good prediction accuracy across different processes when the prediction length is less than two hours, and it would help power grid departments decide to take action in advance to address potential icing disasters. PMID:25136653
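The phase-space reconstruction step can be sketched as delay embedding followed by a nearest-neighbor regressor (an assumed stand-in for the paper's machine learning method); the embedding dimension and delay below are arbitrary illustrative choices.

```python
# Sketch of phase-space reconstruction by delay embedding, plus a simple
# nearest-neighbor forecast built on the embedded vectors.

def delay_embed(x, m=3, tau=2):
    """Return delay vectors [x[t], x[t-tau], ..., x[t-(m-1)*tau]]."""
    start = (m - 1) * tau
    return [[x[t - k * tau] for k in range(m)] for t in range(start, len(x))]

def knn_forecast(x, m=3, tau=2, k=2):
    """Predict the next value as the average successor of the k delay
    vectors closest to the current one."""
    vecs = delay_embed(x, m, tau)
    query = vecs[-1]
    start = (m - 1) * tau
    cands = []
    for i, v in enumerate(vecs[:-1]):     # vectors with a known successor
        t = start + i                     # time index of this vector
        dist = sum((a - b) ** 2 for a, b in zip(v, query))
        cands.append((dist, x[t + 1]))
    cands.sort()
    return sum(succ for _, succ in cands[:k]) / k
```

In the paper's setting the embedded vectors would combine icing load with micrometeorological factors (a multivariate embedding) and feed a trained regressor rather than this k-NN average.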
Piecewise multivariate modelling of sequential metabolic profiling data.
Rantalainen, Mattias; Cloarec, Olivier; Ebbels, Timothy M D; Lundstedt, Torbjörn; Nicholson, Jeremy K; Holmes, Elaine; Trygg, Johan
2008-02-19
Modelling the time-related behaviour of biological systems is essential for understanding their dynamic responses to perturbations. In metabolic profiling studies, the sampling rate and number of sampling points are often restricted due to experimental and biological constraints. A supervised multivariate modelling approach with the objective to model the time-related variation in the data for short and sparsely sampled time-series is described. A set of piecewise Orthogonal Projections to Latent Structures (OPLS) models are estimated, describing changes between successive time points. The individual OPLS models are linear, but the piecewise combination of several models accommodates modelling and prediction of changes which are non-linear with respect to the time course. We demonstrate the method on both simulated and metabolic profiling data, illustrating how time related changes are successfully modelled and predicted. The proposed method is effective for modelling and prediction of short and multivariate time series data. A key advantage of the method is model transparency, allowing easy interpretation of time-related variation in the data. The method provides a competitive complement to commonly applied multivariate methods such as OPLS and Principal Component Analysis (PCA) for modelling and analysis of short time-series data.
NASA Astrophysics Data System (ADS)
Hu, Yijia; Zhong, Zhong; Zhu, Yimin; Ha, Yao
2018-04-01
In this paper, a statistical forecast model using a time-scale decomposition method is established for seasonal prediction of the rainfall during the flood period (FPR) over the middle and lower reaches of the Yangtze River Valley (MLYRV). The method decomposes the rainfall over the MLYRV into three time-scale components: an interannual component with periods shorter than 8 years, an interdecadal component with periods of 8 to 30 years, and a longer-term component with periods greater than 30 years. Predictors are then selected for the three time-scale components of FPR through correlation analysis. Finally, a statistical forecast model is established using multiple linear regression to predict each of the three time-scale components of the FPR. The results show that this forecast model can capture the interannual and interdecadal variation of FPR. A hindcast of FPR for the 14 years from 2001 to 2014 shows that the FPR is predicted successfully in 11 of the 14 years. This forecast model performs better than a model using the traditional scheme without time-scale decomposition. Therefore, the statistical forecast model using the time-scale decomposition technique has good skill and application value in the operational prediction of FPR over the MLYRV.
Liu, Shi; Wu, Yu; Wooten, H Omar; Green, Olga; Archer, Brent; Li, Harold; Yang, Deshan
2016-03-08
A software tool was developed to predict, for a new treatment plan, the treatment delivery time for radiation therapy (RT) treatments of patients on the ViewRay magnetic resonance image-guided radiation therapy (MR-IGRT) delivery system. This tool is necessary for managing patient treatment scheduling in our clinic. The predicted treatment delivery time and the assessment of plan complexity could also be useful in treatment planning. A patient's total treatment delivery time, not including the time required for localization, is modeled as the sum of four components: 1) the treatment initialization time; 2) the total beam-on time; 3) the gantry rotation time; and 4) the multileaf collimator (MLC) motion time. Each of the four components is predicted separately. The total beam-on time can be calculated from the planned beam-on time and the decay-corrected dose rate. To predict the remaining components, we retrospectively analyzed patient treatment delivery record files. The initialization time is shown to be random, since it depends on the final gantry angle of the previous treatment. Based on modeling the relationship between gantry rotation angles and the corresponding rotation times, linear regression is applied to predict the gantry rotation time. The MLC motion time is calculated using the leaf-delay modeling method and the leaf motion speed. A quantitative analysis was performed to understand the correlation between total treatment time and plan complexity. The proposed algorithm predicts the ViewRay treatment delivery time with an average prediction error of 0.22 min (1.82%) and a maximal prediction error of 0.89 min (7.88%). The analysis shows a correlation between the plan modulation (PM) factor and both the total treatment delivery time and the treatment delivery duty cycle. A possibility was identified to significantly reduce MLC motion time by optimizing the positions of closed MLC pairs.
The accuracy of the proposed prediction algorithm is sufficient to support patient treatment appointment scheduling. The software tool is currently in daily use in our clinic and could also serve as an important indicator of treatment plan complexity.
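The additive decomposition can be sketched directly; every coefficient below (gantry regression slope and intercept, leaf speed, decay factor) is an invented placeholder rather than a fitted clinical value.

```python
# Sketch of the four-component delivery-time model:
# total = initialization + beam-on + gantry rotation + MLC motion.
# All constants are hypothetical placeholders for illustration.

BEAM_DECAY = 1.0          # dose-rate decay correction factor (assumed)
GANTRY_SLOPE = 0.02       # min per degree, from a hypothetical regression
GANTRY_INTERCEPT = 0.05   # min, hypothetical
MLC_SPEED = 2.0           # cm/min leaf speed (assumed)

def beam_on_time(planned_min):
    """Planned beam-on time scaled by the decay-corrected dose rate."""
    return planned_min / BEAM_DECAY

def gantry_time(rotation_deg):
    """Linear-regression model of rotation time vs. rotation angle."""
    return GANTRY_INTERCEPT + GANTRY_SLOPE * rotation_deg

def mlc_time(leaf_travel_cm):
    """Leaf travel distance divided by leaf speed."""
    return leaf_travel_cm / MLC_SPEED

def total_delivery_time(init_min, planned_min, rotations_deg, travels_cm):
    return (init_min
            + beam_on_time(planned_min)
            + sum(gantry_time(d) for d in rotations_deg)
            + sum(mlc_time(c) for c in travels_cm))
```

Because the initialization term depends on the previous treatment's final gantry angle, a real predictor would treat it as a random component rather than a fixed input as in this sketch.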
Adaptation of clinical prediction models for application in local settings.
Kappen, Teus H; Vergouwe, Yvonne; van Klei, Wilton A; van Wolfswinkel, Leo; Kalkman, Cor J; Moons, Karel G M
2012-01-01
When planning to use a validated prediction model in new patients, adequate performance is not guaranteed; for example, changes in clinical practice over time or a case mix different from the original validation population may result in inaccurate risk predictions. The aim was to demonstrate how clinical information can direct the updating of a prediction model and the development of a strategy for handling missing predictor values in clinical practice. A previously derived and validated prediction model for postoperative nausea and vomiting was updated using a data set of 1847 patients. The update consisted of 1) changing the definition of an existing predictor, 2) re-estimating the regression coefficient of a predictor, and 3) adding a new predictor to the model. The updated model was then validated in a new series of 3822 patients. Furthermore, several imputation models were considered to handle real-time missing values, so that possible missing predictor values could be anticipated during actual model use. Differences in clinical practice between our local population and the original derivation population guided the update strategy for the prediction model. The predictive accuracy of the updated model was better (c statistic, 0.68; calibration slope, 1.0) than that of the original model (c statistic, 0.62; calibration slope, 0.57). Inclusion of logistical variables in the imputation models, besides observed patient characteristics, contributed to a strategy for dealing with missing predictor values at the time of risk calculation. Extensive knowledge of local clinical processes provides crucial information to guide the process of adapting a prediction model to new clinical practices.
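The simplest of these update steps, re-estimating coefficients, can be sketched as logistic recalibration: keep the original model's linear predictor and refit only an overall intercept and slope on local data. The gradient-descent fit and the data are illustrative, not the study's actual estimation procedure.

```python
import math

# Sketch of logistic recalibration: given the original model's linear
# predictor lp for each local patient, fit p = sigmoid(a + b * lp).
# A calibration slope b near 1 and intercept a near 0 would indicate the
# original model already fits the local population.

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def recalibrate(lin_preds, outcomes, lr=0.1, epochs=2000):
    """Fit intercept a and slope b by gradient descent on the log-loss."""
    a, b = 0.0, 1.0  # start from the original model (identity calibration)
    n = len(lin_preds)
    for _ in range(epochs):
        ga = gb = 0.0
        for lp, y in zip(lin_preds, outcomes):
            err = sigmoid(a + b * lp) - y
            ga += err / n
            gb += err * lp / n
        a -= lr * ga
        b -= lr * gb
    return a, b
```

Redefining a predictor or adding a new one (the other update steps above) extends the same idea by refitting individual coefficients rather than just the overall intercept and slope.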
Kanai, Masashi; Okamoto, Kazuya; Yamamoto, Yosuke; Yoshioka, Akira; Hiramoto, Shuji; Nozaki, Akira; Nishikawa, Yoshitaka; Yamaguchi, Daisuke; Tomono, Teruko; Nakatsui, Masahiko; Baba, Mika; Morita, Tatsuya; Matsumoto, Shigemi; Kuroda, Tomohiro; Okuno, Yasushi; Muto, Manabu
2017-01-01
Background We aimed to develop an adaptable prognosis prediction model that could be applied at any time point during the treatment course for patients with cancer receiving chemotherapy, by applying time-series real-world big data. Methods Between April 2004 and September 2014, 4,997 patients with cancer who had received systemic chemotherapy were registered in a prospective cohort database at the Kyoto University Hospital. Of these, 2,693 patients with a death record were eligible for inclusion and divided into training (n = 1,341) and test (n = 1,352) cohorts. In total, 3,471,521 laboratory data at 115,738 time points, representing 40 laboratory items [e.g., white blood cell counts and albumin (Alb) levels] that were monitored for 1 year before the death event were applied for constructing prognosis prediction models. All possible prediction models comprising three different items from the 40 laboratory items (40C3 = 9,880) were generated in the training cohort, and the model selection was performed in the test cohort. The fitness of the selected models was externally validated in the validation cohort from three independent settings. Results A prognosis prediction model utilizing Alb, lactate dehydrogenase, and neutrophils was selected based on a strong ability to predict death events within 1–6 months, and a set of six prediction models corresponding to 1, 2, 3, 4, 5, and 6 months was developed. The area under the curve (AUC) ranged from 0.852 for the 1-month model to 0.713 for the 6-month model. External validation supported the performance of these models. Conclusion By applying time-series real-world big data, we successfully developed a set of six adaptable prognosis prediction models for patients with cancer receiving chemotherapy. PMID:28837592
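The exhaustive three-item model search can be sketched as follows; the scoring function is a placeholder for whatever test-cohort metric (e.g., AUC) is used in practice.

```python
from itertools import combinations
from math import comb

# 40 laboratory items; all three-item subsets (40 choose 3 = 9,880 models).
items = [f"lab_{i:02d}" for i in range(40)]
triples = list(combinations(items, 3))
print(len(triples))  # 9880

# Hypothetical selection: score each candidate model on a test cohort and
# keep the best one (a stand-in score here; in practice, e.g. AUC).
def score(triple):  # placeholder scoring function
    return -sum(hash(t) % 97 for t in triple)

best = max(triples, key=score)
print(best)
```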
Time prediction of failure a type of lamps by using general composite hazard rate model
NASA Astrophysics Data System (ADS)
Riaman; Lesmana, E.; Subartini, B.; Supian, S.
2018-03-01
This paper discusses estimation of a basic survival model to obtain the average predicted failure time of lamps. The estimate is for a parametric model, the general composite hazard rate model. The underlying random-time model is the exponential distribution, used as the basis, which has a constant hazard function. We discuss an example of survival model estimation for a composite hazard function using an exponential model as its basis. The model is estimated by estimating its parameters through construction of the survival function and the empirical cumulative distribution function. The resulting model is then used to predict the average failure time for this type of lamp. The data are grouped into several intervals, the average failure value is computed for each interval, and the average failure time of the model is then calculated per interval; the p-value obtained from the test is 0.3296.
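As an illustration of the exponential basis model: with a constant hazard rate λ, the survival function is S(t) = exp(−λt) and the mean failure time is 1/λ. The sketch below estimates λ by maximum likelihood from simulated failure times; the rate and sample size are invented, not lamp data from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated lamp failure times (hours) from an exponential model with a
# constant hazard rate lambda = 1/1000 (mean life 1000 h) -- illustrative.
lam_true = 1 / 1000
t = rng.exponential(1 / lam_true, size=5000)

# MLE for the exponential rate: lambda_hat = n / sum(t);
# the predicted mean failure time is then 1 / lambda_hat.
lam_hat = len(t) / t.sum()
mean_failure = 1 / lam_hat

# Empirical survival function vs. the fitted S(t) = exp(-lambda * t).
grid = np.linspace(0, 3000, 4)
S_emp = np.array([(t > g).mean() for g in grid])
S_fit = np.exp(-lam_hat * grid)
print(mean_failure, np.abs(S_emp - S_fit).max())
```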
Large-Scale Aerosol Modeling and Analysis
2008-09-30
... novel method of simultaneous real-time measurements of ice-nucleating particle concentrations and size-resolved chemical composition of individual ... is to develop a practical predictive capability for visibility and weather effects of aerosol particles for the entire globe for timely use in ... prediction follows that used in numerical weather prediction, namely real-time assessment for initialization of first-principles models. The Naval
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bessac, Julie; Constantinescu, Emil; Anitescu, Mihai
2018-03-01
We propose a statistical space-time model for predicting atmospheric wind speed based on deterministic numerical weather predictions and historical measurements. We consider a Gaussian multivariate space-time framework that combines multiple sources of past physical model outputs and measurements in order to produce a probabilistic wind speed forecast within the prediction window. We illustrate this strategy on wind speed forecasts during several months in 2012 for a region near the Great Lakes in the United States. The results show that the prediction is improved in the mean-squared sense relative to the numerical forecasts as well as in probabilistic scores. Moreover, the samples are shown to produce realistic wind scenarios based on sample spectra and space-time correlation structure.
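A much-simplified scalar analogue of combining a numerical forecast with measurement-based information is Gaussian precision weighting; the numbers below are illustrative and do not reproduce the paper's multivariate space-time model.

```python
# Toy Gaussian data fusion: combine a numerical forecast and a statistical
# estimate from past measurements, weighting each by its precision
# (inverse variance). A scalar caricature of the space-time framework.
def fuse(mean_nwp, var_nwp, mean_obs, var_obs):
    w = (1 / var_nwp) / (1 / var_nwp + 1 / var_obs)
    mean = w * mean_nwp + (1 - w) * mean_obs
    var = 1 / (1 / var_nwp + 1 / var_obs)
    return mean, var

# Wind speed (m/s): forecast 8.0 with variance 4.0; measurement-based
# estimate 6.0 with variance 1.0 (both invented).
m, v = fuse(8.0, 4.0, 6.0, 1.0)
print(m, v)  # fused mean lies closer to the lower-variance source
```

The fused variance is always smaller than either input variance, which is the scalar version of the improvement in mean-squared and probabilistic scores reported above.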
High accuracy satellite drag model (HASDM)
NASA Astrophysics Data System (ADS)
Storz, M.; Bowman, B.; Branson, J.
The dominant error source in the force models used to predict low perigee satellite trajectories is atmospheric drag. Errors in operational thermospheric density models cause significant errors in predicted satellite positions, since these models do not account for dynamic changes in atmospheric drag for orbit predictions. The Air Force Space Battlelab's High Accuracy Satellite Drag Model (HASDM) estimates and predicts (out three days) a dynamically varying high-resolution density field. HASDM includes the Dynamic Calibration Atmosphere (DCA) algorithm that solves for the phases and amplitudes of the diurnal, semidiurnal and terdiurnal variations of thermospheric density near real-time from the observed drag effects on a set of Low Earth Orbit (LEO) calibration satellites. The density correction is expressed as a function of latitude, local solar time and altitude. In HASDM, a time series prediction filter relates the extreme ultraviolet (EUV) energy index E10.7 and the geomagnetic storm index ap to the DCA density correction parameters. The E10.7 index is generated by the SOLAR2000 model, the first full spectrum model of solar irradiance. The estimated and predicted density fields will be used operationally to significantly improve the accuracy of predicted trajectories for all low perigee satellites.
High accuracy satellite drag model (HASDM)
NASA Astrophysics Data System (ADS)
Storz, Mark F.; Bowman, Bruce R.; Branson, Major James I.; Casali, Stephen J.; Tobiska, W. Kent
The dominant error source in force models used to predict low-perigee satellite trajectories is atmospheric drag. Errors in operational thermospheric density models cause significant errors in predicted satellite positions, since these models do not account for dynamic changes in atmospheric drag for orbit predictions. The Air Force Space Battlelab's High Accuracy Satellite Drag Model (HASDM) estimates and predicts (out three days) a dynamically varying global density field. HASDM includes the Dynamic Calibration Atmosphere (DCA) algorithm that solves for the phases and amplitudes of the diurnal and semidiurnal variations of thermospheric density near real-time from the observed drag effects on a set of Low Earth Orbit (LEO) calibration satellites. The density correction is expressed as a function of latitude, local solar time and altitude. In HASDM, a time series prediction filter relates the extreme ultraviolet (EUV) energy index E10.7 and the geomagnetic storm index ap, to the DCA density correction parameters. The E10.7 index is generated by the SOLAR2000 model, the first full spectrum model of solar irradiance. The estimated and predicted density fields will be used operationally to significantly improve the accuracy of predicted trajectories for all low-perigee satellites.
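The diurnal and semidiurnal harmonic decomposition that DCA solves for can be sketched as an ordinary least-squares fit of cosine/sine pairs in local solar time; the amplitudes, phases, and noise level below are synthetic, not HASDM values.

```python
import numpy as np

rng = np.random.default_rng(2)

# Density correction (percent) sampled over local solar time (hours),
# containing diurnal (24 h) and semidiurnal (12 h) components -- synthetic.
lst = rng.uniform(0, 24, 500)
w = 2 * np.pi / 24
y = (5 + 10 * np.cos(w * lst - 1.0)
       + 4 * np.cos(2 * w * lst - 0.5)
       + rng.normal(0, 0.5, lst.size))

# Solve for amplitudes/phases with linear least squares on cos/sin pairs:
# y ~ a0 + a1 cos(wt) + b1 sin(wt) + a2 cos(2wt) + b2 sin(2wt).
A = np.column_stack([np.ones_like(lst),
                     np.cos(w * lst), np.sin(w * lst),
                     np.cos(2 * w * lst), np.sin(2 * w * lst)])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
amp_diurnal = np.hypot(coef[1], coef[2])      # recovers ~10
amp_semidiurnal = np.hypot(coef[3], coef[4])  # recovers ~4
print(amp_diurnal, amp_semidiurnal)
```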
Bayesian dynamic modeling of time series of dengue disease case counts.
Martínez-Bello, Daniel Adyro; López-Quílez, Antonio; Torres-Prieto, Alexander
2017-07-01
The aim of this study is to model the association between weekly time series of dengue case counts and meteorological variables in a high-incidence city of Colombia, applying Bayesian hierarchical dynamic generalized linear models over the period January 2008 to August 2015. Additionally, we evaluate the model's short-term performance for predicting dengue cases. The methodology uses dynamic Poisson log-link models including constant or time-varying coefficients for the meteorological variables. Calendar effects were modeled using constant or first- or second-order random-walk time-varying coefficients. The meteorological variables were modeled using constant coefficients and first-order random-walk time-varying coefficients. We applied Markov chain Monte Carlo simulations for parameter estimation and the deviance information criterion (DIC) for model selection. We assessed the short-term predictive performance of the selected final model at several time points within the study period using the mean absolute percentage error. The best model included first-order random-walk time-varying coefficients for the calendar trend and for the meteorological variables. Beyond the computational challenges, interpreting the results requires a complete analysis of the time series of dengue with respect to the parameter estimates of the meteorological effects. We found small mean absolute percentage errors for one- or two-week out-of-sample predictions at most prediction points, associated with low-volatility periods in the dengue counts. We discuss the advantages and limitations of dynamic Poisson models for studying the association between time series of dengue disease and meteorological variables.
The key conclusion of the study is that dynamic Poisson models account for the dynamic nature of the variables involved in the modeling of time series of dengue disease, producing useful models for decision-making in public health.
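A minimal sketch of a dynamic Poisson log-link model, assuming a single meteorological covariate whose coefficient follows a first-order random walk (RW1), together with a MAPE evaluation of naive one-week-ahead predictions; all parameters are invented.

```python
import numpy as np

rng = np.random.default_rng(3)

# Weekly dengue counts from a Poisson log-link model whose meteorological
# coefficient follows a first-order random walk (RW1) -- all synthetic.
T = 200
temp = rng.normal(0, 1, T)                 # standardized temperature
beta = np.cumsum(rng.normal(0, 0.02, T))   # RW1 time-varying coefficient
y = rng.poisson(np.exp(2.0 + beta * temp))

# Naive one-week-ahead prediction: carry last week's coefficient forward,
# then score the final 20 weeks with mean absolute percentage error.
pred = np.exp(2.0 + beta[:-1] * temp[1:])
target = y[1:]
mape = np.mean(np.abs(pred[-20:] - target[-20:])
               / np.maximum(target[-20:], 1))  # guard against zero counts
print(mape)
```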
An online spatiotemporal prediction model for dengue fever epidemic in Kaohsiung (Taiwan).
Yu, Hwa-Lung; Angulo, José M; Cheng, Ming-Hung; Wu, Jiaping; Christakos, George
2014-05-01
The emergence and re-emergence of disease epidemics is a complex question that may be influenced by diverse factors, including the space-time dynamics of human populations, environmental conditions, and associated uncertainties. This study proposes a stochastic framework to integrate space-time dynamics in the form of a Susceptible-Infected-Recovered (SIR) model, together with uncertain disease observations, into a Bayesian maximum entropy (BME) framework. The resulting model (BME-SIR) can be used to predict space-time disease spread. Specifically, it was applied to obtain a space-time prediction of the dengue fever (DF) epidemic that took place in Kaohsiung City (Taiwan) during 2002. In implementing the model, the SIR parameters were continually updated and information on new cases of infection was incorporated. The results obtained show that the proposed model is robust to user-specified initial values of unknown model parameters, that is, transmission and recovery rates. In general, this model provides a good characterization of the spatial diffusion of the DF epidemic, especially in the city districts proximal to the location of the outbreak. Prediction performance may be affected by various factors, such as virus serotypes and human intervention, which can change the space-time dynamics of disease diffusion. The proposed BME-SIR disease prediction model can provide government agencies with a valuable reference for the timely identification, control, and prevention of DF spread in space and time. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
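The deterministic core that a BME-SIR style model embeds is the discrete-time SIR update; the transmission and recovery rates below are illustrative, not estimates from the Kaohsiung outbreak.

```python
# Discrete-time SIR update -- the deterministic core that a BME-SIR style
# model would embed and then correct with uncertain case observations.
def sir_step(S, I, R, beta, gamma, N):
    new_inf = beta * S * I / N   # new infections this step
    new_rec = gamma * I          # new recoveries this step
    return S - new_inf, I + new_inf - new_rec, R + new_rec

N = 100000.0
S, I, R = N - 10, 10.0, 0.0
for _ in range(100):
    S, I, R = sir_step(S, I, R, beta=0.4, gamma=0.2, N=N)
print(round(I), round(S + I + R))  # total population is conserved
```

With beta/gamma = 2 the toy epidemic takes off and depletes the susceptible pool; in the BME-SIR setting these rates would be continually updated as new case data arrive.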
DOE Office of Scientific and Technical Information (OSTI.GOV)
Poppeliers, Christian; Aur, Katherine Anderson; Preston, Leiph
This report shows the results of constructing predictive atmospheric models for the Source Physics Experiments 1-6. Historic atmospheric data are combined with topography to construct an atmospheric model that corresponds to the predicted (or actual) time of a given SPE event. The models are ultimately used to construct atmospheric Green's functions for subsequent analysis. We present three atmospheric models for each SPE event: an average model based on ten one-hour snapshots of the atmosphere and two extrema models corresponding to the warmest, coolest, windiest, etc. atmospheric snapshots. The atmospheric snapshots consist of wind, temperature, and pressure profiles of the atmosphere for a one-hour time window centered at the time of the predicted SPE event, as well as nine additional snapshots for each of the nine preceding years, centered at the time and day of the SPE event.
Marchioro, C A; Krechemer, F S; de Moraes, C P; Foerster, L A
2015-12-01
The diamondback moth, Plutella xylostella (L.), is a cosmopolitan pest of brassicaceous crops occurring in regions with highly distinct climate conditions. Several studies have investigated the relationship between temperature and P. xylostella development rate, providing degree-day models for populations from different geographical regions. However, no data are available to date to demonstrate the suitability of such models for making reliable projections of the development time of this species under field conditions. In the present study, 19 models available in the literature were tested for their ability to accurately predict the development time of two cohorts of P. xylostella under field conditions. Eleven of the 19 models tested accurately predicted the development time for the first cohort of P. xylostella, but only seven did so for the second cohort. Five models correctly predicted the development time for both cohorts evaluated. Our data demonstrate that the accuracy of the models available for P. xylostella varies widely, and they should therefore be used with caution for pest management purposes.
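A degree-day model of the kind tested here accumulates heat units above a lower developmental threshold until a thermal constant is reached; the threshold and constant below are illustrative, not values from any of the 19 published models.

```python
# Degree-day model sketch: development completes when accumulated heat
# units above a lower threshold reach a thermal constant K.
# The threshold (7 C) and K (280 degree-days) are invented for illustration.
def days_to_develop(daily_mean_temps, lower_threshold=7.0, K=280.0):
    accumulated = 0.0
    for day, temp in enumerate(daily_mean_temps, start=1):
        accumulated += max(0.0, temp - lower_threshold)
        if accumulated >= K:
            return day
    return None  # development not completed within the record

# Constant 21 C gives 14 degree-days per day -> 280 / 14 = 20 days.
print(days_to_develop([21.0] * 60))  # 20
```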
Xu, Xiaogang; Wang, Songling; Liu, Jinlian; Liu, Xinyu
2014-01-01
Blower and exhaust fans consume over 30% of electricity in a thermal power plant, and faults of these fans due to rotation stalls are one of the most frequent reasons for power plant outage failures. To accurately predict the occurrence of fan rotation stalls, we propose a support vector regression machine (SVRM) model that predicts the fan internal pressures during operation, leaving ample time for rotation stall detection. We train the SVRM model using experimental data samples, and perform pressure data prediction using the trained SVRM model. To prove the feasibility of using the SVRM model for rotation stall prediction, we further process the predicted pressure data via wavelet-transform-based stall detection. By comparison of the detection results from the predicted and measured pressure data, we demonstrate that the SVRM model can accurately predict the fan pressure and guarantee reliable stall detection with a time advance of up to 0.0625 s. This superior pressure data prediction capability leaves significant time for effective control and prevention of fan rotation stall faults. This model has great potential for use in intelligent fan systems with stall prevention capability, which will ensure safe operation and improve the energy efficiency of power plants. PMID:24854057
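One-step-ahead pressure prediction with a support vector regression machine can be sketched with scikit-learn's SVR on lagged samples of a synthetic pressure signal; the real model was trained on experimental fan data, and the signal, window length, and hyperparameters below are assumptions.

```python
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(4)

# Synthetic fan pressure signal: a slow oscillation plus noise stands in
# for the measured internal pressure; the SVR learns to map a window of
# past samples to the next sample (one-step-ahead prediction).
t = np.arange(1200)
p = np.sin(2 * np.pi * t / 100) + 0.05 * rng.normal(size=t.size)

lag = 20
X = np.array([p[i:i + lag] for i in range(len(p) - lag)])
y = p[lag:]

model = SVR(kernel="rbf", C=10.0, epsilon=0.01).fit(X[:1000], y[:1000])
pred = model.predict(X[1000:])
rmse = np.sqrt(np.mean((pred - y[1000:]) ** 2))
print(rmse)  # should be on the order of the noise level
```

In the stall-detection pipeline described above, such predicted pressure samples would then be passed to a wavelet-based detector, buying lead time before the stall develops.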
Real-time flutter boundary prediction based on time series models
NASA Astrophysics Data System (ADS)
Gu, Wenjing; Zhou, Li
2018-03-01
For the purpose of predicting the flutter boundary in real time during flutter flight tests, two time series models, each accompanied by a corresponding stability criterion, are adopted in this paper. The first method treats a long nonstationary response signal as many contiguous intervals, each of which is considered stationary; a traditional AR model is then established to represent each interval of the signal sequence. The second employs a time-varying AR model to characterize actual measured signals in flutter tests with progression variable speed (FTPVS). To predict the flutter boundary, stability parameters are formulated from the identified AR coefficients combined with Jury's stability criterion. The behavior of the parameters is examined using both simulated and wind-tunnel experiment data. The results demonstrate that both methods are effective in predicting the flutter boundary at lower speed levels. A comparison between the two methods is also given.
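Fitting an AR model and testing its stability can be sketched as follows; for brevity the sketch checks the characteristic roots directly, which gives the same stable/unstable verdict that Jury's tabular criterion provides without root finding.

```python
import numpy as np

rng = np.random.default_rng(5)

# Fit an AR(2) model by least squares and test stability via the roots of
# the characteristic polynomial z^2 - a1 z - a2 (stable iff all |root| < 1).
def ar2_stable(x):
    X = np.column_stack([x[1:-1], x[:-2]])
    a1, a2 = np.linalg.lstsq(X, x[2:], rcond=None)[0]
    roots = np.roots([1.0, -a1, -a2])
    return bool(np.all(np.abs(roots) < 1.0))

# Stable damped oscillation vs. a growing (flutter-like) oscillation.
n = np.arange(500)
damped = 0.98 ** n * np.cos(0.3 * n) + 1e-3 * rng.normal(size=n.size)
growing = 1.02 ** n[:300] * np.cos(0.3 * n[:300])
print(ar2_stable(damped), ar2_stable(growing))  # True False
```

In the flight-test setting, the analogous stability parameter computed per interval (or per time step for the time-varying AR model) approaches the stability boundary as airspeed approaches the flutter speed.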
Queuing Time Prediction Using WiFi Positioning Data in an Indoor Scenario.
Shu, Hua; Song, Ci; Pei, Tao; Xu, Lianming; Ou, Yang; Zhang, Libin; Li, Tao
2016-11-22
Queuing is common in urban public places. Automatically monitoring and predicting queuing time can not only help individuals to reduce their wait time and alleviate anxiety but also help managers to allocate resources more efficiently and enhance their ability to address emergencies. This paper proposes a novel method to estimate and predict queuing time in indoor environments based on WiFi positioning data. First, we use a series of parameters to identify the trajectories that can be used as representatives of queuing time. Next, we divide the day into equal time slices and estimate individuals' average queuing time during specific time slices. Finally, we build a nonstandard autoregressive (NAR) model trained using the previous day's WiFi estimation results and actual queuing time to predict the queuing time in the upcoming time slice. A case study comparing two other time series analysis models shows that the NAR model has better precision. Random topological errors caused by the drift phenomenon of WiFi positioning technology (locations determined by a WiFi positioning system may drift accidently) and systematic topological errors caused by the positioning system are the main factors that affect the estimation precision. Therefore, we optimize the deployment strategy during the positioning system deployment phase and propose a drift ratio parameter pertaining to the trajectory screening phase to alleviate the impact of topological errors and improve estimates. The WiFi positioning data from an eight-day case study conducted at the T3-C entrance of Beijing Capital International Airport show that the mean absolute estimation error is 147 s, which is approximately 26.92% of the actual queuing time. For predictions using the NAR model, the proportion is approximately 27.49%. The theoretical predictions and the empirical case study indicate that the NAR model is an effective method to estimate and predict queuing time in indoor public areas.
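The slice-and-predict idea can be sketched with an ordinary linear AR fit standing in for the paper's nonstandard AR (NAR) model; the queueing pattern below is synthetic.

```python
import numpy as np

rng = np.random.default_rng(6)

# Average queuing time (seconds) per equal time slice over one day:
# a midday peak plus noise, standing in for WiFi-derived estimates.
slices = np.arange(48)  # 48 half-hour slices
q = 300 + 200 * np.exp(-((slices - 20) ** 2) / 40) + rng.normal(0, 15, 48)

# Autoregressive prediction of the upcoming slice from the p previous
# slices (a linear simplification of the NAR idea): fit on all but the
# last slice, then predict the held-out final slice.
p = 3
train = q[:-1]
X = np.array([train[i:i + p] for i in range(len(train) - p)])
A = np.column_stack([np.ones(len(X)), X])
coef, *_ = np.linalg.lstsq(A, train[p:], rcond=None)
next_pred = coef[0] + q[-1 - p:-1] @ coef[1:]
print(round(next_pred), round(q[-1]))
```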
NASA Astrophysics Data System (ADS)
Talaghat, Mohammad Reza; Jokar, Seyyed Mohammad
2018-03-01
The induction time is the interval from the moment the stirrer is turned on until the first detection of hydrate formation. The main objective of the present work is to predict and measure the induction time of methane hydrate formation in the presence or absence of tetrahydrofuran (THF) as a promoter in a flow loop system. A laboratory flow mini-loop apparatus was set up to measure the induction time of methane hydrate formation. The induction time is predicted using a developed Kashchiev and Firoozabadi model and a modified Natarajan model for the flow loop system. Furthermore, the effects of the volumetric flow rate of the fluid on the induction time were investigated. The model results were compared with experimental data. They show that the induction time of hydrate formation in the presence of THF is very short at high pressure and high volumetric flow rate of the fluid, and that it decreases with increasing pressure and liquid volumetric flow rate. The modified Natarajan model is also shown to be more accurate than the Kashchiev and Firoozabadi model in predicting the induction time.
Model predictive control of P-time event graphs
NASA Astrophysics Data System (ADS)
Hamri, H.; Kara, R.; Amari, S.
2016-12-01
This paper deals with model predictive control of discrete event systems modelled by P-time event graphs. First, the model is obtained by writing the dater evolution model in the standard algebra. For the control law, we use finite-horizon model predictive control. For the closed-loop control, we use infinite-horizon model predictive control (IH-MPC), an approach that calculates static feedback gains that stabilize the closed-loop system while respecting the constraints on the control vector. The IH-MPC problem is formulated as a linear convex program subject to a linear matrix inequality. Finally, the proposed methodology is applied to a transportation system.
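The dater evolution of an event graph, commonly written in the (max, +) algebra before being recast in the standard algebra as above, can be sketched as x(k) = A ⊗ x(k−1); the two-transition system and holding times below are illustrative.

```python
import numpy as np

NEG_INF = -np.inf

# Max-plus matrix-vector product: (A (x) v)_i = max_j (A_ij + v_j).
# Event-graph daters x(k) (firing times of the k-th event occurrence)
# evolve as x(k) = A (x) x(k-1); entries are times, -inf means "no arc".
def maxplus_matvec(A, v):
    return np.max(A + v[np.newaxis, :], axis=1)

# Two-transition example with holding times of 3 and 2 time units.
A = np.array([[3.0, NEG_INF],
              [2.0, 2.0]])
x = np.array([0.0, 0.0])  # dates of the first firings
for k in range(3):
    x = maxplus_matvec(A, x)
print(x)  # [9. 8.]
```

An MPC scheme for such a system chooses input firing dates over the horizon to keep these daters within the interval (P-time) constraints.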
Modeling time-to-event (survival) data using classification tree analysis.
Linden, Ariel; Yarnold, Paul R
2017-12-01
Time to the occurrence of an event is often studied in health research. Survival analysis differs from other designs in that follow-up times for individuals who do not experience the event by the end of the study (called censored) are accounted for in the analysis. Cox regression is the standard method for analysing censored data, but the assumptions required of these models are easily violated. In this paper, we introduce classification tree analysis (CTA) as a flexible alternative for modelling censored data. Classification tree analysis is a "decision-tree"-like classification model that provides parsimonious, transparent (ie, easy to visually display and interpret) decision rules that maximize predictive accuracy, derives exact P values via permutation tests, and evaluates model cross-generalizability. Using empirical data, we identify all statistically valid, reproducible, longitudinally consistent, and cross-generalizable CTA survival models and then compare their predictive accuracy to estimates derived via Cox regression and an unadjusted naïve model. Model performance is assessed using integrated Brier scores and a comparison between estimated survival curves. The Cox regression model best predicts average incidence of the outcome over time, whereas CTA survival models best predict either relatively high, or low, incidence of the outcome over time. Classification tree analysis survival models offer many advantages over Cox regression, such as explicit maximization of predictive accuracy, parsimony, statistical robustness, and transparency. Therefore, researchers interested in accurate prognoses and clear decision rules should consider developing models using the CTA-survival framework. © 2017 John Wiley & Sons, Ltd.
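A Brier score at a fixed horizon, the building block of the integrated Brier scores used above for model assessment, can be sketched as follows. Note the inverse-probability-of-censoring weights required for real censored data are omitted, so this is an uncensored illustration only.

```python
import numpy as np

rng = np.random.default_rng(7)

# Brier score at a fixed horizon t*: mean squared difference between the
# predicted survival probability S(t*) and the observed status (1 if the
# subject survived past t*, else 0).
n = 1000
event_time = rng.exponential(10.0, n)  # true mean survival of 10 units
t_star = 3.0
alive = (event_time > t_star).astype(float)

pred_true_model = np.full(n, np.exp(-t_star / 10.0))  # matches generator
pred_naive = np.full(n, 0.5)                          # uninformative

def brier(pred, obs):
    return np.mean((pred - obs) ** 2)

print(brier(pred_true_model, alive), brier(pred_naive, alive))
```

The well-specified prediction scores lower (better) than the uninformative one; integrating such scores over a grid of horizons yields the integrated Brier score.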
Gupta, Nidhi; Christiansen, Caroline Stordal; Hanisch, Christiana; Bay, Hans; Burr, Hermann; Holtermann, Andreas
2017-01-01
Objectives To investigate the differences between questionnaire-based and accelerometer-based sitting time, and develop a model for improving the accuracy of questionnaire-based sitting time for predicting accelerometer-based sitting time. Methods 183 workers in a cross-sectional study reported sitting time per day using a single question during the measurement period, and wore 2 Actigraph GT3X+ accelerometers on the thigh and trunk for 1–4 working days to determine their actual sitting time per day using the validated Acti4 software. Least squares regression models were fitted with questionnaire-based sitting time and other self-reported predictors to predict accelerometer-based sitting time. Results Questionnaire-based and accelerometer-based average sitting times were ≈272 and ≈476 min/day, respectively. A low Pearson correlation (r=0.32), high mean bias (204.1 min) and wide limits of agreement (549.8 to −139.7 min) between questionnaire-based and accelerometer-based sitting time were found. The prediction model based on questionnaire-based sitting time explained 10% of the variance in accelerometer-based sitting time. Inclusion of 9 self-reported predictors in the model increased the explained variance to 41%, with 10% optimism using a resampling bootstrap validation. Based on a split validation analysis, the developed prediction model on ≈75% of the workers (n=132) reduced the mean and the SD of the difference between questionnaire-based and accelerometer-based sitting time by 64% and 42%, respectively, in the remaining 25% of the workers. Conclusions This study indicates that questionnaire-based sitting time has low validity and that a prediction model can be one solution to materially improve the precision of questionnaire-based sitting time. PMID:28093433
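The calibration idea, predicting accelerometer-based sitting time from the questionnaire answer and validating on a held-out split, can be sketched on synthetic workers. The means, bias, and noise levels below are loosely inspired by the reported numbers but are not the study data.

```python
import numpy as np

rng = np.random.default_rng(8)

# Synthetic workers: accelerometer-based sitting time (min/day) is the
# target; the questionnaire systematically under-reports it (cf. the
# ~204 min mean bias reported) with large individual noise.
n = 183
accel = rng.normal(476, 90, n)
quest = accel - 204 + rng.normal(0, 120, n)

# Least-squares calibration: predict accelerometer time from the
# questionnaire answer; fit on ~75% of workers, validate on the rest.
split = int(0.75 * n)
A = np.column_stack([np.ones(split), quest[:split]])
coef, *_ = np.linalg.lstsq(A, accel[:split], rcond=None)
pred = coef[0] + coef[1] * quest[split:]

bias_before = np.mean(accel[split:] - quest[split:])
bias_after = np.mean(accel[split:] - pred)
print(round(bias_before), round(bias_after))
```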
Gao, Xiang-Ming; Yang, Shi-Feng; Pan, San-Bo
2017-01-01
To predict the output power of a photovoltaic system, which exhibits nonstationarity and randomness, an output power prediction model for grid-connected PV systems is proposed based on empirical mode decomposition (EMD) and support vector machine (SVM) optimized with an artificial bee colony (ABC) algorithm. First, according to the weather forecast data sets on the prediction date, the time series data of output power on a similar day with 15-minute intervals are built. Second, the time series data of the output power are decomposed into a series of components, including some intrinsic mode components IMFn and a trend component Res, at different scales using EMD. The corresponding SVM prediction model is established for each IMF component and the trend component, and the SVM model parameters are optimized with the artificial bee colony algorithm. Finally, the prediction results of each model are reconstructed, and the predicted values of the output power of the grid-connected PV system can be obtained. The prediction model is tested with actual data, and the results show that the power prediction model based on EMD and ABC-SVM has a faster calculation speed and higher prediction accuracy than the single SVM prediction model and the EMD-SVM prediction model without optimization.
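The decompose-predict-reconstruct pipeline can be sketched with a crude moving-average trend/residual split standing in for EMD and default SVR hyperparameters standing in for ABC optimization; the PV output series below is synthetic.

```python
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(9)

# Decompose-predict-reconstruct sketch: a moving-average split into
# trend + residual stands in for EMD, and no ABC tuning is performed;
# one SVR is trained per component and the forecasts are summed.
t = np.arange(424)
power = (np.clip(np.sin(2 * np.pi * t / 96), 0, None) * 50
         + rng.normal(0, 2, t.size))  # daylight-only output plus noise

kernel = np.ones(12) / 12
trend = np.convolve(power, kernel, mode="same")[12:-12]
signal = power[12:-12]       # drop edges distorted by the filter
residual = signal - trend    # trend + residual == signal exactly

def lagged(x, lag=8):
    return (np.array([x[i:i + lag] for i in range(len(x) - lag)]),
            x[lag:])

total_pred = 0.0
for comp in (trend, residual):
    X, y = lagged(comp)
    total_pred = total_pred + SVR(C=10.0).fit(X[:-50], y[:-50]).predict(X[-50:])

_, y_all = lagged(signal)
rmse = np.sqrt(np.mean((total_pred - y_all[-50:]) ** 2))
print(rmse)
```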
Testing model for prediction system of 1-AU arrival times of CME-associated interplanetary shocks
NASA Astrophysics Data System (ADS)
Ogawa, Tomoya; den, Mitsue; Tanaka, Takashi; Sugihara, Kohta; Takei, Toshifumi; Amo, Hiroyoshi; Watari, Shinichi
We test a model to predict arrival times of interplanetary shock waves associated with coronal mass ejections (CMEs) using a three-dimensional adaptive mesh refinement (AMR) code. The model is used in the prediction system we are developing, which has a Web-based user interface and is aimed at people who are not familiar with the operation of computers and numerical simulations or who are not researchers. We apply the model to interplanetary CME events. We first choose coronal parameters so that the properties of the background solar wind observed by the ACE spacecraft are reproduced. Then we input CME parameters observed by SOHO/LASCO. Finally, we compare the predicted arrival times with the observed ones. We describe the results of the test and discuss the tendencies of the model.
Gaussian mixture models as flux prediction method for central receivers
NASA Astrophysics Data System (ADS)
Grobler, Annemarie; Gauché, Paul; Smit, Willie
2016-05-01
Flux prediction methods are crucial to the design and operation of central receiver systems. Current methods such as the circular and elliptical (bivariate) Gaussian prediction methods are often used in field layout design and aiming strategies. For experimental or small central receiver systems, the flux profile of a single heliostat often deviates significantly from the circular and elliptical Gaussian models. Therefore a novel method of flux prediction was developed by incorporating the fitting of Gaussian mixture models onto flux profiles produced by flux measurement or ray tracing. A method was also developed to predict the Gaussian mixture model parameters of a single heliostat for a given time using image processing. Recording the predicted parameters in a database ensures that more accurate predictions are made in a shorter time frame.
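Fitting a Gaussian mixture to a flux profile can be sketched with scikit-learn; the two-lobe hit distribution below is invented to mimic a profile that a single circular Gaussian would fit poorly.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(10)

# A heliostat flux profile that deviates from a single Gaussian: sample
# simulated "ray hits" on the receiver plane (metres) from two offset
# lobes, then fit a two-component Gaussian mixture to the hit cloud.
lobe1 = rng.multivariate_normal([-0.3, 0.0], [[0.02, 0.0], [0.0, 0.02]], 3000)
lobe2 = rng.multivariate_normal([0.5, 0.1], [[0.02, 0.005], [0.005, 0.02]], 2000)
hits = np.vstack([lobe1, lobe2])

gmm = GaussianMixture(n_components=2, random_state=0).fit(hits)
weights = np.sort(gmm.weights_)        # expect roughly 0.4 and 0.6
x_centres = np.sort(gmm.means_[:, 0])  # expect roughly -0.3 and 0.5
print(weights, x_centres)

# The fitted density can then serve as the flux prediction for aiming
# strategies, evaluated via gmm.score_samples on a receiver grid.
```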
Bayesian spatiotemporal crash frequency models with mixture components for space-time interactions.
Cheng, Wen; Gill, Gurdiljot Singh; Zhang, Yongping; Cao, Zhong
2018-03-01
Traffic safety research has developed spatiotemporal models to explore variations in the spatial pattern of crash risk over time. Many studies have observed notable benefits from including spatial and temporal correlation and their interactions. However, the safety literature lacks sufficient research comparing different temporal treatments and their interaction with the spatial component. This study developed four spatiotemporal models of varying complexity based on different temporal treatments: (I) a linear time trend; (II) a quadratic time trend; (III) an autoregressive-1 (AR-1) structure; and (IV) time adjacency. Moreover, the study introduced a flexible two-component mixture for the space-time interaction, which allows greater flexibility than the traditional linear space-time interaction. The mixture component accommodates global space-time interaction as well as departures from the overall spatial and temporal risk patterns. This study performed a comprehensive assessment of the mixture models based on diverse criteria pertaining to goodness-of-fit, cross-validation, and evaluation on in-sample data for predictive accuracy of crash estimates. The assessment of model performance in terms of goodness-of-fit clearly established the superiority of the time-adjacency specification, which was more complex because it borrows information from neighboring years; this addition of parameters yielded a significant advantage in posterior deviance and thus benefited the overall fit to the crash data. Base models were also developed to compare the proposed mixture and the traditional space-time components for each temporal model. The mixture models consistently outperformed the corresponding Base models owing to their much lower deviance. 
For the cross-validation comparison of predictive accuracy, the linear time trend model was judged best, as it recorded the highest log pseudo marginal likelihood (LPML). Four other evaluation criteria were considered for typical validation using the same data as model development. Under each criterion, observed crash counts were compared with three types of data: Bayesian estimated, normally predicted, and model replicated. The linear model again performed best in most scenarios, except one case using model-replicated data and two cases involving prediction without random effects. These results indicate mediocre performance of the linear trend when random effects are excluded from the evaluation, which may be due to the flexible mixture space-time interaction efficiently absorbing the residual variability that escapes the predictable part of the model. The comparison of Base and mixture models in terms of prediction accuracy further bolstered the superiority of the mixture models, as they generated more precise estimates of crash counts across all four models, suggesting that the advantages of the mixture component at model fit transfer to prediction accuracy. Finally, the residual analysis demonstrated the consistently superior performance of the random-effect models, which validates the importance of incorporating correlation structures to account for unobserved heterogeneity. Copyright © 2017 Elsevier Ltd. All rights reserved.
Toward a Time-Domain Fractal Lightning Simulation
NASA Astrophysics Data System (ADS)
Liang, C.; Carlson, B. E.; Lehtinen, N. G.; Cohen, M.; Lauben, D.; Inan, U. S.
2010-12-01
Electromagnetic simulations of lightning are useful for prediction of lightning properties and exploration of the underlying physical behavior. Fractal lightning models predict the spatial structure of the discharge, but thus far do not provide much information about discharge behavior in time and therefore cannot predict electromagnetic wave emissions or current characteristics. Here we develop a time-domain fractal lightning simulation from Maxwell's equations, the method of moments with the thin wire approximation, an adaptive time-stepping scheme, and a simplified electrical model of the lightning channel. The model predicts current pulse structure and electromagnetic wave emissions and can be used to simulate the entire duration of a lightning discharge. The model can be used to explore the electrical characteristics of the lightning channel, the temporal development of the discharge, and the effects of these characteristics on observable electromagnetic wave emissions.
Nonstationary time series prediction combined with slow feature analysis
NASA Astrophysics Data System (ADS)
Wang, G.; Chen, X.
2015-01-01
Almost all climate time series have some degree of nonstationarity due to perturbations of the observed system by external driving forces. These driving forces should therefore be taken into account when reconstructing the climate dynamics. This paper presents a new technique that first extracts the driving force of a time series using the Slow Feature Analysis (SFA) approach and then introduces this driving force into a predictive model to predict the nonstationary time series. In essence, the main idea of the technique is to treat the driving forces as state variables and incorporate them into the prediction model. To test the method, experiments using a modified logistic time series and winter ozone data from Arosa, Switzerland, were conducted. The results showed improved and effective prediction skill.
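A minimal linear-SFA sketch under simplifying assumptions (a linear mixture of a synthetic slow driver and a fast signal; the paper's actual SFA variant and downstream predictive model are not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.linspace(0, 20, 4000)
slow = np.sin(0.3 * t)                  # hidden slow "driving force"
fast = np.sin(11.0 * t) + 0.1 * rng.standard_normal(len(t))

# Observed signals are linear mixtures of the slow driver and fast dynamics.
X = np.column_stack([slow + 0.5 * fast, 0.5 * slow - fast])

# --- linear Slow Feature Analysis ---
Xc = X - X.mean(axis=0)
# Whiten: rotate and scale so the data covariance becomes the identity.
cov = Xc.T @ Xc / len(Xc)
evals, evecs = np.linalg.eigh(cov)
W = evecs / np.sqrt(evals)
Z = Xc @ W
# The slowest feature is the whitened direction whose time derivative
# has the smallest variance.
dZ = np.diff(Z, axis=0)
dcov = dZ.T @ dZ / len(dZ)
d_evals, d_evecs = np.linalg.eigh(dcov)
sfa_feature = Z @ d_evecs[:, 0]         # slowest direction first

# The extracted feature should track the hidden slow driver (up to sign).
corr = np.corrcoef(sfa_feature, slow)[0, 1]
print(abs(corr) > 0.9)
```

The extracted slow feature is what would then be fed into the prediction model as an additional state variable.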
NASA Astrophysics Data System (ADS)
Arge, C. N.; Chen, J.; Slinker, S.; Pizzo, V. J.
2000-05-01
The method of Chen et al. [1997, JGR, 101, 27499] is designed to accurately identify and predict the occurrence, duration, and strength of large geomagnetic storms using real-time solar wind data. The method estimates the IMF and the geoeffectiveness of the solar wind upstream of a monitor and can provide warning times that range from a few hours to more than 10 hours. The model uses the physical features of solar wind structures that cause large storms: long durations of southward interplanetary magnetic field. It is currently undergoing testing, improvement, and validation at NOAA/SEC in an effort to transition it into a real-time space weather forecasting tool. The original version of the model has been modified so that it now makes hourly (as opposed to daily) predictions, and it has been improved to enhance both its predictive capability and reliability. In this paper, we report on the results of a 2-year historical verification study of the model using ACE real-time data. The prediction performances of the original and improved versions of the model are then compared. A real-time prediction web page has been developed and is online at NOAA/SEC. *Work supported by ONR.
Brownian motion with adaptive drift for remaining useful life prediction: Revisited
NASA Astrophysics Data System (ADS)
Wang, Dong; Tsui, Kwok-Leung
2018-01-01
Linear Brownian motion with constant drift is widely used in remaining useful life prediction because its first hitting time follows the inverse Gaussian distribution. State space modelling of linear Brownian motion was proposed to make the drift coefficient adaptive and to incorporate on-line measurements into the first hitting time distribution. Here, the drift coefficient follows a Gaussian distribution and is iteratively estimated by Kalman filtering whenever a new measurement becomes available. To model nonlinear degradation, linear Brownian motion with adaptive drift was then extended to nonlinear Brownian motion with adaptive drift. However, in previous studies, an underlying assumption of the state space modelling was that, in the update phase of Kalman filtering, the predicted drift coefficient at the current time exactly equalled the posterior drift coefficient estimated at the previous time, which contradicts a predicted drift-coefficient evolution driven by additive Gaussian process noise. In this paper, to relax this assumption, a new state space model is constructed so that, in the update phase of Kalman filtering, the predicted drift coefficient at the current time evolves from the posterior drift coefficient at the previous time. Moreover, the optimal Kalman filtering gain for iteratively estimating the posterior drift coefficient at any time is mathematically derived. A discussion is provided that theoretically explains why the constructed state space model can achieve high remaining useful life prediction accuracy. Finally, the proposed state space model and its associated Kalman filtering gain are applied to battery prognostics.
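The drift-adaptive filtering idea can be sketched as a scalar Kalman filter on simulated degradation increments; the noise levels and the random-walk drift model below are illustrative assumptions, not the paper's calibrated state space model:

```python
import numpy as np

rng = np.random.default_rng(2)
dt, n = 0.1, 500
true_drift = 1.0 + 0.002 * np.arange(n)          # slowly increasing drift
increments = true_drift * dt + 0.05 * np.sqrt(dt) * rng.standard_normal(n)

# Scalar Kalman filter with state = drift coefficient eta_t:
#   state model:       eta_t = eta_{t-1} + w,   w ~ N(0, Q)  (random walk)
#   observation model: dx_t  = eta_t * dt + v,  v ~ N(0, R)
Q, R = 1e-4, (0.05 ** 2) * dt
eta, P = 0.0, 1.0                                # prior mean and variance
estimates = []
for dx in increments:
    # predict: the drift evolves by a random walk, inflating variance
    P = P + Q
    # update: the optimal gain weighs the prediction against the increment
    K = P * dt / (dt * P * dt + R)
    eta = eta + K * (dx - eta * dt)
    P = (1 - K * dt) * P
    estimates.append(eta)

print(round(estimates[-1], 2), round(true_drift[-1], 2))
```

The posterior drift estimate tracks the slowly varying true drift, which is the behavior the adaptive-drift models above rely on.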
Jaffe, B.E.; Rubin, D.M.
1996-01-01
The time-dependent response of sediment suspension to flow velocity was explored by modeling field measurements collected in the surf zone during a large storm. Linear and nonlinear models were created and tested using flow velocity as input and suspended-sediment concentration as output. A sequence of past velocities (velocity history), as well as velocity from the same instant as the suspended-sediment concentration, was used as input; the velocity history length was allowed to vary. The models also allowed for a lag between input (instantaneous velocity or end of velocity sequence) and output (suspended-sediment concentration). Predictions of concentration from instantaneous velocity or instantaneous velocity raised to a power (up to 8) using linear models were poor (correlation coefficients between predicted and observed concentrations were less than 0.10). Allowing a lag between velocity and concentration improved linear models (correlation coefficient of 0.30), with optimum lag time increasing with elevation above the seabed (from 1.5 s at 13 cm to 8.5 s at 60 cm). These lags are largely due to the time required for an observed flow event to affect the bed and mix sediment upward. Using a velocity history further improved linear models (correlation coefficient of 0.43). The best linear model used 12.5 s of velocity history (approximately one wave period) to predict concentration. Nonlinear models gave better predictions than linear models, and, as with linear models, nonlinear models using a velocity history performed better than models using only instantaneous velocity as input. Including a lag time between velocity and concentration also improved the predictions. The best model (correlation coefficient of 0.58) used 3 s (approximately a quarter wave period) of the cross-shore velocity squared, starting at 4.5 s before the observed concentration, to predict concentration. 
Using a velocity history increases the performance of the models by specifying a more complete description of the dynamical forcing of the flow (including accelerations and wave phase and shape) responsible for sediment suspension. Incorporating such a velocity history and a lag time into the formulation of the forcing for time-dependent models for sediment suspension in the surf zone will greatly increase our ability to predict suspended-sediment transport.
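The velocity-history idea can be illustrated with an ordinary least-squares model on synthetic data; the response kernel, lag, and history length below are invented for illustration and are not the field-derived values reported above:

```python
import numpy as np

rng = np.random.default_rng(3)
n, lag, hist = 5000, 9, 25           # samples, lag steps, history length
u = rng.standard_normal(n)           # synthetic cross-shore velocity

# Synthetic "truth": concentration responds to the squared velocity
# weighted over a short history and delayed by `lag` steps, plus noise.
kernel = np.exp(-np.arange(hist) / 8.0)
c = np.roll(np.convolve(u ** 2, kernel, mode="full")[:n], lag)
c = c + 0.1 * rng.standard_normal(n)

# Design matrix: each row holds the `hist` past values of u^2 that end
# `lag` steps before the concentration observation.
rows, target = [], []
for t in range(lag + hist, n):
    rows.append(u[t - lag - hist + 1:t - lag + 1] ** 2)
    target.append(c[t])
A = np.column_stack([np.ones(len(rows)), np.array(rows)])
coef, *_ = np.linalg.lstsq(A, np.array(target), rcond=None)

# Correlation between predicted and observed concentration.
pred = A @ coef
r = np.corrcoef(pred, np.array(target))[0, 1]
print(round(r, 3))
```

As in the study, a model given the lagged velocity history recovers far more of the concentration variance than an instantaneous-velocity model could.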
A Two-Time Scale Decentralized Model Predictive Controller Based on Input and Output Model
Niu, Jian; Zhao, Jun; Xu, Zuhua; Qian, Jixin
2009-01-01
A decentralized model predictive controller applicable to systems that exhibit different dynamic characteristics in different channels is presented in this paper. These systems can be regarded as combinations of a fast model and a slow model whose response speeds are on two time scales. Because most practical models used for control are obtained in the form of a transfer function matrix from plant tests, a singular perturbation method was first used to separate the original transfer function matrix into two models on two time scales. A decentralized model predictive controller was then designed based on the two models derived from the original system, and the stability of the control method was proved. Simulations showed that the method was effective. PMID:19834542
One-month validation of the Space Weather Modeling Framework geospace model
NASA Astrophysics Data System (ADS)
Haiducek, J. D.; Welling, D. T.; Ganushkina, N. Y.; Morley, S.; Ozturk, D. S.
2017-12-01
The Space Weather Modeling Framework (SWMF) geospace model consists of a magnetohydrodynamic (MHD) simulation coupled to an inner magnetosphere model and an ionosphere model. This provides a predictive capability for magnetospheric dynamics, including ground-based and space-based magnetic fields, geomagnetic indices, currents and densities throughout the magnetosphere, cross-polar cap potential, and magnetopause and bow shock locations. The only inputs are solar wind parameters and F10.7 radio flux. We have conducted a rigorous validation effort consisting of a continuous simulation covering the month of January 2005 using three different model configurations. This provides a relatively large dataset for assessing the model's predictive capabilities. We find that the model does an excellent job of predicting the Sym-H index and performs well at predicting Kp and CPCP during active times. Dayside magnetopause and bow shock positions are also well predicted. The model tends to over-predict Kp and CPCP during quiet times and under-predicts the magnitude of AL during disturbances. The model under-predicts the magnitude of nightside geosynchronous Bz and over-predicts the radial distance to the flank magnetopause and bow shock. This suggests that the model over-predicts the stretching and overall size of the magnetotail. With the exception of the AL index and the nightside geosynchronous magnetic field, we find the results to be insensitive to grid resolution.
Comparison of CME/Shock Propagation Models with Heliospheric Imaging and In Situ Observations
NASA Astrophysics Data System (ADS)
Zhao, Xinhua; Liu, Ying D.; Inhester, Bernd; Feng, Xueshang; Wiegelmann, Thomas; Lu, Lei
2016-10-01
The prediction of the arrival time of fast coronal mass ejections (CMEs) and their associated shocks is highly desirable in space weather studies. In this paper, we use two shock propagation models, i.e., the Data Guided Shock Time Of Arrival (DGSTOA) and the Data Guided Shock Propagation Model (DGSPM), to predict the kinematic evolution of interplanetary shocks associated with fast CMEs. DGSTOA is based on the similarity theory of shock waves in the solar wind reference frame, and DGSPM is based on the non-similarity theory in the stationary reference frame. The inputs are the kinematics of the CME front at the moment of maximum speed, obtained from the geometric triangulation method applied to STEREO imaging observations together with the Harmonic Mean approximation. The outputs provide the subsequent propagation of the associated shock. We apply these models to the CMEs of 2012 January 19, January 23, and March 7. We find that the shock models predict reasonably well the shock's propagation after the impulsive acceleration. The shock's arrival time and local propagation speed at Earth predicted by these models are consistent with in situ measurements from WIND. We also employ the Drag-Based Model (DBM) as a comparison and find that it predicts a steeper deceleration than the shock models after the rapid deceleration phase. The predictions of DBM at 1 au agree with the following ICME or sheath structure, not the preceding shock. These results demonstrate the applicability of the shock models used here for future arrival-time prediction of interplanetary shocks associated with fast CMEs.
Markovian prediction of future values for food grains in the economic survey
NASA Astrophysics Data System (ADS)
Sathish, S.; Khadar Babu, S. K.
2017-11-01
Nowadays, prediction and forecasting play a vital role in research. For prediction, regression is useful for estimating the future and current values of a production process. In this paper, we assume that food grain production exhibits Markov chain dependency and time homogeneity, and we predict future production values using a daily Markov chain model. Finally, the Markov process prediction gives better performance than the regression model.
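A minimal sketch of Markov-chain prediction for a discretized production series; the production figures, the three-state discretization, and the quantile bins below are all hypothetical, chosen only to illustrate the time-homogeneous transition-matrix approach:

```python
import numpy as np

# Illustrative food-grain production figures (arbitrary units), discretized
# into three states: 0 = low, 1 = medium, 2 = high.
production = [201, 198, 210, 225, 230, 228, 240, 235, 250, 248, 255, 260]
bins = np.quantile(production, [1 / 3, 2 / 3])
states = np.digitize(production, bins)

# Estimate the transition matrix by counting observed state changes
# (time homogeneity: one matrix describes the whole series).
k = 3
counts = np.zeros((k, k))
for a, b in zip(states[:-1], states[1:]):
    counts[a, b] += 1
row_sums = counts.sum(axis=1, keepdims=True)
# Rows with no observed transitions would need smoothing; here all rows
# have data, so plain row normalization suffices.
P = np.divide(counts, row_sums, out=np.full((k, k), 1 / k),
              where=row_sums > 0)

# One-step prediction: the most likely next state given the current one.
current = states[-1]
print(int(P[current].argmax()))  # predicted next production state
```

The same transition matrix also yields multi-step forecasts by taking matrix powers of P.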
NASA Astrophysics Data System (ADS)
Zho, Chen-Chen; Farr, Erik P.; Glover, William J.; Schwartz, Benjamin J.
2017-08-01
We use one-electron non-adiabatic mixed quantum/classical simulations to explore the temperature dependence of both the ground-state structure and the excited-state relaxation dynamics of the hydrated electron. We compare the results for both the traditional cavity picture and a more recent non-cavity model of the hydrated electron and make definite predictions for distinguishing between the different possible structural models in future experiments. We find that the traditional cavity model shows no temperature-dependent change in structure at constant density, leading to a predicted resonance Raman spectrum that is essentially temperature-independent. In contrast, the non-cavity model predicts a blue-shift in the hydrated electron's resonance Raman O-H stretch with increasing temperature. The lack of a temperature-dependent ground-state structural change of the cavity model also leads to a prediction of little change with temperature of both the excited-state lifetime and hot ground-state cooling time of the hydrated electron following photoexcitation. This is in sharp contrast to the predictions of the non-cavity model, where both the excited-state lifetime and hot ground-state cooling time are expected to decrease significantly with increasing temperature. These simulation-based predictions should be directly testable by the results of future time-resolved photoelectron spectroscopy experiments. Finally, the temperature-dependent differences in predicted excited-state lifetime and hot ground-state cooling time of the two models also lead to different predicted pump-probe transient absorption spectroscopy of the hydrated electron as a function of temperature. We perform such experiments and describe them in Paper II [E. P. Farr et al., J. Chem. Phys. 147, 074504 (2017)], and find changes in the excited-state lifetime and hot ground-state cooling time with temperature that match well with the predictions of the non-cavity model. 
In particular, the experiments reveal stimulated emission from the excited state with an amplitude and lifetime that decreases with increasing temperature, a result in contrast to the lack of stimulated emission predicted by the cavity model but in good agreement with the non-cavity model. Overall, until ab initio calculations describing the non-adiabatic excited-state dynamics of an excess electron with hundreds of water molecules at a variety of temperatures become computationally feasible, the simulations presented here provide a definitive route for connecting the predictions of cavity and non-cavity models of the hydrated electron with future experiments.
Prediction of Coronary Artery Disease Risk Based on Multiple Longitudinal Biomarkers
Yang, Lili; Yu, Menggang; Gao, Sujuan
2016-01-01
In the last decade, few topics in the area of cardiovascular disease (CVD) research have received as much attention as risk prediction. One of the well documented risk factors for CVD is high blood pressure (BP). Traditional CVD risk prediction models consider BP levels measured at a single time and such models form the basis for current clinical guidelines for CVD prevention. However, in clinical practice, BP levels are often observed and recorded in a longitudinal fashion. Information on BP trajectories can be powerful predictors for CVD events. We consider joint modeling of time to coronary artery disease and individual longitudinal measures of systolic and diastolic BPs in a primary care cohort with up to 20 years of follow-up. We applied novel prediction metrics to assess the predictive performance of joint models. Predictive performances of proposed joint models and other models were assessed via simulations and illustrated using the primary care cohort. PMID:26439685
Real-time stylistic prediction for whole-body human motions.
Matsubara, Takamitsu; Hyon, Sang-Ho; Morimoto, Jun
2012-01-01
The ability to predict human motion is crucial in several contexts such as human tracking by computer vision and the synthesis of human-like computer graphics. Previous work has focused on off-line processes with well-segmented data; however, many applications such as robotics require real-time control with efficient computation. In this paper, we propose a novel approach called real-time stylistic prediction for whole-body human motions to satisfy these requirements. This approach uses a novel generative model to represent a whole-body human motion including rhythmic motion (e.g., walking) and discrete motion (e.g., jumping). The generative model is composed of a low-dimensional state (phase) dynamics and a two-factor observation model, allowing it to capture the diversity of motion styles in humans. A real-time adaptation algorithm was derived to estimate both state variables and style parameter of the model from non-stationary unlabeled sequential observations. Moreover, with a simple modification, the algorithm allows real-time adaptation even from incomplete (partial) observations. Based on the estimated state and style, a future motion sequence can be accurately predicted. In our implementation, it takes less than 15 ms for both adaptation and prediction at each observation. Our real-time stylistic prediction was evaluated for human walking, running, and jumping behaviors. Copyright © 2011 Elsevier Ltd. All rights reserved.
Liu, Shi; Wu, Yu; Wooten, H. Omar; Green, Olga; Archer, Brent; Li, Harold
2016-01-01
A software tool was developed to predict, for a new treatment plan, the treatment delivery time for radiation therapy (RT) treatments of patients on the ViewRay magnetic resonance image-guided radiation therapy (MR-IGRT) delivery system. This tool is necessary for managing patient treatment scheduling in our clinic. The predicted treatment delivery time and the assessment of plan complexity could also aid treatment planning. A patient's total treatment delivery time, not including the time required for localization, is modeled as the sum of four components: 1) the treatment initialization time; 2) the total beam-on time; 3) the gantry rotation time; and 4) the multileaf collimator (MLC) motion time. Each of the four components is predicted separately. The total beam-on time can be calculated using both the planned beam-on time and the decay-corrected dose rate. To predict the remaining components, we retrospectively analyzed patient treatment delivery record files. The initialization time is shown to be random, since it depends on the final gantry angle of the previous treatment. Based on modeling the relationship between gantry rotation angles and the corresponding rotation times, linear regression is applied to predict the gantry rotation time. The MLC motion time is calculated using the leaf-delay modeling method and the leaf motion speed. A quantitative analysis was performed to understand the correlation between the total treatment time and plan complexity. The proposed algorithm is able to predict the ViewRay treatment delivery time with an average prediction error of 0.22 min (1.82%) and a maximal prediction error of 0.89 min (7.88%). The analysis showed a correlation between the plan modulation (PM) factor and both the total treatment delivery time and the treatment delivery duty cycle. A possibility was also identified to significantly reduce MLC motion time by optimizing the positions of closed MLC pairs. 
The accuracy of the proposed prediction algorithm is sufficient to support patient treatment appointment scheduling. The developed software tool is currently used on a daily basis in our clinic and could also serve as an important indicator of treatment plan complexity. PACS number(s): 87.55.N PMID:27074472
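The four-component decomposition can be sketched as follows. Apart from the Co-60 half-life, every number here (reference dose rate, gantry speed, leaf speed, and the example plan values) is hypothetical, and the paper's fitted regression and leaf-delay models are not reproduced:

```python
import math

# Illustrative sketch of the four-component delivery-time model described
# above. All parameter values are hypothetical placeholders.
CO60_HALF_LIFE_DAYS = 1925.28        # physical half-life of a Co-60 source

def beam_on_time(planned_mu, ref_dose_rate, days_since_calibration):
    """Planned beam-on time corrected for source decay (MU over MU/s)."""
    current_rate = ref_dose_rate * math.exp(
        -math.log(2) * days_since_calibration / CO60_HALF_LIFE_DAYS)
    return planned_mu / current_rate

def gantry_time(angles_deg, deg_per_sec=6.0):
    """Rotation time taken as linear in the angle traveled between beams."""
    travel = sum(abs(b - a) for a, b in zip(angles_deg[:-1], angles_deg[1:]))
    return travel / deg_per_sec

def mlc_time(max_leaf_travel_cm, leaf_speed_cm_s=2.5):
    """Leaf-motion time from the largest leaf excursion per segment."""
    return sum(d / leaf_speed_cm_s for d in max_leaf_travel_cm)

def total_delivery_time(init_s, planned_mu, ref_rate, days, angles, travel):
    # Sum of the four components: initialization, beam-on, gantry, MLC.
    return (init_s
            + beam_on_time(planned_mu, ref_rate, days)
            + gantry_time(angles)
            + mlc_time(travel))

t = total_delivery_time(init_s=30.0, planned_mu=600.0, ref_rate=5.0,
                        days=100, angles=[0, 40, 80, 120],
                        travel=[3.0, 4.5, 2.0])
print(round(t / 60, 2), "min")
```

The decay correction is why the same plan delivered later in a source's life has a longer predicted beam-on time.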
Evaluating Predictive Uncertainty of Hyporheic Exchange Modelling
NASA Astrophysics Data System (ADS)
Chow, R.; Bennett, J.; Dugge, J.; Wöhling, T.; Nowak, W.
2017-12-01
Hyporheic exchange is the interaction of water between rivers and groundwater, and it is difficult to predict. One of the largest contributions to predictive uncertainty for hyporheic fluxes has been attributed to the representation of heterogeneous subsurface properties. This research aims to evaluate which aspect of the subsurface representation - the spatial distribution of hydrofacies or the model for local-scale (within-facies) heterogeneity - most influences the predictive uncertainty. We also seek to identify the data types that best reduce this uncertainty. For this investigation, we conduct a modelling study of the Steinlach River meander in Southwest Germany, an experimental site established in 2010 to monitor hyporheic exchange at the meander scale. We use HydroGeoSphere, a fully integrated surface water-groundwater model, to simulate hyporheic exchange and to assess the predictive uncertainty of hyporheic exchange transit times (HETT). A highly parameterized complex model is built and treated as 'virtual reality', which is in turn modelled with simpler subsurface parameterization schemes (Figure). We then conduct Monte Carlo simulations with these models to estimate the predictive uncertainty. Results indicate that uncertainty in HETT is relatively small at early times and increases with transit time; that uncertainty from local-scale heterogeneity is negligible compared to uncertainty in the hydrofacies distribution; that introducing more data to a poor model structure may reduce predictive variance but does not reduce predictive bias; and that hydraulic head observations alone cannot constrain the uncertainty of HETT, whereas an estimate of hyporheic exchange flux proves more effective at reducing it. Figure: Approach for evaluating predictive model uncertainty. A conceptual model is first developed from the field investigations, a complex model ('virtual reality') is then developed based on that conceptual model, and this complex model serves as the basis for comparing simpler model structures. Through this approach, predictive uncertainty can be quantified relative to a known reference solution.
A multimodel approach to interannual and seasonal prediction of Danube discharge anomalies
NASA Astrophysics Data System (ADS)
Rimbu, Norel; Ionita, Monica; Patrut, Simona; Dima, Mihai
2010-05-01
Interannual and seasonal predictability of Danube river discharge is investigated using three model types: 1) time series models; 2) linear regression models of discharge with large-scale climate mode indices; and 3) models based on stable teleconnections. All models are calibrated using discharge and climatic data for the period 1901-1977 and validated for the period 1978-2008. Various time series models, such as autoregressive (AR), moving average (MA), autoregressive moving average (ARMA), and singular spectrum analysis combined with ARMA (SSA+ARMA) models, have been calibrated and their skills evaluated. The best results were obtained using SSA+ARMA models, which also proved to have the highest forecast skill for other European rivers (Gamiz-Fortis et al. 2008). Multiple linear regression models using large-scale climatic mode indices as predictors have higher forecast skill than the time series models. The best predictors for Danube discharge are the North Atlantic Oscillation (NAO) and the East Atlantic/Western Russia patterns during winter and spring. Other patterns, such as the Polar/Eurasian or Tropical Northern Hemisphere (TNH) patterns, are good predictors for summer and autumn discharge. Based on the stable teleconnection approach (Ionita et al. 2008), we construct prediction models from a combination of sea surface temperature (SST), temperature (T), and precipitation (PP) in the regions where discharge and SST, T, and PP variations are stably correlated. The forecast skills of these models are higher than those of the time series and multiple regression models. The models calibrated and validated in our study can be used for operational prediction of interannual and seasonal Danube discharge anomalies. References: Gamiz-Fortis, S., D. Pozo-Vazquez, R.M. Trigo, and Y. Castro-Diez, Quantifying the predictability of winter river flow in Iberia. Part I: interannual predictability. J. Climate, 2484-2501, 2008. Gamiz-Fortis, S., D. Pozo-Vazquez, R.M. Trigo, and Y. Castro-Diez, Quantifying the predictability of winter river flow in Iberia. Part II: seasonal predictability. J. Climate, 2503-2518, 2008. Ionita, M., G. Lohmann, and N. Rimbu, Prediction of spring Elbe river discharge based on stable teleconnections with global temperature and precipitation. J. Climate, 6215-6226, 2008.
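As one example of the time series models mentioned above, an AR(2) model can be fitted via the Yule-Walker equations; the synthetic series and its coefficients below are illustrative, not Danube discharge data:

```python
import numpy as np

rng = np.random.default_rng(4)
# Synthetic standardized discharge anomalies from a known AR(2) process.
n, phi = 3000, np.array([0.6, -0.3])
x = np.zeros(n)
for t in range(2, n):
    x[t] = phi[0] * x[t - 1] + phi[1] * x[t - 2] + rng.standard_normal()

def acf(series, max_lag):
    """Sample autocorrelations r_0..r_max_lag."""
    s = series - series.mean()
    denom = np.dot(s, s)
    return np.array([np.dot(s[:len(s) - k], s[k:]) / denom
                     for k in range(max_lag + 1)])

# Yule-Walker system: R @ phi_hat = [r_1, r_2]
r = acf(x, 2)
R = np.array([[r[0], r[1]],
              [r[1], r[0]]])
phi_hat = np.linalg.solve(R, r[1:])

# One-step-ahead forecast from the last two observations.
forecast = phi_hat[0] * x[-1] + phi_hat[1] * x[-2]
print(np.round(phi_hat, 2))
```

The recovered coefficients are close to the generating values, which is the basic calibration step behind the AR and ARMA models evaluated in the study.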
NASA Astrophysics Data System (ADS)
Romano, M.; Mays, M. L.; Taktakishvili, A.; MacNeice, P. J.; Zheng, Y.; Pulkkinen, A. A.; Kuznetsova, M. M.; Odstrcil, D.
2013-12-01
Modeling coronal mass ejections (CMEs) is of great interest to the space weather research and forecasting communities. We present recent validation work on real-time CME arrival time predictions at different satellites using the WSA-ENLIL+Cone three-dimensional MHD heliospheric model available at the Community Coordinated Modeling Center (CCMC) and performed by the Space Weather Research Center (SWRC). SWRC is an in-house research-based operations team at the CCMC that provides interplanetary space weather forecasting for NASA's robotic missions and performs real-time model validation. The quality of model operation is evaluated by comparing its output to measurable parameters of interest such as the CME arrival time and geomagnetic storm strength. The Kp index is calculated from the relation given in Newell et al. (2007), using solar wind parameters predicted by the WSA-ENLIL+Cone model at Earth. The CME arrival time error is defined as the difference between the predicted arrival time and the observed in-situ CME shock arrival time at the ACE, STEREO A, or STEREO B spacecraft. This study includes all real-time WSA-ENLIL+Cone model simulations performed between June 2011 and 2013 (over 400 runs) at the CCMC/SWRC. We report hit, miss, false alarm, and correct rejection statistics for all three spacecraft. For hits we show the average absolute CME arrival time error and the dependence of this error on CME input parameters such as speed, width, and direction. We also present the error in predicted geomagnetic storm strength (using the Kp index) for Earth-directed CMEs.
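The hit/miss/false-alarm bookkeeping reduces to a standard forecast contingency table; the counts and arrival-time errors below are hypothetical placeholders, not the CCMC/SWRC statistics:

```python
# Minimal verification bookkeeping for arrival-time forecasts, using
# hypothetical counts (not the study's results).
hits, misses, false_alarms, correct_rejections = 60, 15, 20, 305

pod = hits / (hits + misses)                    # probability of detection
far = false_alarms / (hits + false_alarms)      # false alarm ratio

# For hits, the arrival-time error is predicted minus observed (hours);
# the average absolute error summarizes forecast accuracy.
errors_h = [5.2, -3.1, 8.0, -6.5, 1.4, -9.9]    # illustrative sample
mean_abs_err = sum(abs(e) for e in errors_h) / len(errors_h)

print(round(pod, 2), round(far, 2), round(mean_abs_err, 2))
```

Skill scores like these let the original and improved model versions be compared on equal footing across spacecraft.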
Challenges in predicting climate change impacts on pome fruit phenology
NASA Astrophysics Data System (ADS)
Darbyshire, Rebecca; Webb, Leanne; Goodwin, Ian; Barlow, E. W. R.
2014-08-01
Climate projection data were applied to two commonly used pome fruit flowering models to investigate potential differences in the predicted timing of full bloom. The two methods, fixed thermal time and sequential chill-growth, produced different results for seven apple and pear varieties at two Australian locations. The fixed thermal time model predicted incremental advancement of full bloom, while results from the sequential chill-growth model were mixed. To further investigate how the sequential chill-growth model behaves under climate-perturbed conditions, four simulations were created to represent a wider range of species' physiological requirements. These were applied to five Australian locations covering varied climates. Lengthening of the chill period and contraction of the growth period were common to most results. The relative dominance of the chill or growth component tended to determine whether full bloom advanced, remained similar, or was delayed with climate warming. The simplistic structure of the fixed thermal time model and its exclusion of winter chill conditions indicate it is unlikely to be suitable for projection analyses. The sequential chill-growth model includes greater complexity; however, reservations about using this model for impact analyses remain. The results demonstrate that appropriate representation of physiological processes is essential to adequately predict changes to full bloom under climate-perturbed conditions, and that further model development is needed.
File Usage Analysis and Resource Usage Prediction: a Measurement-Based Study. Ph.D. Thesis
NASA Technical Reports Server (NTRS)
Devarakonda, Murthy V.-S.
1987-01-01
A probabilistic scheme was developed to predict process resource usage in UNIX. Given the identity of the program being run, the scheme predicts CPU time, file I/O, and memory requirements of a process at the beginning of its life. The scheme uses a state-transition model of the program's resource usage in its past executions for prediction. The states of the model are the resource regions obtained from an off-line cluster analysis of processes run on the system. The proposed method is shown to work on data collected from a VAX 11/780 running 4.3 BSD UNIX. The results show that the predicted values correlate well with the actual. The coefficient of correlation between the predicted and actual values of CPU time is 0.84. Errors in prediction are mostly small. Some 82% of errors in CPU time prediction are less than 0.5 standard deviations of process CPU time.
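The state-transition idea behind this scheme can be sketched as a first-order Markov predictor over resource regions; the region labels and execution history below are hypothetical:

```python
from collections import defaultdict

def train_transitions(state_sequence):
    """Count state-to-state transitions over a program's past executions.
    States are resource regions (e.g. CPU-time clusters from an off-line
    cluster analysis of processes run on the system)."""
    counts = defaultdict(lambda: defaultdict(int))
    for prev, nxt in zip(state_sequence, state_sequence[1:]):
        counts[prev][nxt] += 1
    return counts

def predict_next(counts, current_state):
    """Predict the most likely next resource region for the program."""
    successors = counts[current_state]
    return max(successors, key=successors.get) if successors else None

# Hypothetical per-execution CPU-time regions for one program.
history = ["low", "low", "low", "high", "low", "low", "high", "low"]
model = train_transitions(history)
```

At the start of a new process's life, the predictor looks up the region of the program's most recent execution and emits the historically most frequent successor region.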
Predictability of process resource usage - A measurement-based study on UNIX
NASA Technical Reports Server (NTRS)
Devarakonda, Murthy V.; Iyer, Ravishankar K.
1989-01-01
A probabilistic scheme is developed to predict process resource usage in UNIX. Given the identity of the program being run, the scheme predicts CPU time, file I/O, and memory requirements of a process at the beginning of its life. The scheme uses a state-transition model of the program's resource usage in its past executions for prediction. The states of the model are the resource regions obtained from an off-line cluster analysis of processes run on the system. The proposed method is shown to work on data collected from a VAX 11/780 running 4.3 BSD UNIX. The results show that the predicted values correlate well with the actual. The correlation coefficient between the predicted and actual values of CPU time is 0.84. Errors in prediction are mostly small. Some 82 percent of errors in CPU time prediction are less than 0.5 standard deviations of process CPU time.
Predictability of process resource usage: A measurement-based study of UNIX
NASA Technical Reports Server (NTRS)
Devarakonda, Murthy V.; Iyer, Ravishankar K.
1987-01-01
A probabilistic scheme is developed to predict process resource usage in UNIX. Given the identity of the program being run, the scheme predicts CPU time, file I/O, and memory requirements of a process at the beginning of its life. The scheme uses a state-transition model of the program's resource usage in its past executions for prediction. The states of the model are the resource regions obtained from an off-line cluster analysis of processes run on the system. The proposed method is shown to work on data collected from a VAX 11/780 running 4.3 BSD UNIX. The results show that the predicted values correlate well with the actual. The correlation coefficient between the predicted and actual values of CPU time is 0.84. Errors in prediction are mostly small. Some 82% of errors in CPU time prediction are less than 0.5 standard deviations of process CPU time.
Baquero, Oswaldo Santos; Santana, Lidia Maria Reis; Chiaravalloti-Neto, Francisco
2018-01-01
Globally, the number of dengue cases has been on the increase since 1990 and this trend has also been found in Brazil and its most populated city, São Paulo. Surveillance systems based on predictions allow for timely decision-making processes and, in turn, timely and efficient interventions to reduce the burden of the disease. We conducted a comparative study of dengue predictions in São Paulo city to test the performance of trained seasonal autoregressive integrated moving average models, generalized additive models and artificial neural networks. We also used a naïve model as a benchmark. A generalized additive model with lags of the number of cases and meteorological variables had the best performance, predicted epidemics of unprecedented magnitude and its performance was 3.16 times higher than the benchmark and 1.47 times higher than the next best performing model. The predictive models captured the seasonal patterns but differed in their capacity to anticipate large epidemics, and all outperformed the benchmark. In addition to being able to predict epidemics of unprecedented magnitude, the best model had computational advantages, since its training and tuning were straightforward and required seconds or at most a few minutes. These are desired characteristics for providing timely results to decision makers. However, it should be noted that predictions are made just one month ahead, and this is a limitation that future studies could try to reduce.
A new model for predicting moisture uptake by packaged solid pharmaceuticals.
Chen, Y; Li, Y
2003-04-14
A novel mathematical model has been developed for predicting moisture uptake by packaged solid pharmaceutical products during storage. High density polyethylene (HDPE) bottles containing the tablet products of two new chemical entities and desiccants are investigated. Permeability of the bottles is determined at different temperatures using steady-state data. Moisture sorption isotherms of the two model drug products and desiccants at the same temperatures are determined and expressed as polynomial equations. The isotherms are used for modeling the time-humidity profile in the container, which enables the prediction of the moisture content of each individual component during storage. Predicted moisture contents agree well with real-time stability data. The current model could serve as a guide during packaging selection for moisture protection, so as to reduce the cost and cycle time of screening studies.
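The time-humidity profile idea can be sketched as a simple mass-balance time-stepper: moisture permeates the wall at a rate proportional to the humidity gradient, while the headspace humidity is pinned by the contents' sorption isotherm. All parameter values are hypothetical, and a linear isotherm stands in for the fitted polynomials:

```python
def simulate_moisture_uptake(days, permeability, rh_outside, isotherm_slope,
                             water0=0.0, dt=1.0):
    """Euler time-stepping of moisture ingress into a sealed bottle.
    permeability: mg water / (day * %RH) through the container wall.
    isotherm_slope: mg water held by the contents per %RH at equilibrium."""
    water = water0
    t = 0.0
    while t < days:
        rh_inside = water / isotherm_slope   # headspace RH set by the isotherm
        water += permeability * (rh_outside - rh_inside) * dt
        t += dt
    return water

# Hypothetical HDPE bottle stored at 75% RH for one year.
final = simulate_moisture_uptake(days=365, permeability=0.02,
                                 rh_outside=75.0, isotherm_slope=10.0)
```

Uptake decelerates as the internal humidity rises toward the storage humidity, so total moisture asymptotically approaches the equilibrium value set by the isotherm (750 mg here).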
NASA Astrophysics Data System (ADS)
Wold, A. M.; Mays, M. L.; Taktakishvili, A.; Odstrcil, D.; MacNeice, P. J.; Jian, L. K.
2017-12-01
The Wang-Sheeley-Arge (WSA)-ENLIL+Cone model is used extensively in space weather operations world-wide to model CME propagation. As such, it is important to assess its performance. We present validation results of the WSA-ENLIL+Cone model installed at the Community Coordinated Modeling Center (CCMC) and executed in real-time by the CCMC/Space Weather Research Center (SWRC). CCMC/SWRC uses the WSA-ENLIL+Cone model to predict CME arrivals at NASA missions throughout the inner heliosphere. In this work we compare model predicted CME arrival-times to in-situ ICME leading edge measurements near Earth, STEREO-A and STEREO-B for simulations completed between March 2010-December 2016 (over 1,800 CMEs). We report hit, miss, false alarm, and correct rejection statistics for all three spacecraft. For all predicted CME arrivals, the hit rate is 0.5, and the false alarm rate is 0.1. For the 273 events where the CME was predicted to arrive at Earth, STEREO-A, or STEREO-B and we observed an arrival (hit), the mean absolute arrival-time prediction error was 10.4 ± 0.9 hours, with a tendency toward early prediction (mean error of -4.0 hours). We show the dependence of the arrival-time error on CME input parameters. We also explore the impact of the multi-spacecraft observations used to initialize the model CME inputs by comparing model verification results before and after the STEREO-B communication loss (since September 2014) and STEREO-A side-lobe operations (August 2014-December 2015). There is an increase of 1.7 hours in the CME arrival time error during single- or limited two-viewpoint periods, compared to the three-spacecraft viewpoint period. This trend suggests that a future space weather mission at L5 or L4 would provide another coronagraph viewpoint and reduce CME arrival time errors compared to a single L1 viewpoint.
Xu, Yifan; Sun, Jiayang; Carter, Rebecca R; Bogie, Kath M
2014-05-01
Stereophotogrammetric digital imaging enables rapid and accurate detailed 3D wound monitoring. This rich data source was used to develop a statistically validated model to provide personalized predictive healing information for chronic wounds. 147 valid wound images were obtained from a sample of 13 category III/IV pressure ulcers from 10 individuals with spinal cord injury. Statistical comparison of several models indicated that the best fit for the clinical data was a personalized mixed-effects exponential model (pMEE), with initial wound size and time as predictors and observed wound size as the response variable. Random effects capture personalized differences. Other models are only valid when wound size constantly decreases. This is often not achieved for clinical wounds. Our model accommodates this reality. Two criteria to determine effective healing time outcomes are proposed: the r-fold wound size reduction time, t(r-fold), is defined as the time when wound size reduces to 1/r of initial size; t(δ) is defined as the time when the rate of wound size change reduces to a predetermined threshold δ < 0. Healing rate differs from patient to patient. Model development and validation indicates that accurate monitoring of wound geometry can adaptively predict healing progression and that larger wounds heal more rapidly. Accuracy of the prediction curve in the current model improves with each additional evaluation. Routine assessment of wounds using detailed stereophotogrammetric imaging can provide personalized predictions of wound healing time. Application of a valid model will help the clinical team to determine wound management care pathways.
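For an exponential healing trajectory S(t) = S0·exp(-k·t), both proposed outcome criteria have closed forms; a sketch with hypothetical per-wound parameters (the study's model additionally carries random effects per patient):

```python
import math

def t_r_fold(k, r):
    """Time for a wound S(t) = S0*exp(-k*t) to shrink to 1/r of initial size:
    S0/r = S0*exp(-k*t)  =>  t = ln(r)/k."""
    return math.log(r) / k

def t_delta(k, s0, delta):
    """Time at which the healing rate dS/dt = -k*S0*exp(-k*t) slows to a
    threshold delta < 0:  t = ln(k*s0 / -delta) / k."""
    return math.log(k * s0 / -delta) / k

# Hypothetical fitted rate constant (per week) and initial size (cm^2).
k, s0 = 0.2, 10.0
halving_time = t_r_fold(k, 2)
```

Because the rate |dS/dt| is largest at t = 0, the t(δ) criterion is reached later for larger initial wounds, consistent with larger wounds healing (in absolute terms) more rapidly.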
Wang, Wei-Qing; Cheng, Hong-Yan; Song, Song-Quan
2013-01-01
Effects of temperature, storage time and their combination on germination of aspen (Populus tomentosa) seeds were investigated. Aspen seeds were germinated at 5 to 30°C at 5°C intervals after storage for a period of time under 28°C and 75% relative humidity. The effect of temperature on aspen seed germination could not be effectively described by the thermal time (TT) model, which underestimated the germination rate at 5°C and poorly predicted the time courses of germination at 10, 20, 25 and 30°C. A modified TT model (MTT), which assumed a two-phased linear relationship between germination rate and temperature, was more accurate in predicting the germination rate and percentage and had a higher likelihood of being correct than the TT model. The maximum lifetime threshold (MLT) model accurately described the effect of storage time on seed germination across all the germination temperatures. An aging thermal time (ATT) model combining both the TT and MLT models was developed to describe the effect of both temperature and storage time on seed germination. When the ATT model was applied to germination data across all the temperatures and storage times, it produced a relatively poor fit. Adjusting the ATT model to separately fit germination data at low and high temperatures in the suboptimal range increased the model's accuracy for predicting seed germination. Both the MLT and ATT models indicate that germination of aspen seeds has distinct physiological responses to temperature within the suboptimal range. PMID:23658654
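The contrast between the TT and MTT models can be sketched directly: the TT model uses one linear germination-rate/temperature relationship, while the MTT model splits the suboptimal range into two linear phases. All parameter values below are hypothetical placeholders, not fitted values from the study:

```python
def germination_time(temp, base_temp=2.0, thermal_time=80.0):
    """Classical thermal time (TT) model in the suboptimal range:
    time to germination t satisfies (T - Tb) * t = theta."""
    if temp <= base_temp:
        return float("inf")
    return thermal_time / (temp - base_temp)

def germination_time_mtt(temp, t_break=10.0, base_low=2.0, theta_low=60.0,
                         base_high=5.0, theta_high=80.0):
    """Modified TT model (MTT): two-phased linear relationship between
    germination rate and temperature, with separate base temperatures and
    thermal time constants below and above a break temperature."""
    if temp <= t_break:
        return theta_low / max(temp - base_low, 1e-9)
    return theta_high / (temp - base_high)
```

Germination rate is the reciprocal of germination time, so the two-phase form lets the MTT model avoid the TT model's underestimate of rates at low temperature.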
NASA Technical Reports Server (NTRS)
Mcbeath, Giorgio; Ghorashi, Bahman; Chun, Kue
1993-01-01
A thermal NO(x) prediction model is developed to interface with a k-epsilon-based CFD code. A converged solution from the CFD code is the input to the postprocessing model for prediction of thermal NO(x). The model uses a decoupled analysis to estimate the equilibrium level of NO(x), (NO(x))e, which is the constant-rate limit. This value is used to estimate the flame NO(x) and in turn predict the rate of formation at each node using a two-step Zeldovich mechanism. The rate is fixed on the NO(x) production rate plot by estimating the time to reach equilibrium by a differential analysis based on the reaction O + N2 = NO + N. The rate is integrated in the nonequilibrium time space based on the residence time at each node in the computational domain. The sum of all nodal predictions yields the total NO(x) level.
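The rate-limiting Zeldovich step gives the nodal production rate; a minimal sketch of the per-node integration, capped at the decoupled equilibrium level. The Arrhenius constants are a commonly quoted literature fit and the concentrations are hypothetical placeholders, to be matched to the actual kinetics data used by the code:

```python
import math

def no_formation_rate(T, O_conc, N2_conc):
    """Thermal NO production rate from the rate-limiting Zeldovich step
    O + N2 -> NO + N, approximated as d[NO]/dt ~= 2*k1*[O]*[N2].
    k1 in cm^3 mol^-1 s^-1, T in K (assumed constants)."""
    k1 = 1.8e14 * math.exp(-38370.0 / T)
    return 2.0 * k1 * O_conc * N2_conc

def nodal_no(T, O_conc, N2_conc, residence_time, no_eq):
    """Integrate the (locally constant) rate over the nodal residence time,
    capped at the equilibrium level (NO)e, the constant-rate limit."""
    return min(no_formation_rate(T, O_conc, N2_conc) * residence_time, no_eq)
```

Summing `nodal_no` over all nodes of the converged CFD field yields the total NO(x) estimate; the strong exponential temperature dependence is why thermal NO(x) is dominated by the hottest nodes.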
Key Technology of Real-Time Road Navigation Method Based on Intelligent Data Research
Tang, Haijing; Liang, Yu; Huang, Zhongnan; Wang, Taoyi; He, Lin; Du, Yicong; Ding, Gangyi
2016-01-01
The effect of traffic flow prediction plays an important role in route selection. Traditional traffic flow forecasting methods mainly include linear, nonlinear, neural network, and time series analysis methods. However, all of them have some shortcomings. This paper analyzes the existing algorithms for traffic flow prediction and the characteristics of city traffic flow, and proposes a road traffic flow prediction method based on transfer probability. This method first analyzes the transfer probabilities of the roads upstream of the target road and then predicts the traffic flow at the next time step by using the traffic flow equation. The Newton interior-point method is used to obtain the optimal values of the parameters. Finally, the proposed model is used to predict the traffic flow at the next time step. Compared with existing prediction methods, the proposed model has proven to have good performance: it obtains the optimal parameter values faster and has higher prediction accuracy, which allows it to be used for real-time traffic flow prediction. PMID:27872637
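The transfer-probability idea reduces to a weighted sum over upstream flows; a minimal sketch in which the probabilities are hypothetical stand-ins for values that would be fitted (e.g. by the interior-point optimization) from historical data:

```python
def predict_flow(upstream_flows, transfer_probs):
    """Traffic flow equation sketch: predicted flow entering the target road
    at the next time step is the transfer-probability-weighted sum of the
    current flows on its upstream roads."""
    assert len(upstream_flows) == len(transfer_probs)
    return sum(q * p for q, p in zip(upstream_flows, transfer_probs))

# Three upstream roads, flows in vehicles per 5-minute interval;
# each probability is the fraction of that road's traffic turning onto
# the target road (hypothetical values).
next_flow = predict_flow([120, 80, 40], [0.5, 0.25, 0.1])
```

Because each prediction is a single dot product, the method is cheap enough for per-interval, network-wide use, which is the real-time advantage the abstract claims.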
NASA Technical Reports Server (NTRS)
Brinson, R. F.
1985-01-01
A method for lifetime or durability predictions of laminated fiber reinforced plastics is given. The procedure is similar to, but not the same as, the well-known time-temperature superposition principle for polymers. The method is better described as an analytical adaptation of time-stress-superposition methods. The analytical constitutive modeling is based upon a nonlinear viscoelastic constitutive model developed by Schapery. Time dependent failure models are discussed and are related to the constitutive models. Finally, results of an incremental lamination analysis using the constitutive and failure models are compared to experimental results. Favorable agreement between theory and experiment is presented using data from creep tests of about two months' duration.
Cui, Yiqian; Shi, Junyou; Wang, Zili
2015-11-01
Quantum Neural Network (QNN) models have attracted great attention since they introduce a new neural computing paradigm based on quantum entanglement. However, the existing QNN models are mainly based on real quantum operations, and the potential of quantum entanglement is not fully exploited. In this paper, we propose a novel quantum neuron model called the Complex Quantum Neuron (CQN) that realizes deep quantum entanglement. We also propose a novel hybrid network model, Complex Rotation Quantum Dynamic Neural Networks (CRQDNN), based on the CQN. CRQDNN is a three-layer model with both CQNs and classical neurons. An infinite impulse response (IIR) filter is embedded in the network to provide the memory needed to process time series inputs. The Levenberg-Marquardt (LM) algorithm is used for fast parameter learning. The network model is developed to conduct time series predictions. Two application studies are presented, including chaotic time series prediction and electronic remaining useful life (RUL) prediction.
NASA Astrophysics Data System (ADS)
Du, Kongchang; Zhao, Ying; Lei, Jiaqiang
2017-09-01
In hydrological time series prediction, singular spectrum analysis (SSA) and discrete wavelet transform (DWT) are widely used as preprocessing techniques for artificial neural network (ANN) and support vector machine (SVM) predictors. These hybrid or ensemble models seem to largely reduce the prediction error. In the current literature, researchers apply these techniques to the whole observed time series and then obtain a set of reconstructed or decomposed time series as inputs to the ANN or SVM. However, through two comparative experiments and mathematical deduction we found that this usage of SSA and DWT in building hybrid models is incorrect. Since SSA and DWT use 'future' values to perform the calculation, the series generated by SSA reconstruction or DWT decomposition contain information about 'future' values. These hybrid models therefore report spuriously 'high' prediction performance and may cause large errors in practice.
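The leakage mechanism can be demonstrated with any non-causal filter; here a centered moving average stands in for whole-series SSA/DWT preprocessing (the data are arbitrary illustrative values):

```python
def centered_ma(x, i):
    """Non-causal smoother: the preprocessed value at i depends on x[i+1],
    a 'future' sample, exactly the leakage produced by whole-series
    SSA reconstruction or DWT decomposition."""
    return (x[i - 1] + x[i] + x[i + 1]) / 3.0

def causal_ma(x, i):
    """Causal alternative: recompute the preprocessing at each forecast
    origin using only samples up to and including i."""
    return (x[i - 2] + x[i - 1] + x[i]) / 3.0

x = [1.0, 2.0, 3.0, 10.0, 3.0]
y = list(x)
y[3] = 100.0  # perturb a *future* sample relative to index 2
leaks = centered_ma(x, 2) != centered_ma(y, 2)   # input at i=2 changed
safe = causal_ma(x, 2) == causal_ma(y, 2)        # causal input unchanged
```

A model trained on the centered-MA features implicitly sees tomorrow's value today, which inflates apparent skill; re-running the decomposition at each forecast origin on past data only removes the leak.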
SEC proton prediction model: verification and analysis.
Balch, C C
1999-06-01
This paper describes a model that has been used at the NOAA Space Environment Center since the early 1970s as a guide for the prediction of solar energetic particle events. The algorithms for proton event probability, peak flux, and rise time are described. The predictions are compared with observations. The current model shows some ability to distinguish between proton event associated flares and flares that are not associated with proton events. The comparisons of predicted and observed peak flux show considerable scatter, with an rms error of almost an order of magnitude. Rise time comparisons also show scatter, with an rms error of approximately 28 h. The model algorithms are analyzed using historical data and improvements are suggested. Implementation of the algorithm modifications reduces the rms error in the log10 of the flux prediction by 21%, and the rise time rms error by 31%. Improvements are also realized in the probability prediction by deriving the conditional climatology for proton event occurrence given flare characteristics.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Joseph, Ackeem; Herrera, David; Hijal, Tarek
We describe a method for predicting waiting times in radiation oncology. Machine learning is a powerful predictive modelling tool that benefits from large, potentially complex, datasets. The essence of machine learning is to predict future outcomes by learning from previous experience. The patient waiting experience remains one of the most vexing challenges facing healthcare. Waiting time uncertainty can cause patients, who are already sick and in pain, to worry about when they will receive the care they need. In radiation oncology, patients typically experience three types of waiting: (1) waiting at home for their treatment plan to be prepared; (2) waiting in the waiting room for daily radiotherapy; and (3) waiting in the waiting room to see a physician in consultation or follow-up. These waiting periods are difficult for staff to predict and only rough estimates are typically provided, based on personal experience. In the present era of electronic health records, waiting times need not be so uncertain. At our centre, we have incorporated the electronic treatment records of all previously-treated patients into our machine learning model. We found that the Random Forest Regression model provides the best predictions for daily radiotherapy treatment waiting times (type 2). Using this model, we achieved a median residual (actual minus predicted value) of 0.25 minutes and a residual standard deviation of 6.5 minutes. The main features that generated the best-fit model (from most to least significant) are: allocated time, median past duration, fraction number, and the number of treatment fields.
Road Network State Estimation Using Random Forest Ensemble Learning
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hou, Yi; Edara, Praveen; Chang, Yohan
Network-scale travel time prediction not only enables traffic management centers (TMC) to proactively implement traffic management strategies, but also allows travelers to make informed decisions about route choices between various origins and destinations. In this paper, a random forest estimator was proposed to predict travel time in a network. The estimator was trained using two years of historical travel time data for a case study network in St. Louis, Missouri. Both temporal and spatial effects were considered in the modeling process. The random forest models predicted travel times accurately during both congested and uncongested traffic conditions. The computational times for the models were low, making them useful for real-time traffic management and traveler information applications.
Ye, Min; Nagar, Swati; Korzekwa, Ken
2015-01-01
Predicting the pharmacokinetics of highly protein-bound drugs is difficult. Also, since historical plasma protein binding data was often collected using unbuffered plasma, the resulting inaccurate binding data could contribute to incorrect predictions. This study uses a generic physiologically based pharmacokinetic (PBPK) model to predict human plasma concentration-time profiles for 22 highly protein-bound drugs. Tissue distribution was estimated from in vitro drug lipophilicity data, plasma protein binding, and blood:plasma ratio. Clearance was predicted with a well-stirred liver model. Underestimated hepatic clearance for acidic and neutral compounds was corrected by an empirical scaling factor. Predicted values (pharmacokinetic parameters, plasma concentration-time profile) were compared with observed data to evaluate model accuracy. Of the 22 drugs, less than a 2-fold error was obtained for terminal elimination half-life (t1/2, 100% of drugs), peak plasma concentration (Cmax, 100%), area under the plasma concentration-time curve (AUC0–t, 95.4%), clearance (CLh, 95.4%), mean residence time (MRT, 95.4%), and steady-state volume (Vss, 90.9%). The impact of fup errors on CLh and Vss prediction was evaluated. Errors in fup resulted in proportional errors in clearance prediction for low-clearance compounds, and in Vss prediction for high-volume neutral drugs. For high-volume basic drugs, errors in fup did not propagate to errors in Vss prediction. This is due to the cancellation of errors in the calculations for tissue partitioning of basic drugs. Overall, plasma profiles were well simulated with the present PBPK model. PMID:26531057
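The well-stirred liver model and the fup error propagation for low-clearance drugs can be made concrete; the parameter values below are hypothetical illustrations:

```python
def well_stirred_clh(q_h, fup, clint):
    """Well-stirred liver model for hepatic clearance:
    CLh = Qh * fup * CLint / (Qh + fup * CLint)."""
    return q_h * fup * clint / (q_h + fup * clint)

# Hypothetical hepatic blood flow (~90 L/h) and a highly bound drug.
q_h = 90.0
low = well_stirred_clh(q_h, fup=0.01, clint=50.0)
# For a low-clearance drug (fup*CLint << Qh), CLh ~= fup*CLint, so a
# 2-fold error in fup produces a ~2-fold error in predicted clearance.
doubled = well_stirred_clh(q_h, fup=0.02, clint=50.0)
```

At the other extreme (fup·CLint much larger than Qh), CLh approaches Qh and becomes insensitive to fup, which is why binding errors matter most for low-clearance compounds.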
2017-01-01
Producing predictions of the probabilistic risks of operating materials for given lengths of time at stated operating conditions requires the assimilation of existing deterministic creep life prediction models (that only predict the average failure time) with statistical models that capture the random component of creep. To date, these approaches have rarely been combined to achieve this objective. The first half of this paper therefore provides a summary review of some statistical models to help bridge the gap between these two approaches. The second half of the paper illustrates one possible assimilation using 1Cr1Mo-0.25V steel. The Wilshire equation for creep life prediction is integrated into a discrete hazard based statistical model—the former being chosen because of its novelty and proven capability in accurately predicting average failure times and the latter being chosen because of its flexibility in modelling the failure time distribution. Using this model it was found that, for example, if this material had been in operation for around 15 years at 823 K and 130 MPa, the chance of failure in the next year is around 35%. However, if this material had been in operation for around 25 years, the chance of failure in the next year rises dramatically to around 80%. PMID:29039773
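The discrete hazard quantity quoted in the abstract is a conditional probability: the chance of failing in the next year given survival so far. A sketch using a Weibull failure-time distribution as a stand-in for the fitted Wilshire-based model, with hypothetical shape/scale parameters:

```python
import math

def weibull_survival(t, eta, beta):
    """Weibull survival function S(t) = exp(-(t/eta)^beta)."""
    return math.exp(-((t / eta) ** beta))

def conditional_failure_next_year(t, eta, beta):
    """Discrete hazard: P(fail in (t, t+1] | survived to t)
    = (S(t) - S(t+1)) / S(t)."""
    s_t = weibull_survival(t, eta, beta)
    return (s_t - weibull_survival(t + 1.0, eta, beta)) / s_t

# With an increasing hazard (beta > 1), the one-year failure probability
# grows sharply with service life, qualitatively matching the 35% -> 80%
# jump reported for 1Cr1Mo-0.25V steel (parameters here are hypothetical).
p15 = conditional_failure_next_year(15.0, eta=20.0, beta=4.0)
p25 = conditional_failure_next_year(25.0, eta=20.0, beta=4.0)
```

Any parametric survival model can be slotted into `conditional_failure_next_year`; the deterministic creep model enters by setting the distribution's location (average failure time) as a function of temperature and stress.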
Gupta, Nidhi; Heiden, Marina; Mathiassen, Svend Erik; Holtermann, Andreas
2016-05-01
We aimed at developing and evaluating statistical models predicting objectively measured occupational time spent sedentary or in physical activity from self-reported information available in large epidemiological studies and surveys. Two-hundred-and-fourteen blue-collar workers responded to a questionnaire containing information about personal and work related variables, available in most large epidemiological studies and surveys. Workers also wore accelerometers for 1-4 days measuring time spent sedentary and in physical activity, defined as non-sedentary time. Least-squares linear regression models were developed, predicting objectively measured exposures from selected predictors in the questionnaire. A full prediction model based on age, gender, body mass index, job group, self-reported occupational physical activity (OPA), and self-reported occupational sedentary time (OST) explained 63% (adjusted R²) of the variance of both objectively measured time spent sedentary and in physical activity, since these two exposures were complementary. Single-predictor models based only on self-reported information about either OPA or OST explained 21% and 38%, respectively, of the variance of the objectively measured exposures. Internal validation using bootstrapping suggested that the full and single-predictor models would show almost the same performance in new datasets as in that used for modelling. Both full and single-predictor models based on self-reported information typically available in most large epidemiological studies and surveys were able to predict objectively measured occupational time spent sedentary or in physical activity, with explained variances ranging from 21% to 63%.
Predictive Trip Detection for Nuclear Power Plants
NASA Astrophysics Data System (ADS)
Rankin, Drew J.; Jiang, Jin
2016-08-01
This paper investigates the use of a Kalman filter (KF) to predict, within the shutdown system (SDS) of a nuclear power plant (NPP), whether safety parameter measurements have reached a trip set-point. In addition, least squares (LS) estimation compensates for prediction error due to system-model mismatch. The motivation behind predictive shutdown is to reduce the amount of time between the occurrence of a fault or failure and the time of trip detection, referred to as time-to-trip. These reductions in time-to-trip can ultimately lead to increases in safety and productivity margins. The proposed predictive SDS differs from conventional SDSs in that it compares point-predictions of the measurements, rather than sensor measurements, against trip set-points. The predictive SDS is validated through simulation and experiments for the steam generator water level safety parameter. Performance of the proposed predictive SDS is compared against a benchmark conventional SDS with respect to time-to-trip. In addition, this paper analyzes prediction uncertainty, as well as the conditions under which it is possible to achieve reduced time-to-trip. Simulation results demonstrate that on average the predictive SDS reduces time-to-trip by an amount of time equal to the length of the prediction horizon and that the distribution of times-to-trip is approximately Gaussian. Experimental results reveal that a reduced time-to-trip can be achieved in a real-world system with unknown system-model mismatch and that the predictive SDS can be implemented with a scan time of under 100 ms. Thus, this paper is a proof of concept for KF/LS-based predictive trip detection.
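The core mechanism, filtering noisy measurements and comparing a k-step-ahead point prediction (rather than the raw measurement) against the trip set-point, can be sketched with a scalar KF. The dynamics, noise levels, and set-point below are hypothetical; the real SDS model is richer and includes the LS mismatch compensation:

```python
def kf_update(x, p, z, q, r, a=1.0):
    """One scalar Kalman filter step: predict with model x <- a*x,
    then correct with measurement z (q, r = process/measurement noise)."""
    x_pred, p_pred = a * x, a * a * p + q
    k = p_pred / (p_pred + r)
    return x_pred + k * (z - x_pred), (1.0 - k) * p_pred

def predict_ahead(x, steps, rate):
    """k-step-ahead point prediction for a level falling at a known rate."""
    return x - rate * steps

# Hypothetical steam generator level drifting toward a trip set-point.
x, p = 100.0, 1.0
for z in [99.0, 97.9, 97.1, 96.0]:
    x, p = kf_update(x, p, z, q=0.01, r=0.25)
# Trip as soon as the *prediction* crosses the set-point, gaining up to
# `steps` scan intervals of time-to-trip over a conventional SDS.
trip = predict_ahead(x, steps=5, rate=1.0) <= 93.0
```

Comparing the prediction instead of the measurement is what buys back the prediction horizon in time-to-trip, at the cost of the prediction uncertainty the paper analyzes.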
Application of a time-magnitude prediction model for earthquakes
NASA Astrophysics Data System (ADS)
An, Weiping; Jin, Xueshen; Yang, Jialiang; Dong, Peng; Zhao, Jun; Zhang, He
2007-06-01
In this paper we discuss the physical meaning of the magnitude-time model parameters for earthquake prediction. The gestation process for strong earthquakes in all eleven seismic zones in China can be described by the magnitude-time prediction model using computed model parameters. The average model parameter values for China are: b = 0.383, c = 0.154, d = 0.035, B = 0.844, C = -0.209, and D = 0.188. The robustness of the model parameters is estimated from the variation in the minimum magnitude of the transformed data, the spatial extent, and the temporal period. Analysis of the spatial and temporal suitability of the model indicates that the computation unit size should be at least 4° × 4° for seismic zones in North China, at least 3° × 3° in Southwest and Northwest China, and the time period should be as long as possible.
Zhang, Yingying; Wang, Juncheng; Vorontsov, A M; Hou, Guangli; Nikanorova, M N; Wang, Hongliang
2014-01-01
The international marine ecological safety monitoring demonstration station in the Yellow Sea was developed as a collaborative project between China and Russia. It is a nonprofit technical workstation designed as a facility for marine scientific research for public welfare. By undertaking long-term monitoring of the marine environment and automatic data collection, this station will provide valuable information for marine ecological protection and disaster prevention and reduction. The results of some initial research by scientists at the research station into predictive modeling of marine ecological environments and early warning are described in this paper. Marine ecological processes are influenced by many factors including hydrological and meteorological conditions, biological factors, and human activities. Consequently, it is very difficult to incorporate all these influences and their interactions in a deterministic or analytical model. A prediction model integrating a time series prediction approach with neural network nonlinear modeling is proposed for marine ecological parameters. The model captures the natural fluctuations in marine ecological parameters by learning automatically from the latest observed data, and then predicts future values of the parameters. The model is updated in a "rolling" fashion with new observed data from the monitoring station. Prediction experiments showed that the neural network prediction model based on time series data is effective for marine ecological prediction and can be used for the development of early warning systems.
Limit of Predictability in Mantle Convection
NASA Astrophysics Data System (ADS)
Bello, L.; Coltice, N.; Rolf, T.; Tackley, P. J.
2013-12-01
Linking mantle convection models with Earth's tectonic history has received considerable attention in recent years: modeling the evolution of supercontinent cycles, predicting present-day mantle structure or improving plate reconstructions. Predictions of future supercontinents are currently being made based on seismic tomography images, plate motion history and mantle convection models, and methods of data assimilation for mantle flow are developing. However, so far there are no studies of the limit of predictability these models are facing. Indeed, given the chaotic nature of mantle convection, we can expect forecasts and hindcasts to have a limited range of predictability. We propose here to use an approach similar to those used in dynamic meteorology, and more recently for the geodynamo, to evaluate the predictability limit of mantle dynamics forecasts. Following the pioneering work in weather forecasting (Lorenz 1965), we study the time evolution of twin experiments started from two very close initial temperature fields, and monitor the error growth. We extract a characteristic time of the system, known as the e-folding timescale, which will be used to estimate the predictability limit. The final predictability time will depend on the imposed initial error and the error tolerance in our model. We compute 3D spherical convection solutions using StagYY (Tackley, 2008). We first evaluate the influence of the Rayleigh number on the limit of predictability of isoviscous convection. Then, we investigate the effects of various rheologies, from the simplest (isoviscous mantle) to more complex ones (plate-like behavior and floating continents). We show that the e-folding time increases with the wavelength of the flow and reaches 10 Myr with plate-like behavior and continents. Such an e-folding time, together with the uncertainties in mantle temperature distribution, suggests that prediction of mantle structure from a given initial state is limited to <50 Myr. References: 1.
Lorenz, E. N. A study of the predictability of a 28-variable atmospheric model. Tellus XVII, 321-333 (1965). 2. Tackley, P. J. Modelling compressible mantle convection with large viscosity contrasts in a three-dimensional spherical shell using the yin-yang grid. Physics of the Earth and Planetary Interiors 171, 7-18 (2008).
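The e-folding timescale follows from the twin-experiment error curve: during exponential growth, ln(error) is linear in time and the e-folding time is the inverse slope. A sketch on synthetic divergence data (the 10 Myr value below is planted, not taken from the simulations):

```python
import math

def e_folding_time(times, errors):
    """Least-squares slope of ln(error) vs. time over the exponential-growth
    phase of a twin experiment; the e-folding time is 1/slope."""
    n = len(times)
    mt = sum(times) / n
    ml = sum(math.log(e) for e in errors) / n
    slope = (sum((t - mt) * (math.log(e) - ml) for t, e in zip(times, errors))
             / sum((t - mt) ** 2 for t in times))
    return 1.0 / slope

# Synthetic twin-experiment divergence with a 10 Myr e-folding time.
times = [0.0, 10.0, 20.0, 30.0]                      # Myr
errors = [1e-6 * math.exp(t / 10.0) for t in times]  # error norm
tau = e_folding_time(times, errors)
```

Given an initial error e0 and a tolerance E, the predictability horizon then scales as tau * ln(E/e0), which is how a ~10 Myr e-folding time translates into a forecast limit of a few tens of Myr.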
NASA Technical Reports Server (NTRS)
Kirtman, Ben P.; Min, Dughong; Infanti, Johnna M.; Kinter, James L., III; Paolino, Daniel A.; Zhang, Qin; vandenDool, Huug; Saha, Suranjana; Mendez, Malaquias Pena; Becker, Emily;
2013-01-01
The recent US National Academies report "Assessment of Intraseasonal to Interannual Climate Prediction and Predictability" was unequivocal in recommending the development of a North American Multi-Model Ensemble (NMME) operational predictive capability. Indeed, this effort is required to meet the specific tailored regional prediction and decision support needs of a large community of climate information users. The multi-model ensemble approach has proven extremely effective at quantifying prediction uncertainty due to uncertainty in model formulation, and produces better prediction quality (on average) than any single model ensemble. This multi-model approach is the basis for several international collaborative prediction research efforts and an operational European system, and there are numerous examples of how it yields superior forecasts compared to any single model. Based on two NOAA Climate Test Bed (CTB) NMME workshops (February 18 and April 8, 2011), a collaborative and coordinated implementation strategy for an NMME prediction system has been developed and is currently delivering real-time seasonal-to-interannual predictions on the NOAA Climate Prediction Center (CPC) operational schedule. The hindcast and real-time prediction data are readily available (e.g., http://iridl.ldeo.columbia.edu/SOURCES/.Models/.NMME/) and in graphical format from CPC (http://origin.cpc.ncep.noaa.gov/products/people/wd51yf/NMME/index.html). Moreover, the NMME forecasts are already being used as guidance for operational forecasters. This paper describes the new NMME effort, presents an overview of the multi-model forecast quality, and discusses the complementary skill associated with individual models.
Modeling the time--varying subjective quality of HTTP video streams with rate adaptations.
Chen, Chao; Choi, Lark Kwon; de Veciana, Gustavo; Caramanis, Constantine; Heath, Robert W; Bovik, Alan C
2014-05-01
Newly developed hypertext transfer protocol (HTTP)-based video streaming technologies enable flexible rate adaptation under varying channel conditions. Accurately predicting users' quality of experience (QoE) for rate-adaptive HTTP video streams is thus critical for achieving efficient delivery. An important aspect of understanding and modeling QoE is predicting the up-to-the-moment subjective quality of a video as it is played, which is difficult due to hysteresis effects and nonlinearities in human behavioral responses. This paper presents a Hammerstein-Wiener model for predicting the time-varying subjective quality (TVSQ) of rate-adaptive videos. To collect data for model parameterization and validation, a database of longer-duration videos with time-varying distortions was built, and the TVSQs of the videos were measured in a large-scale subjective study. The proposed method reliably predicts the TVSQ of rate-adaptive videos, and since the Hammerstein-Wiener model has a very simple structure, it is suitable for online TVSQ prediction in HTTP-based streaming.
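The Hammerstein-Wiener structure mentioned above is a cascade: a static input nonlinearity, a linear time-invariant filter, and a static output nonlinearity. A minimal sketch with toy nonlinearities and filter coefficients; the paper's fitted components are not reproduced here.

```python
import math

def hammerstein_wiener(u, f_in, iir_b, iir_a, g_out):
    """Generic Hammerstein-Wiener cascade: static input nonlinearity f_in,
    linear IIR filter with feedforward taps iir_b and feedback taps iir_a
    (a1, a2, ...), then static output nonlinearity g_out."""
    x = [f_in(v) for v in u]                       # Hammerstein block
    y_lin = []
    for n in range(len(x)):
        acc = sum(b * x[n - k] for k, b in enumerate(iir_b) if n - k >= 0)
        acc -= sum(a * y_lin[n - k]
                   for k, a in enumerate(iir_a, start=1) if n - k >= 0)
        y_lin.append(acc)                          # linear dynamic block
    return [g_out(v) for v in y_lin]               # Wiener block

# Toy example: saturating input, first-order smoothing, logistic output.
out = hammerstein_wiener(
    [0.0, 1.0, 1.0, 1.0],
    f_in=lambda v: min(v, 0.8),
    iir_b=[0.5], iir_a=[-0.5],
    g_out=lambda v: 1.0 / (1.0 + math.exp(-v)),
)
```

The hysteresis the paper describes would live in the linear dynamic block (memory of past quality), while the nonlinear behavioral response sits in the static blocks.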
Ecological prediction with nonlinear multivariate time-frequency functional data models
Yang, Wen-Hsi; Wikle, Christopher K.; Holan, Scott H.; Wildhaber, Mark L.
2013-01-01
Time-frequency analysis has become a fundamental component of many scientific inquiries. Due to improvements in technology, the amount of high-frequency signals that are collected for ecological and other scientific processes is increasing at a dramatic rate. In order to facilitate the use of these data in ecological prediction, we introduce a class of nonlinear multivariate time-frequency functional models that can identify important features of each signal as well as the interaction of signals corresponding to the response variable of interest. Our methodology is of independent interest and utilizes stochastic search variable selection to improve model selection and performs model averaging to enhance prediction. We illustrate the effectiveness of our approach through simulation and by application to predicting spawning success of shovelnose sturgeon in the Lower Missouri River.
NASA Astrophysics Data System (ADS)
Markov, Detelin
2012-11-01
This paper presents an easy-to-understand procedure for predicting the time variation of indoor air composition in air-tight occupied spaces during night periods. The mathematical model is based on the assumptions of homogeneity and perfect mixing of the indoor air, the ideal gas model for non-reacting gas mixtures, mass conservation equations for the entire system and for each species, a model for predicting the basal metabolic rate of humans, and a model for predicting the O2 consumption rate and the CO2 and H2O generation rates due to breathing. The time variation of indoor air composition is predicted at constant indoor air temperature for three scenarios based on the analytical solution of the mathematical model. The results reveal both the most probable scenario for indoor air composition variation in air-tight occupied spaces and the cause of morning tiredness after sleeping in a modern, energy-efficient space.
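For the fully air-tight, perfectly mixed case described above, the CO2 species balance has no ventilation term, so V·dC/dt = n·G integrates to a linear rise. A minimal sketch of that limiting case; the generation rate used below is an assumed round number for a sleeping adult, not a value from the paper.

```python
def co2_ppm(t_hours, volume_m3, occupants, c0_ppm=400.0, gen_m3_per_h=0.02):
    """CO2 concentration in a perfectly mixed, fully air-tight room.

    With zero air exchange, the species mass balance V * dC/dt = n * G
    gives a linear rise C(t) = C0 + n * G * t / V.
    gen_m3_per_h (~0.02 m3/h per sleeping adult) is an assumption here.
    """
    return c0_ppm + occupants * gen_m3_per_h * t_hours / volume_m3 * 1e6

# Two people sleeping 8 hours in a sealed 40 m3 bedroom.
c_ppm = co2_ppm(8.0, 40.0, 2, 400.0, 0.02)
```

Even this crude balance shows why a sealed bedroom reaches CO2 levels far above comfort thresholds by morning, the mechanism behind the "morning tiredness" conclusion.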
A Real-time Breakdown Prediction Method for Urban Expressway On-ramp Bottlenecks
NASA Astrophysics Data System (ADS)
Ye, Yingjun; Qin, Guoyang; Sun, Jian; Liu, Qiyuan
2018-01-01
Breakdown occurrence on expressways is thought to be related to various factors. Therefore, to investigate the association between breakdowns and these factors, a Bayesian network (BN) model is adopted in this paper. Based on the breakdown events identified at 10 urban expressway on-ramp bottlenecks in Shanghai, China, 23 parameters before breakdowns are extracted, including dynamic environment conditions aggregated over 5-minute intervals and static geometry features. Data from different time periods are used to predict breakdowns. Results indicate that the models using data from 5-10 min prior to breakdown perform best, with prediction accuracies above 73%. Moreover, one unified model for all bottlenecks is also built and shows reasonably good performance, with a breakdown classification accuracy of about 75% at best. Additionally, to simplify the model parameter input, a random forests (RF) model is adopted to identify the key variables. Modeling with the 7 selected parameters, the refined BN model can predict breakdowns with adequate accuracy.
NASA Astrophysics Data System (ADS)
Hu, J.; Zhang, H.; Ying, Q.; Chen, S.-H.; Vandenberghe, F.; Kleeman, M. J.
2014-08-01
For the first time, a decadal (9 years from 2000 to 2008) air quality model simulation with 4 km horizontal resolution and daily time resolution has been conducted in California to provide air quality data for health effects studies. Model predictions are compared to measurements to evaluate the accuracy of the simulation with an emphasis on spatial and temporal variations that could be used in epidemiology studies. Better model performance is found at longer averaging times, suggesting that model results with averaging times ≥ 1 month should be the first to be considered in epidemiological studies. The UCD/CIT model predicts spatial and temporal variations in the concentrations of O3, PM2.5, EC, OC, nitrate, and ammonium that meet standard modeling performance criteria when compared to monthly-averaged measurements. Predicted sulfate concentrations do not meet target performance metrics due to missing sulfur sources in the emissions. Predicted seasonal and annual variations of PM2.5, EC, OC, nitrate, and ammonium have mean fractional biases that meet the model performance criteria in 95%, 100%, 71%, 73%, and 92% of the simulated months, respectively. The base dataset improves predictions of population exposure to PM concentrations in California relative to exposures estimated from central site monitors operated one day out of every 3 days at a few urban locations. Uncertainties in the model predictions arise from several issues. Incomplete understanding of secondary organic aerosol formation mechanisms leads to OC bias in the model results in summertime but does not affect OC predictions in winter when concentrations are typically highest. The CO and NO (species dominated by mobile emissions) results reveal temporal and spatial uncertainties associated with the mobile emissions generated by the EMFAC 2007 model. 
The WRF model tends to over-predict wind speed during stagnation events, leading to under-predictions of high PM concentrations, usually in winter months. The WRF model also generally under-predicts relative humidity, resulting in less particulate nitrate formation especially during winter months. These issues will be improved in future studies. All model results included in the current manuscript can be downloaded free of charge at http://faculty.engineering.ucdavis.edu/kleeman/.
Operational, Real-Time, Sun-to-Earth Interplanetary Shock Predictions During Solar Cycle 23
NASA Astrophysics Data System (ADS)
Fry, C. D.; Dryer, M.; Sun, W.; Deehr, C. S.; Smith, Z.; Akasofu, S.
2002-05-01
We report on our progress in predicting interplanetary shock arrival time (SAT) in real time, using three forecast models: the Hakamada-Akasofu-Fry (HAF) modified kinematic model, the Interplanetary Shock Propagation Model (ISPM) and the Shock Time of Arrival (STOA) model. These models are run concurrently to provide real-time predictions of the arrival time at Earth of interplanetary shocks caused by solar events. These "fearless forecasts" are the first, and presently only, publicly distributed predictions of SAT and are undergoing quantitative evaluation for operational utility and scientific benchmarking. All three models predict SAT, but the HAF model also provides a global view of the propagation of interplanetary shocks through the pre-existing, non-uniform heliospheric structure. This allows the forecaster to track the propagation of the shock and to differentiate between shocks caused by solar events and those associated with co-rotating interaction regions (CIRs). This study includes 173 events during the period February 1997 to October 2000. Shock predictions were compared with spacecraft observations at the L1 location to determine how well the models perform. Sixty-eight shocks were observed at L1 within 120 hours of an event. We concluded that 6 of these observed shocks were caused by CIRs, and the remainder were caused by solar events. The forecast skill of the models is presented in terms of RMS errors, contingency tables and skill scores commonly used by the weather forecasting community. The false alarm rate for HAF was higher than for ISPM or STOA but much lower than for predictions based upon empirical studies or climatology. Of the parameters used to characterize a shock source at the Sun, the initial speed of the coronal shock, as represented by the observed metric type II speed, has the largest influence on the predicted SAT. 
We also found that HAF model predictions based upon type II speed are generally better for shocks originating from sites near central meridian, and worse for limb events. This tendency suggests that the observed type II speed is more representative of the interplanetary shock speed for events occurring near central meridian. In particular, the type II speed appears to underestimate the actual Earth-directed IP shock speed when the source of the event is near the limb. Several of the most interesting events (the Bastille Day epoch (2000) and the April Fools' Day epoch (2001)) will be discussed in more detail with the use of real-time animations.
Predicting readmission risk with institution-specific prediction models.
Yu, Shipeng; Farooq, Faisal; van Esbroeck, Alexander; Fung, Glenn; Anand, Vikram; Krishnapuram, Balaji
2015-10-01
The ability to predict patient readmission risk is extremely valuable for hospitals, especially under the Hospital Readmission Reduction Program of the Center for Medicare and Medicaid Services, which went into effect on October 1, 2012. There is a plethora of work in the literature on developing readmission risk prediction models, but most of them do not have sufficient prediction accuracy to be deployed in a clinical setting, partly because different hospitals may have different characteristics in their patient populations. We propose a generic framework for institution-specific readmission risk prediction, which takes patient data from a single institution and produces a statistical risk prediction model optimized for that particular institution and, optionally, for a specific condition. This provides great flexibility in model building, and is also able to provide institution-specific insights into the readmitted patient population. We have experimented with classification methods such as support vector machines, and prognosis methods such as Cox regression. We compared our methods with industry-standard methods such as the LACE model, and showed that the proposed framework is not only more flexible but also more effective. We applied our framework to patient data from three hospitals, and obtained initial results for heart failure (HF), acute myocardial infarction (AMI) and pneumonia (PN) patients, as well as patients with all conditions. On Hospital 2, the LACE model yielded AUC 0.57, 0.56, 0.53 and 0.55 for AMI, HF, PN and All Cause readmission prediction, respectively, while the proposed model yielded 0.66, 0.65, 0.63 and 0.74 for the corresponding conditions, all significantly better than the LACE counterpart. The proposed models that leverage all features at discharge time are more accurate than the models that only leverage features at admission time (0.66 vs. 0.61 for AMI, 0.65 vs. 0.61 for HF, 0.63 vs. 0.56 for PN, 0.74 vs. 
0.60 for All Cause). Furthermore, the proposed admission-time models already outperform LACE, which is a discharge-time model (0.61 vs. 0.57 for AMI, 0.61 vs. 0.56 for HF, 0.56 vs. 0.53 for PN, 0.60 vs. 0.55 for All Cause). Similar conclusions can be drawn from the other hospitals as well. The same performance comparison also holds for precision and recall at top-decile predictions. Most of the performance improvements are statistically significant. The institution-specific readmission risk prediction framework is more flexible and more effective than one-size-fits-all models like LACE, sometimes two to three times more effective. The admission-time models are able to give early warning signs compared to the discharge-time models, and may help hospital staff intervene early while the patient is still in the hospital. Copyright © 2015 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Wang, Han; Yan, Jie; Liu, Yongqian; Han, Shuang; Li, Li; Zhao, Jing
2017-11-01
Increasing the accuracy of wind speed prediction lays a solid foundation for reliable wind power forecasting. Most traditional correction methods for wind speed prediction establish a mapping between the numerical weather prediction (NWP) wind speed and the historical measurement data (HMD) at the corresponding time slot, ignoring the time-dependent structure of the wind speed series. In this paper, a multi-step-ahead wind speed prediction correction method is proposed that accounts for the carry-over effect of the wind speed at the previous time slot. To this end, the proposed method employs both NWP and HMD as model inputs and training labels. First, a probabilistic analysis of the NWP deviation in different wind speed bins illustrates the inadequacy of the traditional time-independent mapping strategy. Then, a support vector machine (SVM) is used to implement the proposed mapping strategy and to establish the correction model for all wind speed bins. A wind farm in northern China is taken as an example to validate the proposed method, and three benchmark wind speed prediction methods are used for comparison. The results show that the proposed model performs best over different time horizons.
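The diagnostic step described above — characterizing the NWP deviation per wind-speed bin — can be sketched as a simple per-bin bias table. This stands in for the probabilistic analysis only; the paper's actual correction model is an SVM, and the numbers below are invented.

```python
from collections import defaultdict

def bin_bias_correction(nwp, measured, bin_width=2.0):
    """Mean NWP deviation per wind-speed bin: maps each forecast value to a
    bin of width bin_width m/s and returns {bin index: mean(measured - NWP)}."""
    sums, counts = defaultdict(float), defaultdict(int)
    for f, m in zip(nwp, measured):
        b = int(f // bin_width)
        sums[b] += m - f
        counts[b] += 1
    return {b: sums[b] / counts[b] for b in sums}

def correct(forecast, bias_by_bin, bin_width=2.0):
    """Apply the learned per-bin bias to a new NWP forecast value."""
    return forecast + bias_by_bin.get(int(forecast // bin_width), 0.0)

# Toy data: NWP under-predicts light winds, over-predicts strong winds.
nwp      = [3.1, 3.9, 8.2, 8.8, 9.1]
measured = [3.6, 4.2, 7.4, 8.1, 8.2]
bias = bin_bias_correction(nwp, measured)
v = correct(3.5, bias)
```

A bin-dependent bias like this is exactly the evidence the paper uses to argue that a single time-independent mapping is inadequate.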
Li, Jia; Xia, Yunni; Luo, Xin
2014-01-01
OWL-S, one of the most important Semantic Web service ontologies proposed to date, provides a core ontological framework and guidelines for describing the properties and capabilities of web services in an unambiguous, computer-interpretable form. Predicting the reliability of composite service processes specified in OWL-S allows service users to decide whether a process meets their quantitative quality requirements. In this study, we consider the runtime quality of services to be fluctuating and introduce a dynamic framework to predict the runtime reliability of services specified in OWL-S, employing a non-Markovian stochastic Petri net (NMSPN) and a time series model. The framework includes the following steps: obtaining the historical response-time series of individual service components; fitting these series with an autoregressive moving-average (ARMA) model and predicting the future firing rates of service components; mapping the OWL-S process into an NMSPN model; and employing the predicted firing rates as input to the NMSPN to calculate the normal completion probability as the reliability estimate. In the case study, a comparison between a static model and our approach based on experimental data is presented, and it is shown that our approach achieves higher prediction accuracy.
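The fit-and-forecast step above can be illustrated with the simplest ARMA member, an AR(1) fitted by least squares to a response-time series. This is a sketch of the technique class only; the paper does not specify its model orders, and the data below are invented.

```python
def fit_ar1(series):
    """Least-squares fit of x[t] = c + phi * x[t-1], i.e. an AR(1) model;
    returns (c, phi)."""
    xs, ys = series[:-1], series[1:]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var = sum((x - mx) ** 2 for x in xs)
    phi = cov / var
    return my - phi * mx, phi

def forecast(series, c, phi, steps):
    """Iterated one-step-ahead forecasts of future response times."""
    out, last = [], series[-1]
    for _ in range(steps):
        last = c + phi * last
        out.append(last)
    return out

# Toy response-time series (ms) for one service component.
rt = [100, 110, 95, 105, 98, 103, 99]
c, phi = fit_ar1(rt)
pred = forecast(rt, c, phi, 3)
```

In the paper's pipeline, forecasts like `pred` would be converted to firing rates and fed into the NMSPN to compute the completion probability.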
Development of an accident duration prediction model on the Korean Freeway Systems.
Chung, Younshik
2010-01-01
Since duration prediction is one of the most important steps in an accident management process, several approaches have been developed for modeling accident duration. This paper presents a model for accident duration prediction based on a large, accurately recorded accident dataset from the Korean Freeway Systems. To develop the duration prediction model, this study utilizes the log-logistic accelerated failure time (AFT) metric model and a 2-year accident duration dataset from 2006 to 2007. Specifically, the 2006 dataset is utilized to develop the prediction model, and the 2007 dataset is then employed to test the temporal transferability of the 2006 model. Although the duration prediction model has limitations, such as large prediction errors due to individual differences among accident treatment teams in clearing similar accidents, the 2006 model yielded reasonable predictions on the mean absolute percentage error (MAPE) scale. Additionally, the results of the statistical test for temporal transferability indicated that the estimated parameters of the duration prediction model are stable over time. This temporal stability suggests that the model may serve as a basis for making rational diversion and dispatching decisions in the event of an accident. Ultimately, such information will help mitigate traffic congestion due to accidents.
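In a log-logistic AFT model, log(T) has a location given by the linear predictor, so the median duration is simply exp(x·β). A minimal sketch of that prediction step; the coefficients and covariates below are hypothetical, not the paper's estimates.

```python
import math

def aft_median_duration(coeffs, covariates, intercept):
    """Median duration under a log-logistic AFT model: log(T) has location
    intercept + sum(beta_i * x_i), and the median of T is exp(location).
    All coefficients here are hypothetical illustrations."""
    loc = intercept + sum(b * x for b, x in zip(coeffs, covariates))
    return math.exp(loc)

# Hypothetical covariates: [injury involved (0/1), lanes blocked, night (0/1)]
minutes = aft_median_duration([0.4, 0.15, 0.2], [1, 2, 0], intercept=3.0)
```

The AFT form makes each coefficient interpretable as a multiplicative "time ratio": here the hypothetical injury indicator scales the median duration by exp(0.4) ≈ 1.5.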
Study on Development of 1D-2D Coupled Real-time Urban Inundation Prediction model
NASA Astrophysics Data System (ADS)
Lee, Seungsoo
2017-04-01
In recent years, abnormal weather conditions driven by climate change have been experienced around the world, making countermeasures for flood defense an urgent task. This research develops a 1D-2D coupled real-time urban inundation prediction model that uses precipitation forecasts based on remote sensing technology. A one-dimensional (1D) sewerage system analysis model introduced by Lee et al. (2015) is used to simulate inlet and overflow phenomena by interacting with surface flows as well as flows in conduits. A two-dimensional (2D) grid mesh refinement method is applied to depict road networks while keeping calculation times practical. The 2D surface model is coupled with the 1D sewerage analysis model to capture bi-directional flow between the two. A parallel computing method, OpenMP, is also applied to reduce calculation time. The model is evaluated against the 25 August 2014 extreme rainfall event that caused severe inundation damage in Busan, Korea. The Oncheoncheon basin is selected as the study basin, and observed radar data stand in for predicted rainfall data. The model shows acceptable calculation speed and accuracy, so it is expected that it can be used in a real-time urban inundation forecasting system to minimize damage.
Bayesian dynamic modeling of time series of dengue disease case counts
López-Quílez, Antonio; Torres-Prieto, Alexander
2017-01-01
The aim of this study is to model the association between weekly time series of dengue case counts and meteorological variables in a high-incidence city of Colombia, applying Bayesian hierarchical dynamic generalized linear models over the period January 2008 to August 2015. Additionally, we evaluate the model's short-term performance for predicting dengue cases. The methodology uses dynamic Poisson log-link models including constant or time-varying coefficients for the meteorological variables. Calendar effects were modeled using constant or first- or second-order random-walk time-varying coefficients. The meteorological variables were modeled using constant coefficients and first-order random-walk time-varying coefficients. We applied Markov chain Monte Carlo simulations for parameter estimation, and the deviance information criterion (DIC) for model selection. We assessed the short-term predictive performance of the selected final model at several time points within the study period using the mean absolute percentage error. The best model included first-order random-walk time-varying coefficients for the calendar trend and for the meteorological variables. Beyond the computational challenges, interpreting the results requires a complete analysis of the dengue time series with respect to the parameter estimates of the meteorological effects. We found small mean absolute percentage errors for one- or two-week out-of-sample predictions at most prediction points, associated with low-volatility periods in the dengue counts. We discuss the advantages and limitations of dynamic Poisson models for studying the association between time series of dengue disease and meteorological variables. 
The key conclusion of the study is that dynamic Poisson models account for the dynamic nature of the variables involved in the modeling of time series of dengue disease, producing useful models for decision-making in public health. PMID:28671941
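The core of the Poisson log-link model above is the expected weekly count λ = exp(linear predictor). A minimal sketch of one week's prediction with constant coefficients; in the selected model the coefficients would instead follow first-order random walks, and all values below are hypothetical.

```python
import math

def dengue_rate(coeffs, trend, temp, rain):
    """One-week expected dengue case count from a Poisson log-link model:
    lambda = exp(b0 + b_trend*trend + b_temp*temp + b_rain*rain).
    Coefficients are hypothetical illustrations, not estimates from the study."""
    b0, b_trend, b_temp, b_rain = coeffs
    return math.exp(b0 + b_trend * trend + b_temp * temp + b_rain * rain)

# Hypothetical week: trend index 10, mean temperature 28 C, 50 mm rainfall.
lam = dengue_rate((1.0, 0.01, 0.05, 0.002), trend=10.0, temp=28.0, rain=50.0)
```

The observed count for the week would then be modeled as Poisson(λ), with the MCMC machinery estimating the (time-varying) coefficients.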
NASA Astrophysics Data System (ADS)
Zhou, Ya-Tong; Fan, Yu; Chen, Zi-Yi; Sun, Jian-Cheng
2017-05-01
The contribution of this work is twofold: (1) a multimodality prediction method for chaotic time series based on the Gaussian process mixture (GPM) model is proposed, which employs a divide-and-conquer strategy. It automatically divides a chaotic time series into multiple modalities with different extrinsic patterns and intrinsic characteristics, and thus can fit the series more precisely. (2) An effective sparse hard-cut expectation maximization (SHC-EM) learning algorithm for the GPM model is proposed to improve prediction performance. SHC-EM replaces a large learning sample set with fewer pseudo inputs, accelerating model learning based on these pseudo inputs. Experiments on the Lorenz and Chua time series demonstrate that the proposed method yields not only accurate multimodality predictions but also prediction confidence intervals. SHC-EM outperforms traditional variational learning in terms of both prediction accuracy and speed, and is more robust and less susceptible to noise. Supported by the National Natural Science Foundation of China under Grant No 60972106, the China Postdoctoral Science Foundation under Grant No 2014M561053, the Humanity and Social Science Foundation of Ministry of Education of China under Grant No 15YJA630108, and the Hebei Province Natural Science Foundation under Grant No E2016202341.
Zanderigo, Francesca; Sparacino, Giovanni; Kovatchev, Boris; Cobelli, Claudio
2007-09-01
The aim of this article was to use continuous glucose error-grid analysis (CG-EGA) to assess the accuracy of two time-series modeling methodologies recently developed to predict glucose levels ahead of time using continuous glucose monitoring (CGM) data. We considered subcutaneous time series of glucose concentration monitored every 3 minutes for 48 hours by the minimally invasive CGM sensor Glucoday® (Menarini Diagnostics, Florence, Italy) in 28 type 1 diabetic volunteers. Two prediction algorithms, based on first-order polynomial and autoregressive (AR) models, respectively, were considered with prediction horizons of 30 and 45 minutes and forgetting factors (ff) of 0.2, 0.5, and 0.8. CG-EGA was used on the predicted profiles to assess their point and dynamic accuracies using original CGM profiles as reference. Continuous glucose error-grid analysis showed that the accuracy of both prediction algorithms is overall very good and that their performance is similar from a clinical point of view. However, the AR model seems preferable for hypoglycemia prevention. CG-EGA also suggests that, irrespective of the time-series model, the use of ff = 0.8 yields the most accurate readings in all glucose ranges. For the first time, CG-EGA is proposed as a tool to assess clinically relevant performance of a prediction method separately at hypoglycemia, euglycemia, and hyperglycemia. In particular, we have shown that CG-EGA can be helpful in comparing different prediction algorithms, as well as in optimizing their parameters.
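The first-order polynomial predictor with a forgetting factor, one of the two algorithms compared above, amounts to a weighted line fit (weights ff^age, newest sample weighted 1) extrapolated over the prediction horizon. A minimal sketch; the glucose samples below are invented for illustration.

```python
def predict_poly1_ff(samples, ff, horizon_steps):
    """First-order polynomial predictor with exponential forgetting factor:
    weighted least-squares fit of y = intercept + slope * t to past samples
    (weight ff**age), extrapolated horizon_steps samples ahead."""
    n = len(samples)
    w = [ff ** (n - 1 - i) for i in range(n)]   # newest sample has weight 1
    sw = sum(w)
    st = sum(wi * i for wi, i in zip(w, range(n)))
    sy = sum(wi * y for wi, y in zip(w, samples))
    stt = sum(wi * i * i for wi, i in zip(w, range(n)))
    sty = sum(wi * i * y for wi, i, y in zip(w, range(n), samples))
    slope = (sw * sty - st * sy) / (sw * stt - st * st)
    intercept = (sy - slope * st) / sw
    return intercept + slope * (n - 1 + horizon_steps)

# 3-minute CGM samples (mg/dl); predict 30 min (10 samples) ahead, ff = 0.8.
glucose = [120, 122, 125, 127, 130, 131, 134, 136, 139, 140]
g30 = predict_poly1_ff(glucose, ff=0.8, horizon_steps=10)
```

Smaller ff values discount history faster and make the predictor more reactive but noisier, which is the trade-off the CG-EGA comparison of ff = 0.2, 0.5, 0.8 probes.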
Smith, Andrew J; Abeyta, Andrew A; Hughes, Michael; Jones, Russell T
2015-03-01
This study tested a conceptual model merging anxiety buffer disruption and social-cognitive theories to predict persistent grief severity among students who lost a close friend, significant other, and/or professor/teacher in tragic university campus shootings. A regression-based path model tested posttraumatic stress (PTS) symptom severity 3 to 4 months postshooting (Time 1) as a predictor of grief severity 1 year postshooting (Time 2), both directly and indirectly through cognitive processes (self-efficacy and disrupted worldview). Results revealed a model that predicted 61% of the variance in Time 2 grief severity. Hypotheses were supported, demonstrating that Time 1 PTS severity indirectly, positively predicted Time 2 grief severity through undermining self-efficacy and more severely disrupting worldview. Findings and theoretical interpretation yield important insights for future research and clinical application.
Differences in care burden of patients undergoing dialysis in different centres in the Netherlands.
de Kleijn, Ria; Uyl-de Groot, Carin; Hagen, Chris; Diepenbroek, Adry; Pasker-de Jong, Pieternel; Ter Wee, Piet
2017-06-01
A classification model was developed to simplify planning of personnel at dialysis centres. This model predicted the care burden based on dialysis characteristics. However, patient characteristics and different dialysis centre categories might also influence the amount of care time required. To determine if there is a difference in care burden between different categories of dialysis centres and if specific patient characteristics predict nursing time needed for patient treatment. An observational study. Two hundred and forty-two patients from 12 dialysis centres. In 12 dialysis centres, nurses filled out the classification list per patient and completed a form with patient characteristics. Nephrologists filled out the Charlson Comorbidity Index. Independent observers clocked the time nurses spent on separate steps of the dialysis for each patient. Dialysis centres were categorised into four types. Data were analysed using regression models. In contrast to other dialysis centres, academic centres needed 14 minutes more care time per patient per dialysis treatment than predicted in the classification model. No patient characteristics were found that influenced this difference. The only patient characteristic that predicted the time required was gender, with more time required to treat women. Gender did not affect the difference between measured and predicted care time. Differences in care burden were observed between academic and other centres, with more time required for treatment in academic centres. Contribution of patient characteristics to the time difference was minimal. The only patient characteristics that predicted care time were previous transplantation, which reduced the time required, and gender, with women requiring more care time. © 2017 European Dialysis and Transplant Nurses Association/European Renal Care Association.
Real-time 3-D space numerical shake prediction for earthquake early warning
NASA Astrophysics Data System (ADS)
Wang, Tianyun; Jin, Xing; Huang, Yandan; Wei, Yongxiang
2017-12-01
In earthquake early warning systems, real-time shake prediction through wave propagation simulation is a promising approach. Compared with traditional methods, it does not suffer from inaccurate estimation of source parameters. For computational efficiency, these methods assume that waves propagate on the 2-D surface of the earth. In fact, since seismic waves propagate through the 3-D sphere of the earth, 2-D modeling of wave propagation results in inaccurate wave estimates. In this paper, we propose a 3-D space numerical shake prediction method, which simulates wave propagation in 3-D space using radiative transfer theory and incorporates a data assimilation technique to estimate the distribution of wave energy. The 2011 Tohoku earthquake is studied as an example to show the validity of the proposed model. The 2-D and 3-D space models are compared in this article, and the prediction results show that numerical shake prediction based on the 3-D space model can estimate real-time ground motion precisely, with overprediction alleviated relative to the 2-D model.
NASA Astrophysics Data System (ADS)
Paouris, Evangelos; Mavromichalaki, Helen
2017-12-01
In a previous work (Paouris and Mavromichalaki in Solar Phys. 292, 30, 2017), we presented a total of 266 interplanetary coronal mass ejections (ICMEs) with as much information as possible, and from that analysis developed a new empirical model for estimating the acceleration of these events in the interplanetary medium. In this work, we present a new approach to the effective acceleration model (EAM) for predicting the arrival time of the shock that precedes a CME, using data from a total of 214 ICMEs. For the first time, the projection effects of the linear speed of CMEs are taken into account in this empirical model, which significantly improves the prediction of the shock arrival time. In particular, the mean difference between the observed and predicted shock arrival times was +3.03 hours, with a mean absolute error (MAE) of 18.58 hours and a root mean squared error (RMSE) of 22.47 hours. After the improvement of the model, the mean time difference decreased to -0.28 hours, with an MAE of 17.65 hours and an RMSE of 21.55 hours. The improved version was applied to a set of three recent Earth-directed CMEs reported in May, June, and July of 2017, and we compare our results with the values predicted by other related models.
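Once an effective acceleration is assigned to a CME, the arrival-time calculation is constant-acceleration kinematics: solve d = v0·t + ½at² for the Sun-Earth distance. This sketch shows only that final kinematic step under an assumed constant deceleration; it does not reproduce the paper's empirical acceleration law or projection correction.

```python
import math

def shock_travel_time(v0_kms, accel_kms2, distance_au=1.0):
    """Sun-to-Earth travel time (hours) for a shock with initial speed v0
    (km/s) and constant effective acceleration a (km/s^2), from the
    positive root of d = v0*t + 0.5*a*t**2. Assumes a != 0 and that the
    shock actually reaches 1 AU (discriminant > 0)."""
    d = distance_au * 1.496e8                       # Sun-Earth distance, km
    disc = v0_kms ** 2 + 2.0 * accel_kms2 * d
    return (-v0_kms + math.sqrt(disc)) / accel_kms2 / 3600.0

# A 1000 km/s CME with an assumed constant deceleration of 2 m/s^2.
t_hours = shock_travel_time(v0_kms=1000.0, accel_kms2=-2.0e-3)
```

With these assumed inputs the travel time comes out near two days, within the typical 1-4 day range for Earth-directed ICMEs.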
Remaining dischargeable time prediction for lithium-ion batteries using unscented Kalman filter
NASA Astrophysics Data System (ADS)
Dong, Guangzhong; Wei, Jingwen; Chen, Zonghai; Sun, Han; Yu, Xiaowei
2017-10-01
To overcome range anxiety, one important strategy is to accurately predict the range or dischargeable time of the battery system. To accurately predict the remaining dischargeable time (RDT) of a battery, an RDT prediction framework based on accurate battery modeling and state estimation is presented in this paper. First, a simplified linearized equivalent-circuit model is developed to simulate the dynamic characteristics of a battery. Then, an online recursive least-squares method and an unscented Kalman filter are employed to estimate the system matrices and SOC at every prediction point. In addition, a discrete wavelet transform technique is employed to capture the statistical information of the past dynamics of the input currents, which is utilized to predict future battery currents. Finally, the RDT is predicted from the battery model, the SOC estimation results, and the predicted future battery currents. The performance of the proposed methodology has been verified on a lithium-ion battery cell. Experimental results indicate that the proposed method provides accurate SOC and parameter estimation, and the predicted RDT can help address range anxiety.
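The final step of the framework above reduces to a charge balance: once the SOC has been estimated (the paper uses an unscented Kalman filter for this) and the future load current predicted, the RDT is the remaining dischargeable charge divided by that current. A minimal sketch of that last step only, with invented numbers.

```python
def remaining_dischargeable_time(soc, capacity_ah, predicted_current_a,
                                 soc_cutoff=0.0):
    """RDT in hours for a constant predicted discharge current: the usable
    charge (SOC above the cutoff, times capacity) divided by the current.
    This is only the terminal calculation; the SOC itself would come from
    the filter-based estimator described in the abstract."""
    usable_ah = max(soc - soc_cutoff, 0.0) * capacity_ah
    return usable_ah / predicted_current_a

# A 2 Ah cell at 60% SOC, 10% cutoff, predicted 0.5 A average load.
hours = remaining_dischargeable_time(soc=0.6, capacity_ah=2.0,
                                     predicted_current_a=0.5, soc_cutoff=0.1)
```

In the paper the predicted current is not constant but reconstructed from wavelet statistics of the past load, so the division becomes an integration of the predicted current profile.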
Gupta, Nidhi; Christiansen, Caroline Stordal; Hanisch, Christiana; Bay, Hans; Burr, Hermann; Holtermann, Andreas
2017-01-16
To investigate the differences between questionnaire-based and accelerometer-based sitting time, and to develop a model for improving the accuracy of questionnaire-based sitting time in predicting accelerometer-based sitting time. 183 workers in a cross-sectional study reported sitting time per day using a single question during the measurement period, and wore 2 Actigraph GT3X+ accelerometers on the thigh and trunk for 1-4 working days to determine their actual sitting time per day using the validated Acti4 software. Least squares regression models were fitted with questionnaire-based sitting time and other self-reported predictors to predict accelerometer-based sitting time. Questionnaire-based and accelerometer-based average sitting times were ≈272 and ≈476 min/day, respectively. A low Pearson correlation (r=0.32), a high mean bias (204.1 min) and wide limits of agreement (-139.7 to 549.8 min) between questionnaire-based and accelerometer-based sitting time were found. The prediction model based on questionnaire-based sitting time explained 10% of the variance in accelerometer-based sitting time. Inclusion of 9 self-reported predictors in the model increased the explained variance to 41%, with 10% optimism using a resampling bootstrap validation. Based on a split validation analysis, the prediction model developed on ≈75% of the workers (n=132) reduced the mean and the SD of the difference between questionnaire-based and accelerometer-based sitting time by 64% and 42%, respectively, in the remaining 25% of the workers. This study indicates that questionnaire-based sitting time has low validity and that a prediction model can be one solution to materially improve the precision of questionnaire-based sitting time. Published by the BMJ Publishing Group Limited.
Predicting landscape vegetation dynamics using state-and-transition simulation models
Colin J. Daniel; Leonardo Frid
2012-01-01
This paper outlines how state-and-transition simulation models (STSMs) can be used to project changes in vegetation over time across a landscape. STSMs are stochastic, empirical simulation models that use an adapted Markov chain approach to predict how vegetation will transition between states over time, typically in response to interactions between succession,...
Olyphant, Greg A.; Whitman, Richard L.
2004-01-01
Data on hydrometeorological conditions and E. coli concentration were simultaneously collected on 57 occasions during the summer of 2000 at 63rd Street Beach, Chicago, Illinois. The data were used to identify and calibrate a statistical regression model aimed at predicting when the bacterial concentration of the beach water was above or below the level considered safe for full body contact. A wide range of hydrological, meteorological, and water quality variables were evaluated as possible predictive variables. These included wind speed and direction, incoming solar radiation (insolation), various time frames of rainfall, air temperature, lake stage and wave height, and water temperature, specific conductance, dissolved oxygen, pH, and turbidity. The best-fit model combined real-time measurements of wind direction and speed (onshore component of the resultant wind vector), rainfall, insolation, lake stage, water temperature and turbidity to predict the geometric mean E. coli concentration in the swimming zone of the beach. The model, which contained both additive and multiplicative (interaction) terms, accounted for 71% of the observed variability in the log E. coli concentrations. A comparison between model predictions of when the beach should be closed and when the actual bacterial concentrations were above or below the 235 cfu/100 ml threshold value indicated that the model accurately predicted openings versus closures 88% of the time.
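A regression model with additive and multiplicative (interaction) terms of the kind described can be sketched with an ordinary least-squares fit. The predictors, their ranges, and every coefficient below are invented for illustration; this is not the published 63rd Street Beach model.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200
wind = rng.uniform(-5, 5, n)      # onshore wind component (m/s), illustrative
rain = rng.uniform(0, 30, n)      # rainfall (mm), illustrative
turb = rng.uniform(1, 50, n)      # turbidity (NTU), illustrative

# Synthetic log10 E. coli with one wind*turbidity interaction (coefficients invented).
log_ec = (1.5 + 0.10 * wind + 0.02 * rain + 0.01 * turb
          + 0.005 * wind * turb + rng.normal(0, 0.1, n))

# Design matrix: intercept, additive terms, one multiplicative (interaction) term.
X = np.column_stack([np.ones(n), wind, rain, turb, wind * turb])
beta, *_ = np.linalg.lstsq(X, log_ec, rcond=None)

pred = X @ beta
exceed = 10**pred > 235           # compare with the 235 cfu/100 ml closure threshold
print(np.round(beta, 3))
```

With the fitted coefficients in hand, a closure decision reduces to checking whether the predicted geometric-mean concentration exceeds the regulatory threshold, as in the `exceed` flag above.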
Na, Okpin; Cai, Xiao-Chuan; Xi, Yunping
2017-01-01
The prediction of chloride-induced corrosion is very important because it governs the durable life of concrete structures. To simulate more realistic durability performance of concrete structures, complex scientific methods and more accurate material models are needed. In order to predict robust results for corrosion initiation time and to resolve the thin layer from the concrete surface to the reinforcement, a large number of fine meshes are also used. The purpose of this study is to propose a more realistic physical model of coupled hygro-chemo transport and to implement the model with a parallel finite element algorithm. Furthermore, a microclimate model with environmental humidity and seasonal temperature is adopted. As a result, a prediction model for chloride diffusion under unsaturated conditions was developed with parallel algorithms and was applied to an existing bridge to validate the model with multiple boundary conditions. As the number of processors increased, the computational time decreased until the number of processors became optimal; beyond that point, the computational time increased because the communication time between the processors increased. The framework of the present model can be extended to simulate multi-species de-icing salt ingress into non-saturated concrete structures in future work. PMID:28772714
Analytic Guided-Search Model of Human Performance Accuracy in Target-Localization Search Tasks
NASA Technical Reports Server (NTRS)
Eckstein, Miguel P.; Beutter, Brent R.; Stone, Leland S.
2000-01-01
Current models of human visual search have extended the traditional serial/parallel search dichotomy. Two successful models for predicting human visual search are the Guided Search model and the Signal Detection Theory model. Although these models are inherently different, it has been difficult to compare them because the Guided Search model is designed to predict response time, while Signal Detection Theory models are designed to predict performance accuracy. Moreover, current implementations of the Guided Search model require the use of Monte Carlo simulations, a method that makes fitting the model's performance quantitatively to human data more computationally time-consuming. We have extended the Guided Search model to predict human accuracy in target-localization search tasks. We have also developed analytic expressions that simplify simulation of the model to the evaluation of a small set of equations using only three free parameters. This new implementation and extension of the Guided Search model will enable direct quantitative comparisons with human performance in target-localization search experiments and with the predictions of Signal Detection Theory and other search accuracy models.
Hossain, Monir; Wright, Steven; Petersen, Laura A
2002-04-01
One way to monitor patient access to emergent health care services is to use patient characteristics to predict arrival time at the hospital after onset of symptoms. This predicted arrival time can then be compared with actual arrival time to allow monitoring of access to services. Predicted arrival time could also be used to estimate potential effects of changes in health care service availability, such as closure of an emergency department or an acute care hospital. Our goal was to determine the best statistical method for prediction of arrival intervals for patients with acute myocardial infarction (AMI) symptoms. We compared the performance of multinomial logistic regression (MLR) and discriminant analysis (DA) models. Models for MLR and DA were developed using a dataset of 3,566 male veterans hospitalized with AMI in 81 VA Medical Centers in 1994-1995 throughout the United States. The dataset was randomly divided into a training set (n = 1,846) and a test set (n = 1,720). Arrival times were grouped into three intervals on the basis of treatment considerations: <6 hours, 6-12 hours, and >12 hours. One model for MLR and two models for DA were developed using the training dataset. One DA model had equal prior probabilities, and one DA model had proportional prior probabilities. Predictive performance of the models was compared using the test (n = 1,720) dataset. Using the test dataset, the proportions of patients in the three arrival time groups were 60.9% for <6 hours, 10.3% for 6-12 hours, and 28.8% for >12 hours after symptom onset. Whereas the overall predictive performance by MLR and DA with proportional priors was higher, the DA models with equal priors performed much better in the smaller groups. Correct classifications were 62.6% by MLR, 62.4% by DA using proportional prior probabilities, and 48.1% using equal prior probabilities of the groups. 
The misclassifications by MLR for the three groups were 9.5%, 100.0%, 74.2% for each time interval, respectively. Misclassifications by DA models were 9.8%, 100.0%, and 74.4% for the model with proportional priors and 47.6%, 79.5%, and 51.0% for the model with equal priors. The choice of MLR or DA with proportional priors, or DA with equal priors for monitoring time intervals of predicted hospital arrival time for a population should depend on the consequences of misclassification errors.
Univariate Time Series Prediction of Solar Power Using a Hybrid Wavelet-ARMA-NARX Prediction Method
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nazaripouya, Hamidreza; Wang, Yubo; Chu, Chi-Cheng
This paper proposes a new hybrid method for super-short-term solar power prediction. Solar output power usually has complex, nonstationary, and nonlinear characteristics due to the intermittent and time-varying behavior of solar radiance. In addition, solar power dynamics are fast and nearly inertialess. An accurate super-short-term prediction is required to compensate for the fluctuations and reduce the impact of solar power penetration on the power system. The objective is to predict one-step-ahead solar power generation based only on historical solar power time series data. The proposed method incorporates the discrete wavelet transform (DWT), Auto-Regressive Moving Average (ARMA) models, and Recurrent Neural Networks (RNN), where the RNN architecture is based on Nonlinear Auto-Regressive models with eXogenous inputs (NARX). The wavelet transform is utilized to decompose the solar power time series into a set of better-behaved forming series for prediction. The ARMA model is employed as a linear predictor, while NARX is used as a nonlinear pattern recognition tool to estimate and compensate for the error of the wavelet-ARMA prediction. The proposed method is applied to data captured from UCLA solar PV panels, and the results are compared with some of the most common and recent solar power prediction methods. The results validate the effectiveness of the proposed approach and show a considerable improvement in prediction precision.
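The wavelet decomposition step can be illustrated with a one-level Haar DWT written directly in NumPy. The abstract does not specify the wavelet family; Haar is assumed here purely for simplicity, and the toy "solar power" series is invented.

```python
import numpy as np

def haar_dwt(x):
    """One-level Haar DWT: approximation and detail coefficients."""
    x = np.asarray(x, float)
    a = (x[0::2] + x[1::2]) / np.sqrt(2)    # smooth sub-series
    d = (x[0::2] - x[1::2]) / np.sqrt(2)    # fluctuation sub-series
    return a, d

def haar_idwt(a, d):
    """Inverse of the one-level Haar DWT."""
    x = np.empty(2 * len(a))
    x[0::2] = (a + d) / np.sqrt(2)
    x[1::2] = (a - d) / np.sqrt(2)
    return x

# Toy "solar power" series: diurnal-like ramp plus noise (illustrative only).
t = np.arange(64)
power = (np.clip(np.sin(np.pi * t / 32), 0, None)
         + 0.05 * np.random.default_rng(2).normal(size=64))

approx, detail = haar_dwt(power)        # each sub-series is predicted separately
recon = haar_idwt(approx, detail)
print(np.allclose(recon, power))        # True: the decomposition is lossless
```

In a scheme like the paper's, a linear model (ARMA) would be fitted to each sub-series and a nonlinear corrector (NARX) applied to the residual; the lossless reconstruction above is what lets sub-series forecasts be recombined into a forecast of the original signal.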
Physical and JIT Model Based Hybrid Modeling Approach for Building Thermal Load Prediction
NASA Astrophysics Data System (ADS)
Iino, Yutaka; Murai, Masahiko; Murayama, Dai; Motoyama, Ichiro
Energy conservation in buildings is one of the key environmental issues, as it is in the industrial, transportation and residential fields. HVAC (Heating, Ventilating and Air Conditioning) systems account for half of the total energy consumption in a building. In order to realize energy conservation in HVAC systems, a thermal load prediction model for the building is required. This paper proposes a hybrid modeling approach combining a physical model and a Just-in-Time (JIT) model for building thermal load prediction. The proposed method has the following features and benefits: (1) it is applicable to cases in which past operation data for load prediction model learning are scarce; (2) it has a self-checking function that continuously supervises whether the data-driven load prediction and the physics-based one are consistent, so it can detect when something is wrong in the load prediction procedure; (3) it can adjust the load prediction in real time against sudden changes of model parameters and environmental conditions. The proposed method is evaluated with real operation data of an existing building, and the improvement in load prediction performance is illustrated.
Temporal Prediction Errors Affect Short-Term Memory Scanning Response Time.
Limongi, Roberto; Silva, Angélica M
2016-11-01
The Sternberg short-term memory scanning task has been used to unveil cognitive operations involved in time perception. Participants produce time intervals during the task, and the researcher explores how task performance affects interval production - where time estimation error is the dependent variable of interest. The perspective of predictive behavior regards time estimation error as a temporal prediction error (PE), an independent variable that controls cognition, behavior, and learning. Based on this perspective, we investigated whether temporal PEs affect short-term memory scanning. Participants performed temporal predictions while they maintained information in memory. Model inference revealed that PEs affected memory scanning response time independently of the memory-set size effect. We discuss the results within the context of formal and mechanistic models of short-term memory scanning and predictive coding, a Bayes-based theory of brain function. We state the hypothesis that our finding could be associated with weak frontostriatal connections and weak striatal activity.
An empirical study of race times in recreational endurance runners.
Vickers, Andrew J; Vertosick, Emily A
2016-01-01
Studies of endurance running have typically involved elite athletes, small sample sizes and measures that require special expertise or equipment. We examined factors associated with race performance and explored methods for race time prediction using information routinely available to a recreational runner. An Internet survey was used to collect data from recreational endurance runners (N = 2303). The cohort was split 2:1 into a training set and validation set to create models to predict race time. Sex, age, BMI and race training were associated with mean race velocity for all race distances. The difference in velocity between males and females decreased with increasing distance. Tempo runs were more strongly associated with velocity for shorter distances, while typical weekly training mileage and interval training had similar associations with velocity for all race distances. The commonly used Riegel formula for race time prediction was well-calibrated for races up to a half-marathon, but dramatically underestimated marathon time, giving times at least 10 min too fast for half of runners. We built two models to predict marathon time. The mean squared error for Riegel was 381 compared to 228 (model based on one prior race) and 208 (model based on two prior races). Our findings can be used to inform race training and to provide more accurate race time predictions for better pacing.
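The Riegel formula referenced above is the standard power-law relation between race times at two distances, T2 = T1 * (D2/D1)^1.06; the half-marathon time below is an invented example.

```python
def riegel_time(t1_min, d1_km, d2_km, k=1.06):
    """Riegel's endurance formula: T2 = T1 * (D2/D1)**k."""
    return t1_min * (d2_km / d1_km) ** k

# Predict a marathon from a hypothetical 100-minute half-marathon.
half = 100.0
marathon = riegel_time(half, 21.0975, 42.195)
print(round(marathon, 1))  # 208.5 min; the study finds such predictions too fast
```

Because the marathon distance is exactly twice the half-marathon, the prediction is simply T1 * 2^1.06 ≈ 2.085 * T1, which is the optimistic scaling the study found to underestimate marathon times for recreational runners.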
Streamflow Prediction based on Chaos Theory
NASA Astrophysics Data System (ADS)
Li, X.; Wang, X.; Babovic, V. M.
2015-12-01
Chaos theory is a popular method in hydrologic time series prediction. The local model (LM) based on this theory uses time-delay embedding to reconstruct the phase-space diagram. The efficacy of this method depends on the embedding parameters, i.e. the embedding dimension, time lag, and number of nearest neighbors, so the optimal estimation of these parameters is critical to the application of the local model. However, these embedding parameters are conventionally estimated using Average Mutual Information (AMI) and False Nearest Neighbors (FNN) separately. This may lead to locally optimal parameters and thus limit the prediction accuracy. To address this limitation, this paper applies a local model combined with simulated annealing (SA) to find globally optimal embedding parameters. It is also compared with another global optimization approach, the Genetic Algorithm (GA). These proposed hybrid methods are applied to daily and monthly streamflow time series for examination. The results show that global optimization enables the local model to provide more accurate predictions than local optimization. The LM combined with SA is also advantageous in terms of computational efficiency. The proposed scheme can also be applied to other fields such as prediction of hydro-climatic time series, error correction, etc.
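The phase-space reconstruction underlying the local model can be sketched as follows. The embedding parameters, the zeroth-order (nearest-neighbor mean) predictor, and the noiseless periodic "streamflow" series are illustrative choices, not the paper's optimized values.

```python
import numpy as np

def embed(x, dim, tau):
    """Time-delay embedding of a scalar series into dim-dimensional phase space."""
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[i * tau : i * tau + n] for i in range(dim)])

def local_predict(x, dim, tau, k):
    """Predict x[t+1] as the mean next-step value of the k nearest
    phase-space neighbours of the current state (zeroth-order local model)."""
    vecs = embed(x, dim, tau)
    query, history = vecs[-1], vecs[:-1]
    successors = x[(dim - 1) * tau + 1:]        # next value after each history vector
    idx = np.argsort(np.linalg.norm(history - query, axis=1))[:k]
    return successors[idx].mean()

# Periodic toy "streamflow" series (period 30).
t = np.arange(300)
flow = 50 + 20 * np.sin(2 * np.pi * t / 30)
pred = local_predict(flow[:-1], dim=3, tau=2, k=5)
print(abs(pred - flow[-1]) < 1.0)               # True: the attractor is recovered
```

Finding the (dim, tau, k) that minimize out-of-sample error over such predictions is exactly the search space that SA or GA would explore in the paper's hybrid scheme.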
Modelling road accidents: An approach using structural time series
NASA Astrophysics Data System (ADS)
Junus, Noor Wahida Md; Ismail, Mohd Tahir
2014-09-01
In this paper, the trend of road accidents in Malaysia for the years 2001 until 2012 was modelled using a structural time series approach. The structural time series model was identified using a stepwise method, and the residuals for each model were tested. The best-fitted model was chosen based on the smallest Akaike Information Criterion (AIC) and prediction error variance. In order to check the quality of the model, a data validation procedure was performed by predicting the monthly number of road accidents for the year 2012. Results indicate that the best specification of the structural time series model to represent road accidents is the local level with a seasonal model.
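The simplest structural time series specification, the local level model, can be filtered with a scalar Kalman recursion. The variances and the simulated "accident count" series below are invented for illustration; the paper's selected specification also includes a seasonal component, which is omitted here.

```python
import numpy as np

def local_level_filter(y, sigma_eps2, sigma_eta2, a0=0.0, p0=1e6):
    """Kalman filter for the local level model:
       y_t = mu_t + eps_t,   mu_{t+1} = mu_t + eta_t."""
    a, p = a0, p0
    preds = []
    for yt in y:
        preds.append(a)                  # one-step-ahead prediction of y_t
        f = p + sigma_eps2               # prediction-error variance
        k = p / f                        # Kalman gain
        a = a + k * (yt - a)             # filtered level
        p = p * (1 - k) + sigma_eta2     # next-period state variance
    return np.array(preds)

# Simulated monthly series: slowly drifting level plus observation noise.
rng = np.random.default_rng(3)
mu = np.cumsum(rng.normal(0, 1.0, 120)) + 100
y = mu + rng.normal(0, 2.0, 120)

preds = local_level_filter(y, sigma_eps2=4.0, sigma_eta2=1.0)
mse = np.mean((y[12:] - preds[12:]) ** 2)
print(mse < 20.0)   # errors stay near the theoretical one-step variance (~6.5)
```

A seasonal structural model extends this state vector with periodic components, and model selection by AIC and prediction error variance, as in the paper, compares such alternative state specifications.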
Code for Calculating Regional Seismic Travel Time
DOE Office of Scientific and Technical Information (OSTI.GOV)
BALLARD, SANFORD; HIPP, JAMES; & BARKER, GLENN
The RSTT software computes predictions of the travel time of seismic energy traveling from a source to a receiver through 2.5D models of the seismic velocity distribution within the Earth. The two primary applications for the RSTT library are tomographic inversion studies and seismic event location calculations. In tomographic inversion studies, a seismologist begins with a number of source-receiver travel time observations and an initial starting model of the velocity distribution within the Earth. A forward travel time calculator, such as the RSTT library, is used to compute predictions of each observed travel time, and all of the residuals (observed minus predicted travel time) are calculated. The Earth model is then modified in some systematic way with the goal of minimizing the residuals. The Earth model obtained in this way is assumed to be a better model than the starting model if it has lower residuals. The other major application for the RSTT library is seismic event location. Given an Earth model, an initial estimate of the location of a seismic event, and some number of observations of seismic travel time thought to have originated from that event, location codes systematically modify the estimate of the event location with the goal of minimizing the difference between the observed and predicted travel times. The second application, seismic event location, is routinely implemented by the military as part of its effort to monitor the Earth for nuclear tests conducted by foreign countries.
Spatiotemporal Bayesian networks for malaria prediction.
Haddawy, Peter; Hasan, A H M Imrul; Kasantikul, Rangwan; Lawpoolsri, Saranath; Sa-Angchai, Patiwat; Kaewkungwal, Jaranit; Singhasivanon, Pratap
2018-01-01
Targeted intervention and resource allocation are essential for effective malaria control, particularly in remote areas, with predictive models providing important information for decision making. While a diversity of modeling techniques have been used to create predictive models of malaria, no work has made use of Bayesian networks. Bayes nets are attractive due to their ability to represent uncertainty, model time-lagged and nonlinear relations, and provide explanations. This paper explores the use of Bayesian networks to model malaria, demonstrating the approach by creating village-level models with weekly temporal resolution for Tha Song Yang district in northern Thailand. The networks are learned using data on cases and environmental covariates. Three types of networks are explored: networks for numeric prediction, networks for outbreak prediction, and networks that incorporate spatial autocorrelation. Evaluation of the numeric prediction network shows that the Bayes net has prediction accuracy in terms of mean absolute error of about 1.4 cases for 1-week prediction and 1.7 cases for 6-week prediction. The network for outbreak prediction has an ROC AUC above 0.9 for all prediction horizons. Comparison of the prediction accuracy of both Bayes nets against several traditional modeling approaches shows the Bayes nets to outperform the other models for longer-horizon prediction of high-incidence transmission. To model the spread of malaria over space, we elaborate the models with links between the village networks. This results in some very large models which would be far too laborious to build by hand, so we represent the models as collections of probability logic rules and automatically generate the networks. Evaluation of the models shows that the autocorrelation links significantly improve prediction accuracy for some villages in regions of high incidence.
We conclude that spatiotemporal Bayesian networks are a highly promising modeling alternative for prediction of malaria and other vector-borne diseases.
REFINEMENT OF A MODEL TO PREDICT THE PERMEATION OF PROTECTIVE CLOTHING MATERIALS
A prototype of a predictive model for estimating chemical permeation through protective clothing materials was refined and tested. The model applies Fickian diffusion theory and predicts permeation rates and cumulative permeation as a function of time for five materials: butyl rub...
NASA Astrophysics Data System (ADS)
Welling, D. T.; Manchester, W.; Savani, N.; Sokolov, I.; van der Holst, B.; Jin, M.; Toth, G.; Liemohn, M. W.; Gombosi, T. I.
2017-12-01
The future of space weather prediction depends on the community's ability to predict L1 values from observations of the solar atmosphere, which can yield hours of lead time. While both empirical and physics-based L1 forecast methods exist, it is not yet known whether this nascent capability can translate to skilled dB/dt forecasts at the Earth's surface. This paper shows results for the first forecast-quality, solar-atmosphere-to-Earth's-surface dB/dt predictions. Two methods are used to predict solar wind and IMF conditions at L1 for several real-world coronal mass ejection events. The first method is an empirical and observationally based system to estimate the plasma characteristics. The magnetic field predictions are based on the Bz4Cast system, which assumes that the CME has a cylindrical flux rope geometry locally around Earth's trajectory. The remaining plasma parameters of density, temperature and velocity are estimated from white-light coronagraphs via a variety of triangulation methods and forward modelling. The second is a first-principles-based approach that combines the Eruptive Event Generator using Gibson-Low configuration (EEGGL) model with the Alfven Wave Solar Model (AWSoM). EEGGL specifies parameters for the Gibson-Low flux rope such that it erupts, driving a CME in the coronal model that reproduces coronagraph observations and propagates to 1 AU. The resulting solar wind predictions are used to drive the operational Space Weather Modeling Framework (SWMF) for geospace. Following the configuration used by NOAA's Space Weather Prediction Center, this setup couples the BATS-R-US global magnetohydrodynamic model to the Rice Convection Model (RCM) ring current model and a height-integrated ionosphere electrodynamics model. The long-lead-time predictions of dB/dt are compared to model results that are driven by L1 solar wind observations. Both are compared to real-world observations from surface magnetometers at a variety of geomagnetic latitudes.
Metrics are calculated to examine how the simulated solar wind drivers impact forecast skill. These results illustrate the current state of long-lead-time forecasting and the promise of this technology for operational use.
Data Prediction for Public Events in Professional Domains Based on Improved RNN-LSTM
NASA Astrophysics Data System (ADS)
Song, Bonan; Fan, Chunxiao; Wu, Yuexin; Sun, Juanjuan
2018-02-01
Traditional data services predicting emergency or non-periodic events usually cannot generate satisfactory results or serve the intended prediction purpose. However, these events are influenced by external causes, which means that certain a priori information about them can generally be collected through the Internet. This paper studied the above problems and proposed an improved model: an LSTM (Long Short-Term Memory) dynamic prediction and a priori information sequence generation model that combines RNN-LSTM with a priori information about public events. In prediction tasks, the model is well suited to determining trends, and its accuracy is also validated. The model delivers better performance and prediction results than the previous one. Using a priori information increases the accuracy of prediction; LSTM can better adapt to changes in a time sequence; and LSTM can be widely applied to the same type of prediction task and to other prediction tasks related to time sequences.
NASA Astrophysics Data System (ADS)
de Montera, L.; Mallet, C.; Barthès, L.; Golé, P.
2008-08-01
This paper shows how nonlinear models originally developed in the finance field can be used to predict rain attenuation level and volatility in Earth-to-Satellite links operating at the Extremely High Frequencies band (EHF, 20-50 GHz). A common approach to solving this problem is to consider that the prediction error corresponds only to scintillations, whose variance is assumed to be constant. Nevertheless, this assumption does not seem to be realistic because of the heteroscedasticity of error time series: the variance of the prediction error is found to be time-varying and has to be modeled. Since rain attenuation time series behave similarly to certain stocks or foreign exchange rates, a switching ARIMA/GARCH model was implemented. The originality of this model is that not only the attenuation level, but also the error conditional distribution are predicted. It allows an accurate upper-bound of the future attenuation to be estimated in real time that minimizes the cost of Fade Mitigation Techniques (FMT) and therefore enables the communication system to reach a high percentage of availability. The performance of the switching ARIMA/GARCH model was estimated using a measurement database of the Olympus satellite 20/30 GHz beacons and this model is shown to outperform significantly other existing models. The model also includes frequency scaling from the downlink frequency to the uplink frequency. The attenuation effects (gases, clouds and rain) are first separated with a neural network and then scaled using specific scaling factors. As to the resulting uplink prediction error, the error contribution of the frequency scaling step is shown to be larger than that of the downlink prediction, indicating that further study should focus on improving the accuracy of the scaling factor.
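The GARCH variance recursion at the core of a switching ARIMA/GARCH model can be sketched as follows. The GARCH(1,1) parameters and the simulated error series are illustrative assumptions, not values fitted to the Olympus beacon data.

```python
import numpy as np

def garch11_forecast(eps, omega, alpha, beta):
    """One-step-ahead conditional variance of a GARCH(1,1) process:
       sigma2_{t+1} = omega + alpha * eps_t**2 + beta * sigma2_t."""
    sigma2 = np.var(eps)                 # initialise at the sample variance
    for e in eps:
        sigma2 = omega + alpha * e**2 + beta * sigma2
    return sigma2

# Simulate a prediction-error series with volatility clustering (invented parameters).
rng = np.random.default_rng(4)
n, omega, alpha, beta = 2000, 0.1, 0.1, 0.8
s2, eps = omega / (1 - alpha - beta), []
for _ in range(n):
    e = rng.normal(0, np.sqrt(s2))
    eps.append(e)
    s2 = omega + alpha * e**2 + beta * s2
eps = np.array(eps)

sigma2_next = garch11_forecast(eps, omega, alpha, beta)
upper_bound = 1.64 * np.sqrt(sigma2_next)   # one-sided ~95% error bound
print(0 < sigma2_next < 10)
```

Predicting the conditional variance, not just the level, is what allows a time-varying upper bound on future attenuation like `upper_bound` to be set tightly, which is the quantity a fade mitigation controller actually needs.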
Glassman, Patrick M; Chen, Yang; Balthasar, Joseph P
2015-10-01
Preclinical assessment of monoclonal antibody (mAb) disposition during drug development often includes investigations in non-human primate models. In many cases, mAb exhibit non-linear disposition that relates to mAb-target binding [i.e., target-mediated disposition (TMD)]. The goal of this work was to develop a physiologically-based pharmacokinetic (PBPK) model to predict non-linear mAb disposition in plasma and in tissues in monkeys. Physiological parameters for monkeys were collected from several sources, and plasma data for several mAbs associated with linear pharmacokinetics were digitized from prior literature reports. The digitized data displayed great variability; therefore, parameters describing inter-antibody variability in the rates of pinocytosis and convection were estimated. For prediction of the disposition of individual antibodies, we incorporated tissue concentrations of target proteins, where concentrations were estimated based on categorical immunohistochemistry scores, and with assumed localization of target within the interstitial space of each organ. Kinetics of target-mAb binding and target turnover, in the presence or absence of mAb, were implemented. The model was then employed to predict concentration versus time data, via Monte Carlo simulation, for two mAb that have been shown to exhibit TMD (2F8 and tocilizumab). Model predictions, performed a priori with no parameter fitting, were found to provide good prediction of dose-dependencies in plasma clearance, the areas under plasma concentration versus time curves, and the time-course of plasma concentration data. This PBPK model may find utility in predicting plasma and tissue concentration versus time data and, potentially, the time-course of receptor occupancy (i.e., mAb-target binding) to support the design and interpretation of preclinical pharmacokinetic-pharmacodynamic investigations in non-human primates.
Ye, Min; Nagar, Swati; Korzekwa, Ken
2016-04-01
Predicting the pharmacokinetics of highly protein-bound drugs is difficult. Also, since historical plasma protein binding data were often collected using unbuffered plasma, the resulting inaccurate binding data could contribute to incorrect predictions. This study uses a generic physiologically based pharmacokinetic (PBPK) model to predict human plasma concentration-time profiles for 22 highly protein-bound drugs. Tissue distribution was estimated from in vitro drug lipophilicity data, plasma protein binding and the blood:plasma ratio. Clearance was predicted with a well-stirred liver model. Underestimated hepatic clearance for acidic and neutral compounds was corrected by an empirical scaling factor. Predicted values (pharmacokinetic parameters, plasma concentration-time profile) were compared with observed data to evaluate the model accuracy. Of the 22 drugs, less than a 2-fold error was obtained for the terminal elimination half-life (t1/2, 100% of drugs), peak plasma concentration (Cmax, 100%), area under the plasma concentration-time curve (AUC0-t, 95.4%), clearance (CLh, 95.4%), mean residence time (MRT, 95.4%) and steady-state volume (Vss, 90.9%). The impact of fup errors on CLh and Vss prediction was evaluated. Errors in fup resulted in proportional errors in clearance prediction for low-clearance compounds, and in Vss prediction for high-volume neutral drugs. For high-volume basic drugs, errors in fup did not propagate to errors in Vss prediction. This is due to the cancellation of errors in the calculations for tissue partitioning of basic drugs. Overall, plasma profiles were well simulated with the present PBPK model. Copyright © 2016 John Wiley & Sons, Ltd.
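The well-stirred liver model used for clearance prediction has a standard closed form: hepatic clearance is hepatic blood flow times extraction, CLh = Qh * fu,b * CLint / (Qh + fu,b * CLint). The parameter values below are illustrative, not taken from the study.

```python
def well_stirred_clh(q_h, fu_p, cl_int, r_b=1.0):
    """Well-stirred liver model: hepatic clearance from hepatic blood flow q_h,
    unbound fraction in plasma fu_p, intrinsic clearance cl_int, and
    blood:plasma ratio r_b (all in consistent units, e.g. L/h)."""
    fu_b = fu_p / r_b                    # unbound fraction referenced to blood
    return q_h * fu_b * cl_int / (q_h + fu_b * cl_int)

# Hypothetical highly bound drug: fu_p = 0.02, with typical human
# hepatic blood flow of ~90 L/h.
clh = well_stirred_clh(q_h=90.0, fu_p=0.02, cl_int=500.0)
print(round(clh, 2))  # 9.0 L/h: binding restricts hepatic extraction
```

Because CLh scales nearly linearly with fu,b when fu,b * CLint << Qh, an error in the measured unbound fraction propagates almost proportionally into the clearance prediction for low-clearance compounds, which is exactly the sensitivity the study reports.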
Numerical and Qualitative Contrasts of Two Statistical Models ...
Two statistical approaches, weighted regression on time, discharge, and season and generalized additive models, have recently been used to evaluate water quality trends in estuaries. Both models have been used in similar contexts despite differences in statistical foundations and products. This study provided an empirical and qualitative comparison of both models using 29 years of data for two discrete time series of chlorophyll-a (chl-a) in the Patuxent River estuary. Empirical descriptions of each model were based on predictive performance against the observed data, ability to reproduce flow-normalized trends with simulated data, and comparisons of performance with validation datasets. Between-model differences were apparent but minor and both models had comparable abilities to remove flow effects from simulated time series. Both models similarly predicted observations for missing data with different characteristics. Trends from each model revealed distinct mainstem influences of the Chesapeake Bay with both models predicting a roughly 65% increase in chl-a over time in the lower estuary, whereas flow-normalized predictions for the upper estuary showed a more dynamic pattern, with a nearly 100% increase in chl-a in the last 10 years. Qualitative comparisons highlighted important differences in the statistical structure, available products, and characteristics of the data and desired analysis. This manuscript describes a quantitative comparison of two recently-
Predictability of the Indian Ocean Dipole in the coupled models
NASA Astrophysics Data System (ADS)
Liu, Huafeng; Tang, Youmin; Chen, Dake; Lian, Tao
2017-03-01
In this study, the Indian Ocean Dipole (IOD) predictability, measured by the Dipole Mode Index (DMI), is comprehensively examined at the seasonal time scale, including its actual prediction skill and potential predictability, using the ENSEMBLES multi-model ensembles and the recently developed information-based theoretical framework of predictability. All model predictions were found to have useful skill, conventionally defined as an anomaly correlation coefficient larger than 0.5, only at leads of around 2-3 months. This is mainly because false alarms become more frequent in the predictions as lead time increases. The DMI predictability has significant seasonal variation, and predictions whose target seasons are boreal summer (JJA) and autumn (SON) are more reliable than those for other seasons. All of the models fail to predict the IOD onset before May and suffer from the winter (DJF) predictability barrier. The potential predictability study indicates that, with improvements in model development and initialization, the prediction of IOD onset is likely to improve, but the winter barrier cannot be overcome. The IOD predictability also has decadal variation, with high skill during the 1960s and the early 1990s, and low skill during the early 1970s and early 1980s, which is very consistent with the potential predictability. The main factors controlling the IOD predictability, including its seasonal and decadal variations, are also analyzed in this study.
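The "useful skill" threshold above rests on the anomaly correlation coefficient (ACC); a minimal sketch of that metric follows, with purely illustrative anomaly pairs rather than DMI data.

```python
import math

# Anomaly correlation coefficient (ACC): correlation between forecast and
# observed anomalies (departures from climatology). Skill is conventionally
# called "useful" when ACC > 0.5. The anomaly values below are illustrative.

def acc(forecast_anom, observed_anom):
    num = sum(f * o for f, o in zip(forecast_anom, observed_anom))
    den = math.sqrt(sum(f * f for f in forecast_anom)
                    * sum(o * o for o in observed_anom))
    return num / den

fc = [0.5, -0.3, 0.8, -0.6, 0.2]   # forecast anomalies (illustrative)
ob = [0.4, -0.1, 0.9, -0.5, -0.1]  # observed anomalies (illustrative)
skill = acc(fc, ob)
```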
NASA Astrophysics Data System (ADS)
Passarelli, Luigi; Sanso, Bruno; Sandri, Laura; Marzocchi, Warner
2010-05-01
One of the main goals of volcanology is to forecast volcanic eruptions. A useful forecast must be made before the onset of an eruption, using the data available at that time, with the aim of mitigating the risk associated with the event. In other words, models implemented for forecasting must provide "forward" forecasts rather than merely fitting the available data retrospectively. In this perspective, the main idea of the present model is to forecast the next volcanic eruption after the end of the last one, using only the data available at that time. We focus our attention on volcanoes with an open conduit regime and high eruption frequency. We assume a generalization of the classical time-predictable model to describe the eruptive behavior of open conduit volcanoes, and we use a Bayesian hierarchical model to make probabilistic forecasts. We apply the model to Kilauea volcano eruptive data and Mt. Etna volcano flank eruption data. The aims of this model are: 1) to test whether or not the Kilauea and Mt. Etna volcanoes follow a time-predictable behavior; 2) to discuss the volcanological implications of the inferred time-predictable model parameters; 3) to compare the forecast capabilities of this model with those of other models in the literature. The results obtained using an MCMC sampling algorithm show that both volcanoes follow a time-predictable behavior. The inferred values of the time-predictable model parameters suggest that the amount of erupted volume could change the dynamics of the magma chamber refilling process during the repose period. The probability gain of this model compared with other models in the literature is appreciably greater than zero. This means that our model produces better forecasts than previous models and could be used in a probabilistic volcanic hazard assessment scheme.
In this perspective, the eruption probabilities given by our model for Mt. Etna flank eruptions are published on an internet website and updated after any change in the eruptive activity.
Modeling and prediction of relaxation of polar order in high-activity nonlinear optical polymers
NASA Astrophysics Data System (ADS)
Guenthner, Andrew J.; Lindsay, Geoffrey A.; Wright, Michael E.; Fallis, Stephen; Ashley, Paul R.; Sanghadasa, Mohan
2007-09-01
Mach-Zehnder optical modulators were fabricated using the CLD and FTC chromophores in polymer-on-silicon optical waveguides. Up to 17 months of oven-ageing stability are reported for the poled polymer films. Modulators containing an FTC-polyimide had the best overall ageing performance. To model and extrapolate the ageing data, a relaxation correlation function attributed to A. K. Jonscher was compared with the well-established stretched-exponential correlation function. Both models gave a good fit to the data. The Jonscher model predicted a slower relaxation rate at longer times. Analysis showed that collecting data for a longer period relative to the relaxation time was more important for generating useful predictions than the precision with which individual model parameters could be estimated. From a practical standpoint, time-temperature superposition must therefore be assumed in order to generate meaningful predictions. For this purpose, Arrhenius-type expressions were found to relate the model time constants to the ageing temperatures.
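The stretched-exponential relaxation and Arrhenius time-constant scaling described above can be sketched as follows; the activation energy, prefactor, and stretching exponent are illustrative assumptions, not the paper's fitted values.

```python
import math

# Stretched-exponential (KWW) decay of poled polar order,
#   phi(t) = exp(-(t/tau)**beta),
# with an Arrhenius temperature dependence of the time constant,
#   tau(T) = tau0 * exp(Ea / (R*T)).
# All parameter values below are illustrative assumptions.

R = 8.314  # gas constant, J/(mol*K)

def kww(t, tau, beta):
    return math.exp(-((t / tau) ** beta))

def tau_arrhenius(temp_k, tau0, ea):
    return tau0 * math.exp(ea / (R * temp_k))

tau_85c = tau_arrhenius(358.15, 1e-10, 1.1e5)  # accelerated-ageing temperature
tau_25c = tau_arrhenius(298.15, 1e-10, 1.1e5)  # use temperature
# Fraction of polar order retained after ~17 months at use temperature:
retained = kww(17 * 30 * 24 * 3600.0, tau_25c, beta=0.35)
```

Time-temperature superposition enters through the Arrhenius link: relaxation measured quickly at high temperature is mapped to much longer times at the use temperature.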
Liang, Yong; Chai, Hua; Liu, Xiao-Ying; Xu, Zong-Ben; Zhang, Hai; Leung, Kwong-Sak
2016-03-01
One of the most important objectives of clinical cancer research is to diagnose cancer more accurately based on patients' gene expression profiles. Both the Cox proportional hazards model (Cox) and the accelerated failure time model (AFT) have been widely adopted for high-risk/low-risk classification and survival time prediction in patients' clinical treatment. Nevertheless, two main dilemmas limit the accuracy of these prediction methods. One is that small sample sizes and censored data remain a bottleneck for training robust and accurate Cox classification models. In addition, tumours with similar phenotypes and prognoses can be completely different diseases at the genotype and molecular level, so the utility of the AFT model for survival time prediction is limited when such biological differences have not been previously identified. To overcome these two dilemmas, we propose a novel semi-supervised learning method based on the Cox and AFT models to accurately predict treatment risk and patient survival time. Moreover, we adopt the efficient L1/2 regularization approach in the semi-supervised learning method to select the relevant genes that are significantly associated with the disease. The results of the simulation experiments show that the semi-supervised learning model can significantly improve the predictive performance of the Cox and AFT models in survival analysis. The proposed procedures have been successfully applied to four real microarray gene expression and artificial evaluation datasets. The advantages of our proposed semi-supervised learning method include: 1) a significant increase in the available training samples from censored data; 2) high capability for identifying the survival risk classes of patients with the Cox model; 3) high predictive accuracy for patients' survival time with the AFT model; 4) strong capability for relevant biomarker selection.
Consequently, our proposed semi-supervised learning model is one more appropriate tool for survival analysis in clinical cancer research.
The Satellite Clock Bias Prediction Method Based on Takagi-Sugeno Fuzzy Neural Network
NASA Astrophysics Data System (ADS)
Cai, C. L.; Yu, H. G.; Wei, Z. C.; Pan, J. D.
2017-05-01
The continuous improvement of Satellite Clock Bias (SCB) prediction accuracy is a key problem in precision navigation. In order to improve the precision of SCB prediction and better reflect the change characteristics of SCB, this paper proposes an SCB prediction method based on a Takagi-Sugeno fuzzy neural network. Firstly, the SCB values are pre-processed based on their characteristics. Then, an accurate Takagi-Sugeno fuzzy neural network model is established on the preprocessed data to predict SCB. Using precise SCB data with different sampling intervals provided by the IGS (International GNSS Service), short-term prediction experiments are carried out, and the results are compared with the ARIMA (Auto-Regressive Integrated Moving Average) model, the GM(1,1) model, and a quadratic polynomial model. The results show that the Takagi-Sugeno fuzzy neural network model is feasible and effective for short-term SCB prediction and performs well for different types of clocks. Its prediction results are clearly better than those of the conventional methods.
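Of the baselines listed above, the quadratic polynomial model is simple enough to sketch in full: fit bias(t) = a0 + a1·t + a2·t² to past samples by least squares, then extrapolate. The sample series below is a synthetic illustration, not IGS data.

```python
# Quadratic-polynomial clock-bias extrapolation (a baseline model from the
# paper), implemented with the 3x3 normal equations and Gaussian elimination.
# The clock-bias series below is synthetic and exactly quadratic, so the fit
# recovers the generating coefficients.

def quad_fit(ts, ys):
    """Least-squares fit of ys = a0 + a1*t + a2*t^2; returns (a0, a1, a2)."""
    s = [sum(t ** k for t in ts) for k in range(5)]          # sums of t^0..t^4
    b = [sum(y * t ** k for t, y in zip(ts, ys)) for k in range(3)]
    A = [[s[0], s[1], s[2]], [s[1], s[2], s[3]], [s[2], s[3], s[4]]]
    for col in range(3):                                      # elimination
        piv = max(range(col, 3), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, 3):
            f = A[r][col] / A[col][col]
            for c in range(col, 3):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    coeff = [0.0, 0.0, 0.0]
    for r in (2, 1, 0):                                       # back-substitution
        coeff[r] = (b[r] - sum(A[r][c] * coeff[c]
                               for c in range(r + 1, 3))) / A[r][r]
    return coeff

ts = list(range(8))                                  # past epochs
ys = [10.0 + 0.5 * t + 0.02 * t * t for t in ts]     # synthetic bias (ns)
a0, a1, a2 = quad_fit(ts, ys)
pred_12 = a0 + a1 * 12 + a2 * 144                    # extrapolate to epoch 12
```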
Multi-time-scale heat transfer modeling of turbid tissues exposed to short-pulsed irradiations.
Kim, Kyunghan; Guo, Zhixiong
2007-05-01
A combined hyperbolic radiation and conduction heat transfer model is developed to simulate multi-time-scale heat transfer in turbid tissues exposed to short-pulsed irradiations. The initial temperature response of a tissue to an ultrashort pulse irradiation is analyzed by the volume-average method in combination with the transient discrete ordinates method for modeling the ultrafast radiation heat transfer. This response is found to reach pseudo steady state within 1 ns for the considered tissues. The single-pulse result is then used to obtain the temperature response to pulse-train irradiation at the microsecond/millisecond time scales. After that, the temperature field is predicted by the hyperbolic heat conduction model, which is solved by MacCormack's scheme with error-term correction. Finally, the hyperbolic conduction model is compared with the traditional parabolic heat diffusion model. The maximum local temperatures are larger in the hyperbolic prediction than in the parabolic prediction; in the modeled dermis tissue, a 7% non-dimensional temperature increase is found. After about 10 thermal relaxation times, thermal waves fade away and the predictions of the hyperbolic and parabolic models are consistent.
On the use and the performance of software reliability growth models
NASA Technical Reports Server (NTRS)
Keiller, Peter A.; Miller, Douglas R.
1991-01-01
We address the problem of predicting future failures for a piece of software. The number of failures occurring during a finite future time interval is predicted from the number of failures observed during an initial period of usage, using software reliability growth models. Two different methods for using the models are considered: straightforward use of individual models, and dynamic selection among models based on goodness-of-fit and quality-of-prediction criteria. Performance is judged by the error of the predicted number of failures over future finite time intervals, relative to the number of failures eventually observed during those intervals. Six individual models and eight dynamic selection schemes are evaluated, based on their performance on twenty data sets. Many open questions remain regarding the use and the performance of software reliability growth models.
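The prediction-quality criterion above can be sketched with one representative growth model; the abstract does not name the six models evaluated, so the Goel-Okumoto NHPP form and all parameter values here are illustrative assumptions.

```python
import math

# Relative error of predicted failures over a future interval, the criterion
# used in the paper, illustrated with the Goel-Okumoto NHPP growth model
# (a stand-in: the paper's actual models are not named in the abstract).
# Parameter values are illustrative assumptions.

def go_mean(t, a, b):
    """Goel-Okumoto expected cumulative failures by time t: a*(1 - e^(-b*t))."""
    return a * (1.0 - math.exp(-b * t))

def predicted_interval_failures(t1, t2, a, b):
    """Predicted number of failures in the future interval (t1, t2]."""
    return go_mean(t2, a, b) - go_mean(t1, a, b)

def relative_error(predicted, observed):
    return (predicted - observed) / observed

pred = predicted_interval_failures(100.0, 200.0, a=120.0, b=0.01)
rel = relative_error(pred, observed=30.0)   # 30 failures eventually observed
```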
Predicting the Risk of Attrition for Undergraduate Students with Time Based Modelling
ERIC Educational Resources Information Center
Chai, Kevin E. K.; Gibson, David
2015-01-01
Improving student retention is an important and challenging problem for universities. This paper reports on the development of a student attrition model for predicting which first year students are most at-risk of leaving at various points in time during their first semester of study. The objective of developing such a model is to assist…
Early prediction of extreme stratospheric polar vortex states based on causal precursors
NASA Astrophysics Data System (ADS)
Kretschmer, Marlene; Runge, Jakob; Coumou, Dim
2017-08-01
Variability in the stratospheric polar vortex (SPV) can influence the tropospheric circulation and thereby winter weather. Early predictions of extreme SPV states are thus important to improve forecasts of winter weather, including cold spells. However, dynamical models are usually restricted in lead time because they poorly capture low-frequency processes. Empirical models often suffer from overfitting problems, as the relevant physical processes and time lags are often not well understood. Here we introduce a novel empirical prediction method by uniting a response-guided community detection scheme with a causal discovery algorithm. This way, we objectively identify causal precursors of the SPV at subseasonal lead times and find them to be in good agreement with known physical drivers. A linear regression prediction model based on the causal precursors can explain most SPV variability (r2 = 0.58), and our scheme correctly predicts 58% (46%) of extremely weak SPV states for lead times of 1-15 (16-30) days with false-alarm rates of only approximately 5%. Our method can be applied to any variable relevant for (sub)seasonal weather forecasts and could thus help improve long-lead predictions.
NASA Astrophysics Data System (ADS)
Savani, N. P.; Vourlidas, A.; Richardson, I. G.; Szabo, A.; Thompson, B. J.; Pulkkinen, A.; Mays, M. L.; Nieves-Chinchilla, T.; Bothmer, V.
2017-02-01
This is a companion to Savani et al. (2015), which discussed how a first-order prediction of the internal magnetic field of a coronal mass ejection (CME) may be made from observations of its initial state at the Sun for space weather forecasting purposes (the Bothmer-Schwenn scheme (BSS) model). For eight CME events, we investigate how uncertainties in their predicted magnetic structure influence predictions of geomagnetic activity. We use an empirical relationship between the solar wind plasma drivers and the Kp index, together with the inferred magnetic vectors, to predict the time variation of Kp (Kp(BSS)). We find that a 2σ uncertainty range on the magnetic field magnitude (|B|) provides a practical and convenient solution for predicting the uncertainty in geomagnetic storm strength. We also find the estimated CME velocity is a major source of error in the predicted maximum Kp. The time variation of Kp(BSS) is important for predicting periods of enhanced and maximum geomagnetic activity, driven by southerly directed magnetic fields, and periods of lower activity, driven by northerly directed magnetic fields. We compare the skill score of our model to a number of other forecasting models, including the NOAA/Space Weather Prediction Center (SWPC) and Community Coordinated Modeling Center (CCMC)/SWRC estimates. The BSS model was the least biased prediction model, while the other models predominantly tended to significantly overforecast. The true skill score of the BSS prediction model (TSS = 0.43 ± 0.06) exceeds the results of two baseline models and the NOAA/SWPC forecast. The BSS model performed comparably to the CCMC/SWRC predictions while demonstrating a lower uncertainty.
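The true skill score quoted above is computed from a 2x2 forecast contingency table; a minimal sketch follows, with illustrative counts rather than the study's event tallies.

```python
# True skill statistic (TSS), a.k.a. Hanssen-Kuipers score: from hits (H),
# misses (M), false alarms (F), and correct rejections (C),
#   TSS = H/(H+M) - F/(F+C),
# i.e. probability of detection minus probability of false detection.
# The counts below are illustrative, not from the study.

def true_skill_score(hits, misses, false_alarms, correct_rejections):
    pod = hits / (hits + misses)                             # detection rate
    pofd = false_alarms / (false_alarms + correct_rejections)
    return pod - pofd

tss = true_skill_score(hits=12, misses=8, false_alarms=3, correct_rejections=27)
```

TSS is 1 for a perfect forecast, 0 for no skill, and negative when the forecast is worse than random, which makes it convenient for comparing models with different event base rates.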
Nonlinear modeling of chaotic time series: Theory and applications
NASA Astrophysics Data System (ADS)
Casdagli, M.; Eubank, S.; Farmer, J. D.; Gibson, J.; Desjardins, D.; Hunter, N.; Theiler, J.
We review recent developments in the modeling and prediction of nonlinear time series. In some cases, apparent randomness in time series may be due to chaotic behavior of a nonlinear but deterministic system. In such cases, it is possible to exploit the determinism to make short term forecasts that are much more accurate than one could make from a linear stochastic model. This is done by first reconstructing a state space, and then using nonlinear function approximation methods to create a dynamical model. Nonlinear models are valuable not only as short term forecasters, but also as diagnostic tools for identifying and quantifying low-dimensional chaotic behavior. During the past few years, methods for nonlinear modeling have developed rapidly, and have already led to several applications where nonlinear models motivated by chaotic dynamics provide superior predictions to linear models. These applications include prediction of fluid flows, sunspots, mechanical vibrations, ice ages, measles epidemics, and human speech.
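The two-step scheme described in the review (reconstruct a state space, then approximate the dynamics) can be sketched with a delay embedding and a nearest-neighbour forecast; the logistic map supplies a synthetic chaotic series, and the embedding dimension and delay are illustrative choices.

```python
# State-space reconstruction by delay embedding plus a nearest-neighbour
# forecast: find the past state closest to the current one and predict its
# successor. Data are from the logistic map (synthetic, chaotic); embedding
# dimension/delay below are illustrative choices.

def logistic_series(n, x0=0.4, r=3.9):
    xs = [x0]
    for _ in range(n - 1):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

def embed(series, dim, delay):
    """Delay vectors x_t = (s_t, s_{t-delay}, ..., s_{t-(dim-1)*delay})."""
    start = (dim - 1) * delay
    return [tuple(series[t - k * delay] for k in range(dim))
            for t in range(start, len(series))]

def nn_forecast(series, dim=2, delay=1):
    """Predict the next value as the successor of the nearest past state."""
    vecs = embed(series, dim, delay)
    query = vecs[-1]
    # Search all earlier states (each still has a successor in the series).
    best = min(range(len(vecs) - 1),
               key=lambda i: sum((a - b) ** 2 for a, b in zip(vecs[i], query)))
    return series[best + (dim - 1) * delay + 1]

xs = logistic_series(500)
pred = nn_forecast(xs[:-1])     # forecast the held-out final value
err = abs(pred - xs[-1])
```

This exploits determinism directly: nearby states on the reconstructed attractor have nearby futures over short horizons, which is why such forecasts can beat linear stochastic models on chaotic data.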
NASA Astrophysics Data System (ADS)
Xu, Xijin; Tang, Qian; Xia, Haiyue; Zhang, Yuling; Li, Weiqiu; Huo, Xia
2016-04-01
Chaotic time series prediction based on nonlinear systems has shown superior performance in the prediction field. We studied prenatal exposure to polychlorinated biphenyls (PCBs) in umbilical cord blood in an electronic waste (e-waste) contaminated area by chaotic time series prediction, using the least squares self-exciting threshold autoregressive (SETAR) model. The specific prediction steps based on the proposed methods for prenatal PCB exposure were put forward, and the scheme's validity was further verified by numerical simulation experiments. The results show that: 1) seven PCB congeners correlate negatively with five different indices of birth status: newborn weight, height, gestational age, Apgar score and anogenital distance; 2) the prenatally PCB-exposed group is at greater risk compared with the reference group; 3) PCBs increasingly accumulate with time in newborns; and 4) the possibility of these newborns suffering from related diseases in the future is greater. The numerical simulation results demonstrate the feasibility of applying mathematical models in the environmental toxicology field.
A state-based probabilistic model for tumor respiratory motion prediction
NASA Astrophysics Data System (ADS)
Kalet, Alan; Sandison, George; Wu, Huanmei; Schmitz, Ruth
2010-12-01
This work proposes a new probabilistic mathematical model for predicting tumor motion and position based on a finite state representation using the natural breathing states of exhale, inhale and end of exhale. Tumor motion was broken down into linear breathing states and sequences of states. Breathing state sequences and the observables representing those sequences were analyzed using a hidden Markov model (HMM) to predict the future sequences and new observables. Velocities and other parameters were clustered using a k-means clustering algorithm to associate each state with a set of observables, such that a prediction of state also enables a prediction of tumor velocity. A time average model with predictions based on average past state lengths was also computed. State sequences which are known a priori to fit the data were fed into the HMM algorithm to set a theoretical limit on the predictive power of the model. The effectiveness of the presented probabilistic model has been evaluated for gated radiation therapy based on previously tracked tumor motion in four lung cancer patients. Predicted positions are compared with actual positions in terms of overall RMS error. Various system delays, ranging from 33 to 1000 ms, were tested. Previous studies have shown duty cycles for latencies of 33 and 200 ms at around 90% and 80%, respectively, for linear, no-prediction, Kalman filter and ANN methods as averaged over multiple patients. At 1000 ms, the previously reported duty cycles range from approximately 62% (ANN) down to 34% (no prediction). The average duty cycle for the HMM method was found to be 100% and 91 ± 3% for 33 and 200 ms latency, and around 40% for 1000 ms latency in three out of four breathing motion traces. RMS errors were found to be lower than those of the linear and no-prediction methods at latencies of 1000 ms.
The results show that for system latencies longer than 400 ms, the time average HMM prediction outperforms linear, no prediction, and the more general HMM-type predictive models. RMS errors for the time average model approach the theoretical limit of the HMM, and predicted state sequences are well correlated with sequences known to fit the data.
Kusev, Petko; van Schaik, Paul; Tsaneva-Atanasova, Krasimira; Juliusson, Asgeir; Chater, Nick
2018-01-01
When attempting to predict future events, people commonly rely on historical data. One psychological characteristic of judgmental forecasting of time series, established by research, is that people tend to underestimate future values for upward trends and overestimate them for downward ones, so-called trend-damping (modeled by anchoring on, and insufficient adjustment from, the average of recent time series values). Events in a time series can be experienced sequentially (dynamic mode), or they can be viewed retrospectively and simultaneously (static mode), rather than experienced individually in real time. In one experiment, we studied the influence of presentation mode (dynamic and static) on two sorts of judgment: (a) predictions of the next event (forecast) and (b) estimation of the average value of all the events in the presented series (average estimation). Participants' responses in dynamic mode were anchored on more recent events than in static mode for all types of judgment, but with different consequences: dynamic presentation improved prediction accuracy, but not estimation. These results are not anticipated by existing theoretical accounts; we develop and present an agent-based model, the adaptive anchoring model (ADAM), to account for the difference between processing sequences of dynamically and statically presented stimuli (visually presented data). ADAM captures how variation in presentation mode produces variation in responses (and the accuracy of those responses) in both forecasting and judgment tasks. ADAM's predictions for the forecasting and judgment tasks fit the response data better than those of a linear-regression time series model. Moreover, ADAM outperformed autoregressive-integrated-moving-average (ARIMA) and exponential-smoothing models, while neither of these models accounts for people's responses on the average estimation task. Copyright © 2017 The Authors.
Cognitive Science published by Wiley Periodicals, Inc. on behalf of Cognitive Science Society.
van Mantgem, P.J.; Stephenson, N.L.
2005-01-01
1 We assess the use of simple, size-based matrix population models for projecting population trends for six coniferous tree species in the Sierra Nevada, California. We used demographic data from 16 673 trees in 15 permanent plots to create 17 separate time-invariant, density-independent population projection models, and determined differences between trends projected from initial surveys with a 5-year interval and observed data during two subsequent 5-year time steps. 2 We detected departures from the assumptions of the matrix modelling approach in terms of strong growth autocorrelations. We also found evidence of observation errors for measurements of tree growth and, to a more limited degree, recruitment. Loglinear analysis provided evidence of significant temporal variation in demographic rates for only two of the 17 populations. 3 Total population sizes were strongly predicted by model projections, although population dynamics were dominated by carryover from the previous 5-year time step (i.e. there were few cases of recruitment or death). Fractional changes to overall population sizes were less well predicted. Compared with a null model and a simple demographic model lacking size structure, matrix model projections were better able to predict total population sizes, although the differences were not statistically significant. Matrix model projections were also able to predict short-term rates of survival, growth and recruitment. Mortality frequencies were not well predicted. 4 Our results suggest that simple size-structured models can accurately project future short-term changes for some tree populations. However, not all populations were well predicted and these simple models would probably become more inaccurate over longer projection intervals. The predictive ability of these models would also be limited by disturbance or other events that destabilize demographic rates. © 2005 British Ecological Society.
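A size-based matrix projection of the kind assessed above amounts to repeated multiplication n(t+1) = A n(t); the 3-stage matrix and starting vector below are illustrative, not the Sierra Nevada data.

```python
# Size-based (Lefkovitch) matrix population projection: n(t+1) = A @ n(t),
# where A combines stasis (diagonal), growth to the next size class
# (subdiagonal), and recruitment from reproductive classes (top row).
# The matrix and starting abundances are illustrative assumptions.

A = [
    [0.90, 0.00, 0.40],   # small: stasis + recruitment from large trees
    [0.05, 0.93, 0.00],   # medium: growth from small + stasis
    [0.00, 0.04, 0.97],   # large: growth from medium + stasis
]

def project(matrix, n):
    """One projection step: matrix-vector product."""
    return [sum(matrix[i][j] * n[j] for j in range(len(n)))
            for i in range(len(matrix))]

n0 = [500.0, 300.0, 200.0]    # small, medium, large trees
n5 = n0
for _ in range(5):            # five time steps
    n5 = project(A, n5)

total0, total5 = sum(n0), sum(n5)
```

The study's caveats map directly onto this structure: the model is time-invariant (A fixed) and density-independent, so growth autocorrelation or disturbance violates its assumptions.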
2014-01-01
Response surface methodology using a face-centered cube design was used to describe and predict spore inactivation of Bacillus anthracis ∆Sterne and Bacillus thuringiensis Al Hakam spores after exposure of six spore-contaminated materials to hot, humid air. For each strain/material pair, an attempt was made to fit a first or second order model. All three independent predictor variables (temperature, relative humidity, and time) were significant in the models except that time was not significant for B. thuringiensis Al Hakam on nylon. Modeling was unsuccessful for wiring insulation and wet spores because there was complete spore inactivation in the majority of the experimental space. In cases where a predictive equation could be fit, response surface plots with time set to four days were generated. The survival of highly purified Bacillus spores can be predicted for most materials tested when given the settings for temperature, relative humidity, and time. These predictions were cross-checked with spore inactivation measurements. PMID:24949256
Financial Time Series Prediction Using Elman Recurrent Random Neural Networks
Wang, Jie; Wang, Jun; Fang, Wen; Niu, Hongli
2016-01-01
In recent years, forecasting financial market dynamics has been a focus of economic research. To predict the price indices of stock markets, we developed an architecture which combines Elman recurrent neural networks with a stochastic time effective function. Analyzing the proposed model with linear regression, complexity invariant distance (CID), and multiscale CID (MCID) methods, and comparing it with the backpropagation neural network (BPNN), the stochastic time effective neural network (STNN), and the Elman recurrent neural network (ERNN), the empirical results show that the proposed neural network displays the best performance among these networks in financial time series forecasting. Further, empirical research tested the predictive effects on SSE, TWSE, KOSPI, and Nikkei225 with the established model, and the corresponding statistical comparisons of these market indices are also exhibited. The experimental results show that this approach performs well in predicting the values of the stock market indices. PMID:27293423
NASA Astrophysics Data System (ADS)
Wold, Alexandra M.; Mays, M. Leila; Taktakishvili, Aleksandre; Jian, Lan K.; Odstrcil, Dusan; MacNeice, Peter
2018-03-01
The Wang-Sheeley-Arge (WSA)-ENLIL+Cone model is used extensively in space weather operations worldwide to model coronal mass ejection (CME) propagation, so it is important to assess its performance. We present validation results for the WSA-ENLIL+Cone model installed at the Community Coordinated Modeling Center (CCMC) and executed in real time by the CCMC space weather team. CCMC uses the WSA-ENLIL+Cone model to predict CME arrivals at NASA missions throughout the inner heliosphere. In this work we compare model-predicted CME arrival times to in situ interplanetary coronal mass ejection leading-edge measurements at Solar TErrestrial RElations Observatory-Ahead (STEREO-A), Solar TErrestrial RElations Observatory-Behind (STEREO-B), and Earth (Wind and ACE) for simulations completed between March 2010 and December 2016 (over 1,800 CMEs). We report hit, miss, false alarm, and correct rejection statistics for all three locations. For all predicted CME arrivals, the hit rate is 0.5 and the false alarm rate is 0.1. For the 273 events where the CME was predicted to arrive at Earth, STEREO-A, or STEREO-B and was actually observed (hit events), the mean absolute arrival-time prediction error was 10.4 ± 0.9 h, with a tendency toward early prediction (mean error of -4.0 h). We show the dependence of the arrival-time error on CME input parameters. We also explore the impact of the multi-spacecraft observations used to initialize the model CME inputs by comparing model verification results before and after the STEREO-B communication loss (since September 2014) and STEREO-A sidelobe operations (August 2014-December 2015). There is an increase of 1.7 h in the CME arrival-time error during single- or limited two-viewpoint periods compared with the three-spacecraft viewpoint period. This trend suggests that a future space weather mission at L5 or L4 could provide another coronagraph viewpoint to reduce CME arrival-time errors relative to a single L1 viewpoint.
A Systems Modeling Approach to Forecast Corn Economic Optimum Nitrogen Rate.
Puntel, Laila A; Sawyer, John E; Barker, Daniel W; Thorburn, Peter J; Castellano, Michael J; Moore, Kenneth J; VanLoocke, Andrew; Heaton, Emily A; Archontoulis, Sotirios V
2018-01-01
Historically crop models have been used to evaluate crop yield responses to nitrogen (N) rates after harvest when it is too late for the farmers to make in-season adjustments. We hypothesize that the use of a crop model as an in-season forecast tool will improve current N decision-making. To explore this, we used the Agricultural Production Systems sIMulator (APSIM) calibrated with long-term experimental data for central Iowa, USA (16 years in continuous corn and 15 years in soybean-corn rotation) combined with actual weather data up to a specific crop stage and historical weather data thereafter. The objectives were to: (1) evaluate the accuracy and uncertainty of corn yield and economic optimum N rate (EONR) predictions at four forecast times (planting time, 6th and 12th leaf, and silking phenological stages); (2) determine whether the use of analogous historical weather years based on precipitation and temperature patterns as opposed to using a 35-year dataset could improve the accuracy of the forecast; and (3) quantify the value added by the crop model in predicting annual EONR and yields using the site-mean EONR and the yield at the EONR to benchmark predicted values. Results indicated that the mean corn yield predictions at planting time (R2 = 0.77) using 35 years of historical weather were close to the observed and predicted yield at maturity (R2 = 0.81). Across all forecasting times, the EONR predictions were more accurate in corn-corn than soybean-corn rotation (relative root mean square error, RRMSE, of 25 vs. 45%, respectively). At planting time, the APSIM model predicted the direction of optimum N rates (above, below or at average site-mean EONR) in 62% of the cases examined (n = 31) with an average error range of ±38 kg N ha-1 (22% of the average N rate). Across all forecast times, prediction error of EONR was about three times higher than yield predictions.
The use of the 35-year weather record was better than using selected historical weather years to forecast (RRMSE was on average 3% lower). Overall, the proposed approach of using the crop model as a forecasting tool could improve year-to-year predictability of corn yields and optimum N rates. Further improvements in modeling and set-up protocols are needed toward more accurate forecast, especially for extreme weather years with the most significant economic and environmental cost.
A Systems Modeling Approach to Forecast Corn Economic Optimum Nitrogen Rate
Puntel, Laila A.; Sawyer, John E.; Barker, Daniel W.; Thorburn, Peter J.; Castellano, Michael J.; Moore, Kenneth J.; VanLoocke, Andrew; Heaton, Emily A.; Archontoulis, Sotirios V.
2018-01-01
Historically, crop models have been used to evaluate crop yield responses to nitrogen (N) rates after harvest, when it is too late for farmers to make in-season adjustments. We hypothesize that the use of a crop model as an in-season forecast tool will improve current N decision-making. To explore this, we used the Agricultural Production Systems sIMulator (APSIM), calibrated with long-term experimental data for central Iowa, USA (16 years in continuous corn and 15 years in soybean-corn rotation), combined with actual weather data up to a specific crop stage and historical weather data thereafter. The objectives were to: (1) evaluate the accuracy and uncertainty of corn yield and economic optimum N rate (EONR) predictions at four forecast times (planting time, 6th and 12th leaf, and silking phenological stages); (2) determine whether the use of analogous historical weather years, selected on precipitation and temperature patterns, as opposed to the full 35-year dataset, could improve the accuracy of the forecast; and (3) quantify the value added by the crop model in predicting annual EONR and yields, using the site-mean EONR and the yield at the EONR to benchmark predicted values. Results indicated that the mean corn yield predictions at planting time (R2 = 0.77) using 35 years of historical weather were close to the observed and predicted yields at maturity (R2 = 0.81). Across all forecasting times, the EONR predictions were more accurate in corn-corn than in soybean-corn rotation (relative root mean square error, RRMSE, of 25 vs. 45%, respectively). At planting time, the APSIM model predicted the direction of optimum N rates (above, below, or at the average site-mean EONR) in 62% of the cases examined (n = 31), with an average error range of ±38 kg N ha−1 (22% of the average N rate). Across all forecast times, the prediction error of EONR was about three times higher than that of yield. 
The full 35-year weather record gave better forecasts than selected analogous weather years (RRMSE was on average 3% lower). Overall, the proposed approach of using the crop model as a forecasting tool could improve year-to-year predictability of corn yields and optimum N rates. Further improvements in modeling and set-up protocols are needed toward more accurate forecasts, especially for extreme weather years, which carry the most significant economic and environmental costs. PMID:29706974
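The splicing idea at the heart of this forecast setup can be sketched in a few lines: observed weather up to the forecast date is joined to each historical year's remainder, the crop model is run on every spliced series, and the spread of outcomes forms the forecast. Everything below is a toy stand-in; `toy_yield_model` is an assumed placeholder response, not APSIM.

```python
def toy_yield_model(season_rainfall_mm):
    # Placeholder response: yield rises with seasonal rain, saturating at 12 t/ha.
    return min(12.0, 4.0 + 0.02 * season_rainfall_mm)

def forecast_yield(observed_to_date, historical_years, split_day):
    """Return one yield prediction per historical weather year."""
    predictions = []
    for hist in historical_years:
        spliced = observed_to_date + hist[split_day:]   # splice at forecast time
        predictions.append(toy_yield_model(sum(spliced)))
    return predictions

# 10 days of observed daily rainfall (mm), then three historical years thereafter.
observed = [3.0] * 10
history = [[2.0] * 30, [4.0] * 30, [0.5] * 30]
preds = forecast_yield(observed, history, split_day=10)
mean_pred = sum(preds) / len(preds)
```

The ensemble spread across `preds` is what shrinks as the forecast date moves from planting toward silking and observed weather replaces historical weather.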
NASA Astrophysics Data System (ADS)
Ballard, S.; Hipp, J. R.; Encarnacao, A.; Young, C. J.; Begnaud, M. L.; Phillips, W. S.
2012-12-01
Seismic event locations can be made more accurate and precise by computing predictions of seismic travel time through high-fidelity 3D models of the wave speed in the Earth's interior. Given the variable data quality and uneven data sampling associated with this type of model, it is essential that there be a means to calculate high-quality estimates of the path-dependent variance and covariance associated with the predicted travel times of ray paths through the model. In this paper, we describe a methodology for accomplishing this by exploiting the full model covariance matrix, and show examples of path-dependent travel-time prediction uncertainty computed from SALSA3D, our global, seamless 3D tomographic P-velocity model. Typical global 3D models have on the order of half a million nodes, so the challenge in calculating the covariance matrix is formidable: 0.9 TB of storage for half of a symmetric matrix, necessitating an out-of-core (OOC) blocked matrix solution technique. With our approach, the tomography matrix G (which includes Tikhonov regularization terms) is multiplied by its transpose to form GᵀG, written in a blocked sub-matrix fashion. We employ a distributed parallel solution paradigm that solves for (GᵀG)⁻¹ by assigning blocks to individual processing nodes for matrix decomposition, update, and scaling operations. We first find the Cholesky decomposition of GᵀG, which is subsequently inverted. Next, we employ OOC matrix multiplication methods to calculate the model covariance matrix from (GᵀG)⁻¹ and an assumed data covariance matrix. Given the model covariance matrix, we solve for the travel-time covariance associated with arbitrary ray paths by summing the model covariance along both ray paths. Setting the paths equal and taking the square root yields the travel-time prediction uncertainty for a single path.
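The final step of the abstract reduces to a quadratic form: with model covariance matrix C and a ray path's sensitivity vector g (how much time the ray accumulates per model node), the travel-time covariance between two paths g1 and g2 is g1ᵀ C g2, and setting the paths equal and taking the square root gives the single-path uncertainty. A minimal sketch with an illustrative 3x3 C (not SALSA3D values):

```python
import math

def travel_time_cov(g1, g2, C):
    # Quadratic form g1^T C g2: sum the model covariance along both ray paths,
    # weighted by each path's sensitivity to each model node.
    n = len(C)
    return sum(g1[i] * C[i][j] * g2[j] for i in range(n) for j in range(n))

C = [[0.04, 0.01, 0.00],
     [0.01, 0.09, 0.02],
     [0.00, 0.02, 0.16]]   # illustrative model covariance (s^2 per unit sensitivity)
g = [1.0, 2.0, 0.5]        # sensitivity of one ray path to each node

# Setting both paths equal and taking the square root yields the
# travel-time prediction uncertainty for the single path.
sigma_tt = math.sqrt(travel_time_cov(g, g, C))
```

The same `travel_time_cov(g1, g2, C)` call with two different paths gives the off-diagonal covariance needed when locating an event with correlated predictions.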
NASA Astrophysics Data System (ADS)
Dash, Rajashree
2017-11-01
Forecasting the purchasing power of one currency with respect to another is a long-standing topic in financial time series prediction. Despite the existence of several traditional and computational models for currency exchange rate forecasting, there is always a need for simpler, more efficient models with better predictive capability. In this paper, an evolutionary framework is proposed that combines an improved shuffled frog leaping (ISFL) algorithm with a computationally efficient functional link artificial neural network (CEFLANN) for prediction of currency exchange rates. The model is validated by observing the monthly prediction measures obtained for three currency exchange data sets (USD/CAD, USD/CHF, and USD/JPY) collected over the same period of time. The model performance is also compared with two other evolutionary learning techniques, the shuffled frog leaping algorithm and particle swarm optimization. Analysis of the results suggests that the proposed model, developed using the ISFL algorithm with the CEFLANN network, is a promising predictor for currency exchange rates compared to the other models included in the study.
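A functional link network such as CEFLANN replaces hidden layers with a fixed nonlinear expansion of the inputs followed by a single trainable linear layer. The sketch below assumes a trigonometric expansion, a common FLANN choice; the paper's exact basis and the ISFL weight-training step are not shown and the inputs are invented.

```python
import math

def functional_link_expand(x, order=2):
    """Trigonometric functional-link expansion of one input vector."""
    features = [1.0]                      # bias term
    for xi in x:
        features.append(xi)               # original input
        for k in range(1, order + 1):     # fixed nonlinear basis, no hidden layer
            features.append(math.sin(k * math.pi * xi))
            features.append(math.cos(k * math.pi * xi))
    return features

def flann_output(x, weights, order=2):
    # Single linear layer over the expanded features; `weights` would be
    # tuned by an optimizer such as ISFL in the paper's framework.
    phi = functional_link_expand(x, order)
    return sum(w * f for w, f in zip(weights, phi))

x = [0.5, -0.25]                          # e.g. two lagged, scaled exchange rates
phi = functional_link_expand(x)
```

With `order=2` and two inputs the expansion has 11 features (1 bias + per input: the value plus two sine/cosine pairs), so only 11 weights need training.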
Rottmann, Joerg; Berbeco, Ross
2014-12-01
Precise prediction of respiratory motion is a prerequisite for real-time motion compensation techniques such as beam, dynamic couch, or dynamic multileaf collimator tracking. Collection of tumor motion data to train the prediction model is required for most algorithms. To avoid exposure of patients to additional dose from imaging during this procedure, the feasibility of training a linear respiratory motion prediction model with an external surrogate signal is investigated and its performance benchmarked against training the model with tumor positions directly. The authors implement a lung tumor motion prediction algorithm based on linear ridge regression that is suitable to overcome system latencies up to about 300 ms. Its performance is investigated on a data set of 91 patient breathing trajectories recorded from fiducial marker tracking during radiotherapy delivery to the lung of ten patients. The expected 3D geometric error is quantified as a function of predictor lookahead time, signal sampling frequency and history vector length. Additionally, adaptive model retraining is evaluated, i.e., repeatedly updating the prediction model after initial training. Training length for this is gradually increased with incoming (internal) data availability. To assess practical feasibility model calculation times as well as various minimum data lengths for retraining are evaluated. Relative performance of model training with external surrogate motion data versus tumor motion data is evaluated. However, an internal-external motion correlation model is not utilized, i.e., prediction is solely driven by internal motion in both cases. Similar prediction performance was achieved for training the model with external surrogate data versus internal (tumor motion) data. Adaptive model retraining can substantially boost performance in the case of external surrogate training while it has little impact for training with internal motion data. 
A minimum adaptive retraining data length of 8 s and a history vector length of 3 s achieved maximal performance. Sampling frequency appears to have little impact on performance, confirming previously published work. By using the linear predictor, a relative geometric 3D error reduction of about 50% was achieved (using adaptive retraining, a history vector length of 3 s, and with results averaged over all investigated lookahead times and signal sampling frequencies). The absolute mean error could be reduced from (2.0 ± 1.6) mm when using no prediction at all to (0.9 ± 0.8) mm and (1.0 ± 0.9) mm when using the predictor trained with internal tumor motion training data and external surrogate motion training data, respectively (for a typical lookahead time of 250 ms and sampling frequency of 15 Hz). A linear prediction model can reduce latency-induced tracking errors by an average of about 50% in real-time image-guided radiotherapy systems with system latencies of up to 300 ms. Training a linear model for lung tumor motion prediction with an external surrogate signal alone is feasible and results in similar performance as training with (internal) tumor motion. Particularly for scenarios where motion data are extracted from fluoroscopic imaging with ionizing radiation, this may alleviate the need for additional imaging dose during the collection of model training data.
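The prediction scheme described above can be sketched as ridge regression from a history vector of recent positions to the position one lookahead ahead. The breathing trace, history length, and regularization below are illustrative stand-ins, not the study's patient data or tuned parameters.

```python
import math

def solve(A, b):
    """Gaussian elimination with partial pivoting for small dense systems."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def train_ridge(signal, hist, lookahead, lam=1e-3):
    """Fit w so that [1, s[t-hist+1..t]] . w ~ s[t+lookahead] (normal equations)."""
    X, y = [], []
    for t in range(hist - 1, len(signal) - lookahead):
        X.append([1.0] + signal[t - hist + 1:t + 1])   # history vector + bias
        y.append(signal[t + lookahead])                # position after the latency
    d = hist + 1
    XtX = [[sum(X[r][i] * X[r][j] for r in range(len(X))) + (lam if i == j else 0.0)
            for j in range(d)] for i in range(d)]
    Xty = [sum(X[r][i] * y[r] for r in range(len(X))) for i in range(d)]
    return solve(XtX, Xty)

# Synthetic 0.25 Hz "breathing" trace sampled at 15 Hz; predict ~250 ms ahead.
fs, lookahead = 15, 4                        # 4 samples ~ 267 ms latency
trace = [math.sin(2 * math.pi * 0.25 * t / fs) for t in range(300)]
w = train_ridge(trace, hist=10, lookahead=lookahead)
t = 200
pred = w[0] + sum(wi * si for wi, si in zip(w[1:], trace[t - 9:t + 1]))
actual = trace[t + lookahead]
```

For a clean periodic trace the linear predictor is nearly exact; real tumor traces are irregular, which is where the adaptive retraining discussed above matters.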
An MEG signature corresponding to an axiomatic model of reward prediction error.
Talmi, Deborah; Fuentemilla, Lluis; Litvak, Vladimir; Duzel, Emrah; Dolan, Raymond J
2012-01-02
Optimal decision-making is guided by evaluating the outcomes of previous decisions. Prediction errors are theoretical teaching signals which integrate two features of an outcome: its inherent value and the prior expectation of its occurrence. To uncover the magnetic signature of prediction errors in the human brain we acquired magnetoencephalographic (MEG) data while participants performed a gambling task. Our primary objective was to use formal criteria, based upon an axiomatic model (Caplin and Dean, 2008a), to determine the presence and timing profile of MEG signals that express prediction errors. We report analyses at the sensor level, implemented in SPM8, time locked to outcome onset. We identified, for the first time, an MEG signature of prediction error, which emerged approximately 320 ms after an outcome and was expressed as an interaction between outcome valence and probability. This signal followed earlier, separate signals for outcome valence and probability, which emerged approximately 200 ms after an outcome. Strikingly, the time course of the prediction error signal, as well as the early valence signal, resembled the feedback-related negativity (FRN). In simultaneously acquired EEG data we obtained a robust FRN, but the win and loss signals that comprised this difference wave did not comply with the axiomatic model. Our findings motivate an explicit examination of the critical issue of timing embodied in computational models of prediction errors as seen in human electrophysiological data. Copyright © 2011 Elsevier Inc. All rights reserved.
The predictability of consumer visitation patterns
NASA Astrophysics Data System (ADS)
Krumme, Coco; Llorente, Alejandro; Cebrian, Manuel; Pentland, Alex ("Sandy"); Moro, Esteban
2013-04-01
We consider hundreds of thousands of individual economic transactions to ask: how predictable are consumers in their merchant visitation patterns? Our results suggest that, in the long-run, much of our seemingly elective activity is actually highly predictable. Notwithstanding a wide range of individual preferences, shoppers share regularities in how they visit merchant locations over time. Yet while aggregate behavior is largely predictable, the interleaving of shopping events introduces important stochastic elements at short time scales. These short- and long-scale patterns suggest a theoretical upper bound on predictability, and describe the accuracy of a Markov model in predicting a person's next location. We incorporate population-level transition probabilities in the predictive models, and find that in many cases these improve accuracy. While our results point to the elusiveness of precise predictions about where a person will go next, they suggest the existence, at large time-scales, of regularities across the population.
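The Markov model used for next-location prediction can be sketched as a first-order transition-count model with the population-level fallback the abstract describes. Merchants and the visit sequence below are invented.

```python
from collections import defaultdict

def fit_transitions(visits):
    """Count first-order transitions between consecutively visited merchants."""
    counts = defaultdict(lambda: defaultdict(int))
    for a, b in zip(visits, visits[1:]):
        counts[a][b] += 1
    return counts

def predict_next(counts, current, population_counts=None):
    """Most likely next merchant; fall back to population counts for unseen states."""
    options = counts.get(current)
    if not options and population_counts is not None:
        options = population_counts       # population-level transition probabilities
    if not options:
        return None
    return max(options, key=options.get)

visits = ["grocery", "cafe", "grocery", "cafe", "gym", "grocery", "cafe", "grocery"]
model = fit_transitions(visits)
```

Normalizing each row of `model` gives the transition probabilities whose entropy sets the predictability bound the abstract refers to.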
Predicting Operator Execution Times Using CogTool
NASA Technical Reports Server (NTRS)
Santiago-Espada, Yamira; Latorella, Kara A.
2013-01-01
Researchers and developers of NextGen systems can use predictive human performance modeling tools as an initial approach to obtain skilled user performance times analytically, before system testing with users. This paper describes CogTool models for a two-pilot crew executing two different types of datalink clearance acceptance tasks on two different simulation platforms. The CogTool time estimates for accepting and executing Required Time of Arrival and Interval Management clearances were compared to empirical data observed in videotapes and recorded in simulation files. Results indicate no statistically significant difference between the empirical data and the CogTool predictions. A population comparison test found no significant differences between the CogTool estimates and the empirical execution times for any of the four test conditions. We discuss modeling caveats and considerations for applying CogTool to crew performance modeling in advanced cockpit environments.
Model Forecast Skill and Sensitivity to Initial Conditions in the Seasonal Sea Ice Outlook
NASA Technical Reports Server (NTRS)
Blanchard-Wrigglesworth, E.; Cullather, R. I.; Wang, W.; Zhang, J.; Bitz, C. M.
2015-01-01
We explore the skill of predictions of September Arctic sea ice extent from dynamical models participating in the Sea Ice Outlook (SIO). Forecasts submitted in August, at roughly 2 month lead times, are skillful. However, skill is lower in forecasts submitted to SIO, which began in 2008, than in hindcasts (retrospective forecasts) of the last few decades. The multimodel mean SIO predictions offer slightly higher skill than the single-model SIO predictions, but neither beats a damped persistence forecast at longer than 2 month lead times. The models are largely unsuccessful at predicting each other, indicating a large difference in model physics and/or initial conditions. Motivated by this, we perform an initial condition sensitivity experiment with four SIO models, applying a fixed -1 m perturbation to the initial sea ice thickness. The significant range of the response among the models suggests that different model physics make a significant contribution to forecast uncertainty.
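The damped persistence benchmark mentioned above relaxes the current anomaly toward climatology by the anomaly autocorrelation raised to the lead time. A one-line sketch with assumed numbers, not actual SIO values:

```python
def damped_persistence(obs, clim, r, lead_months):
    """Forecast = climatology + r**lead * (observed anomaly)."""
    return clim + (r ** lead_months) * (obs - clim)

clim_sept = 6.5     # September climatological extent, 10^6 km^2 (assumed)
obs = 8.0           # current observed extent anomaly base (assumed)
r = 0.7             # assumed month-to-month anomaly autocorrelation

# Two-month lead: the 1.5 anomaly is damped by 0.7**2 before being added back.
forecast = damped_persistence(obs, clim_sept, r, lead_months=2)
```

Because the damping is purely statistical, beating this forecast is the minimum bar a dynamical model must clear, which is exactly the comparison the abstract reports.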
Evolution of the Radial Abundance Gradient and Cold Gas along the Milky Way Disk
NASA Astrophysics Data System (ADS)
Chen, Q. S.; Chang, R. X.; Yin, J.
2014-03-01
We have constructed a phenomenological model of the chemical evolution of the Milky Way disk, treating the molecular and atomic gas separately. Using this model, we explore the radial profiles of oxygen abundance and of the surface density of cold gas, and their evolution over time. It is shown that the model predictions are very sensitive to the adopted infall time-scale. By comparing the model predictions with observations, we find that the model adopting the star formation law based on H₂ can properly reproduce the observed radial distributions of cold gas and the oxygen abundance gradient along the disk. We also compare the model results with the predictions of a model adopting the instantaneous recycling approximation (IRA), and find that the IRA assumption has little influence on the model results, especially in the low-density gas region.
Reliability Prediction of Ontology-Based Service Compositions Using Petri Net and Time Series Models
Li, Jia; Xia, Yunni; Luo, Xin
2014-01-01
OWL-S, one of the most important Semantic Web service ontologies proposed to date, provides a core ontological framework and guidelines for describing the properties and capabilities of web services in an unambiguous, computer-interpretable form. Predicting the reliability of composite service processes specified in OWL-S allows service users to decide whether a process meets their quantitative quality requirements. In this study, we consider the runtime quality of services to be fluctuating and introduce a dynamic framework to predict the runtime reliability of services specified in OWL-S, employing the non-Markovian stochastic Petri net (NMSPN) and a time series model. The framework includes the following steps: obtaining the historical response time series of individual service components; fitting these series with an autoregressive moving-average (ARMA) model and predicting the future firing rates of service components; mapping the OWL-S process into an NMSPN model; and employing the predicted firing rates as the model input of the NMSPN and calculating the normal completion probability as the reliability estimate. In the case study, a comparison between the static model and our approach based on experimental data is presented, and it is shown that our approach achieves higher prediction accuracy. PMID:24688429
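The forecasting step of the framework can be sketched with an AR(1) model fitted by least squares (the paper uses a full ARMA model; this simplification and the toy response-time series are assumptions):

```python
def fit_ar1(series):
    """Least-squares fit of x[t] = c + phi * x[t-1]."""
    x, y = series[:-1], series[1:]
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    phi = sum((a - mx) * (b - my) for a, b in zip(x, y)) / \
          sum((a - mx) ** 2 for a in x)
    return my - phi * mx, phi          # (c, phi)

def forecast(c, phi, last, steps):
    """Iterate the fitted recurrence to predict future response times."""
    out = []
    for _ in range(steps):
        last = c + phi * last
        out.append(last)
    return out

resp_times = [120, 118, 125, 130, 127, 133, 138, 135, 141, 145]  # ms, invented
c, phi = fit_ar1(resp_times)
future = forecast(c, phi, resp_times[-1], steps=3)
rate = 1000.0 / future[0]   # predicted firing rate (1/s) to feed the NMSPN
```

The predicted rates, not the historical averages, parameterize the Petri net transitions, which is what makes the reliability estimate dynamic.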
The Subseasonal Experiment (SubX) to Advance National Weather Service Predictions for Weeks 3-4
NASA Astrophysics Data System (ADS)
Mariotti, A.; Barrie, D.; Archambault, H. M.
2017-12-01
There is great practical interest in developing skillful predictions of extremes for lead times extending beyond the two-week theoretical predictability skill barrier for weather forecasts to the subseasonal-to-seasonal (S2S) time scale. The processes and phenomena specific to S2S are posited to require a unified approach to science, modeling, and predictions that draws expertise from both the weather and climate/seasonal communities. Based on this premise, in 2016, the NOAA Climate Program Office Modeling, Analysis, Predictions and Projections (MAPP) program, in partnership with the National Weather Service Office of Science and Technology Integration, launched a major research and transition initiative to meet NOAA's emerging research and transition needs for developing skillful S2S predictions. A major component of this initiative is an experiment to test single- and multi-model ensembles for subseasonal prediction, called the Subseasonal Experiment (SubX). SubX, which engages six modeling groups, is producing real time experimental forecasts based on weather, climate, and Earth system models for weeks 3-4. The project investigators are evaluating, testing, and optimizing this system, and the hindcast and real time forecast data are available to the broad community. SubX research is targeted at a number of important decision-making contexts including drought and extremes, as well as the broad variety of phenomena that are meaningful at subseasonal timescales (e.g., MJO, ENSO, stratosphere/troposphere coupling, etc.). This presentation will discuss the design and status of SubX in the broader context of MAPP program S2S prediction research.
NASA Astrophysics Data System (ADS)
Ceballos-Núñez, Verónika; Richardson, Andrew; Sierra, Carlos
2017-04-01
The global carbon cycle is strongly controlled by the source/sink strength of vegetation as well as the capacity of terrestrial ecosystems to retain this carbon. However, it is uncertain how some vegetation dynamics, such as the allocation of carbon to different ecosystem compartments, should be represented in models. The assumptions behind model structures may result in highly divergent model predictions. Here, we assess model performance by calculating the age of the carbon in the system and in each compartment, and the overall transit time of C in the system. We used these diagnostics to assess the influence of three different carbon allocation schemes on the rates of C cycling in vegetation. First, we used published measurements of ecosystem C compartments from the Harvard Forest Environmental Measurement Site to find the best set of parameters for the different model structures. Second, we calculated C stocks, respiration fluxes, radiocarbon values, ages, and transit times. We found a good fit of the three model structures to the available data, but the time series of C in foliage and wood need to be complemented with other ecosystem compartments in order to reduce the high parameter collinearity that we observed and to reduce model equifinality. Differences in model structure had a small impact on predictions of ecosystem C compartments, but overall they resulted in very different predictions of age and transit time distributions. In particular, the inclusion of a storage compartment had an important impact on predicted system ages and transit times. In the models with 1 or 2 storage compartments, the age of carbon in the system and in each of the compartments was distributed more towards younger ages than in the model that had no storage; the mean system age of these two models with storage was 80 years younger than in the model without storage. 
As expected from these age distributions, the mean transit time for the two models with storage compartments was 50 years shorter than for the model without storage. These results suggest that ages and transit times, which can be indirectly measured using isotope tracers, serve as important diagnostics of model structure and could greatly help to reduce uncertainties in model predictions. Furthermore, by considering ages and transit times of C in vegetation compartments as distributions, not only as mean values, we obtain additional insights into the temporal dynamics of carbon use, storage, and allocation to plant parts, which depend not only on the rate at which this C is transferred in and out of the compartments, but also on the stochastic nature of the process itself.
Finite difference time domain grid generation from AMC helicopter models
NASA Technical Reports Server (NTRS)
Cravey, Robin L.
1992-01-01
A simple technique is presented which forms a cubic grid model of a helicopter from an Aircraft Modeling Code (AMC) input file. The AMC input file defines the helicopter fuselage as a series of polygonal cross sections. The cubic grid model is used as an input to a Finite Difference Time Domain (FDTD) code to obtain predictions of antenna performance on a generic helicopter model. The predictions compare reasonably well with measured data.
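The core of the AMC-to-grid conversion is rasterizing each polygonal cross section onto a uniform grid. A minimal sketch, assuming a ray-casting point-in-polygon test on cell centers and a toy square "fuselage" section in place of real AMC geometry:

```python
def point_in_polygon(px, py, poly):
    """Ray-casting test: count edge crossings of a horizontal ray from (px, py)."""
    inside = False
    n = len(poly)
    for i in range(n):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % n]
        if (y1 > py) != (y2 > py):                       # edge straddles the ray
            x_cross = x1 + (py - y1) * (x2 - x1) / (y2 - y1)
            if px < x_cross:
                inside = not inside
    return inside

def rasterize(poly, nx, ny, cell):
    """Mark grid cells whose centers fall inside the cross-section polygon."""
    return [[point_in_polygon((i + 0.5) * cell, (j + 0.5) * cell, poly)
             for i in range(nx)] for j in range(ny)]

section = [(1.0, 1.0), (3.0, 1.0), (3.0, 3.0), (1.0, 3.0)]  # toy square section
grid = rasterize(section, nx=4, ny=4, cell=1.0)
filled = sum(1 for row in grid for c in row if c)
```

Stacking one such 2D mask per cross section along the fuselage axis yields the cubic (voxel) model an FDTD code consumes.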
Required Collaborative Work in Online Courses: A Predictive Modeling Approach
ERIC Educational Resources Information Center
Smith, Marlene A.; Kellogg, Deborah L.
2015-01-01
This article describes a predictive model that assesses whether a student will have greater perceived learning in group assignments or in individual work. The model produces correct classifications 87.5% of the time. The research is notable in that it is the first in the education literature to adopt a predictive modeling methodology using data…
Badgett, Majors J; Boyes, Barry; Orlando, Ron
2018-02-16
A model that predicts retention times for peptides on a HALO® penta-HILIC column with gradient elution was created. Coefficients for each amino acid were derived using linear regression analysis, and these coefficients can be summed to predict the retention of peptides. The model shows a high correlation between experimental and predicted retention times (0.946), on par with previous RP and HILIC models. External validation of the model was performed using a set of H. pylori samples on the same LC-MS system used to create the model, and the deviation of actual from predicted times was low. Apart from amino acid composition, the length and the location of amino acid residues in a peptide were examined, and two site-specific corrections were created: one for hydrophobic residues at the N-terminus and one for hydrophobic residues one position from the N-terminus. Copyright © 2017 Elsevier B.V. All rights reserved.
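The additive model can be sketched directly from the abstract: predicted retention is an intercept plus summed per-residue coefficients, with a site-specific correction for a hydrophobic N-terminus. All numeric values below are invented for illustration; they are not the published HILIC coefficients.

```python
# Hypothetical per-residue retention coefficients (minutes) and correction.
COEFF = {"G": 1.2, "A": 1.0, "L": 0.4, "K": 2.1, "S": 1.5, "F": 0.3}
HYDROPHOBIC = set("LFAVIMW")
NTERM_HYDROPHOBIC_CORRECTION = -0.5   # assumed value, minutes
INTERCEPT = 2.0

def predict_rt(peptide):
    """Sum residue coefficients, then apply the N-terminal correction if needed."""
    rt = INTERCEPT + sum(COEFF[aa] for aa in peptide)
    if peptide[0] in HYDROPHOBIC:
        rt += NTERM_HYDROPHOBIC_CORRECTION
    return rt

rt1 = predict_rt("GKSA")   # no N-terminal correction applies
rt2 = predict_rt("LKSA")   # hydrophobic N-terminus triggers the correction
```

Fitting real coefficients is ordinary least squares on a residue-count matrix, one column per amino acid plus indicator columns for the two site-specific corrections.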
Just-in-Time Correntropy Soft Sensor with Noisy Data for Industrial Silicon Content Prediction.
Chen, Kun; Liang, Yu; Gao, Zengliang; Liu, Yi
2017-08-08
Development of accurate data-driven quality prediction models for industrial blast furnaces encounters several challenges, mainly because the collected data are nonlinear, non-Gaussian, and unevenly distributed. A just-in-time correntropy-based local soft sensing approach is presented to predict the silicon content in this work. Without cumbersome efforts for outlier detection, a correntropy support vector regression (CSVR) modeling framework is proposed to deal with soft sensor development and outlier detection simultaneously. Moreover, with a continuously updating database and a clustering strategy, a just-in-time CSVR (JCSVR) method is developed. Consequently, more accurate prediction and efficient implementation of JCSVR can be achieved. The better prediction performance of JCSVR is validated on online silicon content prediction, compared with traditional soft sensors.
Validity of Treadmill-Derived Critical Speed on Predicting 5000-Meter Track-Running Performance.
Nimmerichter, Alfred; Novak, Nina; Triska, Christoph; Prinz, Bernhard; Breese, Brynmor C
2017-03-01
Nimmerichter, A, Novak, N, Triska, C, Prinz, B, and Breese, BC. Validity of treadmill-derived critical speed on predicting 5,000-meter track-running performance. J Strength Cond Res 31(3): 706-714, 2017. To evaluate 3 models of critical speed (CS) for the prediction of 5,000-m running performance, 16 trained athletes completed an incremental test on a treadmill to determine maximal aerobic speed (MAS) and 3 randomly ordered runs to exhaustion at the Δ70% intensity and at 110% and 98% of MAS. Critical speed and the distance covered above CS (D') were calculated using the hyperbolic speed-time (HYP), the linear distance-time (LIN), and the linear speed inverse-time (INV) models. Five-thousand-meter performance was determined on a 400-m running track. Individual predictions of 5,000-m running time (t = [5,000 - D']/CS) and speed (s = D'/t + CS) were calculated across the 3 models, in addition to multiple regression analyses. Prediction accuracy was assessed with the standard error of estimate (SEE) from linear regression analysis and the mean difference expressed in units of measurement and as a coefficient of variation (%). Five-thousand-meter running performance (speed: 4.29 ± 0.39 m·s⁻¹; time: 1,176 ± 117 seconds) was significantly better than the predictions from all 3 models (p < 0.0001). The mean difference was 65-105 seconds (5.7-9.4%) for time and -0.22 to -0.34 m·s⁻¹ (-5.0 to -7.5%) for speed. Predictions from multiple regression analyses with CS and D' as predictor variables were not significantly different from actual running performance (-1.0 to 1.1%). The SEE across all models and predictions was approximately 65 seconds or 0.20 m·s⁻¹ and is therefore considered moderate. The results of this study have shown the importance of aerobic and anaerobic energy system contributions to predicting 5,000-m running performance. Using estimates of CS and D' is valuable for predicting performance over race distances of 5,000 m.
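The prediction equations quoted in the abstract, t = (5,000 - D')/CS and s = D'/t + CS, applied to assumed CS and D' values (not the paper's subject data):

```python
def predict_5000m(cs, d_prime, distance=5000.0):
    """Predicted race time (s) and mean speed (m/s) from CS and D'."""
    t = (distance - d_prime) / cs   # time to cover the distance beyond D' at CS
    s = d_prime / t + cs            # mean speed: CS plus D' spread over the race
    return t, s

# Assumed values: CS = 4.0 m/s, D' = 200 m.
t, s = predict_5000m(cs=4.0, d_prime=200.0)
```

Note the two equations are consistent by construction: s * t returns exactly the race distance, so either one determines the other.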
Model-based prediction of myelosuppression and recovery based on frequent neutrophil monitoring.
Netterberg, Ida; Nielsen, Elisabet I; Friberg, Lena E; Karlsson, Mats O
2017-08-01
To investigate whether a more frequent monitoring of the absolute neutrophil counts (ANC) during myelosuppressive chemotherapy, together with model-based predictions, can improve therapy management, compared to the limited clinical monitoring typically applied today. Daily ANC in chemotherapy-treated cancer patients were simulated from a previously published population model describing docetaxel-induced myelosuppression. The simulated values were used to generate predictions of the individual ANC time-courses, given the myelosuppression model. The accuracy of the predicted ANC was evaluated under a range of conditions with reduced amount of ANC measurements. The predictions were most accurate when more data were available for generating the predictions and when making short forecasts. The inaccuracy of ANC predictions was highest around nadir, although a high sensitivity (≥90%) was demonstrated to forecast Grade 4 neutropenia before it occurred. The time for a patient to recover to baseline could be well forecasted 6 days (±1 day) before the typical value occurred on day 17. Daily monitoring of the ANC, together with model-based predictions, could improve anticancer drug treatment by identifying patients at risk for severe neutropenia and predicting when the next cycle could be initiated.
Austin, Peter C.; van Klaveren, David; Vergouwe, Yvonne; Nieboer, Daan; Lee, Douglas S.; Steyerberg, Ewout W.
2017-01-01
Objective: Validation of clinical prediction models traditionally refers to the assessment of model performance in new patients. We studied different approaches to geographic and temporal validation in the setting of multicenter data from two time periods. Study Design and Setting: We illustrated different analytic methods for validation using a sample of 14,857 patients hospitalized with heart failure at 90 hospitals in two distinct time periods. Bootstrap resampling was used to assess internal validity. Meta-analytic methods were used to assess geographic transportability. Each hospital was used once as a validation sample, with the remaining hospitals used for model derivation. Hospital-specific estimates of discrimination (c-statistic) and calibration (calibration intercepts and slopes) were pooled using random effects meta-analysis methods. I² statistics and prediction interval width quantified geographic transportability. Temporal transportability was assessed using patients from the earlier period for model derivation and patients from the later period for model validation. Results: Estimates of reproducibility, pooled hospital-specific performance, and temporal transportability were on average very similar, with c-statistics of 0.75. Between-hospital variation was moderate according to I² statistics and prediction intervals for c-statistics. Conclusion: This study illustrates how the performance of prediction models can be assessed in settings with multicenter data at different time periods. PMID:27262237
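The pooling step can be sketched as random-effects meta-analysis of hospital-specific c-statistics. The DerSimonian-Laird estimator is assumed here (the abstract names random effects meta-analysis but not the estimator), and the hospital estimates and variances are invented.

```python
def dersimonian_laird(estimates, variances):
    """Pool per-hospital estimates; return (pooled, tau^2, I^2)."""
    k = len(estimates)
    w = [1.0 / v for v in variances]                     # inverse-variance weights
    fixed = sum(wi * e for wi, e in zip(w, estimates)) / sum(w)
    q = sum(wi * (e - fixed) ** 2 for wi, e in zip(w, estimates))
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (k - 1)) / c)                   # between-hospital variance
    w_star = [1.0 / (v + tau2) for v in variances]       # random-effects weights
    pooled = sum(wi * e for wi, e in zip(w_star, estimates)) / sum(w_star)
    i2 = max(0.0, (q - (k - 1)) / q) if q > 0 else 0.0   # heterogeneity fraction
    return pooled, tau2, i2

c_stats = [0.72, 0.75, 0.78, 0.74, 0.76]                 # invented hospital c-statistics
var_c = [0.0004, 0.0003, 0.0005, 0.0004, 0.0006]         # invented variances
pooled_c, tau2, i2 = dersimonian_laird(c_stats, var_c)
```

Here tau² feeds directly into the prediction interval width the abstract uses to quantify geographic transportability.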
ERIC Educational Resources Information Center
Culpepper, Steven Andrew
2010-01-01
Statistical prediction remains an important tool for decisions in a variety of disciplines. An equally important issue is identifying factors that contribute to more or less accurate predictions. The time series literature includes well developed methods for studying predictability and volatility over time. This article develops…
Jeon, Mi Young; Lee, Hye Won; Kim, Seung Up; Kim, Beom Kyung; Park, Jun Yong; Kim, Do Young; Han, Kwang-Hyub; Ahn, Sang Hoon
2018-04-01
Several risk prediction models for hepatocellular carcinoma (HCC) development are available. We explored whether the use of risk prediction models can dynamically predict HCC development at different time points in chronic hepatitis B (CHB) patients. Between 2006 and 2014, 1397 CHB patients were recruited. All patients underwent serial transient elastography at intervals of >6 months. The median age of this study population (931 males and 466 females) was 49.0 years. The median CU-HCC, REACH-B, LSM-HCC and mREACH-B score at enrolment were 4.0, 9.0, 10.0 and 8.0 respectively. During the follow-up period (median, 68.0 months), 87 (6.2%) patients developed HCC. All risk prediction models were successful in predicting HCC development at both the first liver stiffness (LS) measurement (hazard ratio [HR] = 1.067-1.467 in the subgroup without antiviral therapy [AVT] and 1.096-1.458 in the subgroup with AVT) and second LS measurement (HR = 1.125-1.448 in the subgroup without AVT and 1.087-1.249 in the subgroup with AVT). In contrast, neither the absolute nor percentage change in the scores from the risk prediction models predicted HCC development (all P > .05). The mREACH-B score performed similarly or significantly better than did the other scores (AUROCs at 5 years, 0.694-0.862 vs 0.537-0.875). Dynamic prediction of HCC development at different time points was achieved using four risk prediction models, but not using the changes in the absolute and percentage values between two time points. The mREACH-B score was the most appropriate prediction model of HCC development among four prediction models. © 2017 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
Prediction Models for Dynamic Demand Response
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aman, Saima; Frincu, Marc; Chelmis, Charalampos
2015-11-02
As Smart Grids move closer to dynamic curtailment programs, Demand Response (DR) events will become necessary not only on fixed time intervals and weekdays predetermined by static policies, but also during changing decision periods and weekends to react to real-time demand signals. Unique challenges arise in this context vis-a-vis demand prediction and curtailment estimation and the transformation of such tasks into an automated, efficient dynamic demand response (D2R) process. While existing work has concentrated on increasing the accuracy of prediction models for DR, there is a lack of studies of prediction models for D2R, which we address in this paper. Our first contribution is the formal definition of D2R and the description of its challenges and requirements. Our second contribution is a feasibility analysis of very-short-term prediction of electricity consumption for D2R over a diverse, large-scale dataset that includes both small residential customers and large buildings. Our third, and major, contribution is a set of insights into the predictability of electricity consumption in the context of D2R. Specifically, we focus on prediction models that can operate at a very small data granularity (here 15-min intervals), for both weekdays and weekends - all conditions that characterize scenarios for D2R. We find that short-term time series and simple averaging models used by Independent Service Operators and utilities achieve superior prediction accuracy. We also observe that workdays are more predictable than weekends and holidays. Also, smaller customers have larger variation in consumption and are less predictable than larger buildings. Key implications of our findings are that better models are required for small customers and for non-workdays, both of which are critical for D2R. Also, prediction models require just a few days' worth of data, indicating that small amounts of historical training data can be used to make reliable predictions, simplifying the complexity of the big data challenge associated with D2R.
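The simple averaging baseline of the kind the abstract credits to Independent Service Operators and utilities can be sketched as follows; the 96-interval day layout (15-min granularity) matches the abstract, while the three-day window and the toy data are illustrative assumptions:

```python
def averaging_forecast(history, n_days=3, intervals=96):
    """Forecast tomorrow's consumption per 15-min interval as the mean of the
    same interval over the last n_days of comparable days (e.g., workdays).

    history: list of past days, each a list of `intervals` consumption values."""
    recent = history[-n_days:]
    return [sum(day[i] for day in recent) / len(recent) for i in range(intervals)]

# Usage: three comparable past days of flat consumption (kWh per interval).
history = [[1.0] * 96, [2.0] * 96, [3.0] * 96]
forecast = averaging_forecast(history)
```

Needing only "a few days' worth of data" falls out of the window length: the model never looks further back than `n_days` comparable days.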
Federal Register 2010, 2011, 2012, 2013, 2014
2010-12-07
... Wind Erosion Prediction System for Soil Erodibility System Calculations for the Natural Resources... Erosion Prediction System (WEPS) for soil erodibility system calculations scheduled for implementation for... computer model is a process-based, daily time-step computer model that predicts soil erosion via simulation...
Life prediction methodology for thermal-mechanical fatigue and elevated temperature creep design
NASA Astrophysics Data System (ADS)
Annigeri, Ravindra
Nickel-based superalloys are used for hot section components of gas turbine engines. Life prediction techniques are necessary to assess service damage in superalloy components resulting from thermal-mechanical fatigue (TMF) and elevated temperature creep. A new TMF life model based on continuum damage mechanics has been developed and applied to IN 738 LC substrate material with and without coating. The model also characterizes TMF failure in bulk NiCoCrAlY overlay and NiAl aluminide coatings. The inputs to the TMF life model are mechanical strain range, hold time, peak cycle temperatures and maximum stress measured from the stabilized or mid-life hysteresis loops. A viscoplastic model is used to predict the stress-strain hysteresis loops. A flow rule used in the viscoplastic model characterizes the inelastic strain rate as a function of the applied stress and a set of three internal stress variables known as back stress, drag stress and limit stress. Test results show that the viscoplastic model can reasonably predict time-dependent stress-strain response of the coated material and stress relaxation during hold times. In addition to the TMF life prediction methodology, a model has been developed to characterize the uniaxial and multiaxial creep behavior. An effective stress defined as the applied stress minus the back stress is used to characterize the creep recovery and primary creep behavior. The back stress has terms representing strain hardening, dynamic recovery and thermal recovery. Whenever the back stress is greater than the applied stress, the model predicts a negative creep rate observed during multiple stress and multiple temperature cyclic tests. The model also predicted the rupture time and the remaining life that are important for life assessment. The model has been applied to IN 738 LC, Mar-M247, bulk NiCoCrAlY overlay coating and 316 austenitic stainless steel. 
The proposed model predicts creep response with reasonable accuracy for a wide range of loading cases such as uniaxial tension, tension-torsion and tension-internal pressure loading.
Xu, Yadong; Serre, Marc L; Reyes, Jeanette; Vizuete, William
2016-04-19
To improve ozone exposure estimates for ambient concentrations at a national scale, we introduce our novel Regionalized Air Quality Model Performance (RAMP) approach to integrate chemical transport model (CTM) predictions with the available ozone observations using the Bayesian Maximum Entropy (BME) framework. The framework models the nonlinear and nonhomoscedastic relation between air pollution observations and CTM predictions and for the first time accounts for variability in CTM model performance. A validation analysis using only noncollocated data outside of a validation radius rv was performed, and the R² between observations and re-estimated values for two daily metrics, the daily maximum 8-h average (DM8A) and the daily 24-h average (D24A) ozone concentrations, was obtained for the OBS scenario using ozone observations only, in contrast with the RAMP and Constant Air Quality Model Performance (CAMP) scenarios. We show that, by accounting for the spatial and temporal variability in model performance, our novel RAMP approach extracts more information from CTM predictions than the CAMP approach, which assumes that model performance does not change across space and time: the R² increase is over 12 times larger for the DM8A and over 3.5 times larger for the D24A ozone concentrations.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Beer, W. Nicholas; Iltis, Susannah; Anderson, James J.
2009-01-01
Columbia Basin Research uses the COMPASS model on a daily basis during the outmigration of Snake River Chinook and steelhead smolts to predict downstream passage and survival. Fish arrival predictions and observations from program RealTime, along with predicted and observed environmental conditions, are used to make in-season predictions of arrival and survival to various dams in the Columbia and Snake Rivers. For 2008, calibrations of travel and survival parameters for two stocks of fish - Snake River yearling PIT-tagged wild chinook salmon (chin1pit) and Snake River PIT-tagged steelhead (lgrStlhd) - were used to model travel and survival of steelhead and chinook stocks from Lower Granite Dam (LWG) or McNary Dam (MCN) to Bonneville Dam (BON). This report summarizes the success of the COMPASS/RealTime process to model these migrations as they occur. We compared model results on timing and survival to data from two sources: stock-specific counts at dams and end-of-season control survival estimates (Jim Faulkner, NOAA, pers. comm. Dec. 16, 2008). The difference between the predicted and observed day of median passage and the Mean Absolute Deviation (MAD) between predicted and observed arrival cumulative distributions are measures of timing accuracy. MAD is essentially the average percentage error over the season. The difference between the predicted and observed survivals is a measure of survival accuracy. Model results and timing data were in good agreement from LWG to John Day Dam (JDA). Predictions of median passage days for the chin1pit and lgrStlhd stocks were 0 and 2 days (respectively) later than observed. MAD for chin1pit and lgrStlhd stocks at JDA were 2.3% and 5.9% (respectively). Between JDA and BON, modeling and timing data were not as well matched. At BON, median passage predictions were 6 and 10 days later than observed, and MAD values were 7.8% and 16.0% respectively. Model results and survival data were in good agreement from LWG to MCN. COMPASS predicted survivals of 0.77 and 0.69 for chin1pit and lgrStlhd, while the data controls' survivals were 0.79 and 0.68 - differences of 0.02 and 0.01 (respectively), nearly identical. However, from MCN to BON, COMPASS predicted survivals of 0.74 and 0.69 while the data controls' survivals were 0.47 and 0.53 respectively - differences of 0.27 and 0.16. In summary: travel and survival of chin1pit and lgrStlhd stocks were well modeled in the upper reaches. Fish in the lower reaches down through BON suffered unmodeled mortality, and/or passed BON undetected. A drop in bypass fraction and unmodeled mortality during the run could produce such patterns by shifting the observed median passage day to appear artificially early.
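The two timing metrics used above, MAD between predicted and observed cumulative arrival distributions and the difference in median passage day, can be sketched as follows (the toy distributions are invented, not the report's values):

```python
def mad_percent(pred_cdf, obs_cdf):
    """Mean absolute deviation (in %) between predicted and observed
    cumulative arrival distributions sampled on the same days."""
    return 100.0 * sum(abs(p - o) for p, o in zip(pred_cdf, obs_cdf)) / len(pred_cdf)

def median_passage_day(cdf):
    """First day index at which the cumulative proportion reaches 50%."""
    return next(i for i, v in enumerate(cdf) if v >= 0.5)

# Toy cumulative arrival distributions over a four-day season:
pred = [0.1, 0.4, 0.8, 1.0]
obs = [0.2, 0.5, 0.9, 1.0]
mad = mad_percent(pred, obs)                                  # average % error
day_diff = median_passage_day(pred) - median_passage_day(obs)  # + means late
```

A positive `day_diff` corresponds to the report's "predictions were N days later than observed".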
Adaptive MPC based on MIMO ARX-Laguerre model.
Ben Abdelwahed, Imen; Mbarek, Abdelkader; Bouzrara, Kais
2017-03-01
This paper proposes a method for synthesizing an adaptive predictive controller using a reduced-complexity model. This latter is given by the projection of the ARX model on Laguerre bases. The resulting model is entitled MIMO ARX-Laguerre and is characterized by an easy recursive representation. The adaptive predictive control law is computed based on multi-step-ahead finite-element predictors, identified directly from experimental input/output data. The model is tuned in each iteration by an online identification algorithm for both the model parameters and the Laguerre poles. The proposed approach avoids the time-consuming numerical optimization algorithms associated with most common linear predictive control strategies, which makes it suitable for real-time implementation. The method is used to synthesize and test, in numerical simulations, adaptive predictive controllers for the CSTR process benchmark. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.
Predictive model for risk of cesarean section in pregnant women after induction of labor.
Hernández-Martínez, Antonio; Pascual-Pedreño, Ana I; Baño-Garnés, Ana B; Melero-Jiménez, María R; Tenías-Burillo, José M; Molina-Alarcón, Milagros
2016-03-01
To develop a predictive model for risk of cesarean section in pregnant women after induction of labor. A retrospective cohort study was conducted of 861 induced labors during 2009, 2010, and 2011 at Hospital "La Mancha-Centro" in Alcázar de San Juan, Spain. Multivariate analysis was used with binary logistic regression and areas under the ROC curves to determine predictive ability. Two predictive models were created: model A predicts the outcome at the time the woman is admitted to the hospital (before the decision on the method of induction); and model B predicts the outcome at the time the woman is definitely admitted to the labor room. The predictive factors in the final model were: maternal height, body mass index, nulliparity, Bishop score, gestational age, macrosomia, gender of the fetus, and the gynecologist's overall cesarean section rate. The predictive ability of model A was 0.77 [95% confidence interval (CI) 0.73-0.80] and that of model B was 0.79 (95% CI 0.76-0.83). The predictive ability for pregnant women with previous cesarean section was 0.79 (95% CI 0.64-0.94) with model A and 0.80 (95% CI 0.64-0.96) with model B. For an estimated probability of cesarean section ≥80%, models A and B presented a positive likelihood ratio (+LR) for cesarean section of 22 and 20, respectively. Also, for an estimated probability of cesarean section ≤10%, models A and B presented a +LR for vaginal delivery of 13 and 6, respectively. These predictive models have good discriminative ability, both overall and for all subgroups studied. This tool can be useful in clinical practice, especially for pregnant women with previous cesarean section and diabetes.
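A positive likelihood ratio like the +LR of 22 reported above can be turned into a post-test probability through the usual odds update. A minimal sketch, with an assumed 30% baseline cesarean rate that is purely illustrative (not a figure from the study):

```python
def positive_lr(sensitivity, specificity):
    """Positive likelihood ratio: sensitivity / (1 - specificity)."""
    return sensitivity / (1.0 - specificity)

def post_test_probability(pre_test_prob, lr):
    """Update a pre-test probability with a likelihood ratio via odds."""
    pre_odds = pre_test_prob / (1.0 - pre_test_prob)
    post_odds = pre_odds * lr
    return post_odds / (1.0 + post_odds)

# With an assumed 30% baseline cesarean risk and the reported +LR of 22
# for model A at the >=80% predicted-risk threshold:
p = post_test_probability(0.30, 22)
```

A +LR of 22 is large: even from a modest baseline risk it pushes the post-test probability above 90%, which is why the authors highlight the ≥80% threshold as clinically useful.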
The c-index is not proper for the evaluation of $t$-year predicted risks.
Blanche, Paul; Kattan, Michael W; Gerds, Thomas A
2018-02-16
We show that the widely used concordance index for time to event outcome is not proper when interest is in predicting a $t$-year risk of an event, for example 10-year mortality. In the situation with a fixed prediction horizon, the concordance index can be higher for a misspecified model than for a correctly specified model. Impropriety happens because the concordance index assesses the order of the event times and not the order of the event status at the prediction horizon. The time-dependent area under the receiver operating characteristic curve does not have this problem and is proper in this context.
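The impropriety can be demonstrated on four uncensored toy subjects: two risk models that are identical at a 2.5-year horizon (both reach a fixed-horizon AUC of 1.0) nevertheless receive different concordance indices, because the c-index also scores the ordering of event times within each side of the horizon. A minimal sketch, with invented data and no censoring (the paper's setting is more general):

```python
def harrell_c(times, risks):
    """Concordance over all comparable pairs: does the higher-risk subject
    fail earlier? (Uncensored data only in this toy example.)"""
    num = den = 0.0
    for i in range(len(times)):
        for j in range(len(times)):
            if times[i] < times[j]:
                den += 1
                num += (risks[i] > risks[j]) + 0.5 * (risks[i] == risks[j])
    return num / den

def t_year_auc(times, risks, horizon):
    """AUC for the binary status 'event occurred by the horizon'."""
    status = [t <= horizon for t in times]
    pos = [r for r, s in zip(risks, status) if s]
    neg = [r for r, s in zip(risks, status) if not s]
    conc = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return conc / (len(pos) * len(neg))

# Four subjects with event times 1..4 years and a 2.5-year horizon:
times = [1, 2, 3, 4]
model_a = [10, 9, 2, 1]   # orders every event time perfectly
model_b = [9, 10, 1, 2]   # same 2.5-year discrimination, event times misordered
```

Both models separate the 2.5-year statuses perfectly, yet `model_b` is penalized by the c-index for misordering event times, information that is irrelevant to a fixed-horizon risk prediction.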
NASA Astrophysics Data System (ADS)
MacDonald, D. D.; Saleh, A.; Lee, S. K.; Azizi, O.; Rosas-Camacho, O.; Al-Marzooqi, A.; Taylor, M.
2011-04-01
The prediction of corrosion damage of canisters to experimentally inaccessible times is vitally important in assessing various concepts for the disposal of High Level Nuclear Waste. Such prediction can only be made using deterministic models, whose predictions are constrained by the time-invariant natural laws. In this paper, we describe the measurement of experimental electrochemical data that will allow the prediction of damage to the carbon steel overpack of the super container in Belgium's proposed Boom Clay repository by using the Point Defect Model (PDM). PDM parameter values are obtained by optimizing the model on experimental, wide-band electrochemical impedance spectroscopy data.
Healthy work revisited: do changes in time strain predict well-being?
Moen, Phyllis; Kelly, Erin L; Lam, Jack
2013-04-01
Building on Karasek and Theorell (R. Karasek & T. Theorell, 1990, Healthy work: Stress, productivity, and the reconstruction of working life, New York, NY: Basic Books), we theorized and tested the relationship between time strain (work-time demands and control) and seven self-reported health outcomes. We drew on survey data from 550 employees fielded before and 6 months after the implementation of an organizational intervention, the Results Only Work Environment (ROWE), in a white-collar organization. Cross-sectional (wave 1) models showed psychological time demands and time control measures were related to health outcomes in expected directions. The ROWE intervention did not predict changes in psychological time demands by wave 2, but did predict increased time control (a sense of time adequacy and schedule control). Statistical models revealed increases in psychological time demands and time adequacy predicted changes in positive (energy, mastery, psychological well-being, self-assessed health) and negative (emotional exhaustion, somatic symptoms, psychological distress) outcomes in expected directions, net of job and home demands and covariates. This study demonstrates the value of including time strain in investigations of the health effects of job conditions. Results encourage longitudinal models of change in psychological time demands as well as time control, along with the development and testing of interventions aimed at reducing time strain in different populations of workers.
Strategies for Near Real Time Estimates of Precipitable Water Vapor from GPS Ground Receivers
NASA Technical Reports Server (NTRS)
Y., Bar-Sever; Runge, T.; Kroger, P.
1995-01-01
GPS-based estimates of precipitable water vapor (PWV) may be useful in numerical weather models to improve short-term weather predictions. To be effective in numerical weather prediction models, GPS PWV estimates must be produced with sufficient accuracy in near real time. Several estimation strategies for the near real time processing of GPS data are investigated.
Sturm, Marc; Quinten, Sascha; Huber, Christian G.; Kohlbacher, Oliver
2007-01-01
We propose a new model for predicting the retention time of oligonucleotides. The model is based on ν support vector regression using features derived from base sequence and predicted secondary structure of oligonucleotides. Because of the secondary structure information, the model is applicable even at relatively low temperatures where the secondary structure is not suppressed by thermal denaturing. This makes the prediction of oligonucleotide retention time for arbitrary temperatures possible, provided that the target temperature lies within the temperature range of the training data. We describe different possibilities of feature calculation from base sequence and secondary structure, present the results and compare our model to existing models. PMID:17567619
Data-adaptive Harmonic Decomposition and Real-time Prediction of Arctic Sea Ice Extent
NASA Astrophysics Data System (ADS)
Kondrashov, Dmitri; Chekroun, Mickael; Ghil, Michael
2017-04-01
Decline in the Arctic sea ice extent (SIE) has profound socio-economic implications and is a focus of active scientific research. Of particular interest is prediction of SIE on subseasonal time scales, i.e. from early summer into fall, when sea ice coverage in the Arctic reaches its minimum. However, subseasonal forecasting of SIE is very challenging due to the high variability of the ocean and atmosphere over the Arctic in summer, as well as the shortness of the observational record and the inadequacies of physics-based models in simulating sea-ice dynamics. The Sea Ice Outlook (SIO) of the Sea Ice Prediction Network (SIPN, http://www.arcus.org/sipn) is a collaborative effort to facilitate and improve subseasonal prediction of September SIE by physics-based and data-driven statistical models. Data-adaptive Harmonic Decomposition (DAH) and Multilayer Stuart-Landau Model (MSLM) techniques [Chekroun and Kondrashov, 2017] have been successfully applied to nonlinear stochastic modeling, as well as retrospective and real-time forecasting, of the Multisensor Analyzed Sea Ice Extent (MASIE) dataset in four key Arctic regions. In particular, DAH-MSLM predictions outperformed most statistical models and physics-based models in the real-time 2016 SIO submissions. The key success factors are associated with the ability of DAH to disentangle the complex regional dynamics of MASIE by data-adaptive harmonic spatio-temporal patterns that reduce the data-driven modeling effort to elemental MSLMs stacked per frequency, with a fixed and small number of model coefficients to estimate.
Coiera, Enrico; Wang, Ying; Magrabi, Farah; Concha, Oscar Perez; Gallego, Blanca; Runciman, William
2014-05-21
Current prognostic models factor in patient- and disease-specific variables but do not consider cumulative risks of hospitalization over time. We developed risk models of the likelihood of death associated with cumulative exposure to hospitalization, based on time-varying risks of hospitalization over any given day, as well as day of the week. Model performance was evaluated alone, and in combination with simple disease-specific models. Patients admitted between 2000 and 2006 from 501 public and private hospitals in NSW, Australia were used for training, and 2007 data for evaluation. The impact of hospital care delivered over different days of the week and times of the day was modeled by separating hospitalization risk into 21 separate time periods (morning, day, night across the days of the week). Three models were developed to predict death up to 7 days post-discharge: 1) a simple background risk model using age and gender; 2) a time-varying risk model for exposure to hospitalization (admission time, days in hospital); and 3) disease-specific models (Charlson co-morbidity index, DRG). Combining these three generated a full model. Models were evaluated by accuracy, AUC, and Akaike and Bayesian information criteria. There was a clear diurnal rhythm to hospital mortality in the data set, peaking in the evening, as well as the well-known 'weekend effect' where mortality peaks with weekend admissions. Individual models had modest performance on the test data set (AUC 0.71, 0.79 and 0.79 respectively). The combined model, which included time-varying risk, however yielded an average AUC of 0.92. This model performed best for stays up to 7 days (93% of admissions), peaking at days 3 to 5 (AUC 0.94). Risks of hospitalization vary not just with the day of the week but also the time of the day, and can be used to make predictions about the cumulative risk of death associated with an individual's hospitalization. Combining disease-specific models with such time-varying estimates appears to result in robust predictive performance. Such risk exposure models should find utility both in enhancing standard prognostic models and in estimating the risk of continuation of hospitalization.
Hierarchical time series bottom-up approach for forecast the export value in Central Java
NASA Astrophysics Data System (ADS)
Mahkya, D. A.; Ulama, B. S.; Suhartono
2017-10-01
The purpose of this study is to obtain the best model for predicting the export value of Central Java using hierarchical time series. Export value is an injection variable in a country's economy: if a country's export value increases, its economy grows further. Appropriate modeling is therefore necessary to predict the export value, especially in Central Java. Export value in Central Java is grouped into 21 commodities, each with a different pattern. One approach that can be used for such time series is a hierarchical approach; here the bottom-up variant is used. The individual series at all levels are forecast using Autoregressive Integrated Moving Average (ARIMA), Radial Basis Function Neural Network (RBFNN), and hybrid ARIMA-RBFNN models. The best models are selected using the symmetric mean absolute percentage error (sMAPE). The results of the analysis show that, for the export value of Central Java, the bottom-up approach with hybrid ARIMA-RBFNN modeling can be used for long-term predictions, while for short- and medium-term predictions the bottom-up approach with RBFNN modeling can be used. Overall, the bottom-up approach with RBFNN modeling gives the best result.
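The bottom-up scheme, forecast each commodity series separately and then sum the forecasts to obtain the total, can be sketched as follows. The drift forecaster is a deliberately simple stand-in for the ARIMA/RBFNN/hybrid models actually compared, and the two commodity series are invented:

```python
def naive_drift(series):
    """Placeholder one-step forecast: last value plus the average step.
    Stands in for the ARIMA / RBFNN / hybrid models used in the study."""
    steps = [b - a for a, b in zip(series, series[1:])]
    return series[-1] + sum(steps) / len(steps)

def bottom_up(series_by_commodity, forecast_fn=naive_drift):
    """Forecast every bottom-level (commodity) series, then aggregate the
    forecasts upward to obtain the total export-value forecast."""
    forecasts = {k: forecast_fn(s) for k, s in series_by_commodity.items()}
    return forecasts, sum(forecasts.values())

def smape(actual, predicted):
    """Symmetric mean absolute percentage error, in %."""
    return 100.0 * sum(2 * abs(p - a) / (abs(a) + abs(p))
                       for a, p in zip(actual, predicted)) / len(actual)

# Two illustrative commodity series (export values per period):
series = {"textiles": [1.0, 2.0, 3.0], "furniture": [10.0, 20.0, 30.0]}
per_commodity, total_forecast = bottom_up(series)
```

The appeal of bottom-up is coherence: the total forecast is by construction the sum of the commodity forecasts, so the 21-commodity breakdown and the Central Java total never disagree.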
Application of Grey Model GM(1, 1) to Ultra Short-Term Predictions of Universal Time
NASA Astrophysics Data System (ADS)
Lei, Yu; Guo, Min; Zhao, Danning; Cai, Hongbing; Hu, Dandan
2016-03-01
A mathematical model known as the one-order one-variable grey differential equation model GM(1, 1) has been employed successfully herein for ultra short-term (<10 days) predictions of universal time (UT1-UTC). The results of the predictions are analyzed and compared with those obtained by other methods. It is shown that the accuracy of the predictions is comparable with that obtained by other prediction methods. The proposed method is able to yield an exact prediction even though only a few observations are provided; it is therefore very valuable for small datasets, since traditional methods, e.g., least-squares (LS) extrapolation, require a longer data span to make a good forecast. In addition, these results can be obtained without making any assumption about the original dataset, and the method is thus highly reliable. Another advantage is that the developed method is easy to use. All this reveals a great potential of the GM(1, 1) model for UT1-UTC predictions.
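A minimal GM(1, 1) implementation follows the standard recipe: accumulate the series (the 1-AGO), fit the development coefficient a and grey input b by least squares over the background values, and invert the exponential time-response function. The toy series below is assumed for illustration; UT1-UTC data are not reproduced here:

```python
import math

def gm11_forecast(x0, steps=1):
    """GM(1,1): fit the one-order one-variable grey model to series x0
    and forecast `steps` values ahead."""
    n = len(x0)
    x1, s = [], 0.0
    for v in x0:                                  # 1-AGO: accumulated series
        s += v
        x1.append(s)
    z = [0.5 * (x1[k] + x1[k - 1]) for k in range(1, n)]   # background values
    y = x0[1:]
    # Least squares for the grey equation x0(k) + a*z(k) = b (2x2 normal eqns)
    m = len(z)
    sz, sy = sum(z), sum(y)
    szz = sum(zi * zi for zi in z)
    szy = sum(zi * yi for zi, yi in zip(z, y))
    det = m * szz - sz * sz
    a = (sz * sy - m * szy) / det                 # development coefficient
    b = (szz * sy - sz * szy) / det               # grey input
    # Time response: x1_hat(k) = (x0[0] - b/a) * exp(-a*k) + b/a
    x1_hat = lambda k: (x0[0] - b / a) * math.exp(-a * k) + b / a
    return [x1_hat(n + i) - x1_hat(n + i - 1) for i in range(steps)]

# An exponentially growing toy series (20% growth), the regime GM(1,1) suits;
# the true continuation is 24.8832.
series = [10.0, 12.0, 14.4, 17.28, 20.736]
pred = gm11_forecast(series)[0]
```

Note how little data the fit needs, five points here, which is the abstract's point about small datasets versus least-squares extrapolation.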
NASA Astrophysics Data System (ADS)
Yao, Bing; Yang, Hui
2016-12-01
This paper presents a novel physics-driven spatiotemporal regularization (STRE) method for high-dimensional predictive modeling in complex healthcare systems. This model not only captures the physics-based interrelationship between time-varying explanatory and response variables that are distributed in the space, but also addresses the spatial and temporal regularizations to improve the prediction performance. The STRE model is implemented to predict the time-varying distribution of electric potentials on the heart surface based on the electrocardiogram (ECG) data from the distributed sensor network placed on the body surface. The model performance is evaluated and validated in both a simulated two-sphere geometry and a realistic torso-heart geometry. Experimental results show that the STRE model significantly outperforms other regularization models that are widely used in current practice such as Tikhonov zero-order, Tikhonov first-order and L1 first-order regularization methods.
Developing a predictive tropospheric ozone model for Tabriz
NASA Astrophysics Data System (ADS)
Khatibi, Rahman; Naghipour, Leila; Ghorbani, Mohammad A.; Smith, Michael S.; Karimi, Vahid; Farhoudi, Reza; Delafrouz, Hadi; Arvanaghi, Hadi
2013-04-01
Predictive ozone models are becoming indispensable tools by providing a capability for pollution alerts to serve people who are vulnerable to the risks. We have developed a tropospheric ozone prediction capability for Tabriz, Iran, using the following five modeling strategies: three regression-type methods - Multiple Linear Regression (MLR), Artificial Neural Networks (ANNs), and Gene Expression Programming (GEP) - and two auto-regression-type models - Nonlinear Local Prediction (NLP), implementing chaos theory, and Auto-Regressive Integrated Moving Average (ARIMA) models. The regression-type strategies explain the data in terms of temperature, solar radiation, dew point temperature, and wind speed; the auto-regression-type strategies regress present ozone values on their past values. The ozone time series are available at various time intervals, including hourly intervals, from August 2010 to March 2011. The results for the MLR, ANN and GEP models are not overly good, but those produced by NLP and ARIMA are promising for establishing a forecasting capability.
Estimating epidemic arrival times using linear spreading theory
NASA Astrophysics Data System (ADS)
Chen, Lawrence M.; Holzer, Matt; Shapiro, Anne
2018-01-01
We study the dynamics of a spatially structured model of worldwide epidemics and formulate predictions for arrival times of the disease at any city in the network. The model is composed of a system of ordinary differential equations describing a meta-population susceptible-infected-recovered compartmental model defined on a network where each node represents a city and the edges represent the flight paths connecting cities. Making use of the linear determinacy of the system, we consider spreading speeds and arrival times in the system linearized about the unstable disease-free state and compare these to arrival times in the nonlinear system. Two predictions are presented. The first is based upon expansion of the heat kernel for the linearized system. The second assumes that the dominant transmission pathway between any two cities can be approximated by a one-dimensional lattice or a homogeneous tree and gives a uniform prediction for arrival times independent of the specific network features. We test these predictions on a real network describing worldwide airline traffic.
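The flavor of the arrival-time question can be sketched with a toy meta-population SIR model on a chain of cities, the one-dimensional-lattice case the second prediction idealizes. The network, rates, seeding and detection threshold below are all illustrative assumptions, not the paper's calibrated setup:

```python
def simulate_chain(n_cities=5, beta=0.5, gamma=0.2, mobility=0.01,
                   threshold=1e-3, dt=0.05, t_max=400.0):
    """Meta-population SIR on a chain of cities with diffusive coupling.
    Returns each city's arrival time: first crossing of `threshold` by its
    infected fraction."""
    S = [1.0] * n_cities
    I = [0.0] * n_cities
    I[0], S[0] = 1e-4, 1.0 - 1e-4             # seed the outbreak in city 0
    arrival = [None] * n_cities
    t = 0.0
    while t < t_max and any(a is None for a in arrival):
        flux = [0.0] * n_cities                # travel along chain edges
        for i in range(n_cities - 1):
            f = mobility * (I[i] - I[i + 1])
            flux[i] -= f
            flux[i + 1] += f
        for i in range(n_cities):              # forward-Euler SIR update
            new_inf = beta * S[i] * I[i]
            S[i] += -new_inf * dt
            I[i] += (new_inf - gamma * I[i] + flux[i]) * dt
            if arrival[i] is None and I[i] >= threshold:
                arrival[i] = t
        t += dt
    return arrival

arrivals = simulate_chain()
```

On a chain, arrival times grow roughly linearly with distance from the seed city, the constant-front-speed behavior that linear spreading theory predicts.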
Predictive modeling of dynamic fracture growth in brittle materials with machine learning
DOE Office of Scientific and Technical Information (OSTI.GOV)
Moore, Bryan A.; Rougier, Esteban; O’Malley, Daniel
We use simulation data from a high fidelity Finite-Discrete Element Model to build an efficient Machine Learning (ML) approach to predict fracture growth and coalescence. Our goal is for the ML approach to be used as an emulator in place of the computationally intensive high fidelity models in an uncertainty quantification framework where thousands of forward runs are required. The failure of materials with various fracture configurations (size, orientation and the number of initial cracks) are explored and used as data to train our ML model. This novel approach has shown promise in predicting spatial (path to failure) and temporal (time to failure) aspects of brittle material failure. Predictions of where dominant fracture paths formed within a material were ~85% accurate and the time of material failure deviated from the actual failure time by an average of ~16%. Additionally, the ML model achieves a reduction in computational cost by multiple orders of magnitude.
Predictive modeling of dynamic fracture growth in brittle materials with machine learning
Moore, Bryan A.; Rougier, Esteban; O’Malley, Daniel; ...
2018-02-22
We use simulation data from a high fidelity Finite-Discrete Element Model to build an efficient Machine Learning (ML) approach to predict fracture growth and coalescence. Our goal is for the ML approach to be used as an emulator in place of the computationally intensive high fidelity models in an uncertainty quantification framework where thousands of forward runs are required. The failure of materials with various fracture configurations (size, orientation and the number of initial cracks) are explored and used as data to train our ML model. This novel approach has shown promise in predicting spatial (path to failure) and temporal (time to failure) aspects of brittle material failure. Predictions of where dominant fracture paths formed within a material were ~85% accurate and the time of material failure deviated from the actual failure time by an average of ~16%. Additionally, the ML model achieves a reduction in computational cost by multiple orders of magnitude.
Sampling design optimisation for rainfall prediction using a non-stationary geostatistical model
NASA Astrophysics Data System (ADS)
Wadoux, Alexandre M. J.-C.; Brus, Dick J.; Rico-Ramirez, Miguel A.; Heuvelink, Gerard B. M.
2017-09-01
The accuracy of spatial predictions of rainfall by merging rain-gauge and radar data is partly determined by the sampling design of the rain-gauge network. Optimising the locations of the rain-gauges may increase the accuracy of the predictions. Existing spatial sampling design optimisation methods are based on minimisation of the spatially averaged prediction error variance under the assumption of intrinsic stationarity. Over the past years, substantial progress has been made to deal with non-stationary spatial processes in kriging. Various well-documented geostatistical models relax the assumption of stationarity in the mean, while recent studies show the importance of considering non-stationarity in the variance for environmental processes occurring in complex landscapes. We optimised the sampling locations of rain-gauges using an extension of the Kriging with External Drift (KED) model for prediction of rainfall fields. The model incorporates both non-stationarity in the mean and in the variance, which are modelled as functions of external covariates such as radar imagery, distance to radar station and radar beam blockage. Spatial predictions are made repeatedly over time, each time recalibrating the model. The space-time averaged KED variance was minimised by Spatial Simulated Annealing (SSA). The methodology was tested using a case study predicting daily rainfall in the north of England for a one-year period. Results show that (i) the proposed non-stationary variance model outperforms the stationary variance model, and (ii) a small but significant decrease of the rainfall prediction error variance is obtained with the optimised rain-gauge network. In particular, it pays off to place rain-gauges at locations where the radar imagery is inaccurate, while keeping the distribution over the study area sufficiently uniform.
On the Selection of Models for Runtime Prediction of System Resources
NASA Astrophysics Data System (ADS)
Casolari, Sara; Colajanni, Michele
Applications and services delivered through large Internet data centers are now feasible thanks to network and server improvements, but also to virtualization, dynamic allocation of resources and dynamic migrations. The large number of servers and resources involved in these systems requires autonomic management strategies, because no team of human administrators could clone and migrate virtual machines in time, or re-distribute and re-map the underlying hardware. Most autonomic management decisions rest on the system's ability to evaluate its own global behavior and change it when the evaluation indicates that it is not accomplishing what it was intended to do, or that some relevant anomalies are occurring. Decision algorithms must satisfy constraints at different time scales. In this chapter we are interested in short-term contexts, where runtime prediction models work on time series coming from samples of monitored system resources, such as disk, CPU and network utilization. In such environments, we have to address two main issues. First, the original time series have limited predictability because the measurements are noisy, owing to system instability, variable offered load, heavy-tailed distributions, and hardware and software interactions. Second, there are no existing criteria to help choose a suitable prediction model and its parameters so as to guarantee adequate prediction quality. In this chapter, we evaluate the impact that different choices of prediction model have on different time series, and we suggest how to treat input data and whether it is convenient to choose the parameters of a prediction model in a static or dynamic way. Our conclusions are supported by a large set of analyses on realistic and synthetic data traces.
Financial Distress Prediction Using Discrete-time Hazard Model and Rating Transition Matrix Approach
NASA Astrophysics Data System (ADS)
Tsai, Bi-Huei; Chang, Chih-Huei
2009-08-01
Previous studies used a constant cut-off indicator to distinguish distressed firms from non-distressed ones in one-stage prediction models. However, the distressed cut-off indicator must shift with economic prosperity rather than remain fixed over time. This study focuses on Taiwanese listed firms and develops financial distress prediction models based on a two-stage method. First, it employs firm-specific financial ratios and market factors to measure the probability of financial distress using discrete-time hazard models. Second, it applies the rating transition matrix approach to macroeconomic factors to determine the distressed cut-off indicator. The prediction models are developed using a training sample from 1987 to 2004, and their accuracy is compared on a test sample from 2005 to 2007. For the one-stage prediction model, the model incorporating macroeconomic factors does not perform better than the one without them, suggesting that accuracy is not improved by pooling firm-specific and macroeconomic factors together in one-stage models. In regard to the two-stage models, the negative credit cycle index reflects the worse economic conditions during the test period, so the distressed cut-off point is adjusted upward accordingly. When the two-stage models use this adjusted cut-off point to discriminate distressed from non-distressed firms, their misclassification error is lower than that of the one-stage models. The two-stage models presented in this paper thus have incremental usefulness in predicting financial distress.
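The first-stage discrete-time hazard model can be sketched as follows; the logistic link and the coefficients are illustrative assumptions, not the fitted Taiwanese-firm model:

```python
import math

def hazard(x, beta0, beta1):
    # per-period conditional probability of distress, given survival so far,
    # as a logistic function of a (scalar) covariate x
    return 1.0 / (1.0 + math.exp(-(beta0 + beta1 * x)))

def distress_prob(covariates, beta0, beta1):
    # probability of distress at some point over the horizon:
    # 1 minus the product of per-period survival probabilities
    surv = 1.0
    for x in covariates:
        surv *= 1.0 - hazard(x, beta0, beta1)
    return 1.0 - surv

# hypothetical covariate path over three periods
p = distress_prob([0.2, 0.5, 0.9], beta0=-3.0, beta1=2.0)
```

A two-stage scheme would then compare such probabilities against a cut-off shifted by the credit cycle index rather than a fixed threshold.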
Artificial intelligence: a new approach for prescription and monitoring of hemodialysis therapy.
Akl, A I; Sobh, M A; Enab, Y M; Tattersall, J
2001-12-01
The effect of dialysis on patients is conventionally predicted using a formal mathematical model. This approach requires many assumptions about the processes involved, and validating them may be difficult; the validity of dialysis urea modeling using a formal mathematical model has been challenged. Artificial intelligence using neural networks (NNs) has been used to solve complex problems without needing a mathematical model or an understanding of the mechanisms involved. In this study, we applied an NN model to study and predict concentrations of urea during a hemodialysis session. We measured blood concentrations of urea, patient weight, and total urea removal by direct dialysate quantification (DDQ) at 30-minute intervals during the session in 15 chronic hemodialysis patients. The NN model was trained to recognize the evolution of measured urea concentrations and was subsequently able to predict the hemodialysis session time needed to reach a target solute removal index (SRI) in patients not previously studied by the NN model (another 15 chronic hemodialysis patients). Comparing the NN model with the DDQ model, the prediction error was 10.9%, with no significant difference between predicted and measured total urea nitrogen (UN) removal. NN predictions of time showed no significant difference from the actual intervals needed to reach the same SRI level under the same patient conditions, except for the prediction of SRI at the first 30-minute interval, which differed significantly (P = 0.001). This reflects the sensitivity of the NN model to what is called patient clearance time; here the prediction error was 8.3%. From our results, we conclude that artificial intelligence applications in urea kinetics can inform intradialysis profiling according to individual clinical needs.
In theory, this approach can be extended easily to other solutes, making the NN model a step forward to achieving artificial-intelligent dialysis control.
Anwar, Mohammad Y; Lewnard, Joseph A; Parikh, Sunil; Pitzer, Virginia E
2016-11-22
Malaria remains endemic in Afghanistan. National control and prevention strategies would be greatly enhanced through a better ability to forecast future trends in disease incidence. It is, therefore, of interest to develop a predictive tool for malaria patterns based on the current passive and affordable surveillance system in this resource-limited region. This study employs data from Ministry of Public Health monthly reports from January 2005 to September 2015. Malaria incidence in Afghanistan was forecasted using autoregressive integrated moving average (ARIMA) models in order to build a predictive tool for malaria surveillance. Environmental and climate data were incorporated to assess whether they improve predictive power of models. Two models were identified, each appropriate for different time horizons. For near-term forecasts, malaria incidence can be predicted based on the number of cases in the four previous months and 12 months prior (Model 1); for longer-term prediction, malaria incidence can be predicted using the rates 1 and 12 months prior (Model 2). Next, climate and environmental variables were incorporated to assess whether the predictive power of proposed models could be improved. Enhanced vegetation index was found to have increased the predictive accuracy of longer-term forecasts. Results indicate ARIMA models can be applied to forecast malaria patterns in Afghanistan, complementing current surveillance systems. The models provide a means to better understand malaria dynamics in a resource-limited context with minimal data input, yielding forecasts that can be used for public health planning at the national level.
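Model 2 above forecasts incidence from the rates 1 and 12 months prior. A minimal sketch of that lag structure (the AR coefficient `phi` is an illustrative assumption, not the fitted ARIMA model):

```python
def forecast_next(series, phi=0.6, season=12):
    # Model-2-style sketch: take the value from 12 months before the target
    # month as the seasonal anchor, then add an AR(1) correction based on
    # last month's deviation from its own seasonal value
    seasonal = series[-season]                    # 12 months before target
    deviation = series[-1] - series[-1 - season]  # last month vs. its season
    return seasonal + phi * deviation
```

With a purely seasonal series the deviation term vanishes and the forecast reduces to the seasonal value, which is the behaviour a lag-1/lag-12 model should exhibit.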
Modelling obesity trends in Australia: unravelling the past and predicting the future.
Hayes, A J; Lung, T W C; Bauman, A; Howard, K
2017-01-01
Modelling is increasingly being used to predict the epidemiology of obesity progression and its consequences. The aims of this study were: (a) to present and validate a model for prediction of obesity among Australian adults and (b) to use the model to project the prevalence of obesity and severe obesity by 2025. Individual-level simulation was combined with survey estimation techniques to model the changing population body mass index (BMI) distribution over time. The model input population was derived from a nationally representative survey in 1995, representing over 12 million adults. Simulations were run for 30 years. The model was validated retrospectively and then used to predict obesity and severe obesity by 2025 among different aged cohorts and at the whole-population level. The changing BMI distribution over time was well predicted by the model, and the projected prevalence of weight-status groups agreed with population-level data in 2008, 2012 and 2014. The model predicts more growth in obesity among younger than older adult cohorts. Projections at the whole-population level were that healthy weight will decline, overweight will remain steady, but obesity and severe obesity prevalence will continue to increase beyond 2016. Adult obesity prevalence was projected to increase from 19% in 1995 to 35% by 2025. Severe obesity (BMI > 35), only around 5% in 1995, was projected to reach 13% by 2025, two to three times the 1995 level. The projected rise in obesity and severe obesity will have more substantial cost and healthcare system implications than in previous decades. Having a robust epidemiological model is key to predicting these long-term costs and health outcomes.
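The individual-level simulation can be sketched as a BMI random walk with a mean annual gain; the drift, noise level and starting distribution below are illustrative assumptions, not the survey-calibrated model:

```python
import random

def simulate_prevalence(bmis, years, annual_gain=0.15, sd=0.3,
                        cutoff=30.0, seed=1):
    # advance each individual's BMI by a mean annual increment plus noise,
    # then report the fraction at or above the obesity cutoff
    rng = random.Random(seed)
    bmis = list(bmis)
    for _ in range(years):
        bmis = [b + annual_gain + rng.gauss(0.0, sd) for b in bmis]
    return sum(b >= cutoff for b in bmis) / len(bmis)

rng0 = random.Random(0)
start = [rng0.gauss(26.0, 4.0) for _ in range(2000)]   # hypothetical 1995 cohort
p_start = sum(b >= 30.0 for b in start) / len(start)
p_2025 = simulate_prevalence(start, years=30)
```

The real model additionally reweights the simulated individuals with survey estimation techniques so that the cohort stays nationally representative as it ages.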
HIGH TIME-RESOLVED COMPARISONS FOR IN-DEPTH PROBING OF CMAQ FINE-PARTICLES AND GAS PREDICTIONS
Model evaluation is important to develop confidence in models and develop an understanding of their predictions. Most comparisons in the U.S. involve time-integrated measurements of 24-hours or longer. Comparisons against continuous or semi-continuous particle and gaseous measur...
Thermal regimes are a critical factor in models predicting effects of watershed management activities on fish habitat suitability. We have assembled a database of lotic temperature time series across New England (> 7000 station-year combinations) from state and Federal data s...
Real-time Adaptive Control Using Neural Generalized Predictive Control
NASA Technical Reports Server (NTRS)
Haley, Pam; Soloway, Don; Gold, Brian
1999-01-01
The objective of this paper is to demonstrate the feasibility of a Nonlinear Generalized Predictive Control algorithm by showing real-time adaptive control on a plant with relatively fast time-constants. Generalized Predictive Control has classically been used in process control, where linear control laws were formulated for plants with relatively slow time-constants. The plant of interest here is a magnetic levitation device that is nonlinear and open-loop unstable. In this application, the reference model of the plant is a neural network with a nominal linear model embedded in the network weights. Control based on the linear model provides initial stability at the beginning of network training. Because a neural network is used, the control laws are nonlinear, and online adaptation of the model is possible to capture unmodeled or time-varying dynamics. Newton-Raphson is the minimization algorithm. Newton-Raphson requires calculation of the Hessian, but even with this computational expense the small number of iterations required makes it a viable algorithm for real-time control.
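The Newton-Raphson minimisation at the heart of the controller can be sketched on a scalar linear plant; the plant gains and cost weights are illustrative assumptions (the paper's plant model is a nonlinear neural network):

```python
def newton_control(y0, r, a=0.8, b=0.5, lam=0.01, u0=0.0, iters=5):
    # minimise J(u) = (a*y0 + b*u - r)^2 + lam*u^2 by Newton-Raphson;
    # J is quadratic in u here, so the iteration converges in one step
    u = u0
    for _ in range(iters):
        y = a * y0 + b * u                    # one-step plant prediction
        grad = 2.0 * b * (y - r) + 2.0 * lam * u
        hess = 2.0 * b * b + 2.0 * lam        # scalar "Hessian"
        u -= grad / hess
    return u

u = newton_control(y0=1.0, r=2.0)
```

With a neural-network plant the gradient and Hessian come from back-propagating through the network over the prediction horizon, but the update has the same Newton-Raphson form.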
Haredasht, S Amirpour; Taylor, C J; Maes, P; Verstraeten, W W; Clement, J; Barrios, M; Lagrou, K; Van Ranst, M; Coppin, P; Berckmans, D; Aerts, J-M
2013-11-01
Wildlife-originated zoonotic diseases in general are a major contributor to emerging infectious diseases. Hantaviruses more specifically cause thousands of human disease cases annually worldwide, while understanding and predicting human hantavirus epidemics pose numerous unsolved challenges. Nephropathia epidemica (NE) is a human infection caused by Puumala virus, which is naturally carried and shed by bank voles (Myodes glareolus). The objective of this study was to develop a method for model-based prediction of NE epidemics 3 months ahead. Two data sets, concerning NE cases in Finland and Belgium, were utilized to develop and test the models. We selected the most relevant inputs from all the available data for use in a dynamic linear regression (DLR) model. The number of NE cases in Finland was modelled using data from 1996 to 2008. The NE cases were predicted from time series of average monthly air temperature (°C) and a bank-vole trapping index using a DLR model; the trapping-index data were interpolated using a related dynamic harmonic regression (DHR) model. Both the DLR and DHR models used time-varying parameters and were based on a unified state-space estimation framework. For the Belgian case, no time series of bank-vole population dynamics were available. Several studies, however, have suggested that the bank-vole population is related to variation in the seed production of beech and oak trees in Northern Europe. Therefore, the NE occurrence pattern in Belgium was predicted with a DLR model using remotely sensed phenology parameters of broad-leaved forests, together with oak and beech seed categories and average monthly air temperature (°C), using data from 2001 to 2009.
Our results suggest that even without any knowledge about hantavirus dynamics in the host population, the time variation in NE outbreaks in Finland could be predicted 3 months ahead with a 34% mean relative prediction error (MRPE). This took into account solely the population dynamics of the carrier species (bank voles). The time series analysis also revealed that climate change, as represented by the vegetation index, changes in forest phenology derived from satellite images and directly measured air temperature, may affect the mechanics of NE transmission. NE outbreaks in Belgium were predicted 3 months ahead with a 40% MRPE, based only on the climatological and vegetation data, in this case, without any knowledge of the bank vole's population dynamics. In this research, we demonstrated that NE outbreaks can be predicted using climate and vegetation data or the bank vole's population dynamics, by using dynamic data-based models with time-varying parameters. Such a predictive modelling approach might be used as a step towards the development of new tools for the prevention of future NE outbreaks. © 2012 Blackwell Verlag GmbH.
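A dynamic linear regression with time-varying parameters can be sketched as a scalar Kalman filter in which the regression coefficient follows a random walk; the noise variances and data below are illustrative assumptions, not the NE models:

```python
def dlr_filter(xs, ys, q=0.01, r=0.5):
    # scalar dynamic linear regression: y_t = beta_t * x_t + noise, with
    # beta_t a random walk; estimated recursively by a scalar Kalman filter
    beta, p = 0.0, 1.0                    # state estimate and its variance
    estimates = []
    for x, y in zip(xs, ys):
        p += q                            # predict: random-walk variance grows
        k = p * x / (x * x * p + r)       # Kalman gain
        beta += k * (y - beta * x)        # update with the prediction error
        p *= (1.0 - k * x)
        estimates.append(beta)
    return estimates

xs = [1.0, 2.0, 1.5, 3.0, 2.5, 1.0, 2.0, 3.0, 1.5, 2.5] * 5
ys = [2.0 * x for x in xs]                # noiseless data, true beta = 2
betas = dlr_filter(xs, ys)
```

With several covariates and harmonic terms the state becomes a vector and the same recursion runs in matrix form, which is the unified state-space framework the abstract refers to.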
Federal Register 2010, 2011, 2012, 2013, 2014
2013-11-25
... public. Mathematical and statistical models can be useful in predicting the timing and impact of the... applying any mathematical, statistical, or other approach to predictive modeling. This challenge will... Services (HHS) region level(s) in the United States by developing mathematical and statistical models that...
Real-time assessments of water quality: expanding nowcasting throughout the Great Lakes
2013-01-01
Nowcasts are systems that inform the public of current bacterial water-quality conditions at beaches on the basis of predictive models. During 2010–12, the U.S. Geological Survey (USGS) worked with 23 local and State agencies to improve existing operational beach nowcast systems at 4 beaches and expand the use of predictive models in nowcasts at an additional 45 beaches throughout the Great Lakes. The predictive models were specific to each beach, and the best model for each beach was based on a unique combination of environmental and water-quality explanatory variables. The variables used most often in models to predict Escherichia coli (E. coli) concentrations or the probability of exceeding a State recreational water-quality standard included turbidity, day of the year, wave height, wind direction and speed, antecedent rainfall for various time periods, and change in lake level over 24 hours. During validation of 42 beach models during 2012, the models performed better than the current method to assess recreational water quality (previous day's E. coli concentration). The USGS will continue to work with local agencies to improve nowcast predictions, enable technology transfer of predictive model development procedures, and implement more operational systems during 2013 and beyond.
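A nowcast of the kind described issues the probability of exceeding the recreational standard from a fitted logistic model. The sketch below uses hypothetical coefficients and a reduced predictor set (turbidity, wave height, antecedent rainfall), not any beach's actual model:

```python
import math

def exceedance_probability(turbidity, wave_height, rain_48h,
                           b0=-4.0, b=(0.03, 1.2, 0.5)):
    # hypothetical fitted logistic model: returns the probability that the
    # E. coli concentration exceeds the recreational water-quality standard
    z = b0 + b[0] * turbidity + b[1] * wave_height + b[2] * rain_48h
    return 1.0 / (1.0 + math.exp(-z))

calm  = exceedance_probability(turbidity=5.0,  wave_height=0.2, rain_48h=0.0)
storm = exceedance_probability(turbidity=80.0, wave_height=1.5, rain_48h=3.0)
```

The study's point of comparison is the persistence method (yesterday's E. coli concentration), which such a model out-performed during the 2012 validation.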
Product component genealogy modeling and field-failure prediction
DOE Office of Scientific and Technical Information (OSTI.GOV)
King, Caleb; Hong, Yili; Meeker, William Q.
2016-04-13
Many industrial products consist of multiple components that are necessary for system operation. There is an abundance of literature on modeling the lifetime of such components through competing risks models. During the life-cycle of a product, it is common for there to be incremental design changes to improve reliability, to reduce costs, or due to changes in availability of certain part numbers. These changes can affect product reliability but are often ignored in system lifetime modeling. By incorporating this information about changes in part numbers over time (information that is readily available in most production databases), better accuracy can be achieved in predicting time to failure, thus yielding more accurate field-failure predictions. This paper presents methods for estimating parameters and predictions for this generational model and a comparison with existing methods through the use of simulation. Our results indicate that the generational model has important practical advantages and outperforms the existing methods in predicting field failures.
NASA Astrophysics Data System (ADS)
Tiebin, Wu; Yunlian, Liu; Xinjun, Li; Yi, Yu; Bin, Zhang
2018-06-01
To address the difficulty of predicting sintered ore quality, a hybrid prediction model is established that combines mechanism models of sintering with time-weighted error compensation based on the extreme learning machine (ELM). First, mechanism models of drum index, total iron, and alkalinity are constructed according to the chemical reaction mechanism and conservation of matter in the sintering process. Because the mechanism models simplify the process, they cannot capture its strong nonlinearity, so errors are inevitable. A time-weighted ELM-based error compensation model is therefore established. Simulation results verify that the hybrid model is highly accurate and meets the requirements of industrial applications.
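The error-compensation idea can be sketched with a simple exponentially time-weighted mean of recent mechanism-model errors standing in for the time-weighted ELM; the decay factor and numbers are illustrative assumptions:

```python
def compensated_prediction(mechanism_pred, past_errors, decay=0.7):
    # time-weighted error compensation: recent mechanism-model errors get
    # exponentially larger weights, and their weighted mean is added to the
    # raw mechanism-model prediction as a correction
    if not past_errors:
        return mechanism_pred
    n = len(past_errors)
    weights = [decay ** (n - 1 - i) for i in range(n)]   # newest weight = 1
    correction = sum(w * e for w, e in zip(weights, past_errors)) / sum(weights)
    return mechanism_pred + correction

# hypothetical drum-index prediction with a drifting positive error
pred = compensated_prediction(56.0, past_errors=[1.0, 1.5, 2.0])
```

In the paper the correction is learned by an ELM rather than a fixed weighted mean, but the time-weighting of recent errors is the common idea.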
Real-time predictive seasonal influenza model in Catalonia, Spain
Basile, Luca; Oviedo de la Fuente, Manuel; Torner, Nuria; Martínez, Ana; Jané, Mireia
2018-01-01
Influenza surveillance is critical to monitoring the situation during epidemic seasons and predictive mathematic models may aid the early detection of epidemic patterns. The objective of this study was to design a real-time spatial predictive model of ILI (Influenza Like Illness) incidence rate in Catalonia using one- and two-week forecasts. The available data sources used to select explanatory variables to include in the model were the statutory reporting disease system and the sentinel surveillance system in Catalonia for influenza incidence rates, the official climate service in Catalonia for meteorological data, laboratory data and Google Flu Trend. Time series for every explanatory variable with data from the last 4 seasons (from 2010–2011 to 2013–2014) was created. A pilot test was conducted during the 2014–2015 season to select the explanatory variables to be included in the model and the type of model to be applied. During the 2015–2016 season a real-time model was applied weekly, obtaining the intensity level and predicted incidence rates with 95% confidence levels one and two weeks away for each health region. At the end of the season, the confidence interval success rate (CISR) and intensity level success rate (ILSR) were analysed. For the 2015–2016 season a CISR of 85.3% at one week and 87.1% at two weeks and an ILSR of 82.9% and 82% were observed, respectively. The model described is a useful tool although it is hard to evaluate due to uncertainty. The accuracy of prediction at one and two weeks was above 80% globally, but was lower during the peak epidemic period. In order to improve the predictive power, new explanatory variables should be included. PMID:29513710
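The confidence interval success rate (CISR) used to evaluate the model is simply the share of observed incidence rates that fall inside their predicted intervals; a sketch with made-up numbers:

```python
def interval_success_rate(observed, intervals):
    # fraction of observed rates lying inside their predicted 95% confidence
    # intervals: the CISR reported in the evaluation above
    hits = sum(lo <= y <= hi for y, (lo, hi) in zip(observed, intervals))
    return hits / len(observed)

obs = [3.1, 4.0, 5.2, 6.8]                       # hypothetical weekly ILI rates
ci = [(2.5, 3.5), (3.0, 4.5), (5.5, 6.0), (6.0, 7.5)]
rate = interval_success_rate(obs, ci)
```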
Challenges in Real-Time Prediction of Infectious Disease: A Case Study of Dengue in Thailand
Reich, Nicholas G; Lauer, Stephen A; Sakrejda, Krzysztof; Iamsirithaworn, Sopon; Hinjoy, Soawapak; Suangtho, Paphanij; Suthachana, Suthanun; Clapham, Hannah E; Salje, Henrik; Cummings, Derek A T; Lessler, Justin
2016-06-01
Epidemics of communicable diseases place a huge burden on public health infrastructures across the world. Producing accurate and actionable forecasts of infectious disease incidence at short and long time scales will improve public health response to outbreaks. However, scientists and public health officials face many obstacles in trying to create such real-time forecasts of infectious disease incidence. Dengue is a mosquito-borne virus that annually infects over 400 million people worldwide. We developed a real-time forecasting model for dengue hemorrhagic fever in the 77 provinces of Thailand. We created a practical computational infrastructure that generated multi-step predictions of dengue incidence in Thai provinces every two weeks throughout 2014. These predictions show mixed performance across provinces, out-performing seasonal baseline models in over half of provinces at a 1.5 month horizon. Additionally, to assess the degree to which delays in case reporting make long-range prediction a challenging task, we compared the performance of our real-time predictions with predictions made with fully reported data. This paper provides valuable lessons for the implementation of real-time predictions in the context of public health decision making. PMID:27304062
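Out-performing a seasonal baseline, as reported above, is commonly scored by relative error; a sketch of one such comparison (relative MAE; all numbers are illustrative):

```python
def relative_mae(forecasts, baseline, observed):
    # ratio of the model's mean absolute error to the seasonal baseline's;
    # values below 1 mean the model beats the baseline
    mae_f = sum(abs(f - o) for f, o in zip(forecasts, observed)) / len(observed)
    mae_b = sum(abs(b - o) for b, o in zip(baseline, observed)) / len(observed)
    return mae_f / mae_b

observed = [120.0, 150.0, 90.0, 60.0]     # hypothetical case counts
model    = [110.0, 140.0, 100.0, 65.0]    # model forecasts
seasonal = [100.0, 120.0, 120.0, 80.0]    # same weeks, prior-season baseline
score = relative_mae(model, seasonal, observed)
```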
Space can substitute for time in predicting climate-change effects on biodiversity.
Blois, Jessica L; Williams, John W; Fitzpatrick, Matthew C; Jackson, Stephen T; Ferrier, Simon
2013-06-04
"Space-for-time" substitution is widely used in biodiversity modeling to infer past or future trajectories of ecological systems from contemporary spatial patterns. However, the foundational assumption--that drivers of spatial gradients of species composition also drive temporal changes in diversity--rarely is tested. Here, we empirically test the space-for-time assumption by constructing orthogonal datasets of compositional turnover of plant taxa and climatic dissimilarity through time and across space from Late Quaternary pollen records in eastern North America, then modeling climate-driven compositional turnover. Predictions relying on space-for-time substitution were ∼72% as accurate as "time-for-time" predictions. However, space-for-time substitution performed poorly during the Holocene when temporal variation in climate was small relative to spatial variation and required subsampling to match the extent of spatial and temporal climatic gradients. Despite this caution, our results generally support the judicious use of space-for-time substitution in modeling community responses to climate change.
NASA Astrophysics Data System (ADS)
Nourani, Vahid; Andalib, Gholamreza; Dąbrowska, Dominika
2017-05-01
Accurate nitrate load predictions can improve water-quality management decisions for watersheds, which affect the environment and drinking water. In this paper, two scenarios were considered for Multi-Station (MS) nitrate load modeling of the Little River watershed. In the first scenario, Markovian characteristics of the streamflow-nitrate time series were used for MS modeling. For this purpose, the feature-extraction criterion of Mutual Information (MI) was employed to select inputs for the artificial intelligence models (Feed Forward Neural Network, FFNN, and least square support vector machine). In the second scenario, to account for seasonality-based characteristics of the time series, wavelet transform was used to extract multi-scale features of the streamflow-nitrate time series of the watershed's sub-basins. The Self-Organizing Map (SOM) clustering technique, which finds homogeneous sub-series clusters, was also linked to MI so that appropriate cluster representatives could be fed into the models for predicting the nitrate loads of the sub-basins. The proposed MS method predicts nitrate loads not only at the outlet but also within the interior sub-basins. The results indicated that the proposed FFNN model coupled with SOM-MI improved MS nitrate predictions over the Markovian-based models by up to 39%. Overall, careful selection of dominant inputs that reflect the seasonality of the streamflow-nitrate process can enhance the efficiency of nitrate load predictions.
NASA Astrophysics Data System (ADS)
Baehr, J.; Fröhlich, K.; Botzet, M.; Domeisen, D. I. V.; Kornblueh, L.; Notz, D.; Piontek, R.; Pohlmann, H.; Tietsche, S.; Müller, W. A.
2015-05-01
A seasonal forecast system is presented, based on the global coupled climate model MPI-ESM as used for CMIP5 simulations. We describe the initialisation of the system and analyse its predictive skill for surface temperature. The presented system is initialised in the atmospheric, oceanic, and sea ice component of the model from reanalysis/observations with full field nudging in all three components. For the initialisation of the ensemble, bred vectors with a vertically varying norm are implemented in the ocean component to generate initial perturbations. In a set of ensemble hindcast simulations, starting each May and November between 1982 and 2010, we analyse the predictive skill. Bias-corrected ensemble forecasts for each start date reproduce the observed surface temperature anomalies at 2-4 months lead time, particularly in the tropics. Niño3.4 sea surface temperature anomalies show a small root-mean-square error and predictive skill up to 6 months. Away from the tropics, predictive skill is mostly limited to the ocean, and to regions which are strongly influenced by ENSO teleconnections. In summary, the presented seasonal prediction system based on a coupled climate model shows predictive skill for surface temperature at seasonal time scales comparable to other seasonal prediction systems using different underlying models and initialisation strategies. As the same model underlying our seasonal prediction system—with a different initialisation—is presently also used for decadal predictions, this is an important step towards seamless seasonal-to-decadal climate predictions.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tippett, Michael K.
2014-04-09
This report is a progress report of the accomplishments of the research grant “Collaborative Research: Separating Forced and Unforced Decadal Predictability in Models and Observations” during the period 1 May 2011 - 31 August 2013. This project is a collaborative one between Columbia University and George Mason University. George Mason University will submit a final technical report at the conclusion of their no-cost extension. The purpose of the proposed research is to identify unforced predictable components on decadal time scales, distinguish these components from forced predictable components, and to assess the reliability of model predictions of these components. Components of unforced decadal predictability will be isolated by maximizing the Average Predictability Time (APT) in long, multimodel control runs from state-of-the-art climate models. Components with decadal predictability have large APT, so maximizing APT ensures that components with decadal predictability will be detected. Optimal fingerprinting techniques, as used in detection and attribution analysis, will be used to separate variations due to natural and anthropogenic forcing from those due to unforced decadal predictability. This methodology will be applied to the decadal hindcasts generated by the CMIP5 project to assess the reliability of model projections. The question of whether anthropogenic forcing changes decadal predictability, or gives rise to new forms of decadal predictability, also will be investigated.
NASA Astrophysics Data System (ADS)
Petrova, Desislava; Koopman, Siem Jan; Ballester, Joan; Rodó, Xavier
2017-02-01
El Niño (EN) is a dominant feature of climate variability on inter-annual time scales, driving changes in climate throughout the globe and having widespread natural and socio-economic consequences. Its forecast is therefore an important task, and predictions are issued on a regular basis by a wide array of prediction schemes and climate centres around the world. This study explores a novel method for EN forecasting: the advantageous statistical technique of unobserved components time series modeling, also known as structural time series modeling, which has not previously been applied to this problem. We have developed such a model in which the statistical analysis, including parameter estimation and forecasting, is based on state space methods and includes the celebrated Kalman filter. The distinguishing feature of this dynamic model is the decomposition of a time series into a range of stochastically time-varying components such as level (or trend), seasonal, cycles of different frequencies, irregular, and regression effects incorporated as explanatory covariates. These components are modeled separately and ultimately combined in a single forecasting scheme. Customary statistical models for EN prediction essentially use SST and wind stress in the equatorial Pacific. In addition to these, we introduce a new domain of regression variables accounting for the state of the subsurface ocean temperature in the western and central equatorial Pacific, motivated by our analysis as well as by recent and classical research showing that subsurface processes and heat accumulation there are fundamental for the genesis of EN. An important feature of the scheme is that different regression predictors are used at different lead months, thus capturing the dynamical evolution of the system and rendering more efficient forecasts. The new model has been tested on the prediction of all warm events that occurred in the period 1996-2015.
Retrospective forecasts of these events were made for long lead times of at least two and a half years. Hence, the present study demonstrates that the theoretical limit of ENSO prediction lies much further ahead than the commonly accepted "Spring Barrier". The high correspondence between the forecasts and observations indicates that the proposed model outperforms all current operational statistical models and behaves comparably to the best dynamical models used for EN prediction. Thus, the novel way in which the modeling scheme has been structured could also be used for improving other statistical and dynamical modeling systems.
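The state space machinery described above can be sketched in miniature. The following is a minimal local-level (random walk plus noise) model filtered with the Kalman recursions; the synthetic series, noise variances, and function name are illustrative assumptions, not the authors' actual EN forecasting model:

```python
import numpy as np

def kalman_local_level(y, sigma_eps2=1.0, sigma_eta2=0.1):
    """Kalman filter for the local-level model:
    y_t = mu_t + eps_t,  mu_t = mu_{t-1} + eta_t."""
    n = len(y)
    a = 0.0    # filtered state estimate
    p = 1e7    # diffuse initial state variance
    filtered = np.empty(n)
    for t in range(n):
        f = p + sigma_eps2               # prediction-error variance
        k = p / f                        # Kalman gain
        v = y[t] - a                     # one-step-ahead prediction error
        a = a + k * v                    # measurement update of the state
        p = p * (1.0 - k) + sigma_eta2   # measurement + time update of variance
        filtered[t] = a
    return filtered

# Noisy series with a level shift: the filter should track the level.
rng = np.random.default_rng(0)
y = np.concatenate([np.zeros(50), np.ones(50) * 5.0]) + rng.normal(0, 0.5, 100)
est = kalman_local_level(y)
```

A full structural model would add seasonal and cyclical states and regression covariates to the state vector, but the predict/update recursion is the same.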
Metabolomics biomarkers to predict acamprosate treatment response in alcohol-dependent subjects.
Hinton, David J; Vázquez, Marely Santiago; Geske, Jennifer R; Hitschfeld, Mario J; Ho, Ada M C; Karpyak, Victor M; Biernacka, Joanna M; Choi, Doo-Sup
2017-05-31
Precision medicine for alcohol use disorder (AUD) allows optimal treatment of the right patient with the right drug at the right time. Here, we generated multivariable models incorporating clinical information and serum metabolite levels to predict acamprosate treatment response. The sample of 120 patients was randomly split into a training set (n = 80) and test set (n = 40) five independent times. Treatment response was defined as complete abstinence (no alcohol consumption during 3 months of acamprosate treatment) while nonresponse was defined as any alcohol consumption during this period. In each of the five training sets, we built a predictive model using a least absolute shrinkage and selection operator (LASSO) penalized selection method and then evaluated the predictive performance of each model in the corresponding test set. The models predicted acamprosate treatment response with a mean sensitivity and specificity in the test sets of 0.83 and 0.31, respectively, suggesting our model performed well at predicting responders, but not non-responders (i.e. many non-responders were predicted to respond). Studies with larger sample sizes and additional biomarkers will expand the clinical utility of predictive algorithms for pharmaceutical response in AUD.
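As a sketch of the penalized selection step, the following implements a plain linear LASSO by cyclic coordinate descent with soft-thresholding on synthetic data. The study likely used a logistic variant with clinical covariates, so the data, penalty value, and function name here are illustrative assumptions:

```python
import numpy as np

def lasso_cd(X, y, lam=0.1, n_iter=200):
    """LASSO via cyclic coordinate descent (soft-thresholding update).
    Assumes roughly standardized columns; illustrative only."""
    n, p = X.shape
    beta = np.zeros(p)
    for _ in range(n_iter):
        for j in range(p):
            # Partial residual with the j-th contribution added back.
            r = y - X @ beta + X[:, j] * beta[j]
            rho = X[:, j] @ r / n
            z = X[:, j] @ X[:, j] / n
            beta[j] = np.sign(rho) * max(abs(rho) - lam, 0.0) / z
    return beta

# Sparse ground truth: only the first two of ten predictors matter.
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 10))
y = 2.0 * X[:, 0] - 1.5 * X[:, 1] + rng.normal(0, 0.1, 200)
beta = lasso_cd(X, y, lam=0.05)
```

The penalty drives irrelevant coefficients exactly to zero, which is what makes LASSO usable as a variable selection method for candidate biomarkers.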
Managing distribution changes in time series prediction
NASA Astrophysics Data System (ADS)
Matias, J. M.; Gonzalez-Manteiga, W.; Taboada, J.; Ordonez, C.
2006-07-01
When a problem is modeled statistically, a single distribution model is usually postulated that is assumed to be valid for the entire space. Nonetheless, this practice may be somewhat unrealistic in certain application areas, in which the conditions of the process that generates the data may change; as far as we are aware, however, no techniques have been developed to tackle this problem. This article proposes a technique for modeling and predicting this change in time series with a view to improving estimates and predictions. The technique is applied, among other models, to the recently proposed hypernormal distribution. When tested on real data from a range of stock market indices, the technique produces better results than when a single distribution model is assumed to be valid for the entire period of time studied. Moreover, when a global model is postulated, it is highly recommended to select the hypernormal distribution parameter in the same likelihood maximization process.
Kakagianni, Myrsini; Gougouli, Maria; Koutsoumanis, Konstantinos P
2016-08-01
The presence of Geobacillus stearothermophilus spores in evaporated milk constitutes an important quality problem for the milk industry. This study was undertaken to provide an approach to modelling the effect of temperature on G. stearothermophilus ATCC 7953 growth and to predicting spoilage of evaporated milk. The growth of G. stearothermophilus was monitored in tryptone soy broth at isothermal conditions (35-67 °C). The data derived were used to model the effect of temperature on G. stearothermophilus growth with a cardinal type model. The cardinal values of the model for the maximum specific growth rate were Tmin = 33.76 °C, Tmax = 68.14 °C, Topt = 61.82 °C and μopt = 2.068/h. The growth of G. stearothermophilus was assessed in evaporated milk at Topt in order to adjust the model to milk. The efficiency of the model in predicting G. stearothermophilus growth at non-isothermal conditions was evaluated by comparing predictions with observed growth under dynamic conditions, and the results showed a good performance of the model. The model was further used to predict the time-to-spoilage (tts) of evaporated milk. The spoilage of this product was caused by acid coagulation when the pH approached a level of around 5.2, eight generations after G. stearothermophilus reached the maximum population density (Nmax). Based on the above, the tts was predicted from the growth model as the sum of the time required for the microorganism to multiply from the initial to the maximum level ( [Formula: see text] ), plus the time required after the [Formula: see text] to complete eight generations. The observed tts was very close to the predicted one, indicating that the model is able to describe satisfactorily the growth of G. stearothermophilus and to provide realistic predictions for evaporated milk spoilage. Copyright © 2016 Elsevier Ltd. All rights reserved.
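Assuming the cardinal type model is the Rosso-style cardinal temperature model with inflection (CTMI), which the abstract does not state explicitly, the reported cardinal values can be plugged in directly; at T = Topt the model returns μopt by construction, and outside [Tmin, Tmax] growth is zero:

```python
def mu_max(T, Tmin=33.76, Tmax=68.14, Topt=61.82, mu_opt=2.068):
    """Cardinal temperature model with inflection (assumed Rosso-type),
    using the G. stearothermophilus parameters reported above.
    Returns the maximum specific growth rate in 1/h."""
    if T <= Tmin or T >= Tmax:
        return 0.0
    num = (T - Tmax) * (T - Tmin) ** 2
    den = (Topt - Tmin) * ((Topt - Tmin) * (T - Topt)
                           - (Topt - Tmax) * (Topt + Tmin - 2.0 * T))
    return mu_opt * num / den
```

Integrating μ(T(t)) over a recorded temperature profile is what turns this isothermal model into the dynamic-condition predictions evaluated in the study.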
Zhang, Yong; Green, Christopher T.; Baeumer, Boris
2014-01-01
Time-nonlocal transport models can describe non-Fickian diffusion observed in geological media, but the physical meaning of parameters can be ambiguous, and most applications are limited to curve-fitting. This study explores methods for predicting the parameters of a temporally tempered Lévy motion (TTLM) model for transient sub-diffusion in mobile–immobile like alluvial settings represented by high-resolution hydrofacies models. The TTLM model is a concise multi-rate mass transfer (MRMT) model that describes a linear mass transfer process where the transfer kinetics and late-time transport behavior are controlled by properties of the host medium, especially the immobile domain. The intrinsic connection between the MRMT and TTLM models helps to estimate the main time-nonlocal parameters in the TTLM model (which are the time scale index, the capacity coefficient, and the truncation parameter) either semi-analytically or empirically from the measurable aquifer properties. Further applications show that the TTLM model captures the observed solute snapshots, the breakthrough curves, and the spatial moments of plumes up to the fourth order. Most importantly, the a priori estimation of the time-nonlocal parameters outside of any breakthrough fitting procedure provides a reliable “blind” prediction of the late-time dynamics of subdiffusion observed in a spectrum of alluvial settings. Predictability of the time-nonlocal parameters may be due to the fact that the late-time subdiffusion is not affected by the exact location of each immobile zone, but rather is controlled by the time spent in immobile blocks surrounding the pathway of solute particles. Results also show that the effective dispersion coefficient has to be fitted due to the scale effect of transport, and the mean velocity can differ from local measurements or volume averages. 
The link between medium heterogeneity and time-nonlocal parameters will help to improve model predictability for non-Fickian transport in alluvial settings.
Robust model predictive control for constrained continuous-time nonlinear systems
NASA Astrophysics Data System (ADS)
Sun, Tairen; Pan, Yongping; Zhang, Jun; Yu, Haoyong
2018-02-01
In this paper, a robust model predictive control (MPC) is designed for a class of constrained continuous-time nonlinear systems with bounded additive disturbances. The robust MPC consists of a nonlinear feedback control and a continuous-time model-based dual-mode MPC. The nonlinear feedback control guarantees that the actual trajectory remains contained in a tube centred at the nominal trajectory. The dual-mode MPC is designed to ensure asymptotic convergence of the nominal trajectory to zero. This paper extends current results on discrete-time model-based tube MPC and linear system model-based tube MPC to continuous-time nonlinear model-based tube MPC. The feasibility and robustness of the proposed robust MPC have been demonstrated by theoretical analysis and applications to a cart-damper-spring system and a one-link robot manipulator.
Sasakawa, Tomoki; Masui, Kenichi; Kazama, Tomiei; Iwasaki, Hiroshi
2016-08-01
Rocuronium concentration prediction using pharmacokinetic (PK) models would be useful for controlling rocuronium effects because neuromuscular monitoring throughout anesthesia can be difficult. This study assessed whether six different compartmental PK models developed from data obtained after bolus administration only could predict the measured plasma concentration (Cp) values of rocuronium delivered by bolus followed by continuous infusion. Rocuronium Cp values from 19 healthy subjects who received a bolus dose followed by continuous infusion in a phase III multicenter trial in Japan were used retrospectively as evaluation datasets. Six different compartmental PK models of rocuronium were used to simulate rocuronium Cp time course values, which were compared with measured Cp values. Prediction error (PE) derivatives of median absolute PE (MDAPE), median PE (MDPE), wobble, divergence absolute PE, and divergence PE were used to assess inaccuracy, bias, intra-individual variability, and time-related trends in APE and PE values. MDAPE and MDPE values were acceptable only for the Magorian and Kleijn models. The divergence PE value for the Kleijn model was lower than -10 %/h, indicating unstable prediction over time. The Szenohradszky model had the lowest divergence PE (-2.7 %/h) and wobble (5.4 %) values with negative bias (MDPE = -25.9 %). These three models were developed using the mixed-effects modeling approach. The Magorian model showed the best PE derivatives among the models assessed. A PK model developed from data obtained after single-bolus dosing can predict Cp values during bolus and continuous infusion. Thus, a mixed-effects modeling approach may be preferable in extrapolating such data.
Li, Xiang; Peng, Ling; Yao, Xiaojing; Cui, Shaolong; Hu, Yuan; You, Chengzeng; Chi, Tianhe
2017-12-01
Air pollutant concentration forecasting is an effective method of protecting public health by providing an early warning against harmful air pollutants. However, existing methods of air pollutant concentration prediction fail to effectively model long-term dependencies, and most neglect spatial correlations. In this paper, a novel long short-term memory neural network extended (LSTME) model that inherently considers spatiotemporal correlations is proposed for air pollutant concentration prediction. Long short-term memory (LSTM) layers were used to automatically extract inherent useful features from historical air pollutant data, and auxiliary data, including meteorological data and time stamp data, were merged into the proposed model to enhance the performance. Hourly PM2.5 (particulate matter with an aerodynamic diameter less than or equal to 2.5 μm) concentration data collected at 12 air quality monitoring stations in Beijing City from Jan/01/2014 to May/28/2016 were used to validate the effectiveness of the proposed LSTME model. Experiments were performed using the spatiotemporal deep learning (STDL) model, the time delay neural network (TDNN) model, the autoregressive moving average (ARMA) model, the support vector regression (SVR) model, and the traditional LSTM NN model, and a comparison of the results demonstrated that the LSTME model is superior to the other statistics-based models. Additionally, the use of auxiliary data improved model performance. For the one-hour prediction tasks, the proposed model performed well and exhibited a mean absolute percentage error (MAPE) of 11.93%. In addition, we conducted multiscale predictions over different time spans and achieved satisfactory performance, even for 13-24 h prediction tasks (MAPE = 31.47%). Copyright © 2017 Elsevier Ltd. All rights reserved.
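Before any LSTM-style layer sees the data, the hourly series must be framed as supervised (lookback window, next-value target) samples. A minimal sketch of that framing, using a stand-in series rather than real PM2.5 data and an assumed 24-hour lookback:

```python
import numpy as np

def make_windows(series, lookback=24):
    """Frame an hourly series into (samples, lookback) inputs and
    next-hour targets, the supervised form a recurrent model trains on."""
    series = np.asarray(series, dtype=float)
    X = np.array([series[i:i + lookback]
                  for i in range(len(series) - lookback)])
    y = series[lookback:]
    return X, y

pm25 = np.arange(100.0)  # stand-in for hourly PM2.5 readings
X, y = make_windows(pm25, lookback=24)
```

In the LSTME setting, each window would additionally be stacked with the corresponding meteorological and time-stamp covariates along a feature axis.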
Space Environment Modelling with the Use of Artificial Intelligence Methods
NASA Astrophysics Data System (ADS)
Lundstedt, H.; Wintoft, P.; Wu, J.-G.; Gleisner, H.; Dovheden, V.
1996-12-01
Space based technological systems are affected by the space weather in many ways. Several severe failures of satellites have been reported at times of space storms. Our society also increasingly depends on satellites for communication, navigation, exploration, and research. Predictions of the conditions in the satellite environment have therefore become very important. We will here present predictions made with the use of artificial intelligence (AI) techniques, such as artificial neural networks (ANN) and hybrids of AI methods. We are developing a space weather model based on intelligence hybrid systems (IHS). The model consists of different forecast modules, each module predicts the space weather on a specific time-scale. The time-scales range from minutes to months with the fundamental time-scales of 1-5 minutes, 1-3 hours, 1-3 days, and 27 days. Solar and solar wind data are used as input data. From solar magnetic field measurements, either made on the ground at Wilcox Solar Observatory (WSO) at Stanford, or made from space by the satellite SOHO, solar wind parameters can be predicted and modelled with ANN and MHD models. Magnetograms from WSO are available on a daily basis. However, from SOHO magnetograms will be available every 90 minutes. SOHO magnetograms as input to ANNs will therefore make it possible to even predict solar transient events. Geomagnetic storm activity can today be predicted with very high accuracy by means of ANN methods using solar wind input data. However, at present real-time solar wind data are only available during part of the day from the satellite WIND. With the launch of ACE in 1997, solar wind data will be available 24 hours per day. The conditions of the satellite environment are not only disturbed at times of geomagnetic storms but also at times of intense solar radiation and highly energetic particles. These events are associated with increased solar activity.
Predictions of these events are therefore also handled with the modules in the Lund Space Weather Model.
Vaňková, Nikola; Česla, Petr
2017-02-17
In this work, we have investigated the predictive properties of a mixed-mode retention model and an oligomeric mixed-mode model, taking into account the contribution of monomeric units to the retention, in hydrophilic interaction liquid chromatography. The gradient retention times of native maltooligosaccharides and their fluorescent derivatives were predicted in the oligomeric series with the number of monomeric glucose units ranging from two to seven. The maltooligosaccharides were separated on a packed column with carbamoyl-bonded silica stationary phase, and 15 gradient profiles with different initial and final mobile phase composition were used, with gradient times of 5, 7.5 and 10 min. The predicted gradient retention times were compared for calculations based on isocratic retention data and gradient retention data, the latter providing better accuracy of the results. Comparing two different mobile phase additives, the more accurate retention times were predicted in mobile phases containing ammonium acetate. The acidic derivatives, prepared by reaction of an oligosaccharide with 2-aminobenzoic acid or 8-aminonaphthalene-1,3,6-trisulfonic acid, provided more accurate predictions of the retention data in comparison to native oligosaccharides or their neutral derivatives. The oligomeric mixed-mode model allowed prediction of gradient retention times using only one gradient profile, which significantly speeds up method development. Copyright © 2017 Elsevier B.V. All rights reserved.
Drought Prediction Site Specific and Regional up to Three Years in Advance
NASA Astrophysics Data System (ADS)
Suhler, G.; O'Brien, D. P.
2002-12-01
Dynamic Predictables has developed proprietary software that analyzes and predicts future climatic behavior based on past data. The programs employ both a regional thermodynamic model together with a unique predictive algorithm to achieve a high degree of prediction accuracy up to 36 months. The thermodynamic model was developed initially to explain the results of a study on global circulation models done at SUNY-Stony Brook by S. Hameed, R.G. Currie, and H. LaGrone (Int. Jour. Climatology, 15, pp.852-871, 1995). The authors pointed out that on a time scale of 2-70 months the spectrum of sea level pressure is dominated by the harmonics and subharmonics of the seasonal cycle and their combination tones. These oscillations are fundamental to an understanding of climatic variations on a sub-regional to continental basis. The oscillatory nature of these variations allows them to be used as broad based climate predictors. In addition, they can be subtracted from the data to yield residuals. The residuals are then analyzed to determine components that are predictable. The program then combines both the thermodynamic model results (the primary predictive model) with those from the residual data (the secondary model) to yield an estimate of the future behavior of the climatic variable. Spatial resolution is site specific or aggregated regional based upon appropriate length (45 years or more monthly data) and reasonable quality weather observation records. Most climate analysis has been based on monthly time-step data, but time scales on the order of days can be used. Oregon Climate Division 1 (Coastal) precipitation provides an example relating DynaPred's method to nature's observed elements in the early 2000s. 
The prediction's leading dynamic factors are the strong seasonal in the primary model combined with high secondary model contributions from planet Earth's Chandler Wobble (near 15 months) and what has been called the Quasi-Triennial Oscillation (QTO, near 36 months) in equatorial regions. Examples of regional-aggregate and site-specific predictions, made blind in advance and publicly available (AASC Annual Meetings 1998-2002), will be shown. Certain climate dynamics features relevant to extrema prediction, and specifically drought prediction, will then be discussed. Time steps presented will be monthly. Climate variables examined are mean temperature and accumulated precipitation. NINO3 SST, interior continental and marine/continental transition area examples will be shown. http://www.dynamicpredictables.com
An algebraic method for constructing stable and consistent autoregressive filters
DOE Office of Scientific and Technical Information (OSTI.GOV)
Harlim, John, E-mail: jharlim@psu.edu; Department of Meteorology, the Pennsylvania State University, University Park, PA 16802; Hong, Hoon, E-mail: hong@ncsu.edu
2015-02-15
In this paper, we introduce an algebraic method to construct stable and consistent univariate autoregressive (AR) models of low order for filtering and predicting nonlinear turbulent signals with memory depth. By stable, we refer to the classical stability condition for the AR model. By consistent, we refer to the classical consistency constraints of Adams–Bashforth methods of order-two. One attractive feature of this algebraic method is that the model parameters can be obtained without directly knowing any training data set, as opposed to many standard, regression-based parameterization methods. It takes only long-time average statistics as inputs. The proposed method provides a discretization time step interval which guarantees the existence of a stable and consistent AR model and simultaneously produces the parameters for the AR models. In our numerical examples with two chaotic time series with different characteristics of decaying time scales, we find that the proposed AR models produce significantly more accurate short-term predictive skill and comparable filtering skill relative to the linear regression-based AR models. These encouraging results are robust across wide ranges of discretization times, observation times, and observation noise variances. Finally, we also find that the proposed model produces an improved short-time prediction relative to the linear regression-based AR models in forecasting a data set that characterizes the variability of the Madden–Julian Oscillation, a dominant tropical atmospheric wave pattern.
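The classical stability condition referred to above requires the roots of the AR characteristic polynomial to lie inside the unit circle. A minimal check for an AR(2) model (the consistency constraints tying the coefficients to an Adams–Bashforth order-two discretization are a separate algebraic condition not shown here):

```python
import numpy as np

def ar2_is_stable(a1, a2):
    """Classical stability check for x_n = a1*x_{n-1} + a2*x_{n-2} + noise:
    both roots of z^2 - a1*z - a2 must lie strictly inside the unit circle."""
    roots = np.roots([1.0, -a1, -a2])
    return bool(np.all(np.abs(roots) < 1.0))
```

For example, (a1, a2) = (0.5, 0.3) is stable while (1.2, 0.3) has a root outside the unit circle and would produce an explosive model.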
Nateghi, Roshanak; Guikema, Seth D; Quiring, Steven M
2011-12-01
This article compares statistical methods for modeling power outage durations during hurricanes and examines the predictive accuracy of these methods. Being able to make accurate predictions of power outage durations is valuable because the information can be used by utility companies to plan their restoration efforts more efficiently. This information can also help inform customers and public agencies of the expected outage times, enabling better collective response planning, and coordination of restoration efforts for other critical infrastructures that depend on electricity. In the long run, outage duration estimates for future storm scenarios may help utilities and public agencies better allocate risk management resources to balance the disruption from hurricanes with the cost of hardening power systems. We compare the out-of-sample predictive accuracy of five distinct statistical models for estimating power outage duration times caused by Hurricane Ivan in 2004. The methods compared include both regression models (accelerated failure time (AFT) and Cox proportional hazard models (Cox PH)) and data mining techniques (regression trees, Bayesian additive regression trees (BART), and multivariate adaptive regression splines). We then validate our models against two other hurricanes. Our results indicate that BART yields the best prediction accuracy and that it is possible to predict outage durations with reasonable accuracy. © 2011 Society for Risk Analysis.
Mathematical model to predict drivers' reaction speeds.
Long, Benjamin L; Gillespie, A Isabella; Tanaka, Martin L
2012-02-01
Mental distractions and physical impairments can increase the risk of accidents by affecting a driver's ability to control the vehicle. In this article, we developed a linear mathematical model that can be used to quantitatively predict drivers' performance over a variety of possible driving conditions. Predictions were not limited only to conditions tested, but also included linear combinations of these test conditions. Two groups of 12 participants were evaluated using a custom drivers' reaction speed testing device to evaluate the effect of cell phone talking, texting, and a fixed knee brace on the components of drivers' reaction speed. Cognitive reaction time was found to increase by 24% for cell phone talking and 74% for texting. The fixed knee brace increased musculoskeletal reaction time by 24%. These experimental data were used to develop a mathematical model to predict reaction speed for an untested condition, talking on a cell phone with a fixed knee brace. The model was verified by comparing the predicted reaction speed to measured experimental values from an independent test. The model predicted full braking time within 3% of the measured value. Although only a few influential conditions were evaluated, we present a general approach that can be expanded to include other types of distractions, impairments, and environmental conditions.
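The additive component structure described above can be sketched as follows; the baseline component times and the assumption that each condition scales only its own component are illustrative choices, not the paper's fitted values:

```python
def predicted_reaction_time(t_cog, t_mus, cog_factor=1.0, mus_factor=1.0):
    """Linear component model: total reaction time is the sum of cognitive
    and musculoskeletal components, each scaled by its condition factor.
    Baseline times (seconds) and additivity are illustrative assumptions."""
    return t_cog * cog_factor + t_mus * mus_factor

# Hypothetical baselines: 0.50 s cognitive, 0.20 s musculoskeletal.
baseline = predicted_reaction_time(0.50, 0.20)
# Untested combined condition: phone talking (+24% cognitive)
# together with a fixed knee brace (+24% musculoskeletal).
combined = predicted_reaction_time(0.50, 0.20, cog_factor=1.24, mus_factor=1.24)
```

Because the model is linear in the components, any combination of tested conditions can be predicted by applying the measured percentage increases to the relevant components and summing.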
NASA Astrophysics Data System (ADS)
Hwang, Junga; Yoon, Kyoung-Won; Jo, Gyeongbok; Noh, Sung-Jun
2016-12-01
The space radiation dose over air routes including polar routes should be carefully considered, especially when space weather shows sudden disturbances such as coronal mass ejections (CMEs), flares, and accompanying solar energetic particle events. We recently established a heliocentric potential (HCP) prediction model for real-time operation of the CARI-6 and CARI-6M programs. Specifically, the HCP value is used as a critical input value in the CARI-6/6M programs, which estimate the aviation route dose based on the effective dose rate. The CARI-6/6M approach is the most widely used technique, and the programs can be obtained from the U.S. Federal Aviation Administration (FAA). However, HCP values are given at a one month delay on the FAA official webpage, which makes it difficult to obtain real-time information on the aviation route dose. In order to overcome this critical limitation regarding the time delay for space weather customers, we developed a HCP prediction model based on sunspot number variations (Hwang et al. 2015). In this paper, we focus on improvements to our HCP prediction model and update it with neutron monitoring data. We found that the most accurate method to derive the HCP value involves (1) real-time daily sunspot assessments, (2) predictions of the daily HCP by our prediction algorithm, and (3) calculations of the resultant daily effective dose rate. Additionally, we also derived the HCP prediction algorithm in this paper by using ground neutron counts. With the compensation stemming from the use of ground neutron count data, the newly developed HCP prediction model was improved.
Shang, Qiang; Lin, Ciyun; Yang, Zhaosheng; Bing, Qichun; Zhou, Xiyang
2016-01-01
Short-term traffic flow prediction is one of the most important issues in the field of intelligent transport system (ITS). Because of the uncertainty and nonlinearity, short-term traffic flow prediction is a challenging task. In order to improve the accuracy of short-time traffic flow prediction, a hybrid model (SSA-KELM) is proposed based on singular spectrum analysis (SSA) and kernel extreme learning machine (KELM). SSA is used to filter out the noise of traffic flow time series. Then, the filtered traffic flow data is used to train the KELM model, the optimal input form of the proposed model is determined by phase space reconstruction, and parameters of the model are optimized by gravitational search algorithm (GSA). Finally, case validation is carried out using the measured data of an expressway in Xiamen, China. The SSA-KELM model is then compared with several well-known prediction models, including support vector machine, extreme learning machine, and the single KELM model. The experimental results demonstrate that performance of the proposed model is superior to that of the comparison models. Apart from accuracy improvement, the proposed model is more robust.
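The SSA denoising step can be sketched as: embed the series in a trajectory matrix, truncate its SVD to the leading components, and diagonal-average back to a series. Window length, rank, and the synthetic test signal below are illustrative choices, not the paper's settings:

```python
import numpy as np

def ssa_denoise(x, window=20, rank=2):
    """Basic singular spectrum analysis: trajectory-matrix embedding,
    rank truncation via SVD, and Hankel (diagonal) averaging back."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    k = n - window + 1
    traj = np.array([x[i:i + window] for i in range(k)]).T  # window x k
    u, s, vt = np.linalg.svd(traj, full_matrices=False)
    approx = (u[:, :rank] * s[:rank]) @ vt[:rank]           # rank-limited
    out = np.zeros(n)
    cnt = np.zeros(n)
    for i in range(window):        # average each anti-diagonal
        for j in range(k):
            out[i + j] += approx[i, j]
            cnt[i + j] += 1.0
    return out / cnt

# A noisy periodic signal, loosely mimicking a daily traffic cycle.
rng = np.random.default_rng(2)
t = np.arange(200)
clean = np.sin(2 * np.pi * t / 25.0)
noisy = clean + rng.normal(0, 0.3, 200)
denoised = ssa_denoise(noisy, window=25, rank=2)
```

A single sinusoid occupies two singular components, so a rank-2 reconstruction recovers the cycle while discarding most of the noise; the denoised series is what would then feed the KELM predictor.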
US EPA 2012 Air Quality Fused Surface for the Conterminous U.S. Map Service
This web service contains a polygon layer that depicts fused air quality predictions for 2012 for census tracts in the conterminous United States. Fused air quality predictions (for ozone and PM2.5) are modeled using a Bayesian space-time downscaling fusion model approach described in a series of three published journal papers: 1) Berrocal, V., Gelfand, A. E. and Holland, D. M. (2012). Space-time fusion under error in computer model output: an application to modeling air quality. Biometrics 68, 837-848; 2) Berrocal, V., Gelfand, A. E. and Holland, D. M. (2010). A bivariate space-time downscaler under space and time misalignment. The Annals of Applied Statistics 4, 1942-1975; and 3) Berrocal, V., Gelfand, A. E. and Holland, D. M. (2010). A spatio-temporal downscaler for output from numerical models. J. of Agricultural, Biological, and Environmental Statistics 15, 176-197. This approach is used to provide daily, predictive PM2.5 (daily average) and O3 (daily 8-hr maximum) surfaces for 2012. Summer (O3) and annual (PM2.5) means were calculated and published. The downscaling fusion model uses both air quality monitoring data from the National Air Monitoring Stations/State and Local Air Monitoring Stations (NAMS/SLAMS) and numerical output from the Models-3/Community Multiscale Air Quality (CMAQ) model. Currently, predictions at the US census tract centroid locations within the 12 km CMAQ domain are archived. Predictions at the CMAQ grid cell centroids, or any desired set of locations co
NASA Astrophysics Data System (ADS)
Marçais, J.; de Dreuzy, J.-R.; Ginn, T. R.; Rousseau-Gueutin, P.; Leray, S.
2015-06-01
While central in groundwater resources and contaminant fate, Transit Time Distributions (TTDs) are never directly accessible from field measurements but always deduced from a combination of tracer data and more or less involved models. We evaluate the predictive capabilities of approximate distributions (Lumped Parameter Models, abbreviated as LPMs) instead of fully developed aquifer models. We develop a generic assessment methodology based on synthetic aquifer models to establish references for observable quantities such as tracer concentrations and prediction targets such as groundwater renewal times. Candidate LPMs are calibrated on the observable tracer concentrations and used to infer renewal time predictions, which are compared with the reference ones. This methodology is applied to the produced crystalline aquifer of Plœmeur (Brittany, France), where flows leak through a micaschist aquitard to reach a sloping aquifer in which they radially converge to the producing well, yielding broad rather than multi-modal TTDs. One-, two- and three-parameter LPMs were calibrated to a corresponding number of simulated reference anthropogenic tracer concentrations (CFC-11, 85Kr and SF6). Extensive statistical analysis over the aquifer shows that a good fit of the anthropogenic tracer concentrations is neither a necessary nor a sufficient condition to reach acceptable predictive capability. Prediction accuracy is however strongly conditioned by the use of a priori relevant LPMs. Only adequate LPM shapes yield unbiased estimations. In the case of Plœmeur, relevant LPMs should have two parameters to capture the mean and the standard deviation of the residence times and cover the first few decades [0; 50 years]. Inverse Gaussian and shifted exponential models performed equally well for the wide variety of the reference TTDs, from strongly peaked in recharge zones where flows are diverging to broadly distributed in more converging zones.
When using two sufficiently different atmospheric tracers like CFC-11 and 85Kr, groundwater renewal time predictions are accurate at 1-5 years for estimating mean transit times of some decades (10-50 years). 1-parameter LPMs calibrated on a single atmospheric tracer lead to substantially larger errors of the order of 10 years, while 3-parameter LPMs calibrated with a third atmospheric tracer (SF6) do not improve the prediction capabilities. Based on a specific site, this study highlights the high predictive capacities of two atmospheric tracers on the same time range with sufficiently different atmospheric concentration chronicles.
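A lumped parameter model in its simplest form convolves the atmospheric input chronicle with a parametric TTD. The sketch below uses the one-parameter exponential model for a conservative tracer; the study's tracers additionally involve radioactive decay (85Kr) and recharge lag, which are omitted here, and the input chronicle is synthetic:

```python
import numpy as np

def exponential_lpm(c_in, tau_mean, dt=1.0):
    """One-parameter exponential lumped parameter model: the outlet
    concentration is the inlet chronicle convolved with an exponential
    transit time distribution of mean tau_mean (conservative tracer)."""
    t = np.arange(0.0, 10.0 * tau_mean, dt)
    g = np.exp(-t / tau_mean) / tau_mean   # exponential TTD
    g = g / (g.sum() * dt)                 # renormalize on the discrete grid
    # Causal convolution; inputs before the chronicle start are taken as zero.
    return np.convolve(c_in, g * dt)[:len(c_in)]

# Rising atmospheric chronicle (CFC-like), 60 yearly values.
c_atm = np.linspace(0.0, 300.0, 60)
c_well = exponential_lpm(c_atm, tau_mean=20.0)
```

Calibration then amounts to adjusting tau_mean (and, for richer LPM shapes such as the inverse Gaussian, additional parameters) until the modeled outlet concentrations match the measured tracer concentrations.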
Ding, Ziyun; Nolte, Daniel; Kit Tsang, Chui; Cleather, Daniel J; Kedgley, Angela E; Bull, Anthony M J
2016-02-01
Segment-based musculoskeletal models allow the prediction of muscle, ligament, and joint forces without making assumptions regarding joint degrees-of-freedom (DOF). The dataset published for the "Grand Challenge Competition to Predict in vivo Knee Loads" provides directly measured tibiofemoral contact forces for activities of daily living (ADL). For the Sixth Grand Challenge Competition to Predict in vivo Knee Loads, blinded results for "smooth" and "bouncy" gait trials were predicted using a customized patient-specific musculoskeletal model. For an unblinded comparison, the following modifications were made to improve the predictions: further customizations, including modifications to the knee center of rotation; reductions to the maximum allowable muscle forces to represent known loss of strength in knee arthroplasty patients; and a kinematic constraint to the hip joint to address the sensitivity of the segment-based approach to motion tracking artifact. For validation, the improved model was applied to normal gait, squat, and sit-to-stand for three subjects. Comparisons of the predictions with measured contact forces showed that segment-based musculoskeletal models using patient-specific input data can estimate tibiofemoral contact forces with root mean square errors (RMSEs) of 0.48-0.65 times body weight (BW) for normal gait trials. Comparisons between measured and predicted tibiofemoral contact forces yielded an average coefficient of determination of 0.81 and RMSEs of 0.46-1.01 times BW for squatting and 0.70-0.99 times BW for sit-to-stand tasks. This is comparable to the best validations in the literature using alternative models.
On the Space-Time Structure of Sheared Turbulence
NASA Astrophysics Data System (ADS)
de Maré, Martin; Mann, Jakob
2016-09-01
We develop a model that predicts all two-point correlations in high Reynolds number turbulent flow, in both space and time. This is accomplished by combining the design philosophies behind two existing models, the Mann spectral velocity tensor, in which isotropic turbulence is distorted according to rapid distortion theory, and Kristensen's longitudinal coherence model, in which eddies are simultaneously advected by larger eddies as well as decaying. The model is compared with data from both observations and large-eddy simulations and is found to predict spatial correlations comparable to the Mann spectral tensor and temporal coherence better than any known model. Within the developed framework, Lagrangian two-point correlations in space and time are also predicted, and the predictions are compared with measurements of isotropic turbulence. The required input to the models, which are formulated as spectral velocity tensors, can be estimated from measured spectra or be derived from the rate of dissipation of turbulent kinetic energy, the friction velocity and the mean shear of the flow. The developed models can, for example, be used in wind-turbine engineering, in applications such as lidar-assisted feed forward control and wind-turbine wake modelling.
Modeling and predicting intertidal variations of the salinity field in the Bay/Delta
Knowles, Noah; Uncles, Reginald J.
1995-01-01
One approach to simulating daily to monthly variability in the bay is the development of an intertidal model using tidally averaged equations and a time step on the order of a day. An intertidal numerical model of the bay's physics, capable of portraying seasonal and inter-annual variability, would have several uses. Observations are limited in time and space, so simulation could help fill the gaps. Also, the ability to simulate multi-year episodes (e.g., an extended drought) could provide insight into the response of the ecosystem to such events. Finally, such a model could be used in a forecast mode, wherein predicted delta flow is used as model input and the predicted salinity distribution is output, with estimates available days to months in advance. This note briefly introduces such a tidally averaged model (Uncles and Peterson, in press) and a corresponding predictive scheme for baywide forecasting.
Automated System Checkout to Support Predictive Maintenance for the Reusable Launch Vehicle
NASA Technical Reports Server (NTRS)
Patterson-Hine, Ann; Deb, Somnath; Kulkarni, Deepak; Wang, Yao; Lau, Sonie (Technical Monitor)
1998-01-01
The Propulsion Checkout and Control System (PCCS) is a predictive maintenance software system. Its real-time checkout procedures and diagnostics are designed to detect components that need maintenance based on their condition, rather than using more conventional approaches such as scheduled or reliability-centered maintenance. Predictive maintenance can reduce turnaround time and cost and increase safety compared with conventional maintenance approaches. Real-time sensor validation, limit checking, statistical anomaly detection, and failure prediction based on simulation models are employed. Multi-signal models, useful for testability analysis during system design, are used during the operational phase to detect and isolate degraded or failed components. The TEAMS-RT real-time diagnostic engine, developed by Qualtech Systems, Inc., utilizes these multi-signal models. The capability to predict maintenance condition was successfully demonstrated with a variety of data, from simulation to actual operation on the Integrated Propulsion Technology Demonstrator (IPTD) at Marshall Space Flight Center (MSFC). Playback of IPTD valve actuations through the feature-recognition updates identified otherwise undetectable degradation of a Main Propulsion System 12-inch prevalve. The algorithms were loaded into the Propulsion Checkout and Control System for further development and constitute the first known application of predictive Integrated Vehicle Health Management to an operational cryogenic testbed. The software performed successfully in real time, meeting the required performance goal of a 1-second cycle time.
A test of an interactive model of binge eating among undergraduate men.
Minnich, Allison M; Gordon, Kathryn H; Holm-Denoma, Jill M; Troop-Gordon, Wendy
2014-12-01
Past research has shown that a combination of high perfectionism, high body dissatisfaction, and low self-esteem is predictive of binge eating in college women (Bardone-Cone et al., 2006). In the current study, we examined whether this triple interaction model is applicable to men. Male undergraduate college students from a large Midwestern university (n=302) completed self-report measures online at two different time points, a minimum of eight weeks apart. Analyses revealed a significant interaction between the three risk factors, such that high perfectionism, high body dissatisfaction, and low self-esteem at Time 1 were associated with higher levels of Time 2 binge eating symptoms. The triple interaction model did not predict Time 2 anxiety or depressive symptoms, which suggests model specificity. These findings offer a greater understanding of the interactive nature of risk factors in predicting binge eating symptoms among men. Copyright © 2014 Elsevier Ltd. All rights reserved.
Improving the prediction of African savanna vegetation variables using time series of MODIS products
NASA Astrophysics Data System (ADS)
Tsalyuk, Miriam; Kelly, Maggi; Getz, Wayne M.
2017-09-01
African savanna vegetation is subject to extensive degradation as a result of rapid climate and land use change. To better understand these changes, detailed assessment of vegetation structure is needed across an extensive spatial scale and at a fine temporal resolution. Applying remote sensing techniques to savanna vegetation is challenging due to sparse cover, high background soil signal, and the difficulty of differentiating between the spectral signals of bare soil and dry vegetation. In this paper, we attempt to resolve these challenges by analyzing time series of four MODIS Vegetation Products (VPs): Normalized Difference Vegetation Index (NDVI), Enhanced Vegetation Index (EVI), Leaf Area Index (LAI), and Fraction of Photosynthetically Active Radiation (FPAR) for Etosha National Park, a semiarid savanna in north-central Namibia. We create models to predict the density, cover, and biomass of the main savanna vegetation forms: grass, shrubs, and trees. To calibrate the remote sensing data we developed an extensive and relatively rapid field methodology and measured herbaceous and woody vegetation during both the dry and wet seasons. We compared the efficacy of the four MODIS-derived VPs in predicting the field-measured vegetation variables, and then determined the optimal time span of the VP time series for predicting the ground-measured vegetation. We found that multiyear Partial Least Squares Regression (PLSR) models were superior to single-year or single-date models. Our results show that NDVI-based PLSR models yield robust predictions of tree density (R2 = 0.79, relative Root Mean Square Error, rRMSE = 1.9%) and tree cover (R2 = 0.78, rRMSE = 0.3%). EVI provided the best model for shrub density (R2 = 0.82) and shrub cover (R2 = 0.83), but was only marginally superior to models based on the other VPs. FPAR was the best predictor of vegetation biomass for trees (R2 = 0.76), shrubs (R2 = 0.83), and grass (R2 = 0.91).
Finally, we addressed an enduring challenge in the remote sensing of semiarid vegetation by examining the transferability of predictive models through space and time. Our results show that models created in the wetter part of Etosha could accurately predict tree and shrub variables in the drier part of the reserve, and vice versa. Moreover, models created for vegetation variables in the dry season of 2011 could be successfully applied to predict vegetation in the wet season of 2012. We conclude that extensive field data combined with multiyear time series of MODIS vegetation products can produce robust predictive models for multiple vegetation forms in the African savanna. These methods advance the monitoring of savanna vegetation dynamics and contribute to improved management and conservation of these valuable ecosystems.
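The PLSR approach at the core of these models can be illustrated with a one-component PLS regression written out explicitly; the NDVI matrix and tree-density target below are synthetic stand-ins, not MODIS or Etosha field data:

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic stand-in data: rows are field plots, columns are NDVI composite dates
X = rng.uniform(0.1, 0.8, size=(60, 46))
y = X[:, :10].mean(axis=1) * 100.0 + rng.normal(0.0, 1.0, 60)  # fake tree density

# One-component PLS regression (PLS1), written out explicitly
Xc, yc = X - X.mean(axis=0), y - y.mean()
w = Xc.T @ yc
w /= np.linalg.norm(w)        # weight vector: direction of maximal covariance with y
t = Xc @ w                    # latent scores
b = (t @ yc) / (t @ t)        # regress the response on the score
y_hat = y.mean() + t * b

r2 = 1.0 - np.sum((y - y_hat) ** 2) / np.sum(yc ** 2)
```

Multi-component PLSR, as used in the study, repeats this extraction with deflation of X and y; library implementations handle that bookkeeping.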
Martínez-Martínez, F; Rupérez-Moreno, M J; Martínez-Sober, M; Solves-Llorens, J A; Lorente, D; Serrano-López, A J; Martínez-Sanchis, S; Monserrat, C; Martín-Guerrero, J D
2017-11-01
This work presents a data-driven method to simulate, in real-time, the biomechanical behavior of the breast tissues in some image-guided interventions such as biopsies or radiotherapy dose delivery as well as to speed up multimodal registration algorithms. Ten real breasts were used for this work. Their deformation due to the displacement of two compression plates was simulated off-line using the finite element (FE) method. Three machine learning models were trained with the data from those simulations. Then, they were used to predict in real-time the deformation of the breast tissues during the compression. The models were a decision tree and two tree-based ensemble methods (extremely randomized trees and random forest). Two different experimental setups were designed to validate and study the performance of these models under different conditions. The mean 3D Euclidean distance between nodes predicted by the models and those extracted from the FE simulations was calculated to assess the performance of the models in the validation set. The experiments proved that extremely randomized trees performed better than the other two models. The mean error committed by the three models in the prediction of the nodal displacements was under 2 mm, a threshold usually set for clinical applications. The time needed for breast compression prediction is sufficiently short to allow its use in real-time (<0.2 s). Copyright © 2017 Elsevier Ltd. All rights reserved.
CME Arrival-time Validation of Real-time WSA-ENLIL+Cone Simulations at the CCMC/SWRC
NASA Astrophysics Data System (ADS)
Wold, A. M.; Mays, M. L.; Taktakishvili, A.; Jian, L.; Odstrcil, D.; MacNeice, P. J.
2016-12-01
The Wang-Sheeley-Arge (WSA)-ENLIL+Cone model is used extensively in space weather operations worldwide to model CME propagation; as such, it is important to assess its performance. We present validation results for the WSA-ENLIL+Cone model installed at the Community Coordinated Modeling Center (CCMC) and executed in real time by the CCMC/Space Weather Research Center (SWRC). The SWRC is a CCMC sub-team that provides space weather services to NASA robotic mission operators and science campaigns, and also prototypes new forecasting models and techniques. CCMC/SWRC uses the WSA-ENLIL+Cone model to predict CME arrivals at NASA missions throughout the inner heliosphere. In this work we compare model-predicted CME arrival times to in-situ ICME shock observations near Earth (ACE, Wind) and at STEREO-A and -B for simulations completed between March 2010 and July 2016 (over 1500 runs). We report hit, miss, false alarm, and correct rejection statistics for all three spacecraft. For hits we compute the bias, RMSE, and average absolute CME arrival-time error, and the dependence of these errors on CME input parameters. We compare the predicted geomagnetic storm strength (Kp index) to the CME arrival-time error for Earth-directed CMEs. The predicted Kp index is computed using the WSA-ENLIL+Cone plasma parameters at Earth with a modified Newell et al. (2007) coupling function. We also explore the impact of multi-spacecraft observations on the CME parameters used to initialize the model by comparing model validation results before and after the STEREO-B communication loss (since September 2014) and the STEREO-A side-lobe operations (August 2014-December 2015). This model validation exercise has significance for future space weather mission planning, such as L5 missions.
Thorndahl, Søren; Nielsen, Jesper Ellerbæk; Jensen, David Getreuer
2016-12-01
Flooding produced by high-intensity local rainfall and exceedance of drainage system capacity can have severe impacts in cities. In order to prepare cities for these types of flood events - especially in the future climate - it is valuable to be able to simulate them numerically, both historically and in real time. There is a rather untested potential in real-time prediction of urban floods. In this paper, radar observations with different spatial and temporal resolutions, radar nowcasts with 0-2 h lead time, and numerical weather models with lead times up to 24 h are used as inputs to an integrated flood and drainage systems model in order to investigate the relative differences between inputs in predicting future floods. The system is tested on the small town of Lystrup in Denmark, which was flooded in 2012 and 2014. Results show that it is possible to generate detailed flood maps in real time with high-resolution radar rainfall data, but forecast performance in predicting floods is rather limited at lead times of more than half an hour.
Prediction of mortality rates using a model with stochastic parameters
NASA Astrophysics Data System (ADS)
Tan, Chon Sern; Pooi, Ah Hin
2016-10-01
Prediction of future mortality rates is crucial to insurance companies because they face longevity risks while providing retirement benefits to a population whose life expectancy is increasing. In the past literature, a time series model based on the multivariate power-normal distribution was applied to mortality data from the United States for the years 1933 through 2000 to forecast mortality rates for the years 2001 through 2010. In this paper, a more dynamic approach based on the multivariate time series is proposed, in which the model uses stochastic parameters that vary with time. The resulting prediction intervals obtained using the model with stochastic parameters perform better: apart from covering the observed future mortality rates well, they also tend to have distinctly shorter interval lengths.
Hamann, Hendrik F.; Hwang, Youngdeok; van Kessel, Theodore G.; Khabibrakhmanov, Ildar K.; Muralidhar, Ramachandran
2016-10-18
A method and a system to perform multi-model blending are described. The method includes obtaining one or more sets of predictions of historical conditions, the historical conditions corresponding with a time T that is historical in reference to the current time, and the one or more sets of predictions of the historical conditions being output by one or more models. The method also includes obtaining actual historical conditions, the actual historical conditions being measured conditions at the time T; assembling a training data set by designating the one or more sets of predictions of historical conditions as predictor variables and the actual historical conditions as response variables; and training a machine learning algorithm based on the training data set. The method further includes obtaining a blended model based on the machine learning algorithm.
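In its simplest linear form, the training step described above reduces to a least-squares fit of blend weights, with the model outputs as predictor variables and the measurements as the response. The two "models" and the observations below are synthetic assumptions, not the patent's data:

```python
import numpy as np

rng = np.random.default_rng(1)
truth = rng.normal(20.0, 5.0, 200)                  # measured conditions at times T
model_a = truth + rng.normal(2.0, 1.5, 200)         # hypothetical model with a bias
model_b = 0.8 * truth + rng.normal(0.0, 1.0, 200)   # hypothetical damped model

# Training set: model outputs are the predictor variables, measurements the response
X = np.column_stack([np.ones_like(truth), model_a, model_b])
beta, *_ = np.linalg.lstsq(X, truth, rcond=None)    # learned blend weights

blended = X @ beta
rmse_blend = float(np.sqrt(np.mean((blended - truth) ** 2)))
rmse_a = float(np.sqrt(np.mean((model_a - truth) ** 2)))
```

Because the blend is fit by least squares over a family that includes each input model unchanged, its in-sample RMSE cannot exceed that of any single model.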
A Final Approach Trajectory Model for Current Operations
NASA Technical Reports Server (NTRS)
Gong, Chester; Sadovsky, Alexander
2010-01-01
Predicting accurate trajectories with limited intent information is a challenge faced by air traffic management decision support tools in operation today. One such tool is the FAA's Terminal Proximity Alert system which is intended to assist controllers in maintaining safe separation of arrival aircraft during final approach. In an effort to improve the performance of such tools, two final approach trajectory models are proposed; one based on polynomial interpolation, the other on the Fourier transform. These models were tested against actual traffic data and used to study effects of the key final approach trajectory modeling parameters of wind, aircraft type, and weight class, on trajectory prediction accuracy. Using only the limited intent data available to today's ATM system, both the polynomial interpolation and Fourier transform models showed improved trajectory prediction accuracy over a baseline dead reckoning model. Analysis of actual arrival traffic showed that this improved trajectory prediction accuracy leads to improved inter-arrival separation prediction accuracy for longer look ahead times. The difference in mean inter-arrival separation prediction error between the Fourier transform and dead reckoning models was 0.2 nmi for a look ahead time of 120 sec, a 33 percent improvement, with a corresponding 32 percent improvement in standard deviation.
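A minimal version of the polynomial-interpolation idea, compared against a dead-reckoning baseline, can be sketched as follows; the track history is a hypothetical decelerating final-approach profile, not FAA traffic data:

```python
import numpy as np

# Hypothetical final-approach track: position (nmi to threshold) vs time (s)
t = np.arange(0.0, 60.0, 5.0)
x = 20.0 - 0.15 * t + 0.0008 * t ** 2        # decelerating profile (assumption)

coeff = np.polyfit(t, x, deg=2)              # polynomial trajectory model
t_pred = 90.0
poly_pred = np.polyval(coeff, t_pred)        # extrapolated position

v_last = (x[-1] - x[-2]) / (t[-1] - t[-2])   # dead reckoning: constant speed
dr_pred = x[-1] + v_last * (t_pred - t[-1])  # from the last two fixes

truth = 20.0 - 0.15 * t_pred + 0.0008 * t_pred ** 2
```

Here the quadratic fit recovers the deceleration exactly while dead reckoning does not; on real, noisy tracks the polynomial degree and history window must be chosen to balance noise against fidelity.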
Plant traits determine forest flammability
NASA Astrophysics Data System (ADS)
Zylstra, Philip; Bradstock, Ross
2016-04-01
Carbon and nutrient cycles in forest ecosystems are influenced by their inherent flammability - a property determined by the traits of the component plant species, which form the fuel and influence the microclimate of a fire. In the absence of a model capable of explaining the complexity of such a system, however, flammability is frequently represented by simple metrics such as surface fuel load. The implications of modelling fire-flammability feedbacks using surface fuel load were examined and compared to a biophysical, mechanistic model - the Forest Flammability Model (FFM) - that incorporates the influence of structural plant traits (e.g. crown shape and spacing) and leaf traits (e.g. thickness, dimensions and moisture). Fuels burn with values of combustibility modelled from leaf traits, transferring convective heat along vectors defined by flame angle, with plume temperatures that decrease with distance from the flame. Flames are re-calculated in one-second time steps: new leaves within the plant, neighbouring plants or higher strata ignite when the modelled time to ignition is reached, and other leaves extinguish when their modelled flame duration is exceeded. The relative influences of surface fuels, vegetation structure and plant leaf traits were examined by comparing flame heights modelled using three treatments that successively added these components within the FFM. Validation was performed across a diverse range of eucalypt forests burnt under widely varying conditions during a forest fire in the Brindabella Ranges west of Canberra (ACT) in 2003. Flame heights ranged from 10 cm to more than 20 m, with an average of 4 m. When modelled from surface fuels alone, flame heights were on average 1.5 m smaller than observed values and were predicted within the error range 28% of the time. The addition of plant structure produced predicted flame heights that were on average 1.5 m larger than observed, but correct 53% of the time.
The over-prediction in this case was the result of a small number of large errors, in which higher strata such as the forest canopy were modelled to ignite but did not. The addition of leaf traits largely addressed this error, so that the mean flame-height over-prediction was reduced to 0.3 m and the fully parameterised FFM gave correct predictions 62% of the time. When small (<1 m) flames were excluded, the fully parameterised model correctly predicted flame heights 12 times more often than could be achieved using surface fuels alone, and the mean absolute error was 4 times smaller. Inadequate consideration of plant traits within a mechanistic framework introduces significant error into forest fire behaviour modelling. The FFM provides a solution to this, and an avenue by which plant trait information can be used to better inform Global Vegetation Models and the decision-making tools used to mitigate the impacts of fire.
Nonlinear modeling of chaotic time series: Theory and applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Casdagli, M.; Eubank, S.; Farmer, J.D.
1990-01-01
We review recent developments in the modeling and prediction of nonlinear time series. In some cases apparent randomness in time series may be due to chaotic behavior of a nonlinear but deterministic system. In such cases it is possible to exploit the determinism to make short term forecasts that are much more accurate than one could make from a linear stochastic model. This is done by first reconstructing a state space, and then using nonlinear function approximation methods to create a dynamical model. Nonlinear models are valuable not only as short term forecasters, but also as diagnostic tools for identifying and quantifying low-dimensional chaotic behavior. During the past few years methods for nonlinear modeling have developed rapidly, and have already led to several applications where nonlinear models motivated by chaotic dynamics provide superior predictions to linear models. These applications include prediction of fluid flows, sunspots, mechanical vibrations, ice ages, measles epidemics and human speech. 162 refs., 13 figs.
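The reconstruct-then-approximate recipe can be demonstrated on the logistic map, a deterministic but chaotic system: a nearest-neighbor forecaster (a simple local approximation in the reconstructed state space) easily beats a naive persistence baseline. All settings below are illustrative:

```python
import numpy as np

# Logistic map: a deterministic system that is chaotic at r = 4
r, n = 4.0, 1200
x = np.empty(n)
x[0] = 0.3
for i in range(1, n):
    x[i] = r * x[i - 1] * (1.0 - x[i - 1])

train, test = x[:1000], x[1000:]

def forecast(history, point, k=5):
    """Local approximation: average the successors of the k nearest states."""
    states, successors = history[:-1], history[1:]
    idx = np.argsort(np.abs(states - point))[:k]
    return successors[idx].mean()

preds = np.array([forecast(train, v) for v in test[:-1]])
err_nl = float(np.sqrt(np.mean((preds - test[1:]) ** 2)))
err_persist = float(np.sqrt(np.mean((test[1:] - test[:-1]) ** 2)))  # naive baseline
```

For higher-dimensional systems, the scalar state would be replaced by a delay-embedding vector, but the local-approximation step is the same.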
NASA Astrophysics Data System (ADS)
Cappelli, Mark; Young, Christopher
2016-10-01
We present continued efforts towards introducing physical models for cross-magnetic-field electron transport into Hall thruster discharge simulations. In particular, we seek to evaluate whether such models accurately capture ion dynamics, both time-averaged and time-resolved, through comparisons with measured ion velocity distributions, which are now becoming available for several devices. Here, we describe a turbulent electron transport model that is integrated into 2-D hybrid fluid/PIC simulations of a 72 mm diameter laboratory thruster operating at 400 W. We also compare this model's predictions with one recently proposed by Lafleur et al. Introducing these models into 2-D hybrid simulations is relatively straightforward and leverages the existing framework for solving the electron fluid equations. The models are tested for their ability to capture the time-averaged experimental discharge current and its fluctuations due to ionization instabilities. Model predictions are also more rigorously evaluated against recent laser-induced fluorescence measurements of time-resolved ion velocity distributions.
Fific, Mario; Little, Daniel R; Nosofsky, Robert M
2010-04-01
We formalize and provide tests of a set of logical-rule models for predicting perceptual classification response times (RTs) and choice probabilities. The models are developed by synthesizing mental-architecture, random-walk, and decision-bound approaches. According to the models, people make independent decisions about the locations of stimuli along a set of component dimensions. Those independent decisions are then combined via logical rules to determine the overall categorization response. The time course of the independent decisions is modeled via random-walk processes operating along individual dimensions. Alternative mental architectures are used as mechanisms for combining the independent decisions to implement the logical rules. We derive fundamental qualitative contrasts for distinguishing among the predictions of the rule models and major alternative models of classification RT. We also use the models to predict detailed RT-distribution data associated with individual stimuli in tasks of speeded perceptual classification. PsycINFO Database Record (c) 2010 APA, all rights reserved.
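As a concrete sketch of the mental-architecture idea described above, the following simulates a serial self-terminating AND rule: each stimulus dimension is decided by its own biased random walk, and the second walk runs only if the first accepts. The drift, threshold, and trial count are illustrative assumptions, not fitted parameters from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def dimension_decision(drift=0.1, threshold=10.0):
    """Random walk for one stimulus dimension; returns (steps, accepted?)."""
    pos, steps = 0.0, 0
    while abs(pos) < threshold:
        pos += drift + rng.normal(0.0, 1.0)
        steps += 1
    return steps, pos > 0

def serial_and_rt():
    """Serial self-terminating AND: the second walk runs only if the first accepts."""
    t1, ok1 = dimension_decision()
    if not ok1:
        return t1, False          # early rejection terminates the response
    t2, ok2 = dimension_decision()
    return t1 + t2, bool(ok2)

rts = np.array([serial_and_rt()[0] for _ in range(500)])
mean_rt = float(rts.mean())
```

Alternative architectures (parallel exhaustive, coactive) change only how the per-dimension finishing times are combined, which is what generates the qualitative RT contrasts the paper derives.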
Green, Jasmine; Liem, Gregory Arief D; Martin, Andrew J; Colmar, Susan; Marsh, Herbert W; McInerney, Dennis
2012-10-01
The study tested three theoretically/conceptually hypothesized longitudinal models of academic processes leading to academic performance. Based on a longitudinal sample of 1866 high-school students across two consecutive years of high school (Time 1 and Time 2), the model with the most superior heuristic value demonstrated: (a) academic motivation and self-concept positively predicted attitudes toward school; (b) attitudes toward school positively predicted class participation and homework completion and negatively predicted absenteeism; and (c) class participation and homework completion positively predicted test performance whilst absenteeism negatively predicted test performance. Taken together, these findings provide support for the relevance of the self-system model and, particularly, the importance of examining the dynamic relationships amongst engagement factors of the model. The study highlights implications for educational and psychological theory, measurement, and intervention. Copyright © 2012 The Foundation for Professionals in Services for Adolescents. Published by Elsevier Ltd. All rights reserved.
[Application of an artificial neural network in the design of sustained-release dosage forms].
Wei, X H; Wu, J J; Liang, W Q
2001-09-01
The aim was to use artificial neural networks (ANNs), implemented with the Matlab 5.1 toolboxes, to predict the formulations of sustained-release tablets. The solubilities of nine drugs and various HPMC:dextrin ratios for 63 tablet formulations were used as the ANN model input, and the in vitro cumulative release at 6 sampling times was used as output. The ANN model was constructed by selecting the optimal number of iterations (25) and a model structure with one hidden layer of five nodes. The optimized ANN model was then used to predict formulations from desired target in vitro dissolution-time profiles. Profiles predicted from the ANN-derived formulations were closely similar to the target profiles. The ANN can thus be used for predicting the dissolution profiles of sustained-release dosage forms and for the design of optimal formulations.
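A network of the shape the abstract describes - two inputs, one hidden layer of five nodes, six outputs - can be sketched with plain gradient descent. The training data, tanh activation, learning rate, and iteration count are all assumptions for illustration, not the study's Matlab configuration:

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical training set: inputs are drug solubility and HPMC:dextrin ratio
# (scaled to [0, 1]); outputs are cumulative release at 6 sampling times
X = rng.uniform(0.0, 1.0, size=(63, 2))
times = np.linspace(1.0, 6.0, 6)
Y = 1.0 - np.exp(-np.outer(X[:, 0] * (1.5 - X[:, 1]), times))  # fake smooth profiles

# One hidden layer with five nodes, as in the abstract; tanh activation (assumption)
W1 = rng.normal(0.0, 0.5, (2, 5))
b1 = np.zeros(5)
W2 = rng.normal(0.0, 0.5, (5, 6))
b2 = np.zeros(6)

lr, losses = 0.05, []
for _ in range(500):
    H = np.tanh(X @ W1 + b1)              # hidden layer
    P = H @ W2 + b2                       # linear output layer
    err = P - Y
    losses.append(float(np.mean(err ** 2)))
    # Backpropagation of the mean-squared-error gradient
    dW2 = H.T @ err / len(X)
    db2 = err.mean(axis=0)
    dH = (err @ W2.T) * (1.0 - H ** 2)    # tanh derivative
    dW1 = X.T @ dH / len(X)
    db1 = dH.mean(axis=0)
    W2 -= lr * dW2; b2 -= lr * db2
    W1 -= lr * dW1; b1 -= lr * db1
```

Inverting the trained network - searching input space for a formulation whose predicted profile matches a target - is the "formulation design" step the abstract describes.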
NASA Astrophysics Data System (ADS)
Omrani, Hiba; Drobinski, Philippe; Dubos, Thomas
2010-05-01
In this work, we consider the effect of indiscriminate and spectral nudging on the large and small scales of an idealized model simulation. The model is a two-layer quasi-geostrophic model on the beta-plane, driven at its boundaries by the "global" version with periodic boundary conditions. This setup mimics the configuration used for regional climate modelling. The effect of large-scale nudging is studied using the "perfect model" approach. Two sets of experiments are performed: (1) the effect of nudging is investigated with a "global" high-resolution two-layer quasi-geostrophic model driven by a low-resolution two-layer quasi-geostrophic model; (2) similar simulations are conducted with a two-layer quasi-geostrophic Limited Area Model (LAM), where the size of the LAM domain comes into play in addition to the factors in the first set of simulations. The study shows that the indiscriminate nudging time that minimizes the error at both the large and small scales is close to the predictability time. For spectral nudging, the optimum nudging time should in principle tend to zero, since the best large-scale dynamics is supposed to be given by the driving fields; however, because the driving large-scale fields are generally available at a much lower frequency than the model time step (e.g., 6-hourly analyses), with only a basic interpolation between fields, the optimum nudging time differs from zero while remaining smaller than the predictability time.
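Nudging adds a relaxation tendency -(x - x_ref)/tau toward the driving fields; a scalar toy integration (all values illustrative, with the free dynamics omitted) shows how the nudging time tau controls how tightly the model tracks the driver:

```python
def nudge(tau, steps=100, dt=0.1, x_ref=5.0):
    """Integrate only the nudging tendency -(x - x_ref) / tau, starting from x = 0."""
    x = 0.0
    for _ in range(steps):
        x += dt * (-(x - x_ref) / tau)
    return x

# A short nudging time pins the model to the driving field; a long one tracks loosely
x_fast, x_slow = nudge(tau=0.5), nudge(tau=5.0)
```

The trade-off studied in the paper arises because a tau this short also suppresses the small scales the regional model is supposed to generate, which is why the optimum sits near the predictability time for indiscriminate nudging.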
Short time ahead wind power production forecast
NASA Astrophysics Data System (ADS)
Sapronova, Alla; Meissner, Catherine; Mana, Matteo
2016-09-01
An accurate prediction of wind power output is crucial for efficient coordination of cooperative energy production from different sources. Long-horizon prediction (6 to 24 hours ahead) of wind power for onshore parks can be achieved with a coupled model that bridges mesoscale weather prediction data and computational fluid dynamics. For shorter horizons (less than one hour ahead), the accuracy of a predictive model that utilizes hourly weather data decreases, because the higher-frequency fluctuations of the wind speed are lost when data are averaged over an hour. Since wind speed can vary by up to 50% in magnitude over a period of 5 minutes, these higher-frequency variations of wind speed and direction have to be taken into account for an accurate short-term energy production forecast. In this work a new model for wind power production forecasts 5 to 30 minutes ahead is presented. The model is based on machine learning techniques and a categorization approach, using the historical park production time series and the hourly numerical weather forecast.
In this paper, the concept of scale analysis is applied to evaluate ozone predictions from two regional-scale air quality models. To this end, seasonal time series of observations and predictions from the RAMS3b/UAM-V and MM5/MAQSIP (SMRAQ) modeling systems for ozone were spectra...
Pagán, Josué; Risco-Martín, José L; Moya, José M; Ayala, José L
2016-08-01
Prediction of symptomatic crises in chronic diseases allows decisions to be taken before the symptoms occur, such as the intake of drugs to avoid the symptoms or the activation of medical alarms. The prediction horizon is in this case an important parameter, as it must accommodate the pharmacokinetics of medications or the response time of medical services. This paper presents a study of the prediction limits of a chronic disease with symptomatic crises: migraine. For that purpose, this work develops a methodology to build predictive migraine models and to improve these predictions beyond the limits of the initial models. The maximum prediction horizon is analyzed, and its dependency on the selected features is studied. A strategy for model selection is proposed to tackle the trade-off between conservative but robust predictive models and less accurate predictions with longer horizons. The obtained results show a prediction horizon close to 40 min, which is in the time range of the drug pharmacokinetics. Experiments were performed in a realistic scenario in which input data were acquired in an ambulatory clinical study through the deployment of a non-intrusive Wireless Body Sensor Network. Our results provide an effective methodology for the selection of the future horizon in the development of prediction algorithms for diseases experiencing symptomatic crises. Copyright © 2016 Elsevier Inc. All rights reserved.
Modeling Rabbit Responses to Single and Multiple Aerosol ...
Survival models are developed here to predict response and time-to-response for mortality in rabbits following exposures to single or multiple aerosol doses of Bacillus anthracis spores. Hazard function models were developed for a multiple dose dataset to predict the probability of death through specifying dose-response functions and the time between exposure and the time-to-death (TTD). Among the models developed, the best-fitting survival model (baseline model) has an exponential dose-response model with a Weibull TTD distribution. Alternative models assessed employ different underlying dose-response functions and use the assumption that, in a multiple dose scenario, earlier doses affect the hazard functions of each subsequent dose. In addition, published mechanistic models are analyzed and compared with models developed in this paper. None of the alternative models that were assessed provided a statistically significant improvement in fit over the baseline model. The general approach utilizes simple empirical data analysis to develop parsimonious models with limited reliance on mechanistic assumptions. The baseline model predicts TTDs consistent with reported results from three independent high-dose rabbit datasets. More accurate survival models depend upon future development of dose-response datasets specifically designed to assess potential multiple dose effects on response and time-to-response. The process used in this paper to dev
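The structure of the baseline model, an exponential dose-response paired with a Weibull time-to-death distribution, can be sketched as below. The parameter values are hypothetical, chosen only for illustration, not the fitted values reported in the study.

```python
import math

def p_response(dose, k):
    """Exponential dose-response: probability that an inhaled dose is eventually lethal."""
    return 1.0 - math.exp(-k * dose)

def weibull_cdf(t, shape, scale):
    """Weibull CDF for the time-to-death (TTD), conditional on death occurring."""
    return 1.0 - math.exp(-((t / scale) ** shape)) if t > 0 else 0.0

def p_death_by(t, dose, k, shape, scale):
    """Probability of death by time t: dose-response times the TTD distribution."""
    return p_response(dose, k) * weibull_cdf(t, shape, scale)

# Hypothetical parameters (scale in days), for illustration only:
k, shape, scale = 1e-6, 2.0, 4.0
print(round(p_death_by(7.0, 5e6, k, shape, scale), 3))
```

Multiple-dose variants would modify the hazard of each subsequent dose rather than multiply independent terms, as the abstract notes.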
Composite Stress Rupture: A New Reliability Model Based on Strength Decay
NASA Technical Reports Server (NTRS)
Reeder, James R.
2012-01-01
A model is proposed to estimate reliability for stress rupture of composite overwrap pressure vessels (COPVs) and similar composite structures. This new reliability model is generated by assuming a strength degradation (or decay) over time. The model suggests that most of the strength decay occurs late in life. The strength decay model will be shown to predict a response similar to that predicted by a traditional reliability model for stress rupture based on tests at a single stress level. In addition, the model predicts that even though there is strength decay due to proof loading, a significant overall increase in reliability is gained by eliminating any weak vessels, which would fail early. The model predicts that there should be significant periods of safe life following proof loading, because time is required for the strength to decay from the proof stress level to the subsequent loading level. Suggestions for testing the strength decay reliability model have been made. If the strength decay reliability model predictions are shown through testing to be accurate, COPVs may be designed to carry a higher level of stress than is currently allowed, which will enable the production of lighter structures.
An improved grey model for the prediction of real-time GPS satellite clock bias
NASA Astrophysics Data System (ADS)
Zheng, Z. Y.; Chen, Y. Q.; Lu, X. S.
2008-07-01
In real-time GPS precise point positioning (PPP), reliable real-time prediction of satellite clock bias (SCB) is key to implementing real-time GPS PPP. The behavior of space-borne GPS atomic clocks is difficult to characterize because they are high-frequency devices that are sensitive and easily perturbed, and their variations are hard to describe analytically; this matches the premise of grey model (GM) theory, i.e., the evolution of SCB can be treated as a grey system. First, motivated by the limitations of quadratic polynomial (QP) models and the traditional GM for predicting SCB, a modified GM(1,1) is put forward in this paper to predict GPS SCB. Then, taking GPS SCB data as an example, we analyze clock bias prediction with different sampling intervals, the relationship between the GM exponent and prediction accuracy, and the precision of the GM compared with QP, and derive general rules relating SCB type to the GM exponent. Finally, to test the reliability and validity of the proposed modified GM, we analyze its prediction precision using the IGS clock bias ephemeris product as reference. The results show that the modified GM is reliable and valid for predicting GPS SCB and can provide high-precision SCB predictions for real-time GPS PPP.
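The GM(1,1) forecasting step follows a standard recipe: accumulate the series, fit the grey coefficients by least squares, solve the whitened equation, then difference back. A minimal sketch of the classical GM(1,1) (not the paper's modified variant), run on a made-up clock-bias-like series:

```python
import numpy as np

def gm11_forecast(x0, steps):
    """Classical GM(1,1): fit on a positive series x0, forecast `steps` values ahead."""
    x0 = np.asarray(x0, dtype=float)
    x1 = np.cumsum(x0)                          # accumulated generating operation (AGO)
    z1 = 0.5 * (x1[1:] + x1[:-1])               # background values
    B = np.column_stack([-z1, np.ones_like(z1)])
    a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]   # development / control coefficients
    k = np.arange(len(x0) + steps)
    x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a  # time response of the whitened ODE
    x0_hat = np.diff(x1_hat, prepend=0.0)       # inverse AGO restores the original scale
    return x0_hat[len(x0):]

# Hypothetical clock-bias-like series (ns); an exponential-trend series is recovered well:
series = [10.0, 10.5, 11.0, 11.6, 12.2, 12.8]
print(gm11_forecast(series, 2))
```

The modified GM in the paper alters this basic scheme; the sketch only shows the classical baseline it builds on.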
Models for short term malaria prediction in Sri Lanka
Briët, Olivier JT; Vounatsou, Penelope; Gunawardena, Dissanayake M; Galappaththy, Gawrie NL; Amerasinghe, Priyanie H
2008-01-01
Background Malaria in Sri Lanka is unstable and fluctuates in intensity both spatially and temporally. Although the case counts are dwindling at present, given the past history of resurgence of outbreaks despite effective control measures, the control programmes have to stay prepared. The availability of long time series of monitored/diagnosed malaria cases allows for the study of forecasting models, with an aim to developing a forecasting system which could assist in the efficient allocation of resources for malaria control. Methods Exponentially weighted moving average models, autoregressive integrated moving average (ARIMA) models with seasonal components, and seasonal multiplicative autoregressive integrated moving average (SARIMA) models were compared on monthly time series of district malaria cases for their ability to predict the number of malaria cases one to four months ahead. The addition of covariates such as the number of malaria cases in neighbouring districts or rainfall were assessed for their ability to improve prediction of selected (seasonal) ARIMA models. Results The best model for forecasting and the forecasting error varied strongly among the districts. The addition of rainfall as a covariate improved prediction of selected (seasonal) ARIMA models modestly in some districts but worsened prediction in other districts. Improvement by adding rainfall was more frequent at larger forecasting horizons. Conclusion Heterogeneity of patterns of malaria in Sri Lanka requires regionally specific prediction models. Prediction error was large at a minimum of 22% (for one of the districts) for one month ahead predictions. The modest improvement made in short term prediction by adding rainfall as a covariate to these prediction models may not be sufficient to merit investing in a forecasting system for which rainfall data are routinely processed. PMID:18460204
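Of the model families compared, the exponentially weighted moving average is the simplest to sketch for one-step-ahead case forecasting. The smoothing constant below is a hypothetical value; in practice it would be tuned per district, as the abstract's district-specific findings suggest.

```python
def ewma_forecast(cases, alpha=0.3):
    """One-step-ahead forecast of monthly case counts via exponentially weighted
    moving average: level_t = alpha * y_t + (1 - alpha) * level_{t-1}.
    alpha is a hypothetical smoothing constant, tuned per district in practice."""
    level = cases[0]
    for y in cases[1:]:
        level = alpha * y + (1 - alpha) * level
    return level

monthly_cases = [120, 95, 80, 130, 160, 140, 110]
print(round(ewma_forecast(monthly_cases), 1))  # prints 124.0
```

Seasonal (S)ARIMA models with rainfall covariates, as compared in the study, require a full time-series library rather than this few-line recursion.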
Predictive Monitoring for Improved Management of Glucose Levels
Reifman, Jaques; Rajaraman, Srinivasan; Gribok, Andrei; Ward, W. Kenneth
2007-01-01
Background Recent developments and expected near-future improvements in continuous glucose monitoring (CGM) devices provide opportunities to couple them with mathematical forecasting models to produce predictive monitoring systems for early, proactive glycemia management of diabetes mellitus patients before glucose levels drift to undesirable levels. This article assesses the feasibility of data-driven models to serve as the forecasting engine of predictive monitoring systems. Methods We investigated the capabilities of data-driven autoregressive (AR) models to (1) capture the correlations in glucose time-series data, (2) make accurate predictions as a function of prediction horizon, and (3) be made portable from individual to individual without any need for model tuning. The investigation is performed by employing CGM data from nine type 1 diabetic subjects collected over a continuous 5-day period. Results With CGM data serving as the gold standard, AR model-based predictions of glucose levels assessed over nine subjects with Clarke error grid analysis indicated that, for a 30-minute prediction horizon, individually tuned models yield 97.6 to 100.0% of data in the clinically acceptable zones A and B, whereas cross-subject, portable models yield 95.8 to 99.7% of data in zones A and B. Conclusions This study shows that, for a 30-minute prediction horizon, data-driven AR models provide sufficiently-accurate and clinically-acceptable estimates of glucose levels for timely, proactive therapy and should be considered as the modeling engine for predictive monitoring of patients with type 1 diabetes mellitus. It also suggests that AR models can be made portable from individual to individual with minor performance penalties, while greatly reducing the burden associated with model tuning and data collection for model development. PMID:19885110
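A data-driven AR predictor of the kind assessed here can be sketched as ordinary least squares on lagged values plus recursive multi-step forecasting. The glucose trace, AR order, and 5-minute sampling below are illustrative assumptions, not the study's CGM data; at 5-minute samples, a 30-minute horizon is 6 steps.

```python
import numpy as np

def fit_ar(series, order):
    """Fit AR(p) coefficients by ordinary least squares on lagged values."""
    X = np.column_stack([series[i:len(series) - order + i] for i in range(order)])
    y = series[order:]
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef

def predict_ahead(series, coef, steps):
    """Recursive multi-step prediction: feed each forecast back in as an input."""
    hist = list(series[-len(coef):])
    for _ in range(steps):
        hist.append(float(np.dot(coef, hist[-len(coef):])))
    return hist[len(coef):]

# Synthetic glucose-like trace (mg/dL, 5-min samples):
t = np.arange(200)
glucose = 120 + 25 * np.sin(2 * np.pi * t / 60)
coef = fit_ar(glucose, order=8)
print(predict_ahead(glucose, coef, steps=6))
```

Cross-subject portability, as reported in the study, would correspond to reusing `coef` fitted on one subject for another.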
Real-time emissions from construction equipment compared with model predictions.
Heidari, Bardia; Marr, Linsey C
2015-02-01
The construction industry is a large source of greenhouse gases and other air pollutants. Measuring and monitoring real-time emissions will provide practitioners with information to assess environmental impacts and improve the sustainability of construction. We employed a portable emission measurement system (PEMS) for real-time measurement of carbon dioxide (CO2), nitrogen oxides (NOx), hydrocarbon, and carbon monoxide (CO) emissions from construction equipment to derive emission rates (mass of pollutant emitted per unit time) and emission factors (mass of pollutant emitted per unit volume of fuel consumed) under real-world operating conditions. Measurements were compared with emissions predicted by methodologies used in three models: NONROAD2008, OFFROAD2011, and a modal statistical model. Measured emission rates agreed with model predictions for some pieces of equipment but were up to 100 times lower for others. Much of the difference was driven by lower fuel consumption rates than predicted. Emission factors during idling and hauling were significantly different from each other and from those of other moving activities, such as digging and dumping. It appears that operating conditions introduce considerable variability in emission factors. Results of this research will aid researchers and practitioners in improving current emission estimation techniques, frameworks, and databases.
NASA Technical Reports Server (NTRS)
Curry, Timothy J.; Batterson, James G. (Technical Monitor)
2000-01-01
Low order equivalent system (LOES) models for the Tu-144 supersonic transport aircraft were identified from flight test data. The mathematical models were given in terms of transfer functions with a time delay by the military standard MIL-STD-1797A, "Flying Qualities of Piloted Aircraft," and the handling qualities were predicted from the estimated transfer function coefficients. The coefficients and the time delay in the transfer functions were estimated using a nonlinear equation error formulation in the frequency domain. Flight test data from pitch, roll, and yaw frequency sweeps at various flight conditions were used for parameter estimation. Flight test results are presented in terms of the estimated parameter values, their standard errors, and output fits in the time domain. Data from doublet maneuvers at the same flight conditions were used to assess the predictive capabilities of the identified models. The identified transfer function models fit the measured data well and demonstrated good prediction capabilities. The Tu-144 was predicted to be between level 2 and 3 for all longitudinal maneuvers and level 1 for all lateral maneuvers. High estimates of the equivalent time delay in the transfer function model caused the poor longitudinal ratings.
McCarthy, John F.; Katz, Ira R.; Thompson, Caitlin; Kemp, Janet; Hannemann, Claire M.; Nielson, Christopher; Schoenbaum, Michael
2015-01-01
Objectives. The Veterans Health Administration (VHA) evaluated the use of predictive modeling to identify patients at risk for suicide and to supplement ongoing care with risk-stratified interventions. Methods. Suicide data came from the National Death Index. Predictors were measures from VHA clinical records incorporating patient-months from October 1, 2008, to September 30, 2011, for all suicide decedents and 1% of living patients, divided randomly into development and validation samples. We used data on all patients alive on September 30, 2010, to evaluate predictions of suicide risk over 1 year. Results. Modeling demonstrated that suicide rates were 82 and 60 times greater than the rate in the overall sample in the highest 0.01% stratum for calculated risk for the development and validation samples, respectively; 39 and 30 times greater in the highest 0.10%; 14 and 12 times greater in the highest 1.00%; and 6.3 and 5.7 times greater in the highest 5.00%. Conclusions. Predictive modeling can identify high-risk patients who were not identified on clinical grounds. VHA is developing modeling to enhance clinical care and to guide the delivery of preventive interventions. PMID:26066914
NASA Astrophysics Data System (ADS)
Ceballos-Núñez, Verónika; Richardson, Andrew D.; Sierra, Carlos A.
2018-03-01
The global carbon cycle is strongly controlled by the source/sink strength of vegetation as well as the capacity of terrestrial ecosystems to retain this carbon. These dynamics, as well as processes such as the mixing of old and newly fixed carbon, have been studied using ecosystem models, but different assumptions regarding the carbon allocation strategies and other model structures may result in highly divergent model predictions. We assessed the influence of three different carbon allocation schemes on the C cycling in vegetation. First, we described each model with a set of ordinary differential equations. Second, we used published measurements of ecosystem C compartments from the Harvard Forest Environmental Measurement Site to find suitable parameters for the different model structures. And third, we calculated C stocks, release fluxes, radiocarbon values (based on the bomb spike), ages, and transit times. We obtained model simulations in accordance with the available data, but the time series of C in foliage and wood need to be complemented with other ecosystem compartments in order to reduce the high parameter collinearity that we observed, and reduce model equifinality. Although the simulated C stocks in ecosystem compartments were similar, the different model structures resulted in very different predictions of age and transit time distributions. In particular, the inclusion of two storage compartments resulted in the prediction of a system mean age that was 12-20 years older than in the models with one or no storage compartments. The age of carbon in the wood compartment of this model was also distributed towards older ages, whereas fast cycling compartments had an age distribution that did not exceed 5 years. As expected, models with C distributed towards older ages also had longer transit times. 
These results suggest that ages and transit times, which can be indirectly measured using isotope tracers, serve as important diagnostics of model structure and could largely help to reduce uncertainties in model predictions. Furthermore, by considering age and transit times of C in vegetation compartments as distributions, not only their mean values, we obtain additional insights into the temporal dynamics of carbon use, storage, and allocation to plant parts, which not only depends on the rate at which this C is transferred in and out of the compartments but also on the stochastic nature of the process itself.
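For a linear, autonomous compartmental model at steady state, the mean transit time and mean system age discussed above have closed-form expressions in terms of the compartmental matrix. A sketch with a hypothetical three-pool vegetation model (foliage, wood, storage); the rates and input below are assumptions for illustration, not the fitted Harvard Forest parameters:

```python
import numpy as np

# Hypothetical compartmental matrix B (cycling rates and transfers, yr^-1)
# and fixed-carbon input vector u (PgC/yr); illustrative values only.
B = np.array([[-1.0,  0.0,  0.2],
              [ 0.1, -0.05, 0.0],
              [ 0.2,  0.0, -0.5]])
u = np.array([1.0, 0.0, 0.0])

x_star = np.linalg.solve(-B, u)            # steady-state stocks, x* = -B^-1 u
mean_transit = x_star.sum() / u.sum()      # mean transit time = total stock / input flux
mean_age = np.linalg.solve(-B, x_star).sum() / x_star.sum()   # mean system age
print(round(mean_transit, 2), round(mean_age, 2))
```

With these values the mean age exceeds the mean transit time by a large margin, consistent with the abstract's point that slow storage and wood pools shift the age distribution towards older carbon.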
NASA Astrophysics Data System (ADS)
Sauder, Heather Scot
To reach the high standards set for renewable energy production in the US and around the globe, wind turbines with taller towers and longer blades are being designed for onshore and offshore wind developments to capture more energy from higher winds aloft and a larger rotor diameter. However, amongst all the wind turbine components, wind turbine blades are still the most prone to damage. Given that wind turbine blades experience dynamic loads from multiple sources, there is a need to be able to predict the real-time load, stress distribution and response of the blade in a given wind environment for damage, flutter and fatigue life predictions. Current methods of wind-induced response analysis for wind turbine blades use approximations that are not suitable for wind turbine blade airfoils, which are thick, and therefore lead to inaccurate life predictions. Additionally, a time-domain formulation can prove to be especially advantageous for predicting aerodynamic loads on wind turbine blades since they operate in a turbulent atmospheric boundary layer. This will help to analyze the blades on wind turbines that operate individually or in a farm setting where they experience high turbulence in the wake of another wind turbine. A time-domain formulation is also useful for examining the effects of gusty winds that are transient in nature, as in gust fronts, thunderstorms, or extreme events such as hurricanes, microbursts, and tornadoes. Time-domain methods present the opportunity for real-time health monitoring strategies that can easily be used with finite element methods for prediction of fatigue life or onset of flutter instability. The purpose of the proposed work is to develop a robust computational model to predict the loads, stresses and response of a wind turbine blade in operating and extreme wind conditions.
The model can be used to inform health monitoring strategies for preventative maintenance and provide a realistic number of stress cycles that the blade will experience for fatigue life prediction procedures. To fill in the gaps in the existing knowledge and meet the overall goal of the proposed research, the following objectives were accomplished: (a) improve the existing aeroelastic (motion- and turbulence-induced) load models to predict the response of wind turbine blade airfoils to understand its behavior in turbulent wind, (b) understand, model and predict the response of wind turbine blades in transient or gusty wind, boundary-layer wind and incoherent wind over the span of the blade, (c) understand the effects of aero-structural coupling between the along-wind, cross-wind and torsional vibrations, and finally (d) develop a computational tool using the improved time-domain load model to predict the real-time load, stress distribution and response of a given wind turbine blade during operating and parked conditions subject to a specific wind environment both in a short and long term for damage, flutter and fatigue life predictions.
Predictive modeling of mosquito abundance and dengue transmission in Kenya
NASA Astrophysics Data System (ADS)
Caldwell, J.; Krystosik, A.; Mutuku, F.; Ndenga, B.; LaBeaud, D.; Mordecai, E.
2017-12-01
Approximately 390 million people are exposed to dengue virus every year, and with no widely available treatments or vaccines, predictive models of disease risk are valuable tools for vector control and disease prevention. The aim of this study was to modify and improve climate-driven predictive models of dengue vector abundance (Aedes spp. mosquitoes) and viral transmission to people in Kenya. We simulated disease transmission using a temperature-driven mechanistic model and compared model predictions with vector trap data for larvae, pupae, and adult mosquitoes collected between 2014 and 2017 at four sites across urban and rural villages in Kenya. We tested predictive capacity of our models using four temperature measurements (minimum, maximum, range, and anomalies) across daily, weekly, and monthly time scales. Our results indicate seasonal temperature variation is a key driving factor of Aedes mosquito abundance and disease transmission. These models can help vector control programs target specific locations and times when vectors are likely to be present, and can be modified for other Aedes-transmitted diseases and arboviral endemic regions around the world.
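Temperature-driven mechanistic models of this kind typically describe each mosquito or virus trait with a unimodal thermal response, commonly the Briere function, which is zero outside a lower and upper thermal limit. The parameters below are hypothetical placeholders, not the study's fitted values:

```python
import math

def briere(T, c, T0, Tm):
    """Briere thermal response: zero outside (T0, Tm), unimodal within."""
    if T <= T0 or T >= Tm:
        return 0.0
    return c * T * (T - T0) * math.sqrt(Tm - T)

# Hypothetical parameters for a biting-rate-like trait (per day):
c, T0, Tm = 2.0e-4, 13.0, 40.0
rates = {T: round(briere(T, c, T0, Tm), 3) for T in (10, 20, 29, 38)}
print(rates)
```

Composing such trait curves over observed temperature series is what lets the model predict when and where Aedes abundance and transmission should peak.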
NASA Astrophysics Data System (ADS)
Luo, Jing-Jia; Masson, Sebastien; Behera, Swadhin; Shingu, Satoru; Yamagata, Toshio
2005-11-01
Predictabilities of tropical climate signals are investigated using a relatively high resolution Scale Interaction Experiment Frontier Research Center for Global Change (FRCGC) coupled GCM (SINTEX-F). Five ensemble forecast members are generated by perturbing the model's coupling physics, which accounts for the uncertainties of both initial conditions and model physics. Because of the model's good performance in simulating the climatology and ENSO in the tropical Pacific, a simple coupled SST-nudging scheme generates realistic thermocline and surface wind variations in the equatorial Pacific. Several westerly and easterly wind bursts in the western Pacific are also captured. Hindcast results for the period 1982-2001 show a high predictability of ENSO. All past El Niño and La Niña events, including the strongest 1997/98 warm episode, are successfully predicted with anomaly correlation coefficient (ACC) skill scores above 0.7 at the 12-month lead time. The predicted signals of some particular events, however, become weak with a delay in the phase at mid and long lead times. This is found to be related to the intraseasonal wind bursts that are unpredicted beyond a few months of lead time. The model forecasts also show a "spring prediction barrier" similar to that in observations. Spatial SST anomalies, teleconnection, and global drought/flood during three different phases of ENSO are successfully predicted at 9-12-month lead times. In the tropical North Atlantic and southwestern Indian Ocean, where ENSO has predominant influences, the model shows skillful predictions at 7-12-month lead times. The distinct signal of the Indian Ocean dipole (IOD) event in 1994 is predicted at the 6-month lead time. SST anomalies near the western coast of Australia are also predicted beyond the 12-month lead time because of pronounced decadal signals there.
NASA Astrophysics Data System (ADS)
Hipp, J. R.; Ballard, S.; Begnaud, M. L.; Encarnacao, A. V.; Young, C. J.; Phillips, W. S.
2015-12-01
Recently our combined SNL-LANL research team has succeeded in developing a global, seamless 3D tomographic P- and S-velocity model (SALSA3D) that provides superior first P and first S travel time predictions at both regional and teleseismic distances. However, given the variable data quality and uneven data sampling associated with this type of model, it is essential that there be a means to calculate high-quality estimates of the path-dependent variance and covariance associated with the predicted travel times of ray paths through the model. In this paper, we describe a methodology for accomplishing this by exploiting the full model covariance matrix and show examples of path-dependent travel time prediction uncertainty computed from our latest tomographic model. Typical global 3D SALSA3D models have on the order of 1/2 million nodes, so the challenge in calculating the covariance matrix is formidable: 0.9 TB of storage for 1/2 of a symmetric matrix, necessitating an out-of-core (OOC) blocked matrix solution technique. With our approach the tomography matrix (G, which includes a prior model covariance constraint) is multiplied by its transpose (G^T G) and written in a blocked sub-matrix fashion. We employ a distributed parallel solution paradigm that solves for (G^T G)^-1 by assigning blocks to individual processing nodes for matrix decomposition, update, and scaling operations. We first find the Cholesky decomposition of G^T G, which is subsequently inverted. Next, we employ OOC matrix multiplication methods to calculate the model covariance matrix from (G^T G)^-1 and an assumed data covariance matrix. Given the model covariance matrix, we solve for the travel-time covariance associated with arbitrary ray paths by summing the model covariance along both ray paths. Setting the paths equal and taking the square root yields the travel time prediction uncertainty for a single path.
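The final step, summing model covariance along ray paths, amounts to a quadratic form in the per-node path sensitivities. A toy sketch with a small dense covariance matrix and hypothetical sensitivity vectors (real SALSA3D models are far too large for dense storage, hence the out-of-core machinery described in the abstract):

```python
import numpy as np

def travel_time_cov(C, r1, r2):
    """Covariance between two ray-path travel times, given the model covariance
    matrix C and each path's per-node sensitivities (e.g. ray lengths per node)."""
    return r1 @ C @ r2

# Tiny illustrative model covariance (s^2 per node pair) and two hypothetical rays:
C = np.array([[0.04, 0.01, 0.0],
              [0.01, 0.09, 0.02],
              [0.0,  0.02, 0.16]])
r1 = np.array([1.0, 0.5, 0.0])   # hypothetical path touching nodes 1-2
r2 = np.array([0.0, 0.5, 1.0])   # hypothetical path touching nodes 2-3

sigma1 = np.sqrt(travel_time_cov(C, r1, r1))   # single-path prediction uncertainty
print(round(float(sigma1), 4), round(float(travel_time_cov(C, r1, r2)), 4))
```

Setting the two paths equal and taking the square root, as the abstract describes, is exactly the `sigma1` line.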
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rottmann, Joerg; Berbeco, Ross
Purpose: Precise prediction of respiratory motion is a prerequisite for real-time motion compensation techniques such as beam, dynamic couch, or dynamic multileaf collimator tracking. Collection of tumor motion data to train the prediction model is required for most algorithms. To avoid exposure of patients to additional dose from imaging during this procedure, the feasibility of training a linear respiratory motion prediction model with an external surrogate signal is investigated and its performance benchmarked against training the model with tumor positions directly. Methods: The authors implement a lung tumor motion prediction algorithm based on linear ridge regression that is suitable to overcome system latencies up to about 300 ms. Its performance is investigated on a data set of 91 patient breathing trajectories recorded from fiducial marker tracking during radiotherapy delivery to the lung of ten patients. The expected 3D geometric error is quantified as a function of predictor lookahead time, signal sampling frequency and history vector length. Additionally, adaptive model retraining is evaluated, i.e., repeatedly updating the prediction model after initial training. Training length for this is gradually increased with incoming (internal) data availability. To assess practical feasibility model calculation times as well as various minimum data lengths for retraining are evaluated. Relative performance of model training with external surrogate motion data versus tumor motion data is evaluated. However, an internal–external motion correlation model is not utilized, i.e., prediction is solely driven by internal motion in both cases. Results: Similar prediction performance was achieved for training the model with external surrogate data versus internal (tumor motion) data. Adaptive model retraining can substantially boost performance in the case of external surrogate training while it has little impact for training with internal motion data.
A minimum adaptive retraining data length of 8 s and history vector length of 3 s achieve maximal performance. Sampling frequency appears to have little impact on performance confirming previously published work. By using the linear predictor, a relative geometric 3D error reduction of about 50% was achieved (using adaptive retraining, a history vector length of 3 s and with results averaged over all investigated lookahead times and signal sampling frequencies). The absolute mean error could be reduced from (2.0 ± 1.6) mm when using no prediction at all to (0.9 ± 0.8) mm and (1.0 ± 0.9) mm when using the predictor trained with internal tumor motion training data and external surrogate motion training data, respectively (for a typical lookahead time of 250 ms and sampling frequency of 15 Hz). Conclusions: A linear prediction model can reduce latency induced tracking errors by an average of about 50% in real-time image guided radiotherapy systems with system latencies of up to 300 ms. Training a linear model for lung tumor motion prediction with an external surrogate signal alone is feasible and results in similar performance as training with (internal) tumor motion. Particularly for scenarios where motion data are extracted from fluoroscopic imaging with ionizing radiation, this may alleviate the need for additional imaging dose during the collection of model training data.
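A ridge-regression lookahead predictor of the kind described maps a history vector of past samples to the sample a fixed lookahead into the future. The sketch below uses a synthetic breathing-like trace, a 3 s history, and a roughly 267 ms lookahead at 15 Hz; these are illustrative assumptions, not the study's patient data or fitted settings.

```python
import numpy as np

def train_ridge_predictor(signal, hist_len, lookahead, lam=1e-2):
    """Ridge regression mapping a history vector of past samples to the sample
    `lookahead` steps ahead: w = (X^T X + lam*I)^-1 X^T y."""
    X = np.array([signal[i:i + hist_len]
                  for i in range(len(signal) - hist_len - lookahead + 1)])
    y = signal[hist_len + lookahead - 1:]
    w = np.linalg.solve(X.T @ X + lam * np.eye(hist_len), X.T @ y)
    return w

def predict(signal, w):
    """Predict the sample `lookahead` steps past the end of the signal."""
    return float(np.dot(signal[-len(w):], w))

# Synthetic breathing-like trace: 15 Hz sampling, ~0.25 Hz breathing frequency:
t = np.arange(0, 60, 1 / 15)
breath = np.sin(2 * np.pi * 0.25 * t)
hist_len, lookahead = 45, 4          # 3 s history, ~267 ms lookahead at 15 Hz
w = train_ridge_predictor(breath, hist_len, lookahead)
print(round(predict(breath, w), 3))
```

Adaptive retraining, as evaluated in the study, would correspond to re-running `train_ridge_predictor` on a growing window of incoming data.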
A model for prediction of STOVL ejector dynamics
NASA Technical Reports Server (NTRS)
Drummond, Colin K.
1989-01-01
A semi-empirical control-volume approach to ejector modeling for transient performance prediction is presented. This new approach is motivated by the need for a predictive real-time ejector sub-system simulation for Short Take-Off Vertical Landing (STOVL) integrated flight and propulsion controls design applications. Emphasis is placed on discussion of the approximate characterization of the mixing process central to thrust augmenting ejector operation. The proposed ejector model suggests transient flow predictions are possible with a model based on steady-flow data. A practical test case is presented to illustrate model calibration.
Recognition of predictors for mid-long term runoff prediction based on lasso
NASA Astrophysics Data System (ADS)
Xie, S.; Huang, Y.
2017-12-01
Reliable and accurate mid-to-long-term runoff prediction is of great importance in the integrated management of reservoirs, and many methods have been proposed to model runoff time series. Almost all of these models use a forecast lead time (LT) of 1 month, with previous runoff at different time lags as predictors. However, runoff prediction with a longer LT, which would be more useful, is not common in current research, because the connection between previous and current runoff weakens as LT increases. Therefore, 74 atmospheric circulation factors (ACFs) together with previous runoff are used as candidate predictors for mid-to-long-term runoff prediction of the Longyangxia reservoir in this study. Because the previous runoff and the 74 ACFs at different time lags form a large candidate set, most of which is uninformative, lasso (the `least absolute shrinkage and selection operator') is used to select predictors. The results demonstrate that the 74 ACFs improve runoff prediction in both the validation and test sets when LT is greater than 6, and that 6 factors other than previous runoff, most with large time lags, are frequently selected as predictors. To verify the effect of the 74 ACFs, 74 stochastic time series generated from the normalized ACFs were used as model input; these stochastic series proved useless, which confirms the contribution of the 74 ACFs to mid-to-long-term runoff prediction.
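Lasso-based predictor screening of the kind described can be sketched with a plain coordinate-descent solver. The synthetic "circulation factors" and regularization weight below are assumptions for illustration, with two truly informative predictors planted among twenty candidates:

```python
import numpy as np

def lasso_cd(X, y, lam, n_iter=500):
    """Lasso by cyclic coordinate descent with soft-thresholding;
    assumes the columns of X are (roughly) standardized."""
    n, p = X.shape
    w = np.zeros(p)
    for _ in range(n_iter):
        for j in range(p):
            r = y - X @ w + X[:, j] * w[j]            # partial residual excluding j
            rho = X[:, j] @ r / n
            z = (X[:, j] @ X[:, j]) / n
            w[j] = np.sign(rho) * max(abs(rho) - lam, 0.0) / z   # soft threshold
    return w

# Synthetic predictor screen: 20 candidate "circulation factors", 2 informative.
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 20))
y = 3.0 * X[:, 2] - 2.0 * X[:, 7] + 0.1 * rng.standard_normal(200)
w = lasso_cd(X, y, lam=0.1)
print(np.flatnonzero(np.abs(w) > 0.05))   # indices of selected predictors
```

The soft-thresholding step is what zeroes out uninformative factors, mirroring how lasso discards most of the lagged ACF candidates in the study.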
Rotenone persistence model for montane streams
Brown, Peter J.; Zale, Alexander V.
2012-01-01
The efficient and effective use of rotenone is hindered by its unknown persistence in streams. Environmental conditions degrade rotenone, but current label instructions suggest fortifying the chemical along a stream based on linear distance or travel time rather than environmental conditions. Our objective was to develop models that use measurements of environmental conditions to predict rotenone persistence in streams. Detailed measurements of ultraviolet radiation, water temperature, dissolved oxygen, total dissolved solids (TDS), conductivity, pH, oxidation–reduction potential (ORP), substrate composition, amount of organic matter, channel slope, and travel time were made along stream segments located between rotenone treatment stations and cages containing bioassay fish in six streams. The amount of fine organic matter, biofilm, sand, gravel, cobble, rubble, small boulders, slope, pH, TDS, ORP, light reaching the stream, energy dissipated, discharge, and cumulative travel time were each significantly correlated with fish death. By using logistic regression, measurements of environmental conditions were paired with the responses of bioassay fish to develop a model that predicted the persistence of rotenone toxicity in streams. This model was validated with data from two additional stream treatment reaches. Rotenone persistence was predicted by a model that used travel time, rubble, and ORP. When this model predicts a probability of less than 0.95, those who apply rotenone can expect incomplete eradication and should plan on fortifying rotenone concentrations. The significance of travel time has been previously identified and is currently used to predict rotenone persistence. However, rubble substrate, which may be associated with the degradation of rotenone by adsorption and volatilization in turbulent environments, was not previously considered.
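The fitted model is a logistic regression on travel time, rubble, and ORP, with fortification recommended when the predicted persistence probability falls below 0.95. A sketch with hypothetical coefficients (the study's fitted values are not reproduced here):

```python
import math

def rotenone_persistence_prob(travel_time_h, rubble_pct, orp_mv, coef):
    """Logistic model for the probability that rotenone remains lethal at a site.
    coef = (intercept, b_travel, b_rubble, b_orp); the values used below are
    hypothetical placeholders, not the coefficients fitted in the study."""
    b0, b1, b2, b3 = coef
    z = b0 + b1 * travel_time_h + b2 * rubble_pct + b3 * orp_mv
    return 1.0 / (1.0 + math.exp(-z))

coef = (6.0, -1.2, -0.05, 0.004)    # illustrative only
p = rotenone_persistence_prob(2.0, 30.0, 250.0, coef)
print(round(p, 3), "fortify" if p < 0.95 else "no fortification needed")
```

The 0.95 cutoff implements the paper's operational rule: below it, applicators should expect incomplete eradication and plan to fortify.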
On entropy, financial markets and minority games
NASA Astrophysics Data System (ADS)
Zapart, Christopher A.
2009-04-01
The paper builds upon an earlier statistical analysis of financial time series with Shannon information entropy, published in [L. Molgedey, W. Ebeling, Local order, entropy and predictability of financial time series, European Physical Journal B-Condensed Matter and Complex Systems 15/4 (2000) 733-737]. A novel generic procedure is proposed for making multistep-ahead predictions of time series by building a statistical model of entropy. The approach is first demonstrated on the chaotic Mackey-Glass time series and later applied to Japanese Yen/US dollar intraday currency data. The paper also reinterprets Minority Games [E. Moro, The minority game: An introductory guide, Advances in Condensed Matter and Statistical Physics (2004)] within the context of physical entropy, and uses models derived from minority game theory as a tool for measuring the entropy of a model in response to time series. This entropy conditional upon a model is subsequently used in place of information-theoretic entropy in the proposed multistep prediction algorithm.
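The entropy estimate at the core of such analyses is simple to compute. The sketch below estimates Shannon block entropy from a symbolized series (e.g., price moves coarse-grained to up/down); this is an illustrative reduction, not the paper's exact pipeline:

```python
import math
from collections import Counter

def block_entropy(symbols, n=1):
    """Shannon entropy (bits) of length-n words in a symbol sequence."""
    words = [tuple(symbols[i:i + n]) for i in range(len(symbols) - n + 1)]
    counts = Counter(words)
    total = len(words)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

# A perfectly alternating series looks random at word length 1 (50/50 symbols)
# but is fully predictable once word length 2 is considered:
alternating = ['u', 'd'] * 50
```

The conditional entropy `block_entropy(s, n+1) - block_entropy(s, n)` is the quantity that measures local predictability: near zero bits per symbol for the alternating series, near one bit for a fair coin flip.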
Research on light rail electric load forecasting based on ARMA model
NASA Astrophysics Data System (ADS)
Huang, Yifan
2018-04-01
The article compares a variety of time series models with respect to the characteristics of power load forecasting, and then establishes a light rail load forecasting model based on the ARMA model. The model is used to forecast the load of a light rail system, and the results show that its prediction accuracy is high.
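ARMA fitting is usually delegated to a statistics package; as a self-contained illustration of the autoregressive part, the sketch below fits an AR(2) model by least squares (Cramer's rule on the normal equations) to a synthetic series with known coefficients. The coefficients and noise level are illustrative, not load data:

```python
import random

def fit_ar2(x):
    """Least-squares fit of x[t] ~ a*x[t-1] + b*x[t-2] via the normal equations."""
    s11 = s12 = s22 = r1 = r2 = 0.0
    for t in range(2, len(x)):
        x1, x2 = x[t - 1], x[t - 2]
        s11 += x1 * x1
        s12 += x1 * x2
        s22 += x2 * x2
        r1 += x1 * x[t]
        r2 += x2 * x[t]
    det = s11 * s22 - s12 * s12
    a = (r1 * s22 - r2 * s12) / det   # Cramer's rule, 2x2 system
    b = (s11 * r2 - s12 * r1) / det
    return a, b

def forecast(x, a, b, steps):
    """Iterated multi-step forecast from the end of the series."""
    hist = list(x)
    out = []
    for _ in range(steps):
        nxt = a * hist[-1] + b * hist[-2]
        hist.append(nxt)
        out.append(nxt)
    return out

# Synthetic series with known dynamics x[t] = 0.6 x[t-1] + 0.3 x[t-2] + noise
random.seed(42)
x = [0.1, 0.2]
for _ in range(500):
    x.append(0.6 * x[-1] + 0.3 * x[-2] + random.gauss(0, 0.1))
a, b = fit_ar2(x)
```

With 500 observations the least-squares estimates land close to the true (0.6, 0.3), which is the basic sanity check one would run before trusting such a model on real load data.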
Waste tyre pyrolysis: modelling of a moving bed reactor.
Aylón, E; Fernández-Colino, A; Murillo, R; Grasa, G; Navarro, M V; García, T; Mastral, A M
2010-12-01
This paper describes the development of a new model for waste tyre pyrolysis in a moving bed reactor. This model comprises three different sub-models: a kinetic sub-model that predicts solid conversion in terms of reaction time and temperature, a heat transfer sub-model that calculates the temperature profile inside the particle and the energy flux from the surroundings to the tyre particles and, finally, a hydrodynamic model that predicts the solid flow pattern inside the reactor. These three sub-models have been integrated in order to develop a comprehensive reactor model. Experimental results were obtained in a continuous moving bed reactor and used to validate model predictions, with good approximation achieved between the experimental and simulated results. In addition, a parametric study of the model was carried out, which showed that tyre particle heating is clearly faster than average particle residence time inside the reactor. Therefore, this fast particle heating together with fast reaction kinetics enables total solid conversion to be achieved in this system in accordance with the predictive model. Copyright © 2010 Elsevier Ltd. All rights reserved.
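The kinetic sub-model's role can be illustrated with a first-order Arrhenius rate law; the pre-exponential factor and activation energy below are round illustrative numbers, not values fitted in the paper:

```python
import math

R = 8.314  # gas constant, J/(mol*K)

def solid_conversion(t, T, A=1e6, Ea=1.0e5):
    """First-order conversion X(t) = 1 - exp(-k t) with Arrhenius k(T).

    A (1/s) and Ea (J/mol) are illustrative placeholders; a real tyre
    pyrolysis model would use fitted kinetic parameters.
    """
    k = A * math.exp(-Ea / (R * T))
    return 1.0 - math.exp(-k * t)
```

Conversion increases with both residence time and temperature, which is the interaction the paper exploits: once particle heating is much faster than the residence time, fast kinetics drive conversion to completion inside the reactor.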
Healthy Work Revisited: Do Changes in Time Strain Predict Well-Being?
Moen, Phyllis; Kelly, Erin L.; Lam, Jack
2013-01-01
Building on Karasek and Theorell (R. Karasek & T. Theorell, 1990, Healthy work: Stress, productivity, and the reconstruction of working life, New York, NY: Basic Books), we theorized and tested the relationship between time strain (work-time demands and control) and seven self-reported health outcomes. We drew on survey data from 550 employees fielded before and 6 months after the implementation of an organizational intervention, the Results Only Work Environment (ROWE) in a white-collar organization. Cross-sectional (Wave 1) models showed psychological time demands and time control measures were related to health outcomes in expected directions. The ROWE intervention did not predict changes in psychological time demands by Wave 2, but did predict increased time control (a sense of time adequacy and schedule control). Statistical models revealed increases in psychological time demands and time adequacy predicted changes in positive (energy, mastery, psychological well-being, self-assessed health) and negative (emotional exhaustion, somatic symptoms, psychological distress) outcomes in expected directions, net of job and home demands and covariates. This study demonstrates the value of including time strain in investigations of the health effects of job conditions. Results encourage longitudinal models of change in psychological time demands as well as time control, along with the development and testing of interventions aimed at reducing time strain in different populations of workers. PMID:23506547
Kowinsky, Amy M; Shovel, Judith; McLaughlin, Maribeth; Vertacnik, Lisa; Greenhouse, Pamela K; Martin, Susan Christie; Minnier, Tamra E
2012-01-01
Predictable and unpredictable patient care tasks compete for caregiver time and attention, making it difficult for patient care staff to reliably and consistently meet patient needs. We have piloted a redesigned care model that separates the work of patient care technicians based on task predictability and creates role specificity. This care model shows promise in improving the ability of staff to reliably complete tasks in a more consistent and timely manner.
Taxi-Out Time Prediction for Departures at Charlotte Airport Using Machine Learning Techniques
NASA Technical Reports Server (NTRS)
Lee, Hanbong; Malik, Waqar; Jung, Yoon C.
2016-01-01
Predicting the taxi-out times of departures accurately is important for improving airport efficiency and takeoff time predictability. In this paper, we attempt to apply machine learning techniques to actual traffic data at Charlotte Douglas International Airport for taxi-out time prediction. To find the key factors affecting aircraft taxi times, surface surveillance data is first analyzed. From this data analysis, several variables, including terminal concourse, spot, runway, departure fix and weight class, are selected for taxi time prediction. Then, various machine learning methods such as linear regression, support vector machines, k-nearest neighbors, random forest, and neural networks are applied to actual flight data. Different traffic flow and weather conditions at Charlotte airport are also taken into account for more accurate prediction. The taxi-out time prediction results show that linear regression and random forest techniques can provide the most accurate prediction in terms of root-mean-square errors. We also discuss the operational complexity and uncertainties that make it difficult to predict the taxi times accurately.
A new model integrating short- and long-term aging of copper added to soils
Zeng, Saiqi; Li, Jumei; Wei, Dongpu
2017-01-01
Aging refers to the processes by which the bioavailability/toxicity, isotopic exchangeability, and extractability of metals added to soils decline over time. We studied the characteristics of the aging process for copper (Cu) added to soils and the factors that affect this process, and developed a semi-mechanistic model that predicts the lability of Cu throughout aging by describing the diffusion process with the complementary error function. In previous studies, two semi-mechanistic models were developed to predict the short-term and long-term aging of Cu added to soils separately, each with its own description of the diffusion process: in the short-term model the diffusion term was linear in the square root of incubation time (t1/2), and in the long-term model it was linear in the natural logarithm of incubation time (ln t). Each model could predict its own regime, but neither could cover the short- and long-term aging processes together. By analyzing and combining the two models, we found that the short- and long-term behavior of the diffusion process could be described adequately by the complementary error function. The effect of temperature on the diffusion process is captured in this model as well. The model predicts the aging process continuously from four factors: soil pH, incubation time, soil organic matter content, and temperature. PMID:28820888
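The complementary-error-function form can be sketched as follows. The parameter values are illustrative, not the fitted ones; in the actual model the rate parameter would depend on soil pH, organic matter content, and temperature:

```python
import math

def labile_fraction(t, L0=1.0, L_res=0.2, c=0.05):
    """Labile (isotopically exchangeable) Cu fraction after t days of aging.

    Diffusion-controlled decline described by erfc: starts at L0 and
    approaches the residual fraction L_res. The rate constant c is a
    placeholder for the pH/organic-matter/temperature dependence.
    """
    return L_res + (L0 - L_res) * math.erfc(c * math.sqrt(t))
```

For small t, erfc(c*sqrt(t)) is approximately 1 - 2*c*sqrt(t)/sqrt(pi), i.e., linear in t1/2 like the short-term model, while at long times it flattens toward the residual fraction, which is consistent with the paper's finding that one expression can span both regimes.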
Barrett, Jessica; Pennells, Lisa; Sweeting, Michael; Willeit, Peter; Di Angelantonio, Emanuele; Gudnason, Vilmundur; Nordestgaard, Børge G.; Psaty, Bruce M; Goldbourt, Uri; Best, Lyle G; Assmann, Gerd; Salonen, Jukka T; Nietert, Paul J; Verschuren, W. M. Monique; Brunner, Eric J; Kronmal, Richard A; Salomaa, Veikko; Bakker, Stephan J L; Dagenais, Gilles R; Sato, Shinichi; Jansson, Jan-Håkan; Willeit, Johann; Onat, Altan; de la Cámara, Agustin Gómez; Roussel, Ronan; Völzke, Henry; Dankner, Rachel; Tipping, Robert W; Meade, Tom W; Donfrancesco, Chiara; Kuller, Lewis H; Peters, Annette; Gallacher, John; Kromhout, Daan; Iso, Hiroyasu; Knuiman, Matthew; Casiglia, Edoardo; Kavousi, Maryam; Palmieri, Luigi; Sundström, Johan; Davis, Barry R; Njølstad, Inger; Couper, David; Danesh, John; Thompson, Simon G; Wood, Angela
2017-01-01
Abstract The added value of incorporating information from repeated blood pressure and cholesterol measurements to predict cardiovascular disease (CVD) risk has not been rigorously assessed. We used data on 191,445 adults from the Emerging Risk Factors Collaboration (38 cohorts from 17 countries with data encompassing 1962–2014) with more than 1 million measurements of systolic blood pressure, total cholesterol, and high-density lipoprotein cholesterol. Over a median 12 years of follow-up, 21,170 CVD events occurred. Risk prediction models using cumulative mean values of repeated measurements and summary measures from longitudinal modeling of the repeated measurements were compared with models using measurements from a single time point. Risk discrimination (C-index) and net reclassification were calculated, and changes in C-indices were meta-analyzed across studies. Compared with the single-time-point model, the cumulative means and longitudinal models increased the C-index by 0.0040 (95% confidence interval (CI): 0.0023, 0.0057) and 0.0023 (95% CI: 0.0005, 0.0042), respectively. Reclassification was also improved in both models; compared with the single-time-point model, overall net reclassification improvements were 0.0369 (95% CI: 0.0303, 0.0436) for the cumulative-means model and 0.0177 (95% CI: 0.0110, 0.0243) for the longitudinal model. In conclusion, incorporating repeated measurements of blood pressure and cholesterol into CVD risk prediction models slightly improves risk prediction. PMID:28549073
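The C-index used above to quantify risk discrimination reduces, for a binary outcome, to the probability that a randomly chosen case with an event received a higher predicted risk than a randomly chosen case without one. A minimal sketch (illustrative data, not the cohort's):

```python
def c_index(risk, event):
    """Concordance between predicted risks and binary outcomes (ties count 1/2)."""
    pairs = concordant = tied = 0
    for i in range(len(risk)):
        for j in range(len(risk)):
            if event[i] == 1 and event[j] == 0:
                pairs += 1
                if risk[i] > risk[j]:
                    concordant += 1
                elif risk[i] == risk[j]:
                    tied += 1
    return (concordant + 0.5 * tied) / pairs
```

Perfect ranking gives 1.0 and random guessing about 0.5, which puts the reported gains (e.g., +0.0040 for the cumulative-means model) in context: small but measurable shifts on this scale.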
Qian, Liwei; Zheng, Haoran; Zhou, Hong; Qin, Ruibin; Li, Jinlong
2013-01-01
The increasing availability of time series expression datasets, although promising, raises a number of new computational challenges. Accordingly, the development of suitable classification methods to make reliable and sound predictions is becoming a pressing issue. We propose, here, a new method to classify time series gene expression via integration of biological networks. We evaluated our approach on 2 different datasets and showed that the use of a hidden Markov model/Gaussian mixture models hybrid explores the time-dependence of the expression data, thereby leading to better prediction results. We demonstrated that the biclustering procedure identifies function-related genes as a whole, giving rise to high accordance in prognosis prediction across independent time series datasets. In addition, we showed that integration of biological networks into our method significantly improves prediction performance. Moreover, we compared our approach with several state-of-the-art algorithms and found that our method outperformed previous approaches with regard to various criteria. Finally, our approach achieved better prediction results on early-stage data, implying the potential of our method for practical prediction. PMID:23516469
In situ Observations of Heliospheric Current Sheets Evolution
NASA Astrophysics Data System (ADS)
Liu, Yong; Peng, Jun; Huang, Jia; Klecker, Berndt
2017-04-01
We investigate the difference in heliospheric current sheet observation times between spacecraft using STEREO, ACE, and WIND data. The observations are first compared to a simple model in which the time difference is determined only by the radial and longitudinal separation between the spacecraft; its predictions fit the observations well except for a few events. The time delay caused by the latitudinal separation is then taken into consideration. The latitude of each spacecraft is calculated from the PFSS model, assuming that heliospheric current sheets propagate at the solar wind speed without changing shape from their origin to the spacecraft near 1 AU. However, including the latitudinal effects does not improve the prediction, possibly because the PFSS model does not locate the current sheets accurately enough. A new latitudinal delay was therefore derived from the time delays observed in the ACE data. This new method improves the prediction of the time lag between spacecraft; however, further study is needed to locate the heliospheric current sheet more accurately.
A model of interval timing by neural integration.
Simen, Patrick; Balci, Fuat; de Souza, Laura; Cohen, Jonathan D; Holmes, Philip
2011-06-22
We show that simple assumptions about neural processing lead to a model of interval timing as a temporal integration process, in which a noisy firing-rate representation of time rises linearly on average toward a response threshold over the course of an interval. Our assumptions include: that neural spike trains are approximately independent Poisson processes, that correlations among them can be largely cancelled by balancing excitation and inhibition, that neural populations can act as integrators, and that the objective of timed behavior is maximal accuracy and minimal variance. The model accounts for a variety of physiological and behavioral findings in rodents, monkeys, and humans, including ramping firing rates between the onset of reward-predicting cues and the receipt of delayed rewards, and universally scale-invariant response time distributions in interval timing tasks. It furthermore makes specific, well-supported predictions about the skewness of these distributions, a feature of timing data that is usually ignored. The model also incorporates a rapid (potentially one-shot) duration-learning procedure. Human behavioral data support the learning rule's predictions regarding learning speed in sequences of timed responses. These results suggest that simple, integration-based models should play as prominent a role in interval timing theory as they do in theories of perceptual decision making, and that a common neural mechanism may underlie both types of behavior.
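The core of the model, a noisy linear ramp to threshold, can be simulated directly. Drift, noise, and threshold values below are arbitrary illustrative choices:

```python
import random
import statistics

def ramp_time(drift, noise, threshold, dt=0.001, rng=random):
    """Time for a noisy integrator to first reach threshold (Euler simulation)."""
    x = 0.0
    t = 0.0
    while x < threshold:
        # drift term plus diffusion term scaled by sqrt(dt)
        x += drift * dt + noise * rng.gauss(0, 1) * dt ** 0.5
        t += dt
    return t

random.seed(7)
# Mean response time approximates threshold/drift, here 1.0/1.0 = 1 second.
times = [ramp_time(drift=1.0, noise=0.1, threshold=1.0) for _ in range(200)]
```

In the full model it is trial-to-trial variability in the drift (the learned "clock speed"), not this within-trial diffusion alone, that produces the scale-invariant response-time distributions the abstract describes.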
NASA Astrophysics Data System (ADS)
Luo, Junhui; Wu, Chao; Liu, Xianlin; Mi, Decai; Zeng, Fuquan; Zeng, Yongjun
2018-01-01
At present, the prediction of soft foundation settlement mostly use the exponential curve and hyperbola deferred approximation method, and the correlation between the results is poor. However, the application of neural network in this area has some limitations, and none of the models used in the existing cases adopted the TS fuzzy neural network of which calculation combines the characteristics of fuzzy system and neural network to realize the mutual compatibility methods. At the same time, the developed and optimized calculation program is convenient for engineering designers. Taking the prediction and analysis of soft foundation settlement of gully soft soil in granite area of Guangxi Guihe road as an example, the fuzzy neural network model is established and verified to explore the applicability. The TS fuzzy neural network is used to construct the prediction model of settlement and deformation, and the corresponding time response function is established to calculate and analyze the settlement of soft foundation. The results show that the prediction of short-term settlement of the model is accurate and the final settlement prediction result has certain engineering reference value.
Process fault detection and nonlinear time series analysis for anomaly detection in safeguards
DOE Office of Scientific and Technical Information (OSTI.GOV)
Burr, T.L.; Mullen, M.F.; Wangen, L.E.
In this paper we discuss two advanced techniques, process fault detection and nonlinear time series analysis, and apply them to the analysis of vector-valued and single-valued time-series data. We investigate model-based process fault detection methods for analyzing simulated, multivariate, time-series data from a three-tank system. The model predictions are compared with simulated measurements of the same variables to form residual vectors that are tested for the presence of faults (possible diversions in safeguards terminology). We evaluate two methods, testing all individual residuals with a univariate z-score and testing all variables simultaneously with the Mahalanobis distance, for their ability to detect loss of material in two different leak scenarios from the three-tank system: a leak without and with replacement of the lost volume. Nonlinear time-series analysis tools were compared with the linear methods popularized by Box and Jenkins. We compare prediction results using three nonlinear and two linear modeling methods on each of six simulated time series: two nonlinear and four linear. The nonlinear methods performed better at predicting the nonlinear time series and did as well as the linear methods at predicting the linear series.
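The two residual tests can be sketched for a two-variable residual vector; the covariance values are illustrative, not the three-tank simulation's:

```python
def z_scores(residuals, sigmas):
    """Univariate test: each residual scaled by its own standard deviation."""
    return [r / s for r, s in zip(residuals, sigmas)]

def mahalanobis_2d(residual, cov):
    """Multivariate test: distance that accounts for correlation between residuals."""
    (a, b), (c, d) = cov
    det = a * d - b * c
    # explicit inverse of the 2x2 covariance matrix
    i00, i01, i10, i11 = d / det, -b / det, -c / det, a / det
    r0, r1 = residual
    q = r0 * (i00 * r0 + i01 * r1) + r1 * (i10 * r0 + i11 * r1)
    return q ** 0.5

# With an identity covariance the distance is just the Euclidean norm:
d = mahalanobis_2d((3.0, 4.0), ((1.0, 0.0), (0.0, 1.0)))
```

A residual is flagged as a possible diversion when its z-score or Mahalanobis distance exceeds a chosen threshold (e.g., 3); the multivariate test can catch a pattern of jointly unusual residuals that each pass the univariate test individually.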
Real-time prediction of respiratory motion based on a local dynamic model in an augmented space
NASA Astrophysics Data System (ADS)
Hong, S.-M.; Jung, B.-H.; Ruan, D.
2011-03-01
Motion-adaptive radiotherapy aims to deliver ablative radiation dose to the tumor target with minimal normal tissue exposure, by accounting for real-time target movement. In practice, prediction is usually necessary to compensate for system latency induced by measurement, communication and control. This work focuses on predicting respiratory motion, which is most dominant for thoracic and abdominal tumors. We develop and investigate the use of a local dynamic model in an augmented space, motivated by the observation that respiratory movement exhibits a locally circular pattern in a plane augmented with a delayed axis. By including the angular velocity as part of the system state, the proposed dynamic model effectively captures the natural evolution of respiratory motion. The first-order extended Kalman filter is used to propagate and update the state estimate. The target location is predicted by evaluating the local dynamic model equations at the required prediction length. This method is complementary to existing work in that (1) the local circular motion model characterizes 'turning', overcoming the limitation of linear motion models; (2) it uses a natural state representation including the local angular velocity and updates the state estimate systematically, offering explicit physical interpretations; (3) it relies on a parametric model and is much less data-satiate than the typical adaptive semiparametric or nonparametric method. We tested the performance of the proposed method with ten RPM traces, using the normalized root mean squared difference between the predicted value and the retrospective observation as the error metric. Its performance was compared with predictors based on the linear model, the interacting multiple linear models and the kernel density estimator for various combinations of prediction lengths and observation rates. 
The local dynamic model based approach provides the best performance for short to medium prediction lengths under relatively low observation rate. Sensitivity analysis indicates its robustness toward the choice of parameters. Its simplicity, robustness and low computation cost makes the proposed local dynamic model an attractive tool for real-time prediction with system latencies below 0.4 s.
Modeling Interdependent and Periodic Real-World Action Sequences
Kurashima, Takeshi; Althoff, Tim; Leskovec, Jure
2018-01-01
Mobile health applications, including those that track activities such as exercise, sleep, and diet, are becoming widely used. Accurately predicting human actions in the real world is essential for targeted recommendations that could improve our health and for personalization of these applications. However, making such predictions is extremely difficult due to the complexities of human behavior, which consists of a large number of potential actions that vary over time, depend on each other, and are periodic. Previous work has not jointly modeled these dynamics and has largely focused on item consumption patterns instead of broader types of behaviors such as eating, commuting or exercising. In this work, we develop a novel statistical model, called TIPAS, for Time-varying, Interdependent, and Periodic Action Sequences. Our approach is based on personalized, multivariate temporal point processes that model time-varying action propensities through a mixture of Gaussian intensities. Our model captures short-term and long-term periodic interdependencies between actions through Hawkes process-based self-excitations. We evaluate our approach on two activity logging datasets comprising 12 million real-world actions (e.g., eating, sleep, and exercise) taken by 20 thousand users over 17 months. We demonstrate that our approach allows us to make successful predictions of future user actions and their timing. Specifically, TIPAS improves predictions of actions, and their timing, over existing methods across multiple datasets by up to 156%, and up to 37%, respectively. Performance improvements are particularly large for relatively rare and periodic actions such as walking and biking, improving over baselines by up to 256%. This demonstrates that explicit modeling of dependencies and periodicities in real-world behavior enables successful predictions of future actions, with implications for modeling human behavior, app personalization, and targeting of health interventions. 
PMID:29780977
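The Hawkes self-excitation mentioned above has a simple closed form for the conditional intensity; the parameter values here are illustrative, not fitted to the activity logs:

```python
import math

def hawkes_intensity(t, event_times, mu=0.2, alpha=0.8, beta=1.0):
    """Conditional intensity: baseline rate mu plus an exponentially
    decaying bump of height alpha after each past event."""
    return mu + sum(alpha * math.exp(-beta * (t - ti))
                    for ti in event_times if ti < t)

# One past action at t=1.0 raises the propensity to act again shortly after:
events = [1.0]
```

Soon after an action the intensity is elevated (short-term dependence) and then relaxes back to the baseline; the full TIPAS model layers the periodic time-of-day Gaussian mixture on top of this baseline.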
The scaling of geographic ranges: implications for species distribution models
Yackulic, Charles B.; Ginsberg, Joshua R.
2016-01-01
There is a need for timely science to inform policy and management decisions; however, we must also strive to provide predictions that best reflect our understanding of ecological systems. Species distributions evolve through time and reflect responses to environmental conditions that are mediated through individual and population processes. Species distribution models that reflect this understanding, and explicitly model dynamics, are likely to give more accurate predictions.
ERIC Educational Resources Information Center
Redmond, M. William, Jr.
2011-01-01
The purpose of this study is to develop a preadmission predictive model of student success for prospective first-time African American college applicants at a predominately White four-year public institution within the Pennsylvania State System of Higher Education. This model will use two types of variables. They are (a) cognitive variables (i.e.,…
Manikandan, Narayanan; Subha, Srinivasan
2016-01-01
The software development life cycle has been characterized by destructive disconnects between activities such as planning, analysis, design, and programming, and software built around prediction results poses a particular challenge for designers. Time series forecasting, for data such as currency exchange rates, stock prices, and weather, has been the subject of extensive research for the last three decades. Initially, problems in financial analysis and prediction were solved with statistical models and methods; over the last two decades, a large number of Artificial Neural Network based learning models have been proposed to handle financial data and to predict future trends and prices accurately. This paper addresses architectural design issues for performance improvement by combining the strengths of multivariate econometric time series models and Artificial Neural Networks. It provides an adaptive, hybrid methodology for predicting exchange rates, and the framework is tested for the accuracy and performance of the parallel algorithms used. PMID:26881271
Combining multiple earthquake models in real time for earthquake early warning
Minson, Sarah E.; Wu, Stephen; Beck, James L; Heaton, Thomas H.
2017-01-01
The ultimate goal of earthquake early warning (EEW) is to provide local shaking information to users before the strong shaking from an earthquake reaches their location. This is accomplished by operating one or more real‐time analyses that attempt to predict shaking intensity, often by estimating the earthquake’s location and magnitude and then predicting the ground motion from that point source. Other EEW algorithms use finite rupture models or may directly estimate ground motion without first solving for an earthquake source. EEW performance could be improved if the information from these diverse and independent prediction models could be combined into one unified, ground‐motion prediction. In this article, we set the forecast shaking at each location as the common ground to combine all these predictions and introduce a Bayesian approach to creating better ground‐motion predictions. We also describe how this methodology could be used to build a new generation of EEW systems that provide optimal decisions customized for each user based on the user’s individual false‐alarm tolerance and the time necessary for that user to react.
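One simple instance of such a Bayesian combination treats each algorithm's output as an independent Gaussian forecast of shaking (e.g., of log ground motion) and fuses them by precision weighting. This is an illustrative reduction of the approach, not the paper's full formulation:

```python
def fuse_gaussians(means, sigmas):
    """Precision-weighted combination of independent Gaussian predictions.

    Returns the posterior mean and standard deviation under a flat prior.
    """
    precisions = [1.0 / (s * s) for s in sigmas]
    total = sum(precisions)
    mean = sum(m * p for m, p in zip(means, precisions)) / total
    return mean, (1.0 / total) ** 0.5

# A point-source estimate (5.0 +/- 2.0) fused with a finite-fault
# estimate (7.0 +/- 1.0): the tighter prediction dominates.
mean, sigma = fuse_gaussians([5.0, 7.0], [2.0, 1.0])
```

The fused uncertainty is smaller than either input's, which is what lets a downstream decision rule (alert or not) be tuned to each user's false-alarm tolerance.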
SU-F-P-20: Predicting Waiting Times in Radiation Oncology Using Machine Learning
DOE Office of Scientific and Technical Information (OSTI.GOV)
Joseph, A; Herrera, D; Hijal, T
Purpose: Waiting times remain one of the most vexing patient satisfaction challenges facing healthcare. Waiting time uncertainty can cause patients, who are already sick or in pain, to worry about when they will receive the care they need. These waiting periods are often difficult for staff to predict, and only rough estimates are typically provided based on personal experience. This level of uncertainty leaves most patients unable to plan their calendar, making the waiting experience uncomfortable, even painful. In the present era of electronic health records (EHRs), waiting times need not be so uncertain. Extensive EHRs provide unprecedented amounts of data that can statistically cluster towards representative values when appropriate patient cohorts are selected. Predictive modelling, such as machine learning, is a powerful approach that benefits from large, potentially complex, datasets. The essence of machine learning is to predict future outcomes by learning from previous experience. The application of a machine learning algorithm to waiting time data has the potential to produce personalized waiting time predictions such that the uncertainty may be removed from the patient's waiting experience. Methods: In radiation oncology, patients typically experience several types of waiting (e.g., waiting at home for treatment planning, waiting in the waiting room for oncologist appointments, and daily waiting in the waiting room for radiotherapy treatments). A daily treatment wait time model is discussed in this report. To develop a prediction model using our large dataset (with more than 100k sample points), a variety of machine learning algorithms from the Python package sklearn were tested. Results: We found that the Random Forest Regressor model provides the best predictions for daily radiotherapy treatment waiting times.
Using this model, we achieved a median residual (actual value minus predicted value) of 0.25 minutes and a standard deviation of the residuals of 6.5 minutes, meaning that the majority of our estimates are within 6.5 minutes of the actual wait time. Conclusion: The goal of this project was to define an appropriate machine learning algorithm to estimate waiting times based on the collective knowledge and experience learned from previous patients. Our results offer an opportunity to improve the information provided to patients and family members regarding the amount of time they can expect to wait for radiotherapy treatment at our centre. AJ acknowledges support by the CREATE Medical Physics Research Training Network grant of the Natural Sciences and Engineering Research Council (Grant number: 432290) and from the 2014 Q+ Initiative of the McGill University Health Centre.
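The error metric quoted above (median residual and residual standard deviation) is straightforward to compute from held-out predictions; the wait times below are made-up minutes, not clinic data:

```python
import statistics

def residual_stats(actual_waits, predicted_waits):
    """Median and (population) standard deviation of actual - predicted wait times."""
    residuals = [a - p for a, p in zip(actual_waits, predicted_waits)]
    return statistics.median(residuals), statistics.pstdev(residuals)

# Illustrative wait times in minutes:
med, spread = residual_stats([10.0, 12.0, 14.0], [9.0, 12.0, 16.0])
```

A median residual near zero shows the predictor is unbiased, while the residual spread is what tells a patient how wide the "within N minutes" band really is.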
Combining neural networks and genetic algorithms for hydrological flow forecasting
NASA Astrophysics Data System (ADS)
Neruda, Roman; Srejber, Jan; Neruda, Martin; Pascenko, Petr
2010-05-01
We present a neural network approach to rainfall-runoff modeling for small river basins based on several time series of hourly measured data. Different neural networks are considered for short-term runoff predictions (one to six hours lead time) based on runoff and rainfall data observed in previous time steps. Correlation analysis shows that runoff data, short-term rainfall history, and aggregated API values are the most significant data for the prediction. Neural models of multilayer perceptron and radial basis function networks with different numbers of units are used and compared with more traditional linear time series predictors. Out of a possible 48 hours of relevant history of all the input variables, the most important ones are selected by means of input filters created by a genetic algorithm. The genetic algorithm works with a population of binary-encoded vectors defining input selection patterns. Standard genetic operators of two-point crossover, random bit-flipping mutation, and tournament selection were used. The evaluation of the objective function of each individual consists of several rounds of building and testing a particular neural network model. The whole procedure is computationally demanding (taking hours to days on a desktop PC), so a high-performance mainframe computer was used for our experiments. Results based on two years' worth of data from the Ploucnice river in Northern Bohemia suggest that the main problems with this approach are overtraining, which can lead to poor generalization, and the relatively small number of extreme events, which makes it difficult for a model to predict the amplitude of an event. Thus, experiments with both absolute and relative runoff predictions were carried out. In general, the neural models show about a 5 per cent improvement in efficiency coefficient over linear models.
Multilayer perceptrons with one hidden layer, trained by the back-propagation algorithm and predicting relative runoff, show the best behavior so far. Utilizing the genetically evolved input filter improves the performance by a further 5 per cent. In the future we would like to continue with experiments in on-line prediction using real-time data from the Smeda River with a 6-hour lead time forecast. Following operational reality, we will focus on classification of the runoffs into flood alert levels, and on reformulating the time series prediction task as a classification problem. The main goal of all this work is to improve the flood warning system operated by the Czech Hydrometeorological Institute.
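The genetic-algorithm input selection described above (binary masks over 48 hours of history, two-point crossover, bit-flip mutation, tournament selection) can be sketched as follows. The fitness function here is a cheap stand-in for the costly neural-network build-and-test loop used in the study:

```python
import random

random.seed(0)
N_BITS = 48          # hours of candidate input history
POP, GENS = 30, 40

def fitness(mask):
    # Stand-in objective: in the paper this would train and test a
    # neural network using only the inputs selected by the mask.
    target = [1 if i % 2 == 0 else 0 for i in range(N_BITS)]
    return sum(1 for m, t in zip(mask, target) if m == t)

def tournament(pop, k=3):
    return max(random.sample(pop, k), key=fitness)

def two_point_crossover(a, b):
    i, j = sorted(random.sample(range(N_BITS), 2))
    return a[:i] + b[i:j] + a[j:]

def mutate(mask, rate=0.02):
    return [1 - m if random.random() < rate else m for m in mask]

pop = [[random.randint(0, 1) for _ in range(N_BITS)] for _ in range(POP)]
for _ in range(GENS):
    pop = [mutate(two_point_crossover(tournament(pop), tournament(pop)))
           for _ in range(POP)]
best = max(pop, key=fitness)
```

Because each real fitness evaluation means training a network, the population size and generation count dominate the runtime, which is why the authors resorted to a mainframe.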
Long Term Mean Local Time of the Ascending Node Prediction
NASA Technical Reports Server (NTRS)
McKinley, David P.
2007-01-01
Significant error has been observed in the long-term prediction of the Mean Local Time of the Ascending Node (MLTAN) on the Aqua spacecraft. This error of approximately 90 seconds over a two-year prediction complicates planning and timing of maneuvers for all members of the Earth Observing System Afternoon Constellation, which use Aqua's MLTAN as the reference for their inclination maneuvers. It was determined that the source of the prediction error was the lack of a solid Earth tide model in the operational force models. The Love model of the solid Earth tide potential was used to derive analytic corrections to the inclination and right ascension of the ascending node of Aqua's Sun-synchronous orbit. Additionally, it was determined that the resonance between the Sun and the orbit plane of the Sun-synchronous orbit is the primary driver of this error. The analytic corrections have been added to the operational force models for the Aqua spacecraft, reducing the two-year 90-second error to less than 7 seconds.
Saha, Kaushik; Som, Sibendu; Battistoni, Michele
2017-01-01
Flash boiling is known to be a common phenomenon for gasoline direct injection (GDI) engine sprays. The Homogeneous Relaxation Model has been adopted in many recent numerical studies for predicting cavitation and flash boiling, and it is assessed in this study. Sensitivity analysis of the model parameters has been documented to infer the driving factors for the flash-boiling predictions. The model parameters have been varied over a range and the differences in predictions of the extent of flashing have been studied. Apart from flashing in the near-nozzle regions, mild cavitation is also predicted inside the gasoline injectors. The variation in the predicted time scales through the model parameters for predicting these two different thermodynamic phenomena (cavitation, flash boiling) has been elaborated in this study. Turbulence model effects have also been investigated by comparing predictions from the standard and Re-Normalization Group (RNG) k-ε turbulence models.
A High Precision Prediction Model Using Hybrid Grey Dynamic Model
ERIC Educational Resources Information Center
Li, Guo-Dong; Yamaguchi, Daisuke; Nagai, Masatake; Masuda, Shiro
2008-01-01
In this paper, we propose a new prediction analysis model which combines the first order one variable Grey differential equation Model (abbreviated as GM(1,1) model) from grey system theory and time series Autoregressive Integrated Moving Average (ARIMA) model from statistics theory. We abbreviate the combined GM(1,1) ARIMA model as ARGM(1,1)…
NASA Technical Reports Server (NTRS)
Sojka, J. J.; Schunk, R. W.; Hoegy, W. R.; Grebowsky, J. M.
1991-01-01
The polar ionospheric F-region often exhibits regions of marked density depletion. These depletions have been observed by a variety of polar orbiting ionospheric satellites over a full range of solar cycle, season, magnetic activity, and universal time (UT). An empirical model of these observations has recently been developed to describe the polar depletion dependence on these parameters. Specifically, the dependence has been defined as a function of F10.7 (solar), summer or winter, Kp (magnetic), and UT. Polar cap depletions have also been predicted /1, 2/ and are, hence, present in physical models of the high latitude ionosphere. Using the Utah State University Time Dependent Ionospheric Model (TDIM) the predicted polar depletion characteristics are compared with those described by the above empirical model. In addition, the TDIM is used to predict the IMF By dependence of the polar hole feature.
He, Xiaoming; Bhowmick, Sankha; Bischof, John C
2009-07-01
The Arrhenius and thermal isoeffective dose (TID) models are the two most commonly used models for predicting hyperthermic injury. The TID model is essentially derived from the Arrhenius model, but due to a variety of assumptions and simplifications it now leads to different predictions, particularly at temperatures higher than 50 degrees C. In the present study, the two models are compared and their appropriateness tested for predicting hyperthermic injury in both the traditional hyperthermia (usually 43-50 degrees C) and thermal surgery (or thermal therapy/thermal ablation, usually >50 degrees C) regimes. The kinetic parameters of thermal injury in both models were obtained from the literature (or literature data), tabulated, and analyzed for various prostate and kidney systems. It was found that the kinetic parameters vary widely, and were particularly dependent on the cell or tissue type, the injury assay used, and the time when the injury assessment was performed. In order to compare the capability of the two models for thermal injury prediction, thermal thresholds for complete killing (i.e., 99% cell or tissue injury) were predicted using the models in two important urologic systems, viz., benign prostatic hyperplasia tissue and normal porcine kidney tissue. The predictions of the two models matched well at temperatures below 50 degrees C. At higher temperatures, however, the thermal thresholds predicted using the TID model with a constant R value of 0.5, the value commonly used in the traditional hyperthermia literature, are much lower than those predicted using the Arrhenius model. This suggests that traditional use of the TID model (i.e., R=0.5) is inappropriate for predicting hyperthermic injury in the thermal surgery regime (>50 degrees C). Finally, the time-temperature relationships for complete killing (i.e., 99% injury) were calculated and analyzed using the Arrhenius model for the various prostate and kidney systems.
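The two injury models can be compared numerically. A minimal sketch, with illustrative kinetic parameters of the order reported for soft tissue (the study itself tabulates system-specific values, so A and Ea here are assumptions, not the paper's numbers):

```python
import math

R_GAS = 8.314  # gas constant, J/(mol K)

def arrhenius_damage(temp_c, t_sec, A=7.39e39, Ea=2.577e5):
    """Damage integral Omega = A * t * exp(-Ea / (R T)) for constant
    temperature. A (1/s) and Ea (J/mol) are illustrative values only."""
    T = temp_c + 273.15
    return A * t_sec * math.exp(-Ea / (R_GAS * T))

def cem43(temp_c, t_min, R=0.5):
    """Thermal isoeffective dose: cumulative equivalent minutes at 43 C.
    R = 0.5 is the value conventionally used above the 43 C breakpoint."""
    return t_min * R ** (43.0 - temp_c)

# With R = 0.5, each extra degree doubles the equivalent dose:
# one minute at 44 C equals two equivalent minutes at 43 C.
dose = cem43(44.0, 1.0)
```

The exponential Arrhenius term grows much faster with temperature than the fixed R = 0.5 doubling rule, which is one way to see why the two models diverge above 50 degrees C.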
Space can substitute for time in predicting climate-change effects on biodiversity
Blois, Jessica L.; Williams, John W.; Fitzpatrick, Matthew C.; Jackson, Stephen T.; Ferrier, Simon
2013-01-01
“Space-for-time” substitution is widely used in biodiversity modeling to infer past or future trajectories of ecological systems from contemporary spatial patterns. However, the foundational assumption—that drivers of spatial gradients of species composition also drive temporal changes in diversity—rarely is tested. Here, we empirically test the space-for-time assumption by constructing orthogonal datasets of compositional turnover of plant taxa and climatic dissimilarity through time and across space from Late Quaternary pollen records in eastern North America, then modeling climate-driven compositional turnover. Predictions relying on space-for-time substitution were ∼72% as accurate as “time-for-time” predictions. However, space-for-time substitution performed poorly during the Holocene when temporal variation in climate was small relative to spatial variation and required subsampling to match the extent of spatial and temporal climatic gradients. Despite this caution, our results generally support the judicious use of space-for-time substitution in modeling community responses to climate change.
NASA Astrophysics Data System (ADS)
Deo, Ravinesh C.; Şahin, Mehmet
2015-02-01
The prediction of future drought is an effective mitigation tool for assessing the adverse consequences of drought events on vital water resources, agriculture, ecosystems and hydrology. Data-driven model predictions using machine learning algorithms are promising for these purposes as they require less development time, minimal inputs and are relatively less complex than dynamic or physical models. This paper evaluates a computationally simple, fast and efficient non-linear algorithm known as the extreme learning machine (ELM) for the prediction of the Effective Drought Index (EDI) in eastern Australia, trained on input data from 1957-2008 with monthly EDI predicted over the period 2009-2011. The predictive variables for the ELM model were the rainfall and mean, minimum and maximum air temperatures, supplemented by the large-scale climate mode indices of interest as regression covariates, namely the Southern Oscillation Index, Pacific Decadal Oscillation, Southern Annular Mode and the Indian Ocean Dipole moment. To demonstrate the effectiveness of the proposed data-driven model, a performance comparison in terms of prediction capability and learning speed was conducted between the proposed ELM algorithm and a conventional artificial neural network (ANN) algorithm trained with Levenberg-Marquardt back propagation. The prediction metrics certified an excellent performance of the ELM over the ANN model for the overall test sites, yielding Mean Absolute Errors, Root-Mean Square Errors, Coefficients of Determination and Willmott's Indices of Agreement of 0.277, 0.008, 0.892 and 0.93 (for the ELM) and 0.602, 0.172, 0.578 and 0.92 (for the ANN) models. Moreover, the ELM model was executed with a learning speed 32 times faster and a training speed 6.1 times faster than the ANN model. An improvement in the prediction capability of the drought duration and severity by the ELM model was achieved.
Based on these results we aver that out of the two machine learning algorithms tested, the ELM was the more expeditious tool for prediction of drought and its related properties.
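The speed advantage of the ELM over back-propagation comes from its structure: the hidden-layer weights are drawn at random and only the output weights are solved for, in one pseudoinverse step. A toy sketch (fitting a one-dimensional function rather than the drought index, which is an assumption for illustration):

```python
import numpy as np

rng = np.random.default_rng(42)

def elm_fit(X, y, n_hidden=50):
    """Random hidden layer; output weights solved by least squares."""
    W = rng.normal(size=(X.shape[1], n_hidden))
    b = rng.normal(size=n_hidden)
    H = np.tanh(X @ W + b)          # hidden-layer activations
    beta = np.linalg.pinv(H) @ y    # output weights, one-shot solve
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# Toy regression target standing in for the EDI time series.
X = np.linspace(-3, 3, 200).reshape(-1, 1)
y = np.sin(X).ravel()
W, b, beta = elm_fit(X, y)
rmse = np.sqrt(np.mean((elm_predict(X, W, b, beta) - y) ** 2))
```

No iterative weight updates occur, which is why training is orders of magnitude faster than Levenberg-Marquardt back propagation on the same data.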
Four Major South Korea's Rivers Using Deep Learning Models.
Lee, Sangmok; Lee, Donghyun
2018-06-24
Harmful algal blooms are an annual phenomenon that cause environmental damage, economic losses, and disease outbreaks. A fundamental solution to this problem is still lacking; thus, the best option for counteracting the effects of algal blooms is to improve advance warnings (predictions). However, existing physical prediction models have difficulties setting a clear coefficient indicating the relationship between each factor when predicting algal blooms, and many variable data sources are required for the analysis. These limitations are accompanied by high time and economic costs. Meanwhile, artificial intelligence and deep learning methods have become increasingly common in scientific research; attempts to apply the long short-term memory (LSTM) model to environmental research problems are increasing because the LSTM model exhibits good performance for time-series data prediction. However, few studies have applied deep learning models or LSTM to algal bloom prediction, especially in South Korea, where algal blooms occur annually. Therefore, we employed the LSTM model for algal bloom prediction in four major rivers of South Korea. We conducted short-term (one week) predictions by employing regression analysis and deep learning techniques on a newly constructed water quality and quantity dataset drawn from 16 dammed pools on the rivers. Three deep learning models (multilayer perceptron, MLP; recurrent neural network, RNN; and long short-term memory, LSTM) were used to predict chlorophyll-a, a recognized proxy for algal activity. The results were compared to those from OLS (ordinary least squares) regression analysis and actual data based on the root mean square error (RMSE). The LSTM model showed the highest prediction rate for harmful algal blooms and all deep learning models outperformed the OLS regression analysis. Our results reveal the potential for predicting algal blooms using LSTM and deep learning.
Predicting field weed emergence with empirical models and soft computing techniques
USDA-ARS?s Scientific Manuscript database
Seedling emergence is the most important phenological process that influences the success of weed species; therefore, predicting weed emergence timing plays a critical role in scheduling weed management measures. Important efforts have been made in the attempt to develop models to predict seedling e...
NASA Astrophysics Data System (ADS)
Hu, J.; Zhang, H.; Ying, Q.; Chen, S.-H.; Vandenberghe, F.; Kleeman, M. J.
2015-03-01
For the first time, a ~ decadal (9 years from 2000 to 2008) air quality model simulation with 4 km horizontal resolution over populated regions and daily time resolution has been conducted for California to provide air quality data for health effect studies. Model predictions are compared to measurements to evaluate the accuracy of the simulation with an emphasis on spatial and temporal variations that could be used in epidemiology studies. Better model performance is found at longer averaging times, suggesting that model results with averaging times ≥ 1 month should be the first to be considered in epidemiological studies. The UCD/CIT model predicts spatial and temporal variations in the concentrations of O3, PM2.5, elemental carbon (EC), organic carbon (OC), nitrate, and ammonium that meet standard modeling performance criteria when compared to monthly-averaged measurements. Predicted sulfate concentrations do not meet target performance metrics due to missing sulfur sources in the emissions. Predicted seasonal and annual variations of PM2.5, EC, OC, nitrate, and ammonium have mean fractional biases that meet the model performance criteria in 95, 100, 71, 73, and 92% of the simulated months, respectively. The base data set provides an improvement for predicted population exposure to PM concentrations in California compared to exposures estimated by central site monitors operated 1 day out of every 3 days at a few urban locations. Uncertainties in the model predictions arise from several issues. Incomplete understanding of secondary organic aerosol formation mechanisms leads to OC bias in the model results in summertime but does not affect OC predictions in winter when concentrations are typically highest. The CO and NO (species dominated by mobile emissions) results reveal temporal and spatial uncertainties associated with the mobile emissions generated by the EMFAC 2007 model. 
The WRF model tends to overpredict wind speed during stagnation events, leading to underpredictions of high PM concentrations, usually in winter months. The WRF model also generally underpredicts relative humidity, resulting in less particulate nitrate formation, especially during winter months. These limitations must be recognized when using data in health studies. All model results included in the current manuscript can be downloaded free of charge at http://faculty.engineering.ucdavis.edu/kleeman/ .
Zhang, Ji-Li; Liu, Bo-Fei; Di, Xue-Ying; Chu, Teng-Fei; Jin, Sen
2012-11-01
Taking fuel moisture content, fuel loading, and fuel bed depth as controlling factors, field fuel beds of Mongolian oak leaves in the Maoershan region of Northeast China were simulated, and a total of one hundred experimental burnings were conducted in the laboratory under no-wind and zero-slope conditions. The effects of fuel moisture content, fuel loading, and fuel bed depth on flame length and flame residence time were analyzed, and multivariate linear prediction models were constructed. The results indicated that fuel moisture content had a significant negative linear correlation with flame length, but a weaker correlation with flame residence time. Both fuel loading and fuel bed depth were significantly positively correlated with flame length and its residence time. The interactions of fuel bed depth with fuel moisture content and fuel loading had significant effects on flame length, while the interactions of fuel moisture content with fuel loading and fuel bed depth significantly affected flame residence time. The flame length model predicted well, explaining 83.3% of the variance with a mean absolute error of 7.8 cm and a mean relative error of 16.2%, while the flame residence time model was not as good, explaining only 54% of the variance with a mean absolute error of 9.2 s and a mean relative error of 18.6%.
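A multivariate linear prediction model of the kind fitted above can be reproduced with ordinary least squares. This sketch uses synthetic data in place of the burning experiments; the coefficient values and variable ranges are assumptions for illustration, chosen only to mimic the signs reported (negative moisture effect, positive loading and depth effects):

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic stand-ins: moisture (%), loading (kg/m2), bed depth (cm).
n = 100
X = np.column_stack([rng.uniform(5, 35, n),
                     rng.uniform(0.2, 1.0, n),
                     rng.uniform(2, 10, n)])
true_coef = np.array([-1.2, 30.0, 4.0])   # moisture effect is negative
flame_len = 20.0 + X @ true_coef + rng.normal(0, 2.0, n)

A = np.column_stack([np.ones(n), X])      # prepend an intercept column
coef, *_ = np.linalg.lstsq(A, flame_len, rcond=None)
```

With enough burnings relative to the noise level, the fitted coefficients recover the underlying effects, which is the basis for the explained-variance figures reported in the abstract.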
Non-Gaussian lineshapes and dynamics of time-resolved linear and nonlinear (correlation) spectra.
Dinpajooh, Mohammadhasan; Matyushov, Dmitry V
2014-07-17
Signatures of nonlinear and non-Gaussian dynamics in time-resolved linear and nonlinear (correlation) 2D spectra are analyzed in a model considering a linear plus quadratic dependence of the spectroscopic transition frequency on a Gaussian nuclear coordinate of the thermal bath (quadratic coupling). This new model is contrasted with the commonly assumed linear dependence of the transition frequency on the medium nuclear coordinates (linear coupling). The linear coupling model predicts equality between the Stokes shift and equilibrium correlation functions of the transition frequency, and a time-independent spectral width. Both predictions are often violated, and we ask here whether a nonlinear solvent response and/or non-Gaussian dynamics are required to explain these observations. We find that correlation functions of spectroscopic observables calculated in the quadratic coupling model depend on the chromophore's electronic state, and the spectral width gains time dependence, all in violation of the predictions of the linear coupling models. Lineshape functions of 2D spectra are derived assuming Ornstein-Uhlenbeck dynamics of the bath nuclear modes. The model predicts asymmetry of 2D correlation plots and bending of the center line. The latter is often used to extract two-point correlation functions from 2D spectra. The dynamics of the transition frequency are non-Gaussian. However, the effect of non-Gaussian dynamics is limited to the third-order (skewness) time correlation function, without affecting the time correlation functions of higher order. The theory is tested against molecular dynamics simulations of a model polar-polarizable chromophore dissolved in a force-field water.
Prediction of Flutter Boundary Using Flutter Margin for The Discrete-Time System
NASA Astrophysics Data System (ADS)
Dwi Saputra, Angga; Wibawa Purabaya, R.
2018-04-01
Flutter testing in a wind tunnel is generally conducted at subcritical speeds to avoid damage. Hence, the flutter speed has to be predicted from the behavior of stability criteria estimated as functions of dynamic pressure or flight speed, so a reliable method for estimating the flutter boundary is important. This paper summarizes the flutter testing of a cantilever wing model in a subsonic wind tunnel. The model has two degrees of freedom: bending and torsion modes. The dynamic responses were measured by two accelerometers mounted on the leading edge and the center of the wing tip, and the measurement was repeated as the wind speed increased. The dynamic responses were used to determine the flutter margin parameter for the discrete-time system, and the flutter boundary of the model was estimated by extrapolating this parameter against dynamic pressure. The flutter margin parameter for the discrete-time system predicts flutter better than the modal parameters: for a model with two degrees of freedom experiencing classical flutter, it gives satisfactory flutter boundary predictions in subsonic wind tunnel testing.
ARIMA model and exponential smoothing method: A comparison
NASA Astrophysics Data System (ADS)
Wan Ahmad, Wan Kamarul Ariffin; Ahmad, Sabri
2013-04-01
This study compares the Autoregressive Integrated Moving Average (ARIMA) model and the Exponential Smoothing Method in making predictions. The comparison is focused on the ability of both methods to make forecasts with different numbers of data sources and different lengths of forecasting period. For this purpose, three different time series are used in the comparison process: the Price of Crude Palm Oil (RM/tonne), Exchange Rates of Ringgit Malaysia (RM) against the Great Britain Pound (GBP), and the Price of SMR 20 Rubber Type (cents/kg). The forecasting accuracy of each model is then measured by examining the prediction error using Mean Squared Error (MSE), Mean Absolute Percentage Error (MAPE), and Mean Absolute Deviation (MAD). The study shows that the ARIMA model can produce a better prediction for long-term forecasting with limited data sources, but cannot produce a better prediction for a time series with a narrow range from one point to another, as in the Exchange Rates series. On the contrary, the Exponential Smoothing Method can produce better forecasts for the Exchange Rates, which have a narrow range from one point to another, while it cannot produce a better prediction for a longer forecasting period.
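The three accuracy measures used above are simple to compute; a minimal sketch on hypothetical forecast values (the numbers are illustrative, not from the study):

```python
def mse(actual, forecast):
    """Mean Squared Error."""
    return sum((a - f) ** 2 for a, f in zip(actual, forecast)) / len(actual)

def mape(actual, forecast):
    """Mean Absolute Percentage Error, in percent."""
    return 100.0 * sum(abs((a - f) / a)
                       for a, f in zip(actual, forecast)) / len(actual)

def mad(actual, forecast):
    """Mean Absolute Deviation."""
    return sum(abs(a - f) for a, f in zip(actual, forecast)) / len(actual)

# Hypothetical monthly prices, for illustration only.
actual = [100.0, 110.0, 120.0, 130.0]
forecast = [102.0, 108.0, 123.0, 128.0]
```

MSE penalizes large errors most heavily, MAD weights all errors equally, and MAPE is scale-free, which matters when comparing series as different as palm oil prices and exchange rates.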
Multiaxial Temperature- and Time-Dependent Failure Model
NASA Technical Reports Server (NTRS)
Richardson, David; McLennan, Michael; Anderson, Gregory; Macon, David; Batista-Rodriquez, Alicia
2003-01-01
A temperature- and time-dependent mathematical model predicts the conditions for failure of a material subjected to multiaxial stress. The model was initially applied to a filled epoxy below its glass-transition temperature, and is expected to be applicable to other materials, at least below their glass-transition temperatures. The model is justified simply by the fact that it closely approximates the experimentally observed failure behavior of this material: the multiaxiality of the model has been confirmed (see figure) and the model has been shown to be applicable at temperatures from -20 to 115 F (-29 to 46 C) and to predict tensile failures of constant-load and constant-load-rate specimens with failure times ranging from minutes to months.
The Predictability of Advection-dominated Flux-transport Solar Dynamo Models
NASA Astrophysics Data System (ADS)
Sanchez, Sabrina; Fournier, Alexandre; Aubert, Julien
2014-01-01
Space weather is a matter of practical importance in our modern society. Predictions of forthcoming solar cycles' mean amplitude and duration are currently being made based on flux-transport numerical models of the solar dynamo. Interested in the forecast horizon of such studies, we quantify the predictability window of a representative, advection-dominated, flux-transport dynamo model by investigating its sensitivity to initial conditions and control parameters through a perturbation analysis. We measure the rate associated with the exponential growth of an initial perturbation of the model trajectory, which yields a characteristic timescale known as the e-folding time τe. The e-folding time is shown to decrease with the strength of the α-effect, and to increase with the magnitude of the imposed meridional circulation. Comparing the e-folding time with the solar cycle periodicity, we obtain an average estimate for τe equal to 2.76 solar cycle durations. From a practical point of view, the perturbations analyzed in this work can be interpreted as uncertainties affecting either the observations or the physical model itself. After reviewing these, we discuss their implications for solar cycle prediction.
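The e-folding time can be estimated from the growth of a perturbation norm: if the norm grows as exp(t/τe), the least-squares slope of its logarithm against time gives 1/τe. A minimal sketch on synthetic data (the perturbation series is fabricated to have the paper's τe = 2.76, purely to illustrate the recovery):

```python
import math

def e_folding_time(times, norms):
    """tau_e = 1 / slope of the least-squares fit of ln(norm) vs time."""
    logs = [math.log(n) for n in norms]
    n = len(times)
    t_mean = sum(times) / n
    l_mean = sum(logs) / n
    slope = (sum((t - t_mean) * (l - l_mean) for t, l in zip(times, logs))
             / sum((t - t_mean) ** 2 for t in times))
    return 1.0 / slope

# Synthetic perturbation norm growing with tau_e = 2.76 cycle durations.
tau_true = 2.76
times = [0.5 * k for k in range(20)]
norms = [1e-6 * math.exp(t / tau_true) for t in times]
tau_est = e_folding_time(times, norms)
```

In a real dynamo run the growth is noisy and the fit window must end before the perturbation saturates at the attractor scale.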
Taghvaei, Sajjad; Jahanandish, Mohammad Hasan; Kosuge, Kazuhiro
2017-01-01
Population aging of societies requires providing the elderly with safe and dependable assistive technologies for daily life activities. Improving fall detection algorithms can play a major role in achieving this goal. This article proposes a real-time fall prediction algorithm based on visual data of a user of a walking assistive system acquired from a depth sensor. In the absence of a coupled dynamic model of the human and the assistive walker, a hybrid "system identification-machine learning" approach is used. An autoregressive-moving-average (ARMA) model is fitted on the time-series walking data to forecast the upcoming states, and a hidden Markov model (HMM) based classifier is built on top of the ARMA model to predict falling in the upcoming time frames. The performance of the algorithm is evaluated through experiments with four subjects, including an experienced physiotherapist, using a walker robot in five different falling scenarios: fall forward, fall down, fall back, fall left, and fall right. The algorithm successfully predicts falls at a rate of 84.72%.
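The forecasting half of the hybrid approach can be illustrated with a plain autoregressive fit. This sketch fits an AR(2) model by least squares and makes a one-step forecast on a synthetic series standing in for the walking-state data (a simplification: the paper uses a full ARMA model, and the synthetic process here is an assumption):

```python
import numpy as np

def fit_ar2(series):
    """Least-squares fit of x[t] = a1*x[t-1] + a2*x[t-2]."""
    x = np.asarray(series, dtype=float)
    A = np.column_stack([x[1:-1], x[:-2]])
    a, *_ = np.linalg.lstsq(A, x[2:], rcond=None)
    return a

def forecast_one_step(series, a):
    return a[0] * series[-1] + a[1] * series[-2]

# Noise-free AR(2) process; the coefficients are recovered exactly.
a_true = (1.5, -0.7)
x = [1.0, 0.8]
for _ in range(60):
    x.append(a_true[0] * x[-1] + a_true[1] * x[-2])
a_hat = fit_ar2(x)
next_val = forecast_one_step(x, a_hat)
```

In the fall-prediction setting, forecasts like `next_val` over the next few frames would feed the HMM classifier that decides whether a fall is imminent.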
Fast integration-based prediction bands for ordinary differential equation models.
Hass, Helge; Kreutz, Clemens; Timmer, Jens; Kaschek, Daniel
2016-04-15
To gain a deeper understanding of biological processes and their relevance in disease, mathematical models are built upon experimental data. Uncertainty in the data leads to uncertainties of the model's parameters and in turn to uncertainties of predictions. Mechanistic dynamic models of biochemical networks are frequently based on nonlinear differential equation systems and feature a large number of parameters, sparse observations of the model components and lack of information in the available data. Due to the curse of dimensionality, classical and sampling approaches propagating parameter uncertainties to predictions are hardly feasible and insufficient. However, for experimental design and to discriminate between competing models, prediction and confidence bands are essential. To circumvent the hurdles of the former methods, an approach to calculate a profile likelihood on arbitrary observations for a specific time point has been introduced, which provides accurate confidence and prediction intervals for nonlinear models and is computationally feasible for high-dimensional models. In this article, reliable and smooth point-wise prediction and confidence bands to assess the model's uncertainty on the whole time-course are achieved via explicit integration with elaborate correction mechanisms. The corresponding system of ordinary differential equations is derived and tested on three established models for cellular signalling. An efficiency analysis is performed to illustrate the computational benefit compared with repeated profile likelihood calculations at multiple time points. The integration framework and the examples used in this article are provided with the software package Data2Dynamics, which is based on MATLAB and freely available at http://www.data2dynamics.org (contact: helge.hass@fdm.uni-freiburg.de). Supplementary data are available at Bioinformatics online.
Peterson, A Townsend; Martínez-Campos, Carmen; Nakazawa, Yoshinori; Martínez-Meyer, Enrique
2005-09-01
Numerous human diseases-malaria, dengue, yellow fever and leishmaniasis, to name a few-are transmitted by insect vectors with brief life cycles and biting activity that varies in both space and time. Although the general geographic distributions of these epidemiologically important species are known, the spatiotemporal variation in their emergence and activity remains poorly understood. We used ecological niche modeling via a genetic algorithm to produce time-specific predictive models of monthly distributions of Aedes aegypti in Mexico in 1995. Significant predictions of monthly mosquito activity and distributions indicate that predicting spatiotemporal dynamics of disease vector species is feasible; significant coincidence with human cases of dengue indicate that these dynamics probably translate directly into transmission of dengue virus to humans. This approach provides new potential for optimizing use of resources for disease prevention and remediation via automated forecasting of disease transmission risk.
Refining Sunrise/set Prediction Models by Accounting for the Effects of Refraction
NASA Astrophysics Data System (ADS)
Wilson, Teresa; Bartlett, Jennifer L.
2016-01-01
Current atmospheric models used to predict the times of sunrise and sunset have an error of one to four minutes at mid-latitudes (0°-55° N/S). At higher latitudes, slight changes in refraction may cause significant discrepancies, including determining even whether the Sun appears to rise or set. While different components of refraction are known, how they affect predictions of sunrise/set has not yet been quantified. A better understanding of the contributions from the temperature profile, pressure, humidity, and aerosols could significantly improve the standard prediction. Because sunrise/set times and meteorological data from multiple locations will be necessary for a thorough investigation of the problem, we will collect these data using smartphones as part of a citizen science project. This analysis will lead to more complete models that will provide more accurate times for navigators and outdoorsmen alike.
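The standard prediction the authors seek to refine treats refraction as a fixed altitude offset: the Sun is conventionally taken to rise when its center reaches altitude -50 arcminutes (about 34' of refraction plus a 16' solar semidiameter). A minimal sketch of the resulting hour-angle and day-length computation:

```python
import math

def day_length_hours(lat_deg, decl_deg, h0_deg=-50.0 / 60.0):
    """Day length from the sunrise hour angle H, via
    cos H = (sin h0 - sin(lat) sin(decl)) / (cos(lat) cos(decl))."""
    lat, decl, h0 = map(math.radians, (lat_deg, decl_deg, h0_deg))
    cos_h = ((math.sin(h0) - math.sin(lat) * math.sin(decl))
             / (math.cos(lat) * math.cos(decl)))
    H = math.degrees(math.acos(max(-1.0, min(1.0, cos_h))))
    return 2.0 * H / 15.0   # 15 degrees of hour angle per hour

# At the equator on the equinox, refraction stretches the day past 12 h.
equinox_day = day_length_hours(0.0, 0.0)
```

Because the fixed -50' offset ignores the actual temperature profile, pressure, humidity, and aerosols, it is exactly the assumption whose variability produces the one-to-four-minute errors described above.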
Rastetter, Edward B; Williams, Mathew; Griffin, Kevin L; Kwiatkowski, Bonnie L; Tomasky, Gabrielle; Potosnak, Mark J; Stoy, Paul C; Shaver, Gaius R; Stieglitz, Marc; Hobbie, John E; Kling, George W
2010-07-01
Continuous time-series estimates of net ecosystem carbon exchange (NEE) are routinely made using eddy covariance techniques. Identifying and compensating for errors in the NEE time series can be automated using a signal processing filter like the ensemble Kalman filter (EnKF). The EnKF compares each measurement in the time series to a model prediction and updates the NEE estimate by weighting the measurement and model prediction relative to a specified measurement error estimate and an estimate of the model-prediction error that is continuously updated based on model predictions of earlier measurements in the time series. Because of the covariance among model variables, the EnKF can also update estimates of variables for which there is no direct measurement. The resulting estimates evolve through time, enabling the EnKF to be used to estimate dynamic variables like changes in leaf phenology. The evolving estimates can also serve as a means to test the embedded model and reconcile persistent deviations between observations and model predictions. We embedded a simple arctic NEE model into the EnKF and filtered data from an eddy covariance tower located in tussock tundra on the northern foothills of the Brooks Range in northern Alaska, USA. The model predicts NEE based only on leaf area, irradiance, and temperature and has been well corroborated for all the major vegetation types in the Low Arctic using chamber-based data. This is the first application of the model to eddy covariance data. We modified the EnKF by adding an adaptive noise estimator that provides a feedback between persistent model data deviations and the noise added to the ensemble of Monte Carlo simulations in the EnKF. We also ran the EnKF with both a specified leaf-area trajectory and with the EnKF sequentially recalibrating leaf-area estimates to compensate for persistent model-data deviations. 
When used together, adaptive noise estimation and sequential recalibration substantially improved filter performance, but neither improved performance when used individually. The EnKF estimates of leaf area followed the expected springtime canopy phenology. However, there were also diel fluctuations in the leaf-area estimates; these are a clear indication of a model deficiency, possibly related to vapor pressure effects on canopy conductance.
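The EnKF analysis step described above can be sketched for a scalar state; the numbers below are hypothetical, and the real filter runs a full ecosystem model to produce the forecast ensemble:

```python
import numpy as np

rng = np.random.default_rng(0)

def enkf_update(ensemble, obs, obs_var):
    """One EnKF analysis step for a scalar state (e.g., NEE).

    Each ensemble member is nudged toward a perturbed copy of the
    observation, weighted by forecast spread vs. total uncertainty.
    """
    forecast_var = np.var(ensemble, ddof=1)
    gain = forecast_var / (forecast_var + obs_var)   # Kalman gain
    perturbed_obs = obs + rng.normal(0.0, np.sqrt(obs_var), size=ensemble.size)
    return ensemble + gain * (perturbed_obs - ensemble)

# Hypothetical numbers: a model forecast ensemble of NEE and one tower measurement.
forecast = rng.normal(-2.0, 0.5, size=100)   # model predicts net uptake
analysis = enkf_update(forecast, obs=-1.2, obs_var=0.3**2)

# The analysis mean lies between the forecast mean and the observation,
# and the analysis spread is smaller than the forecast spread.
print(forecast.mean(), analysis.mean(), analysis.std())
```

The same weighting generalizes to vector states, where covariances among state variables let the filter update unobserved quantities such as leaf area.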
Unscented Kalman Filter-Trained Neural Networks for Slip Model Prediction
Li, Zhencai; Wang, Yang; Liu, Zhen
2016-01-01
The purpose of this work is to investigate accurate trajectory tracking control of a wheeled mobile robot (WMR) based on slip model prediction. Generally, a nonholonomic WMR faces an increased risk of slippage (such as longitudinal and lateral slippage of the wheels) when traveling on outdoor unstructured terrain. In order to control a WMR stably and accurately under the effect of slippage, an unscented Kalman filter and neural networks (NNs) are applied to estimate the slip model in real time. This method exploits the model-approximating capabilities of a nonlinear state-space NN, with the unscented Kalman filter used to train the NN's weights online. The slip parameters can be estimated and used to predict the time series of deviation velocity, which can then compensate the control inputs of the WMR. The results of numerical simulation show that the desired trajectory tracking control can be performed by predicting the nonlinear slip model. PMID:27467703
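The idea of training network weights with an unscented Kalman filter can be sketched with a tiny 1-2-1 tanh network; the network size, noise levels, and the made-up saturating "slip curve" target are all illustrative assumptions, not the paper's configuration:

```python
import numpy as np

rng = np.random.default_rng(1)

def nn(theta, x):
    """Tiny 1-2-1 tanh network; theta packs all 7 weights/biases."""
    w1, b1, w2, b2 = theta[0:2], theta[2:4], theta[4:6], theta[6]
    h = np.tanh(w1 * x + b1)
    return float(h @ w2 + b2)

def sigma_points(mean, cov, kappa=1.0):
    n = mean.size
    S = np.linalg.cholesky((n + kappa) * cov + 1e-8 * np.eye(n))
    pts = np.vstack([mean, mean + S.T, mean - S.T])
    w = np.full(2 * n + 1, 0.5 / (n + kappa))
    w[0] = kappa / (n + kappa)
    return pts, w

def ukf_step(theta, P, x, z, Q=1e-4, R=1e-2):
    """Treat the weights as state with random-walk dynamics; one update."""
    P = P + Q * np.eye(theta.size)
    pts, w = sigma_points(theta, P)
    zs = np.array([nn(p, x) for p in pts])   # propagate through the network
    z_hat = w @ zs
    S = w @ (zs - z_hat) ** 2 + R            # innovation variance
    C = (w * (zs - z_hat)) @ (pts - theta)   # cross covariance
    K = C / S                                # Kalman gain
    theta = theta + K * (z - z_hat)
    P = P - np.outer(K, K) * S
    return theta, 0.5 * (P + P.T)            # keep covariance symmetric

# Hypothetical target: a saturating slip curve, z = tanh(2x) + noise.
xs = rng.uniform(-2, 2, 200)
zs_obs = np.tanh(2 * xs) + rng.normal(0, 0.05, xs.size)

theta, P = rng.normal(0, 0.5, 7), np.eye(7)
for x, z in zip(xs, zs_obs):
    theta, P = ukf_step(theta, P, x, z)

mse = np.mean([(nn(theta, x) - np.tanh(2 * x)) ** 2 for x in np.linspace(-2, 2, 50)])
print(mse)
```

The online weight update is what allows the slip model to adapt while the robot is driving, instead of requiring an offline training phase.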
Sadiq, Muhammad W; Nielsen, Elisabet I; Khachman, Dalia; Conil, Jean-Marie; Georges, Bernard; Houin, Georges; Laffont, Celine M; Karlsson, Mats O; Friberg, Lena E
2017-04-01
The purpose of this study was to develop a whole-body physiologically based pharmacokinetic (WB-PBPK) model for ciprofloxacin for intensive care unit (ICU) patients, based on only plasma concentration data. As a next step, tissue and organ concentration-time profiles in patients were predicted using the developed model. The WB-PBPK model was built using a non-linear mixed effects approach based on data from 102 adult ICU patients. Tissue to plasma distribution coefficients (Kp) were available from the literature and used as informative priors. The developed WB-PBPK model successfully characterized both the typical trends and variability of the available ciprofloxacin plasma concentration data. The WB-PBPK model was thereafter combined with a pharmacokinetic-pharmacodynamic (PKPD) model, developed based on in vitro time-kill data of ciprofloxacin and Escherichia coli, to illustrate the potential of this type of approach to predict the time-course of bacterial killing at different sites of infection. The predicted unbound concentration-time profile in extracellular tissue drove the bacterial killing in the PKPD model, and the rate and extent of take-over of mutant bacteria in different tissues were explored. Bacterial killing was predicted to be most efficient in lung and kidney, which corresponds well to ciprofloxacin's indications of pneumonia and urinary tract infections. Furthermore, a function based on available information on bacterial killing by the immune system in vivo was incorporated. This work demonstrates the development and application of a WB-PBPK-PD model to compare killing of bacteria with different antibiotic susceptibility, of value for drug development and the optimal use of antibiotics.
Physiologically Based Pharmacokinetic Model for Terbinafine in Rats and Humans
Hosseini-Yeganeh, Mahboubeh; McLachlan, Andrew J.
2002-01-01
The aim of this study was to develop a physiologically based pharmacokinetic (PB-PK) model capable of describing and predicting terbinafine concentrations in plasma and tissues in rats and humans. A PB-PK model consisting of 12 tissue and 2 blood compartments was developed using concentration-time data for tissues from rats (n = 33) after intravenous bolus administration of terbinafine (6 mg/kg of body weight). It was assumed that all tissues except skin and testis tissues were well-stirred compartments with perfusion rate limitations. The uptake of terbinafine into skin and testis tissues was described by a PB-PK model which incorporates a membrane permeability rate limitation. The concentration-time data for terbinafine in human plasma and tissues were predicted by use of a scaled-up PB-PK model, which took oral absorption into consideration. The predictions obtained from the global PB-PK model for the concentration-time profile of terbinafine in human plasma and tissues were in close agreement with the observed concentration data for rats. The scaled-up PB-PK model provided an excellent prediction of published terbinafine concentration-time data obtained after the administration of single and multiple oral doses in humans. The estimated volume of distribution at steady state (Vss) obtained from the PB-PK model agreed with the reported value of 11 liters/kg. The apparent volume of distribution of terbinafine in skin and adipose tissues accounted for 41 and 52%, respectively, of the Vss for humans, indicating that uptake into and redistribution from these tissues dominate the pharmacokinetic profile of terbinafine. The PB-PK model developed in this study was capable of accurately predicting the plasma and tissue terbinafine concentrations in both rats and humans and provides insight into the physiological factors that determine terbinafine disposition. PMID:12069977
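The perfusion-rate-limited ("well-stirred") tissue compartment assumed for most tissues above reduces to a single ODE per tissue; the flow Q, volume V, partition coefficient Kp, and mono-exponential plasma input below are hypothetical placeholders, not terbinafine parameters:

```python
import numpy as np

def well_stirred_tissue(Q, V, Kp, c_arterial, dt=0.01, t_end=24.0):
    """Perfusion-rate-limited tissue: dCt/dt = (Q/V) * (Ca(t) - Ct/Kp),
    integrated with a simple forward-Euler scheme."""
    ts = np.arange(0.0, t_end, dt)
    Ct = np.zeros_like(ts)
    for i in range(1, ts.size):
        Ca = c_arterial(ts[i - 1])
        Ct[i] = Ct[i - 1] + dt * (Q / V) * (Ca - Ct[i - 1] / Kp)
    return ts, Ct

# Hypothetical inputs: declining plasma profile and generic tissue constants.
ca = lambda t: 10.0 * np.exp(-0.3 * t)           # mg/L arterial input
ts, Ct = well_stirred_tissue(Q=1.0, V=0.5, Kp=8.0, c_arterial=ca)
print(round(float(Ct.max()), 2), round(float(ts[Ct.argmax()]), 2))
```

A full PB-PK model chains a dozen such compartments through the blood pools, and the membrane-limited tissues (skin, testis in this study) add a second state per tissue.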
Comparing an annual and daily time-step model for predicting field-scale phosphorus loss
USDA-ARS?s Scientific Manuscript database
Numerous models exist for describing phosphorus (P) losses from agricultural fields. The complexity of these models varies considerably ranging from simple empirically-based annual time-step models to more complex process-based daily time step models. While better accuracy is often assumed with more...
Modeling water yield response to forest cover changes in northern Minnesota
S.C. Bernath; E.S. Verry; K.N. Brooks; P.F. Ffolliott
1982-01-01
A water yield model (TIMWAT) has been developed to predict changes in water yield following changes in forest cover in northern Minnesota. Two versions of the model exist; one predicts changes in water yield as a function of gross precipitation and time after clearcutting. The second version predicts changes in water yield due to changes in above-ground biomass...
NASA Technical Reports Server (NTRS)
Pandolf, Kent B.; Stroschein, Leander A.; Gonzalez, Richard R.; Sawka, Michael N.
1994-01-01
This institute has developed a comprehensive USARIEM heat strain model for predicting physiological responses and soldier performance in the heat, which has been programmed for hand-held calculators and personal computers and incorporated into a heat strain decision aid. This model deals directly with five major inputs: the clothing worn, the physical work intensity, the state of heat acclimation, the ambient environment (air temperature, relative humidity, wind speed, and solar load), and the accepted heat casualty level. In addition to predicting rectal temperature, heart rate, and sweat loss given the above inputs, our model predicts the expected physical work/rest cycle, the maximum safe physical work time, the estimated recovery time from maximal physical work, and the drinking water requirements associated with each of these situations. This model provides heat injury risk management guidance based on thermal strain predictions from the user-specified environmental conditions, soldier characteristics, clothing worn, and physical work intensity. If heat transfer values for space operations' clothing are known, NASA can use this prediction model to help avoid undue heat strain in astronauts during space flight.
Modeling to predict pilot performance during CDTI-based in-trail following experiments
NASA Technical Reports Server (NTRS)
Sorensen, J. A.; Goka, T.
1984-01-01
A mathematical model was developed of the flight system with the pilot using a cockpit display of traffic information (CDTI) to establish and maintain in-trail spacing behind a lead aircraft during approach. Both in-trail and vertical dynamics were included. The nominal spacing was based on one of three criteria (Constant Time Predictor; Constant Time Delay; or Acceleration Cue). This model was used to simulate digitally the dynamics of a string of multiple following aircraft, including response to initial position errors. The simulation was used to predict the outcome of a series of in-trail following experiments, including pilot performance in maintaining correct longitudinal spacing and vertical position. The experiments were run in the NASA Ames Research Center multi-cab cockpit simulator facility. The experimental results were then used to evaluate the model and its prediction accuracy. Model parameters were adjusted, so that modeled performance matched experimental results. Lessons learned in this modeling and prediction study are summarized.
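One of the spacing criteria above, Constant Time Delay, can be sketched as a follower steering toward where the lead aircraft was tau seconds earlier; the proportional speed law, gain, and speeds below are illustrative assumptions, much simpler than the paper's pilot model:

```python
import numpy as np

def simulate_ctd(lead_pos, tau, dt=1.0, k=0.2, v0=70.0):
    """Follower closes on the lead's position tau seconds earlier
    (Constant Time Delay criterion) via a proportional speed command."""
    delay = int(tau / dt)
    pos = np.zeros(lead_pos.size)
    pos[0] = lead_pos[0] - v0 * tau               # start at nominal spacing
    for i in range(1, lead_pos.size):
        if i >= delay:
            target = lead_pos[i - delay]          # lead's track tau s ago
        else:                                     # warm-up: extrapolate backwards
            target = lead_pos[0] - v0 * (delay - i) * dt
        v = v0 + k * (target - pos[i - 1])        # speed command (m/s)
        pos[i] = pos[i - 1] + v * dt
    return pos

t = np.arange(0.0, 300.0, 1.0)
lead = 70.0 * t                                   # lead cruising at 70 m/s
follower = simulate_ctd(lead, tau=60.0)
gap_seconds = (lead - follower) / 70.0            # spacing in time units
print(round(float(gap_seconds[-1]), 1))           # → 59.0
```

With this discretization the gap settles one time step short of the commanded 60 s; chaining several such followers is how string stability of the in-trail queue can be explored.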
Kalman filter to update forest cover estimates
Raymond L. Czaplewski
1990-01-01
The Kalman filter is a statistical estimator that combines a time-series of independent estimates, using a prediction model that describes expected changes in the state of a system over time. An expensive inventory can be updated using model predictions that are adjusted with more recent, but less expensive and precise, monitoring data. The concepts of the Kalman...
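The update cycle described above can be sketched for a scalar forest-cover state; the growth rate and variances below are invented for illustration:

```python
def kalman_update(x_prev, var_prev, growth, q, z, r):
    """Predict with a simple growth model, then blend in new survey data."""
    x_pred = x_prev + growth           # model-projected forest cover (%)
    var_pred = var_prev + q            # prediction uncertainty grows over time
    gain = var_pred / (var_pred + r)   # weight given to the new observation
    x_new = x_pred + gain * (z - x_pred)
    var_new = (1 - gain) * var_pred
    return x_new, var_new

# Hypothetical numbers: a precise old inventory updated with a cheap survey.
x, var = 42.0, 1.0                     # old inventory: 42% cover
x, var = kalman_update(x, var, growth=-0.5, q=0.5, z=40.2, r=4.0)
print(round(x, 2), round(var, 2))      # → 41.15 1.09
```

The imprecise monitoring data (r=4.0) only partially overrides the model projection, which is exactly the behavior that lets an expensive inventory be stretched between remeasurements.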
Utility of NCEP Operational and Emerging Meteorological Models for Driving Air Quality Prediction
NASA Astrophysics Data System (ADS)
McQueen, J.; Huang, J.; Huang, H. C.; Shafran, P.; Lee, P.; Pan, L.; Sleinkofer, A. M.; Stajner, I.; Upadhayay, S.; Tallapragada, V.
2017-12-01
Operational air quality predictions for the United States (U.S.) are provided at NOAA by the National Air Quality Forecasting Capability (NAQFC). NAQFC provides nationwide operational predictions of ozone and particulate matter twice per day (at the 06 and 12 UTC cycles) at 12 km resolution and 1-hour time intervals through 48 hours, distributed at http://airquality.weather.gov. The NOAA National Centers for Environmental Prediction (NCEP) operational North American Mesoscale (NAM) 12 km weather prediction is used to drive the Community Multiscale Air Quality (CMAQ) model. In 2017, the NAM was upgraded in part to reduce a warm 2 m temperature bias in summer (V4). At the same time, CMAQ was updated to V5.0.2. Both versions of the models were run in parallel for several months, so the impact of improvements from the atmospheric chemistry model could be separated from upgrades to the weather prediction model. Improvements in CMAQ predictions were related to the reduction of the NAM 2 m temperature bias: increasing the opacity of clouds reduced downward shortwave radiation, which in turn reduced ozone photolysis. Higher resolution operational NWP models have recently been introduced as part of the NCEP modeling suite. These include the NAM CONUS Nest (3 km horizontal resolution), run four times per day through 60 hours, and the High Resolution Rapid Refresh (HRRR, 3 km), run hourly out to 18 hours. In addition, NCEP with other NOAA labs has begun to develop and test the Next Generation Global Prediction System (NGGPS) based on the FV3 global model. This presentation also overviews recent developments in operational numerical weather prediction and evaluates the ability of these models to predict low-level temperatures and clouds and to capture boundary layer processes important for driving air quality prediction in complex terrain.
The assessed meteorological model errors could help determine the magnitude of possible pollutant errors from CMAQ if these models are used for driving meteorology. The NWP models will be evaluated against standard and mesonet fields averaged over various regions during summer 2017. An evaluation of meteorological fields important to air quality modeling (e.g., near-surface winds, temperatures, moisture, boundary layer heights, and cloud cover) will be reported.
NASA Technical Reports Server (NTRS)
Dhanasekharan, M.; Huang, H.; Kokini, J. L.; Janes, H. W. (Principal Investigator)
1999-01-01
The measured rheological behavior of hard wheat flour dough was predicted using three nonlinear differential viscoelastic models. The Phan-Thien Tanner model gave a good zero-shear viscosity prediction but overpredicted the shear viscosity at higher shear rates as well as the transient and extensional properties. The Giesekus-Leonov model gave predictions similar to the Phan-Thien Tanner model, but its extensional viscosity prediction showed extension thickening. Using high values of the mobility factor, extension-thinning behavior was observed, but the predictions were not satisfactory. The White-Metzner model gave good predictions of the steady shear viscosity and the first normal stress coefficient, but it was unable to predict the uniaxial extensional viscosity, as it exhibited asymptotic behavior over the tested extensional rates. It also predicted the transient shear properties with moderate accuracy during the transient phase and very well at later times, compared to the Phan-Thien Tanner and Giesekus-Leonov models. None of the models predicted all observed data consistently well. Overall, the White-Metzner model appeared to make the best predictions of all the observed data.
NASA Astrophysics Data System (ADS)
Schöniger, Anneli; Wöhling, Thomas; Nowak, Wolfgang
2014-05-01
Bayesian model averaging ranks the predictive capabilities of alternative conceptual models based on Bayes' theorem. The individual models are weighted with their posterior probability of being the best one in the considered set of models. Finally, their predictions are combined into a robust weighted average and the predictive uncertainty can be quantified. This rigorous procedure does not, however, yet account for possible instabilities due to measurement noise in the calibration data set. This is a major drawback, since posterior model weights may suffer a lack of robustness related to the uncertainty in noisy data, which may compromise the reliability of model ranking. We present a new statistical concept to account for measurement noise as a source of uncertainty for the weights in Bayesian model averaging. Our suggested upgrade reflects the limited information content of data for the purpose of model selection. It allows us to assess the significance of the determined posterior model weights, the confidence in model selection, and the accuracy of the quantified predictive uncertainty. Our approach rests on a brute-force Monte Carlo framework. We determine the robustness of model weights against measurement noise by repeatedly perturbing the observed data with random realizations of measurement error. Then, we analyze the induced variability in posterior model weights and introduce this "weighting variance" as an additional term into the overall prediction uncertainty analysis scheme. We further determine the theoretical upper limit on the performance of the model set that is imposed by measurement noise. As an extension to the merely relative model ranking, this analysis provides a measure of absolute model performance. To finally decide whether better data or longer time series are needed to ensure a robust basis for model selection, we resample the measurement time series and assess the convergence of model weights for increasing time series length.
We illustrate our suggested approach with an application to model selection between different soil-plant models, following up on a study by Wöhling et al. (2013). Results show that measurement noise compromises the reliability of model ranking and causes a significant amount of weighting uncertainty if the calibration data time series is not long enough to compensate for its noisiness. This additional contribution to the overall predictive uncertainty is neglected without our approach. Thus, we strongly advocate including our suggested upgrade in the Bayesian model averaging routine.
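The brute-force perturbation idea can be sketched as follows, with two toy models and Gaussian likelihoods standing in for the full Bayesian model averaging machinery; all data and models here are synthetic:

```python
import numpy as np

rng = np.random.default_rng(42)

def bma_weights(y_obs, preds, sigma):
    """Posterior model weights from Gaussian likelihoods, equal priors."""
    ll = np.array([-0.5 * np.sum((y_obs - p) ** 2) / sigma**2 for p in preds])
    w = np.exp(ll - ll.max())          # stabilize before normalizing
    return w / w.sum()

# Two hypothetical competing models and noisy synthetic "observations".
t = np.linspace(0, 1, 20)
preds = [np.sin(2 * np.pi * t), np.sin(2 * np.pi * t) + 0.1 * t]
sigma = 0.2
y = preds[0] + rng.normal(0, sigma, t.size)      # model 0 is the true one

# Repeatedly perturb the data with fresh measurement-error realizations
# and record how much the posterior weights wobble ("weighting variance").
W = np.array([bma_weights(y + rng.normal(0, sigma, t.size), preds, sigma)
              for _ in range(500)])
print(W.mean(axis=0).round(2), W.var(axis=0).round(3))
```

With a short, noisy series the weights scatter widely even though model 0 generated the data, which is precisely the instability the proposed "weighting variance" term quantifies.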
On-Line, Self-Learning, Predictive Tool for Determining Payload Thermal Response
NASA Technical Reports Server (NTRS)
Jen, Chian-Li; Tilwick, Leon
2000-01-01
This paper will present the results of a joint ManTech / Goddard R&D effort, currently under way, to develop and test a computer-based, on-line, predictive simulation model for use by facility operators to predict the thermal response of a payload during thermal vacuum testing. Thermal response was identified as an area that could benefit from the algorithms developed by Dr. Jen for complex computer simulations. Most thermal vacuum test setups are unique, since no two payloads have the same thermal properties. Operators must therefore depend on past experience to conduct a test, which takes time as they learn how the payload responds while limiting any risk of exceeding hot or cold temperature limits. The predictive tool being developed is intended to be used with the new Thermal Vacuum Data System (TVDS) developed at Goddard for the Thermal Vacuum Test Operations group. This model can learn the thermal response of the payload by reading a few data points from the TVDS, accepting the payload's current temperature as the initial condition for prediction. The model can then be used as a predictive tool to estimate future payload temperatures according to a predetermined shroud temperature profile. If the prediction error is too large, the model can be asked to re-learn the new situation on-line in real time and give a new prediction. Based on some preliminary tests, we feel this predictive model can forecast the payload temperature over the entire test cycle to within 5 degrees Celsius after three learning passes at the beginning of the test. The tool will allow the operator to play "what-if" experiments to decide on the best shroud temperature set-point control strategy. This tool will save money by minimizing guesswork and optimizing transitions, as well as making the testing process safer and easier to conduct.
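The learn-then-predict loop can be sketched under a first-order lag assumption, dT/dt = (T_shroud - T)/tau; the synthetic payload below is generated by that same model, so the recovery is exact, which a real payload would not be:

```python
import numpy as np

def learn_tau(t, T_pay, T_shr):
    """Fit the time constant of dT/dt = (T_shroud - T)/tau by least squares."""
    dT = np.diff(T_pay) / np.diff(t)
    drive = T_shr[:-1] - T_pay[:-1]
    slope = np.sum(drive * dT) / np.sum(drive * drive)   # estimates 1/tau
    return 1.0 / slope

def predict(T0, t, T_shr, tau):
    """Forward-integrate the learned model along a shroud set-point profile."""
    T = np.empty_like(t)
    T[0] = T0
    for i in range(1, t.size):
        dt = t[i] - t[i - 1]
        T[i] = T[i - 1] + dt * (T_shr[i - 1] - T[i - 1]) / tau
    return T

# Synthetic "payload" with a true 120-minute thermal time constant.
t = np.arange(0.0, 600.0, 5.0)             # minutes
shroud = np.where(t < 300, -80.0, 40.0)    # cold soak, then hot transition
truth = predict(20.0, t, shroud, tau=120.0)

tau_hat = learn_tau(t[:12], truth[:12], shroud[:12])   # learn from first hour
forecast = predict(20.0, t, shroud, tau_hat)
print(round(tau_hat, 1), round(float(np.abs(forecast - truth).max()), 2))  # → 120.0 0.0
```

On real hardware the residual between forecast and telemetry would trigger the on-line re-learn step described above.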
Ling, Qi; Liu, Jimin; Zhuo, Jianyong; Zhuang, Runzhou; Huang, Haitao; He, Xiangxiang; Xu, Xiao; Zheng, Shusen
2018-04-27
Donor characteristics and graft quality were recently reported to play an important role in the recurrence of hepatocellular carcinoma after liver transplantation. Our aim was to establish a prognostic model by using both donor and recipient variables. Data of 1,010 adult patients (training/validation: 2/1) undergoing primary liver transplantation for hepatocellular carcinoma were extracted from the China Liver Transplant Registry database and analyzed retrospectively. A multivariate competing risk regression model was developed and used to generate a nomogram predicting the likelihood of post-transplant hepatocellular carcinoma recurrence. Of 673 patients in the training cohort, 70 (10.4%) had hepatocellular carcinoma recurrence with a median recurrence time of 6 months (interquartile range: 4-25 months). Cold ischemia time was the only independent donor prognostic factor for predicting hepatocellular carcinoma recurrence (hazard ratio = 2.234, P = .007). The optimal cutoff value was 12 hours when patients were grouped according to cold ischemia time at 2-hour intervals. Integrating cold ischemia time into the Milan criteria (liver transplantation candidate selection criteria) improved the accuracy for predicting hepatocellular carcinoma recurrence in both training and validation sets (P < .05). A nomogram composed of cold ischemia time, tumor burden, differentiation, and α-fetoprotein level proved to be accurate and reliable in predicting the likelihood of 1-year hepatocellular carcinoma recurrence after liver transplantation. Additionally, donor anti-hepatitis B core antibody positivity, prolonged cold ischemia time, and anhepatic time were linked to the intrahepatic recurrence, whereas older donor age, prolonged donor warm ischemia time, cold ischemia time, and ABO incompatibility were relevant to the extrahepatic recurrence. 
The graft quality integrated models exhibited considerable predictive accuracy in early hepatocellular carcinoma recurrence risk assessment. The identification of donor risks can further help understand the mechanism of different patterns of recurrence.
Recurrent Neural Network Applications for Astronomical Time Series
NASA Astrophysics Data System (ADS)
Protopapas, Pavlos
2017-06-01
The benefits of good predictive models in astronomy lie in early event prediction systems and effective resource allocation. Current time series methods applicable to regular time series have not evolved to generalize for irregular time series. In this talk, I will describe two Recurrent Neural Network methods, Long Short-Term Memory (LSTM) and Echo State Networks (ESNs), for predicting irregular time series. Feature engineering along with non-linear modeling proved to be an effective predictor. For noisy time series, the prediction is improved by training the network on error realizations using the error estimates from astronomical light curves. In addition, we propose a new neural network architecture to remove correlation from the residuals in order to improve prediction and compensate for the noisy data. Finally, I show how to set hyperparameters correctly for a stable and performant solution: we circumvent the difficulty of manual tuning by optimizing ESN hyperparameters using Bayesian optimization with Gaussian Process priors. This automates the tuning procedure, enabling users to employ the power of RNNs without needing an in-depth understanding of the tuning procedure.
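A minimal echo state network for one-step-ahead prediction of a noisy periodic signal gives the flavor of the ESN approach; the hand-picked hyperparameters (reservoir size, spectral radius, ridge penalty) stand in for the Bayesian-optimized ones described in the talk:

```python
import numpy as np

rng = np.random.default_rng(3)

def train_esn(u, y, n_res=200, rho=0.9, ridge=1e-4):
    """Drive a fixed random reservoir with input u; fit a linear readout to y."""
    W_in = rng.uniform(-0.5, 0.5, (n_res, 1))
    W = rng.normal(0, 1, (n_res, n_res))
    W *= rho / np.max(np.abs(np.linalg.eigvals(W)))   # set spectral radius
    X = np.zeros((u.size, n_res))
    x = np.zeros(n_res)
    for i, ui in enumerate(u):
        x = np.tanh(W_in[:, 0] * ui + W @ x)          # reservoir update
        X[i] = x
    W_out = np.linalg.solve(X.T @ X + ridge * np.eye(n_res), X.T @ y)
    return W_in, W, W_out

# One-step-ahead prediction of a noisy sine (a stand-in for a light curve).
t = np.arange(0, 60, 0.1)
s = np.sin(t) + rng.normal(0, 0.02, t.size)
u, y = s[:-1], s[1:]
W_in, W, W_out = train_esn(u[:400], y[:400])

# Evaluate the readout on the held-out tail (fresh reservoir state).
x = np.zeros(W.shape[0])
errs = []
for ui, yi in zip(u[400:], y[400:]):
    x = np.tanh(W_in[:, 0] * ui + W @ x)
    errs.append(float((x @ W_out - yi) ** 2))
print(np.mean(errs))
```

Only the linear readout is trained, which is what makes ESNs cheap enough that an outer Bayesian-optimization loop over reservoir hyperparameters is affordable.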
Predicting local field potentials with recurrent neural networks.
Kim, Louis; Harer, Jacob; Rangamani, Akshay; Moran, James; Parks, Philip D; Widge, Alik; Eskandar, Emad; Dougherty, Darin; Chin, Sang Peter
2016-08-01
We present a Recurrent Neural Network using LSTM (Long Short-Term Memory) that is capable of modeling and predicting Local Field Potentials. We train and test the network on real data recorded from epilepsy patients. We construct networks that predict multi-channel LFPs 1, 10, and 100 milliseconds forward in time. Our results show that prediction using LSTM outperforms regression when predicting 10 and 100 milliseconds forward in time.
NASA Astrophysics Data System (ADS)
Guarnaccia, Claudio; Quartieri, Joseph; Tepedino, Carmine
2017-06-01
The dangerous effect of noise on human health is well known. Both the auditory and non-auditory effects are largely documented in the literature and represent an important hazard in human activities. Particular care is devoted to road traffic noise, since it grows with the growth of residential, industrial and commercial areas. For these reasons, it is important to develop effective models able to predict the noise in a certain area. In this paper, a hybrid predictive model is presented. The model is based on the mixing of two different approaches: Time Series Analysis (TSA) and the Artificial Neural Network (ANN). The TSA model is based on the evaluation of trend and seasonality in the data, while the ANN model is based on the capacity of the network to "learn" the behavior of the data. The mixed approach consists in estimating noise levels by means of TSA and, once the differences (residuals) between the TSA estimates and the observed data have been calculated, training an ANN on the residuals. This hybrid model exhibits interesting features and results, with accuracy varying significantly with the number of steps forward in the prediction. It will be shown that the best results, in terms of prediction, are achieved when predicting one step ahead into the future. A 7-day prediction can still be performed with a slightly greater error, offering a longer prediction horizon than the single-day-ahead model.
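The two-stage idea can be sketched as below; an AR(1) model on the residuals stands in for the paper's ANN, and the synthetic daily noise-level series is invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic daily noise levels: trend + weekly seasonality + AR(1) residual.
n = 200
t = np.arange(n, dtype=float)
resid = np.zeros(n)
for i in range(1, n):
    resid[i] = 0.6 * resid[i - 1] + rng.normal(0, 0.5)
y = 65.0 + 0.01 * t + 3.0 * np.sin(2 * np.pi * t / 7) + resid   # dB(A), hypothetical

# Stage 1 (TSA): least-squares fit of trend plus weekly harmonics.
X = np.column_stack([np.ones(n), t,
                     np.sin(2 * np.pi * t / 7), np.cos(2 * np.pi * t / 7)])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
r = y - X @ beta                     # residuals left for the second stage

# Stage 2: model the residuals (AR(1) here, an ANN in the paper).
phi = np.sum(r[1:] * r[:-1]) / np.sum(r[:-1] ** 2)

# One-step-ahead hybrid forecast: trend+season plus residual carry-over.
X_next = np.array([1.0, float(n), np.sin(2 * np.pi * n / 7), np.cos(2 * np.pi * n / 7)])
forecast = float(X_next @ beta + phi * r[-1])
print(round(forecast, 2), round(float(phi), 2))
```

The residual stage only pays off one or a few steps ahead, which matches the paper's finding that accuracy degrades as the prediction horizon stretches to seven days.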
Jin, H; Wu, S; Vidyanti, I; Di Capua, P; Wu, B
2015-01-01
This article is part of the Focus Theme of Methods of Information in Medicine on "Big Data and Analytics in Healthcare". Depression is a common and often undiagnosed condition for patients with diabetes. It is also a condition that significantly impacts healthcare outcomes, use, and cost, as well as elevating suicide risk. Therefore, a model to predict depression among diabetes patients is a promising and valuable tool for providers to proactively assess depressive symptoms and identify those with depression. This study seeks to develop a generalized multilevel regression model, using a longitudinal data set from a recent large-scale clinical trial, to predict depression severity and the presence of major depression among patients with diabetes. Severity of depression was measured by the Patient Health Questionnaire PHQ-9 score. Predictors were selected from 29 candidate factors to develop a 2-level Poisson regression model that can make population-average predictions for all patients and subject-specific predictions for individual patients with historical records. Newly obtained patient records can be incorporated with historical records to update the prediction model. Root-mean-square errors (RMSE) were used to evaluate the predictive accuracy of PHQ-9 scores. The study also evaluated the ability to classify patients as having major depression using the predicted PHQ-9 scores. Two time-invariant and 10 time-varying predictors were selected for the model. Incorporating historical records and using them to update the model may improve both the predictive accuracy of PHQ-9 scores and the classification ability of the predicted scores. Subject-specific predictions (for individual patients with historical records) achieved an RMSE of about 4 and areas under the receiver operating characteristic (ROC) curve of about 0.9, and were better than population-average predictions.
The study developed a generalized multilevel regression model to predict depression and demonstrated that using generalized multilevel regression based on longitudinal patient records can achieve high predictive ability.
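The count-model core of the approach can be sketched with a single-level Poisson regression fitted by iteratively reweighted least squares; the paper's model is a 2-level mixed model, and the data below are synthetic stand-ins for PHQ-9-like counts:

```python
import numpy as np

rng = np.random.default_rng(11)

def poisson_irls(X, y, n_iter=25):
    """Fit a Poisson regression (log link) by iteratively reweighted LS."""
    beta = np.zeros(X.shape[1])
    beta[0] = np.log(y.mean())                  # start at the intercept-only fit
    for _ in range(n_iter):
        mu = np.exp(X @ beta)
        z = X @ beta + (y - mu) / mu            # working response
        W = mu                                   # IRLS weights: Poisson var = mean
        beta = np.linalg.solve(X.T @ (W[:, None] * X), X.T @ (W * z))
    return beta

# Synthetic counts from two hypothetical predictors (one continuous, one binary).
n = 1000
X = np.column_stack([np.ones(n), rng.normal(size=n), rng.integers(0, 2, n)])
true_beta = np.array([1.5, 0.3, 0.4])
y = rng.poisson(np.exp(X @ true_beta)).astype(float)

beta_hat = poisson_irls(X, y)
print(beta_hat.round(2))
```

A multilevel version adds a patient-specific random intercept on the log scale, which is what lets historical records sharpen the subject-specific predictions described above.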
Koral, Kenneth F.; Avram, Anca M.; Kaminski, Mark S.; Dewaraja, Yuni K.
2012-01-01
Background: For individualized treatment planning in radioimmunotherapy (RIT), correlations must be established between tracer-predicted and therapy-delivered absorbed doses. The focus of this work was to investigate this correlation for tumors. Methods: The study analyzed 57 tumors in 19 follicular lymphoma patients treated with I-131 tositumomab and imaged with SPECT/CT multiple times after tracer and therapy administrations. Instead of the typical least-squares fit to a single tumor's measured time-activity data, estimation was accomplished via a biexponential mixed model in which the curves from multiple subjects were jointly estimated. The tumor-absorbed dose estimates were determined by patient-specific Monte Carlo calculation. Results: The mixed model gave realistic tumor time-activity fits that showed the expected uptake and clearance phases even with noisy data or missing time points. Correlation between tracer and therapy tumor-residence times (r=0.98; p<0.0001) and correlation between tracer-predicted and therapy-delivered mean tumor-absorbed doses (r=0.86; p<0.0001) were very high. The predicted and delivered absorbed doses were within ±25% (or within ±75 cGy) for 80% of tumors. Conclusions: The mixed-model approach is feasible for fitting tumor time-activity data in RIT treatment planning when individual least-squares fitting is not possible due to inadequate sampling points. The good correlation between predicted and delivered tumor doses demonstrates the potential of using a pretherapy tracer study for tumor dosimetry-based treatment planning in RIT. PMID:22947086
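The biexponential time-activity curve fitted above, and the residence time obtained by integrating it, can be sketched directly; the parameter values below are illustrative, not fitted patient data:

```python
import numpy as np

def biexp(t, A0, lam_clear, lam_uptake):
    """Tumor time-activity: an uptake phase followed by a clearance phase."""
    return A0 * (np.exp(-lam_clear * t) - np.exp(-lam_uptake * t))

# Hypothetical fit parameters (fraction of injected activity, rates per hour).
A0, lc, lu = 0.05, 0.01, 0.2
t = np.linspace(0.0, 1000.0, 200001)             # hours
a = biexp(t, A0, lc, lu)

# Residence time = integral of the curve (trapezoid rule vs. closed form).
dt = t[1] - t[0]
numeric = dt * (a[0] / 2 + a[1:-1].sum() + a[-1] / 2)
analytic = A0 * (1.0 / lc - 1.0 / lu)
print(round(float(numeric), 3), round(analytic, 3))   # → 4.75 4.75
```

In the mixed-model setting the per-tumor parameters (A0 and the two rates) are shrunk toward population values, which is what stabilizes the integral when a tumor has only a few sampling points.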
Briët, Olivier J T; Amerasinghe, Priyanie H; Vounatsou, Penelope
2013-01-01
With the renewed drive towards malaria elimination, there is a need for improved surveillance tools. While time series analysis is an important tool for surveillance, prediction and for measuring interventions' impact, approximations by commonly used Gaussian methods are prone to inaccuracies when case counts are low. Therefore, statistical methods appropriate for count data are required, especially during "consolidation" and "pre-elimination" phases. Generalized autoregressive moving average (GARMA) models were extended to generalized seasonal autoregressive integrated moving average (GSARIMA) models for parsimonious observation-driven modelling of non Gaussian, non stationary and/or seasonal time series of count data. The models were applied to monthly malaria case time series in a district in Sri Lanka, where malaria has decreased dramatically in recent years. The malaria series showed long-term changes in the mean, unstable variance and seasonality. After fitting negative-binomial Bayesian models, both a GSARIMA and a GARIMA deterministic seasonality model were selected based on different criteria. Posterior predictive distributions indicated that negative-binomial models provided better predictions than Gaussian models, especially when counts were low. The G(S)ARIMA models were able to capture the autocorrelation in the series. G(S)ARIMA models may be particularly useful in the drive towards malaria elimination, since episode count series are often seasonal and non-stationary, especially when control is increased. Although building and fitting GSARIMA models is laborious, they may provide more realistic prediction distributions than do Gaussian methods and may be more suitable when counts are low.
Yang, Su; Shi, Shixiong; Hu, Xiaobing; Wang, Minjie
2015-01-01
Spatial-temporal correlations among the data play an important role in traffic flow prediction. Correspondingly, traffic modeling and prediction based on big data analytics emerges due to the city-scale interactions among traffic flows. A new methodology based on sparse representation is proposed to reveal the spatial-temporal dependencies among traffic flows so as to simplify the correlations among traffic data for the prediction task at a given sensor. Three important findings are observed in the experiments: (1) Only traffic flows immediately prior to the present time affect the formation of current traffic flows, which implies the possibility of reducing the traditional high-order predictors to a first-order model. (2) The spatial context relevant to a given prediction task is more complex than what is assumed to exist locally and can spread out to the whole city. (3) The spatial context varies with the target sensor undergoing prediction and enlarges with the increment of the time lag for prediction. Because the scope of human mobility is subject to travel time, identifying the varying spatial context against time lag is crucial for prediction. Since sparse representation can capture the varying spatial context and adapt to the prediction task, it outperforms traditional methods whose inputs are confined to data from a fixed number of nearby sensors. As the spatial-temporal context for any prediction task is fully detected from the traffic data in an automated manner, with no additional information regarding network topology needed, the method has good scalability and is applicable to large-scale networks.
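Representing one sensor's reading sparsely over a dictionary of other sensors' patterns can be sketched with a plain ISTA solver for the l1-regularized least-squares problem; the dictionary and sparse code below are synthetic, not traffic data:

```python
import numpy as np

rng = np.random.default_rng(5)

def ista(D, y, lam=0.1, n_iter=500):
    """Solve min_x 0.5*||y - D x||^2 + lam*||x||_1 by iterative shrinkage."""
    L = np.linalg.norm(D, 2) ** 2             # Lipschitz constant of the gradient
    x = np.zeros(D.shape[1])
    for _ in range(n_iter):
        g = D.T @ (D @ x - y)                 # gradient of the quadratic part
        x = x - g / L
        x = np.sign(x) * np.maximum(np.abs(x) - lam / L, 0.0)  # soft threshold
    return x

# Dictionary columns play the role of flow patterns from many other sensors;
# the target sensor's reading should be explained by only a few of them.
D = rng.normal(size=(50, 120))
D /= np.linalg.norm(D, axis=0)
true_x = np.zeros(120)
true_x[[3, 40, 77]] = [1.5, -2.0, 1.0]
y = D @ true_x + rng.normal(0, 0.01, 50)

x_hat = ista(D, y)
support = np.flatnonzero(np.abs(x_hat) > 0.2)
print(support)
```

The nonzero entries of the recovered code identify which "sensors" actually matter for the target, which is the automated spatial-context detection the abstract describes.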
Yang, Su; Shi, Shixiong; Hu, Xiaobing; Wang, Minjie
2015-01-01
Spatial-temporal correlations among the data play an important role in traffic flow prediction. Correspondingly, traffic modeling and prediction based on big data analytics emerges due to the city-scale interactions among traffic flows. A new methodology based on sparse representation is proposed to reveal the spatial-temporal dependencies among traffic flows so as to simplify the correlations among traffic data for the prediction task at a given sensor. Three important findings are observed in the experiments: (1) Only traffic flows immediately prior to the present time affect the formation of current traffic flows, which implies the possibility of reducing the traditional high-order predictors to a first-order model. (2) The spatial context relevant to a given prediction task is more complex than what is assumed to exist locally and can spread out to the whole city. (3) The spatial context varies with the target sensor undergoing prediction and enlarges with the increment of time lag for prediction. Because the scope of human mobility is subject to travel time, identifying the varying spatial context against time lag is crucial for prediction. Since sparse representation can capture the varying spatial context to adapt to the prediction task, it outperforms traditional methods, whose inputs are confined to data from a fixed number of nearby sensors. As the spatial-temporal context for any prediction task is fully detected from the traffic data in an automated manner, where no additional information regarding network topology is needed, it has good scalability to be applicable to large-scale networks. PMID:26496370
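A minimal sketch of the sparse-representation idea: an L1-penalized (lasso) regression over lag-1 flows from all sensors lets the data select the relevant spatial context for one target sensor. The sensors, coefficients and synthetic data are invented for illustration, and the solver is plain iterative soft-thresholding (ISTA), not the authors' method.

```python
import numpy as np

def lasso_ista(X, y, lam, n_iter=500):
    """Lasso via iterative soft-thresholding: the sparse weight vector picks
    out which sensors' lag-1 flows actually help predict the target sensor."""
    L = np.linalg.norm(X, 2) ** 2          # Lipschitz constant of the gradient
    w = np.zeros(X.shape[1])
    for _ in range(n_iter):
        g = X.T @ (X @ w - y)              # gradient of 0.5 * ||Xw - y||^2
        z = w - g / L
        w = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)
    return w

rng = np.random.default_rng(0)
flows = rng.normal(size=(200, 10))         # lag-1 flows at 10 hypothetical sensors
target = 0.8 * flows[:, 2] - 0.5 * flows[:, 7] + 0.05 * rng.normal(size=200)
w = lasso_ista(flows, target, lam=5.0)
support = np.nonzero(np.abs(w) > 0.05)[0]  # sensors selected as spatial context
```

Only the two truly informative sensors survive the penalty, which is the mechanism by which the spatial context is "fully detected from the traffic data in an automated manner".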
Shen, Xiao-jun; Sun, Jing-sheng; Li, Ming-si; Zhang, Ji-yang; Wang, Jing-lei; Li, Dong-wei
2015-02-01
It is important to improve the real-time irrigation forecasting precision by predicting real-time water consumption of cotton mulched with plastic film under drip irrigation based on meteorological data and cotton growth status. The model parameters for calculating ET0 based on the Hargreaves formula were determined using historical meteorological data from 1953 to 2008 in the Shihezi reclamation area. According to the field experimental data of the growing seasons in 2009-2010, a model for computing the crop coefficient Kc was established based on accumulated temperature. On the basis of reference evapotranspiration (ET0) and Kc, a real-time irrigation forecast model was finally constructed, and it was verified by the field experimental data in 2011. The results showed that the forecast model had high forecasting precision, and the average absolute values of relative error between the predicted value and measured value were about 3.7%, 2.4% and 1.6% during the seedling, squaring and blossom-boll forming stages, respectively. The forecast model could be used to modify the predicted values in time according to the real-time meteorological data and to guide the water management in local film-mulched cotton fields under drip irrigation.
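The forecast chain described above (Hargreaves ET0 scaled by a crop coefficient Kc) can be sketched as follows. The numeric inputs and the Kc value are illustrative only; the paper's calibrated parameters and its accumulated-temperature Kc model are not reproduced here.

```python
import math

def hargreaves_et0(t_mean, t_max, t_min, ra):
    """Reference evapotranspiration (mm/day) from the standard Hargreaves
    formula; ra is extraterrestrial radiation expressed in mm/day of
    equivalent evaporation."""
    return 0.0023 * ra * (t_mean + 17.8) * math.sqrt(t_max - t_min)

def crop_water_requirement(et0, kc):
    """ETc = Kc * ET0: scale reference ET by the crop coefficient."""
    return kc * et0

# Illustrative mid-season day (temperatures in degrees C)
et0 = hargreaves_et0(t_mean=25.0, t_max=32.0, t_min=18.0, ra=16.0)
etc = crop_water_requirement(et0, kc=1.05)
```

The appeal of Hargreaves here is that it needs only temperature extremes, so daily ETc can be updated from routine real-time weather reports.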
A univariate model of river water nitrate time series
NASA Astrophysics Data System (ADS)
Worrall, F.; Burt, T. P.
1999-01-01
Four time series were taken from three catchments in the North and South of England. The sites chosen included two in predominantly agricultural catchments, one at the tidal limit and one downstream of a sewage treatment works. A time series model was constructed for each of these series as a means of decomposing the elements controlling river water nitrate concentrations and to assess whether this approach could provide a simple management tool for protecting water abstractions. Autoregressive (AR) modelling of the detrended and deseasoned time series showed a "memory effect". This memory effect expressed itself as an increase in the winter-summer difference in nitrate levels that was dependent upon the nitrate concentration 12 or 6 months previously. Autoregressive moving average (ARMA) modelling showed that one of the series contained seasonal, non-stationary elements that appeared as an increasing trend in the winter-summer difference. The ARMA model was used to predict nitrate levels and predictions were tested against data held back from the model construction process - predictions gave average percentage errors of less than 10%. Empirical modelling can therefore provide a simple, efficient method for constructing management models for downstream water abstraction.
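A minimal sketch of the AR structure described above: regress the detrended, deseasoned series on its lag-1 value and a lag-12 "memory" term, fitted by least squares. The series here is synthetic with known coefficients (the nitrate data themselves are not reproduced), so recovery of the lag-12 coefficient stands in for the memory effect.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 240                                    # 20 years of monthly values
y = np.zeros(n)
for t in range(12, n):                     # AR process with an annual memory term
    y[t] = 0.5 * y[t - 1] + 0.3 * y[t - 12] + rng.normal(scale=0.5)

# Design matrix for y_t ~ y_{t-1}, y_{t-12}, intercept
X = np.column_stack([y[11:-1], y[:-12], np.ones(n - 12)])
coef, *_ = np.linalg.lstsq(X, y[12:], rcond=None)
# coef[0] estimates the lag-1 persistence, coef[1] the 12-month memory effect
```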
A review of statistical updating methods for clinical prediction models.
Su, Ting-Li; Jaki, Thomas; Hickey, Graeme L; Buchan, Iain; Sperrin, Matthew
2018-01-01
A clinical prediction model is a tool for predicting healthcare outcomes, usually within a specific population and context. A common approach is to develop a new clinical prediction model for each population and context; however, this wastes potentially useful historical information. A better approach is to update or incorporate the existing clinical prediction models already developed for use in similar contexts or populations. In addition, clinical prediction models commonly become miscalibrated over time, and need replacing or updating. In this article, we review a range of approaches for re-using and updating clinical prediction models; these fall into three main categories: simple coefficient updating, combining multiple previous clinical prediction models in a meta-model and dynamic updating of models. We evaluated the performance (discrimination and calibration) of the different strategies using data on mortality following cardiac surgery in the United Kingdom: we found that no single strategy performed sufficiently well to be used to the exclusion of the others. In conclusion, useful tools exist for updating existing clinical prediction models to a new population or context, and these should be implemented rather than developing a new clinical prediction model from scratch, using a breadth of complementary statistical methods.
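The simplest of the coefficient-updating strategies mentioned above, recalibration-in-the-large, can be sketched as follows: keep the old model's linear predictor and re-estimate only the intercept so the mean predicted risk matches the event rate in the new population. The data are invented; a full updating toolkit would offer this alongside slope recalibration and complete refitting.

```python
import math

def recalibrate_intercept(linear_predictors, outcomes, n_iter=50):
    """Recalibration-in-the-large: Newton iterations for the intercept shift
    delta such that mean(sigmoid(lp + delta)) equals the observed event rate."""
    delta = 0.0
    for _ in range(n_iter):
        p = [1.0 / (1.0 + math.exp(-(lp + delta))) for lp in linear_predictors]
        grad = sum(p) - sum(outcomes)                 # score for the intercept
        hess = sum(pi * (1.0 - pi) for pi in p)       # observed information
        delta -= grad / hess
    return delta

lps = [-2.0, -1.0, 0.0, 1.0, -0.5, -1.5, 0.5, -2.5]  # old model's linear predictors
ys = [0, 0, 1, 1, 0, 0, 1, 0]                        # outcomes in the new population
delta = recalibrate_intercept(lps, ys)
```

This one-parameter update fixes calibration-in-the-large but leaves discrimination unchanged, which is why the review compares it against richer strategies.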
DOE Office of Scientific and Technical Information (OSTI.GOV)
Saha, Kaushik; Som, Sibendu; Battistoni, Michele
Flash boiling is known to be a common phenomenon for gasoline direct injection (GDI) engine sprays. The Homogeneous Relaxation Model has been adopted in many recent numerical studies for predicting cavitation and flash boiling, and it is assessed in this study. Sensitivity analysis of the model parameters has been documented to infer the driving factors for the flash-boiling predictions. The model parameters have been varied over a range and the differences in predictions of the extent of flashing have been studied. Apart from flashing in the near nozzle regions, mild cavitation is also predicted inside the gasoline injectors. The variation in the predicted time scales through the model parameters for predicting these two different thermodynamic phenomena (cavitation, flash) has been elaborated in this study. Turbulence model effects have also been investigated by comparing predictions from the standard and Re-Normalization Group (RNG) k-ε turbulence models.
NASA Astrophysics Data System (ADS)
Liemohn, M. W.; Welling, D. T.; De Zeeuw, D.; Kuznetsova, M. M.; Rastaetter, L.; Ganushkina, N. Y.; Ilie, R.; Toth, G.; Gombosi, T. I.; van der Holst, B.
2016-12-01
The ground-based magnetometer index Dst is a decent measure of the near-Earth current systems, in particular those in the storm-time inner magnetosphere. The ability of a large-scale, physics-based model to reproduce, or even predict, this index is therefore a tangible measure of the overall validity of the code for space weather research and space weather operational usage. Experimental real-time simulations of the Space Weather Modeling Framework (SWMF) are conducted at the Community Coordinated Modeling Center (CCMC), with results available there (http://ccmc.gsfc.nasa.gov/realtime.php), through the CCMC Integrated Space Weather Analysis (iSWA) site (http://iswa.ccmc.gsfc.nasa.gov/IswaSystemWebApp/), and the Michigan SWMF site (http://csem.engin.umich.edu/realtime). Presently, two configurations of the SWMF are running in real time at CCMC, both focusing on the geospace modules, using the BATS-R-US magnetohydrodynamic model, the Ridley Ionosphere Model, and with and without the Rice Convection Model for inner magnetospheric drift physics. While both have been running for several years, nearly continuous results are available since July 2015. Dst from the model output is compared against the Kyoto real-time Dst. Various quantitative measures are presented to assess the goodness of fit between the models and observations. In particular, correlation coefficients, RMSE and prediction efficiency are calculated and discussed. In addition, contingency tables are presented, demonstrating the ability of the model to predict "disturbed times" as defined by Dst values below some critical threshold. It is shown that the SWMF run with the inner magnetosphere model is significantly better at reproducing storm-time values, with prediction efficiencies above 0.25 and Heidke skill scores above 0.5. This work was funded by NASA and NSF grants, and the European Union's Horizon 2020 research and innovation programme under grant agreement 637302 PROGRESS.
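The verification measures named above are standard and compact in code. This sketch assumes the usual conventions: prediction efficiency is 1 - MSE/Var(obs), and the 2x2 contingency table counts "disturbed" intervals (e.g. Dst below a threshold) as the event.

```python
def rmse(obs, pred):
    """Root-mean-square error between observed and modeled series."""
    return (sum((o - p) ** 2 for o, p in zip(obs, pred)) / len(obs)) ** 0.5

def prediction_efficiency(obs, pred):
    """PE = 1 - MSE/Var(obs): 1 is perfect, 0 is no better than the mean."""
    mean = sum(obs) / len(obs)
    mse = sum((o - p) ** 2 for o, p in zip(obs, pred)) / len(obs)
    var = sum((o - mean) ** 2 for o in obs) / len(obs)
    return 1.0 - mse / var

def heidke_skill_score(hits, misses, false_alarms, correct_negs):
    """HSS from a 2x2 contingency table: 1 is perfect, 0 is no skill
    beyond random chance, negative is worse than chance."""
    n = hits + misses + false_alarms + correct_negs
    expected = ((hits + misses) * (hits + false_alarms)
                + (correct_negs + misses) * (correct_negs + false_alarms)) / n
    return (hits + correct_negs - expected) / (n - expected)
```

For example, a model that flags every disturbed interval and never false-alarms scores HSS = 1, while the quoted thresholds (PE above 0.25, HSS above 0.5) mark useful but imperfect skill.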
Stochastic Residual-Error Analysis For Estimating Hydrologic Model Predictive Uncertainty
A hybrid time series-nonparametric sampling approach, referred to herein as semiparametric, is presented for the estimation of model predictive uncertainty. The methodology is a two-step procedure whereby a distributed hydrologic model is first calibrated, then followed by brute ...
Predictive Microbiology and Food Safety Applications
USDA-ARS?s Scientific Manuscript database
Mathematical modeling is the science of systematic study of recurrent events or phenomena. When models are properly developed, their applications may save costs and time. For microbial food safety research and applications, predictive microbiology models may be developed based on the fact that most ...
Pereira, Telma; Lemos, Luís; Cardoso, Sandra; Silva, Dina; Rodrigues, Ana; Santana, Isabel; de Mendonça, Alexandre; Guerreiro, Manuela; Madeira, Sara C
2017-07-19
Predicting progression from a stage of Mild Cognitive Impairment to dementia is a major pursuit in current research. It is broadly accepted that cognition declines with a continuum between MCI and dementia. As such, cohorts of MCI patients are usually heterogeneous, containing patients at different stages of the neurodegenerative process. This hampers the prognostic task. Nevertheless, when learning prognostic models, most studies use the entire cohort of MCI patients regardless of their disease stages. In this paper, we propose a Time Windows approach to predict conversion to dementia, learning with patients stratified using time windows, thus fine-tuning the prognosis regarding the time to conversion. In the proposed Time Windows approach, we grouped patients based on the clinical information of whether they converted (converter MCI) or remained MCI (stable MCI) within a specific time window. We tested time windows of 2, 3, 4 and 5 years. We developed a prognostic model for each time window using clinical and neuropsychological data and compared this approach with the one commonly used in the literature, in which all patients are used to learn the models, termed the First Last approach. This makes it possible to move from the traditional question "Will an MCI patient convert to dementia somewhere in the future?" to the question "Will an MCI patient convert to dementia in a specific time window?". The proposed Time Windows approach outperformed the First Last approach. The results showed that we can predict conversion to dementia as early as 5 years before the event with an AUC of 0.88 in the cross-validation set and 0.76 in an independent validation set. Prognostic models using time windows have higher performance when predicting progression from MCI to dementia, when compared to the prognostic approach commonly used in the literature.
Furthermore, the proposed Time Windows approach is more relevant from a clinical point of view, predicting conversion within a temporal interval rather than sometime in the future and allowing clinicians to timely adjust treatments and clinical appointments.
Nonlinear Prediction Model for Hydrologic Time Series Based on Wavelet Decomposition
NASA Astrophysics Data System (ADS)
Kwon, H.; Khalil, A.; Brown, C.; Lall, U.; Ahn, H.; Moon, Y.
2005-12-01
Traditionally, forecasting and characterization of hydrologic systems are performed using many techniques. Stochastic linear methods such as AR and ARIMA and nonlinear ones such as tools based on statistical learning theory have been extensively used. The common difficulty to all methods is the determination of sufficient and necessary information and predictors for a successful prediction. Relationships between hydrologic variables are often highly nonlinear and interrelated across the temporal scale. A new hybrid approach is proposed for the simulation of hydrologic time series combining both the wavelet transform and a nonlinear model. The present model combines the merits of the wavelet transform and nonlinear time series models. The wavelet transform is adopted to decompose a hydrologic nonlinear process into a set of mono-component signals, which are simulated by the nonlinear model. The hybrid methodology is formulated in a manner to improve the accuracy of long-term forecasting. The proposed hybrid model yields much better results in terms of capturing and reproducing the time-frequency properties of the system at hand. Prediction results are promising when compared to traditional univariate time series models. An application demonstrating the plausibility of the proposed methodology is provided, and the results show that the wavelet-based time series model can simulate and forecast hydrologic variables reasonably well. This will ultimately serve the purpose of integrated water resources planning and management.
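The decomposition step of the hybrid scheme can be illustrated with the simplest wavelet, a one-level Haar transform: the series is split into a smooth approximation and a detail signal, each of which would then be modelled separately and recombined. The Haar choice and the toy series are illustrative; the abstract does not specify which wavelet was used.

```python
import numpy as np

def haar_decompose(x):
    """One level of the Haar DWT: pairwise averages give a smooth
    approximation, pairwise differences give the detail component."""
    x = np.asarray(x, dtype=float)
    approx = (x[0::2] + x[1::2]) / np.sqrt(2.0)
    detail = (x[0::2] - x[1::2]) / np.sqrt(2.0)
    return approx, detail

def haar_reconstruct(approx, detail):
    """Invert the one-level Haar transform exactly."""
    x = np.empty(2 * len(approx))
    x[0::2] = (approx + detail) / np.sqrt(2.0)
    x[1::2] = (approx - detail) / np.sqrt(2.0)
    return x

flow = np.array([3.0, 4.0, 6.0, 5.0, 9.0, 8.0, 4.0, 2.0])  # toy monthly flows
a, d = haar_decompose(flow)
rec = haar_reconstruct(a, d)                                 # exact reconstruction
```

In the hybrid scheme, a nonlinear model would be fit to each mono-component signal (`a` and `d` here, or deeper levels of the transform) and the component forecasts summed back through the inverse transform.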
Forecasting the Northern African Dust Outbreak Towards Europe in April 2011: A Model Intercomparison
NASA Technical Reports Server (NTRS)
Huneeus, N.; Basart, S.; Fiedler, S.; Morcrette, J.-J.; Benedetti, A.; Mulcahy, J.; Terradellas, E.; Pérez García-Pando, C.; Pejanovic, G.; Nickovic, S.
2016-01-01
In the framework of the World Meteorological Organisation's Sand and Dust Storm Warning Advisory and Assessment System, we evaluated the predictions of five state-of-the-art dust forecast models during an intense Saharan dust outbreak affecting western and northern Europe in April 2011. We assessed the capacity of the models to predict the evolution of the dust cloud with lead times of up to 72 hours using observations of aerosol optical depth (AOD) from the AErosol RObotic NETwork (AERONET) and the Moderate Resolution Imaging Spectroradiometer (MODIS) and dust surface concentrations from a ground-based measurement network. In addition, the predicted vertical dust distribution was evaluated with vertical extinction profiles from the Cloud and Aerosol Lidar with Orthogonal Polarization (CALIOP). To assess the diversity in forecast capability among the models, the analysis was extended to wind field (both surface and profile), synoptic conditions, emissions and deposition fluxes. Models predict the onset and evolution of the AOD for all analysed lead times. On average, differences among the models are larger than differences among lead times for each individual model. In spite of large differences in emission and deposition, the models present comparable skill for AOD. In general, models are better in predicting AOD than near-surface dust concentration over the Iberian Peninsula. Models tend to underestimate the long-range transport towards northern Europe. Our analysis suggests that this is partly due to difficulties in simulating the vertical distribution of dust and horizontal wind. Differences in the size distribution and wet scavenging efficiency may also account for model diversity in long-range transport.
An Efficient Pattern Mining Approach for Event Detection in Multivariate Temporal Data
Batal, Iyad; Cooper, Gregory; Fradkin, Dmitriy; Harrison, James; Moerchen, Fabian; Hauskrecht, Milos
2015-01-01
This work proposes a pattern mining approach to learn event detection models from complex multivariate temporal data, such as electronic health records. We present Recent Temporal Pattern mining, a novel approach for efficiently finding predictive patterns for event detection problems. This approach first converts the time series data into time-interval sequences of temporal abstractions. It then constructs more complex time-interval patterns backward in time using temporal operators. We also present the Minimal Predictive Recent Temporal Patterns framework for selecting a small set of predictive and non-spurious patterns. We apply our methods for predicting adverse medical events in real-world clinical data. The results demonstrate the benefits of our methods in learning accurate event detection models, which is a key step for developing intelligent patient monitoring and decision support systems. PMID:26752800
NASA AVOSS Fast-Time Wake Prediction Models: User's Guide
NASA Technical Reports Server (NTRS)
Ahmad, Nash'at N.; VanValkenburg, Randal L.; Pruis, Matthew
2014-01-01
The National Aeronautics and Space Administration (NASA) is developing and testing fast-time wake transport and decay models to safely enhance the capacity of the National Airspace System (NAS). The fast-time wake models are empirical algorithms used for real-time predictions of wake transport and decay based on aircraft parameters and ambient weather conditions. The aircraft dependent parameters include the initial vortex descent velocity and the vortex pair separation distance. The atmospheric initial conditions include vertical profiles of temperature or potential temperature, eddy dissipation rate, and crosswind. The current distribution includes the latest versions of the APA (3.4) and the TDP (2.1) models. This User's Guide provides detailed information on the model inputs, file formats, and the model output. An example of a model run and a brief description of the Memphis 1995 Wake Vortex Dataset is also provided.
Amyloid imaging and CSF biomarkers in predicting cognitive impairment up to 7.5 years later
Fagan, Anne M.; Grant, Elizabeth A.; Hassenstab, Jason; Moulder, Krista L.; Maue Dreyfus, Denise; Sutphen, Courtney L.; Benzinger, Tammie L.S.; Mintun, Mark A.; Holtzman, David M.; Morris, John C.
2013-01-01
Objectives: We compared the ability of molecular biomarkers for Alzheimer disease (AD), including amyloid imaging and CSF biomarkers (Aβ42, tau, ptau181, tau/Aβ42, ptau181/Aβ42), to predict time to incident cognitive impairment among cognitively normal adults aged 45 to 88 years and followed for up to 7.5 years. Methods: Longitudinal data from Knight Alzheimer's Disease Research Center participants (N = 201) followed for a mean of 3.70 years (SD = 1.46 years) were used. Participants with amyloid imaging and CSF collection within 1 year of a clinical assessment indicating normal cognition were eligible. Cox proportional hazards models tested whether the individual biomarkers were related to time to incident cognitive impairment. “Expanded” models were developed using the biomarkers and participant demographic variables. The predictive values of the models were compared. Results: Abnormal levels of all biomarkers were associated with faster time to cognitive impairment, and some participants with abnormal biomarker levels remained cognitively normal for up to 6.6 years. No differences in predictive value were found between the individual biomarkers (p > 0.074), nor did we find differences between the expanded biomarker models (p > 0.312). Each expanded model better predicted incident cognitive impairment than the model containing the biomarker alone (p < 0.005). Conclusions: Our results indicate that all AD biomarkers studied here predicted incident cognitive impairment, and support the hypothesis that biomarkers signal underlying AD pathology at least several years before the appearance of dementia symptoms. PMID:23576620
Liao, C; Peng, Z Y; Li, J B; Cui, X W; Zhang, Z H; Malakar, P K; Zhang, W J; Pan, Y J; Zhao, Y
2015-03-01
The aim of this study was to simultaneously construct PCR-DGGE-based predictive models of Listeria monocytogenes and Vibrio parahaemolyticus on cooked shrimps at 4 and 10°C. Calibration curves were established to correlate the peak density of DGGE bands with microbial counts. Microbial counts derived from PCR-DGGE and plate methods were fitted by the Baranyi model to obtain molecular and traditional predictive models. For L. monocytogenes, growing at 4 and 10°C, molecular predictive models were constructed. They showed good correlation coefficients (R(2) > 0.92), bias factors (Bf) and accuracy factors (Af) (1.0 ≤ Bf ≤ Af ≤ 1.1). Moreover, no significant difference was found between the molecular and traditional predictive models when analysed on lag phase (λ), maximum growth rate (μmax) and growth data (P > 0.05). But for V. parahaemolyticus, inactivated at 4 and 10°C, the molecular models showed significant differences when compared with the traditional models. Taken together, these results suggest that PCR-DGGE based on DNA can be used to construct growth models, but it is not yet appropriate for inactivation models. This is the first report of developing PCR-DGGE to simultaneously construct multiple molecular models. It has been known for a long time that microbial predictive models based on traditional plate methods are time-consuming and labour-intensive. Denaturing gradient gel electrophoresis (DGGE) has been widely used as a semiquantitative method to describe complex microbial communities. In our study, we developed DGGE to quantify bacterial counts and simultaneously established two molecular predictive models to describe the growth and survival of two bacteria (Listeria monocytogenes and Vibrio parahaemolyticus) at 4 and 10°C. We demonstrated that PCR-DGGE could be used to construct growth models.
This work provides a new approach to construct molecular predictive models and thereby facilitates predictive microbiology and QMRA (Quantitative Microbial Risk Assessment). © 2014 The Society for Applied Microbiology.
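The growth-model fitting above relies on the Baranyi model, whose deterministic curve can be written down directly. This sketch evaluates the Baranyi-Roberts equation for an invented parameter set (y0 and ymax on a ln CFU scale, mu_max per hour, h0 setting the lag), not the fitted values from the study.

```python
import math

def baranyi(t, y0, ymax, mu_max, h0):
    """Baranyi-Roberts log-count curve: a lag phase governed by h0,
    exponential growth at mu_max, and saturation at ymax."""
    # Adjustment function A(t): ~0 during the lag, ~t - h0/mu_max afterwards
    a = t + (1.0 / mu_max) * math.log(
        math.exp(-mu_max * t) + math.exp(-h0) - math.exp(-mu_max * t - h0))
    return y0 + mu_max * a - math.log(
        1.0 + (math.exp(mu_max * a) - 1.0) / math.exp(ymax - y0))

# Evaluate over 48 hours at 4-hour steps (illustrative parameters)
curve = [baranyi(t, y0=3.0, ymax=9.0, mu_max=0.5, h0=2.0) for t in range(0, 49, 4)]
```

Fitting this curve to either plate counts or DGGE-derived counts yields the lag (λ) and maximum growth rate (μmax) that the study compares between the two methods.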
Alanazi, Hamdan O; Abdullah, Abdul Hanan; Qureshi, Kashif Naseer
2017-04-01
Recently, Artificial Intelligence (AI) has been used widely in medicine and the health care sector. In machine learning, classification or prediction is a major field of AI. Today, the study of existing predictive models based on machine learning methods is extremely active. Doctors need accurate predictions for the outcomes of their patients' diseases. In addition, for accurate predictions, timing is another significant factor that influences treatment decisions. In this paper, existing predictive models in medicine and health care have been critically reviewed. Furthermore, the most well-known machine learning methods have been explained, and the confusion between a statistical approach and machine learning has been clarified. A review of related literature reveals that the predictions of existing predictive models differ even when the same dataset is used. Therefore, existing predictive models are essential, and current methods must be improved.
Predictors of nursing home residents' time to hospitalization.
O'Malley, A James; Caudry, Daryl J; Grabowski, David C
2011-02-01
To model the predictors of the time to first acute hospitalization for nursing home residents and, accounting for previous hospitalizations, to model the predictors of time between subsequent hospitalizations. Merged file from New York State for the period 1998-2004 consisting of nursing home information from the minimum dataset and hospitalization information from the Statewide Planning and Research Cooperative System. Accelerated failure time models were used to estimate the model parameters and predict survival times. The models were fit to observations from 50 percent of the nursing homes and validated on the remaining observations. Pressure ulcers and facility-level deficiencies were associated with a decreased time to first hospitalization, while the presence of advance directives and facility staffing was associated with an increased time. The predictors in the time-to-first-hospitalization model had effects of similar magnitude in predicting the time between subsequent hospitalizations. This study provides novel evidence suggesting modifiable patient and nursing home characteristics are associated with the time to first hospitalization and time to subsequent hospitalizations for nursing home residents. © Health Research and Educational Trust.
Knowledge-guided fuzzy logic modeling to infer cellular signaling networks from proteomic data
Liu, Hui; Zhang, Fan; Mishra, Shital Kumar; Zhou, Shuigeng; Zheng, Jie
2016-01-01
Modeling of signaling pathways is crucial for understanding and predicting cellular responses to drug treatments. However, canonical signaling pathways curated from literature are seldom context-specific and thus can hardly predict cell type-specific response to external perturbations; purely data-driven methods also have drawbacks such as limited biological interpretability. Therefore, hybrid methods that can integrate prior knowledge and real data for network inference are highly desirable. In this paper, we propose a knowledge-guided fuzzy logic network model to infer signaling pathways by exploiting both prior knowledge and time-series data. In particular, the dynamic time warping algorithm is employed to measure the goodness of fit between experimental and predicted data, so that our method can model temporally-ordered experimental observations. We evaluated the proposed method on a synthetic dataset and two real phosphoproteomic datasets. The experimental results demonstrate that our model can uncover drug-induced alterations in signaling pathways in cancer cells. Compared with existing hybrid models, our method can model feedback loops so that the dynamical mechanisms of signaling networks can be uncovered from time-series data. By calibrating generic models of signaling pathways against real data, our method supports precise predictions of context-specific anticancer drug effects, which is an important step towards precision medicine. PMID:27774993
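The dynamic time warping step used above to score goodness of fit has a compact textbook form. This sketch is the standard dynamic-programming recursion, not the authors' implementation, and the toy series are invented.

```python
def dtw_distance(a, b):
    """Classic O(len(a)*len(b)) dynamic time warping distance: measures the
    fit between two series whose features may be locally shifted in time,
    which is why it suits temporally-ordered experimental observations."""
    inf = float("inf")
    n, m = len(a), len(b)
    cost = [[inf] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i][j] = d + min(cost[i - 1][j],      # insertion
                                 cost[i][j - 1],      # deletion
                                 cost[i - 1][j - 1])  # match
    return cost[n][m]

d_same = dtw_distance([0, 1, 2, 1, 0], [0, 1, 2, 1, 0])
d_shift = dtw_distance([0, 1, 2, 1, 0], [0, 0, 1, 2, 1, 0])
```

A peak that arrives one time step late (`d_shift`) still scores a distance of zero because the warping path absorbs the shift, whereas a pointwise (Euclidean) comparison would penalize it heavily.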
NASA Astrophysics Data System (ADS)
Park, Jong Ho; Ahn, Byung Tae
2003-01-01
A failure model for electromigration based on the "failure unit model" was presented for the prediction of lifetime in metal lines. The failure unit model, which consists of failure units in parallel and series, can predict both the median time to failure (MTTF) and the deviation in the time to failure (DTTF) in Al metal lines, but it can describe them only qualitatively. In our model, the probability functions of the failure unit in both single-grain segments and polygrain segments are considered, instead of in polygrain segments alone. Based on our model, we calculated MTTF, DTTF, and activation energy for different median grain sizes, grain size distributions, linewidths, line lengths, current densities, and temperatures. Comparisons between our results and published experimental data showed good agreement, and our model could explain previously unexplained phenomena. Our advanced failure unit model might be further applied to other electromigration characteristics of metal lines.
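The series part of the failure unit model can be illustrated by Monte Carlo: if each segment's time to failure is lognormal and the line fails at its weakest segment, the median TTF drops as the line lengthens. All parameters here are invented; this reproduces only the weakest-link logic, not the paper's segment probability functions.

```python
import math
import random

def simulate_line_ttf(n_segments, median, sigma, rng):
    """A line modelled as n_segments failure units in series: the line fails
    when its weakest segment fails, so the line TTF is the minimum of
    independent lognormal segment lifetimes."""
    return min(rng.lognormvariate(math.log(median), sigma)
               for _ in range(n_segments))

rng = random.Random(7)
short = sorted(simulate_line_ttf(5, 100.0, 0.4, rng) for _ in range(2000))
long_ = sorted(simulate_line_ttf(50, 100.0, 0.4, rng) for _ in range(2000))
mttf_short, mttf_long = short[1000], long_[1000]   # sample medians
```

The longer line's median TTF is well below both the short line's and the single-segment median of 100, which is the qualitative line-length dependence the failure unit model formalizes.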
Selby, Edward A; Kranzler, Amy; Panza, Emily; Fehling, Kara B
2016-04-01
Influenced by chaos theory, the emotional cascade model proposes that rumination and negative emotion may promote each other in a self-amplifying cycle that increases over time. Accordingly, exponential-compounding effects may better describe the relationship between rumination and negative emotion when they occur in impulsive persons, and predict impulsive behavior. Forty-seven community and undergraduate participants who reported frequent engagement in impulsive behaviors monitored their ruminative thoughts and negative emotion multiple times daily for two weeks using digital recording devices. Hypotheses were tested using cross-lagged mixed model analyses. Findings indicated that rumination predicted subsequent elevations in rumination that lasted over extended periods of time. Rumination and negative emotion predicted increased levels of each other at subsequent assessments, and exponential functions for these associations were supported. Results also supported a synergistic effect between rumination and negative emotion, predicting larger elevations in subsequent rumination and negative emotion than when one variable alone was elevated. Finally, there were synergistic effects of rumination and negative emotion in predicting number of impulsive behaviors subsequently reported. These findings are consistent with the emotional cascade model in suggesting that momentary rumination and negative emotion progressively propagate and magnify each other over time in impulsive people, promoting impulsive behavior. © 2014 Wiley Periodicals, Inc.
Du, Hongying; Wang, Jie; Yao, Xiaojun; Hu, Zhide
2009-01-01
The heuristic method (HM) and support vector machine (SVM) were used to construct quantitative structure-retention relationship models from a series of compounds to predict the gradient retention times of reversed-phase high-performance liquid chromatography (HPLC) in three different columns. The aims of this investigation were to predict the retention times of multifarious compounds, to find the main properties of the three columns, and to elucidate the theory of the separation procedures. In our method, we correlated the retention times of many diverse structural analytes in three columns (Symmetry C18, Chromolith, and SG-MIX) with their representative molecular descriptors, calculated from the molecular structures alone. HM was used to select the most important molecular descriptors and build linear regression models. Furthermore, non-linear regression models were built using the SVM method; the performance of the SVM models was better than that of the HM models, and the prediction results were in good agreement with the experimental values. This paper could give some insight into the factors that are likely to govern the gradient retention process of the three investigated HPLC columns and could theoretically guide practical experiments.
NASA Astrophysics Data System (ADS)
Omrani, H.; Drobinski, P.; Dubos, T.
2009-09-01
In this work, we consider the effect of indiscriminate nudging time on the large and small scales of an idealized limited area model simulation. The limited area model is a two-layer quasi-geostrophic model on the beta-plane driven at its boundaries by its "global" version with periodic boundary conditions. This setup mimics the configuration used for regional climate modelling. Compared to a previous study by Salameh et al. (2009), who investigated the existence of an optimal nudging time minimizing the error on both large and small scales in a linear model, we here use a fully non-linear model which allows us to represent the chaotic nature of the atmosphere: given the perfect quasi-geostrophic model, errors in the initial conditions, concentrated mainly in the smaller scales of motion, amplify and cascade into the larger scales, eventually resulting in a prediction with low skill. To quantify the predictability of our quasi-geostrophic model, we measure the rate of divergence of the system trajectories in phase space (Lyapunov exponent) from a set of simulations initiated with a perturbation of a reference initial state. Predictability of the "global", periodic model is mostly controlled by the beta effect. In the LAM, predictability decreases as the domain size increases. Then, the effect of large-scale nudging is studied by using the "perfect model" approach. Two sets of experiments were performed: (1) the effect of nudging is investigated with a "global" high-resolution two-layer quasi-geostrophic model driven by a low-resolution two-layer quasi-geostrophic model; (2) similar simulations are conducted with the two-layer quasi-geostrophic LAM, where the size of the LAM domain comes into play in addition to the factors in the first set of simulations. In the two sets of experiments, the best spatial correlation between the nudged simulation and the reference is observed with a nudging time close to the predictability time.
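The Lyapunov-exponent diagnostic mentioned above can be illustrated on a one-dimensional toy system instead of the quasi-geostrophic model: for the logistic map at r = 4 the largest exponent is known analytically (ln 2), so averaging log|f'(x)| along a trajectory should recover it. The map and its parameters are illustrative stand-ins for the trajectory-divergence measurement described in the abstract.

```python
import math

def lyapunov_logistic(r, x0=0.2, n_transient=200, n_steps=2000):
    """Largest Lyapunov exponent of the logistic map x -> r*x*(1-x):
    the average of log|df/dx| = log|r*(1-2x)| along one trajectory,
    after discarding an initial transient."""
    x = x0
    for _ in range(n_transient):
        x = r * x * (1.0 - x)
    acc = 0.0
    for _ in range(n_steps):
        acc += math.log(abs(r * (1.0 - 2.0 * x)))
        x = r * x * (1.0 - x)
    return acc / n_steps

lam = lyapunov_logistic(4.0)   # chaotic regime; theory gives ln 2
```

A positive exponent means nearby trajectories separate exponentially, i.e. a finite predictability time, which is exactly the quantity the nudging time is compared against in the abstract.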
Osecky, Eric M.; Bogin, Gregory E.; Villano, Stephanie M.; ...
2016-08-18
An ignition quality tester was used to characterize the autoignition delay times of iso-octane. The experimental data were characterized between temperatures of 653 and 996 K, pressures of 1.0 and 1.5 MPa, and global equivalence ratios of 0.7 and 1.05. A clear negative temperature coefficient behavior was seen at both pressures in the experimental data. These data were used to characterize the effectiveness of three modeling methods: a single-zone homogeneous batch reactor, a multizone engine model, and a three-dimensional computational fluid dynamics (CFD) model. A detailed 874 species iso-octane ignition mechanism (Mehl, M.; Curran, H. J.; Pitz, W. J.; Westbrook, C. K. Chemical kinetic modeling of component mixtures relevant to gasoline. Proceedings of the European Combustion Meeting; Vienna, Austria, April 14-17, 2009) was reduced to 89 species for use in these models, and the predictions of the reduced mechanism were consistent with ignition delay times predicted by the detailed chemical mechanism across a broad range of temperatures, pressures, and equivalence ratios. The CFD model was also run without chemistry to characterize the extent of mixing of fuel and air in the chamber. The calculations predicted that the main part of the combustion chamber was fairly well-mixed at longer times (> ~30 ms), suggesting that the simpler models might be applicable in this quasi-homogeneous region. The multizone predictions, where the combustion chamber was divided into 20 zones of temperature and equivalence ratio, were quite close to the coupled CFD-kinetics results, but the calculation time was ~11 times faster than the coupled CFD-kinetics model. Although the coupled CFD-kinetics model captured the observed negative temperature coefficient behavior and pressure dependence, discrepancies remain between the predictions and the observed ignition time delays, suggesting improvements are still needed in the kinetic mechanism and/or the CFD model.
These results suggest a combined modeling approach, wherein the CFD calculations (without chemistry) are used to examine the sensitivity of in-cylinder temperatures and equivalence ratios to various model inputs. These values can then be used as inputs to the multizone model to examine the impact on ignition delay. Additionally, the speed of the multizone model makes it feasible to quickly test more detailed kinetic mechanisms against experimental data and to perform sensitivity analysis.
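The multizone idea can be caricatured in a few lines: each zone carries its own temperature and pressure, and the earliest-igniting zone controls the observed delay. This is a deliberately simplified sketch; the constants `A`, `n`, and `Ta` below are illustrative, not the fitted iso-octane mechanism, and a one-step Arrhenius form cannot reproduce the negative temperature coefficient behavior discussed above.

```python
import math

# Hypothetical one-step correlation: tau = A * p^n * exp(Ta / T)
A, n, Ta = 1.0e-7, -1.0, 15000.0   # illustrative constants, not fitted values

def ignition_delay(T, p):
    """Toy Arrhenius ignition-delay correlation (seconds) for one zone
    at temperature T (K) and pressure p (Pa)."""
    return A * (p ** n) * math.exp(Ta / T)

def multizone_delay(zones):
    """The earliest-igniting zone sets the observed chamber delay."""
    return min(ignition_delay(T, p) for T, p in zones)

zones = [(900.0, 1.0e6), (950.0, 1.0e6), (850.0, 1.0e6)]
tau = multizone_delay(zones)
```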
Numerical Models for Sound Propagation in Long Spaces
NASA Astrophysics Data System (ADS)
Lai, Chenly Yuen Cheung
Both reverberation time and the steady-state sound field are key elements for assessing the acoustic conditions in an enclosed space. They affect noise propagation, speech intelligibility, clarity index, and definition. Since the sound field in a long space is non-diffuse, classical room acoustics theory does not apply in this situation. The ray-tracing technique and the image source method are the two common models used today to determine both reverberation time and the steady-state sound field in long enclosures. Although both models can give accurate estimates of reverberation times and steady-state sound fields, directly or indirectly, they often involve time-consuming calculations. In order to simplify the acoustic analysis, a theoretical formulation has been developed for predicting both steady-state sound fields and reverberation times in street canyons. The prediction model is further developed to predict the steady-state sound field in a long enclosure. Apart from the straight long enclosure, there are other variations such as a cross junction, a long enclosure with a T-intersection, and a U-turn long enclosure. In the present study, theoretical and experimental investigations were conducted to develop formulae for predicting reverberation times and steady-state sound fields in a junction of a street canyon and in a long enclosure with a T-intersection. The theoretical models are validated by comparing the numerical predictions with published experimental results. The theoretical results are also compared with precise indoor measurements and large-scale outdoor experimental results. Previous acoustical studies of long enclosures have focused mostly on monopole sound sources. Besides non-directional noise sources, many noise sources in long enclosures are dipole-like, such as train noise and fan noise. In order to study the characteristics of directional noise sources, a review of available dipole sources was conducted.
A dipole was constructed and subsequently used for experimental studies. In addition, a theoretical model was developed for predicting dipole sound fields. This model can be used to study the effect of a dipole source on speech intelligibility in long enclosures.
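The image source method mentioned above can be illustrated in a minimal 2-D form: two parallel reflecting facades generate a ladder of mirror images of the source, and the steady-state energy at a receiver is an attenuated sum over those images. This is a sketch only (real street-canyon models add ground reflection, air absorption, and full 3-D geometry); `image_source_levels` and its parameters are assumptions for illustration.

```python
def image_source_levels(src, rcv, width, beta=0.9, n_images=50):
    """Sum squared-pressure contributions of the image sources created by
    two parallel reflecting walls at y = 0 and y = width (a 2-D street-canyon
    idealization). beta is the wall energy reflection coefficient; each image
    reached after |k| reflections is attenuated by beta**|k| and by 1/r^2."""
    total = 0.0
    for k in range(-n_images, n_images + 1):
        # mirror-image y-coordinate after k wall reflections
        if k % 2 == 0:
            y_img = src[1] + k * width
        else:
            y_img = -src[1] + (k + 1) * width
        r2 = (src[0] - rcv[0]) ** 2 + (y_img - rcv[1]) ** 2
        total += (beta ** abs(k)) / r2
    return total
```

With beta = 0 only the direct path survives; raising beta adds reflected energy, which is what lengthens reverberation in a long enclosure.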
Using Deep Learning Model for Meteorological Satellite Cloud Image Prediction
NASA Astrophysics Data System (ADS)
Su, X.
2017-12-01
A satellite cloud image contains much weather information, such as precipitation. Short-term forecasting of cloud movement is important for precipitation forecasting and is the primary means of typhoon monitoring. Traditional methods mostly use cloud-feature matching and linear extrapolation to predict cloud movement, so non-stationary processes such as inversion and deformation during cloud motion are essentially not considered. It remains a hard task to predict cloud movement in a timely and accurate manner. Since deep learning models perform well at learning spatiotemporal features, we regard cloud image prediction as a spatiotemporal sequence forecasting problem and introduce a deep learning model to solve it. In this research, we use a variant of the Gated Recurrent Unit (GRU) with convolutional structures to handle spatiotemporal features, and we build an end-to-end model to solve this forecasting problem. In this model, both the input and output are spatiotemporal sequences. Compared to the Convolutional LSTM (ConvLSTM) model, this model has fewer parameters. We apply this model to GOES satellite data, and it performs well.
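The GRU update at the heart of this architecture can be written out for a scalar state. In the convolutional variant used here, the scalar products below become convolutions over feature maps, which is what lets the model track spatial cloud structure; the weights in `W` are arbitrary illustrative values, not trained ones.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def gru_step(h, x, W):
    """One scalar GRU update. W holds the input and recurrent weights for
    the update gate z, the reset gate r, and the candidate state."""
    z = sigmoid(W['wz'] * x + W['uz'] * h)            # update gate
    r = sigmoid(W['wr'] * x + W['ur'] * h)            # reset gate
    h_tilde = math.tanh(W['wh'] * x + W['uh'] * (r * h))  # candidate state
    return (1.0 - z) * h + z * h_tilde                # gated blend

W = dict(wz=0.5, uz=0.5, wr=0.5, ur=0.5, wh=1.0, uh=1.0)
h = 0.0
for x in [1.0, 0.5, -0.5]:   # a short input sequence
    h = gru_step(h, x, W)
```

The fewer-parameters claim follows from the gate count: a GRU has three weight sets per cell where an LSTM has four.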
Optimization of global model composed of radial basis functions using the term-ranking approach
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cai, Peng; Tao, Chao, E-mail: taochao@nju.edu.cn; Liu, Xiao-Jun
2014-03-15
A term-ranking method is put forward to optimize a global model composed of radial basis functions, in order to improve the predictability of the model. The effectiveness of the proposed method is examined by numerical simulation and experimental data. Numerical simulations indicate that this method can significantly lengthen the prediction time and decrease the Bayesian information criterion of the model. Application to a real voice signal shows that the optimized global model can capture more of the predictable component in chaos-like voice data while simultaneously reducing the predictable component (periodic pitch) in the residual signal.
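A global model built from radial basis functions, and the Bayesian information criterion used to decide whether dropping low-ranked terms pays off, can be sketched as follows (illustrative only; the paper's term-ranking procedure itself is not reproduced, and `gamma`, `centers`, and `weights` are assumed names):

```python
import math

def rbf_predict(x, centers, weights, gamma):
    """Radial-basis-function model: y = sum_i w_i * exp(-gamma * (x - c_i)^2)."""
    return sum(w * math.exp(-gamma * (x - c) ** 2)
               for c, w in zip(centers, weights))

def bic(residuals, n_terms):
    """Bayesian information criterion for a fitted model with n_terms
    parameters; pruning a term is favored when BIC decreases."""
    n = len(residuals)
    mse = sum(e * e for e in residuals) / n
    return n * math.log(mse) + n_terms * math.log(n)
```

BIC trades fit quality (the n*log(mse) term) against complexity (the n_terms*log(n) penalty), so removing a term that barely changes the residuals always lowers it.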
Mathewson, Paul D; Moyer-Horner, Lucas; Beever, Erik A; Briscoe, Natalie J; Kearney, Michael; Yahn, Jeremiah M; Porter, Warren P
2017-03-01
How climate constrains species' distributions through time and space is an important question in the context of conservation planning for climate change. Despite increasing awareness of the need to incorporate mechanism into species distribution models (SDMs), mechanistic modeling of endotherm distributions remains limited in this literature. Using the American pika (Ochotona princeps) as an example, we present a framework whereby mechanism can be incorporated into endotherm SDMs. Pika distribution has repeatedly been found to be constrained by warm temperatures, so we used Niche Mapper, a mechanistic heat-balance model, to convert macroclimate data to pika-specific surface activity time in summer across the western United States. We then explored the difference between using a macroclimate predictor (summer temperature) and using a mechanistic predictor (predicted surface activity time) in SDMs. Both approaches accurately predicted pika presences in current and past climate regimes. However, the activity models predicted 8-19% less habitat loss in response to annual temperature increases of ~3-5 °C predicted in the region by 2070, suggesting that pikas may be able to buffer some climate change effects through behavioral thermoregulation that can be captured by mechanistic modeling. Incorporating mechanism added value to the modeling by providing increased confidence in areas where different modeling approaches agreed and providing a range of outcomes in areas of disagreement. It also provided a more proximate variable relating animal distribution to climate, allowing investigations into how unique habitat characteristics and intraspecific phenotypic variation may allow pikas to exist in areas outside those predicted by generic SDMs. 
Only a small number of easily obtainable data are required to parameterize this mechanistic model for any endotherm, and its use can improve SDM predictions by explicitly modeling a widely applicable direct physiological effect: climate-imposed restrictions on activity. This more complete understanding is necessary to inform climate adaptation actions, management strategies, and conservation plans. © 2016 John Wiley & Sons Ltd.
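The activity-time predictor can be caricatured with a threshold model: assume a sinusoidal daily temperature cycle and count the hours cool enough for surface activity. This is a stand-in for the full Niche Mapper heat balance, and the 25 degree threshold and `surface_activity_hours` helper are hypothetical.

```python
import math

def surface_activity_hours(t_mean, t_amp, t_max_active=25.0, steps=24):
    """Hours per day an animal can be surface-active, assuming a sinusoidal
    daily air-temperature cycle (mean t_mean, amplitude t_amp, in deg C)
    and activity only below a physiological threshold t_max_active."""
    active = 0
    for h in range(steps):
        temp = t_mean + t_amp * math.sin(2.0 * math.pi * h / steps)
        if temp < t_max_active:
            active += 1
    return active * 24 / steps
```

Even in this toy form, warming shrinks the activity window gradually rather than switching habitat off at once, which is the behavioral-buffering effect the mechanistic SDMs capture.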
Haiducek, John D.; Welling, Daniel T.; Ganushkina, Natalia Y.; ...
2017-10-30
We simulated the entire month of January, 2005 using the Space Weather Modeling Framework (SWMF) with observed solar wind data as input. We conducted this simulation with and without an inner magnetosphere model, and tested two different grid resolutions. We evaluated the model's accuracy in predicting Kp, Sym-H, AL, and cross polar cap potential (CPCP). We find that the model does an excellent job of predicting the Sym-H index, with an RMSE of 17-18 nT. Kp is predicted well during storm-time conditions, but over-predicted during quiet times by a margin of 1 to 1.7 Kp units. AL is predicted reasonably well on average, with an RMSE of 230-270 nT. However, the model reaches the largest negative AL values significantly less often than the observations. The model tended to over-predict CPCP, with RMSE values on the order of 46-48 kV. We found the results to be insensitive to grid resolution, with the exception of the rate of occurrence of strongly negative AL values. The use of the inner magnetosphere component, however, affected results significantly, with all quantities except CPCP improving notably when the inner magnetosphere model was on.
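The two scores used to grade each index above are straightforward; a minimal sketch (`rmse` and `mean_bias` are generic helpers, not SWMF code):

```python
import math

def rmse(obs, model):
    """Root-mean-square error between observed and modeled index values."""
    return math.sqrt(sum((o - m) ** 2 for o, m in zip(obs, model)) / len(obs))

def mean_bias(obs, model):
    """Mean signed error; a positive value means the model over-predicts,
    as reported for quiet-time Kp and for CPCP."""
    return sum(m - o for o, m in zip(obs, model)) / len(obs)
```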
Assessing predictability of a hydrological stochastic-dynamical system
NASA Astrophysics Data System (ADS)
Gelfan, Alexander
2014-05-01
The water cycle includes processes with different memory timescales, which creates potential for predictability of a hydrological system based on separating its long- and short-memory components and conditioning long-term prediction on the more slowly evolving components (similar to approaches in climate prediction). In the context of the Panta Rhei IAHS Decade questions, it is important to find a conceptual approach for classifying hydrological system components with respect to their predictability, defining predictable/unpredictable patterns, and extending the lead time and improving the reliability of hydrological predictions based on the predictable patterns. Representing hydrological systems as dynamical systems subjected to the effect of noise (stochastic-dynamical systems) provides a possible tool for such conceptualization. A method has been proposed for assessing the predictability of a hydrological system arising from its sensitivity to both initial and boundary conditions. The predictability is defined through a procedure of convergence of a pre-assigned probabilistic measure (e.g. variance) of the system state to a stable value. The time interval of this convergence, that is, the time interval during which the system loses memory of its initial state, defines the limit of the system's predictability. The proposed method was applied to assess the predictability of soil moisture dynamics at the Nizhnedevitskaya experimental station (51.516N; 38.383E) located in the agricultural zone of central European Russia. A stochastic-dynamical model combining a deterministic one-dimensional model of the hydrothermal regime of soil with a stochastic model of meteorological inputs was developed. The deterministic model describes coupled heat and moisture transfer through unfrozen/frozen soil and accounts for the influence of phase changes on water flow.
The stochastic model produces time series of daily meteorological variables (precipitation, air temperature and humidity) whose statistical properties are similar to those of the corresponding series of actual data measured at the station. Starting from the initial conditions and forced by Monte-Carlo-generated synthetic meteorological series, the model simulated diverging trajectories of soil moisture characteristics (water content of the soil column, moisture of different soil layers, etc.). The limit of predictability of a specific characteristic was determined through the time of stabilization of the variance of that characteristic between the trajectories as they move away from the initial state. Numerical experiments were carried out with the stochastic-dynamical model to analyze the sensitivity of the soil moisture predictability assessments to uncertainty in the initial conditions, and to determine the effects of the soil hydraulic properties and of soil freezing processes on the predictability. It was found, in particular, that soil water content predictability is sensitive to errors in the initial conditions and strongly depends on the hydraulic properties of soil under both unfrozen and frozen conditions. Even if the initial conditions are "well-established", the assessed predictability of water content of unfrozen soil does not exceed 30-40 days, while for frozen conditions it may be as long as 3-4 months. The latter creates an opportunity for utilizing the autumn soil water content as a predictor of spring snowmelt runoff in the region under consideration.
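The variance-stabilization procedure can be sketched with a generic noisy dynamical system standing in for the soil model (the damped AR(1) dynamics, the 90% saturation criterion, and all parameter values below are illustrative assumptions):

```python
import random
import statistics

def predictability_limit(step, x0, n_ens=200, spread=0.05, t_max=300, seed=1):
    """Evolve an ensemble of perturbed initial states under a stochastic map
    step(x, rng) and return the first time at which the ensemble variance
    reaches 90% of its saturated value; beyond this time the system has
    effectively lost memory of the initial state."""
    rng = random.Random(seed)
    ens = [x0 + rng.gauss(0.0, spread) for _ in range(n_ens)]
    variances = []
    for _ in range(t_max):
        ens = [step(x, rng) for x in ens]
        variances.append(statistics.pvariance(ens))
    v_sat = statistics.mean(variances[-50:])   # saturated variance level
    return next(t for t, v in enumerate(variances) if v >= 0.9 * v_sat)

# stand-in dynamics: damped state plus meteorological noise
t_limit = predictability_limit(lambda x, rng: 0.9 * x + rng.gauss(0.0, 0.1), 1.0)
```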
Predicting clicks of PubMed articles.
Mao, Yuqing; Lu, Zhiyong
2013-01-01
Predicting the popularity or access usage of an article has the potential to improve the quality of PubMed searches. We can model the click trend of each article as its access changes over time by mining the PubMed query logs, which contain the previous access history for all articles. In this article, we examine the access patterns produced by PubMed users in two years (July 2009 to July 2011). We explore the time series of accesses for each article in the query logs, model the trends with regression approaches, and subsequently use the models for prediction. We show that the click trends of PubMed articles are best fitted with a log-normal regression model. This model allows the number of accesses an article receives and the time since it first becomes available in PubMed to be related via quadratic and logistic functions, with the model parameters to be estimated via maximum likelihood. Our experiments predicting the number of accesses for an article based on its past usage demonstrate that the mean absolute error and mean absolute percentage error of our model are 4.0% and 8.1% lower than the power-law regression model, respectively. The log-normal distribution is also shown to perform significantly better than a previous prediction method based on a human memory theory in cognitive science. This work warrants further investigation on the utility of such a log-normal regression approach towards improving information access in PubMed.
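The shape of the fitted click trend, and the MAPE score used to compare models, can be sketched as follows (the functional form is the log-normal curve named above, but `a`, `mu`, and `sigma` values used here are illustrative, not the fitted parameters):

```python
import math

def lognormal_trend(t, a, mu, sigma):
    """Predicted accesses at time t (days since the article appeared): a
    log-normal curve that rises quickly, peaks, then decays with a heavy tail."""
    return (a / (t * sigma * math.sqrt(2.0 * math.pi))) * \
        math.exp(-((math.log(t) - mu) ** 2) / (2.0 * sigma ** 2))

def mape(actual, predicted):
    """Mean absolute percentage error between observed and predicted accesses."""
    return 100.0 * sum(abs(a - p) / a for a, p in zip(actual, predicted)) / len(actual)
```

For mu = 2 and sigma = 0.5 the curve peaks near t = exp(mu - sigma^2), about 5.75 days, and decays thereafter, matching the qualitative access pattern described above.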
Karschner, Erin L; Schwope, David M; Schwilke, Eugene W; Goodwin, Robert S; Kelly, Deanna L; Gorelick, David A; Huestis, Marilyn A
2012-10-01
Determining time since last cannabis/Δ9-tetrahydrocannabinol (THC) exposure is important in clinical, workplace, and forensic settings. Mathematical models calculating time of last exposure from whole blood concentrations typically employ a theoretical 0.5 whole blood-to-plasma (WB/P) ratio. No studies previously evaluated predictive models utilizing empirically derived WB/P ratios, or whole blood cannabinoid pharmacokinetics after subchronic THC dosing. Ten male chronic, daily cannabis smokers received escalating around-the-clock oral THC (40-120 mg daily) for 8 days. Cannabinoids were quantified in whole blood and plasma by two-dimensional gas chromatography-mass spectrometry. Maximum whole blood THC occurred 3.0 h after the first oral THC dose and at 103.5 h (4.3 days) during multiple THC dosing. Median WB/P ratios were THC 0.63 (n=196), 11-hydroxy-THC 0.60 (n=189), and 11-nor-9-carboxy-THC (THCCOOH) 0.55 (n=200). Predictive models utilizing these WB/P ratios accurately estimated last cannabis exposure in 96% and 100% of specimens collected within 1-5 h after a single oral THC dose and throughout multiple dosing, respectively. Models were only 60% and 12.5% accurate 12.5 and 22.5 h after the last THC dose, respectively. Predictive models estimating time since last cannabis intake from whole blood and plasma cannabinoid concentrations were inaccurate during abstinence, but highly accurate during active THC dosing. THC redistribution from large cannabinoid body stores and high circulating THCCOOH concentrations create different pharmacokinetic profiles than those in less-than-daily cannabis smokers that were used to derive the models. Thus, the models do not accurately predict time of last THC intake in individuals consuming THC daily. Published by Elsevier Ireland Ltd.
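The ratio step that feeds the predictive models is a simple conversion; a minimal sketch using the median WB/P ratios reported above (the `plasma_equivalent` helper is an assumed name, and the time-of-last-use models themselves are not reproduced here):

```python
# Median empirical whole-blood-to-plasma ratios reported in the study
WB_P = {'THC': 0.63, '11-OH-THC': 0.60, 'THCCOOH': 0.55}

def plasma_equivalent(whole_blood_conc, analyte):
    """Convert a whole-blood cannabinoid concentration to its plasma
    equivalent using the empirically derived WB/P ratio, in place of the
    theoretical 0.5 used by earlier predictive models."""
    return whole_blood_conc / WB_P[analyte]
```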
Broadening the trans-contextual model of motivation: A study with Spanish adolescents.
González-Cutre, D; Sicilia, Á; Beas-Jiménez, M; Hagger, M S
2014-08-01
The original trans-contextual model of motivation proposed that autonomy support from teachers develops students' autonomous motivation in physical education (PE), and that autonomous motivation is transferred from PE contexts to physical activity leisure-time contexts, and predicts attitudes, perceived behavioral control and subjective norms, and forming intentions to participate in future physical activity behavior. The purpose of this study was to test an extended trans-contextual model of motivation including autonomy support from peers and parents and basic psychological needs in a Spanish sample. School students (n = 400) aged between 12 and 18 years completed measures of perceived autonomy support from three sources, autonomous motivation and constructs from the theory of planned behavior at three different points in time and in two contexts, PE and leisure-time. A path analysis controlling for past physical activity behavior supported the main postulates of the model. Autonomous motivation in a PE context predicted autonomous motivation in a leisure-time physical activity context, perceived autonomy support from teachers predicted satisfaction of basic psychological needs in PE, and perceived autonomy support from peers and parents predicted need satisfaction in leisure-time. This study provides a cross-cultural replication of the trans-contextual model of motivation and broadens it to encompass basic psychological needs. © 2013 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
NASA Technical Reports Server (NTRS)
Kaplan, Michael L.; Lin, Yuh-Lang
2005-01-01
The purpose of the research was to develop and test improved hazard algorithms that could result in the development of sensors that are better able to anticipate potentially severe atmospheric turbulence, which affects aircraft safety. The research focused on employing numerical simulation models to develop improved algorithms for the prediction of aviation turbulence. This involved producing both research simulations and real-time simulations of environments predisposed to moderate and severe aviation turbulence. The research resulted in the following fundamental advancements toward the aforementioned goal: 1) very high resolution simulations of turbulent environments indicated how predictive hazard indices could be improved, resulting in a candidate hazard index that indicated the potential for improvement over existing operational indices, 2) a real-time turbulence hazard numerical modeling system was improved by correcting deficiencies in its simulation of moist convection, and 3) the same real-time predictive system was tested by running the code twice daily, and the hazard prediction indices were updated and improved. Additionally, a simple validation study was undertaken to determine how well a real-time hazard predictive index performed when compared to commercial pilot observations of aviation turbulence. Simple statistical analyses were performed in this validation study, indicating potential skill in employing the hazard prediction index to predict regions of varying intensities of aviation turbulence. Data sets from a research numerical model were provided to NASA for use in a large eddy simulation numerical model. A NASA contractor report and several refereed journal articles were prepared and submitted for publication during the course of this research.
Time-dependent Ionization in a Steady Flow in an MHD Model of the Solar Corona and Wind
NASA Astrophysics Data System (ADS)
Shen, Chengcai; Raymond, John C.; Mikić, Zoran; Linker, Jon A.; Reeves, Katharine K.; Murphy, Nicholas A.
2017-11-01
Time-dependent ionization is important for diagnostics of coronal streamers and pseudostreamers. We describe time-dependent ionization calculations for a three-dimensional magnetohydrodynamic (MHD) model of the solar corona and inner heliosphere. We analyze how non-equilibrium ionization (NEI) influences emission from a pseudostreamer during the Whole Sun Month interval (Carrington rotation CR1913, 1996 August 22 to September 18). We use a time-dependent code to calculate NEI states, based on the plasma temperature, density, velocity, and magnetic field in the MHD model, to obtain the synthetic emissivities and predict the intensities of the Lyα, O VI, Mg X, and Si XII emission lines observed by the SOHO/Ultraviolet Coronagraph Spectrometer (UVCS). At low coronal heights, the predicted intensity profiles of both Lyα and O VI lines match UVCS observations well, but the Mg X and Si XII emission are predicted to be too bright. At larger heights, the O VI and Mg X lines are predicted to be brighter for NEI than for equilibrium ionization around this pseudostreamer, and Si XII is predicted to be fainter for NEI cases. The differences of predicted UVCS intensities between NEI and equilibrium ionization are around a factor of 2, but neither matches the observed intensity distributions along the full length of the UVCS slit. Variations in elemental abundances in closed field regions due to gravitational settling and the FIP effect may significantly contribute to the predicted uncertainty. The assumption of Maxwellian electron distributions and errors in the magnetic field on the solar surface may also have notable effects on the mismatch between observations and model predictions.
Wu, Mike; Ghassemi, Marzyeh; Feng, Mengling; Celi, Leo A; Szolovits, Peter; Doshi-Velez, Finale
2017-05-01
The widespread adoption of electronic health records allows us to ask evidence-based questions about the need for and benefits of specific clinical interventions in critical-care settings across large populations. We investigated the prediction of vasopressor administration and weaning in the intensive care unit. Vasopressors are commonly used to control hypotension, and changes in timing and dosage can have a large impact on patient outcomes. We considered a cohort of 15 695 intensive care unit patients without orders for reduced care who were alive 30 days post-discharge. A switching-state autoregressive model (SSAM) was trained to predict the multidimensional physiological time series of patients before, during, and after vasopressor administration. The latent states from the SSAM were used as predictors of vasopressor administration and weaning. The unsupervised SSAM features were able to predict patient vasopressor administration and successful patient weaning. Features derived from the SSAM achieved areas under the receiver operating curve of 0.92, 0.88, and 0.71 for predicting ungapped vasopressor administration, gapped vasopressor administration, and vasopressor weaning, respectively. We also demonstrated many cases where our model predicted weaning well in advance of a successful wean. Models that used SSAM features increased performance on both predictive tasks. These improvements may reflect an underlying, and ultimately predictive, latent state detectable from the physiological time series. © The Author 2016. Published by Oxford University Press on behalf of the American Medical Informatics Association. All rights reserved. For Permissions, please email: journals.permissions@oup.com
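The discrimination scores reported above are areas under the ROC curve, which can be computed directly from the rank (Mann-Whitney) statistic; a minimal generic implementation, not the SSAM pipeline itself:

```python
def auc(labels, scores):
    """Area under the ROC curve via the rank statistic: the probability
    that a randomly chosen positive example scores above a randomly chosen
    negative one (ties count as half)."""
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

An AUC of 0.92, as reported for ungapped vasopressor administration, means the model ranks a random treated patient above a random untreated one 92% of the time.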
Clinical models are inaccurate in predicting bile duct stones in situ for patients with gallstones.
Topal, B; Fieuws, S; Tomczyk, K; Aerts, R; Van Steenbergen, W; Verslype, C; Penninckx, F
2009-01-01
The probability that a patient has common bile duct stones (CBDS) is a key factor in determining diagnostic and treatment strategies. This prospective cohort study evaluated the accuracy of clinical models in predicting CBDS for patients who will undergo cholecystectomy for lithiasis. From October 2005 until September 2006, 335 consecutive patients with symptoms of gallstone disease underwent cholecystectomy. Statistical analysis was performed on prospective patient data obtained at the time of first presentation to the hospital. Demonstrable CBDS at the time of endoscopic retrograde cholangiopancreatography (ERCP) or intraoperative cholangiography (IOC) was considered the gold standard for the presence of CBDS. Common bile duct stones were demonstrated in 53 patients. For 35 patients, ERCP was performed, with successful stone clearance in 24 of 30 patients who had proven CBDS. In 29 patients, IOC showed CBDS, which were managed successfully via laparoscopic common bile duct exploration, with stone extraction at the time of cholecystectomy. Prospective validation of the existing model for CBDS resulted in a predictive accuracy rate of 73%. The new model showed a predictive accuracy rate of 79%. Clinical models are inaccurate in predicting CBDS in patients with cholelithiasis. Management strategies should be based on the local availability of therapeutic expertise.
Creep fatigue life prediction for engine hot section materials (isotropic)
NASA Technical Reports Server (NTRS)
Moreno, Vito; Nissley, David; Lin, Li-Sen Jim
1985-01-01
The first two years of a two-phase program aimed at improving the high temperature crack initiation life prediction technology for gas turbine hot section components are discussed. In the Phase 1 (baseline) effort, low cycle fatigue (LCF) models, using a data base generated for a cast nickel base gas turbine hot section alloy (B1900+Hf), were evaluated for their ability to predict the crack initiation life for relevant creep-fatigue loading conditions and to define the data required for determination of model constants. The variables included strain range and rate, mean strain, strain hold times, and temperature. None of the models predicted all of the life trends within reasonable data requirements. A Cycle Damage Accumulation (CDA) model was therefore developed, which follows an exhaustion of material ductility approach. Material ductility is estimated based on observed similarities of deformation structure between fatigue, tensile and creep tests. The cycle damage function is based on total strain range, maximum stress and stress amplitude and includes both time independent and time dependent components. The CDA model accurately predicts all of the trends in creep-fatigue life with loading conditions. In addition, all of the CDA model constants are determinable from rapid cycle, fully reversed fatigue tests and monotonic tensile and/or creep data.
Luo, Yi; Zhang, Tao; Li, Xiao-song
2016-05-01
To explore the application of a fuzzy time series model based on fuzzy c-means clustering in forecasting the monthly incidence of Hepatitis E in mainland China. A predictive model (fuzzy time series method based on fuzzy c-means clustering) was developed using Hepatitis E incidence data in mainland China between January 2004 and July 2014. The incidence data from August 2014 to November 2014 were used to test the fitness of the predictive model. The forecasting results were compared with those resulting from traditional fuzzy time series models. The fuzzy time series model based on fuzzy c-means clustering had a fitting mean squared error (MSE) of 0.0011 and a forecasting MSE of 6.9775 × 10⁻⁴, compared with 0.0017 and 0.0014 from the traditional forecasting model. The results indicate that the fuzzy time series model based on fuzzy c-means clustering has a better performance in forecasting the incidence of Hepatitis E.
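The clustering step that distinguishes this model from traditional fuzzy time series methods can be sketched in isolation. Below is a minimal 1-D fuzzy c-means with deterministic initialization at the data extremes; it only illustrates the membership and centre updates, is not the authors' implementation, and uses invented data:

```python
def fuzzy_c_means(xs, c=2, m=2.0, iters=50):
    """Minimal 1-D fuzzy c-means. Each pass recomputes fuzzy memberships
    from inverse distances, then fuzzy-weighted cluster centres."""
    centres = [min(xs), max(xs)] if c == 2 else xs[:c]
    for _ in range(iters):
        u = []
        for x in xs:
            d = [abs(x - v) + 1e-12 for v in centres]   # guard exact hits
            u.append([1.0 / sum((d[i] / d[j]) ** (2.0 / (m - 1.0))
                                for j in range(c)) for i in range(c)])
        # Centre i is the membership^m-weighted mean of the data.
        centres = [sum(u[k][i] ** m * xs[k] for k in range(len(xs)))
                   / sum(u[k][i] ** m for k in range(len(xs)))
                   for i in range(c)]
    return centres, u
```

In a fuzzy time series application, the converged centres would define the fuzzy sets that partition the universe of incidence values.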
A Quantitative Description of Suicide Inhibition of Dichloroacetic Acid in Rats and Mice
DOE Office of Scientific and Technical Information (OSTI.GOV)
Keys, Deborah A.; Schultz, Irv R.; Mahle, Deirdre A.
Dichloroacetic acid (DCA), a minor metabolite of trichloroethylene (TCE) and water disinfection byproduct, remains an important risk assessment issue because of its carcinogenic potency. DCA has been shown to inhibit its own metabolism by irreversibly inactivating glutathione transferase zeta (GSTzeta). To better predict internal dosimetry of DCA, a physiologically based pharmacokinetic (PBPK) model of DCA was developed. Suicide inhibition was described dynamically by varying the rate of maximal GSTzeta mediated metabolism of DCA (Vmax) over time. Resynthesis (zero-order) and degradation (first-order) of metabolic activity were described. Published iv pharmacokinetic studies in naive rats were used to estimate an initial Vmax value, with Km set to an in vitro determined value. Degradation and resynthesis rates were set to estimated values from a published immunoreactive GSTzeta protein time course. The first-order inhibition rate, kd, was estimated from this same time course. A secondary, linear non-GSTzeta-mediated metabolic pathway is proposed to fit DCA time courses following treatment with DCA in drinking water. The PBPK model predictions were validated by comparing predicted DCA concentrations to measured concentrations in published studies of rats pretreated with DCA following iv exposure to 0.05 to 20 mg/kg DCA. The same model structure was parameterized to simulate DCA time courses following iv exposure in naive and pretreated mice. Blood and liver concentrations during and postexposure to DCA in drinking water were predicted. Comparisons of PBPK model predictions to measured values were favorable, lending support for the further development of this model for application to DCA or TCE human health risk assessment.
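The dynamic description of suicide inhibition in the abstract (zero-order resynthesis, first-order degradation, plus inactivation tied to metabolism) can be caricatured with a toy one-compartment simulation. All parameter values are illustrative, the coupling of inactivation to the Michaelis-Menten flux is one plausible reading of the abstract, and this is not the published PBPK model:

```python
def simulate(c0, vmax0, km, k_syn, k_deg, k_d, dt=0.01, t_end=10.0):
    """Toy one-compartment suicide-inhibition model (explicit Euler).
    Enzyme capacity vmax: zero-order resynthesis (k_syn), first-order
    degradation (k_deg), plus inactivation proportional to the metabolic
    flux (k_d). Returns the final substrate and capacity (c, vmax)."""
    c, vmax, t = c0, vmax0, 0.0
    while t < t_end:
        flux = vmax * c / (km + c)      # Michaelis-Menten metabolism
        c += dt * (-flux)               # substrate consumed by metabolism
        vmax += dt * (k_syn - k_deg * vmax - k_d * flux)
        t += dt
    return c, vmax
```

With k_d > 0 the metabolic capacity is driven below its k_syn/k_deg steady state while substrate is present, which is the qualitative signature of suicide inhibition the fitted model has to capture.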
Predicting the dynamics of bacterial growth inhibition by ribosome-targeting antibiotics
NASA Astrophysics Data System (ADS)
Greulich, Philip; Doležal, Jakub; Scott, Matthew; Evans, Martin R.; Allen, Rosalind J.
2017-12-01
Understanding how antibiotics inhibit bacteria can help to reduce antibiotic use and hence avoid antimicrobial resistance—yet few theoretical models exist for bacterial growth inhibition by a clinically relevant antibiotic treatment regimen. In particular, in the clinic, antibiotic treatment is time-dependent. Here, we use a theoretical model, previously applied to steady-state bacterial growth, to predict the dynamical response of a bacterial cell to a time-dependent dose of ribosome-targeting antibiotic. Our results depend strongly on whether the antibiotic shows reversible transport and/or low-affinity ribosome binding (‘low-affinity antibiotic’) or, in contrast, irreversible transport and/or high affinity ribosome binding (‘high-affinity antibiotic’). For low-affinity antibiotics, our model predicts that growth inhibition depends on the duration of the antibiotic pulse, and can show a transient period of very fast growth following removal of the antibiotic. For high-affinity antibiotics, growth inhibition depends on peak dosage rather than dose duration, and the model predicts a pronounced post-antibiotic effect, due to hysteresis, in which growth can be suppressed for long times after the antibiotic dose has ended. These predictions are experimentally testable and may be of clinical significance.
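The qualitative contrast the authors draw between reversible and irreversible transport can be illustrated with a deliberately crude caricature: intracellular drug follows simple uptake/efflux kinetics and growth is inhibited by a saturating function of the drug level. This is not the paper's ribosome-binding model; every parameter and functional form below is an assumption for illustration only:

```python
def growth_after_pulse(k_in, k_out, pulse_end=2.0, t_end=6.0, dt=0.001,
                       a_ext=5.0, k_half=1.0, lam0=1.0):
    """Growth rate at t_end after a square antibiotic pulse on [0, pulse_end].
    Intracellular drug: da/dt = k_in * a_ext(t) - k_out * a  (uptake/efflux);
    growth rate: lam0 / (1 + a / k_half)  (saturating inhibition).
    k_out > 0 mimics reversible transport; k_out = 0 mimics irreversible."""
    a, t = 0.0, 0.0
    while t < t_end:
        ext = a_ext if t < pulse_end else 0.0
        a += dt * (k_in * ext - k_out * a)
        t += dt
    return lam0 / (1.0 + a / k_half)
```

Even this caricature reproduces the headline asymmetry: with efflux the cell recovers after the pulse ends, whereas without it the accumulated drug suppresses growth long afterwards, a crude analogue of the post-antibiotic effect.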
Nøst, Therese Haugdahl; Breivik, Knut; Wania, Frank; Rylander, Charlotta; Odland, Jon Øyvind; Sandanger, Torkjel Manning
2015-01-01
Background: Studies on the health effects of polychlorinated biphenyls (PCBs) call for an understanding of past and present human exposure. Time-resolved mechanistic models may supplement information on concentrations in individuals obtained from measurements and/or statistical approaches if they can be shown to reproduce empirical data. Objectives: Here, we evaluated the capability of one such mechanistic model to reproduce measured PCB concentrations in individual Norwegian women. We also assessed individual life-course concentrations. Methods: Concentrations of four PCB congeners in pregnant (n = 310, sampled in 2007–2009) and postmenopausal (n = 244, 2005) women were compared with person-specific predictions obtained using CoZMoMAN, an emission-based environmental fate and human food-chain bioaccumulation model. Person-specific predictions were also made using statistical regression models including dietary and lifestyle variables and concentrations. Results: CoZMoMAN accurately reproduced medians and ranges of measured concentrations in the two study groups. Furthermore, rank correlations between measurements and predictions from both CoZMoMAN and regression analyses were strong (Spearman's r > 0.67). Precision in quartile assignments from predictions was strong overall as evaluated by weighted Cohen's kappa (> 0.6). Simulations indicated large inter-individual differences in concentrations experienced in the past. Conclusions: The mechanistic model reproduced all measurements of PCB concentrations within a factor of 10, and subject ranking and quartile assignments were overall largely consistent, although they were weak within each study group. Contamination histories for individuals predicted by CoZMoMAN revealed variation between study subjects, particularly in the timing of peak concentrations. Mechanistic models can provide individual PCB exposure metrics that could serve as valuable supplements to measurements.
Citation Nøst TH, Breivik K, Wania F, Rylander C, Odland JØ, Sandanger TM. 2016. Estimating time-varying PCB exposures using person-specific predictions to supplement measured values: a comparison of observed and predicted values in two cohorts of Norwegian women. Environ Health Perspect 124:299–305; http://dx.doi.org/10.1289/ehp.1409191 PMID:26186800
NASA Astrophysics Data System (ADS)
Chen, L. C.; Mo, K. C.; Zhang, Q.; Huang, J.
2014-12-01
Drought prediction from monthly to seasonal time scales is of critical importance to disaster mitigation, agricultural planning, and multi-purpose reservoir management. Starting in December 2012, the NOAA Climate Prediction Center (CPC) has been providing operational Standardized Precipitation Index (SPI) Outlooks using the North American Multi-Model Ensemble (NMME) forecasts, to support CPC's monthly drought outlooks and briefing activities. The current NMME system consists of six model forecasts from U.S. and Canadian modeling centers, including the CFSv2, CM2.1, GEOS-5, CCSM3.0, CanCM3, and CanCM4 models. In this study, we conduct an assessment of the predictive skill of meteorological drought using real-time NMME forecasts for the period from May 2012 to May 2014. The ensemble SPI forecasts are the equally weighted mean of the six model forecasts. Two performance measures, the anomaly correlation coefficient and the root-mean-square error against the observations, are used to evaluate forecast skill. Similar to the assessment based on NMME retrospective forecasts, the predictive skill of monthly-mean precipitation (P) forecasts is generally low after the second month, and errors vary among models. Although P forecast skill is not large, SPI predictive skill is high and the differences among models are small. The skill mainly comes from the P observations appended to the model forecasts. This factor also contributes to the similarity of SPI prediction among the six models. Still, NMME SPI ensemble forecasts have higher skill than those based on individual models or persistence, and the 6-month SPI forecasts are skillful out to four months. The three major drought events that occurred during the 2012-2014 period, namely the 2012 Central Great Plains drought, the 2013 Upper Midwest flash drought, and the 2013-2014 California drought, are used as examples to illustrate the system's strengths and limitations.
For precipitation-driven drought events, such as the 2012 Central Great Plains drought, NMME SPI forecasts perform well in predicting drought severity and spatial patterns. For fast-developing drought events, such as the 2013 Upper Midwest flash drought, the system failed to capture the onset of the drought.
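The two verification measures used in this assessment are standard and easy to state precisely. A minimal sketch (variable names and the test data are illustrative, not the study's forecasts):

```python
from math import sqrt

def anomaly_correlation(forecast, observed, climatology):
    """Anomaly correlation coefficient: Pearson correlation between
    forecast and observed anomalies about a common climatology."""
    fa = [f - c for f, c in zip(forecast, climatology)]
    oa = [o - c for o, c in zip(observed, climatology)]
    num = sum(f * o for f, o in zip(fa, oa))
    den = sqrt(sum(f * f for f in fa)) * sqrt(sum(o * o for o in oa))
    return num / den

def rmse(forecast, observed):
    """Root-mean-square error of a forecast against observations."""
    return sqrt(sum((f - o) ** 2 for f, o in zip(forecast, observed))
                / len(forecast))
```

Removing the climatology before correlating is what makes the ACC a test of skill in predicting departures from normal rather than the (easy) seasonal cycle.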
Putting mechanisms into crop production models
USDA-ARS?s Scientific Manuscript database
Crop simulation models dynamically predict processes of carbon, nitrogen, and water balance on daily or hourly time-steps to the point of predicting yield and production at crop maturity. A brief history of these models is reviewed, and their level of mechanism for assimilation and respiration, ran...
Predictive microbiology for food packaging applications
USDA-ARS?s Scientific Manuscript database
Mathematical modeling has been applied to describe the microbial growth and inactivation in foods for decades and is also known as ‘Predictive microbiology’. When models are developed and validated, their applications may save cost and time. The Pathogen Modeling Program (PMP), a collection of mode...
NASA Astrophysics Data System (ADS)
Kavetski, Dmitri; Clark, Martyn P.
2010-10-01
Despite the widespread use of conceptual hydrological models in environmental research and operations, they remain frequently implemented using numerically unreliable methods. This paper considers the impact of the time stepping scheme on model analysis (sensitivity analysis, parameter optimization, and Markov chain Monte Carlo-based uncertainty estimation) and prediction. It builds on the companion paper (Clark and Kavetski, 2010), which focused on numerical accuracy, fidelity, and computational efficiency. Empirical and theoretical analysis of eight distinct time stepping schemes for six different hydrological models in 13 diverse basins demonstrates several critical conclusions. (1) Unreliable time stepping schemes, in particular, fixed-step explicit methods, suffer from troublesome numerical artifacts that severely deform the objective function of the model. These deformations are not rare isolated instances but can arise in any model structure, in any catchment, and under common hydroclimatic conditions. (2) Sensitivity analysis can be severely contaminated by numerical errors, often to the extent that it becomes dominated by the sensitivity of truncation errors rather than the model equations. (3) Robust time stepping schemes generally produce "better behaved" objective functions, free of spurious local optima, and with sufficient numerical continuity to permit parameter optimization using efficient quasi-Newton methods. When implemented within a multistart framework, modern Newton-type optimizers are robust even when started far from the optima and provide valuable diagnostic insights not directly available from evolutionary global optimizers. (4) Unreliable time stepping schemes lead to inconsistent and biased inferences of the model parameters and internal states.
(5) Even when interactions between hydrological parameters and numerical errors provide "the right result for the wrong reason" and the calibrated model performance appears adequate, unreliable time stepping schemes make the model unnecessarily fragile in predictive mode, undermining validation assessments and operational use. Erroneous or misleading conclusions of model analysis and prediction arising from numerical artifacts in hydrological models are intolerable, especially given that robust numerics are accepted as mainstream in other areas of science and engineering. We hope that the vivid empirical findings will encourage the conceptual hydrological community to close its Pandora's box of numerical problems, paving the way for more meaningful model application and interpretation.
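The kind of truncation error the paper attributes to fixed-step explicit methods is easy to reproduce on the simplest conceptual store, the linear reservoir dS/dt = -k*S, whose exact solution is S(t) = S0*exp(-k*t). The sketch below (not from the paper) shows the explicit Euler error shrinking as the step is refined:

```python
from math import exp

def linear_reservoir(s0=1.0, k=1.0, t_end=1.0, n_steps=10):
    """Fixed-step explicit Euler for the linear reservoir dS/dt = -k*S."""
    s, dt = s0, t_end / n_steps
    for _ in range(n_steps):
        s += dt * (-k * s)   # one explicit Euler step
    return s

exact = exp(-1.0)   # analytical S(1) for s0 = 1, k = 1
```

Because Euler's error here is O(dt), halving the step roughly halves the error; the paper's point is that in a calibration loop this step-dependent error gets entangled with, and can dominate, the parameter sensitivities.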
Control of Systems With Slow Actuators Using Time Scale Separation
NASA Technical Reports Server (NTRS)
Stepanyan, Vehram; Nguyen, Nhan
2009-01-01
This paper addresses the problem of controlling a nonlinear plant with a slow actuator using singular perturbation method. For the known plant-actuator cascaded system the proposed scheme achieves tracking of a given reference model with considerably less control demand than would otherwise result when using conventional design techniques. This is the consequence of excluding the small parameter from the actuator dynamics via time scale separation. The resulting tracking error is within the order of this small parameter. For the unknown system the adaptive counterpart is developed based on the prediction model, which is driven towards the reference model by the control design. It is proven that the prediction model tracks the reference model with an error proportional to the small parameter, while the prediction error converges to zero. The resulting closed-loop system with all prediction models and adaptive laws remains stable. The benefits of the approach are demonstrated in simulation studies and compared to conventional control approaches.
Investigation of metabolites for estimating blood deposition time.
Lech, Karolina; Liu, Fan; Davies, Sarah K; Ackermann, Katrin; Ang, Joo Ern; Middleton, Benita; Revell, Victoria L; Raynaud, Florence J; Hoveijn, Igor; Hut, Roelof A; Skene, Debra J; Kayser, Manfred
2018-01-01
Trace deposition timing reflects a novel concept in forensic molecular biology involving the use of rhythmic biomarkers for estimating the time within a 24-h day/night cycle a human biological sample was left at the crime scene, which in principle allows verifying a sample donor's alibi. Previously, we introduced two circadian hormones for trace deposition timing and recently demonstrated that messenger RNA (mRNA) biomarkers significantly improve time prediction accuracy. Here, we investigate the suitability of metabolites measured using a targeted metabolomics approach, for trace deposition timing. Analysis of 171 plasma metabolites collected around the clock at 2-h intervals for 36 h from 12 male participants under controlled laboratory conditions identified 56 metabolites showing statistically significant oscillations, with peak times falling into three day/night time categories: morning/noon, afternoon/evening and night/early morning. Time prediction modelling identified 10 independently contributing metabolite biomarkers, which together achieved prediction accuracies expressed as AUC of 0.81, 0.86 and 0.90 for these three time categories respectively. Combining metabolites with previously established hormone and mRNA biomarkers in time prediction modelling resulted in an improved prediction accuracy reaching AUCs of 0.85, 0.89 and 0.96 respectively. The additional impact of metabolite biomarkers, however, was rather minor as the previously established model with melatonin, cortisol and three mRNA biomarkers achieved AUC values of 0.88, 0.88 and 0.95 for the same three time categories respectively. Nevertheless, the selected metabolites could become practically useful in scenarios where RNA marker information is unavailable such as due to RNA degradation. This is the first metabolomics study investigating circulating metabolites for trace deposition timing, and more work is needed to fully establish their usefulness for this forensic purpose.
Forecasting Geomagnetic Activity Using Kalman Filters
NASA Astrophysics Data System (ADS)
Veeramani, T.; Sharma, A.
2006-05-01
The coupling of energy from the solar wind to the magnetosphere leads to geomagnetic activity in the form of storms and substorms, which are characterized by indices such as AL, Dst and Kp. The geomagnetic activity has been predicted near-real time using local linear filter models of the system dynamics wherein the time series of the input solar wind and the output magnetospheric response were used to reconstruct the phase space of the system by a time-delay embedding technique. Recently, the radiation belt dynamics have been studied using an adaptive linear state space model [Rigler et al. 2004]. This was achieved by assuming a linear autoregressive equation for the underlying process and an adaptive identification of the model parameters using a Kalman filter approach. We use such a model for predicting the geomagnetic activity. In the case of substorms, the Bargatze et al [1985] data set yields persistence-like behaviour when a time resolution of 2.5 minutes was used to test the model for the prediction of the AL index. Unlike the local linear filters, which are driven by the solar wind input without feedback from the observations, the Kalman filter makes use of the observations as and when available to optimally update the model parameters. The update procedure requires the prediction intervals to be long enough so that the forecasts can be used in practice. The time resolution of the data suitable for such forecasting is studied by taking averages over different durations.
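The adaptive identification scheme described here, a linear autoregressive observation equation whose coefficients are tracked by a Kalman filter, can be sketched in scalar form. The model order, noise variances and synthetic data below are illustrative, not those of the cited studies:

```python
import random

def kalman_ar1(ys, q=1e-6, r=0.1):
    """Track the AR(1) coefficient of y_t = a*y_{t-1} + noise with a scalar
    Kalman filter: the state is the coefficient a (slow random walk,
    variance q), and the observation matrix is H = y_{t-1}."""
    a_hat, p = 0.0, 1.0          # initial coefficient estimate and variance
    for y_prev, y in zip(ys, ys[1:]):
        p += q                   # predict: random-walk state model
        h = y_prev
        k = p * h / (h * h * p + r)      # Kalman gain
        a_hat += k * (y - h * a_hat)     # update with the innovation
        p *= (1 - k * h)
    return a_hat

# Synthetic AR(1) series with true coefficient 0.8.
rng = random.Random(1)
ys = [1.0]
for _ in range(2000):
    ys.append(0.8 * ys[-1] + rng.gauss(0, 0.3))
```

The point of the filter over a batch fit is the recursive update: each new observation refines the coefficient estimate immediately, which is what lets such a model adapt as new index values arrive.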
Hossain, Moinul; Muromachi, Yasunori
2012-03-01
The concept of measuring the crash risk for a very short time window in the near future is gaining more practicality due to the recent advancements in the fields of information systems and traffic sensor technology. Although some real-time crash prediction models have already been proposed, they are still primitive in nature and require substantial improvements to be implemented in real life. This manuscript investigates the major shortcomings of the existing models and offers solutions to overcome them with an improved framework and modeling method. It employs a random multinomial logit model to identify the most important predictors as well as the most suitable detector locations from which to acquire data to build such a model. Afterwards, it applies a Bayesian belief net (BBN) to build the real-time crash prediction model. The model has been constructed using high resolution detector data collected from the Shibuya 3 and Shinjuku 4 expressways under the jurisdiction of Tokyo Metropolitan Expressway Company Limited, Japan. It has been specifically built for basic freeway segments and it predicts the chance of formation of a hazardous traffic condition within the next 4-9 min for a particular 250-meter-long road section. The performance evaluation results reflect that at an average threshold value the model is able to successfully classify 66% of the future crashes with a false alarm rate of less than 20%.
Comparison of statistical models for analyzing wheat yield time series.
Michel, Lucie; Makowski, David
2013-01-01
The world's population is predicted to exceed nine billion by 2050 and there is increasing concern about the capability of agriculture to feed such a large population. Foresight studies on food security are frequently based on crop yield trends estimated from yield time series provided by national and regional statistical agencies. Various types of statistical models have been proposed for the analysis of yield time series, but the predictive performances of these models have not yet been evaluated in detail. In this study, we present eight statistical models for analyzing yield time series and compare their ability to predict wheat yield at the national and regional scales, using data provided by the Food and Agriculture Organization of the United Nations and by the French Ministry of Agriculture. The Holt-Winters and dynamic linear models performed equally well, giving the most accurate predictions of wheat yield. However, dynamic linear models have two advantages over Holt-Winters models: they can be used to reconstruct past yield trends retrospectively and to analyze uncertainty. The results obtained with dynamic linear models indicated a stagnation of wheat yields in many countries, but the estimated rate of increase of wheat yield remained above 0.06 t ha⁻¹ year⁻¹ in several countries in Europe, Asia, Africa and America, and the estimated values were highly uncertain for several major wheat producing countries. The rate of yield increase differed considerably between French regions, suggesting that efforts to identify the main causes of yield stagnation should focus on a subnational scale.
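Of the two best-performing models in this comparison, Holt-Winters is the simplest to state. Below is a minimal sketch of Holt's linear (non-seasonal) variant, which maintains an exponentially smoothed level and trend and forecasts their sum; the smoothing constants and the test series are illustrative, not the paper's fitted values:

```python
def holt(ys, alpha=0.3, beta=0.1):
    """Holt's linear method: exponentially smoothed level and trend;
    returns the one-step-ahead forecast, level + trend."""
    level, trend = ys[1], ys[1] - ys[0]   # simple initialization
    for y in ys[2:]:
        prev_level = level
        level = alpha * y + (1 - alpha) * (level + trend)
        trend = beta * (level - prev_level) + (1 - beta) * trend
    return level + trend
```

On a perfectly linear yield series the trend estimate locks onto the true slope, illustrating why such models track gradual yield trends well; the dynamic linear models favoured in the paper add to this an explicit state-space form that supports retrospective smoothing and uncertainty analysis.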
Real-time numerical forecast of global epidemic spreading: case study of 2009 A/H1N1pdm.
Tizzoni, Michele; Bajardi, Paolo; Poletto, Chiara; Ramasco, José J; Balcan, Duygu; Gonçalves, Bruno; Perra, Nicola; Colizza, Vittoria; Vespignani, Alessandro
2012-12-13
Mathematical and computational models for infectious diseases are increasingly used to support public-health decisions; however, their reliability is currently under debate. Real-time forecasts of epidemic spread using data-driven models have been hindered by the technical challenges posed by parameter estimation and validation. Data gathered for the 2009 H1N1 influenza crisis represent an unprecedented opportunity to validate real-time model predictions and define the main success criteria for different approaches. We used the Global Epidemic and Mobility Model to generate stochastic simulations of epidemic spread worldwide, yielding (among other measures) the incidence and seeding events at a daily resolution for 3,362 subpopulations in 220 countries. Using a Monte Carlo Maximum Likelihood analysis, the model provided an estimate of the seasonal transmission potential during the early phase of the H1N1 pandemic and generated ensemble forecasts for the activity peaks in the northern hemisphere in the fall/winter wave. These results were validated against the real-life surveillance data collected in 48 countries, and their robustness assessed by focusing on 1) the peak timing of the pandemic; 2) the level of spatial resolution allowed by the model; and 3) the clinical attack rate and the effectiveness of the vaccine. In addition, we studied the effect of data incompleteness on the prediction reliability. Real-time predictions of the peak timing are found to be in good agreement with the empirical data, showing strong robustness to data that may not be accessible in real time (such as pre-exposure immunity and adherence to vaccination campaigns), but that affect the predictions for the attack rates. The timing and spatial unfolding of the pandemic are critically sensitive to the level of mobility data integrated into the model. 
Our results show that large-scale models can be used to provide valuable real-time forecasts of influenza spreading, but they require high-performance computing. The quality of the forecast depends on the level of data integration, thus stressing the need for high-quality data in population-based models, and of progressive updates of validated available empirical knowledge to inform these models.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Blijderveen, Maarten van; University of Twente, Department of Thermal Engineering, Drienerlolaan 5, 7522 NB Enschede; Bramer, Eddy A.
Highlights: • We model piloted ignition times of wood and plastics. • The model is applied on a packed bed. • When the air flow is above a critical level, no ignition can take place. Abstract: To gain insight in the startup of an incinerator, this article deals with piloted ignition. A newly developed model is described to predict the piloted ignition times of wood, PMMA and PVC. The model is based on the lower flammability limit and the adiabatic flame temperature at this limit. The incoming radiative heat flux, sample thickness and moisture content are some of the variables used. Not only the ignition time can be calculated with the model, but also the mass flux and surface temperature at ignition. The ignition times for softwoods and PMMA are mainly under-predicted. For hardwoods and PVC the predicted ignition times agree well with experimental results. Due to a significant scatter in the experimental data, the mass flux and surface temperature calculated with the model are hard to validate. The model is applied to the startup of a municipal waste incineration plant. For this process a maximum allowable primary air flow is derived. When the primary air flow is above this maximum air flow, no ignition can be obtained.
Thermal and structural analysis of the GOES scan mirror's on orbit performance
NASA Technical Reports Server (NTRS)
Zurmehly, G. E.; Hookman, R. A.
1991-01-01
The on-orbit performance of the GOES satellite's scan mirror has been predicted by means of thermal, structural, and optical models. A simpler-than-conventional thermal model was used to reduce the time required to obtain orbital predictions, and the structural model was used to predict on-earth gravity sag and on-orbit distortions. The transfer of data from the thermal model to the structural model was automated for a given set of thermal nodes and structural grids.
NASA Astrophysics Data System (ADS)
Chardon, Jérémy; Hingray, Benoit; Favre, Anne-Catherine
2016-04-01
Scenarios of surface weather required for impact studies have to be unbiased and adapted to the space and time scales of the considered hydro-systems. Hence, surface weather scenarios obtained directly from global climate models and/or numerical weather prediction models are not really appropriate. Outputs of these models have to be post-processed, which is often carried out with Statistical Downscaling Methods (SDMs). Among these SDMs, approaches based on regression are often applied. For a given station, a regression link can be established between a set of large scale atmospheric predictors and the surface weather variable. These links are then used for the prediction of the latter. However, the physical processes generating surface weather vary in time. This is well known for precipitation, for instance. The most relevant predictors and the regression link are also likely to vary in time. A better prediction skill is thus classically obtained with a seasonal stratification of the data. Another strategy is to identify the most relevant predictor set and establish the regression link from dates that are similar - or analog - to the target date. In practice, these dates can be selected with an analog model. In this study, we explore the possibility of improving the local performance of an analog model - where the analogy is applied to the geopotential heights at 1000 and 500 hPa - using additional local scale predictors for the probabilistic prediction of the Safran precipitation over France. For each prediction day, the prediction is obtained from two GLM regression models - for both the occurrence and the quantity of precipitation - for which predictors and parameters are estimated from the analog dates. Firstly, the resulting combined model noticeably improves the prediction performance by adapting the downscaling link for each prediction day.
Secondly, the predictors selected for a given prediction depend on the large-scale situation and on the region considered. Finally, even with such an adaptive predictor identification, the downscaling link appears to be robust: for a given prediction day, the predictors selected for different locations within a region are similar, and the regression parameters are consistent across the region of interest.
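The two-stage analog-then-regress scheme described above can be sketched in a few lines. This is a hedged toy, not the authors' method: the paper fits GLMs for precipitation occurrence and quantity on the analog dates, whereas this sketch substitutes the analog rain frequency and a one-predictor least-squares fit, and the predictor names (`z500`, `z1000`, `local`) are illustrative placeholders.

```python
import math

def analog_predict(target, library, k=25):
    """Probabilistic precipitation prediction conditioned on analog dates.

    target  : dict of large-scale predictors plus a local-scale predictor
    library : list of (large_scale_predictors, local_predictor, precip) records
    Simplified stand-in for the paper's GLM step: occurrence probability is
    the rain frequency among the analogs; quantity is a least-squares fit of
    precipitation on the local predictor, estimated from the analogs only.
    """
    # 1) rank library days by similarity of the geopotential predictors
    def dist(p):
        return math.hypot(p["z500"] - target["z500"], p["z1000"] - target["z1000"])
    analogs = sorted(library, key=lambda rec: dist(rec[0]))[:k]

    # 2) occurrence: empirical rain frequency among the analog dates
    wet = [rec for rec in analogs if rec[2] > 0.0]
    p_occ = len(wet) / len(analogs)

    # 3) quantity: one-predictor least squares fitted on the wet analog days
    if len(wet) < 2:
        return p_occ, 0.0
    xs = [rec[1] for rec in wet]
    ys = [rec[2] for rec in wet]
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    var = sum((x - mx) ** 2 for x in xs) or 1e-12
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / var
    return p_occ, my + slope * (target["local"] - mx)
```

The point the sketch preserves is that both the downscaling link and its parameters are re-estimated from the analog dates of each prediction day.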
Simulating wildfire spread behavior between two NASA Active Fire data timeframes
NASA Astrophysics Data System (ADS)
Adhikari, B.; Hodza, P.; Xu, C.; Minckley, T. A.
2017-12-01
Although NASA's Active Fire dataset is considered valuable in mapping the spatial distribution and extent of wildfires across the world, the data are only available at approximately 12-hour intervals, creating uncertainties and risks associated with fire spread and behavior between two Visible Infrared Imaging Radiometer Suite (VIIRS) data collection timeframes. Our study seeks to close this information gap for the United States by using the latest Active Fire data (collected, for instance, around 0130 hours) as an ignition source and critical input to a wildfire model, uniquely incorporating forecasted and real-time weather conditions to predict the fire perimeter at the next 12-hour reporting time (i.e., around 1330 hours). The model ingests highly dynamic variables such as fuel moisture, temperature, relative humidity, and wind, and drives a Monte Carlo simulation that samples a range of possible values to evaluate the span of possible wildfire behaviors. The Monte Carlo simulation implemented in this model provides a measure of the relative wildfire risk at various locations based on the number of times those sites are intersected by simulated fire perimeters. Model calibration is achieved using data at the next reporting time (i.e., after 12 hours) to enhance predictive quality at further time steps. While initial results indicate that the calibrated model can predict the overall geometry and direction of wildland fire spread, the model seems to over-predict the sizes of most fire perimeters, possibly due to unaccounted-for fire suppression activities. Nonetheless, the results of this study show great promise in aiding wildland fire tracking, fighting, and risk management.
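The perimeter-counting idea behind the relative-risk measure can be illustrated with a deliberately simplified Monte Carlo sketch. Elliptical burn footprints and uniform parameter draws are assumptions of this toy; the actual model, its fuel and weather inputs, and its spread mechanics are far richer.

```python
import math
import random

def fire_risk_map(ignition, n_runs=500, grid=20, seed=1):
    """Monte Carlo relative-risk map (toy sketch, not the authors' model).
    Each run draws a wind direction/speed and a fuel-dryness factor, grows
    an elliptical burn footprint downwind from the ignition cell, and
    tallies how often each grid cell falls inside a simulated perimeter."""
    rng = random.Random(seed)
    hits = [[0] * grid for _ in range(grid)]
    ix, iy = ignition
    for _ in range(n_runs):
        wdir = rng.uniform(0.0, 2.0 * math.pi)   # wind direction (rad)
        wspd = rng.uniform(1.0, 4.0)             # wind-speed factor
        dry = rng.uniform(0.5, 1.5)              # fuel-moisture factor
        a = wspd * dry                           # downwind semi-axis
        b = 0.5 * a                              # crosswind semi-axis
        cx = ix + 0.5 * a * math.cos(wdir)       # centre shifts downwind
        cy = iy + 0.5 * a * math.sin(wdir)
        for x in range(grid):
            for y in range(grid):
                # rotate into the wind-aligned frame, test ellipse membership
                dx, dy = x - cx, y - cy
                u = dx * math.cos(wdir) + dy * math.sin(wdir)
                v = -dx * math.sin(wdir) + dy * math.cos(wdir)
                if (u / a) ** 2 + (v / b) ** 2 <= 1.0:
                    hits[x][y] += 1
    return [[h / n_runs for h in row] for row in hits]
```

The returned fractions play the role of the relative risk levels the abstract describes: cells intersected by many simulated perimeters score high.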
What Time is Your Sunset? Accounting for Refraction in Sunrise/set Prediction Models
NASA Astrophysics Data System (ADS)
Wilson, Teresa; Bartlett, Jennifer Lynn; Chizek Frouard, Malynda; Hilton, James; Phlips, Alan; Edgar, Roman
2018-01-01
Algorithms that predict sunrise and sunset times currently have an uncertainty of one to four minutes at mid-latitudes (0° - 55° N/S) due to limitations in the atmospheric models they incorporate. At higher latitudes, slight changes in refraction can cause significant discrepancies, including difficulties determining whether the Sun appears to rise or set at all. While the different components of refraction are known, how they affect predictions of sunrise/set has not yet been quantified. A better understanding of the contributions from the temperature profile, pressure, humidity, and aerosols could significantly improve the standard prediction. We present a sunrise/set calculator that interchanges the refraction component by varying the refraction model. We then compared these predictions with data sets of observed rise/set times taken at Mount Wilson Observatory in California, at the University of Alberta in Edmonton, Alberta, and onboard the SS James Franco in the Atlantic. A thorough investigation of the problem requires a more substantial data set of observed rise/set times and corresponding meteorological data from around the world. We have therefore developed a mobile application, Sunrise & Sunset Observer, so that anyone can capture this astronomical and meteorological data with their smartphone video recorder as part of a citizen science project. The Android app for this project is available in the Google Play store. Videos can also be submitted through the project website (riseset.phy.mtu.edu). Data analysis will lead to more complete models that will provide higher-accuracy rise/set predictions to benefit astronomers, navigators, and outdoorsmen everywhere.
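The sensitivity of predicted sunset time to the assumed refraction can be seen from the standard hour-angle geometry. A minimal sketch, assuming the conventional 34' mean horizontal refraction and 16' solar semidiameter; the paper's point is precisely that this refraction term should be a swappable model rather than a constant.

```python
import math

def sunset_hour_angle(lat_deg, dec_deg, refraction_arcmin=34.0):
    """Hour angle of sunset (degrees) for a given refraction assumption.
    Standard geometry: the Sun 'sets' when its centre reaches altitude
    h0 = -(refraction + 16' semidiameter). The conventional 34' mean
    refraction is only an average; real values vary with the atmosphere."""
    h0 = -(refraction_arcmin + 16.0) / 60.0      # horizon altitude, degrees
    lat, dec, alt = map(math.radians, (lat_deg, dec_deg, h0))
    cos_h = (math.sin(alt) - math.sin(lat) * math.sin(dec)) / (
        math.cos(lat) * math.cos(dec))
    return math.degrees(math.acos(max(-1.0, min(1.0, cos_h))))

def refraction_shift_minutes(lat_deg, dec_deg, r1=34.0, r2=44.0):
    """Sunset-time shift (minutes) between two refraction assumptions:
    15 degrees of hour angle correspond to one hour of time."""
    dh = sunset_hour_angle(lat_deg, dec_deg, r2) - sunset_hour_angle(lat_deg, dec_deg, r1)
    return dh / 15.0 * 60.0
```

At mid-latitudes a 10' refraction change already shifts sunset by about a minute, consistent with the one-to-four-minute uncertainty quoted above; near the poles the same change can decide whether the Sun sets at all.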
A time series modeling approach in risk appraisal of violent and sexual recidivism.
Bani-Yaghoub, Majid; Fedoroff, J Paul; Curry, Susan; Amundsen, David E
2010-10-01
For over half a century, various clinical and actuarial methods have been employed to assess the likelihood of violent recidivism. Yet there is a need for new methods that can improve the accuracy of recidivism predictions. This study proposes a new time series modeling approach that generates high levels of predictive accuracy over short and long periods of time. The proposed approach outperformed two widely used actuarial instruments (i.e., the Violence Risk Appraisal Guide and the Sex Offender Risk Appraisal Guide). Furthermore, analysis of temporal risk variations based on specific time series models can add valuable information to the risk assessment and management of violent offenders.
NASA Astrophysics Data System (ADS)
Habibi, H.; Norouzi, A.; Habib, A.; Seo, D. J.
2016-12-01
To produce accurate predictions of flooding in urban areas, it is necessary to model both natural channels and storm drain networks. While many urban hydraulic models of varying sophistication exist, most are not practical for real-time application over large urban areas. On the other hand, most distributed hydrologic models developed for real-time applications lack the ability to explicitly simulate storm drains. In this work, we develop a storm drain model that can be coupled with distributed hydrologic models, such as the National Weather Service Hydrology Laboratory's Distributed Hydrologic Model, for real-time flash flood prediction in large urban areas. The aim is to improve prediction and to advance understanding of the integrated response of natural channels and storm drains to rainfall events of varying magnitude and spatiotemporal extent in urban catchments of varying sizes. The initial study area is the Johnson Creek Catchment (40.1 km²) in the City of Arlington, TX. For observed rainfall, high-resolution (500 m, 1 min) precipitation data from the Dallas-Fort Worth Demonstration Network of the Collaborative Adaptive Sensing of the Atmosphere radars are used.
Carvalho, Fabiana M.; Chaim, Khallil T.; Sanchez, Tiago A.; de Araujo, Draulio B.
2016-01-01
The updating of prospective internal models is necessary to accurately predict future observations. Uncertainty-driven internal model updating has been studied using a variety of perceptual paradigms, revealing engagement of frontal and parietal areas. In a distinct literature, studies on temporal expectations have characterized a time-perception network, which relies on temporal orienting of attention. The updating of prospective internal models is, however, highly dependent on temporal attention, since temporal attention must be reoriented according to current environmental demands. In this study, we used functional magnetic resonance imaging (fMRI) to evaluate to what extent the continuous manipulation of temporal prediction recruits update-related areas and the time-perception network. We developed an exogenous temporal task that combines rhythm cueing and time-to-contact principles to generate implicit temporal expectation. Two patterns of motion were created: periodic (simple harmonic oscillation) and non-periodic (harmonic oscillation with variable acceleration). We found that non-periodic motion engaged the exogenous temporal orienting network, which includes the ventral premotor and inferior parietal cortices and the cerebellum, as well as the presupplementary motor area, which has previously been implicated in internal model updating, and the motion-sensitive area MT+. Interestingly, we found a right-hemisphere preponderance, suggesting the engagement of explicit timing mechanisms. We also show that the periodic motion condition, when compared to the non-periodic motion, activated a particular subset of default-mode network (DMN) midline areas, including the left dorsomedial prefrontal cortex (DMPFC), anterior cingulate cortex (ACC), and bilateral posterior cingulate cortex/precuneus (PCC/PC).
This suggests that the DMN plays a role in processing contextually expected information and supports recent evidence that the DMN may reflect the validation of prospective internal models and predictive control. Taken together, our findings suggest that continuous manipulation of temporal predictions engages representations of temporal prediction as well as task-independent updating of internal models. PMID:27313526
Klerman, Elizabeth B; Beckett, Scott A; Landrigan, Christopher P
2016-09-13
In 2011 the U.S. Accreditation Council for Graduate Medical Education began limiting first-year resident physicians (interns) to shifts of ≤16 consecutive hours. Controversy persists regarding the effectiveness of this policy for reducing errors and accidents while promoting education and patient care. Using a mathematical model of the effects of circadian rhythms and length of time awake on objective performance and subjective alertness, we quantitatively compared predictions for traditional intern schedules to those that limit work to ≤16 consecutive hours. We simulated two traditional schedules and three novel schedules using the mathematical model. The traditional schedules had extended-duration work shifts (≥24 h) with overnight work every second shift (including every third night; Q3) or every third shift (including every fourth night; Q4); the novel schedules comprised two cross-cover night-team schedules (XC-V1 and XC-V2) and a Rapid Cycle Rotation (RCR) schedule. Predicted objective performance and subjective alertness for each work shift were computed for each individual's schedule within a team and then combined for the team as a whole. Our primary outcome was the amount of time within a work shift during which a team's model-predicted objective performance and subjective alertness were lower than that expected after 16 or 24 h of continuous wakefulness in an otherwise rested individual. The model predicted fewer hours with poor performance and alertness, especially during night-time work hours, for all three novel schedules than for either the traditional Q3 or Q4 schedule. Three proposed schedules that eliminate extended shifts may thus improve performance and alertness compared with traditional Q3 or Q4 schedules. Predicted times of worse performance and alertness were at night, which is also a time when supervision of trainees is lower.
Mathematical modeling provides a quantitative comparison approach with potential to aid residency programs in schedule analysis and redesign.
CRAFFT: An Activity Prediction Model based on Bayesian Networks
Nazerfard, Ehsan; Cook, Diane J.
2014-01-01
Recent advances in the areas of pervasive computing, data mining, and machine learning offer unique opportunities to provide health monitoring and assistance for individuals facing difficulties in living independently in their homes. Several components have to work together to provide health monitoring for smart home residents, including, but not limited to, activity recognition, activity discovery, activity prediction, and a prompting system. Compared to the significant research done to discover and recognize activities, less attention has been given to predicting the future activities that the resident is likely to perform. Activity prediction components can play a major role in the design of a smart home. For instance, by taking advantage of an activity prediction module, a smart home can learn context-aware rules to prompt individuals to initiate important activities. In this paper, we propose an activity prediction model using Bayesian networks together with a novel two-step inference process to predict both the next activity features and the next activity label. We also propose an approach to predict the start time of the next activity, based on modeling the relative start time of the predicted activity using the continuous normal distribution and outlier detection. To validate our proposed models, we used real data collected from physical smart environments. PMID:25937847
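A toy stand-in for the two-step idea: the paper's first step is Bayesian-network inference over activity features and labels, for which a bare conditional table plays the role here; the start-time step follows the abstract's normal-distribution-plus-outlier-detection recipe. Activity names and thresholds are illustrative.

```python
import statistics

def predict_next(current, transitions):
    """Step 1 (toy stand-in for Bayesian-network inference): pick the most
    probable next activity label from a conditional probability table."""
    dist = transitions[current]
    return max(dist, key=dist.get)

def predict_start_time(deltas, z_cut=2.0):
    """Step 2: model the relative start time (e.g. minutes after the current
    activity) as a normal distribution, discarding observations beyond
    z_cut standard deviations, as the abstract's outlier-detection step
    suggests, then return the mean of the retained values."""
    mu = statistics.mean(deltas)
    sd = statistics.pstdev(deltas) or 1.0
    kept = [d for d in deltas if abs(d - mu) <= z_cut * sd]
    return statistics.mean(kept)
```

The outlier step matters: a single anomalous delay would otherwise drag the predicted start time far from the typical value.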
Modeling and Predicting the Stress Relaxation of Composites with Short and Randomly Oriented Fibers
Obaid, Numaira; Sain, Mohini
2017-01-01
The addition of short fibers has been experimentally observed to slow the stress relaxation of viscoelastic polymers, producing a change in the relaxation time constant. Our recent study attributed this effect of fibers on stress relaxation behavior to the interfacial shear stress transfer at the fiber-matrix interface. This model explained the effect of fiber addition on stress relaxation without the need to postulate structural changes at the interface. In our previous study, we developed an analytical model for the effect of fully aligned short fibers, and the model predictions were successfully compared to finite element simulations. However, in most industrial applications of short-fiber composites, fibers are not aligned, and hence it is necessary to examine the time dependence of viscoelastic polymers containing randomly oriented short fibers. In this study, we propose an analytical model to predict the stress relaxation behavior of short-fiber composites where the fibers are randomly oriented. The model predictions were compared to results obtained from Monte Carlo finite element simulations, and good agreement between the two was observed. The analytical model provides an excellent tool to accurately predict the stress relaxation behavior of randomly oriented short-fiber composites. PMID:29053601
Modelling and Predicting Backstroke Start Performance Using Non-Linear and Linear Models.
de Jesus, Karla; Ayala, Helon V H; de Jesus, Kelly; Coelho, Leandro Dos S; Medeiros, Alexandre I A; Abraldes, José A; Vaz, Mário A P; Fernandes, Ricardo J; Vilas-Boas, João Paulo
2018-03-01
Our aim was to compare non-linear and linear mathematical model responses for backstroke start performance prediction. Ten swimmers randomly completed eight 15 m backstroke starts with feet over the wedge: four with hands on the highest horizontal handgrip and four on the vertical handgrip. Swimmers were videotaped using a dual-media camera set-up, with the starts performed over an instrumented block with four force plates. Artificial neural networks were applied to predict 5 m start time from kinematic and kinetic variables, with accuracy assessed by the mean absolute percentage error. Artificial neural networks predicted start time more robustly than the linear model with respect to the change from the training to the validation dataset for the vertical handgrip (3.95 ± 1.67 vs. 5.92 ± 3.27%). Artificial neural networks obtained a smaller mean absolute percentage error than the linear model for the horizontal (0.43 ± 0.19 vs. 0.98 ± 0.19%) and vertical handgrip (0.45 ± 0.19 vs. 1.38 ± 0.30%) using all input data. The best artificial neural network validation revealed a smaller mean absolute error than the linear model for the horizontal (0.007 vs. 0.04 s) and vertical handgrip (0.01 vs. 0.03 s). Artificial neural networks should be used for backstroke 5 m start time prediction, given the quite small differences among elite-level performances.
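The accuracy measure used throughout this comparison, and a linear baseline of the kind the networks are compared against, are easy to reproduce. A sketch under stated assumptions: a one-predictor least-squares fit stands in for the linear model, and the ANN itself (e.g. an MLP over the kinematic and kinetic inputs) is omitted.

```python
def mape(actual, predicted):
    """Mean absolute percentage error, the accuracy measure in the abstract."""
    return 100.0 * sum(abs((a - p) / a) for a, p in zip(actual, predicted)) / len(actual)

def linear_baseline(xs, ys):
    """One-predictor least-squares baseline for 5 m start time; a neural
    network would replace this with a non-linear mapping of the inputs.
    Returns a prediction function."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    var = sum((x - mx) ** 2 for x in xs) or 1e-12
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / var
    return lambda x: my + slope * (x - mx)
```

Comparing the MAPE of such a baseline against a non-linear model on held-out data mirrors the train/validation comparison reported above.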
A model of interval timing by neural integration
Simen, Patrick; Balci, Fuat; deSouza, Laura; Cohen, Jonathan D.; Holmes, Philip
2011-01-01
We show that simple assumptions about neural processing lead to a model of interval timing as a temporal integration process, in which a noisy firing-rate representation of time rises linearly on average toward a response threshold over the course of an interval. Our assumptions include: that neural spike trains are approximately independent Poisson processes; that correlations among them can be largely cancelled by balancing excitation and inhibition; that neural populations can act as integrators; and that the objective of timed behavior is maximal accuracy and minimal variance. The model accounts for a variety of physiological and behavioral findings in rodents, monkeys and humans, including ramping firing rates between the onset of reward-predicting cues and the receipt of delayed rewards, and universally scale-invariant response time distributions in interval timing tasks. It furthermore makes specific, well-supported predictions about the skewness of these distributions, a feature of timing data that is usually ignored. The model also incorporates a rapid (potentially one-shot) duration-learning procedure. Human behavioral data support the learning rule’s predictions regarding learning speed in sequences of timed responses. These results suggest that simple, integration-based models should play as prominent a role in interval timing theory as they do in theories of perceptual decision making, and that a common neural mechanism may underlie both types of behavior. PMID:21697374
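The core mechanism, a noisy linear ramp to threshold, can be sketched as follows. Parameter values are illustrative; the Poisson-like noise (standard deviation growing as the square root of the drift) is what yields the roughly constant coefficient of variation across target intervals, i.e. the scale invariance the model predicts.

```python
import math
import random
import statistics

def timed_response(drift, k=0.1, theta=1.0, dt=0.001, rng=random):
    """One trial of the integrator: a noisy firing-rate variable ramps at
    'drift' per second toward threshold theta; the crossing time is the
    produced interval. To time an interval T, learning (potentially
    one-shot) sets drift = theta / T."""
    x, t = 0.0, 0.0
    sd = k * math.sqrt(drift)          # balanced Poisson-like noise scale
    while x < theta:
        x += drift * dt + sd * math.sqrt(dt) * rng.gauss(0.0, 1.0)
        t += dt
    return t

def cv_of_interval(target_t, n=200, seed=0):
    """Coefficient of variation of produced times for a target interval.
    Scalar timing predicts this is roughly independent of target_t."""
    rng = random.Random(seed)
    ts = [timed_response(1.0 / target_t, rng=rng) for _ in range(n)]
    return statistics.pstdev(ts) / statistics.mean(ts)
```

With drift-proportional noise variance, the first-passage CV works out to k divided by the square root of the threshold, independent of the timed duration.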
Evaluation of the North American Multi-Model Ensemble System for Monthly and Seasonal Prediction
NASA Astrophysics Data System (ADS)
Zhang, Q.
2014-12-01
Since August 2011, the real-time seasonal forecasts of the U.S. National Multi-Model Ensemble (NMME) have been made on the 8th of each month by the NCEP Climate Prediction Center (CPC). The participating models in the first year of the real-time NMME forecast were NCEP CFSv1 and CFSv2, GFDL CM2.2, NCAR/U. Miami/COLA CCSM3, NASA GEOS5, and IRI ECHAM-a and ECHAM-f. Two Canadian coupled models, CMC CanCM3 and CanCM4, joined in the second year, while CFSv1 and the IRI models dropped out. The NMME team at CPC collects monthly means of three variables (precipitation, 2 m temperature, and sea surface temperature) from each modeling center on a 1x1 degree global grid, removes systematic errors, and forms the grand ensemble mean with equal weight for each model mean and the probability forecast with equal weight for each member of each model. This provides the NMME forecast, locked into the schedule of the CPC operational seasonal and monthly outlooks. As an evaluation of skill, basic verification metrics of the seasonal and monthly NMME predictions are calculated for both deterministic and probabilistic forecasts over the 3-year real-time period (August 2011 to July 2014) and the 30-year retrospective forecasts (1982-2011), for the individual models as well as the NMME ensemble. The motivation of this study is to provide skill benchmarks for future improvements of the NMME seasonal and monthly prediction system. We also want to establish whether the real-time and hindcast periods (used for bias correction in real time) are consistent. The experimental phase I of the project already supplies routine guidance to users of the NMME forecasts.
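The combination rule described above (systematic-error removal, then equal weight per model for the deterministic mean but equal weight per member for probabilities) reduces to a few lines. Model names, member counts, and the zero-anomaly threshold in this sketch are illustrative.

```python
def nmme_grand_ensemble(forecasts, hindcast_bias):
    """Equal-weight NMME-style combination (sketch of the procedure the
    abstract describes). 'forecasts' maps model name -> list of member
    forecasts; 'hindcast_bias' maps model name -> that model's systematic
    error, estimated from the hindcast period and removed before combining."""
    corrected = {m: [x - hindcast_bias[m] for x in mem] for m, mem in forecasts.items()}
    # deterministic forecast: each model's ensemble mean, models weighted equally
    model_means = [sum(mem) / len(mem) for mem in corrected.values()]
    det = sum(model_means) / len(model_means)
    # probabilistic forecast: every member of every model counts equally
    members = [x for mem in corrected.values() for x in mem]
    p_above = sum(1 for x in members if x > 0.0) / len(members)
    return det, p_above
```

Note the deliberate asymmetry: a model with more members does not get more weight in the deterministic mean, but its members do carry more of the probability forecast, exactly as stated in the abstract.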
Very-short-term wind power prediction by a hybrid model with single- and multi-step approaches
NASA Astrophysics Data System (ADS)
Mohammed, E.; Wang, S.; Yu, J.
2017-05-01
Very-short-term wind power prediction (VSTWPP) plays an essential role in the operation of electric power systems. This paper aims at improving and applying a hybrid VSTWPP method based on historical data. The hybrid method combines multiple linear regression and least squares (MLR&LS) and is intended to reduce prediction errors. The predicted values are obtained through two sub-processes: 1) transform the time series of actual wind power into the power ratio, and then predict the power ratio; 2) use the predicted power ratio to predict the wind power. The proposed method supports two prediction approaches: single-step prediction (SSP) and multi-step prediction (MSP). The hybrid method is tested against the auto-regressive moving average (ARMA) model by comparing predicted values and errors. Its validity is confirmed in terms of error analysis using the probability density function (PDF), mean absolute percent error (MAPE), and mean square error (MSE). Meanwhile, comparison of the correlation coefficients between the actual and predicted values for different prediction times and windows confirms that the MSP approach using the hybrid model is the most accurate, compared with the SSP approach and ARMA. The MLR&LS method is accurate and promising for solving problems in WPP.
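A simplified stand-in for the two sub-processes: work in power-ratio space, predict there, then map back to power. Here the MLR&LS step is approximated by a small least-squares autoregression solved via the normal equations, and multi-step prediction is recursive; the authors' actual regression structure may differ.

```python
def fit_ar_ls(series, order=2):
    """Least-squares fit of a linear autoregression with intercept on the
    power-ratio series (multiple linear regression via normal equations)."""
    rows = [series[i - order:i][::-1] + [1.0] for i in range(order, len(series))]
    y = series[order:]
    k = order + 1
    ata = [[sum(r[i] * r[j] for r in rows) for j in range(k)] for i in range(k)]
    aty = [sum(r[i] * t for r, t in zip(rows, y)) for i in range(k)]
    for c in range(k):                     # Gauss-Jordan with partial pivoting
        p = max(range(c, k), key=lambda r_: abs(ata[r_][c]))
        ata[c], ata[p] = ata[p], ata[c]
        aty[c], aty[p] = aty[p], aty[c]
        for r_ in range(k):
            if r_ != c and ata[c][c]:
                f = ata[r_][c] / ata[c][c]
                ata[r_] = [a - f * b for a, b in zip(ata[r_], ata[c])]
                aty[r_] -= f * aty[c]
    return [aty[i] / ata[i][i] for i in range(k)]

def predict_power(history_mw, capacity_mw, steps=1, order=2):
    """SSP (steps=1) or MSP (steps>1): transform power to the power ratio
    (power / rated capacity), predict in ratio space feeding predictions
    back in recursively, then map back to MW."""
    ratios = [p / capacity_mw for p in history_mw]
    beta = fit_ar_ls(ratios, order)
    for _ in range(steps):
        lags = ratios[-order:][::-1]
        ratios.append(sum(b * l for b, l in zip(beta, lags)) + beta[-1])
    return [r * capacity_mw for r in ratios[-steps:]]
```

Working in ratio space keeps the regression target bounded and dimensionless, which is the practical motivation for sub-process 1.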
Calculation of precise firing statistics in a neural network model
NASA Astrophysics Data System (ADS)
Cho, Myoung Won
2017-08-01
A precise prediction of neural firing dynamics is requisite to understand the function of, and the learning process in, a biological neural network that works depending on exact spike timings. The prediction of firing statistics is fundamentally a delicate many-body problem, because the firing probability of a neuron at a given time is determined by the summation over all effects from past firing states. A neural network model with the Feynman path integral formulation was recently introduced. In this paper, we present several methods to calculate firing statistics in the model. We apply the methods to some cases and compare the theoretical predictions with simulation results.
McGinitie, Teague M; Ebrahimi-Najafabadi, Heshmatollah; Harynuk, James J
2014-02-21
A new method for calibrating thermodynamic data to be used in the prediction of analyte retention times is presented. The method allows thermodynamic data collected on one column to be used in making predictions across columns of the same stationary phase but with varying geometries. This calibration is essential, as slight variations in column inner diameter and stationary-phase film thickness, between columns or as a column ages, will adversely affect the accuracy of predictions. The calibration technique uses a Grob standard mixture along with a Nelder-Mead simplex algorithm and a previously developed three-parameter thermodynamic model of GC retention times to estimate both the inner diameter and the stationary-phase film thickness. The calibration method is highly successful, with the predicted retention times for a set of alkanes, ketones, and alcohols having an average error of 1.6 s across three columns. Copyright © 2014 Elsevier B.V. All rights reserved.
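The optimization step can be sketched as follows. Both the retention model and its coefficients below are hypothetical stand-ins (the paper uses a three-parameter thermodynamic retention model and a Grob standard mixture); only the structure is the point: estimate column inner diameter and film thickness by Nelder-Mead simplex on the retention-time mismatch.

```python
def nelder_mead(f, x0, step=0.1, iters=200):
    """Minimal Nelder-Mead simplex minimizer. In practice one would call
    scipy.optimize.minimize(..., method="Nelder-Mead"); this compact
    version keeps the sketch self-contained."""
    n = len(x0)
    simplex = [list(x0)] + [
        [x0[i] + (step if i == j else 0.0) for i in range(n)] for j in range(n)
    ]
    for _ in range(iters):
        simplex.sort(key=f)
        best, worst = simplex[0], simplex[-1]
        centroid = [sum(p[i] for p in simplex[:-1]) / n for i in range(n)]
        refl = [2.0 * centroid[i] - worst[i] for i in range(n)]            # reflect
        if f(refl) < f(best):
            expd = [3.0 * centroid[i] - 2.0 * worst[i] for i in range(n)]  # expand
            simplex[-1] = expd if f(expd) < f(refl) else refl
        elif f(refl) < f(simplex[-2]):
            simplex[-1] = refl
        else:
            contr = [0.5 * (centroid[i] + worst[i]) for i in range(n)]     # contract
            if f(contr) < f(worst):
                simplex[-1] = contr
            else:                                                          # shrink
                simplex = [best] + [
                    [0.5 * (best[i] + p[i]) for i in range(n)] for p in simplex[1:]
                ]
    return min(simplex, key=f)

# Hypothetical two-coefficient retention model for this sketch (NOT the
# paper's thermodynamic model): t = a*df + b/d per calibration compound.
def toy_retention(d, df, coeffs):
    a, b = coeffs
    return a * df + b / d

def calibrate(observed, retention_model, guess):
    """Estimate inner diameter d and film thickness df by minimizing the
    squared mismatch between modeled and observed retention times of a
    calibration mixture (the role the Grob standard plays in the paper)."""
    def sse(p):
        d, df = p
        if d <= 0.0:
            return float("inf")
        return sum((retention_model(d, df, c) - t) ** 2 for c, t in observed)
    return nelder_mead(sse, guess)
```

Because the objective only measures retention-time mismatch, the approach needs enough chemically distinct calibration compounds for the two geometric parameters to be separately identifiable.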
DOE Office of Scientific and Technical Information (OSTI.GOV)
Puskar, Joseph David; Quintana, Michael A.; Sorensen, Neil Robert
A program is underway at Sandia National Laboratories to predict long-term reliability of photovoltaic (PV) systems. The vehicle for the reliability predictions is a Reliability Block Diagram (RBD), which models system behavior. Because this model is based mainly on field failure and repair times, it can be used to predict current reliability, but it cannot currently be used to accurately predict lifetime. In order to be truly predictive, physics-informed degradation processes and failure mechanisms need to be included in the model. This paper describes accelerated life testing of metal foil tapes used in thin-film PV modules, and how tape joint degradation, a possible failure mode, can be incorporated into the model.
Aboagye-Sarfo, Patrick; Mai, Qun; Sanfilippo, Frank M; Preen, David B; Stewart, Louise M; Fatovich, Daniel M
2015-10-01
To develop multivariate vector-ARMA (VARMA) forecast models for predicting emergency department (ED) demand in Western Australia (WA) and compare them to the benchmark univariate autoregressive moving average (ARMA) and Winters' models. Seven-year monthly WA state-wide public hospital ED presentation data from 2006/07 to 2012/13 were modelled. Graphical and VARMA modelling methods were used for descriptive analysis and model fitting. The VARMA models were compared to the benchmark univariate ARMA and Winters' models to determine their accuracy to predict ED demand. The best models were evaluated by using error correction methods for accuracy. Descriptive analysis of all the dependent variables showed an increasing pattern of ED use with seasonal trends over time. The VARMA models provided a more precise and accurate forecast with smaller confidence intervals and better measures of accuracy in predicting ED demand in WA than the ARMA and Winters' method. VARMA models are a reliable forecasting method to predict ED demand for strategic planning and resource allocation. While the ARMA models are a closely competing alternative, they under-estimated future ED demand. Copyright © 2015 Elsevier Inc. All rights reserved.
Analysis of seismograms from a downhole array in sediments near San Francisco Bay
Joyner, William B.; Warrick, Richard E.; Oliver, Adolph A.
1976-01-01
A four-level downhole array of three-component instruments was established on the southwest shore of San Francisco Bay to monitor the effect of the sediments on low-amplitude seismic ground motion. The deepest instrument is at a depth of 186 meters, two meters below the top of the Franciscan bedrock. Earthquake data from regional distances (29 km ≤ Δ ≤ 485 km) over a wide range of azimuths are compared with the predictions of a simple plane-layered model with material properties independently determined. Spectral ratios between the surface and bedrock computed for the one horizontal component of motion that was analyzed agree rather well with the model predictions; the model predicts the frequencies of the first three peaks within 10 percent in most cases and the height of the peaks within 50 percent in most cases. Surface time histories computed from the theoretical model predict the time variations of amplitude and frequency content reasonably well, but correlations of individual cycles cannot be made between observed and predicted traces.
Hanuschkin, Alexander; Kunkel, Susanne; Helias, Moritz; Morrison, Abigail; Diesmann, Markus
2010-01-01
Traditionally, event-driven simulations have been limited to the very restricted class of neuronal models for which the timing of future spikes can be expressed in closed form. Recently, the class of models that is amenable to event-driven simulation has been extended by the development of techniques to accurately calculate firing times for some integrate-and-fire neuron models that do not enable the prediction of future spikes in closed form. The motivation of this development is the general perception that time-driven simulations are imprecise. Here, we demonstrate that a globally time-driven scheme can calculate firing times that cannot be discriminated from those calculated by an event-driven implementation of the same model; moreover, the time-driven scheme incurs lower computational costs. The key insight is that time-driven methods are based on identifying a threshold crossing in the recent past, which can be implemented by a much simpler algorithm than the techniques for predicting future threshold crossings that are necessary for event-driven approaches. As run time is dominated by the cost of the operations performed at each incoming spike, which includes spike prediction in the case of event-driven simulation and retrospective detection in the case of time-driven simulation, the simple time-driven algorithm outperforms the event-driven approaches. Additionally, our method is generally applicable to all commonly used integrate-and-fire neuronal models; we show that a non-linear model employing a standard adaptive solver can reproduce a reference spike train with a high degree of precision. PMID:21031031
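The paper's key insight, that a time-driven scheme only needs to detect a threshold crossing in the recent past, can be illustrated with a leaky integrate-and-fire neuron on a fixed grid; parameter values are illustrative. After a step carries the membrane potential past threshold, the crossing time is recovered retrospectively by linear interpolation inside that step, a far simpler operation than predicting future crossings.

```python
def lif_spike_times(i_ext, t_end, dt=1e-4, tau=0.02, v_th=1.0):
    """Globally time-driven leaky integrate-and-fire simulation:
    dv/dt = -v/tau + i_ext, forward-Euler on a fixed grid. Spike times are
    located *after the fact* by interpolating within the step in which the
    threshold was crossed (retrospective detection)."""
    v, spikes = 0.0, []
    for k in range(int(t_end / dt)):
        v_prev = v
        v += dt * (-v / tau + i_ext)               # membrane update
        if v >= v_th:                              # crossing is in the recent past
            frac = (v_th - v_prev) / (v - v_prev)  # interpolate inside the step
            spikes.append((k + frac) * dt)
            v = 0.0                                # reset after the spike
    return spikes
```

For constant drive the analytic first-spike time is -tau*ln(1 - v_th/(i_ext*tau)), so the retrospective interpolation can be checked directly against the closed form.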
USDA-ARS?s Scientific Manuscript database
AnnAGNPS (Annualized Agricultural Non-Point Source Pollution Model) is a system of computer models developed to predict non-point source pollutant loadings within agricultural watersheds. It contains a daily time step distributed parameter continuous simulation surface runoff model designed to assis...
Gruginskie, Lúcia Adriana Dos Santos; Vaccaro, Guilherme Luís Roehe
2018-01-01
The quality of the judicial system of a country can be gauged by the overall length of lawsuits, or the lead time. When the lead time is excessive, a country's economy can be affected, leading to the adoption of measures such as the creation of the Saturn Center in Europe. Although there are performance indicators to measure the lead time of lawsuits, the analysis and fitting of prediction models are still underdeveloped themes in the literature. To contribute to this subject, this article compares different prediction models according to their accuracy, sensitivity, specificity, precision, and F1 measure. The database used was from TRF4 (the Tribunal Regional Federal da 4a Região), a federal court in southern Brazil, corresponding to the 2nd Instance civil lawsuits completed in 2016. The models were fitted using support vector machine, naive Bayes, random forest, and neural network approaches with categorical predictor variables. The lead time of the 2nd Instance judgment was selected as the response variable, measured in days and categorized in bands. The comparison among the models showed that the support vector machine and random forest approaches produced measurements superior to those of the other models. The models were evaluated using k-fold cross-validation.
Predicting Loss-of-Control Boundaries Toward a Piloting Aid
NASA Technical Reports Server (NTRS)
Barlow, Jonathan; Stepanyan, Vahram; Krishnakumar, Kalmanje
2012-01-01
This work presents an approach to predicting loss-of-control with the goal of providing the pilot a decision aid focused on keeping the pilot's control action within predicted loss-of-control boundaries. The predictive architecture combines quantitative loss-of-control boundaries, a data-based predictive control boundary estimation algorithm, and an adaptive prediction method that estimates Markov model parameters in real time. The data-based boundary estimation algorithm estimates the boundary of a safe set of control inputs that will keep the aircraft within the loss-of-control boundaries for a specified time horizon. The adaptive prediction method generates estimates of the system Markov parameters, which are used by the data-based boundary estimation algorithm. The combined algorithm is applied to a nonlinear generic transport aircraft to illustrate the features of the architecture.
A systematic review of models to predict recruitment to multicentre clinical trials.
Barnard, Katharine D; Dent, Louise; Cook, Andrew
2010-07-06
Less than one third of publicly funded trials manage to recruit according to their original plan, often resulting in requests for additional funding and/or time extensions. The aim was to identify models that might be useful to a major public funder of randomised controlled trials when estimating likely time requirements for recruiting trial participants. The requirements of a useful model were identified as: usability, basis in experience, ability to reflect time trends, accounting for centre recruitment, and contribution to a commissioning decision. A systematic review of English-language articles was conducted using MEDLINE and EMBASE. Search terms included: randomised controlled trial, patient, accrual, predict, enroll, models, statistical; Bayes Theorem; Decision Theory; Monte Carlo Method; and Poisson. Only studies discussing prediction of recruitment to trials using a modelling approach were included. Information was extracted from articles by one author, and checked by a second, using a pre-defined form. Out of 326 identified abstracts, only 8 met all the inclusion criteria. These 8 studies discussed five major classes of model: the unconditional model, the conditional model, the Poisson model, Bayesian models, and Monte Carlo simulation of Markov models. None of these meets all the pre-identified needs of the funder. To meet the needs of a number of research programmes, a new model is required as a matter of importance. Any model chosen should be validated against both retrospective and prospective data, to ensure the predictions it gives are superior to those currently used.
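Of the five model classes, the Poisson model is the easiest to sketch: treat each centre as an independent Poisson recruiter, so monthly totals are Poisson with the summed rate, and simulate the time needed to reach the target. Rates and targets below are illustrative, and the review's actual models are richer (e.g. Bayesian priors on the rates).

```python
import math
import random

def recruitment_time_quantile(target_n, centre_rates, q=0.8, sims=2000, seed=42):
    """Poisson recruitment model: each centre recruits as an independent
    Poisson process (patients per month), so the combined monthly total is
    Poisson with the summed rate. Returns the q-quantile of the number of
    months needed to reach target_n, via Monte Carlo simulation."""
    rng = random.Random(seed)
    lam = sum(centre_rates)

    def poisson(l):                 # Knuth's method; fine for modest rates
        limit, k, p = math.exp(-l), 0, 1.0
        while True:
            p *= rng.random()
            if p <= limit:
                return k
            k += 1

    months = []
    for _ in range(sims):
        total, m = 0, 0
        while total < target_n:
            total += poisson(lam)   # one month's recruits across all centres
            m += 1
        months.append(m)
    months.sort()
    return months[int(q * sims)]
```

Reporting an upper quantile rather than the mean is what makes such a model useful to a funder: it answers "how long might recruitment plausibly take", not just "how long on average".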
NASA Astrophysics Data System (ADS)
Shen, B.; Tao, W.; Atlas, R.
2008-12-01
Very Severe Cyclonic Storm Nargis, the deadliest named tropical cyclone (TC) in the North Indian Ocean Basin, devastated Burma (Myanmar) in May 2008, causing tremendous damage and numerous fatalities. An increased lead time in the prediction of TC Nargis would have increased the warning time and may therefore have saved lives and reduced economic damage. Recent advances in high-resolution global models and supercomputers have shown the potential for improving TC track and intensity forecasts, presumably by improving multi-scale simulations. The key but challenging questions to be answered include: (1) whether, and how realistically in terms of timing, location, and general TC structure, the global mesoscale model (GMM) can simulate TC genesis, and (2) under what conditions the model can extend the lead time of TC genesis forecasts. In this study, we focus on genesis prediction for TCs in the Indian Ocean with the GMM. Preliminary real-data simulations show that the initial formation and intensity variations of TC Nargis can be realistically predicted at a lead time of up to 5 days. These simulations also suggest that accurate representations of a westerly wind burst (WWB) and an equatorial trough, associated with monsoon circulations and/or a Madden-Julian Oscillation (MJO), are important for predicting the formation of this kind of TC. In addition to the WWB and equatorial trough, other favorable environmental conditions will be examined, including enhanced monsoonal circulation, upper-level outflow, low- and middle-level moistening, and surface fluxes.
Model for small arms fire muzzle blast wave propagation in air
NASA Astrophysics Data System (ADS)
Aguilar, Juan R.; Desai, Sachi V.
2011-11-01
Accurate modeling of small firearms muzzle blast wave propagation in the far field is critical to predicting sound pressure levels, impulse durations, and rise times as functions of propagation distance. This task is relevant to a number of military applications, including the determination of human response to blast noise, gunfire detection and localization, and gun suppressor design. Herein, a time-domain model to predict small arms fire muzzle blast wave propagation is introduced. The model implements a Friedlander wave with finite rise time that diverges spherically from the gun muzzle. Additionally, the effects on the blast waveform of thermoviscous and molecular relaxational processes, which are associated with atmospheric absorption of sound, were incorporated in the model. Atmospheric absorption of blast waves is implemented using a time-domain recursive formula obtained from numerical integration of the corresponding differential equations using a Crank-Nicolson finite difference scheme. Theoretical predictions from our model were compared to previously recorded real-world data of muzzle blast wave signatures obtained by shooting a set of different sniper weapons of varying calibers. Recordings containing gunfire acoustical signatures were taken at distances between 100 and 600 meters from the gun muzzle. Results show that the predicted blast wave slope and exponential decay agree well with the measured data. Analysis also reveals the persistence of an oscillatory phenomenon after the blast overpressure in the recorded waveforms.
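The classical Friedlander waveform is standard; a minimal sketch follows, in which the finite-rise onset factor and the 1/r spherical-divergence scaling are illustrative assumptions (the paper's exact formulation, and its absorption filter, may differ):

```python
import math

def friedlander(t, p_peak, t_pos, t_rise=0.0):
    """Friedlander overpressure. With t_rise == 0 this is the classical
    form p(t) = p_peak*(1 - t/t_pos)*exp(-t/t_pos); a finite rise time is
    modeled here with a (1 - exp(-t/t_rise)) onset factor (an assumption)."""
    if t < 0:
        return 0.0
    p = p_peak * (1.0 - t / t_pos) * math.exp(-t / t_pos)
    if t_rise > 0.0:
        p *= 1.0 - math.exp(-t / t_rise)
    return p

def spherical_peak(p_ref, r_ref, r):
    """Peak overpressure under spherical divergence alone: p ~ 1/r scaling
    from a reference distance (atmospheric absorption handled separately)."""
    return p_ref * r_ref / r
```

Sampling `friedlander` on a time grid yields the pressure signature whose slope and exponential decay the study compares against recordings.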
Su, Jiandong; Barbera, Lisa; Sutradhar, Rinku
2015-06-01
Prior work has utilized longitudinal information on performance status to demonstrate its association with risk of death among cancer patients; however, no study has assessed whether such longitudinal information improves predictions for risk of death. We examined whether the use of repeated performance status assessments improves predictions for risk of death compared to using only the performance status assessment at the time of cancer diagnosis. This was a population-based longitudinal study of adult outpatients who had a cancer diagnosis and at least one assessment of performance status. To account for each patient's changing performance status over time, we implemented a Cox model with a time-varying covariate for performance status. This model was compared to a Cox model using only a time-fixed (baseline) covariate for performance status. The regression coefficients of each model were derived from a randomly selected 60% of patients, and the predictive ability of each model was then assessed via concordance probabilities when applied to the remaining 40% of patients. Our study consisted of 15,487 cancer patients with over 53,000 performance status assessments. The utilization of repeated performance status assessments improved predictions for risk of death compared to using only the performance status assessment taken at diagnosis. When studying the hazard of death among patients with cancer, researchers should, where available, incorporate changing information on performance status scores instead of simply baseline information on performance status. © The Author(s) 2015.
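The concordance probability used to compare the two models can be sketched for the simplest case of fully observed (uncensored) survival times; handling censoring, as the study's data would require, needs the usual Harrell's C extensions:

```python
def concordance(times, risk_scores):
    """Concordance probability on fully observed survival times: the
    fraction of comparable pairs in which the patient with the higher
    predicted risk dies earlier (pairs tied in time are excluded)."""
    concordant, comparable = 0, 0
    n = len(times)
    for i in range(n):
        for j in range(i + 1, n):
            if times[i] == times[j]:
                continue
            comparable += 1
            # concordant if the shorter survival time has the higher risk
            if (times[i] < times[j]) == (risk_scores[i] > risk_scores[j]):
                concordant += 1
    return concordant / comparable
```

A value of 1.0 means risk ordering perfectly matches survival ordering; 0.5 is no better than chance.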
Crimmins, Theresa M; Crimmins, Michael A; Gerst, Katharine L; Rosemartin, Alyssa H; Weltzin, Jake F
2017-01-01
In support of science and society, the USA National Phenology Network (USA-NPN) maintains a rapidly growing, continental-scale, species-rich dataset of plant and animal phenology observations that, with over 10 million records, is the largest such database in the United States. The aim of this study was to explore the potential that exists in the broad and rich volunteer-collected dataset maintained by the USA-NPN for constructing models predicting the timing of phenological transitions across species' ranges within the continental United States. Contributed voluntarily by professional and citizen scientists, these opportunistically collected observations are characterized by spatial clustering, inconsistent spatial and temporal sampling, and short temporal depth (2009-present). Whether data exhibiting such limitations can be used to develop predictive models appropriate for use across large geographic regions has not yet been explored. We constructed predictive models for the phenophases that are most abundant in the database and also relevant to management applications, for all species with available data, regardless of plant growth habit, location, geographic extent, or temporal depth of the observations. We implemented a very basic model formulation: thermal time models with a fixed start date. Sufficient data were available to construct 107 individual species × phenophase models. Remarkably, given the limited temporal depth of this dataset and the simple modeling approach used, fifteen of these models (14%) met our criteria for model fit and error. The majority of these models represented the "breaking leaf buds" and "leaves" phenophases and represented shrub or tree growth forms. Accumulated growing degree day (GDD) thresholds that emerged ranged from 454 GDDs (Amelanchier canadensis-breaking leaf buds) to 1,300 GDDs (Prunus serotina-open flowers).
Such candidate thermal time thresholds can be used to produce real-time and short-term forecast maps of the timing of these phenophase transitions. In addition, many of the candidate models that emerged were suitable for use across the majority of the species' geographic ranges. Real-time and forecast maps of phenophase transitions could support a wide range of natural resource management applications, including invasive plant management, issuing asthma and allergy alerts, and anticipating frost damage for crops in vulnerable states. Our finding that several viable thermal time threshold models that work across the majority of the species' ranges could be constructed from the USA-NPN database provides clear evidence that great potential exists in this dataset to develop more enhanced predictive models for additional species and phenophases. Further, the candidate models that emerged have immediate utility for supporting a wide range of management applications.
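A thermal time model with a fixed start date reduces to accumulating growing degree days until a threshold is crossed. A minimal sketch (the averaging formula and base temperature are the common simple choices, not necessarily the study's exact ones):

```python
def predicted_onset_day(daily_tmin, daily_tmax, base_temp, gdd_threshold):
    """Thermal time model with a fixed start date (day 1 of the series):
    accumulate growing degree days GDD = max(0, (Tmax+Tmin)/2 - Tbase)
    and report the first day the cumulative total crosses the threshold."""
    total = 0.0
    for day, (tmin, tmax) in enumerate(zip(daily_tmin, daily_tmax), start=1):
        total += max(0.0, (tmin + tmax) / 2.0 - base_temp)
        if total >= gdd_threshold:
            return day
    return None  # threshold not reached within the series
```

Driving this with gridded forecast temperatures is what turns a fitted GDD threshold (e.g. 454 GDDs) into a forecast map of phenophase onset.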
NASA Astrophysics Data System (ADS)
Harken, B.; Geiges, A.; Rubin, Y.
2013-12-01
There are several stages in any hydrological modeling campaign, including: formulation and analysis of a priori information, data acquisition through field campaigns, inverse modeling, and forward modeling and prediction of some environmental performance metric (EPM). The EPM being predicted could be, for example, contaminant concentration, plume travel time, or aquifer recharge rate. These predictions often have significant bearing on some decision that must be made. Examples include: how to allocate limited remediation resources between multiple contaminated groundwater sites, where to place a waste repository site, and what extraction rates can be considered sustainable in an aquifer. Providing an answer to these questions depends on predictions of EPMs using forward models as well as levels of uncertainty related to these predictions. Uncertainty in model parameters, such as hydraulic conductivity, leads to uncertainty in EPM predictions. Often, field campaigns and inverse modeling efforts are planned and undertaken with reduction of parametric uncertainty as the objective. The tool of hypothesis testing allows this to be taken one step further by considering uncertainty reduction in the ultimate prediction of the EPM as the objective and gives a rational basis for weighing costs and benefits at each stage. When using the tool of statistical hypothesis testing, the EPM is cast into a binary outcome. This is formulated as null and alternative hypotheses, which can be accepted or rejected with statistical formality. When accounting for all sources of uncertainty at each stage, the level of significance of this test provides a rational basis for planning, optimization, and evaluation of the entire campaign. Case-specific information, such as the consequences of prediction error and site-specific costs, can be used in establishing selection criteria based on what level of risk is deemed acceptable.
This framework is demonstrated and discussed using various synthetic case studies. The case studies involve contaminated aquifers where a decision must be made based on prediction of when a contaminant will arrive at a given location. The EPM, in this case contaminant travel time, is cast into the hypothesis testing framework. The null hypothesis states that the contaminant plume will arrive at the specified location before a critical value of time passes, and the alternative hypothesis states that the plume will arrive after the critical time passes. Different field campaigns are analyzed based on effectiveness in reducing the probability of selecting the wrong hypothesis, which in this case corresponds to reducing uncertainty in the prediction of plume arrival time. To examine the role of inverse modeling in this framework, case studies involving both Maximum Likelihood parameter estimation and Bayesian inversion are used.
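The arrival-time hypothesis test can be illustrated with a Monte Carlo estimate of P(travel time < t_crit) under parameter uncertainty. In this sketch the 1-D advective travel time t = L·φ/(K·i) and the lognormal conductivity distribution are illustrative assumptions, not the paper's case-study setup:

```python
import random

def prob_early_arrival(n_draws, t_crit, length, porosity, gradient,
                       logK_mean, logK_sd, seed=1):
    """Monte Carlo estimate of P(travel time < t_crit) when hydraulic
    conductivity K is lognormal. Advective travel time along a 1-D path:
    t = length * porosity / (K * gradient)  (illustrative assumption)."""
    rng = random.Random(seed)
    early = 0
    for _ in range(n_draws):
        k = rng.lognormvariate(logK_mean, logK_sd)
        if length * porosity / (k * gradient) < t_crit:
            early += 1
    return early / n_draws
```

Comparing this probability before and after conditioning on a proposed field campaign's data is one way to quantify how much the campaign reduces the chance of accepting the wrong hypothesis.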
Comparisons of Crosswind Velocity Profile Estimates Used in Fast-Time Wake Vortex Prediction Models
NASA Technical Reports Server (NTRS)
Pruis, Mathew J.; Delisi, Donald P.; Ahmad, Nashat N.
2011-01-01
Five methods for estimating crosswind profiles used in fast-time wake vortex prediction models are compared in this study. Previous investigations have shown that temporal and spatial variations in the crosswind vertical profile have a large impact on the transport and time evolution of the trailing vortex pair. The most important crosswind parameters are the magnitude of the crosswind and the gradient in the crosswind shear. It is known that pulsed and continuous wave lidar measurements can provide good estimates of the wind profile in the vicinity of airports. In this study comparisons are made between estimates of the crosswind profiles from a priori information on the trajectory of the vortex pair as well as crosswind profiles derived from different sensors and a regional numerical weather prediction model.
Temperature-Dependent Kinetic Prediction for Reactions Described by Isothermal Mathematics
Dinh, L. N.; Sun, T. C.; McLean, W.
2016-09-12
Most kinetic models are expressed in isothermal mathematics. Unfortunately, this may lead unaware scientists either to the misconception that classical isothermal kinetic models cannot be used for any chemical process in an environment with a time-dependent temperature profile or, even worse, to a misuse of them. In reality, classical isothermal models can be employed to make kinetic predictions for reactions in environments with time-dependent temperature profiles, provided that there is continuity/conservation of the reaction extent at every temperature-time step. In this article, fundamental analyses, illustrations, guiding tables, and examples are given to help interested readers use either conventional isothermal reacted-fraction curves or rate equations to make proper kinetic predictions for chemical reactions in environments with temperature profiles that vary, even arbitrarily, with time, simply by requiring continuity/conservation of reaction extent whenever there is an external temperature change.
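The continuity-of-reaction-extent recipe can be sketched for a first-order Arrhenius rate law (an illustrative choice; the article treats isothermal models generally): march the isothermal rate equation through the temperature profile, carrying the accumulated extent across each temperature change.

```python
import math

R = 8.314  # gas constant, J/(mol K)

def reacted_fraction(temps_K, dt, A, Ea):
    """Step a first-order isothermal rate law through a time-dependent
    temperature profile, conserving the reaction extent alpha at every
    temperature-time step: d(alpha)/dt = A * exp(-Ea/(R*T)) * (1 - alpha)."""
    alpha = 0.0
    for T in temps_K:
        k = A * math.exp(-Ea / (R * T))  # isothermal rate at this step
        alpha += k * (1.0 - alpha) * dt  # explicit Euler step
    return min(alpha, 1.0)
```

Because alpha is carried forward unchanged across temperature changes, the same isothermal rate constants predict an arbitrarily varying thermal history.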
Li, Qiongge; Chan, Maria F
2017-01-01
Over half of cancer patients receive radiotherapy (RT) as partial or full cancer treatment. Daily quality assurance (QA) of RT in cancer treatment closely monitors the performance of the medical linear accelerator (Linac) and is critical for continuous improvement of patient safety and quality of care. Cumulative longitudinal QA measurements are valuable for understanding the behavior of the Linac and allow physicists to identify trends in the output and take preventive actions. In this study, artificial neural network (ANN) and autoregressive moving average (ARMA) time-series prediction modeling techniques were both applied to 5 years of daily Linac QA data. Verification tests and other evaluations were then performed for all models. Preliminary results showed that ANN time-series predictive modeling offers advantages over ARMA techniques for accurate and effective application in the dosimetry and QA field. © 2016 New York Academy of Sciences.
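The ARMA baseline in such comparisons can be reduced to its simplest member, an AR(1) one-step forecast; the sketch below is a hand-rolled moment estimate, not the study's fitting procedure (which a library such as statsmodels would normally handle):

```python
def ar1_forecast(series):
    """One-step-ahead forecast from an AR(1) fit: phi is estimated from
    the lag-1 sample autocovariance over the sample variance, and the
    forecast is mean + phi * (last - mean)."""
    n = len(series)
    mean = sum(series) / n
    num = sum((series[t] - mean) * (series[t - 1] - mean) for t in range(1, n))
    den = sum((x - mean) ** 2 for x in series)
    phi = num / den
    return mean + phi * (series[-1] - mean)
```

An ANN forecaster would replace the linear map above with a learned nonlinear one, which is where the reported advantage comes from.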
Thermoplastic matrix composite processing model
NASA Technical Reports Server (NTRS)
Dara, P. H.; Loos, A. C.
1985-01-01
The effects of the processing parameters pressure, temperature, and time on the quality of continuous graphite fiber reinforced thermoplastic matrix composites were quantitatively assessed by defining the extent to which intimate contact and bond formation have occurred at successive ply interfaces. Two models are presented that predict the extents to which the ply interfaces have achieved intimate contact and cohesive strength. The models are based on experimental observation of compression molded laminates and neat resin conditions, respectively. The theory of autohesion (or self-diffusion) is identified as the mechanism by which the plies bond to themselves. Theoretical predictions from reptation theory relating autohesive strength to contact time are used to explain the effects of the processing parameters on the observed experimental strengths. The application of a time-temperature relationship for autohesive strength predictions is evaluated. A viscoelastic compression molding model of a tow was developed to explain the process by which the prepreg ply interfaces develop intimate contact.
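Reptation theory's healing law is commonly written as a quarter-power growth of interfacial strength with contact time; a minimal sketch of that form follows (the paper's exact expression and parameters may differ):

```python
def autohesion_strength_ratio(contact_time, reptation_time):
    """Reptation-theory healing law in its common form: the interfacial
    strength ratio grows as the quarter power of contact time,
    sigma/sigma_inf = (t / t_rep)**0.25, saturating at 1 once the
    contact time reaches the reptation time."""
    return min(1.0, (contact_time / reptation_time) ** 0.25)
```

Since the reptation time drops sharply with temperature, this one relation already shows why hotter, longer molding cycles yield stronger ply bonds.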
Singh, Kunwar P; Gupta, Shikha; Ojha, Priyanka; Rai, Premanjali
2013-04-01
The research aims to develop an artificial intelligence (AI)-based model to predict the adsorptive removal of 2-chlorophenol (CP) in aqueous solution by coconut shell carbon (CSC) using four operational variables (pH of solution, adsorbate concentration, temperature, and contact time), and to investigate their effects on the adsorption process. Accordingly, based on a factorial design, 640 batch experiments were conducted. Nonlinearities in the experimental data were checked using Brock-Dechert-Scheinkman (BDS) statistics. Five nonlinear models were constructed to predict the adsorptive removal of CP in aqueous solution by CSC using the four variables as input. The performances of the constructed models were evaluated and compared using statistical criteria. BDS statistics revealed strong nonlinearity in the experimental data. The performance of all the models constructed here was satisfactory. Radial basis function network (RBFN) and multilayer perceptron network (MLPN) models performed better than the generalized regression neural network, support vector machine, and gene expression programming models. Sensitivity analysis revealed that contact time had the highest effect on adsorption, followed by solution pH, temperature, and CP concentration. The study concluded that all the models constructed here were capable of capturing the nonlinearity in the data. The better generalization and predictive performance of the RBFN and MLPN models suggests that they can be used to predict the adsorption of CP in aqueous solution by CSC.
Risk-Based, Hypothesis-Driven Framework for Hydrological Field Campaigns with Case Studies
NASA Astrophysics Data System (ADS)
Harken, B.; Rubin, Y.
2014-12-01
There are several stages in any hydrological modeling campaign, including: formulation and analysis of a priori information, data acquisition through field campaigns, inverse modeling, and prediction of some environmental performance metric (EPM). The EPM being predicted could be, for example, contaminant concentration or plume travel time. These predictions often have significant bearing on a decision that must be made. Examples include: how to allocate limited remediation resources between contaminated groundwater sites or where to place a waste repository site. Answering such questions depends on predictions of EPMs using forward models as well as levels of uncertainty related to these predictions. Uncertainty in EPM predictions stems from uncertainty in model parameters, which can be reduced by measurements taken in field campaigns. The costly nature of field measurements motivates a rational basis for determining a measurement strategy that is optimal with respect to the uncertainty in the EPM prediction. The tool of hypothesis testing allows this uncertainty to be quantified by computing the significance of the test resulting from a proposed field campaign. The significance of the test gives a rational basis for determining the optimality of a proposed field campaign. This hypothesis testing framework is demonstrated and discussed using various synthetic case studies. This study involves contaminated aquifers where a decision must be made based on prediction of when a contaminant will arrive at a specified location. The EPM, in this case contaminant travel time, is cast into the hypothesis testing framework. The null hypothesis states that the contaminant plume will arrive at the specified location before a critical amount of time passes, and the alternative hypothesis states that the plume will arrive after the critical time passes. The optimality of different field campaigns is assessed by computing the significance of the test resulting from each one. 
Evaluating the level of significance caused by a field campaign involves steps including likelihood-based inverse modeling and semi-analytical conditional particle tracking.
Francy, Donna S.; Brady, Amie M.G.; Carvin, Rebecca B.; Corsi, Steven R.; Fuller, Lori M.; Harrison, John H.; Hayhurst, Brett A.; Lant, Jeremiah; Nevers, Meredith B.; Terrio, Paul J.; Zimmerman, Tammy M.
2013-01-01
Predictive models have been used at beaches to improve the timeliness and accuracy of recreational water-quality assessments over the most common current approach to water-quality monitoring, which relies on culturing fecal-indicator bacteria such as Escherichia coli (E. coli). Beach-specific predictive models use environmental and water-quality variables that are easily and quickly measured as surrogates to estimate concentrations of fecal-indicator bacteria or to provide the probability that a State recreational water-quality standard will be exceeded. When predictive models are used for beach closure or advisory decisions, they are referred to as "nowcasts." During the recreational seasons of 2010-12, the U.S. Geological Survey (USGS), in cooperation with 23 local and State agencies, worked to improve existing nowcasts at 4 beaches, validate predictive models at another 38 beaches, and collect data for predictive-model development at 7 beaches throughout the Great Lakes. This report summarizes efforts to collect data and develop predictive models by multiple agencies and to compile existing information on the beaches and beach-monitoring programs into one comprehensive report. Local agencies measured E. coli concentrations and variables expected to affect E. coli concentrations such as wave height, turbidity, water temperature, and numbers of birds at the time of sampling. In addition to these field measurements, equipment was installed by the USGS or local agencies at or near several beaches to collect water-quality and meteorological measurements in near real time, including nearshore buoys, weather stations, and tributary staff gages and monitors. The USGS worked with local agencies to retrieve data from existing sources either manually or by use of tools designed specifically to compile and process data for predictive-model development.
Predictive models were developed by use of linear regression and (or) partial least squares techniques for 42 beaches that had at least 2 years of data (2010-11 and sometimes earlier) and for 1 beach that had 1 year of data. For most models, software designed for model development by the U.S. Environmental Protection Agency (Virtual Beach) was used. The selected model for each beach was based on a combination of explanatory variables including, most commonly, turbidity, day of the year, change in lake level over 24 hours, wave height, wind direction and speed, and antecedent rainfall for various time periods. Forty-two predictive models were validated against data collected during an independent year (2012) and compared to the current method for assessing recreational water quality: using the previous day's E. coli concentration (the persistence model). Goals for good predictive-model performance were responses that were at least 5 percent greater than the persistence model and overall correct responses greater than or equal to 80 percent, sensitivities (percentage of exceedances of the bathing-water standard that were correctly predicted by the model) greater than or equal to 50 percent, and specificities (percentage of nonexceedances correctly predicted by the model) greater than or equal to 85 percent. Out of 42 predictive models, 24 models yielded overall correct responses that were at least 5 percent greater than the use of the persistence model. Predictive-model responses met the performance goals more often than the persistence-model responses in terms of overall correctness (28 versus 17 models, respectively), sensitivity (17 versus 4 models), and specificity (34 versus 25 models). Gaining knowledge of each beach and the factors that affect E. coli concentrations is important for developing good predictive models. Collection of additional years of data with a wide range of environmental conditions may also help to improve future model performance.
The USGS will continue to work with local agencies in 2013 and beyond to develop and validate predictive models at beaches and improve existing nowcasts, restructuring monitoring activities to accommodate future uncertainties in funding and resources.
Real-Time Prediction of Temperature Elevation During Robotic Bone Drilling Using the Torque Signal.
Feldmann, Arne; Gavaghan, Kate; Stebinger, Manuel; Williamson, Tom; Weber, Stefan; Zysset, Philippe
2017-09-01
Bone drilling is a surgical procedure commonly required in many surgical fields, particularly orthopedics, dentistry, and head and neck surgery. While the long-term effects of thermal bone necrosis are unknown, thermal damage to nerves in spinal or otolaryngological surgeries might lead to partial paralysis. Previous models to predict the temperature elevation have been suggested, but were not validated or have the disadvantages of computation time and complexity, which do not allow real-time prediction. Within this study, an analytical temperature prediction model is proposed which uses the torque signal of the drilling process to model the heat production of the drill bit. A simple Green's disk source function is used to solve the three-dimensional heat equation along the drilling axis. Additionally, an extensive experimental study was carried out to validate the model. A custom CNC setup with a load cell and a thermal camera was used to measure the axial drilling torque and force as well as temperature elevations. Bones with different sets of bone volume fraction were drilled with two drill bits (⌀1.8 mm and ⌀2.5 mm), with eight repetitions. The model was calibrated with 5 of 40 measurements and successfully validated with the rest of the data ([Formula: see text]C). It was also found that the temperature elevation can be predicted using only the torque signal of the drilling process. In the future, the model could be used to monitor and control the drilling process in surgeries close to vulnerable structures.
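The first step of such a torque-driven model, estimating the heat source from the torque signal, is standard rotational mechanics; a minimal sketch, assuming all frictional work at the bit converts to heat (an idealization):

```python
import math

def heat_from_torque(torque_nm, rpm):
    """Heat production rate of the drill bit estimated from the measured
    torque signal: P = torque * angular velocity, with the angular
    velocity omega = 2*pi*rpm/60 in rad/s. Assumes all frictional work
    becomes heat at the bit (an idealization)."""
    omega = 2.0 * math.pi * rpm / 60.0
    return torque_nm * omega  # watts
```

Feeding this time-varying power into the paper's Green's-function solution of the heat equation is what yields the temperature prediction along the drilling axis.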
Multiplexed Predictive Control of a Large Commercial Turbofan Engine
NASA Technical Reports Server (NTRS)
Richter, Hanz; Singaraju, Anil; Litt, Jonathan S.
2008-01-01
Model predictive control is a strategy well-suited to handle the highly complex, nonlinear, uncertain, and constrained dynamics involved in aircraft engine control problems. However, it has thus far been infeasible to implement model predictive control in engine control applications, because of the combination of model complexity and the time allotted for the control update calculation. In this paper, a multiplexed implementation is proposed that dramatically reduces the computational burden of the quadratic programming optimization that must be solved online as part of the model-predictive-control algorithm. Actuator updates are calculated sequentially and cyclically in a multiplexed implementation, as opposed to the simultaneous optimization taking place in conventional model predictive control. Theoretical aspects are discussed based on a nominal model, and actual computational savings are demonstrated using a realistic commercial engine model.
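The multiplexed idea, re-optimizing one actuator per control sample instead of all at once, can be illustrated on a toy coupled quadratic cost (purely illustrative; the engine problem involves constraints and dynamics this sketch omits):

```python
def multiplexed_mpc(Q, b, u0, cycles=20):
    """Multiplexed updates on a coupled quadratic cost
    J(u) = 0.5*u'Qu - b'u: each control sample exactly re-minimizes the
    cost over a single actuator k while holding the others fixed, cycling
    through the actuators. Conventional MPC would instead solve for all
    of u simultaneously at every sample."""
    u = list(u0)
    n = len(u)
    for _ in range(cycles):
        for k in range(n):
            # single-coordinate minimizer: dJ/du_k = 0
            off = sum(Q[k][j] * u[j] for j in range(n) if j != k)
            u[k] = (b[k] - off) / Q[k][k]
    return u
```

Each per-sample problem is far smaller than the full quadratic program, which is the computational saving the paper exploits; the cycling converges to the same optimum for well-conditioned costs.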
In-silico wear prediction for knee replacements--methodology and corroboration.
Strickland, M A; Taylor, M
2009-07-22
The capability to predict in-vivo wear of knee replacements is a valuable pre-clinical analysis tool for implant designers. Traditionally, time-consuming experimental tests provided the principal means of investigating wear. Today, computational models offer an alternative. However, the validity of these models has not been demonstrated across a range of designs and test conditions, and several different formulas are in contention for estimating wear rates, limiting confidence in the predictive power of these in-silico models. This study collates and retrospectively simulates a wide range of experimental wear tests using fast rigid-body computational models with extant wear prediction algorithms, to assess the performance of current in-silico wear prediction tools. The number of tests corroborated gives a broader, more general assessment of the performance of these wear-prediction tools, and provides better estimates of the wear 'constants' used in computational models. High-speed rigid-body modelling allows a range of alternative algorithms to be evaluated. Whilst most cross-shear (CS)-based models perform comparably, the 'A/A+B' wear model appears to offer the best predictive power amongst existing wear algorithms. However, the range and variability of experimental data leaves considerable uncertainty in the results. More experimental data with reduced variability and more detailed reporting of studies will be necessary to corroborate these models with greater confidence. With simulation times reduced to only a few minutes, these models are ideally suited to large-volume 'design of experiment' or probabilistic studies (which are essential if pre-clinical assessment tools are to begin addressing the degree of variation observed clinically and in explanted components).
Multiscale Modeling of Angiogenesis and Predictive Capacity
NASA Astrophysics Data System (ADS)
Pillay, Samara; Byrne, Helen; Maini, Philip
Tumors induce the growth of new blood vessels from existing vasculature through angiogenesis. Using an agent-based approach, we model the behavior of individual endothelial cells during angiogenesis. We incorporate crowding effects through volume exclusion, motility of cells through biased random walks, and include birth and death-like processes. We use the transition probabilities associated with the discrete model and a discrete conservation equation for cell occupancy to determine collective cell behavior, in terms of partial differential equations (PDEs). We derive three PDE models, incorporating single-species volume exclusion, multi-species volume exclusion, and no volume exclusion. By fitting the parameters in our PDE models and other well-established continuum models to agent-based simulations during a specific time period, and then comparing the outputs from the PDE models and the agent-based model at later times, we aim to determine how well the PDE models predict the future behavior of the agent-based model. We also determine whether predictions differ across PDE models and the significance of those differences. This may impact drug development strategies based on PDE models.
Water quality management using statistical analysis and time-series prediction model
NASA Astrophysics Data System (ADS)
Parmar, Kulwinder Singh; Bhardwaj, Rashmi
2014-12-01
This paper deals with water quality management using statistical analysis and a time-series prediction model. The monthly variation of water quality standards has been used to compare the statistical mean, median, mode, standard deviation, kurtosis, skewness, and coefficient of variation at the Yamuna River. The model was validated using R-squared, root mean square error, mean absolute percentage error, maximum absolute percentage error, mean absolute error, maximum absolute error, normalized Bayesian information criterion, Ljung-Box analysis, predicted values, and confidence limits. Using an autoregressive integrated moving average (ARIMA) model, future values of water quality parameters have been estimated. It is observed that the predictive model is useful at 95% confidence limits, and that the curve is platykurtic for potential of hydrogen (pH), free ammonia, total Kjeldahl nitrogen, dissolved oxygen, and water temperature (WT), and leptokurtic for chemical oxygen demand and biochemical oxygen demand. Also, the predicted series is close to the original series, indicating a very good fit. All parameters except pH and WT cross the prescribed limits of the World Health Organization/United States Environmental Protection Agency, and thus the water is not fit for drinking, agricultural, or industrial use.
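The ARIMA idea can be reduced to its simplest differencing case, ARIMA(1,1,0): difference the series once, fit an AR(1) to the differences, and forecast the next level. A hand-rolled sketch (a library such as statsmodels would normally do this, with proper order selection and diagnostics):

```python
def arima110_forecast(series):
    """Hand-rolled ARIMA(1,1,0) sketch: difference the series once, fit
    the AR(1) coefficient phi to the differences by least squares (no
    intercept), then forecast the next level as last + phi * last_diff."""
    diffs = [b - a for a, b in zip(series, series[1:])]
    num = sum(diffs[t] * diffs[t - 1] for t in range(1, len(diffs)))
    den = sum(d * d for d in diffs[:-1])
    phi = num / den if den else 0.0
    return series[-1] + phi * diffs[-1]
```

On a steadily trending series the fitted phi approaches 1 and the forecast simply extends the trend, which is the behavior differencing is meant to capture.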
Testing the Predictive Validity of the Hendrich II Fall Risk Model.
Jung, Hyesil; Park, Hyeoun-Ae
2018-03-01
Cumulative data on patient fall risk have been compiled in electronic medical record systems, making it possible to test the validity of fall-risk assessment tools using data collected between the times of admission and the occurrence of a fall. Hendrich II Fall Risk Model scores assessed at three time points during hospital stays were extracted and used to test predictive validity: (a) upon admission, (b) the maximum fall-risk score between admission and falling or discharge, and (c) immediately before falling or discharge. Predictive validity was examined using seven predictive indicators. In addition, logistic regression analysis was used to identify factors that significantly affect the occurrence of a fall. Among the different time points, the maximum fall-risk score assessed between admission and falling or discharge showed the best predictive performance. Confusion or disorientation and a poor ability to rise from a sitting position were significant risk factors for a fall.
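The predictive indicators used in such validation studies are derived from the confusion matrix of predicted versus observed falls; a minimal sketch of four of the standard ones (the study reports seven):

```python
def predictive_indicators(y_true, y_pred):
    """Standard validity indicators for a binary fall-risk classification:
    sensitivity, specificity, positive and negative predictive value,
    computed from true/false positives and negatives."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t and p)
    tn = sum(1 for t, p in zip(y_true, y_pred) if not t and not p)
    fp = sum(1 for t, p in zip(y_true, y_pred) if not t and p)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t and not p)
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
    }
```

Computing these at each of the three assessment time points is how the time points' predictive performance can be compared.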
Prediction of the blowout of jet diffusion flames in a coflowing stream of air
DOE Office of Scientific and Technical Information (OSTI.GOV)
Karbasi, M.; Wierzba, I.
1995-12-31
The blowout limits of a lifted diffusion flame in a coflowing stream of air are estimated using a simple extinction model for a range of fuels, jet diameters, and coflowing stream velocities. The proposed model uses a parameter relating a time scale associated with the mixing processes in a turbulent jet to a characteristic chemical time. The Kolmogorov microscale of time is used as the time scale in this model. It is shown that turbulent diffusion flames are quenched by excessive turbulence when this parameter exceeds a critical value. The blowout velocities of diffusion flames predicted with this model are in good agreement with the available experimental data.
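The extinction criterion described above can be sketched as a ratio of a chemical time to the Kolmogorov microscale of time, with quenching above a critical value. All numbers and the critical ratio of 1.0 below are illustrative assumptions, not the paper's calibrated model.

```python
import math

def kolmogorov_time(nu, epsilon):
    """Kolmogorov microscale of time: tau_k = sqrt(nu / epsilon),
    where nu is kinematic viscosity (m^2/s) and epsilon is the
    turbulent kinetic energy dissipation rate (m^2/s^3)."""
    return math.sqrt(nu / epsilon)

def is_blown_out(tau_chem, nu, epsilon, critical_ratio=1.0):
    """Flame quenches when chemistry is too slow relative to the
    finest turbulent mixing motions (a Damkohler-number-like test)."""
    return tau_chem / kolmogorov_time(nu, epsilon) > critical_ratio

# Example: air-like viscosity, weak vs. intense turbulence.
print(is_blown_out(tau_chem=1e-3, nu=1.5e-5, epsilon=1.0))     # False: flame survives
print(is_blown_out(tau_chem=1e-3, nu=1.5e-5, epsilon=1000.0))  # True: blowout
```

Increasing the coflow velocity raises the dissipation rate in the jet shear layer, which is how excessive turbulence drives the ratio past its critical value.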
Outcome Prediction in Mathematical Models of Immune Response to Infection.
Mai, Manuel; Wang, Kun; Huber, Greg; Kirby, Michael; Shattuck, Mark D; O'Hern, Corey S
2015-01-01
Clinicians need to predict patient outcomes with high accuracy as early as possible after disease inception. In this manuscript, we show that patient-to-patient variability sets a fundamental limit on outcome prediction accuracy for a general class of mathematical models of the immune response to infection. However, accuracy can be increased at the expense of delayed prognosis. We investigate several systems of ordinary differential equations (ODEs) that model the host immune response to a pathogen load. Systems of ODEs offer advantages for investigating the immune response to infection, including the ability to collect data on large numbers of 'virtual patients', each with a given set of model parameters, and to obtain many time points during the course of the infection. We implement patient-to-patient variability v in the ODE models by randomly selecting the model parameters from distributions with coefficients of variation v that are centered on physiological values. We use logistic regression with one-versus-all classification to predict the discrete steady-state outcomes of the system. We find that the prediction algorithm achieves near 100% accuracy for v = 0, and that accuracy decreases with increasing v for all ODE models studied. The fact that multiple steady-state outcomes can be reached from a given initial condition, i.e. the basins of attraction overlap in the space of initial conditions, limits the prediction accuracy for v > 0. Increasing the elapsed time of the variables used to train and test the classifier increases the prediction accuracy, while adding explicit external noise to the ODE models decreases it. Our results quantify the competition between early prognosis and high prediction accuracy that clinicians frequently encounter.
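The virtual-patient pipeline described here can be sketched end to end: draw parameters with coefficient of variation v, integrate an ODE, and classify the discrete outcome from early-time state variables with logistic regression. The toy pathogen/immune ODE, its baseline parameters, and the outcome threshold below are all illustrative assumptions, not one of the paper's models.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate(params, t_end, dt=0.01):
    """Euler-integrate a toy pathogen/immune ODE (not the paper's model):
       dP/dt = P*(r - k*I),  dI/dt = a*P/(1+P) - d*I."""
    r, k, a, d = params
    P, I = 0.01, 0.0
    for _ in range(int(t_end / dt)):
        dP = P * (r - k * I)
        dI = a * P / (1 + P) - d * I
        P = min(max(P + dt * dP, 0.0), 1e6)  # cap runaway growth (chronic outcome)
        I = max(I + dt * dI, 0.0)
    return P, I

def make_patients(n, v):
    """Virtual patients: parameters drawn with coefficient of variation v
    around baseline values chosen near the clearance/chronic boundary."""
    base = np.array([2.0, 1.0, 1.0, 0.5])  # r, k, a, d (illustrative)
    X, y = [], []
    for _ in range(n):
        params = np.abs(base * (1 + v * rng.standard_normal(4)))
        P_early, I_early = simulate(params, t_end=2.0)   # early prognosis window
        P_late, _ = simulate(params, t_end=30.0)         # long-time outcome
        X.append([np.log1p(P_early), I_early])
        y.append(1 if P_late > 1.0 else 0)               # 1 = chronic, 0 = cleared
    return np.array(X), np.array(y)

def fit_logistic(X, y, lr=0.1, steps=2000):
    """Minimal gradient-descent logistic regression (the two-outcome
    special case of one-versus-all classification)."""
    Xb = np.hstack([X, np.ones((len(X), 1))])
    w = np.zeros(Xb.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-np.clip(Xb @ w, -30, 30)))
        w -= lr * Xb.T @ (p - y) / len(y)
    return w

X, y = make_patients(200, v=0.1)
w = fit_logistic(X, y)
logits = np.hstack([X, np.ones((len(X), 1))]) @ w
accuracy = ((logits > 0) == (y == 1)).mean()
print("training accuracy at v=0.1:", accuracy)
```

Sweeping v from 0 upward in `make_patients`, and varying the early-feature time `t_end=2.0`, reproduces the qualitative trade-off the abstract describes: later features and smaller variability both raise accuracy.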
A simple two-stage model predicts response time distributions.
Carpenter, R H S; Reddi, B A J; Anderson, A J
2009-08-15
The neural mechanisms underlying reaction times have previously been modelled in two distinct ways. When stimuli are hard to detect, response times tend to follow a random-walk model that integrates noisy sensory signals. But studies investigating the influence of higher-level factors such as prior probability and response urgency typically use highly detectable targets, and response times then usually correspond to a linear rise-to-threshold mechanism. Here we show that a model incorporating both types of element in series - a detector integrating noisy afferent signals, followed by a linear rise-to-threshold stage performing the decision - successfully predicts not only mean response times but, much more stringently, the observed distribution of these times and the rate of decision errors over a wide range of stimulus detectability. By reconciling what may previously have seemed to be conflicting theories, we are now closer to a complete description of reaction time and the decision processes that underlie it.
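The two stages in series can be simulated directly: a random walk accumulating noisy evidence to a detection threshold, whose first-passage time is then added to a LATER-style linear rise with a normally distributed rate. All parameter values below are illustrative assumptions, not the paper's fits.

```python
import numpy as np

rng = np.random.default_rng(2)

def two_stage_rt(n_trials=500, drift=20.0, noise=5.0, detect_thresh=1.0,
                 rate_mean=5.0, rate_sd=1.0, decide_thresh=1.0, dt=0.001):
    """Simulate total response times from the two stages in series."""
    rts = []
    for _ in range(n_trials):
        # Stage 1: random walk integrating noisy afferent signals
        # until the detection threshold is crossed (time-capped at 5 s).
        x, t = 0.0, 0.0
        while x < detect_thresh and t < 5.0:
            x += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
            t += dt
        # Stage 2: linear rise to the decision threshold, with the rise
        # rate drawn from a Gaussian (truncated to stay positive).
        rate = max(rng.normal(rate_mean, rate_sd), 0.1)
        rts.append(t + decide_thresh / rate)
    return np.array(rts)

rts = two_stage_rt()
print("median RT (s):", np.median(rts))
```

Lowering `drift` (a harder-to-detect stimulus) makes stage 1 dominate and the distribution random-walk-like, while a high `drift` leaves stage 2 dominant and rise-to-threshold-like, matching the regimes the abstract reconciles.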
Prediction of stock markets by the evolutionary mix-game model
NASA Astrophysics Data System (ADS)
Chen, Fang; Gou, Chengling; Guo, Xiaoqian; Gao, Jieping
2008-06-01
This paper presents efforts to predict financial time series using the evolutionary mix-game model, a modified form of the agent-based mix-game model. We make three modifications that give agents the ability to evolve their strategies, and then apply the resulting model, referred to as the evolutionary mix-game model, to forecast the Shanghai Stock Exchange Composite Index. The results show that these modifications can greatly improve prediction accuracy when proper parameters are chosen.