Combining forecast weights: Why and how?
NASA Astrophysics Data System (ADS)
Yin, Yip Chee; Kok-Haur, Ng; Hock-Eam, Lim
2012-09-01
This paper proposes a procedure called forecast weight averaging, a specific combination of the forecast weights obtained from different methods of constructing forecast weights, for the purpose of improving the accuracy of pseudo out-of-sample forecasting. It is found that, under certain specified conditions, forecast weight averaging can lower the mean squared forecast error obtained from model averaging. In addition, we show that in a linear and homoskedastic environment, this superior predictive ability of forecast weight averaging holds irrespective of whether the coefficients are tested by the t statistic or the z statistic, provided the significance level is within the 10% range. By theoretical proofs and a simulation study, we show that model averaging methods such as variance model averaging, simple model averaging and standard error model averaging each produce a mean squared forecast error larger than that of forecast weight averaging. Finally, this result also holds, marginally, when applied to business and economic empirical data sets: the Gross Domestic Product (GDP) growth rate, Consumer Price Index (CPI) and Average Lending Rate (ALR) of Malaysia.
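As a rough illustration of the idea described above, the sketch below averages the weight vectors produced by several hypothetical weighting schemes and then applies the averaged weights to a set of model forecasts. The scheme names, numbers and normalization step are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

def forecast_weight_averaging(weight_sets, model_forecasts):
    """Average several candidate weight vectors, then combine model forecasts.

    weight_sets     : list of 1-D arrays, each a normalized weight vector
                      produced by a different weighting scheme
    model_forecasts : 1-D array of forecasts, one per model
    """
    w = np.mean(np.vstack(weight_sets), axis=0)   # average the weights
    w = w / w.sum()                               # renormalize to sum to one
    return float(np.dot(w, model_forecasts))

# Illustrative inputs: weights from three hypothetical schemes
# (e.g., simple, variance-based, standard-error-based model averaging).
simple_w   = np.array([1/3, 1/3, 1/3])
variance_w = np.array([0.5, 0.3, 0.2])
stderr_w   = np.array([0.45, 0.35, 0.20])

forecasts = np.array([2.1, 1.8, 2.4])             # forecasts from three models
print(forecast_weight_averaging([simple_w, variance_w, stderr_w], forecasts))
```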
The Weighted-Average Lagged Ensemble.
DelSole, T; Trenary, L; Tippett, M K
2017-11-01
A lagged ensemble is an ensemble of forecasts from the same model initialized at different times but verifying at the same time. The skill of a lagged ensemble mean can be improved by assigning weights to different forecasts in such a way as to maximize skill. If the forecasts are bias corrected, then an unbiased weighted lagged ensemble requires the weights to sum to one. Such a scheme is called a weighted-average lagged ensemble. In the limit of uncorrelated errors, the optimal weights are positive and decay monotonically with lead time, so that the least skillful forecasts have the least weight. In more realistic applications, the optimal weights do not always behave this way. This paper presents a series of analytic examples designed to illuminate conditions under which the weights of an optimal weighted-average lagged ensemble become negative or depend nonmonotonically on lead time. It is shown that negative weights are most likely to occur when the errors grow rapidly and are highly correlated across lead time. The weights are most likely to behave nonmonotonically when the mean square error is approximately constant over the range of forecasts included in the lagged ensemble. An extreme example of the latter behavior is presented in which the optimal weights vanish everywhere except at the shortest and longest lead times.
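Under the sum-to-one constraint described above, the minimum-MSE weights for bias-corrected forecasts can be written as w = C^{-1}1 / (1'C^{-1}1), where C is the error covariance across lead times. The sketch below estimates such weights from an archive of lagged-forecast errors; the synthetic error model is an assumption used only for illustration.

```python
import numpy as np

def lagged_ensemble_weights(errors):
    """Optimal sum-to-one weights for bias-corrected lagged forecasts.

    errors : (n_times, n_leads) array of forecast errors, one column per
             lead time, all verifying at the same target time.
    Returns the weights minimizing the mean squared error of the weighted
    mean subject to the weights summing to one (weights may be negative).
    """
    C = np.cov(errors, rowvar=False)          # error covariance across leads
    ones = np.ones(C.shape[0])
    w = np.linalg.solve(C, ones)
    return w / w.sum()

# Synthetic example: errors grow with lead time and are correlated across leads.
rng = np.random.default_rng(0)
base = rng.normal(size=(500, 1))
errors = np.hstack([base * s + rng.normal(scale=0.3, size=(500, 1))
                    for s in (0.5, 1.0, 1.5)])
print(lagged_ensemble_weights(errors))
```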
Individual versus superensemble forecasts of seasonal influenza outbreaks in the United States.
Yamana, Teresa K; Kandula, Sasikiran; Shaman, Jeffrey
2017-11-01
Recent research has produced a number of methods for forecasting seasonal influenza outbreaks. However, differences among the predicted outcomes of competing forecast methods can limit their use in decision-making. Here, we present a method for reconciling these differences using Bayesian model averaging. We generated retrospective forecasts of peak timing, peak incidence, and total incidence for seasonal influenza outbreaks in 48 states and 95 cities using 21 distinct forecast methods, and combined these individual forecasts to create weighted-average superensemble forecasts. We compared the relative performance of these individual and superensemble forecast methods by geographic location, timing of forecast, and influenza season. We find that, overall, the superensemble forecasts are more accurate than any individual forecast method and less prone to producing a poor forecast. Furthermore, we find that these advantages increase when the superensemble weights are stratified according to the characteristics of the forecast or geographic location. These findings indicate that different competing influenza prediction systems can be combined into a single more accurate forecast product for operational delivery in real time.
Individual versus superensemble forecasts of seasonal influenza outbreaks in the United States
Kandula, Sasikiran; Shaman, Jeffrey
2017-01-01
Recent research has produced a number of methods for forecasting seasonal influenza outbreaks. However, differences among the predicted outcomes of competing forecast methods can limit their use in decision-making. Here, we present a method for reconciling these differences using Bayesian model averaging. We generated retrospective forecasts of peak timing, peak incidence, and total incidence for seasonal influenza outbreaks in 48 states and 95 cities using 21 distinct forecast methods, and combined these individual forecasts to create weighted-average superensemble forecasts. We compared the relative performance of these individual and superensemble forecast methods by geographic location, timing of forecast, and influenza season. We find that, overall, the superensemble forecasts are more accurate than any individual forecast method and less prone to producing a poor forecast. Furthermore, we find that these advantages increase when the superensemble weights are stratified according to the characteristics of the forecast or geographic location. These findings indicate that different competing influenza prediction systems can be combined into a single more accurate forecast product for operational delivery in real time. PMID:29107987
NASA Astrophysics Data System (ADS)
Yin, Yip Chee; Hock-Eam, Lim
2012-09-01
Our empirical results show that GDP growth rates can be predicted more accurately in continents with fewer large economies than in smaller economies such as Malaysia. This difficulty is very likely positively correlated with subsidy or social security policies. The stage of economic development and the level of competitiveness also appear to have interactive effects on forecast stability. These results are generally independent of the forecasting procedure. For countries with high stability in their economic growth, forecasting by model selection is better than model averaging. Overall, forecast weight averaging (FWA) is the better forecasting procedure in most countries. FWA also outperforms simple model averaging (SMA) and has the same forecasting ability as Bayesian model averaging (BMA) in almost all countries.
On the Likely Utility of Hybrid Weights Optimized for Variances in Hybrid Error Covariance Models
NASA Astrophysics Data System (ADS)
Satterfield, E.; Hodyss, D.; Kuhl, D.; Bishop, C. H.
2017-12-01
Because of imperfections in ensemble data assimilation schemes, one cannot assume that the ensemble covariance is equal to the true error covariance of a forecast. Previous work demonstrated how information about the distribution of true error variances given an ensemble sample variance can be revealed from an archive of (observation-minus-forecast, ensemble-variance) data pairs. Here, we derive a simple and intuitively compelling formula to obtain the mean of this distribution of true error variances given an ensemble sample variance from (observation-minus-forecast, ensemble-variance) data pairs produced by a single run of a data assimilation system. This formula takes the form of a Hybrid weighted average of the climatological forecast error variance and the ensemble sample variance. Here, we test the extent to which these readily obtainable weights can be used to rapidly optimize the covariance weights used in Hybrid data assimilation systems that employ weighted averages of static covariance models and flow-dependent ensemble based covariance models. Univariate data assimilation and multi-variate cycling ensemble data assimilation are considered. In both cases, it is found that our computationally efficient formula gives Hybrid weights that closely approximate the optimal weights found through the simple but computationally expensive process of testing every plausible combination of weights.
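A minimal sketch of how weights of the form "a times climatological variance plus b times ensemble variance" could be estimated from an archive of (observation-minus-forecast, ensemble-variance) pairs, here by a simple least-squares regression of squared innovations on ensemble variance. The regression shortcut and the synthetic data are assumptions for illustration, not the formula derived in the abstract.

```python
import numpy as np

def hybrid_variance_weights(innovations_sq, ens_var):
    """Estimate coefficients of a hybrid error-variance model of the form
       E[true error variance | ensemble variance] = a + b * ens_var.

    innovations_sq : squared (observation - forecast) values
    ens_var        : ensemble sample variances from the same archive
    The intercept plays the role of the weighted static (climatological)
    part and the slope the weight on the flow-dependent ensemble variance.
    """
    X = np.column_stack([np.ones_like(ens_var), ens_var])
    coef, *_ = np.linalg.lstsq(X, innovations_sq, rcond=None)
    return coef  # [static part, weight on ensemble variance]

# Synthetic archive with a hypothetical true relationship 0.3 + 0.6 * ens_var.
rng = np.random.default_rng(0)
ens_var = rng.gamma(shape=2.0, scale=0.5, size=2000)
true_var = 0.3 + 0.6 * ens_var
innov_sq = rng.normal(scale=np.sqrt(true_var)) ** 2
print(hybrid_variance_weights(innov_sq, ens_var))
```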
NASA Astrophysics Data System (ADS)
Sardinha-Lourenço, A.; Andrade-Campos, A.; Antunes, A.; Oliveira, M. S.
2018-03-01
Recent research on short-term water demand forecasting has shown that models using univariate time series based on historical data are useful and can be combined with other prediction methods to reduce errors. Water demand in drinking water distribution networks is largely repetitive in nature and, under similar meteorological conditions and consumer profiles, allows the development of a heuristic forecast model that, combined with other autoregressive models, can provide reliable forecasts. In this study, a parallel adaptive weighting strategy for forecasting water consumption over the next 24-48 h, using univariate time series of potable water consumption, is proposed. Two Portuguese potable water distribution networks are used as case studies, where the only input data are water consumption and the national calendar. For the development of the strategy, the Autoregressive Integrated Moving Average (ARIMA) method and a short-term forecast heuristic algorithm are used. Simulations with the model showed that, when using the parallel adaptive weighting strategy, the prediction error can be reduced by 15.96% and the average error by 9.20%. This reduction is important for the control and management of water supply systems. The proposed methodology can be extended to other forecast methods, especially given the availability of multiple forecast models.
ERIC Educational Resources Information Center
Zan, Xinxing Anna; Yoon, Sang Won; Khasawneh, Mohammad; Srihari, Krishnaswami
2013-01-01
In an effort to develop a low-cost and user-friendly forecasting model to minimize forecasting error, we have applied average and exponentially weighted return ratios to project undergraduate student enrollment. We tested the proposed forecasting models with different sets of historical enrollment data, such as university-, school-, and…
1998 Annual Tropical Cyclone Report
1998-01-01
1998 ANNUAL TROPICAL CYCLONE REPORT. Microwave imagery of Typhoon Rex (06W) as it passed through the Bonin Islands, taken at 0800Z on 28 August. [Excerpt] ... weighting the forecasts given by XTRP and CLIM. 5.2.5.2 DYNAMIC AVERAGE (DAVE): a simple average of all dynamic forecast aids: NOGAPS (NGPS), Bracknell, ...
NASA Astrophysics Data System (ADS)
Pérez, B.; Brouwer, R.; Beckers, J.; Paradis, D.; Balseiro, C.; Lyons, K.; Cure, M.; Sotillo, M. G.; Hackett, B.; Verlaan, M.; Fanjul, E. A.
2012-03-01
ENSURF (Ensemble SURge Forecast) is a multi-model application for sea level forecast that makes use of several storm surge or circulation models and near-real time tide gauge data in the region, with the following main goals: 1. providing easy access to existing forecasts, as well as to their performance and model validation, by means of an adequate visualization tool; 2. generation of better forecasts of sea level, including confidence intervals, by means of the Bayesian Model Averaging (BMA) technique. The Bayesian Model Averaging technique generates an overall forecast probability density function (PDF) by making a weighted average of the individual forecast PDFs; the weights represent the Bayesian likelihood that a model will give the correct forecast and are continuously updated based on the performance of the models during a recent training period. This implies the technique needs the availability of sea level data from tide gauges in near-real time. The system was implemented for the European Atlantic facade (IBIROOS region) and the Western Mediterranean coast, based on the MATROOS visualization tool developed by Deltares. Results of the validation of the different models and of the BMA implementation for the main harbours are presented for these regions, where this kind of activity is performed for the first time. The system is currently operational at Puertos del Estado and has proved to be useful in the detection of calibration problems in some of the circulation models, in the identification of systematic differences between baroclinic and barotropic models for sea level forecasts, and in demonstrating the feasibility of providing an overall probabilistic forecast based on the BMA method.
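A minimal sketch of the BMA combination step described above: the predictive density is a weighted mixture of member densities centred on the (bias-corrected) individual forecasts. Gaussian member PDFs, the example weights and the common spread parameter are assumptions; in practice the weights and spreads are estimated over a training period (for example by EM, following Raftery et al.).

```python
import numpy as np
from scipy.stats import norm

def bma_predictive_pdf(x, forecasts, weights, sigma):
    """BMA predictive density: a weighted average of member PDFs, here taken
    as Gaussians centred on the (bias-corrected) member forecasts."""
    x = np.atleast_1d(x)
    comps = np.array([w * norm.pdf(x, loc=f, scale=sigma)
                      for w, f in zip(weights, forecasts)])
    return comps.sum(axis=0)

# Hypothetical sea level forecasts (cm) from three models and BMA weights
# obtained from a recent training period (illustrative numbers only).
forecasts = [35.0, 42.0, 38.0]
weights   = [0.5, 0.2, 0.3]
grid = np.linspace(20, 60, 5)
print(bma_predictive_pdf(grid, forecasts, weights, sigma=4.0))
```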
NASA Astrophysics Data System (ADS)
Pérez, B.; Brower, R.; Beckers, J.; Paradis, D.; Balseiro, C.; Lyons, K.; Cure, M.; Sotillo, M. G.; Hacket, B.; Verlaan, M.; Alvarez Fanjul, E.
2011-04-01
ENSURF (Ensemble SURge Forecast) is a multi-model application for sea level forecast that makes use of storm surge and circulation models currently operational in Europe, as well as near-real time tide gauge data in the region, with the following main goals: - providing easy access to existing forecasts, as well as to their performance and model validation, by means of an adequate visualization tool - generation of better forecasts of sea level, including confidence intervals, by means of the Bayesian Model Averaging (BMA) technique. The system was developed and implemented within the ECOOP (C.No. 036355) European Project for the NOOS and IBIROOS regions, based on the MATROOS visualization tool developed by Deltares. Both systems are today operational at Deltares and Puertos del Estado, respectively. The Bayesian Model Averaging technique generates an overall forecast probability density function (PDF) by making a weighted average of the individual forecast PDFs; the weights represent the probability that a model will give the correct forecast PDF and are determined and updated operationally based on the performance of the models during a recent training period. This implies the technique needs the availability of sea level data from tide gauges in near-real time. Results of the validation of the different models and of the BMA implementation for the main harbours will be presented for the IBIROOS and Western Mediterranean regions, where this kind of activity is performed for the first time. The work has proved useful to detect problems in some of the circulation models not previously well calibrated with sea level data, to identify the differences between baroclinic and barotropic models for sea level applications, and to confirm the general improvement of the BMA forecasts.
Liu, Dong-jun; Li, Li
2015-01-01
For the issue of haze-fog, PM2.5 is the main influencing factor of haze-fog pollution in China. In this study, the trend of PM2.5 concentration was analyzed from a qualitative point of view based on mathematical models and simulation. A comprehensive forecasting model (CFM) was developed based on combination forecasting ideas. The Autoregressive Integrated Moving Average (ARIMA) model, Artificial Neural Networks (ANNs) and the Exponential Smoothing Method (ESM) were used to predict the time series data of PM2.5 concentration. The results of the comprehensive forecasting model were obtained by combining the results of the three methods using weights from the Entropy Weighting Method. The trend of PM2.5 concentration in Guangzhou, China was then quantitatively forecast with the comprehensive forecasting model. The results were compared with those of the three single models, and PM2.5 concentration values for the next ten days were predicted. The comprehensive forecasting model balanced the deviations of the individual prediction methods and had better applicability. It provides a new prediction method for the air quality forecasting field. PMID:26110332
Liu, Dong-jun; Li, Li
2015-06-23
For the issue of haze-fog, PM2.5 is the main influencing factor of haze-fog pollution in China. In this study, the trend of PM2.5 concentration was analyzed from a qualitative point of view based on mathematical models and simulation. A comprehensive forecasting model (CFM) was developed based on combination forecasting ideas. The Autoregressive Integrated Moving Average (ARIMA) model, Artificial Neural Networks (ANNs) and the Exponential Smoothing Method (ESM) were used to predict the time series data of PM2.5 concentration. The results of the comprehensive forecasting model were obtained by combining the results of the three methods using weights from the Entropy Weighting Method. The trend of PM2.5 concentration in Guangzhou, China was then quantitatively forecast with the comprehensive forecasting model. The results were compared with those of the three single models, and PM2.5 concentration values for the next ten days were predicted. The comprehensive forecasting model balanced the deviations of the individual prediction methods and had better applicability. It provides a new prediction method for the air quality forecasting field.
NASA Astrophysics Data System (ADS)
Soltanzadeh, I.; Azadi, M.; Vakili, G. A.
2011-07-01
Using Bayesian Model Averaging (BMA), an attempt was made to obtain calibrated probabilistic numerical forecasts of 2-m temperature over Iran. The ensemble employs three limited-area models (WRF, MM5 and HRM), with WRF used in five different configurations. Initial and boundary conditions for MM5 and WRF are obtained from the National Centers for Environmental Prediction (NCEP) Global Forecast System (GFS), and for HRM the initial and boundary conditions come from the analysis of the Global Model Europe (GME) of the German Weather Service. The resulting ensemble of seven members was run for a period of 6 months (from December 2008 to May 2009) over Iran. The 48-h raw ensemble outputs were calibrated using the BMA technique for 120 days, using a 40-day training sample of forecasts and the corresponding verification data. The calibrated probabilistic forecasts were assessed using rank histograms and attributes diagrams. Results showed that the application of BMA improved the reliability of the raw ensemble. Using the weighted ensemble mean forecast as a deterministic forecast, it was found that the deterministic-style BMA forecasts usually performed better than the best member's deterministic forecast.
Using Bayes Model Averaging for Wind Power Forecasts
NASA Astrophysics Data System (ADS)
Preede Revheim, Pål; Beyer, Hans Georg
2014-05-01
For operational purposes, forecasts of the lumped output of groups of wind farms spread over larger geographic areas will often be of interest. A naive approach is to make forecasts for each individual site and sum them up to get the group forecast. It is, however, well documented that a better choice is to use a model that also takes advantage of spatial smoothing effects. It might nevertheless be the case that some sites tend to reflect the total output of the region more accurately, either in general or for certain wind directions. It is then of interest to give these sites a greater influence over the group forecast. Bayesian model averaging (BMA) is a statistical post-processing method for producing probabilistic forecasts from ensembles. Raftery et al. [1] show how BMA can be used for statistical post-processing of forecast ensembles, producing PDFs of future weather quantities. The BMA predictive PDF of a future weather quantity is a weighted average of the ensemble members' PDFs, where the weights can be interpreted as posterior probabilities and reflect the ensemble members' contribution to overall forecasting skill over a training period. In Revheim and Beyer [2], the BMA procedure used in Sloughter, Gneiting and Raftery [3] was found to produce fairly accurate PDFs for the future mean wind speed of a group of sites from the single-site wind speeds. However, when the procedure was applied to wind power it resulted in either problems with the estimation of the parameters (mainly caused by longer consecutive periods of no power production) or severe underestimation (mainly caused by problems with reflecting the power curve). In this paper the problems that arose when applying BMA to wind power forecasting are met through two strategies. First, the BMA procedure is run with a combination of single-site wind speeds and single-site wind power production as input. This solves the problem with longer consecutive periods where the input data do not contain information, but it has the disadvantage of nearly doubling the number of model parameters to be estimated. Second, the BMA procedure is run with group mean wind power as the response variable instead of group mean wind speed. This also solves the problem with longer consecutive periods without information in the input data, but it leaves the power curve to also be estimated from the data. [1] Raftery, A. E., et al. (2005). Using Bayesian Model Averaging to Calibrate Forecast Ensembles. Monthly Weather Review, 133, 1155-1174. [2] Revheim, P. P. and H. G. Beyer (2013). Using Bayesian Model Averaging for Wind Farm Group Forecasts. EWEA Wind Power Forecasting Technology Workshop, Rotterdam, 4-5 December 2013. [3] Sloughter, J. M., T. Gneiting and A. E. Raftery (2010). Probabilistic Wind Speed Forecasting Using Ensembles and Bayesian Model Averaging. Journal of the American Statistical Association, 105(489), 25-35.
Song, Jingwei; He, Jiaying; Zhu, Menghua; Tan, Debao; Zhang, Yu; Ye, Song; Shen, Dingtao; Zou, Pengfei
2014-01-01
A simulated annealing (SA) based variable weighted forecast model is proposed to combine and weight a local chaotic model, an artificial neural network (ANN), and a partial least squares support vector machine (PLS-SVM) to build a more accurate forecast model. The hybrid model was built and its multistep-ahead prediction ability was tested on daily MSW generation data from Seattle, Washington, the United States. The hybrid forecast model was shown to produce more accurate and reliable results and to degrade less over longer prediction horizons than the three individual models. The average one-week-ahead prediction error was reduced from 11.21% (chaotic model), 12.93% (ANN), and 12.94% (PLS-SVM) to 9.38%. The five-week average was reduced from 13.02% (chaotic model), 15.69% (ANN), and 15.92% (PLS-SVM) to 11.27%. PMID:25301508
NASA Astrophysics Data System (ADS)
Rings, Joerg; Vrugt, Jasper A.; Schoups, Gerrit; Huisman, Johan A.; Vereecken, Harry
2012-05-01
Bayesian model averaging (BMA) is a standard method for combining predictive distributions from different models. In recent years, this method has enjoyed widespread application and use in many fields of study to improve the spread-skill relationship of forecast ensembles. The BMA predictive probability density function (pdf) of any quantity of interest is a weighted average of pdfs centered around the individual (possibly bias-corrected) forecasts, where the weights are equal to the posterior probabilities of the models generating the forecasts and reflect the individual models' skill over a training (calibration) period. The original BMA approach presented by Raftery et al. (2005) assumes that the conditional pdf of each individual model is adequately described by a rather standard Gaussian or Gamma statistical distribution, possibly with a heteroscedastic variance. Here we analyze the advantages of using BMA with a flexible representation of the conditional pdf. A joint particle filtering and Gaussian mixture modeling framework is presented to derive analytically, as closely and consistently as possible, the evolving forecast density (conditional pdf) of each constituent ensemble member. The median forecasts and evolving conditional pdfs of the constituent models are subsequently combined using BMA to derive one overall predictive distribution. This paper introduces the theory and concepts of this new ensemble postprocessing method, and demonstrates its usefulness and applicability by numerical simulation of the rainfall-runoff transformation using discharge data from three different catchments in the contiguous United States. The revised BMA method achieves significantly lower prediction errors than the original default BMA method (due to filtering), with predictive uncertainty intervals that are substantially smaller but still statistically coherent (due to the use of a time-variant conditional pdf).
Kuo, R J; Wu, P; Wang, C P
2002-09-01
Sales forecasting plays a very prominent role in business strategy. Numerous investigations addressing this problem have generally employed statistical methods, such as regression or autoregressive and moving average (ARMA) models. However, sales forecasting is very complicated owing to the influence of internal and external environments. Recently, artificial neural networks (ANNs) have also been applied in sales forecasting given their promising performance in the areas of control and pattern recognition. However, further improvement is still necessary, since unique circumstances, e.g. promotions, cause sudden changes in the sales pattern. Thus, this study utilizes a proposed fuzzy neural network (FNN), which is able to eliminate unimportant weights, to learn fuzzy IF-THEN rules obtained from marketing experts with respect to promotions. The result from the FNN is further integrated with the time series data through an ANN. Both the simulated and real-world problem results show that the FNN with weight elimination can achieve lower training error than the regular FNN. In addition, the real-world problem results indicate that the proposed estimation system outperforms the conventional statistical method and a single ANN in accuracy.
Global Earthquake Activity Rate models based on version 2 of the Global Strain Rate Map
NASA Astrophysics Data System (ADS)
Bird, P.; Kreemer, C.; Kagan, Y. Y.; Jackson, D. D.
2013-12-01
Global Earthquake Activity Rate (GEAR) models have usually been based on either relative tectonic motion (fault slip rates and/or distributed strain rates), or on smoothing of seismic catalogs. However, a hybrid approach appears to perform better than either parent, at least in some retrospective tests. First, we construct a Tectonic ('T') forecast of shallow (≤ 70 km) seismicity based on global plate-boundary strain rates from version 2 of the Global Strain Rate Map. Our approach is the SHIFT (Seismic Hazard Inferred From Tectonics) method described by Bird et al. [2010, SRL], in which the character of the strain rate tensor (thrusting and/or strike-slip and/or normal) is used to select the most comparable type of plate boundary for calibration of the coupled seismogenic lithosphere thickness and corner magnitude. One difference is that activity of offshore plate boundaries is spatially smoothed using empirical half-widths [Bird & Kagan, 2004, BSSA] before conversion to seismicity. Another is that the velocity-dependence of coupling in subduction and continental-convergent boundaries [Bird et al., 2009, BSSA] is incorporated. Another forecast component is the smoothed-seismicity ('S') forecast model of [Kagan & Jackson, 1994, JGR; Kagan & Jackson, 2010, GJI], which was based on optimized smoothing of the shallow part of the GCMT catalog, years 1977-2004. Both forecasts were prepared for threshold magnitude 5.767. Then, we create hybrid forecasts by one of 3 methods: (a) taking the greater of S or T; (b) simple weighted-average of S and T; or (c) log of the forecast rate is a weighted average of the logs of S and T. In methods (b) and (c) there is one free parameter, which is the fractional contribution from S. All hybrid forecasts are normalized to the same global rate. Pseudo-prospective tests for 2005-2012 (using versions of S and T calibrated on years 1977-2004) show that many hybrid models outperform both parents (S and T), and that the optimal weight on S is in the neighborhood of 5/8. This is true whether forecast performance is scored by Kagan's [2009, GJI] I1 information score, or by the S-test of Zechar & Jordan [2010, BSSA]. These hybrids also score well (0.97) in the ASS-test of Zechar & Jordan [2008, GJI] with respect to prior relative intensity.
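The three hybridization rules described above (taking the greater of S and T, a linear weighted average, and a weighted average of the logs) can be sketched as follows; the renormalization to a common global rate follows the abstract, while the input arrays are placeholders.

```python
import numpy as np

def hybrid_forecast(S, T, method="log", frac_S=0.625):
    """Combine a smoothed-seismicity rate map S and a tectonic rate map T.

    method: 'max' -> elementwise maximum of S and T
            'lin' -> frac_S*S + (1 - frac_S)*T
            'log' -> weighted average in log space (geometric-mean style)
    frac_S defaults to 5/8, the neighborhood of the optimal weight reported
    in the abstract. The result is renormalized to the global rate of S.
    """
    S, T = np.asarray(S, float), np.asarray(T, float)
    if method == "max":
        H = np.maximum(S, T)
    elif method == "lin":
        H = frac_S * S + (1.0 - frac_S) * T
    elif method == "log":
        H = np.exp(frac_S * np.log(S) + (1.0 - frac_S) * np.log(T))
    else:
        raise ValueError(method)
    return H * S.sum() / H.sum()

# Toy rate maps (events per cell), for illustration only.
S = np.array([0.2, 1.0, 3.0])
T = np.array([0.5, 0.8, 2.0])
print(hybrid_forecast(S, T, method="log"))
```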
Statistical earthquake focal mechanism forecasts
NASA Astrophysics Data System (ADS)
Kagan, Yan Y.; Jackson, David D.
2014-04-01
Forecasts of the focal mechanisms of future shallow (depth 0-70 km) earthquakes are important for seismic hazard estimates and Coulomb stress, and other models of earthquake occurrence. Here we report on a high-resolution global forecast of earthquake rate density as a function of location, magnitude and focal mechanism. In previous publications we reported forecasts of 0.5° spatial resolution, covering the latitude range from -75° to +75°, based on the Global Central Moment Tensor earthquake catalogue. In the new forecasts we have improved the spatial resolution to 0.1° and the latitude range from pole to pole. Our focal mechanism estimates require distance-weighted combinations of observed focal mechanisms within 1000 km of each gridpoint. Simultaneously, we calculate an average rotation angle between the forecasted mechanism and all the surrounding mechanisms, using the method of Kagan & Jackson proposed in 1994. This average angle reveals the level of tectonic complexity of a region and indicates the accuracy of the prediction. The procedure becomes problematical where longitude lines are not approximately parallel, and where shallow earthquakes are so sparse that an adequate sample spans very large distances. North or south of 75°, the azimuths of points 1000 km away may vary by about 35°. We solved this problem by calculating focal mechanisms on a plane tangent to the Earth's surface at each forecast point, correcting for the rotation of the longitude lines at the locations of earthquakes included in the averaging. The corrections are negligible between -30° and +30° latitude, but outside that band uncorrected rotations can be significantly off. Improved forecasts at 0.5° and 0.1° resolution are posted at http://eq.ess.ucla.edu/kagan/glob_gcmt_index.html.
National Hospital Input Price Index
Freeland, Mark S.; Anderson, Gerard; Schendler, Carol Ellen
1979-01-01
The national community hospital input price index presented here isolates the effects of prices of goods and services required to produce hospital care and measures the average percent change in prices for a fixed market basket of hospital inputs. Using the methodology described in this article, weights for various expenditure categories were estimated and proxy price variables associated with each were selected. The index is calculated for the historical period 1970 through 1978 and forecast for 1979 through 1981. During the historical period, the input price index increased an average of 8.0 percent a year, compared with an average rate of increase of 6.6 percent for overall consumer prices. For the period 1979 through 1981, the average annual increase is forecast at between 8.5 and 9.0 percent. Using the index to deflate growth in expenses, the level of real growth in expenditures per inpatient day (net service intensity growth) averaged 4.5 percent per year with considerable annual variation related to government and hospital industry policies. PMID:10309052
National hospital input price index.
Freeland, M S; Anderson, G; Schendler, C E
1979-01-01
The national community hospital input price index presented here isolates the effects of prices of goods and services required to produce hospital care and measures the average percent change in prices for a fixed market basket of hospital inputs. Using the methodology described in this article, weights for various expenditure categories were estimated and proxy price variables associated with each were selected. The index is calculated for the historical period 1970 through 1978 and forecast for 1979 through 1981. During the historical period, the input price index increased an average of 8.0 percent a year, compared with an average rate of increase of 6.6 percent for overall consumer prices. For the period 1979 through 1981, the average annual increase is forecast at between 8.5 and 9.0 per cent. Using the index to deflate growth in expenses, the level of real growth in expenditures per inpatient day (net service intensity growth) averaged 4.5 percent per year with considerable annual variation related to government and hospital industry policies.
A travel time forecasting model based on change-point detection method
NASA Astrophysics Data System (ADS)
LI, Shupeng; GUANG, Xiaoping; QIAN, Yongsheng; ZENG, Junwei
2017-06-01
Travel time parameters obtained from road traffic sensor data play an important role in traffic management practice. In this paper, a travel time forecasting model based on change-point detection is proposed for urban road traffic sensor data. A first-order differencing operation is used to preprocess the actual loop data; a change-point detection algorithm is designed to classify the large number of travel time data items into several patterns; a travel time forecasting model is then established based on the autoregressive integrated moving average (ARIMA) model. By computer simulation, different control parameters are chosen for the adaptive change-point search over the travel time series, which is divided into several sections of similar state. A linear weight function is then used to fit the travel time sequence and to forecast travel time. The results show that the model has high accuracy in travel time forecasting.
Model-free aftershock forecasts constructed from similar sequences in the past
NASA Astrophysics Data System (ADS)
van der Elst, N.; Page, M. T.
2017-12-01
The basic premise behind aftershock forecasting is that sequences in the future will be similar to those in the past. Forecast models typically use empirically tuned parametric distributions to approximate past sequences, and project those distributions into the future to make a forecast. While parametric models do a good job of describing average outcomes, they are not explicitly designed to capture the full range of variability between sequences, and can suffer from over-tuning of the parameters. In particular, parametric forecasts may produce a high rate of "surprises" - sequences that land outside the forecast range. Here we present a non-parametric forecast method that cuts out the parametric "middleman" between training data and forecast. The method is based on finding past sequences that are similar to the target sequence, and evaluating their outcomes. We quantify similarity as the Poisson probability that the observed event count in a past sequence reflects the same underlying intensity as the observed event count in the target sequence. Event counts are defined in terms of differential magnitude relative to the mainshock. The forecast is then constructed from the distribution of past sequence outcomes, weighted by their similarity. We compare the similarity forecast with the Reasenberg and Jones (RJ95) method, for a set of 2807 global aftershock sequences of M≥6 mainshocks. We implement a sequence-specific RJ95 forecast using a global average prior and Bayesian updating, but do not propagate epistemic uncertainty. The RJ95 forecast is somewhat more precise than the similarity forecast: 90% of observed sequences fall within a factor of two of the median RJ95 forecast value, whereas the fraction is 85% for the similarity forecast. However, the surprise rate is much higher for the RJ95 forecast; 10% of observed sequences fall in the upper 2.5% of the (Poissonian) forecast range. The surprise rate is less than 3% for the similarity forecast. The similarity forecast may be useful to emergency managers and non-specialists when confidence or expertise in parametric forecasting may be lacking. The method makes over-tuning impossible, and minimizes the rate of surprises. At the least, this forecast constitutes a useful benchmark for more precisely tuned parametric forecasts.
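A hedged sketch of the similarity-weighting idea described above: past sequences are weighted by a Poisson probability comparing their early event counts with the target sequence's count, and the forecast is the weighted average of their outcomes. The exact probability used here (the target count taken as the Poisson rate) is an assumption for illustration and may differ from the paper's definition.

```python
import numpy as np
from scipy.stats import poisson

def similarity_weights(target_count, past_counts):
    """Weight past sequences by the Poisson probability of observing their
    early-sequence event count if the underlying rate equalled the rate
    implied by the target sequence's count (one plausible reading of the
    similarity measure; the paper's exact definition may differ)."""
    lam = max(target_count, 1e-9)
    w = poisson.pmf(np.asarray(past_counts), mu=lam)
    return w / w.sum()

def similarity_forecast(target_count, past_counts, past_outcomes):
    """Forecast = similarity-weighted average of past sequence outcomes."""
    w = similarity_weights(target_count, past_counts)
    return float(np.dot(w, past_outcomes))

# Toy example: early counts of past sequences and their eventual totals.
print(similarity_forecast(target_count=12,
                          past_counts=[5, 11, 13, 30],
                          past_outcomes=[20, 45, 50, 120]))
```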
Online probabilistic learning with an ensemble of forecasts
NASA Astrophysics Data System (ADS)
Thorey, Jean; Mallet, Vivien; Chaussin, Christophe
2016-04-01
Our objective is to produce a calibrated weighted ensemble to forecast a univariate time series. In addition to a meteorological ensemble of forecasts, we rely on observations or analyses of the target variable. The celebrated Continuous Ranked Probability Score (CRPS) is used to evaluate the probabilistic forecasts. However, applying the CRPS to weighted empirical distribution functions (deriving from the weighted ensemble) may introduce a bias because of which minimizing the CRPS does not produce the optimal weights. Thus we propose an unbiased version of the CRPS which relies on clusters of members and is strictly proper. We adapt online learning methods for the minimization of the CRPS. These methods generate the weights associated with the members in the forecasted empirical distribution function. The weights are updated before each forecast step using only past observations and forecasts. Our learning algorithms provide the theoretical guarantee that, in the long run, the CRPS of the weighted forecasts is at least as good as the CRPS of any weighted ensemble with weights constant in time. In particular, the performance of our forecast is better than that of any subset ensemble with uniform weights. A noteworthy advantage of our algorithm is that it does not require any assumption on the distributions of the observations and forecasts, both for the application and for the theoretical guarantee to hold. As an application example on meteorological forecasts for photovoltaic production integration, we show that our algorithm generates a calibrated probabilistic forecast, with significant performance improvements on probabilistic diagnostic tools (the CRPS, the reliability diagram and the rank histogram).
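For reference, the CRPS of a weighted ensemble treated as a discrete predictive distribution can be computed as below; this is the standard (and, as the abstract notes, potentially biased) estimator that the proposed cluster-based version corrects. The toy numbers are illustrative.

```python
import numpy as np

def weighted_crps(members, weights, obs):
    """CRPS of a weighted ensemble treated as a discrete predictive
    distribution:

        CRPS = sum_i w_i |x_i - y| - 0.5 * sum_ij w_i w_j |x_i - x_j|
    """
    x = np.asarray(members, float)
    w = np.asarray(weights, float)
    w = w / w.sum()
    term1 = np.sum(w * np.abs(x - obs))
    term2 = 0.5 * np.sum(w[:, None] * w[None, :] * np.abs(x[:, None] - x[None, :]))
    return term1 - term2

# Toy example: three members with unequal weights, verified against obs = 1.0.
print(weighted_crps([0.8, 1.1, 1.4], [0.2, 0.5, 0.3], obs=1.0))
```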
Fuzzy forecasting based on fuzzy-trend logical relationship groups.
Chen, Shyi-Ming; Wang, Nai-Yi
2010-10-01
In this paper, we present a new method to predict the Taiwan Stock Exchange Capitalization Weighted Stock Index (TAIEX) based on fuzzy-trend logical relationship groups (FTLRGs). The proposed method divides fuzzy logical relationships into FTLRGs based on the trend of adjacent fuzzy sets appearing in the antecedents of fuzzy logical relationships. First, we apply an automatic clustering algorithm to cluster the historical data into intervals of different lengths. Then, we define fuzzy sets based on these intervals of different lengths. Then, the historical data are fuzzified into fuzzy sets to derive fuzzy logical relationships. Then, we divide the fuzzy logical relationships into FTLRGs for forecasting the TAIEX. Moreover, we also apply the proposed method to forecast the enrollments and the inventory demand, respectively. The experimental results show that the proposed method gets higher average forecasting accuracy rates than the existing methods.
Models for short term malaria prediction in Sri Lanka
Briët, Olivier JT; Vounatsou, Penelope; Gunawardena, Dissanayake M; Galappaththy, Gawrie NL; Amerasinghe, Priyanie H
2008-01-01
Background: Malaria in Sri Lanka is unstable and fluctuates in intensity both spatially and temporally. Although the case counts are dwindling at present, given the past history of resurgence of outbreaks despite effective control measures, the control programmes have to stay prepared. The availability of long time series of monitored/diagnosed malaria cases allows for the study of forecasting models, with an aim to developing a forecasting system which could assist in the efficient allocation of resources for malaria control. Methods: Exponentially weighted moving average models, autoregressive integrated moving average (ARIMA) models with seasonal components, and seasonal multiplicative autoregressive integrated moving average (SARIMA) models were compared on monthly time series of district malaria cases for their ability to predict the number of malaria cases one to four months ahead. The addition of covariates such as the number of malaria cases in neighbouring districts or rainfall were assessed for their ability to improve prediction of selected (seasonal) ARIMA models. Results: The best model for forecasting and the forecasting error varied strongly among the districts. The addition of rainfall as a covariate improved prediction of selected (seasonal) ARIMA models modestly in some districts but worsened prediction in other districts. Improvement by adding rainfall was more frequent at larger forecasting horizons. Conclusion: Heterogeneity of patterns of malaria in Sri Lanka requires regionally specific prediction models. Prediction error was large at a minimum of 22% (for one of the districts) for one month ahead predictions. The modest improvement made in short term prediction by adding rainfall as a covariate to these prediction models may not be sufficient to merit investing in a forecasting system for which rainfall data are routinely processed. PMID:18460204
Past and projected trends of body mass index and weight status in South Australia: 2003 to 2019.
Hendrie, Gilly A; Ullah, Shahid; Scott, Jane A; Gray, John; Berry, Narelle; Booth, Sue; Carter, Patricia; Cobiac, Lynne; Coveney, John
2015-12-01
Functional data analysis (FDA) is a forecasting approach that, to date, has not been applied to obesity, and that may provide more accurate forecasting analysis to manage uncertainty in public health. This paper uses FDA to provide projections of Body Mass Index (BMI), overweight and obesity in an Australian population through to 2019. Data from the South Australian Monitoring and Surveillance System (January 2003 to December 2012, n=51,618 adults) were collected via telephone interview survey. FDA was conducted in four steps: 1) age-gender specific BMIs for each year were smoothed using a weighted regression; 2) the functional principal components decomposition was applied to estimate the basis functions; 3) an exponential smoothing state space model was used for forecasting the coefficient series; and 4) forecast coefficients were combined with the basis function. The forecast models suggest that between 2012 and 2019 average BMI will increase from 27.2 kg/m(2) to 28.0 kg/m(2) in males and 26.4 kg/m(2) to 27.6 kg/m(2) in females. The prevalence of obesity is forecast to increase by 6-7 percentage points by 2019 (to 28.7% in males and 29.2% in females). Projections identify age-gender groups at greatest risk of obesity over time. The novel approach will be useful to facilitate more accurate planning and policy development. © 2015 Public Health Association of Australia.
Tourism demand in the Algarve region: Evolution and forecast using SVARMA models
NASA Astrophysics Data System (ADS)
Lopes, Isabel Cristina; Soares, Filomena; Silva, Eliana Costa e.
2017-06-01
Tourism is one of the Portuguese economy's key sectors, and its relative weight has grown over recent years. The Algarve region is particularly focused on attracting foreign tourists and has built up a large and diversified offer of hotel units over the years. In this paper we present a multivariate time series approach to forecast the number of overnight stays in hotel units (hotels, guesthouses or hostels, and tourist apartments) in the Algarve. We fit a seasonal vector autoregressive moving average (SVARMA) model to monthly data between 2006 and 2016. The forecast values were compared with the actual values of overnight stays in the Algarve in 2016, yielding a MAPE of 15.1% and an RMSE of 53847.28. The MAPE for the Hotel series was merely 4.56%. These forecasts can be used by hotel managers to predict occupancy and to determine the best pricing policy.
Naive vs. Sophisticated Methods of Forecasting Public Library Circulations.
ERIC Educational Resources Information Center
Brooks, Terrence A.
1984-01-01
Two sophisticated--autoregressive integrated moving average (ARIMA), straight-line regression--and two naive--simple average, monthly average--forecasting techniques were used to forecast monthly circulation totals of 34 public libraries. Comparisons of forecasts and actual totals revealed that ARIMA and monthly average methods had smallest mean…
NASA Astrophysics Data System (ADS)
Lahmiri, Salim; Boukadoum, Mounir
2015-08-01
We present a new ensemble system for stock market returns prediction where continuous wavelet transform (CWT) is used to analyze return series and backpropagation neural networks (BPNNs) for processing CWT-based coefficients, determining the optimal ensemble weights, and providing final forecasts. Particle swarm optimization (PSO) is used for finding optimal weights and biases for each BPNN. To capture symmetry/asymmetry in the underlying data, three wavelet functions with different shapes are adopted. The proposed ensemble system was tested on three Asian stock markets: the Hang Seng, KOSPI, and Taiwan stock market data. Three statistical metrics were used to evaluate the forecasting accuracy: mean absolute error (MAE), root mean squared error (RMSE), and mean absolute deviation (MAD). Experimental results showed that our proposed ensemble system outperformed the individual CWT-ANN models, each with a different wavelet function. In addition, the proposed ensemble system outperformed the conventional autoregressive moving average process. As a result, the proposed ensemble system is suitable for capturing symmetry/asymmetry in financial data fluctuations for better prediction accuracy.
NASA Astrophysics Data System (ADS)
Shah-Heydari pour, A.; Pahlavani, P.; Bigdeli, B.
2017-09-01
With the industrialization of cities and the apparent increase in pollutants and greenhouse gases, the importance of forests as the natural lungs of the earth in cleaning these pollutants is felt more than ever. Annually, a large part of the forests is destroyed due to the lack of timely action during fires. Knowledge about areas with a high risk of fire, and equipping these areas by constructing access routes and allocating fire-fighting equipment, can help to limit the destruction of the forest. In this research, the fire risk of the region was forecast and a risk map was produced using MODIS images, by applying a geographically weighted regression model with a Gaussian kernel and ordinary least squares over the parameters affecting forest fire, including distance from residential areas, distance from the river, distance from the road, height, slope, aspect, soil type, land use, average temperature, wind speed, and rainfall. After evaluation, it was found that the geographically weighted regression model with a Gaussian kernel correctly forecast 93.4% of all fire points, whereas the ordinary least squares method correctly forecast only 66% of the fire points.
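A minimal sketch of a geographically weighted regression prediction with a Gaussian kernel, the core technique named above: observations are weighted by their distance to the prediction location before a weighted least-squares fit. The predictors, coordinates and bandwidth in the usage example are synthetic placeholders, not the study's MODIS-derived data.

```python
import numpy as np

def gwr_gaussian_predict(X, y, coords, x0, coord0, bandwidth):
    """One-point geographically weighted regression with a Gaussian kernel.

    Observations closer to the prediction location coord0 receive larger
    weights; a weighted least-squares fit is then used to predict at x0.
    """
    d = np.linalg.norm(coords - coord0, axis=1)
    w = np.exp(-0.5 * (d / bandwidth) ** 2)          # Gaussian kernel weights
    Xd = np.column_stack([np.ones(len(X)), X])       # add intercept
    W = np.diag(w)
    beta = np.linalg.solve(Xd.T @ W @ Xd, Xd.T @ W @ y)
    return float(np.r_[1.0, x0] @ beta)

# Synthetic example with three hypothetical predictors and 2-D coordinates.
rng = np.random.default_rng(2)
coords = rng.uniform(0, 10, size=(200, 2))
X = rng.normal(size=(200, 3))
y = X @ np.array([1.0, -0.5, 0.3]) + 0.2 * coords[:, 0] + rng.normal(scale=0.1, size=200)
print(gwr_gaussian_predict(X, y, coords,
                           x0=np.zeros(3), coord0=np.array([5.0, 5.0]), bandwidth=2.0))
```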
Probabilistic Forecasting of Surface Ozone with a Novel Statistical Approach
NASA Technical Reports Server (NTRS)
Balashov, Nikolay V.; Thompson, Anne M.; Young, George S.
2017-01-01
The recent change in the Environmental Protection Agency's surface ozone regulation, lowering the surface ozone daily maximum 8-h average (MDA8) exceedance threshold from 75 to 70 ppbv, poses significant challenges to U.S. air quality (AQ) forecasters responsible for ozone MDA8 forecasts. The forecasters, supplied with only a few AQ model products, end up relying heavily on self-developed tools. To help U.S. AQ forecasters, this study explores a surface ozone MDA8 forecasting tool that is based solely on statistical methods and standard meteorological variables from numerical weather prediction (NWP) models. The model combines the self-organizing map (SOM), which is a clustering technique, with a stepwise weighted quadratic regression using meteorological variables as predictors for ozone MDA8. The SOM method identifies different weather regimes, to distinguish between various modes of ozone variability, and groups them according to similarity. In this way, when a regression is developed for a specific regime, data from the other regimes are also used, with weights that are based on their similarity to this specific regime. This approach, regression in SOM (REGiS), yields a distinct model for each regime, taking into account both the training cases for that regime and other similar training cases. To produce probabilistic MDA8 ozone forecasts, REGiS weighs and combines all of the developed regression models on the basis of the weather patterns predicted by an NWP model. REGiS is evaluated over the San Joaquin Valley in California and the northeastern plains of Colorado. The results suggest that the model performs best when trained and adjusted separately for an individual AQ station and its corresponding meteorological site.
Improved Neural Networks with Random Weights for Short-Term Load Forecasting
Lang, Kun; Zhang, Mingyuan; Yuan, Yongbo
2015-01-01
An effective forecasting model for short-term load plays a significant role in promoting the management efficiency of an electric power system. This paper proposes a new forecasting model based on the improved neural networks with random weights (INNRW). The key is to introduce a weighting technique to the inputs of the model and use a novel neural network to forecast the daily maximum load. Eight factors are selected as the inputs. A mutual information weighting algorithm is then used to allocate different weights to the inputs. The neural networks with random weights and kernels (KNNRW) is applied to approximate the nonlinear function between the selected inputs and the daily maximum load due to the fast learning speed and good generalization performance. In the application of the daily load in Dalian, the result of the proposed INNRW is compared with several previously developed forecasting models. The simulation experiment shows that the proposed model performs the best overall in short-term load forecasting. PMID:26629825
Improved Neural Networks with Random Weights for Short-Term Load Forecasting.
Lang, Kun; Zhang, Mingyuan; Yuan, Yongbo
2015-01-01
An effective forecasting model for short-term load plays a significant role in promoting the management efficiency of an electric power system. This paper proposes a new forecasting model based on the improved neural networks with random weights (INNRW). The key is to introduce a weighting technique to the inputs of the model and use a novel neural network to forecast the daily maximum load. Eight factors are selected as the inputs. A mutual information weighting algorithm is then used to allocate different weights to the inputs. The neural networks with random weights and kernels (KNNRW) is applied to approximate the nonlinear function between the selected inputs and the daily maximum load due to the fast learning speed and good generalization performance. In the application of the daily load in Dalian, the result of the proposed INNRW is compared with several previously developed forecasting models. The simulation experiment shows that the proposed model performs the best overall in short-term load forecasting.
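A hedged sketch of the input-weighting step described in the two records above: each input is weighted by its estimated mutual information with the target before being passed to the network. The scikit-learn estimator and the normalization to unit sum are assumptions standing in for the paper's mutual information weighting algorithm.

```python
import numpy as np
from sklearn.feature_selection import mutual_info_regression

def mi_input_weights(X, y):
    """Weight each input feature by its mutual information with the target,
    normalized to sum to one (a simple stand-in for the paper's weighting
    step; the exact estimator used there may differ)."""
    mi = mutual_info_regression(X, y)
    return mi / mi.sum()

def weight_inputs(X, weights):
    """Scale each input column by its weight before feeding the network."""
    return X * weights

# Synthetic example: four hypothetical inputs, two of which drive the load.
rng = np.random.default_rng(3)
X = rng.normal(size=(500, 4))
y = 2.0 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.1, size=500)
w = mi_input_weights(X, y)
print(w)
X_weighted = weight_inputs(X, w)
```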
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lu, Siyuan; Hwang, Youngdeok; Khabibrakhmanov, Ildar
With increasing penetration of solar and wind energy into the total energy supply mix, the pressing need for accurate energy forecasting has become well recognized. Here we report the development of a machine-learning based model blending approach for statistically combining multiple meteorological models to improve the accuracy of solar/wind power forecasts. Importantly, we demonstrate that, in addition to the parameters to be predicted (such as solar irradiance and power), including additional atmospheric state parameters which collectively define weather situations as machine learning input provides further enhanced accuracy for the blended result. Functional analysis of variance shows that the error of an individual model has substantial dependence on the weather situation. The machine-learning approach effectively reduces such situation-dependent error and thus produces more accurate results compared to conventional multi-model ensemble approaches based on simplistic equally or unequally weighted model averaging. Validation results over an extended period of time show over 30% improvement in solar irradiance/power forecast accuracy compared to forecasts based on the best individual model.
Entropy Econometrics for combining regional economic forecasts: A Data-Weighted Prior Estimator
NASA Astrophysics Data System (ADS)
Fernández-Vázquez, Esteban; Moreno, Blanca
2017-10-01
Forecast combination has been studied in econometrics for a long time, and the literature has shown the superior performance of forecast combination over individual predictions. However, there is still controversy over which is the best procedure for specifying the forecast weights. This paper explores the possibility of using a procedure based on Entropy Econometrics, which allows setting the weights for the individual forecasts as a mixture of different alternatives. In particular, we examine the ability of the Data-Weighted Prior Estimator proposed by Golan (J Econom 101(1):165-193, 2001) to combine forecasting models in a context of small sample sizes, a relatively common scenario when dealing with time series for regional economies. We test the validity of the proposed approach using a simulation exercise and a real-world example that aims at predicting gross regional product growth rates for a regional economy. The forecasting performance of the proposed Data-Weighted Prior Estimator is compared with other combining methods. The simulation results indicate that in scenarios of heavily ill-conditioned datasets the suggested approach dominates other forecast combination strategies. The empirical results are consistent with the conclusions found in the numerical experiment.
A stochastic HMM-based forecasting model for fuzzy time series.
Li, Sheng-Tun; Cheng, Yi-Chung
2010-10-01
Recently, fuzzy time series have attracted more academic attention than traditional time series due to their capability of dealing with the uncertainty and vagueness inherent in the data collected. The formulation of fuzzy relations is one of the key issues affecting forecasting results. Most of the present works adopt IF-THEN rules for relationship representation, which leads to higher computational overhead and rule redundancy. Sullivan and Woodall proposed a Markov-based formulation and a forecasting model to reduce computational overhead; however, its applicability is limited to handling one-factor problems. In this paper, we propose a novel forecasting model based on the hidden Markov model by enhancing Sullivan and Woodall's work to allow handling of two-factor forecasting problems. Moreover, in order to make the nature of conjecture and randomness of forecasting more realistic, the Monte Carlo method is adopted to estimate the outcome. To test the effectiveness of the resulting stochastic model, we conduct two experiments and compare the results with those from other models. The first experiment consists of forecasting the daily average temperature and cloud density in Taipei, Taiwan, and the second experiment is based on the Taiwan Weighted Stock Index by forecasting the exchange rate of the New Taiwan dollar against the U.S. dollar. In addition to improving forecasting accuracy, the proposed model adheres to the central limit theorem, and thus, the result statistically approximates to the real mean of the target value being forecast.
NASA Astrophysics Data System (ADS)
Escriba, P. A.; Callado, A.; Santos, D.; Santos, C.; Simarro, J.; García-Moya, J. A.
2009-09-01
At 00 UTC 24 January 2009 an explosive cyclogenesis that originated over the Atlantic Ocean reached its maximum intensity, with observed surface pressures lower than 970 hPa at its centre, located over the Bay of Biscay. During its path through southern France this low caused strong westerly and north-westerly winds over the Iberian Peninsula, higher than 150 km/h in some places. These extreme winds left 10 casualties in Spain, 8 of them in Catalonia. The aim of this work is to show whether there is added value in the short-range prediction of the 24 January 2009 strong winds when using the Short Range Ensemble Prediction System (SREPS) of the Spanish Meteorological Agency (AEMET), with respect to the operational forecasting tools. This study emphasizes two aspects of probabilistic forecasting: the ability of a 3-day forecast to warn of an extreme wind event, and the ability to quantify the predictability of the event, thereby adding value to the deterministic forecast. Two types of probabilistic wind forecasts are carried out, a non-calibrated one and one calibrated using Bayesian Model Averaging (BMA). AEMET runs SREPS experimentally twice a day (00 and 12 UTC). The system consists of 20 members constructed by integrating 5 limited-area models, COSMO (COSMO), HIRLAM (HIRLAM Consortium), HRM (DWD), MM5 (NOAA) and UM (UKMO), at 25 km horizontal resolution. Each model uses 4 different initial and boundary conditions, from the global models GFS (NCEP), GME (DWD), IFS (ECMWF) and UM. In this way a probabilistic forecast is obtained that takes into account initial-condition, boundary-condition and model errors. BMA is a statistical tool for combining predictive probability functions from different sources. The BMA predictive probability density function (PDF) is a weighted average of PDFs centered on the individual bias-corrected forecasts. The weights are equal to the posterior probabilities of the models generating the forecasts and reflect the skill of the ensemble members. Here BMA is applied to provide probabilistic forecasts of wind speed. In this work several forecasts for different time ranges (H+72, H+48 and H+24) of 10-m wind speed over Catalonia are verified subjectively at one of the instants of maximum intensity, 12 UTC 24 January 2009. On one hand, three probabilistic forecasts are compared: ECMWF EPS, non-calibrated SREPS and calibrated SREPS. On the other hand, the relationship between predictability and the skill of the deterministic forecast is studied by looking at HIRLAM 0.16 deterministic forecasts of the event. Verification focuses on the location and intensity of the 10-m wind speed, and 10-minute measurements from AEMET automatic ground stations are used as observations. The results indicate that SREPS is able to forecast, three days ahead, mean winds higher than 36 km/h and correctly localizes them with a significant probability of occurrence in the affected area. The probability is higher after BMA calibration of the ensemble. The fact that the probability of strong winds is high allows us to state that the predictability of the event is also high and, as a consequence, deterministic forecasts are more reliable. This is confirmed when verifying the HIRLAM deterministic forecasts against observed values.
A New Integrated Weighted Model in SNOW-V10: Verification of Categorical Variables
NASA Astrophysics Data System (ADS)
Huang, Laura X.; Isaac, George A.; Sheng, Grant
2014-01-01
This paper presents the verification results for nowcasts of seven categorical variables from an integrated weighted model (INTW) and the underlying numerical weather prediction (NWP) models. Nowcasting, or short-range forecasting (0-6 h), over complex terrain with sufficient accuracy is highly desirable but very challenging. A weighting, evaluation, bias correction and integration system (WEBIS) for generating nowcasts by integrating NWP forecasts and high-frequency observations was used during the Vancouver 2010 Olympic and Paralympic Winter Games as part of the Science of Nowcasting Olympic Weather for Vancouver 2010 (SNOW-V10) project. Forecast data from the Canadian high-resolution deterministic NWP system with three nested grids (at 15-, 2.5- and 1-km horizontal grid spacing) were selected as the background gridded data for generating the integrated nowcasts. The seven forecast variables of temperature, relative humidity, wind speed, wind gust, visibility, ceiling and precipitation rate are treated as categorical variables for verifying the integrated weighted forecasts. By analyzing the verification of forecasts from INTW and the NWP models at 15 sites, the integrated weighted model was found to produce more accurate forecasts for the seven selected forecast variables, regardless of location. This is based on the multi-categorical Heidke skill scores for the test period 12 February to 21 March 2010.
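The multi-categorical Heidke skill score used for this verification can be computed from a forecast-observation contingency table; the table below is illustrative only.

```python
import numpy as np

def heidke_skill_score(contingency):
    """Multi-category Heidke skill score from a K x K contingency table
    (rows: forecast category, columns: observed category)."""
    n = np.asarray(contingency, dtype=float)
    total = n.sum()
    p_correct = np.trace(n) / total                               # observed hit fraction
    p_random = np.sum(n.sum(axis=1) * n.sum(axis=0)) / total**2   # chance agreement
    return (p_correct - p_random) / (1.0 - p_random)

# Illustrative 3-category table (e.g. wind speed in low/medium/high classes).
table = [[40,  8,  2],
         [10, 30,  5],
         [ 3,  7, 20]]
print("HSS =", round(heidke_skill_score(table), 3))
```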
NASA Astrophysics Data System (ADS)
Bailey, Monika E.; Isaac, George A.; Gultepe, Ismail; Heckman, Ivan; Reid, Janti
2014-01-01
An automated short-range forecasting system, adaptive blending of observations and model (ABOM), was tested in real time during the 2010 Vancouver Olympic and Paralympic Winter Games in British Columbia. Data at 1-min time resolution were available from a newly established, dense network of surface observation stations. Climatological data were not available at these new stations. This, combined with output from new high-resolution numerical models, provided a unique and exciting setting to test nowcasting systems in mountainous terrain during winter weather conditions. The ABOM method blends extrapolations in time of recent local observations with numerical weather prediction (NWP) model output to generate short-range point forecasts of surface variables out to 6 h. The relative weights of the model forecast and the observation extrapolation are based on performance over recent history. The average performance of ABOM nowcasts during February and March 2010 was evaluated using standard scores and thresholds important for Olympic events. Significant improvements over the model forecasts alone were obtained for continuous variables such as temperature, relative humidity and wind speed. The small improvements to forecasts of variables such as visibility and ceiling, which are subject to discontinuous changes, are attributed to the persistence component of ABOM.
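The abstract does not spell out how the performance-based weights are formed; one plausible sketch is inverse mean-square-error weighting of the two components over a recent window (an assumption for illustration, not the ABOM formulation).

```python
import numpy as np

def blend_weight(errors_obs_extrap, errors_nwp, eps=1e-6):
    """Weight for the observation-extrapolation component based on recent
    mean-square errors of the two components (inverse-MSE weighting)."""
    mse_o = np.mean(np.square(errors_obs_extrap)) + eps
    mse_m = np.mean(np.square(errors_nwp)) + eps
    return (1.0 / mse_o) / (1.0 / mse_o + 1.0 / mse_m)

# Recent (forecast - observed) errors over a short history, in deg C (invented).
err_extrap = np.array([0.4, -0.2, 0.6, 0.1, -0.3])
err_model  = np.array([1.5, -1.8, 2.0, 1.1, -0.9])

w = blend_weight(err_extrap, err_model)
extrap_fcst, nwp_fcst = -3.2, -1.0          # current point forecasts (deg C)
blend = w * extrap_fcst + (1.0 - w) * nwp_fcst
print(f"weight on extrapolation = {w:.2f}, blended forecast = {blend:.2f} C")
```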
Estimating the extreme low-temperature event using nonparametric methods
NASA Astrophysics Data System (ADS)
D'Silva, Anisha
This thesis presents a new method of estimating the one-in-N low temperature threshold using a non-parametric statistical method called kernel density estimation applied to daily average wind-adjusted temperatures. We apply our One-in-N Algorithm to local gas distribution companies (LDCs), as they have to forecast the daily natural gas needs of their consumers. In winter, demand for natural gas is high. Extreme low temperature events are not directly related to an LDC's gas demand forecasting, but knowledge of extreme low temperatures is important to ensure that an LDC has enough capacity to meet customer demands when extreme low temperatures are experienced. We present a detailed explanation of our One-in-N Algorithm and compare it to methods using the generalized extreme value distribution, the normal distribution, and the variance-weighted composite distribution. We show that our One-in-N Algorithm estimates the one-in-N low temperature threshold more accurately than these methods according to the root mean square error (RMSE) measure at a 5% level of significance. The One-in-N Algorithm is tested by counting the number of times the daily average wind-adjusted temperature is less than or equal to the one-in-N low temperature threshold.
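A minimal sketch of the kernel-density idea follows; the synthetic data, the definition of the one-in-N event as a lower-tail probability of one day in N winters, and the season length are assumptions for illustration, not the thesis's exact algorithm.

```python
import numpy as np
from scipy.stats import gaussian_kde
from scipy.optimize import brentq

rng = np.random.default_rng(1)

# Synthetic daily average wind-adjusted winter temperatures (deg C).
temps = rng.normal(loc=-5.0, scale=8.0, size=30 * 120)   # 30 winters x 120 days

kde = gaussian_kde(temps)
days_per_winter = 120
N = 20                                      # one-in-20-winters event (assumed)
target_prob = 1.0 / (N * days_per_winter)   # lower-tail probability per day

def lower_tail(t):
    # P(T <= t) under the kernel density estimate.
    return kde.integrate_box_1d(-np.inf, t)

# Solve lower_tail(t) = target_prob for the threshold t.
threshold = brentq(lambda t: lower_tail(t) - target_prob,
                   temps.min() - 30.0, temps.mean())
print(f"one-in-{N} low temperature threshold ≈ {threshold:.1f} C")
```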
Climatological Observations for Maritime Prediction and Analysis Support Service (COMPASS)
NASA Astrophysics Data System (ADS)
OConnor, A.; Kirtman, B. P.; Harrison, S.; Gorman, J.
2016-02-01
Current US Navy forecasting systems cannot easily incorporate extended-range forecasts that can improve mission readiness and effectiveness; ensure safety; and reduce cost, labor, and resource requirements. If Navy operational planners had systems that incorporated these forecasts, they could plan missions using more reliable and longer-term weather and climate predictions. Further, using multi-model forecast ensembles instead of single forecasts would produce higher predictive performance. Extended-range multi-model forecast ensembles, such as those available in the North American Multi-Model Ensemble (NMME), are ideal for system integration because of their high-skill predictions; however, even higher skill can be achieved if forecast model ensembles are combined correctly. While many methods for weighting models exist, choosing the best method in a given environment requires expert knowledge of the models and combination methods. We present an innovative approach that uses machine learning to combine extended-range predictions from multi-model forecast ensembles and generate a probabilistic forecast for any region of the globe up to 12 months in advance. Our machine-learning approach uses 30 years of hindcast predictions to learn patterns of forecast model successes and failures. Each model is assigned a weight for each environmental condition, 100 km2 region, and day, given any expected environmental information. These weights are then applied to the respective predictions for the region and time of interest to effectively stitch together a single, coherent probabilistic forecast. Our experimental results demonstrate the benefits of our approach for producing extended-range probabilistic forecasts for regions and time periods of interest that are superior, in terms of skill, to individual NMME forecast models and commonly weighted models. The probabilistic forecast leverages the strengths of three NMME forecast models to predict environmental conditions for an area spanning from San Diego, CA to Honolulu, HI, seven months in advance. Key findings include: weighted combinations of models are strictly better than individual models; machine-learned combinations are especially better; and forecasts produced using our approach have the highest rank probability skill score most often.
Meta-heuristic CRPS minimization for the calibration of short-range probabilistic forecasts
NASA Astrophysics Data System (ADS)
Mohammadi, Seyedeh Atefeh; Rahmani, Morteza; Azadi, Majid
2016-08-01
This paper deals with probabilistic short-range temperature forecasts over synoptic meteorological stations across Iran using non-homogeneous Gaussian regression (NGR). NGR creates a Gaussian forecast probability density function (PDF) from the ensemble output. The mean of the normal predictive PDF is a bias-corrected weighted average of the ensemble members and its variance is a linear function of the raw ensemble variance. The coefficients for the mean and variance are estimated by minimizing the continuous ranked probability score (CRPS) over a training period. The CRPS is a scoring rule for distributional forecasts. In Gneiting et al. (Mon Weather Rev 133:1098-1118, 2005), the Broyden-Fletcher-Goldfarb-Shanno (BFGS) method is used to minimize the CRPS. Since BFGS is a conventional optimization method with its own limitations, we suggest using particle swarm optimization (PSO), a robust meta-heuristic method, to minimize the CRPS. The ensemble prediction system used in this study consists of nine different configurations of the Weather Research and Forecasting model for 48-h forecasts of temperature during autumn and winter 2011 and 2012. The probabilistic forecasts were evaluated using several common verification scores including the Brier score, attribute diagram and rank histogram. Results show that both BFGS and PSO find the optimal solution and yield the same evaluation scores, but PSO can do this with a feasible random first guess and much less computational complexity.
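A compact sketch of CRPS-based NGR fitting is shown below, using the closed-form CRPS of a Gaussian. For simplicity the predictive mean is modelled from the ensemble mean only, the data are synthetic, and a derivative-free optimizer stands in for BFGS or PSO, so this illustrates the objective rather than the paper's implementation.

```python
import numpy as np
from scipy.stats import norm
from scipy.optimize import minimize

def crps_normal(mu, sigma, y):
    """Closed-form CRPS of a N(mu, sigma^2) forecast for observation y."""
    z = (y - mu) / sigma
    return sigma * (z * (2.0 * norm.cdf(z) - 1.0)
                    + 2.0 * norm.pdf(z) - 1.0 / np.sqrt(np.pi))

def ngr_crps(params, ens_mean, ens_var, obs):
    """Mean CRPS of the predictive N(a + b*mean, c + d*var) over training cases."""
    a, b, c, d = params
    mu = a + b * ens_mean
    sigma = np.sqrt(np.maximum(c + d * ens_var, 1e-6))   # keep variance positive
    return np.mean(crps_normal(mu, sigma, obs))

# Synthetic training data: ensemble mean/variance and verifying temperatures.
rng = np.random.default_rng(2)
truth = rng.normal(10.0, 5.0, size=500)
ens_mean = truth + rng.normal(1.0, 2.0, size=500)        # biased, noisy ensemble mean
ens_var = np.full(500, 1.0)                              # underdispersive raw spread

res = minimize(ngr_crps, x0=[0.0, 1.0, 1.0, 1.0],
               args=(ens_mean, ens_var, truth), method="Nelder-Mead")
print("fitted (a, b, c, d):", np.round(res.x, 2))
```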
Wave height data assimilation using non-stationary kriging
NASA Astrophysics Data System (ADS)
Tolosana-Delgado, R.; Egozcue, J. J.; Sáchez-Arcilla, A.; Gómez, J.
2011-03-01
Data assimilation into numerical models should be both computationally fast and physically meaningful in order to be applicable in online environmental surveillance. We present a way to improve assimilation for computationally intensive models, based on non-stationary kriging and a separable space-time covariance function. The method is illustrated with significant wave height data. The covariance function is expressed as a collection of fields: each one is obtained as the empirical covariance between the studied property (significant wave height in log-scale) at a pixel where a measurement is located (a wave buoy is available) and the same parameter at every other pixel of the field. These covariances are computed from the available history of forecasts. The method provides a set of weights that can be mapped for each measuring location and that do not vary with time. The resulting weights may be used in a weighted average of the differences between the forecast and measured parameter. In the case presented, these weights may show long-range connection patterns, such as between the Catalan coast and the eastern coast of Sardinia, associated with common prevailing meteo-oceanographic conditions. When such patterns are considered non-informative of the present situation, it is always possible to diminish their influence by relaxing the covariance maps.
Using Time-Series Regression to Predict Academic Library Circulations.
ERIC Educational Resources Information Center
Brooks, Terrence A.
1984-01-01
Four methods were used to forecast monthly circulation totals in 15 midwestern academic libraries: dummy time-series regression, lagged time-series regression, simple average (straight-line forecasting), monthly average (naive forecasting). In tests of forecasting accuracy, dummy regression method and monthly mean method exhibited smallest average…
Application Bayesian Model Averaging method for ensemble system for Poland
NASA Astrophysics Data System (ADS)
Guzikowski, Jakub; Czerwinska, Agnieszka
2014-05-01
The aim of the project is to evaluate methods for generating numerical ensemble weather predictions using meteorological data from the Weather Research & Forecasting model and calibrating these data by means of the Bayesian Model Averaging approach (WRF BMA). We construct high-resolution short-range ensemble forecasts using meteorological data (temperature) generated by nine WRF configurations. The WRF models have 35 vertical levels and 2.5 km x 2.5 km horizontal resolution. The main emphasis is that the ensemble members use different parameterizations of the physical phenomena occurring in the boundary layer. To calibrate the ensemble forecast we use the Bayesian Model Averaging (BMA) approach. The BMA predictive probability density function (PDF) is a weighted average of the predictive PDFs associated with each individual ensemble member, with weights that reflect the member's relative skill. As a test case we chose a period with a heat wave and convective weather conditions over Poland from 23 July to 1 August 2013. From 23 to 29 July 2013 the temperature oscillated around 30 degrees Celsius at many meteorological stations and new temperature records were set. During this time an increase in patients hospitalized with cardiovascular problems was registered. On 29 July 2013 an advection of moist tropical air masses over Poland caused a strong convective event with a mesoscale convective system (MCS). The MCS caused local flooding, damage to transport infrastructure, destroyed buildings and trees, and led to injuries and a direct threat to life. The meteorological data from the ensemble system are compared with data recorded at 74 weather stations located in Poland. We prepare a set of model-observation pairs, and the data from the single ensemble members and the median of the WRF BMA system are then evaluated using the deterministic error statistics Root Mean Square Error (RMSE) and Mean Absolute Error (MAE). To evaluate the probabilistic data, the Brier Score (BS) and Continuous Ranked Probability Score (CRPS) are used. Finally, a comparison between the BMA-calibrated data and the data from the ensemble members is presented.
Seasonal forecasting of discharge for the Raccoon River, Iowa
NASA Astrophysics Data System (ADS)
Slater, Louise; Villarini, Gabriele; Bradley, Allen; Vecchi, Gabriel
2016-04-01
The state of Iowa (central United States) is regularly afflicted by severe natural hazards such as the 2008/2013 floods and the 2012 drought. To improve preparedness for these catastrophic events and allow Iowans to make more informed decisions about the most suitable water management strategies, we have developed a framework for medium to long range probabilistic seasonal streamflow forecasting for the Raccoon River at Van Meter, a 8900-km2 catchment located in central-western Iowa. Our flow forecasts use statistical models to predict seasonal discharge for low to high flows, with lead forecasting times ranging from one to ten months. Historical measurements of daily discharge are obtained from the U.S. Geological Survey (USGS) at the Van Meter stream gage, and used to compute quantile time series from minimum to maximum seasonal flow. The model is forced with basin-averaged total seasonal precipitation records from the PRISM Climate Group and annual row crop production acreage from the U.S. Department of Agriculture's National Agricultural Statistics Services database. For the forecasts, we use corn and soybean production from the previous year (persistence forecast) as a proxy for the impacts of agricultural practices on streamflow. The monthly precipitation forecasts are provided by eight Global Climate Models (GCMs) from the North American Multi-Model Ensemble (NMME), with lead times ranging from 0.5 to 11.5 months, and a resolution of 1 decimal degree. Additionally, precipitation from the month preceding each season is used to characterize antecedent soil moisture conditions. The accuracy of our modelled (1927-2015) and forecasted (2001-2015) discharge values is assessed by comparison with the observed USGS data. We explore the sensitivity of forecast skill over the full range of lead times, flow quantiles, forecast seasons, and with each GCM. Forecast skill is also examined using different formulations of the statistical models, as well as NMME forecast weighting procedures based on the computed potential skill (historical forecast accuracy) of the different GCMs. We find that the models describe the year-to-year variability in streamflow accurately, as well as the overall tendency towards increasing (and more variable) discharge over time. Surprisingly, forecast skill does not decrease markedly with lead time, and high flows tend to be well predicted, suggesting that these forecasts may have considerable practical applications. Further, the seasonal flow forecast accuracy is substantially improved by weighting the contribution of individual GCMs to the forecasts, and also by the inclusion of antecedent precipitation. Our results can provide critical information for adaptation strategies aiming to mitigate the costs and disruptions arising from flood and drought conditions, and allow us to determine how far in advance skillful forecasts can be issued. The availability of these discharge forecasts would have major societal and economic benefits for hydrology and water resources management, agriculture, disaster forecasts and prevention, energy, finance and insurance, food security, policy-making and public authorities, and transportation.
An Optimization of Inventory Demand Forecasting in University Healthcare Centre
NASA Astrophysics Data System (ADS)
Bon, A. T.; Ng, T. K.
2017-01-01
The healthcare industry has become an important field as it concerns people's health, and forecasting demand for health services is an important step in managerial decision making for all healthcare organizations. Hence, a case study was conducted at a University Health Centre to collect historical demand data for Panadol 650 mg over 68 months, from January 2009 until August 2014. The aim of the research is to optimize overall inventory demand through forecasting techniques. Quantitative (time series) forecasting models were used in the case study to forecast future data as a function of past data. The data pattern must be identified before applying the forecasting techniques; here the data exhibit a trend pattern. Ten forecasting techniques are then applied using Risk Simulator software, and the best technique is identified as the one with the smallest forecasting error. The ten techniques are single moving average, single exponential smoothing, double moving average, double exponential smoothing, regression, Holt-Winters additive, seasonal additive, Holt-Winters multiplicative, seasonal multiplicative and Autoregressive Integrated Moving Average (ARIMA). According to the forecasting accuracy measures, the best forecasting technique is regression analysis.
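Two of the listed techniques can be sketched directly (this is not the Risk Simulator implementation, and the demand series below is synthetic):

```python
import numpy as np

def ses_forecasts(y, alpha):
    """One-step-ahead forecasts from single exponential smoothing.
    fc[t] predicts y[t] using observations up to t-1."""
    y = np.asarray(y, dtype=float)
    fc = np.empty_like(y)
    fc[0] = y[0]                      # initialise the level with the first value
    for t in range(1, len(y)):
        fc[t] = alpha * y[t - 1] + (1.0 - alpha) * fc[t - 1]
    return fc

def moving_average_forecasts(y, window):
    """fc[t] is the mean of the previous `window` observations."""
    y = np.asarray(y, dtype=float)
    fc = np.full_like(y, np.nan)
    for t in range(window, len(y)):
        fc[t] = y[t - window:t].mean()
    return fc

def mape(actual, forecast):
    mask = ~np.isnan(forecast)
    return 100.0 * np.mean(np.abs((actual[mask] - forecast[mask]) / actual[mask]))

# Illustrative monthly demand series with a mild upward trend.
rng = np.random.default_rng(3)
demand = 100 + 0.8 * np.arange(68) + rng.normal(0, 8, 68)

for name, fc in [("single exp. smoothing", ses_forecasts(demand, alpha=0.3)),
                 ("3-month moving average", moving_average_forecasts(demand, 3))]:
    print(f"{name:25s} MAPE = {mape(demand, fc):.2f}%")
```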
An information-theoretical perspective on weighted ensemble forecasts
NASA Astrophysics Data System (ADS)
Weijs, Steven V.; van de Giesen, Nick
2013-08-01
This paper presents an information-theoretical method for weighting ensemble forecasts with new information. Weighted ensemble forecasts can be used to adjust the distribution that an existing ensemble of time series represents, without modifying the values in the ensemble itself. The weighting can, for example, add new seasonal forecast information in an existing ensemble of historically measured time series that represents climatic uncertainty. A recent article in this journal compared several methods to determine the weights for the ensemble members and introduced the pdf-ratio method. In this article, a new method, the minimum relative entropy update (MRE-update), is presented. Based on the principle of minimum discrimination information, an extension of the principle of maximum entropy (POME), the method ensures that no more information is added to the ensemble than is present in the forecast. This is achieved by minimizing relative entropy, with the forecast information imposed as constraints. From this same perspective, an information-theoretical view on the various weighting methods is presented. The MRE-update is compared with the existing methods and the parallels with the pdf-ratio method are analysed. The paper provides a new, information-theoretical justification for one version of the pdf-ratio method that turns out to be equivalent to the MRE-update. All other methods result in sets of ensemble weights that, seen from the information-theoretical perspective, add either too little or too much (i.e. fictitious) information to the ensemble.
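For a single mean constraint, the MRE-update takes the familiar exponential-tilting form; the sketch below (with invented ensemble values) solves for the tilting parameter numerically, and is only a simplified reading of the method.

```python
import numpy as np
from scipy.optimize import brentq

def mre_weights(ensemble, target_mean):
    """Minimum relative entropy (w.r.t. equal weights) member weights whose
    weighted mean equals `target_mean`; solution has the form w_i ∝ exp(lam*z_i).
    target_mean must lie strictly between the ensemble minimum and maximum."""
    x = np.asarray(ensemble, dtype=float)
    z = (x - x.mean()) / x.std()                  # standardise for numerical stability

    def tilted(lam):
        w = np.exp(lam * z)
        return w / w.sum()

    lam = brentq(lambda l: tilted(l) @ x - target_mean, -50.0, 50.0)
    return tilted(lam)

# Historical ensemble of seasonal flows and a new forecast mean (illustrative).
flows = np.array([80., 95., 100., 110., 120., 130., 150., 175.])
w = mre_weights(flows, target_mean=125.0)
print("weights:", np.round(w, 3), "-> weighted mean:", round(w @ flows, 1))
```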
NASA Technical Reports Server (NTRS)
Keitz, J. F.
1982-01-01
The impact of more timely and accurate weather data on airline flight planning, with emphasis on fuel savings, is studied. This volume of the report discusses the results of Task 3 of the four major tasks included in the study. Task 3 compares flight plans developed on the Suitland forecast with actual data observed by the aircraft (and averaged over 10-degree segments). The results show that the average difference between the forecast and observed wind speed is 9 kts without considering direction, and the average difference in the component of the forecast wind parallel to the direction of the observed wind is 13 kts, both indicating that the Suitland forecast underestimates the wind speeds. The root mean square (RMS) vector error is 30.1 kts. The average absolute difference in direction between the forecast and observed wind is 26 degrees and the temperature difference is 3 degrees Celsius. These results indicate that the forecast model, as well as the verifying analysis used to develop comparison flight plans in Tasks 1 and 2, is a limiting factor and that the average potential fuel savings or penalty is up to 3.6 percent depending on the direction of flight.
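The statistics quoted above (speed difference, parallel component, RMS vector error, direction difference) can be reproduced from forecast/observed wind pairs as follows; the sample values and the simplified direction convention are illustrative.

```python
import numpy as np

def wind_verification(fc_speed, fc_dir, ob_speed, ob_dir):
    """Simple wind verification statistics; directions in degrees."""
    fr, orad = np.radians(fc_dir), np.radians(ob_dir)
    # u/v components (a single consistent convention is enough for error magnitudes)
    fu, fv = fc_speed * np.sin(fr), fc_speed * np.cos(fr)
    ou, ov = ob_speed * np.sin(orad), ob_speed * np.cos(orad)

    speed_diff = np.mean(fc_speed - ob_speed)                       # ignores direction
    # component of the forecast wind parallel to the observed wind direction
    parallel = fu * np.sin(orad) + fv * np.cos(orad)
    parallel_diff = np.mean(parallel - ob_speed)
    rms_vector = np.sqrt(np.mean((fu - ou) ** 2 + (fv - ov) ** 2))  # RMS vector error
    dir_diff = np.abs(((fc_dir - ob_dir) + 180.0) % 360.0 - 180.0)  # wrap to [0, 180]
    return speed_diff, parallel_diff, rms_vector, np.mean(dir_diff)

# Illustrative forecast/observed pairs over 10-degree route segments (kts, deg).
fc_s = np.array([45., 60., 30.]); fc_d = np.array([270., 250., 300.])
ob_s = np.array([55., 70., 42.]); ob_d = np.array([280., 245., 320.])
print(wind_verification(fc_s, fc_d, ob_s, ob_d))
```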
Nonmechanistic forecasts of seasonal influenza with iterative one-week-ahead distributions.
Brooks, Logan C; Farrow, David C; Hyun, Sangwon; Tibshirani, Ryan J; Rosenfeld, Roni
2018-06-15
Accurate and reliable forecasts of seasonal epidemics of infectious disease can assist in the design of countermeasures and increase public awareness and preparedness. This article describes two main contributions we made recently toward this goal: a novel approach to probabilistic modeling of surveillance time series based on "delta densities", and an optimization scheme for combining output from multiple forecasting methods into an adaptively weighted ensemble. Delta densities describe the probability distribution of the change between one observation and the next, conditioned on available data; chaining together nonparametric estimates of these distributions yields a model for an entire trajectory. Corresponding distributional forecasts cover more observed events than alternatives that treat the whole season as a unit, and improve upon multiple evaluation metrics when extracting key targets of interest to public health officials. Adaptively weighted ensembles integrate the results of multiple forecasting methods, such as delta density, using weights that can change from situation to situation. We treat selection of optimal weightings across forecasting methods as a separate estimation task, and describe an estimation procedure based on optimizing cross-validation performance. We consider some details of the data generation process, including data revisions and holiday effects, both in the construction of these forecasting methods and when performing retrospective evaluation. The delta density method and an adaptively weighted ensemble of other forecasting methods each improve significantly on the next best ensemble component when applied separately, and achieve even better cross-validated performance when used in conjunction. We submitted real-time forecasts based on these contributions as part of CDC's 2015/2016 FluSight Collaborative Comparison. Among the fourteen submissions that season, this system was ranked by CDC as the most accurate.
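A stripped-down version of the weight-estimation idea is sketched below: mixture weights over forecasting methods are chosen to maximize the log score of the probabilities each method assigned to the realised outcomes. The full procedure in the article selects weights by cross-validation; the numbers here are invented.

```python
import numpy as np
from scipy.optimize import minimize

def fit_ensemble_weights(component_probs):
    """Weights over forecasting methods chosen to maximise the mean log score.
    component_probs: array (n_cases, n_methods); entry [t, m] is the probability
    method m assigned to the outcome that actually occurred in case t."""
    p = np.asarray(component_probs, dtype=float)

    def neg_log_score(theta):
        w = np.exp(theta - theta.max())
        w /= w.sum()                                  # softmax keeps weights on the simplex
        return -np.mean(np.log(np.maximum(p @ w, 1e-12)))

    res = minimize(neg_log_score, x0=np.zeros(p.shape[1]), method="Nelder-Mead")
    w = np.exp(res.x - res.x.max())
    return w / w.sum()

# Probabilities three hypothetical methods assigned to the realised outcomes
# (e.g. weekly incidence bins) over past forecasting occasions; values invented.
probs = np.array([[0.30, 0.10, 0.20],
                  [0.25, 0.15, 0.30],
                  [0.40, 0.05, 0.35],
                  [0.20, 0.20, 0.25],
                  [0.35, 0.10, 0.30]])
print("ensemble weights:", np.round(fit_ensemble_weights(probs), 3))
```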
NASA Astrophysics Data System (ADS)
Chen, Tsing-Chang; Yen, Ming-Cheng; Wu, Kuang-Der; Ng, Thomas
1992-08-01
The time evolution of the Indian monsoon is closely related to the locations of the northward-migrating monsoon troughs and ridges, which can be well depicted with the 30-60-day filtered 850-mb streamfunction. Thus, long-range forecasts of the large-scale low-level monsoon can be obtained from those of the filtered 850-mb streamfunction. These long-range forecasts were made in this study in terms of an Auto-Regressive (AR) Moving-Average process. The historical series of the AR model were constructed with 4 months of the 30-60-day filtered 850-mb streamfunction [ψ̃(850 mb)] time series. However, the phase of the last low-frequency cycle in the ψ̃(850 mb) time series can be skewed by the bandpass filtering. To reduce this phase skewness, a simple scheme is introduced. With this phase modification of the filtered 850-mb streamfunction, we performed pilot forecast experiments for three summers with the AR forecast process. The forecast errors in the positions of the northward-propagating monsoon troughs and ridges at Day 20 are generally within the range of 1-2 days behind the observed, except in some extreme cases.
NASA Technical Reports Server (NTRS)
Lambert, Winifred C.; Merceret, Francis J. (Technical Monitor)
2002-01-01
This report describes the results of the Applied Meteorology Unit's (AMU) Short-Range Statistical Forecasting task for peak winds. Peak wind speeds are an important forecast element for the Space Shuttle and Expendable Launch Vehicle programs. The Keith Weather Squadron and the Spaceflight Meteorology Group indicate that peak winds are challenging to forecast. The AMU was tasked to develop tools that aid in short-range forecasts of peak winds at tower sites of operational interest. A 7-year record of wind tower data was used in the analysis. Hourly and directional climatologies by tower and month were developed to determine the seasonal behavior of the average and peak winds. In all climatologies, the average and peak wind speeds were highly variable in time, indicating that the development of a peak wind forecasting tool would be difficult. Probability density functions (PDFs) of peak wind speed were calculated to determine the distribution of peak speed with average speed. These provide forecasters with a means of determining the probability of meeting or exceeding a certain peak wind given an observed or forecast average speed. The climatologies and PDFs provide tools with which to make peak wind forecasts that are critical to safe operations.
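The PDF-based guidance can be approximated empirically: bin the climatology by average speed and count how often the associated peak meets a threshold. The sketch below uses synthetic tower data and a hypothetical bin width.

```python
import numpy as np

def prob_peak_exceeds(avg_speed, peak_speed, observed_avg, peak_threshold, half_width=2.0):
    """Empirical probability that the peak wind meets or exceeds `peak_threshold`
    given an observed or forecast average speed near `observed_avg`."""
    avg_speed, peak_speed = np.asarray(avg_speed), np.asarray(peak_speed)
    in_bin = np.abs(avg_speed - observed_avg) <= half_width
    if not in_bin.any():
        return np.nan
    return np.mean(peak_speed[in_bin] >= peak_threshold)

# Synthetic hourly tower record: average speed and associated peak (kts).
rng = np.random.default_rng(4)
avg = rng.gamma(shape=4.0, scale=3.0, size=5000)
peak = avg * rng.normal(1.4, 0.15, size=5000)          # peaks roughly 40% above average

print("P(peak >= 25 kt | avg ≈ 15 kt) =",
      round(prob_peak_exceeds(avg, peak, observed_avg=15.0, peak_threshold=25.0), 2))
```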
Superensemble forecasts of dengue outbreaks
Kandula, Sasikiran; Shaman, Jeffrey
2016-01-01
In recent years, a number of systems capable of predicting future infectious disease incidence have been developed. As more of these systems are operationalized, it is important that the forecasts generated by these different approaches be formally reconciled so that individual forecast error and bias are reduced. Here we present a first example of such multi-system, or superensemble, forecast. We develop three distinct systems for predicting dengue, which are applied retrospectively to forecast outbreak characteristics in San Juan, Puerto Rico. We then use Bayesian averaging methods to combine the predictions from these systems and create superensemble forecasts. We demonstrate that on average, the superensemble approach produces more accurate forecasts than those made from any of the individual forecasting systems. PMID:27733698
An impact analysis of forecasting methods and forecasting parameters on bullwhip effect
NASA Astrophysics Data System (ADS)
Silitonga, R. Y. H.; Jelly, N.
2018-04-01
The bullwhip effect is an increase in the variance of demand fluctuations from the downstream to the upstream end of a supply chain. Forecasting methods and forecasting parameters are recognized as factors that affect the bullwhip phenomenon, and simulations can be developed to study them. Previous studies have simulated the bullwhip effect in several ways, such as mathematical equation modelling, information control modelling and computer programs. In this study a spreadsheet program named Bullwhip Explorer was used to simulate the bullwhip effect. Several scenarios were developed to show how the bullwhip effect ratio changes with different forecasting methods and forecasting parameters. The forecasting methods used were mean demand, moving average, exponential smoothing, demand signalling, and minimum expected mean squared error. The forecasting parameters were the moving-average period, smoothing parameter, signalling factor, and safety stock factor. The results show that decreasing the moving-average period, increasing the smoothing parameter, or increasing the signalling factor creates a bigger bullwhip effect ratio, while the safety stock factor has no impact on the bullwhip effect.
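The moving-average scenario can be reproduced with a small simulation of an order-up-to policy; the demand process, lead time and safety factor below are illustrative assumptions, but the qualitative result (shorter moving-average windows inflate the bullwhip ratio) matches the abstract.

```python
import numpy as np

def bullwhip_ratio(demand, window=5, lead_time=2, safety_factor=1.65):
    """Simulate an order-up-to policy driven by a moving-average forecast and
    return Var(orders) / Var(demand), the bullwhip effect ratio."""
    demand = np.asarray(demand, dtype=float)
    orders = []
    prev_level = None
    for t in range(window + 1, len(demand)):
        hist = demand[t - window:t]                     # demand seen so far
        d_hat, s_hat = hist.mean(), hist.std(ddof=1)
        level = lead_time * d_hat + safety_factor * np.sqrt(lead_time) * s_hat
        if prev_level is not None:
            orders.append(demand[t - 1] + level - prev_level)
        prev_level = level
    orders = np.array(orders)
    return orders.var(ddof=1) / demand.var(ddof=1)

rng = np.random.default_rng(5)
demand = rng.normal(100.0, 10.0, size=2000)
for w in (3, 5, 10):
    print(f"moving-average window {w:2d}: bullwhip ratio = {bullwhip_ratio(demand, window=w):.2f}")
```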
Forecasting Dust Storms Using the CARMA-Dust Model and MM5 Weather Data
NASA Astrophysics Data System (ADS)
Barnum, B. H.; Winstead, N. S.; Wesely, J.; Hakola, A.; Colarco, P.; Toon, O. B.; Ginoux, P.; Brooks, G.; Hasselbarth, L. M.; Toth, B.; Sterner, R.
2002-12-01
An operational model for the forecast of dust storms in Northern Africa, the Middle East and Southwest Asia has been developed for the United States Air Force Weather Agency (AFWA). The dust forecast model uses the 5th-generation Penn State Mesoscale Meteorology Model (MM5) and a modified version of the Colorado Aerosol and Radiation Model for Atmospheres (CARMA). AFWA conducted a 60-day evaluation of the dust model to look at the model's ability to forecast dust storms for short, medium and long range (72 hour) forecast periods. The study used satellite and ground observations of dust storms to verify the model's effectiveness. Each of the main mesoscale forecast theaters was broken down into smaller sub-regions for detailed analysis. The study found the forecast model was able to forecast dust storms in Saharan Africa and the Sahel region with an average Probability of Detection (POD) exceeding 68%, with a 16% False Alarm Rate (FAR). The Southwest Asian theater had average PODs of 61% with FARs averaging 10%.
Fuzzy time-series based on Fibonacci sequence for stock price forecasting
NASA Astrophysics Data System (ADS)
Chen, Tai-Liang; Cheng, Ching-Hsue; Jong Teoh, Hia
2007-07-01
Time-series models have been utilized to make reasonably accurate predictions in the areas of stock price movements, academic enrollments, weather, etc. For promoting the forecasting performance of fuzzy time-series models, this paper proposes a new model, which incorporates the concept of the Fibonacci sequence, the framework of Song and Chissom's model and the weighted method of Yu's model. This paper employs a 5-year period TSMC (Taiwan Semiconductor Manufacturing Company) stock price data and a 13-year period of TAIEX (Taiwan Stock Exchange Capitalization Weighted Stock Index) stock index data as experimental datasets. By comparing our forecasting performances with Chen's (Forecasting enrollments based on fuzzy time-series. Fuzzy Sets Syst. 81 (1996) 311-319), Yu's (Weighted fuzzy time-series models for TAIEX forecasting. Physica A 349 (2004) 609-624) and Huarng's (The application of neural networks to forecast fuzzy time series. Physica A 336 (2006) 481-491) models, we conclude that the proposed model surpasses in accuracy these conventional fuzzy time-series models.
Statistical Earthquake Focal Mechanism Forecasts
NASA Astrophysics Data System (ADS)
Kagan, Y. Y.; Jackson, D. D.
2013-12-01
The new whole-Earth focal mechanism forecast, based on the GCMT catalog, has been created. In the present forecast, the sum of normalized seismic moment tensors within a 1000 km radius is calculated and the P- and T-axes for the focal mechanism are evaluated on the basis of the sum. Simultaneously we calculate an average rotation angle between the forecasted mechanism and all the surrounding mechanisms. This average angle reflects the tectonic complexity of a region and indicates the accuracy of the prediction. The method was originally proposed by Kagan and Jackson (1994, JGR). Recent interest by CSEP and GEM has motivated some improvements, particularly to extend the previous forecast to polar and near-polar regions. The major problem in extending the forecast is the focal mechanism calculation on a spherical surface. In the previous forecast, when the average focal mechanism was computed, it was assumed that longitude lines are approximately parallel within a 1000 km radius. This is largely accurate in equatorial and near-equatorial areas. However, when one approaches 75 degrees latitude, the longitude lines are no longer parallel: the bearing (azimuthal) difference at points separated by 1000 km reaches about 35 degrees. In most situations a forecast point where we calculate an average focal mechanism is surrounded by earthquakes, so the bias should not be strong because the bearing differences tend to cancel. But if we move into polar regions, the bearing difference can approach 180 degrees. In a modified program, focal mechanisms are projected on a plane tangent to the sphere at the forecast point. New longitude axes, which are parallel in the tangent plane, are corrected for the bearing difference. A comparison with the old 75S-75N forecast shows that in equatorial regions the forecasted focal mechanisms are almost the same, and the difference in the forecasted focal mechanism rotation angle is close to zero. However, though the forecasted focal mechanisms are similar, closer to 75 degrees latitude the difference in the rotation angle is large (around a factor of 1.5 in some places). The Gamma-index was calculated for the average focal mechanism moment. A non-zero index indicates that earthquake focal mechanisms around the forecast point have different orientations. Thus deformation complexity displays itself in both the average rotation angle and the index. However, sometimes the rotation angle is close to zero whereas the index is large, testifying to a large CLVD presence. Both new 0.5x0.5 and 0.1x0.1 degree forecasts are posted at http://eq.ess.ucla.edu/~kagan/glob_gcmt_index.html.
Weighting of NMME temperature and precipitation forecasts across Europe
NASA Astrophysics Data System (ADS)
Slater, Louise J.; Villarini, Gabriele; Bradley, A. Allen
2017-09-01
Multi-model ensemble forecasts are obtained by weighting multiple General Circulation Model (GCM) outputs to heighten forecast skill and reduce uncertainties. The North American Multi-Model Ensemble (NMME) project facilitates the development of such multi-model forecasting schemes by providing publicly-available hindcasts and forecasts online. Here, temperature and precipitation forecasts are enhanced by leveraging the strengths of eight NMME GCMs (CCSM3, CCSM4, CanCM3, CanCM4, CFSv2, GEOS5, GFDL2.1, and FLORb01) across all forecast months and lead times, for four broad climatic European regions: Temperate, Mediterranean, Humid-Continental and Subarctic-Polar. We compare five different approaches to multi-model weighting based on the equally weighted eight single-model ensembles (EW-8), Bayesian updating (BU) of the eight single-model ensembles (BU-8), BU of the 94 model members (BU-94), BU of the principal components of the eight single-model ensembles (BU-PCA-8) and BU of the principal components of the 94 model members (BU-PCA-94). We assess the forecasting skill of these five multi-models and evaluate their ability to predict some of the costliest historical droughts and floods in recent decades. Results indicate that the simplest approach based on EW-8 preserves model skill, but has considerable biases. The BU and BU-PCA approaches reduce the unconditional biases and negative skill in the forecasts considerably, but they can also sometimes diminish the positive skill in the original forecasts. The BU-PCA models tend to produce lower conditional biases than the BU models and have more homogeneous skill than the other multi-models, but with some loss of skill. The use of 94 NMME model members does not present significant benefits over the use of the 8 single model ensembles. These findings may provide valuable insights for the development of skillful, operational multi-model forecasting systems.
Chesapeake Bay hypoxic volume forecasts and results
Scavia, Donald; Evans, Mary Anne
2013-01-01
The 2013 Forecast - Given the average Jan-May 2013 total nitrogen load of 162,028 kg/day, this summer's hypoxia volume forecast is 6.1 km3, slightly smaller than the average size for the period of record and almost the same as 2012. The late July 2013 measured volume was 6.92 km3.
Statistical Post-Processing of Wind Speed Forecasts to Estimate Relative Economic Value
NASA Astrophysics Data System (ADS)
Courtney, Jennifer; Lynch, Peter; Sweeney, Conor
2013-04-01
The objective of this research is to get the best possible wind speed forecasts for the wind energy industry by using an optimal combination of well-established forecasting and post-processing methods. We start with the ECMWF 51-member ensemble prediction system (EPS), which is underdispersive and hence uncalibrated. We aim to produce wind speed forecasts that are more accurate and better calibrated than the EPS. The 51 members of the EPS are clustered to 8 weighted representative members (RMs), chosen to minimize the within-cluster spread while maximizing the inter-cluster spread. The forecasts are then downscaled using two limited-area models, WRF and COSMO, at two resolutions, 14 km and 3 km. This process creates four distinguishable ensembles which are used as input to statistical post-processing methods requiring multi-model forecasts. Two such methods are presented here. The first, Bayesian Model Averaging, has been proven to provide more calibrated and accurate wind speed forecasts than the ECMWF EPS using this multi-model input data. The second, heteroscedastic censored regression, is also showing positive results. We compare the two post-processing methods, applied to a year of hindcast wind speed data around Ireland, using an array of deterministic and probabilistic verification techniques, such as MAE, CRPS, probability integral transforms and verification rank histograms, to show which method provides the most accurate and calibrated forecasts. However, the value of a forecast to an end-user cannot be fully quantified by just the accuracy and calibration measures mentioned, as the relationship between skill and value is complex. Capturing the full potential of the forecast benefits also requires detailed knowledge of the end-users' weather-sensitive decision-making processes and, most importantly, the economic impact the forecast will have on their income. Finally, we present the continuous relative economic value of both post-processing methods to identify which is more beneficial to the wind energy industry of Ireland.
Fuzzy State Transition and Kalman Filter Applied in Short-Term Traffic Flow Forecasting.
Deng, Ming-jun; Qu, Shi-ru
2015-01-01
Traffic flow is widely recognized as an important parameter for road traffic state forecasting. Fuzzy state transition and Kalman filter (KF) models have been applied in this field separately. Studies show that the former performs well in forecasting the trend of traffic state variation but tends to introduce numerical errors, while the latter is good at numerical forecasting but is deficient in representing time lags. This paper proposes an approach that combines the fuzzy state transition and KF forecasting models. To exploit the advantages of both models, a weighted combination model is proposed in which the combination weight is optimized dynamically by minimizing the sum of squared forecasting errors. Real detection data are used to test the approach. Results indicate that the method performs well for short-term traffic forecasting.
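Minimizing the sum of squared combination errors has a closed-form solution for two forecasts; the sketch below (with made-up traffic counts) shows the static version, which becomes dynamic when recomputed over a sliding window as in the paper.

```python
import numpy as np

def optimal_weight(f1, f2, y):
    """Weight w on forecast f1 (and 1-w on f2) minimising the sum of squared
    combination errors; closed-form least-squares solution."""
    f1, f2, y = map(np.asarray, (f1, f2, y))
    d = f1 - f2
    w = np.sum((y - f2) * d) / np.sum(d * d)
    return np.clip(w, 0.0, 1.0)                     # optionally keep w in [0, 1]

# Illustrative recent traffic-flow data (veh/5 min): fuzzy-state-transition and
# Kalman-filter forecasts plus the observed flow.
fuzzy_fc  = np.array([410., 436., 455., 470., 500.])
kalman_fc = np.array([395., 440., 450., 480., 515.])
observed  = np.array([402., 441., 452., 478., 509.])

w = optimal_weight(fuzzy_fc, kalman_fc, observed)
print(f"weight on fuzzy model = {w:.2f}; the combined forecast uses w and 1-w")
```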
NASA Astrophysics Data System (ADS)
Yin, Yip Chee; Hock-Eam, Lim
2012-09-01
This paper investigates the forecasting ability of Mallows Model Averaging (MMA) by conducting an empirical analysis of the GDP growth rates of five Asian countries: Malaysia, Thailand, the Philippines, Indonesia and China. Results reveal that MMA shows no noticeable difference in predictive ability compared with the general autoregressive fractionally integrated moving average (ARFIMA) model, and its predictive ability is sensitive to the effect of financial crises. MMA could be an alternative forecasting method for samples without recent outliers such as financial crises.
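Mallows model averaging selects simplex weights by minimizing the Mallows criterion; a minimal sketch (with toy candidate models, not the ARFIMA comparison of the paper) is:

```python
import numpy as np
from scipy.optimize import minimize

def mma_weights(fitted, y, k, sigma2):
    """Mallows model averaging: simplex weights minimising
    ||y - F w||^2 + 2 * sigma2 * (k @ w), where F holds each candidate
    model's fitted values column-wise and k the model dimensions."""
    F, y, k = np.asarray(fitted), np.asarray(y), np.asarray(k, dtype=float)
    m = F.shape[1]

    def criterion(w):
        resid = y - F @ w
        return resid @ resid + 2.0 * sigma2 * (k @ w)

    cons = ({"type": "eq", "fun": lambda w: w.sum() - 1.0},)
    res = minimize(criterion, x0=np.full(m, 1.0 / m), bounds=[(0.0, 1.0)] * m,
                   constraints=cons, method="SLSQP")
    return res.x

# Toy example: three candidate fits of increasing complexity to a growth series.
rng = np.random.default_rng(6)
y = rng.normal(5.0, 2.0, size=80)
F = np.column_stack([np.full(80, y.mean()),                  # model 1: mean only
                     np.convolve(y, np.ones(4) / 4, "same"), # model 2: smoother fit
                     y + rng.normal(0, 0.5, 80)])            # model 3: near-saturated fit
k = [1, 4, 10]
sigma2 = np.var(y - F[:, 2], ddof=10)                        # error variance from the largest model
print("MMA weights:", np.round(mma_weights(F, y, k, sigma2), 3))
```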
NASA Astrophysics Data System (ADS)
Higgins, S. M. W.; Du, H. L.; Smith, L. A.
2012-04-01
Ensemble forecasting on a lead time of seconds over several years generates a large forecast-outcome archive, which can be used to evaluate and weight "models". Challenges which arise as the archive becomes smaller are investigated: in weather forecasting one typically has only thousands of forecasts; however, forecasts launched 6 hours apart are not independent of each other, nor is it justified to mix seasons with different dynamics. Seasonal forecasts, as from ENSEMBLES and DEMETER, typically have fewer than 64 unique launch dates; decadal forecasts fewer than eight, and long-range climate forecasts arguably none. It is argued that one does not weight "models" so much as entire ensemble prediction systems (EPSs), and that the marginal value of an EPS will depend on the other members in the mix. The impact of using different skill scores is examined in the limits of both very large forecast-outcome archives (thereby evaluating the efficiency of the skill score) and very small forecast-outcome archives (illustrating fundamental limitations due to sampling fluctuations and memory in the physical system being forecast). It is shown that blending with climatology (J. Bröcker and L.A. Smith, Tellus A, 60(4), 663-678, (2008)) tends to increase the robustness of the results; a new kernel dressing methodology (simply ensuring that the expected probability mass tends to lie outside the range of the ensemble) is also illustrated. Fair comparisons using seasonal forecasts from the ENSEMBLES project are used to illustrate the importance of these results with fairly small archives. The robustness of these results across the range of small, moderate and huge archives is demonstrated using imperfect models of perfectly known nonlinear (chaotic) dynamical systems. The implications these results hold for distinguishing the skill of a forecast from its value to a user of the forecast are discussed.
Parametrisation of initial conditions for seasonal stream flow forecasting in the Swiss Rhine basin
NASA Astrophysics Data System (ADS)
Schick, Simon; Rössler, Ole; Weingartner, Rolf
2016-04-01
Current climate forecast models show - to the best of our knowledge - low skill in forecasting climate variability in Central Europe at seasonal lead times. When it comes to seasonal stream flow forecasting, initial conditions thus play an important role. Here, initial conditions refer to the catchment's moisture state at the date of forecast, i.e. snow depth, stream flow and lake level, soil moisture content, and groundwater level. The parametrisation of these initial conditions can take place at various spatial and temporal scales. Examples are the grid size of a distributed model or the time aggregation of predictors in statistical models. Therefore, the present study aims to investigate the extent to which the parametrisation of initial conditions at different spatial scales leads to differences in forecast errors. To do so, we conduct a forecast experiment for the Swiss Rhine at Basel, which covers parts of Germany, Austria, and Switzerland and is bounded to the south by the Alps. Seasonal mean stream flow is defined for time aggregations of 30, 60, and 90 days and forecasted at 24 dates within the calendar year, i.e. at the 1st and 16th day of each month. A regression model is employed because of the various anthropogenic effects on the basin's hydrology, which often are not quantifiable but might be captured by a simple black-box model. Furthermore, the pool of candidate predictors consists of antecedent temperature, precipitation, and stream flow only. This pragmatic approach follows from the fact that observations of variables relevant for hydrological storages are either scarce in space or time (soil moisture, groundwater level), restricted to certain seasons (snow depth), or regions (lake levels, snow depth). For a systematic evaluation, we therefore focus on the comprehensive archives of meteorological observations and reanalyses to estimate the initial conditions via climate variability prior to the date of forecast. The experiment itself is based on four different approaches, whose differences in model skill were estimated within a rigorous cross-validation framework for the period 1982-2013: (1) the predictands are regressed on antecedent temperature, precipitation, and stream flow, with temperature and precipitation taken as basin averages from the E-OBS gridded data set; (2) as in (1), but temperature and precipitation are used at the E-OBS grid scale (0.25 degrees in longitude and latitude) without spatial averaging; (3) as in (1), but the regression model is applied to 66 gauged subcatchments of the Rhine basin, and the forecasts for these subcatchments are then summed and upscaled to the area of the Rhine basin; (4) as in (3), but the forecasts at the subcatchment scale are additionally weighted in terms of the hydrological representativeness of the corresponding subcatchment.
Validation of the Kp Geomagnetic Index Forecast at CCMC
NASA Astrophysics Data System (ADS)
Frechette, B. P.; Mays, M. L.
2017-12-01
The Community Coordinated Modeling Center (CCMC) Space Weather Research Center (SWRC) sub-team provides space weather services to NASA robotic mission operators and science campaigns and prototypes new models, forecasting techniques, and procedures. The Kp index is a measure of geomagnetic disturbances for space weather in the magnetosphere such as geomagnetic storms and substorms. In this study, we performed validation on the Newell et al. (2007) Kp prediction equation from December 2010 to July 2017. The purpose of this research is to understand the Kp forecast performance because it's critical for NASA missions to have confidence in the space weather forecast. This research was done by computing the Kp error for each forecast (average, minimum, maximum) and each synoptic period. Then to quantify forecast performance we computed the mean error, mean absolute error, root mean square error, multiplicative bias and correlation coefficient. A contingency table was made for each forecast and skill scores were computed. The results are compared to the perfect score and reference forecast skill score. In conclusion, the skill score and error results show that the minimum of the predicted Kp over each synoptic period from the Newell et al. (2007) Kp prediction equation performed better than the maximum or average of the prediction. However, persistence (reference forecast) outperformed all of the Kp forecasts (minimum, maximum, and average). Overall, the Newell Kp prediction still predicts within a range of 1, even though persistence beats it.
NASA Astrophysics Data System (ADS)
Solvang Johansen, Stian; Steinsland, Ingelin; Engeland, Kolbjørn
2016-04-01
Running hydrological models with precipitation and temperature ensemble forcing to generate ensembles of streamflow is a commonly used method in operational hydrology. Evaluations of streamflow ensembles have however revealed that the ensembles are biased with respect to both mean and spread. Thus postprocessing of the ensembles is needed in order to improve forecast skill. The aims of this study are (i) to evaluate how postprocessing of streamflow ensembles works for Norwegian catchments within different hydrological regimes and (ii) to demonstrate how postprocessed streamflow ensembles are used operationally by a hydropower producer. These aims were achieved by postprocessing forecasted daily discharge for 10 lead times for 20 catchments in Norway, using EPS forcing from ECMWF applied to the semi-distributed HBV model with each catchment divided into 10 elevation zones. Statkraft Energi uses forecasts from these catchments for scheduling hydropower production. The catchments represent different hydrological regimes. Some catchments have stable winter conditions with winter low flow and a major flood event during spring or early summer caused by snow melting. Others have a more mixed snow-rain regime, often with a secondary flood season during autumn, and in the coastal areas the streamflow is dominated by rain and the main flood season is autumn and winter. For postprocessing, a Bayesian model averaging (BMA) model close to that of Kleiber et al. (2011) is used. The model creates a predictive PDF that is a weighted average of PDFs centered on the individual bias-corrected forecasts. The weights are equal here since all ensemble members come from the same model and thus have the same probability. For modeling streamflow, the gamma distribution is chosen as the predictive PDF. The bias-correction parameters and the PDF parameters are estimated using a 30-day sliding-window training period. Preliminary results show that the improvement varies between catchments depending on where they are situated and on the hydrological regime. There is an improvement in CRPS for all catchments compared to the raw EPS ensembles, up to lead times of 5-7 days. The postprocessing also improves the MAE of the median of the predictive PDF compared to the median of the raw EPS, but less than for CRPS, often only up to lead times of 2-3 days. The streamflow ensembles are to some extent used operationally in Statkraft Energi (a hydropower company, Norway) for early warning, risk assessment and decision-making. Presently all forecasts used operationally for short-term scheduling are deterministic, but ensembles are used visually for expert assessment of risk in difficult situations where, e.g., there is a chance of overflow in a reservoir. However, there are plans to incorporate ensembles in the daily scheduling of hydropower production.
Evaluation of statistical models for forecast errors from the HBV model
NASA Astrophysics Data System (ADS)
Engeland, Kolbjørn; Renard, Benjamin; Steinsland, Ingelin; Kolberg, Sjur
2010-04-01
Three statistical models for the forecast errors for inflow into the Langvatn reservoir in Northern Norway have been constructed and tested according to the agreement between (i) the forecast distribution and the observations and (ii) the median values of the forecast distribution and the observations. For the first model, observed and forecasted inflows were transformed by the Box-Cox transformation before a first-order auto-regressive model was constructed for the forecast errors; the parameters were conditioned on weather classes. In the second model, the normal quantile transformation (NQT) was applied to observed and forecasted inflows before a similar first-order auto-regressive model was constructed for the forecast errors. For the third model, positive and negative errors were modeled separately. The errors were first NQT-transformed before conditioning the mean error values on climate, forecasted inflow and yesterday's error. To test the three models we applied three criteria: we wanted (a) the forecast distribution to be reliable; (b) the forecast intervals to be narrow; (c) the median values of the forecast distribution to be close to the observed values. Models 1 and 2 gave almost identical results. The median values improved the forecast, with the Nash-Sutcliffe efficiency (Reff) increasing from 0.77 for the original forecast to 0.87 for the corrected forecasts. Models 1 and 2 over-estimated the forecast intervals but gave the narrowest intervals. Their main drawback was that the distributions are less reliable than Model 3. For Model 3 the median values did not fit well since the auto-correlation was not accounted for. Since Model 3 did not benefit from the potential variance reduction that lies in bias estimation and removal, it gave on average wider forecast intervals than the two other models. At the same time Model 3 on average slightly under-estimated the forecast intervals, probably explained by the use of average measures to evaluate the fit.
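Model 1 above (Box-Cox transformation followed by a first-order auto-regressive error model) can be sketched as follows; the synthetic inflow series, the fixed Box-Cox parameter and the omission of weather-class conditioning are simplifications for illustration.

```python
import numpy as np

def boxcox(x, lam):
    """Box-Cox transform (lam != 0 branch; lam = 0 gives the log)."""
    x = np.asarray(x, dtype=float)
    return np.log(x) if lam == 0 else (x**lam - 1.0) / lam

def inv_boxcox(z, lam):
    return np.exp(z) if lam == 0 else (lam * z + 1.0) ** (1.0 / lam)

def ar1_error_update(obs, fcst, lam=0.3):
    """Fit an AR(1) model to transformed forecast errors and return the
    coefficient and the predicted next error (a sketch of 'Model 1')."""
    e = boxcox(obs, lam) - boxcox(fcst, lam)            # transformed errors
    e0, e1 = e[:-1], e[1:]
    phi = np.sum(e0 * e1) / np.sum(e0 * e0)             # least-squares AR(1) coefficient
    return phi, phi * e[-1]                             # predicted next error

rng = np.random.default_rng(7)
obs = rng.gamma(5.0, 20.0, size=200)                    # synthetic inflow (m3/s)
fcst = obs * rng.lognormal(0.05, 0.15, size=200)        # biased, noisy forecast

phi, next_err = ar1_error_update(obs, fcst, lam=0.3)
next_raw_fcst = 110.0                                   # tomorrow's raw forecast (m3/s)
corrected = float(inv_boxcox(boxcox(next_raw_fcst, 0.3) + next_err, 0.3))
print(f"AR(1) coefficient = {phi:.2f}, corrected forecast ≈ {corrected:.1f} m3/s")
```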
Capan, Muge; Hoover, Stephen; Jackson, Eric V; Paul, David; Locke, Robert
2016-01-01
Accurate prediction of future patient census in hospital units is essential for patient safety, health outcomes, and resource planning. Forecasting census in the Neonatal Intensive Care Unit (NICU) is particularly challenging due to limited ability to control the census and clinical trajectories. The fixed average census approach, using average census from previous year, is a forecasting alternative used in clinical practice, but has limitations due to census variations. Our objectives are to: (i) analyze the daily NICU census at a single health care facility and develop census forecasting models, (ii) explore models with and without patient data characteristics obtained at the time of admission, and (iii) evaluate accuracy of the models compared with the fixed average census approach. We used five years of retrospective daily NICU census data for model development (January 2008 - December 2012, N=1827 observations) and one year of data for validation (January - December 2013, N=365 observations). Best-fitting models of ARIMA and linear regression were applied to various 7-day prediction periods and compared using error statistics. The census showed a slightly increasing linear trend. Best fitting models included a non-seasonal model, ARIMA(1,0,0), seasonal ARIMA models, ARIMA(1,0,0)x(1,1,2)7 and ARIMA(2,1,4)x(1,1,2)14, as well as a seasonal linear regression model. Proposed forecasting models resulted on average in 36.49% improvement in forecasting accuracy compared with the fixed average census approach. Time series models provide higher prediction accuracy under different census conditions compared with the fixed average census approach. Presented methodology is easily applicable in clinical practice, can be generalized to other care settings, support short- and long-term census forecasting, and inform staff resource planning.
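One of the reported model forms, a seasonal ARIMA(1,0,0)x(1,1,2)7, can be fitted with standard time-series tooling; the census series below is synthetic and the comparison with a fixed-average baseline is only schematic.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.statespace.sarimax import SARIMAX

# Synthetic daily NICU census with a slight upward trend and weekly seasonality.
rng = np.random.default_rng(8)
days = pd.date_range("2008-01-01", periods=1827, freq="D")
census = (40 + 0.003 * np.arange(1827)
          + 2.0 * np.sin(2 * np.pi * np.arange(1827) / 7)
          + rng.normal(0, 3, 1827)).round()
series = pd.Series(census, index=days)

# Seasonal ARIMA(1,0,0)x(1,1,2) with a 7-day seasonal period.
model = SARIMAX(series, order=(1, 0, 0), seasonal_order=(1, 1, 2, 7))
fit = model.fit(disp=False)

# 7-day-ahead census forecast versus the fixed-average baseline.
forecast = fit.forecast(steps=7)
fixed_average = series[-365:].mean()
print(forecast.round(1))
print("fixed average census baseline:", round(fixed_average, 1))
```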
Evaluation of Satellite and Model Precipitation Products Over Turkey
NASA Astrophysics Data System (ADS)
Yilmaz, M. T.; Amjad, M.
2017-12-01
Satellite-based remote sensing, gauge stations, and models are the three major platforms to acquire precipitation datasets. Among them, satellites and models have the advantage of retrieving spatially and temporally continuous and consistent datasets, while the uncertainty estimates of these retrievals are often required for many hydrological studies to understand the source and the magnitude of the uncertainty in hydrological response parameters. In this study, satellite and model precipitation data products are validated over various temporal scales (daily, 3-daily, 7-daily, 10-daily and monthly) using in-situ measured precipitation observations from a network of 733 gauges from all over Turkey. Tropical Rainfall Measuring Mission (TRMM) Multi-satellite Precipitation Analysis (TMPA) 3B42 version 7 and European Centre for Medium-Range Weather Forecasts (ECMWF) model estimates (daily, 3-daily, 7-daily and 10-daily accumulated forecast) are used in this study. Retrievals are evaluated for their mean and standard deviation, and their accuracies are evaluated via bias, root mean square error, error standard deviation and correlation coefficient statistics. Intensity vs. frequency analysis and contingency table statistics such as percent correct, probability of detection, false alarm ratio and critical success index are determined using daily time-series. Both ECMWF forecasts and TRMM observations, on average, overestimate the precipitation compared to gauge estimates; wet biases are 10.26 mm/month and 8.65 mm/month for ECMWF and TRMM, respectively. RMSE values of ECMWF forecasts and TRMM estimates are 39.69 mm/month and 41.55 mm/month, respectively. Monthly correlations between Gauges-ECMWF, Gauges-TRMM and ECMWF-TRMM are 0.76, 0.73 and 0.81, respectively. The model and the satellite error statistics are further compared against the gauge error statistics based on inverse distance weighting (IDW) analysis. Both the model and satellite data have smaller IDW errors (14.72 mm/month and 10.75 mm/month, respectively) compared to the gauge IDW error (21.58 mm/month). These results show that, on average, ECMWF forecast data have higher skill than TRMM observations. Overall, both ECMWF forecast data and TRMM observations show good potential for catchment scale hydrological analysis.
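The dichotomous (yes/no) skill measures listed above follow directly from a 2x2 contingency table of daily rain/no-rain events. The sketch below is a generic illustration, not the study's own code; the 1 mm/day rain threshold and the synthetic series are assumptions.

```python
# Sketch: percent correct, probability of detection (POD), false alarm ratio (FAR)
# and critical success index (CSI) from daily gauge vs. forecast precipitation.
import numpy as np

def contingency_scores(obs_mm, fcst_mm, threshold=1.0):
    obs = obs_mm >= threshold
    fcst = fcst_mm >= threshold
    hits = np.sum(fcst & obs)
    false_alarms = np.sum(fcst & ~obs)
    misses = np.sum(~fcst & obs)
    correct_negatives = np.sum(~fcst & ~obs)
    n = hits + false_alarms + misses + correct_negatives
    return {
        "percent_correct": 100.0 * (hits + correct_negatives) / n,
        "POD": hits / (hits + misses),
        "FAR": false_alarms / (hits + false_alarms),
        "CSI": hits / (hits + misses + false_alarms),
    }

rng = np.random.default_rng(1)
gauge = rng.gamma(0.4, 5.0, size=365)            # synthetic daily precipitation (mm)
forecast = gauge * rng.lognormal(0.1, 0.5, 365)  # biased, noisy "forecast"
print(contingency_scores(gauge, forecast))
```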
NASA Astrophysics Data System (ADS)
Courdent, Vianney; Grum, Morten; Mikkelsen, Peter Steen
2018-01-01
Precipitation constitutes a major contribution to the flow in urban storm- and wastewater systems. Forecasts of the anticipated runoff flows, created from radar extrapolation and/or numerical weather predictions, can potentially be used to optimize operation in both wet and dry weather periods. However, flow forecasts are inevitably uncertain and their use will ultimately require a trade-off between the value of knowing what will happen in the future and the probability and consequence of being wrong. In this study we examine how ensemble forecasts from the HIRLAM-DMI-S05 numerical weather prediction (NWP) model subject to three different ensemble post-processing approaches can be used to forecast flow exceedance in a combined sewer for a wide range of ratios between the probability of detection (POD) and the probability of false detection (POFD). We use a hydrological rainfall-runoff model to transform the forecasted rainfall into forecasted flow series and evaluate three different approaches to establishing the relative operating characteristics (ROC) diagram of the forecast, which is a plot of POD against POFD for each fraction of concordant ensemble members and can be used to select the weight of evidence that matches the desired trade-off between POD and POFD. In the first approach, the rainfall input to the model is calculated for each of 25 ensemble members as a weighted average of rainfall from the NWP cells over the catchment where the weights are proportional to the areal intersection between the catchment and the NWP cells. In the second approach, a total of 2825 flow ensembles are generated using rainfall input from the neighbouring NWP cells up to approximately 6 cells in all directions from the catchment. In the third approach, the first approach is extended spatially by successively increasing the area covered and for each spatial increase and each time step selecting only the cell with the highest intensity resulting in a total of 175 ensemble members. While the first and second approaches have the disadvantage of not covering the full range of the ROC diagram and being computationally heavy, respectively, the third approach leads to both a broad coverage of the ROC diagram range at a relatively low computational cost. A broad coverage of the ROC diagram offers a larger selection of prediction skill to choose from to best match to the prediction purpose. The study distinguishes itself from earlier research in being the first application to urban hydrology, with fast runoff and small catchments that are highly sensitive to local extremes. Furthermore, no earlier reference has been found on the highly efficient third approach using only neighbouring cells with the highest threat to expand the range of the ROC diagram. This study provides an efficient and robust approach to using ensemble rainfall forecasts affected by bias and misplacement errors for predicting flow threshold exceedance in urban drainage systems.
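The ROC construction described above can be illustrated with a small numerical sketch. The flow threshold, ensemble size and error model below are assumptions for demonstration only; the study's own post-processing of the NWP ensemble is not reproduced.

```python
# Sketch: build a relative operating characteristic (ROC) curve for flow-threshold
# exceedance from ensemble forecasts, sweeping the "weight of evidence", i.e. the
# fraction of concordant ensemble members required to issue a warning.
import numpy as np

rng = np.random.default_rng(2)
n_events, n_members, threshold = 500, 25, 10.0      # assumed sizes and flow threshold
true_flow = rng.gamma(2.0, 3.0, n_events)
# each member sees the true flow plus its own multiplicative error
ensemble = true_flow[:, None] * rng.lognormal(0.0, 0.4, (n_events, n_members))

observed_exceed = true_flow > threshold
member_fraction = np.mean(ensemble > threshold, axis=1)  # concordant fraction per event

for required_fraction in np.linspace(0.0, 1.0, 11):
    warn = member_fraction >= required_fraction
    pod = np.sum(warn & observed_exceed) / np.sum(observed_exceed)
    pofd = np.sum(warn & ~observed_exceed) / np.sum(~observed_exceed)
    print(f"fraction>={required_fraction:.1f}  POD={pod:.2f}  POFD={pofd:.2f}")
```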
Extended-Range Forecasts at Climate Prediction Center: Current Status and Future Plans
NASA Astrophysics Data System (ADS)
Kumar, A.
2016-12-01
Motivated by a user need for forecast information on extended-range time-scales (i.e., weeks 2-4), in recent years the Climate Prediction Center (CPC) has made considerable efforts towards developing and testing the feasibility of producing the required forecasts. The forecasts targeting this particular time-scale face a unique challenge in that while the forecast skill due to atmospheric initial conditions is small (because of rapid decay in the memory associated with the atmospheric initial conditions), the short time averages for which forecasts are made do not benefit from skill associated with anomalous boundary conditions either. Despite these challenges, CPC has embarked on providing an experimental outlook for the weeks 3-4 average. The talk will summarize the status of CPC's current suite of extended-range forecast products and will discuss some future plans.
Evaluation of TIGGE Ensemble Forecasts of Precipitation in Distinct Climate Regions in Iran
NASA Astrophysics Data System (ADS)
Aminyavari, Saleh; Saghafian, Bahram; Delavar, Majid
2018-04-01
The application of numerical weather prediction (NWP) products is increasing dramatically. Existing reports indicate that ensemble predictions have better skill than deterministic forecasts. In this study, numerical ensemble precipitation forecasts in the TIGGE database were evaluated using deterministic, dichotomous (yes/no), and probabilistic techniques over Iran for the period 2008-16. Thirteen rain gauges spread over eight homogeneous precipitation regimes were selected for evaluation. The Inverse Distance Weighting and Kriging methods were adopted for interpolation of the prediction values, downscaled to the stations at lead times of one to three days. To enhance the forecast quality, NWP values were post-processed via Bayesian Model Averaging. The results showed that ECMWF had better scores than other products. However, products of all centers underestimated precipitation in high precipitation regions while overestimating precipitation in other regions. This points to a systematic bias in forecasts and demands application of bias correction techniques. Based on dichotomous evaluation, NCEP did better at most stations, although all centers overpredicted the number of precipitation events. Compared to those of ECMWF and NCEP, UKMO yielded higher scores in mountainous regions, but performed poorly at other selected stations. Furthermore, the evaluations showed that all centers had better skill in wet than in dry seasons. The quality of post-processed predictions was better than those of the raw predictions. In conclusion, the accuracy of the NWP predictions made by the selected centers could be classified as medium over Iran, while post-processing of predictions is recommended to improve the quality.
Prediction of a service demand using combined forecasting approach
NASA Astrophysics Data System (ADS)
Zhou, Ling
2017-08-01
Forecasting facilitates cutting down operational and management costs while ensuring the service level for a logistics service provider. Our case study investigates how to forecast short-term logistics demand for an LTL (less-than-truckload) carrier. A combined approach draws on several forecasting methods simultaneously instead of a single method; it can offset the weakness of one forecasting method with the strength of another, which can improve prediction accuracy. The main issues in combined forecast modeling are how to select the methods to combine and how to determine the weight coefficients among them. The principles of method selection are that each method should be applicable to the forecasting problem itself and that the methods should differ in character as much as possible. Based on these principles, exponential smoothing, ARIMA and a neural network are chosen to form the combined approach. A least squares technique is then employed to determine the optimal weight coefficients among the forecasting methods. Simulation results show the advantage of the combined approach over the three individual methods. The work helps managers select prediction methods in practice.
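A minimal sketch of the weight-fitting step, under the assumption that the weights are obtained by ordinary (unconstrained) least squares on a training sample; the forecast values below are synthetic stand-ins for the exponential smoothing, ARIMA and neural-network outputs, not the paper's data.

```python
# Sketch: determine combination weights for several forecasting methods by least
# squares on a training sample, then combine the individual forecasts with them.
import numpy as np

rng = np.random.default_rng(3)
actual = 100 + np.cumsum(rng.normal(0, 2, 60))           # observed demand
F = np.column_stack([actual + rng.normal(0, e, 60)       # method forecasts with
                     for e in (3.0, 2.0, 4.0)])          # different error levels

# weights minimizing ||F w - actual||^2 (unconstrained least squares)
w, *_ = np.linalg.lstsq(F, actual, rcond=None)
print("combination weights:", np.round(w, 3))

combined = F @ w
for name, f in zip(("ES", "ARIMA", "NN", "combined"), (*F.T, combined)):
    print(f"{name:9s} MAE = {np.mean(np.abs(f - actual)):.2f}")
```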
Evaluation and Applications of the Prediction of Intensity Model Error (PRIME) Model
NASA Astrophysics Data System (ADS)
Bhatia, K. T.; Nolan, D. S.; Demaria, M.; Schumacher, A.
2015-12-01
Forecasters and end users of tropical cyclone (TC) intensity forecasts would greatly benefit from a reliable expectation of model error to counteract the lack of consistency in TC intensity forecast performance. As a first step towards producing error predictions to accompany each TC intensity forecast, Bhatia and Nolan (2013) studied the relationship between synoptic parameters, TC attributes, and forecast errors. In this study, we build on previous results of Bhatia and Nolan (2013) by testing the ability of the Prediction of Intensity Model Error (PRIME) model to forecast the absolute error and bias of four leading intensity models available for guidance in the Atlantic basin. PRIME forecasts are independently evaluated at each 12-hour interval from 12 to 120 hours during the 2007-2014 Atlantic hurricane seasons. The absolute error and bias predictions of PRIME are compared to their respective climatologies to determine their skill. In addition to these results, we will present the performance of the operational version of PRIME run during the 2015 hurricane season. PRIME verification results show that it can reliably anticipate situations where particular models excel, and therefore could lead to a more informed protocol for hurricane evacuations and storm preparations. These positive conclusions suggest that PRIME forecasts also have the potential to lower the error in the original intensity forecasts of each model. As a result, two techniques are proposed to develop a post-processing procedure for a multimodel ensemble based on PRIME. The first approach is to inverse-weight models using PRIME absolute error predictions (higher predicted absolute error corresponds to lower weights). The second multimodel ensemble applies PRIME bias predictions to each model's intensity forecast and the mean of the corrected models is evaluated. The forecasts of both of these experimental ensembles are compared to those of the equal-weight ICON ensemble, which currently provides the most reliable forecasts in the Atlantic basin.
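The first proposed ensemble, inverse-weighting by predicted absolute error, reduces to a short calculation. The sketch below uses illustrative numbers only; they are not PRIME output or verified intensity forecasts.

```python
# Sketch: weight each intensity model inversely to its predicted absolute error.
# Error predictions and model forecasts are illustrative placeholders.
import numpy as np

model_forecasts = np.array([95.0, 105.0, 110.0, 90.0])   # kt, four intensity models
predicted_abs_error = np.array([8.0, 12.0, 20.0, 10.0])  # kt, per-model error forecast

weights = 1.0 / predicted_abs_error
weights /= weights.sum()                                  # normalize to sum to one

equal_weight = model_forecasts.mean()
inverse_weighted = np.dot(weights, model_forecasts)
print(f"equal-weight consensus: {equal_weight:.1f} kt")
print(f"inverse-error weighted: {inverse_weighted:.1f} kt  (weights {np.round(weights, 2)})")
```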
NASA Technical Reports Server (NTRS)
Lambert, Winifred C.
2003-01-01
This report describes the results from Phase II of the AMU's Short-Range Statistical Forecasting task for peak winds at the Shuttle Landing Facility (SLF). The peak wind speeds are an important forecast element for the Space Shuttle and Expendable Launch Vehicle programs. The 45th Weather Squadron and the Spaceflight Meteorology Group indicate that peak winds are challenging to forecast. The Applied Meteorology Unit was tasked to develop tools that aid in short-range forecasts of peak winds at tower sites of operational interest. A seven-year record of wind tower data was used in the analysis. Hourly and directional climatologies by tower and month were developed to determine the seasonal behavior of the average and peak winds. Probability density functions (PDFs) of peak wind speed were calculated to determine the distribution of peak speed as a function of average speed. These provide forecasters with a means of determining the probability of meeting or exceeding a certain peak wind given an observed or forecast average speed. A PC-based Graphical User Interface (GUI) tool was created to display the data quickly.
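The conditional exceedance probability described above can be estimated empirically from paired average/peak observations. The sketch below uses synthetic tower data and an assumed gust-factor relationship purely for illustration; it is not the AMU tool or its climatology.

```python
# Sketch: empirical probability that the peak wind meets or exceeds a threshold,
# conditioned on the observed or forecast average speed.
import numpy as np

rng = np.random.default_rng(4)
avg_speed = rng.gamma(4.0, 3.0, 20000)                     # kt, hourly average winds
peak_speed = avg_speed * rng.lognormal(0.35, 0.15, 20000)  # kt, associated peaks (assumed)

def p_peak_exceeds(threshold_kt, avg_kt, halfwidth=1.0):
    """P(peak >= threshold | average within +/- halfwidth of avg_kt)."""
    in_bin = np.abs(avg_speed - avg_kt) <= halfwidth
    return np.mean(peak_speed[in_bin] >= threshold_kt)

for avg in (10, 15, 20):
    print(f"avg {avg} kt -> P(peak >= 25 kt) = {p_peak_exceeds(25, avg):.2f}")
```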
A Wind Forecasting System for Energy Application
NASA Astrophysics Data System (ADS)
Courtney, Jennifer; Lynch, Peter; Sweeney, Conor
2010-05-01
Accurate forecasting of available energy is crucial for the efficient management and use of wind power in the national power grid. With energy output critically dependent upon wind strength there is a need to reduce the errors associated with wind forecasting. The objective of this research is to get the best possible wind forecasts for the wind energy industry. To achieve this goal, three methods are being applied. First, a mesoscale numerical weather prediction (NWP) model called WRF (Weather Research and Forecasting) is being used to predict wind values over Ireland. Currently, a grid resolution of 10 km is used, and higher model resolutions are being evaluated to establish whether they are economically viable given the forecast skill improvement they produce. Second, the WRF model is being used in conjunction with ECMWF (European Centre for Medium-Range Weather Forecasts) ensemble forecasts to produce a probabilistic weather forecasting product. Due to the chaotic nature of the atmosphere, a single, deterministic weather forecast can only have limited skill. The ECMWF ensemble methods produce an ensemble of 51 global forecasts, twice a day, by perturbing initial conditions of a 'control' forecast which is the best estimate of the initial state of the atmosphere. This method provides an indication of the reliability of the forecast and a quantitative basis for probabilistic forecasting. The limitation of ensemble forecasting lies in the fact that the perturbed model runs behave differently under different weather patterns and each model run is equally likely to be closest to the observed weather situation. Models have biases, and involve assumptions about physical processes and forcing factors such as underlying topography. Third, Bayesian Model Averaging (BMA) is being applied to the output from the ensemble forecasts in order to statistically post-process the results and achieve a better wind forecasting system. BMA is a promising technique that will offer calibrated probabilistic wind forecasts which will be invaluable in wind energy management. In brief, this method turns the ensemble forecasts into a calibrated predictive probability distribution. Each ensemble member is provided with a 'weight' determined by its relative predictive skill over a training period of around 30 days. Verification is carried out using observed wind data from operational wind farms. These are then compared, in terms of skill scores, to existing forecasts produced by ECMWF and Met Éireann. We are developing decision-making models to show the benefits achieved using the data produced by our wind energy forecasting system. An energy trading model will be developed, based on the rules currently used by the Single Electricity Market Operator for energy trading in Ireland. This trading model will illustrate the potential for financial savings by using the forecast data generated by this research.
Forecasting coconut production in the Philippines with ARIMA model
NASA Astrophysics Data System (ADS)
Lim, Cristina Teresa
2015-02-01
The study aimed to depict the situation of the coconut industry in the Philippines for future years by applying the Autoregressive Integrated Moving Average (ARIMA) method. Data on coconut production, one of the major industrial crops of the country, for the period 1990 to 2012 were analyzed using time-series methods. Autocorrelation (ACF) and partial autocorrelation (PACF) functions were calculated for the data. An appropriate Box-Jenkins autoregressive moving average model was fitted. The validity of the model was tested using standard statistical techniques. The fitted autoregressive moving average (ARMA) model was then used to forecast coconut production for the following eight years.
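A minimal sketch of the Box-Jenkins workflow described above: inspect ACF/PACF, fit a candidate ARIMA, and forecast eight years ahead. The production figures and the chosen (1,1,1) order are assumptions, not the Philippine coconut data or the paper's fitted model.

```python
# Sketch: Box-Jenkins style identification and forecasting for an annual series.
import numpy as np
from statsmodels.tsa.stattools import acf, pacf
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(5)
years = np.arange(1990, 2013)
production = 11.0 + 0.08 * (years - 1990) + rng.normal(0, 0.3, years.size)  # placeholder Mt

print("ACF :", np.round(acf(production, nlags=5), 2))
print("PACF:", np.round(pacf(production, nlags=5), 2))

# a candidate ARIMA(1,1,1) fitted after inspecting ACF/PACF (order is an assumption)
fit = ARIMA(production, order=(1, 1, 1)).fit()
print("8-year forecast:", np.round(fit.forecast(steps=8), 2))
```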
NASA Astrophysics Data System (ADS)
Lahmiri, Salim
2016-02-01
Multiresolution analysis techniques including the continuous wavelet transform, empirical mode decomposition, and variational mode decomposition are tested in the context of next-day interest rate variation prediction. In particular, the multiresolution analysis techniques are used to decompose the actual interest rate variation, and a feedforward neural network is used for training and prediction. A particle swarm optimization technique is adopted to optimize its initial weights. For comparison purposes, an autoregressive moving average model, a random walk process and the naive model are used as the main reference models. In order to show the feasibility of the presented hybrid models that combine multiresolution analysis techniques and a feedforward neural network optimized by particle swarm optimization, we used a set of six illustrative interest rates, including Moody's seasoned Aaa corporate bond yield, Moody's seasoned Baa corporate bond yield, 3-month, 6-month and 1-year treasury bills, and the effective federal funds rate. The forecasting results show that all multiresolution-based prediction systems outperform the conventional reference models on the criteria of mean absolute error, mean absolute deviation, and root mean-squared error. Therefore, it is advantageous to adopt hybrid multiresolution techniques and soft computing models to forecast daily interest rate variations as they provide good forecasting performance.
Traffic forecasting report : 2007.
DOT National Transportation Integrated Search
2008-05-01
This is the sixth edition of the Traffic Forecasting Report (TFR). This edition of the TFR contains the latest (predominantly 2007) forecasting/modeling data as follows: Functional class average traffic volume growth rates and trends; Vehi...
Weather forecasting based on hybrid neural model
NASA Astrophysics Data System (ADS)
Saba, Tanzila; Rehman, Amjad; AlGhamdi, Jarallah S.
2017-11-01
Making predictions about the weather has been a challenge throughout human history, and accurate meteorological guidance helps to foresee and handle problems in good time. Different strategies using various machine learning techniques have been investigated in reported forecasting systems, and weather remains a major challenge for machine data mining and inference. Accordingly, this paper presents a hybrid neural model (MLP and RBF) to enhance the accuracy of weather forecasting. The proposed hybrid model aims at precise forecasting given the nature of weather prediction systems. The study concentrates on data for weather forecasting in Saudi Arabia. The main input features employed to train the individual and hybrid neural networks include average dew point, minimum temperature, maximum temperature, mean temperature, average relative humidity, precipitation, normal wind speed, high wind speed and average cloudiness. The output layer is composed of two neurons representing rainy and dry weather. A trial-and-error approach is adopted to select an appropriate number of inputs to the hybrid neural network. The correlation coefficient, RMSE and scatter index are the standard yardsticks adopted for measuring forecast accuracy. Individually, MLP forecasting results are better than RBF; however, the proposed simplified hybrid neural model achieves better forecasting accuracy than both individual networks. Additionally, the results are better than those reported in the state of the art, using a simple neural structure that reduces training time and complexity.
A short-term ensemble wind speed forecasting system for wind power applications
NASA Astrophysics Data System (ADS)
Baidya Roy, S.; Traiteur, J. J.; Callicutt, D.; Smith, M.
2011-12-01
This study develops an adaptive, blended forecasting system to provide accurate wind speed forecasts 1 hour ahead of time for wind power applications. The system consists of an ensemble of 21 forecasts with different configurations of the Weather Research and Forecasting Single Column Model (WRFSCM) and a persistence model. The ensemble is calibrated against observations for a 2 month period (June-July, 2008) at a potential wind farm site in Illinois using the Bayesian Model Averaging (BMA) technique. The forecasting system is evaluated against observations for August 2008 at the same site. The calibrated ensemble forecasts significantly outperform the forecasts from the uncalibrated ensemble while significantly reducing forecast uncertainty under all environmental stability conditions. The system also generates significantly better forecasts than persistence, autoregressive (AR) and autoregressive moving average (ARMA) models during the morning transition and the diurnal convective regimes. This forecasting system is computationally more efficient than traditional numerical weather prediction models and can generate a calibrated forecast, including model runs and calibration, in approximately 1 minute. Currently, hour-ahead wind speed forecasts are almost exclusively produced using statistical models. However, numerical models have several distinct advantages over statistical models including the potential to provide turbulence forecasts. Hence, there is an urgent need to explore the role of numerical models in short-term wind speed forecasting. This work is a step in that direction and is likely to trigger a debate within the wind speed forecasting community.
NASA Astrophysics Data System (ADS)
Murray, S.; Guerra, J. A.
2017-12-01
One essential component of operational space weather forecasting is the prediction of solar flares. Early flare forecasting work focused on statistical methods based on historical flaring rates, but more complex machine learning methods have been developed in recent years. A multitude of flare forecasting methods are now available, however it is still unclear which of these methods performs best, and none are substantially better than climatological forecasts. Current operational space weather centres cannot rely on automated methods, and generally use statistical forecasts with a little human intervention. Space weather researchers are increasingly looking towards methods used in terrestrial weather to improve current forecasting techniques. Ensemble forecasting has been used in numerical weather prediction for many years as a way to combine different predictions in order to obtain a more accurate result. It has proved useful in areas such as magnetospheric modelling and coronal mass ejection arrival analysis, however has not yet been implemented in operational flare forecasting. Here we construct ensemble forecasts for major solar flares by linearly combining the full-disk probabilistic forecasts from a group of operational forecasting methods (ASSA, ASAP, MAG4, MOSWOC, NOAA, and Solar Monitor). Forecasts from each method are weighted by a factor that accounts for the method's ability to predict previous events, and several performance metrics (both probabilistic and categorical) are considered. The results provide space weather forecasters with a set of parameters (combination weights, thresholds) that allow them to select the most appropriate values for constructing the 'best' ensemble forecast probability value, according to the performance metric of their choice. In this way different forecasts can be made to fit different end-user needs.
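A minimal sketch of a performance-weighted linear combination of probabilistic flare forecasts. The member probabilities, event series, and the choice of inverse Brier score as the performance weight are assumptions for illustration; they are not the operational forecasts or the weighting actually derived in the study.

```python
# Sketch: linearly combine full-disk flare probabilities from several methods,
# weighting each by its past performance on a training period.
import numpy as np

rng = np.random.default_rng(6)
n_days = 400
methods = ["ASSA", "ASAP", "MAG4", "MOSWOC", "NOAA", "SolarMonitor"]
event = rng.random(n_days) < 0.15                      # observed flare occurrence (synthetic)
probs = np.clip(event[:, None] * 0.5 +                 # synthetic method probabilities
                rng.normal(0.2, 0.15, (n_days, len(methods))), 0.01, 0.99)

train, test = slice(0, 300), slice(300, None)
brier = np.mean((probs[train] - event[train, None]) ** 2, axis=0)
weights = (1.0 / brier) / np.sum(1.0 / brier)          # skill-based weights, sum to one

ensemble = probs[test] @ weights
print("weights:", dict(zip(methods, np.round(weights, 2))))
print("ensemble Brier on test:", round(float(np.mean((ensemble - event[test]) ** 2)), 3))
```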
Short-term load forecasting of power system
NASA Astrophysics Data System (ADS)
Xu, Xiaobin
2017-05-01
In order to ensure that power system optimization is soundly based, it is necessary to improve load forecasting accuracy. Power system load forecasting starts from accurate statistical and survey data on the history and current state of electricity consumption and uses scientific methods to predict the future development trend of the power load and the pattern of its change. Short-term load forecasting is the basis of power system operation and analysis, and is of great significance for unit commitment, economic dispatch and security checks. Therefore, load forecasting of the power system is explained in detail in this paper. First, we use data from 2012 to 2014 to establish a partial least squares regression model of the relationship between the daily maximum load, daily minimum load and daily average load and each meteorological factor, and, by inspecting the histogram of regression coefficients, select the daily maximum temperature, daily minimum temperature and daily average temperature as the meteorological factors used to improve load forecasting accuracy. Secondly, without considering the uncertain climate impact, we use a time series model to predict the load for 2015: the 2009-2014 load data were organized, and the previous six years of data were used to forecast the corresponding periods of 2015. The criterion for prediction accuracy is the average of the standard deviations between the prediction results and the average load of the previous six years. Finally, taking the climate effect into account, we use a BP neural network model to predict the 2015 data and optimize the forecast results on the basis of the time series model.
A similarity retrieval approach for weighted track and ambient field of tropical cyclones
NASA Astrophysics Data System (ADS)
Li, Ying; Xu, Luan; Hu, Bo; Li, Yuejun
2018-03-01
Retrieving historical tropical cyclones (TC) that have a similar position and hazard intensity to the target TC is an important tool in TC track forecasting and TC disaster assessment. A new similarity retrieval scheme is put forward based on historical TC track data and ambient field data, including ERA-Interim reanalysis and GFS and EC-fine forecasts. It takes into account both TC track similarity and ambient field similarity, and the optimal weight combination is explored subsequently. Results show that both the distance and direction errors of the TC track forecast at the 24-hour timescale follow an approximately U-shaped distribution. They tend to be large when the weight assigned to track similarity is close to 0 or 1.0, while they are relatively small when the track similarity weight is in the range 0.2-0.7 for the distance error and 0.3-0.6 for the direction error.
NASA Astrophysics Data System (ADS)
Seyoum, Mesgana; van Andel, Schalk Jan; Xuan, Yunqing; Amare, Kibreab
Flow forecasting in the poorly gauged, flood-prone Ribb and Gumara sub-catchments of the Blue Nile was studied with the aim of testing the performance of Quantitative Precipitation Forecasts (QPFs). Four types of QPFs were used: MM5 forecasts with a spatial resolution of 2 km, and the Maximum, Mean and Minimum members (MaxEPS, MeanEPS and MinEPS, where EPS stands for Ensemble Prediction System) of the fixed, low-resolution (2.5 by 2.5 degrees) National Oceanic and Atmospheric Administration Global Forecast System (NOAA GFS) ensemble forecasts. Neither the MM5 nor the EPS forecasts were calibrated (no bias correction, downscaling (for the EPS), etc.). In addition, zero forecasts assuming no rainfall in the coming days, and monthly average forecasts assuming average monthly rainfall in the coming days, were used. These rainfall forecasts were then used to drive the Hydrologic Engineering Center's Hydrologic Modeling System (HEC-HMS) hydrologic model for flow predictions. The results show that flow predictions using MaxEPS and MM5 precipitation forecasts over-predicted the peak flow for most of the seven events analyzed, whereas under-predicted peak flows were found using the zero and monthly average rainfall forecasts. The comparison of observed and predicted flow hydrographs shows that MM5, MaxEPS and MeanEPS precipitation forecasts were able to capture the rainfall signal that caused peak flows. Flow predictions based on MaxEPS and MeanEPS gave results that were quantitatively close to the observed flow for most events, whereas flow predictions based on MM5 resulted in large overestimations for some events. In follow-up research for this particular case study, calibration of the MM5 model will be performed. The overall analysis shows that freely available atmospheric forecasting products can provide additional information on upcoming rainfall and peak flow events in areas where only baseline forecasts such as no-rainfall or climatology are available.
Chen, Shyi-Ming; Manalu, Gandhi Maruli Tua; Pan, Jeng-Shyang; Liu, Hsiang-Chuan
2013-06-01
In this paper, we present a new method for fuzzy forecasting based on two-factors second-order fuzzy-trend logical relationship groups and particle swarm optimization (PSO) techniques. First, we fuzzify the historical training data of the main factor and the secondary factor, respectively, to form two-factors second-order fuzzy logical relationships. Then, we group the two-factors second-order fuzzy logical relationships into two-factors second-order fuzzy-trend logical relationship groups. Then, we obtain the optimal weighting vector for each fuzzy-trend logical relationship group by using PSO techniques to perform the forecasting. We also apply the proposed method to forecast the Taiwan Stock Exchange Capitalization Weighted Stock Index and the NTD/USD exchange rates. The experimental results show that the proposed method gets better forecasting performance than the existing methods.
Forecasting daily meteorological time series using ARIMA and regression models
NASA Astrophysics Data System (ADS)
Murat, Małgorzata; Malinowska, Iwona; Gos, Magdalena; Krzyszczak, Jaromir
2018-04-01
The daily air temperature and precipitation time series recorded between January 1, 1980 and December 31, 2010 at four European sites (Jokioinen, Dikopshof, Lleida and Lublin) from different climatic zones were modeled and forecasted. In our forecasting we used the Box-Jenkins and Holt-Winters seasonal autoregressive integrated moving-average methods, the autoregressive integrated moving-average with external regressors in the form of Fourier terms, and time series regression including trend and seasonality components, implemented with R software. It was demonstrated that the obtained models are able to capture the dynamics of the time series data and to produce sensible forecasts.
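Although the study was carried out in R, the Fourier-regressor variant can be illustrated with equivalent Python tooling. The series below is synthetic, and the (1,0,1) order and two harmonics are assumptions, not the orders fitted in the paper.

```python
# Sketch: ARIMA with external Fourier regressors for a daily temperature series
# with annual seasonality, in the spirit of the regression-with-ARIMA-errors approach.
import numpy as np
from statsmodels.tsa.statespace.sarimax import SARIMAX

rng = np.random.default_rng(7)
t = np.arange(3 * 365)
temp = 9 + 10 * np.sin(2 * np.pi * t / 365.25 - 1.8) + rng.normal(0, 2.5, t.size)

def fourier_terms(t, period=365.25, K=2):
    cols = []
    for k in range(1, K + 1):
        cols += [np.sin(2 * np.pi * k * t / period), np.cos(2 * np.pi * k * t / period)]
    return np.column_stack(cols)

X = fourier_terms(t)
fit = SARIMAX(temp, exog=X, order=(1, 0, 1)).fit(disp=False)

t_future = np.arange(t[-1] + 1, t[-1] + 31)
forecast = fit.forecast(steps=30, exog=fourier_terms(t_future))
print(np.round(forecast[:7], 1))
```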
NASA Astrophysics Data System (ADS)
Wang, Gaili; Wong, Wai-Kin; Hong, Yang; Liu, Liping; Dong, Jili; Xue, Ming
2015-03-01
The primary objective of this study is to improve the performance of deterministic high-resolution forecasts of rainfall caused by severe storms by merging a radar-based extrapolation scheme with a storm-scale Numerical Weather Prediction (NWP) model. The effectiveness of the Multi-scale Tracking and Forecasting Radar Echoes (MTaRE) model was compared with that of a storm-scale NWP model, the Advanced Regional Prediction System (ARPS), for forecasting a violent tornado event that developed over parts of western and much of central Oklahoma on May 24, 2011. Bias corrections were then performed to improve the accuracy of the ARPS forecasts. Finally, the corrected ARPS forecast and the radar-based extrapolation were optimally merged using a hyperbolic tangent weight scheme. The comparison of forecast skill between MTaRE and ARPS at a high spatial resolution of 0.01° × 0.01° and a high temporal resolution of 5 min showed that MTaRE outperformed ARPS in terms of index of agreement and mean absolute error (MAE). MTaRE had a better Critical Success Index (CSI) for lead times of less than 20 min and was comparable to ARPS for 20- to 50-min lead times, while ARPS had a better CSI for lead times beyond 50 min. Bias correction significantly improved the ARPS forecasts in terms of MAE and index of agreement, although the CSI of the corrected ARPS forecasts was similar to that of the uncorrected forecasts. Moreover, optimal merging using the hyperbolic tangent weight scheme further improved the forecast accuracy and was more stable.
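A minimal sketch of a hyperbolic-tangent blending weight that hands over from the extrapolation nowcast to the NWP forecast as lead time grows. The crossover time, steepness and rain-rate values are assumptions for illustration, not the parameters used in the paper.

```python
# Sketch: blend a radar-extrapolation nowcast with an NWP forecast using a
# hyperbolic-tangent weight as a function of lead time.
import numpy as np

def blend(extrap, nwp, lead_min, t_cross=40.0, steepness=20.0):
    """Weighted merge: w -> 1 (extrapolation) at short leads, -> 0 at long leads."""
    w = 0.5 * (1.0 - np.tanh((lead_min - t_cross) / steepness))
    return w * extrap + (1.0 - w) * nwp, w

for lead in (5, 20, 40, 60, 90):
    merged, w = blend(extrap=12.0, nwp=8.0, lead_min=lead)  # mm/h example values
    print(f"lead {lead:3d} min: weight on extrapolation = {w:.2f}, merged = {merged:.1f} mm/h")
```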
NASA Technical Reports Server (NTRS)
Barrett, Joe H., III; Roeder, William P.
2010-01-01
Peak wind speed is an important element in the 24-Hour and Weekly Planning Forecasts issued by the 45th Weather Squadron (45 WS). These forecasts are issued for planning operations at KSC/CCAFS. The 45 WS issues wind advisories for wind gusts greater than or equal to 25 kt, 35 kt and 50 kt from the surface to 300 ft. The AMU developed a cool-season (Oct-Apr) tool to help the 45 WS forecast the daily peak wind speed, the 5-minute average speed at the time of the peak wind, and the probability that the peak speed will be greater than or equal to 25 kt, 35 kt and 50 kt. The AMU tool also forecasts the daily average wind speed from 30 ft to 60 ft. The Phase I and II tools were delivered as a Microsoft Excel graphical user interface (GUI); the Phase II tool was also delivered as a Meteorological Interactive Data Display System (MIDDS) GUI. The Phase I and II forecast methods were compared to climatology, 45 WS wind advisories and North American Mesoscale model (MesoNAM) forecasts in a verification data set.
Monthly mean forecast experiments with the GISS model
NASA Technical Reports Server (NTRS)
Spar, J.; Atlas, R. M.; Kuo, E.
1976-01-01
The GISS general circulation model was used to compute global monthly mean forecasts for January 1973, 1974, and 1975 from initial conditions on the first day of each month and constant sea surface temperatures. Forecasts were evaluated in terms of global and hemispheric energetics, zonally averaged meridional and vertical profiles, forecast error statistics, and monthly mean synoptic fields. Although it generated a realistic mean meridional structure, the model did not adequately reproduce the observed interannual variations in the large scale monthly mean energetics and zonally averaged circulation. The monthly mean sea level pressure field was not predicted satisfactorily, but annual changes in the Icelandic low were simulated. The impact of temporal sea surface temperature variations on the forecasts was investigated by comparing two parallel forecasts for January 1974, one using climatological ocean temperatures and the other observed daily ocean temperatures. The use of daily updated sea surface temperatures produced no discernible beneficial effect.
Can Business Students Forecast Their Own Grade?
ERIC Educational Resources Information Center
Hossain, Belayet; Tsigaris, Panagiotis
2013-01-01
This study examines grade expectations of two groups of business students for their final course mark. We separate students that are on average "better" forecasters on the basis of them not making significant forecast errors during the semester from those students that are poor forecasters of their final grade. We find that the better…
Post-processing method for wind speed ensemble forecast using wind speed and direction
NASA Astrophysics Data System (ADS)
Sofie Eide, Siri; Bjørnar Bremnes, John; Steinsland, Ingelin
2017-04-01
Statistical methods are widely applied to enhance the quality of both deterministic and ensemble NWP forecasts. In many situations, like wind speed forecasting, most of the predictive information is contained in one variable in the NWP models. However, in statistical calibration of deterministic forecasts it is often seen that including more variables can further improve forecast skill. For ensembles this is rarely taken advantage of, mainly because it is generally not straightforward to include multiple variables. In this study, it is demonstrated how multiple variables can be included in Bayesian model averaging (BMA) by using a flexible regression method for estimating the conditional means. The method is applied to wind speed forecasting at 204 Norwegian stations based on wind speed and direction forecasts from the ECMWF ensemble system. At about 85 % of the sites the ensemble forecasts were improved in terms of CRPS by adding wind direction as a predictor compared to only using wind speed. On average the improvements were about 5 %, but mainly for moderate to strong wind situations. For weak wind speeds adding wind direction had a more or less neutral impact.
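A minimal sketch of the general idea: each member's conditional mean is regressed on both its speed and its direction forecast (direction encoded as sine/cosine), and a Gaussian BMA mixture is then fitted by a simple EM. The data are synthetic, and the linear regression, common Gaussian kernel and EM scheme below are generic BMA choices, not the flexible regression method used in the study.

```python
# Sketch: Bayesian model averaging for wind speed with wind direction as an
# additional predictor in each member's conditional-mean model.
import numpy as np

rng = np.random.default_rng(8)
n, members = 1000, 5
obs = rng.gamma(3.0, 2.5, n)                                   # observed wind speed
spd = obs[:, None] + rng.normal(0, 1.5, (n, members))          # member speed forecasts
direction = rng.uniform(0, 2 * np.pi, (n, members))            # member direction forecasts

def conditional_means(spd, direction, y):
    """Per-member linear regression: y ~ 1 + speed + sin(dir) + cos(dir)."""
    mu = np.empty_like(spd)
    for k in range(spd.shape[1]):
        X = np.column_stack([np.ones(len(y)), spd[:, k],
                             np.sin(direction[:, k]), np.cos(direction[:, k])])
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        mu[:, k] = X @ beta
    return mu

mu = conditional_means(spd, direction, obs)

# EM for BMA weights and a common Gaussian spread
w, sigma2 = np.full(members, 1.0 / members), np.var(obs - mu.mean(axis=1))
for _ in range(50):
    dens = w * np.exp(-0.5 * (obs[:, None] - mu) ** 2 / sigma2) / np.sqrt(2 * np.pi * sigma2)
    z = dens / dens.sum(axis=1, keepdims=True)                 # member responsibilities
    w = z.mean(axis=0)
    sigma2 = np.sum(z * (obs[:, None] - mu) ** 2) / n

print("BMA weights:", np.round(w, 3), " sigma:", round(float(np.sqrt(sigma2)), 2))
```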
2014 Gulf of Mexico Hypoxia Forecast
Scavia, Donald; Evans, Mary Anne; Obenour, Dan
2014-01-01
The Gulf of Mexico annual summer hypoxia forecasts are based on average May total nitrogen loads from the Mississippi River basin for that year. The load estimate, recently released by USGS, is 4,761 metric tons per day. Based on that estimate, we predict the area of this summer’s hypoxic zone to be 14,000 square kilometers (95% credible interval, 8,000 to 20,000) – an “average year”. Our forecast hypoxic volume is 50 km3 (95% credible interval, 20 to 77).
Time series forecasting using ERNN and QR based on Bayesian model averaging
NASA Astrophysics Data System (ADS)
Pwasong, Augustine; Sathasivam, Saratha
2017-08-01
The Bayesian model averaging technique is a multi-model combination technique. The technique was employed to amalgamate the Elman recurrent neural network (ERNN) technique with the quadratic regression (QR) technique. The amalgamation produced a hybrid technique known as the hybrid ERNN-QR technique. The potentials of forecasting with the hybrid technique are compared with the forecasting capabilities of individual techniques of ERNN and QR. The outcome revealed that the hybrid technique is superior to the individual techniques in the mean square error sense.
Chesapeake Bay Hypoxic Volume Forecasts and Results
Evans, Mary Anne; Scavia, Donald
2013-01-01
Given the average Jan-May 2013 total nitrogen load of 162,028 kg/day, this summer's hypoxia volume forecast is 6.1 km3, slightly smaller than average size for the period of record and almost the same as 2012. The late July 2013 measured volume was 6.92 km3.
Wang, Hongguang
2018-01-01
Annual power load forecasting is not only the premise of formulating reasonable macro power planning, but also an important guarantee for the safe and economic operation of the power system. In view of the characteristics of annual power load forecasting, the GM(1,1) grey model is widely applied. Introducing a buffer operator into GM(1,1) to pre-process the historical annual power load data is one approach to improving forecasting accuracy. To solve the problem of the non-adjustable action intensity of the traditional weakening buffer operator, a variable-weight weakening buffer operator (VWWBO) and background value optimization (BVO) are used to dynamically pre-process the historical annual power load data, and a VWWBO-BVO-based GM(1,1) is proposed. To find the optimal values of the variable-weight buffer coefficient and the background value weight generating coefficient of the proposed model, grey relational analysis (GRA) and an improved gravitational search algorithm (IGSA) are integrated, and a GRA-IGSA integration algorithm is constructed that aims to maximize the grey relational degree between the simulated value sequence and the actual value sequence. Through the adjustable action intensity of the buffer operator, the proposed model optimized by the GRA-IGSA integration algorithm obtains better forecasting accuracy, as demonstrated by the case studies, and can provide an optimized solution for annual power load forecasting. PMID:29768450
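For orientation, a plain GM(1,1) grey forecast is sketched below; it shows only the base model that the buffer-operator and background-value refinements build on. The load figures are placeholders, the background-value weight is fixed at 0.5, and none of the paper's VWWBO, BVO or GRA-IGSA optimizations are reproduced.

```python
# Sketch: a basic GM(1,1) grey forecast for an annual power load series.
import numpy as np

def gm11_forecast(x0, horizon):
    x1 = np.cumsum(x0)                                  # accumulated generating sequence
    z1 = 0.5 * (x1[1:] + x1[:-1])                       # background values (weight 0.5)
    B = np.column_stack([-z1, np.ones(len(z1))])
    a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]    # develop & grey input coefficients
    k = np.arange(1, len(x0) + horizon)
    x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a   # accumulated predictions
    x0_hat = np.diff(np.concatenate([[x0[0]], x1_hat])) # back to the original scale
    return x0_hat[-horizon:]

load = np.array([482, 510, 546, 585, 621, 663, 702])    # assumed annual loads (TWh)
print("next 3 years:", np.round(gm11_forecast(load, 3), 1))
```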
Spatial and Temporal scales of time-averaged 700 MB height anomalies
NASA Technical Reports Server (NTRS)
Gutzler, D.
1981-01-01
The monthly and seasonal forecasting technique is based to a large extent on the extrapolation of trends in the positions of the centers of time averaged geopotential height anomalies. The complete forecasted height pattern is subsequently drawn around the forecasted anomaly centers. The efficacy of this technique was tested and time series of observed monthly mean and 5 day mean 700 mb geopotential heights were examined. Autocorrelation statistics are generated to document the tendency for persistence of anomalies. These statistics are compared to a red noise hypothesis to check for evidence of possible preferred time scales of persistence. Space-time spectral analyses at middle latitudes are checked for evidence of periodicities which could be associated with predictable month-to-month trends. A local measure of the average spatial scale of anomalies is devised for guidance in the completion of the anomaly pattern around the forecasted centers.
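The persistence check described above can be illustrated by comparing sample autocorrelations with the red-noise expectation r(lag) = r1**lag implied by a first-order autoregressive null. The anomaly series and AR(1) coefficient below are synthetic assumptions, not the 700 mb height data.

```python
# Sketch: sample autocorrelation of a monthly anomaly series vs. the red-noise curve.
import numpy as np

rng = np.random.default_rng(9)
n, phi = 480, 0.35                       # 40 years of monthly anomalies, assumed AR(1) memory
anom = np.empty(n)
anom[0] = rng.normal()
for t in range(1, n):
    anom[t] = phi * anom[t - 1] + rng.normal()

def autocorr(x, lag):
    x = x - x.mean()
    return np.dot(x[:-lag], x[lag:]) / np.dot(x, x)

r1 = autocorr(anom, 1)
for lag in range(1, 7):
    print(f"lag {lag}: sample r = {autocorr(anom, lag):+.2f}, red-noise r1**lag = {r1**lag:+.2f}")
```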
NASA Astrophysics Data System (ADS)
Goswami, M.; O'Connor, K. M.; Shamseldin, A. Y.
The "Galway Real-Time River Flow Forecasting System" (GFFS) is a software pack- age developed at the Department of Engineering Hydrology, of the National University of Ireland, Galway, Ireland. It is based on a selection of lumped black-box and con- ceptual rainfall-runoff models, all developed in Galway, consisting primarily of both the non-parametric (NP) and parametric (P) forms of two black-box-type rainfall- runoff models, namely, the Simple Linear Model (SLM-NP and SLM-P) and the seasonally-based Linear Perturbation Model (LPM-NP and LPM-P), together with the non-parametric wetness-index-based Linearly Varying Gain Factor Model (LVGFM), the black-box Artificial Neural Network (ANN) Model, and the conceptual Soil Mois- ture Accounting and Routing (SMAR) Model. Comprised of the above suite of mod- els, the system enables the user to calibrate each model individually, initially without updating, and it is capable also of producing combined (i.e. consensus) forecasts us- ing the Simple Average Method (SAM), the Weighted Average Method (WAM), or the Artificial Neural Network Method (NNM). The updating of each model output is achieved using one of four different techniques, namely, simple Auto-Regressive (AR) updating, Linear Transfer Function (LTF) updating, Artificial Neural Network updating (NNU), and updating by the Non-linear Auto-Regressive Exogenous-input method (NARXM). The models exhibit a considerable range of variation in degree of complexity of structure, with corresponding degrees of complication in objective func- tion evaluation. Operating in continuous river-flow simulation and updating modes, these models and techniques have been applied to two Irish catchments, namely, the Fergus and the Brosna. A number of performance evaluation criteria have been used to comparatively assess the model discharge forecast efficiency.
Optimising seasonal streamflow forecast lead time for operational decision making in Australia
NASA Astrophysics Data System (ADS)
Schepen, Andrew; Zhao, Tongtiegang; Wang, Q. J.; Zhou, Senlin; Feikema, Paul
2016-10-01
Statistical seasonal forecasts of 3-month streamflow totals are released in Australia by the Bureau of Meteorology and updated on a monthly basis. The forecasts are often released in the second week of the forecast period, due to the onerous forecast production process. The current service relies on models built using data for complete calendar months, meaning the forecast production process cannot begin until the first day of the forecast period. Somehow, the bureau needs to transition to a service that provides forecasts before the beginning of the forecast period; timelier forecast release will become critical as sub-seasonal (monthly) forecasts are developed. Increasing the forecast lead time to one month ahead is not considered a viable option for Australian catchments that typically lack any predictability associated with snowmelt. The bureau's forecasts are built around Bayesian joint probability models that have antecedent streamflow, rainfall and climate indices as predictors. In this study, we adapt the modelling approach so that forecasts have any number of days of lead time. Daily streamflow and sea surface temperatures are used to develop predictors based on 28-day sliding windows. Forecasts are produced for 23 forecast locations with 0-14- and 21-day lead time. The forecasts are assessed in terms of continuous ranked probability score (CRPS) skill score and reliability metrics. CRPS skill scores, on average, reduce monotonically with increase in days of lead time, although both positive and negative differences are observed. Considering only skilful forecast locations, CRPS skill scores at 7-day lead time are reduced on average by 4 percentage points, with differences largely contained within +5 to -15 percentage points. A flexible forecasting system that allows for any number of days of lead time could benefit Australian seasonal streamflow forecast users by allowing more time for forecasts to be disseminated, comprehended and made use of prior to the commencement of a forecast season. The system would allow for forecasts to be updated if necessary.
An Intelligent Decision Support System for Workforce Forecast
2011-01-01
ARIMA) model to forecast the demand for construction skills in Hong Kong. This model was based... [Excerpt fragments list the candidate forecasting techniques covered: Decision Trees, ARIMA, Rule-Based Forecasting, Segmentation Forecasting, Regression Analysis, Simulation Modeling, Input-Output Models, LP and NLP, Markovian... data; one selection criterion reads "When results are needed as a set of easily interpretable rules"; section 4.1.4 introduces auto-regressive, integrated, moving-average (ARIMA) models.]
Improving wave forecasting by integrating ensemble modelling and machine learning
NASA Astrophysics Data System (ADS)
O'Donncha, F.; Zhang, Y.; James, S. C.
2017-12-01
Modern smart-grid networks use technologies to instantly relay information on supply and demand to support effective decision making. Integration of renewable-energy resources with these systems demands accurate forecasting of energy production (and demand) capacities. For wave-energy converters, this requires wave-condition forecasting to enable estimates of energy production. Current operational wave forecasting systems exhibit substantial errors with wave-height RMSEs of 40 to 60 cm being typical, which limits the reliability of energy-generation predictions thereby impeding integration with the distribution grid. In this study, we integrate physics-based models with statistical learning aggregation techniques that combine forecasts from multiple, independent models into a single "best-estimate" prediction of the true state. The Simulating Waves Nearshore physics-based model is used to compute wind- and currents-augmented waves in the Monterey Bay area. Ensembles are developed based on multiple simulations perturbing input data (wave characteristics supplied at the model boundaries and winds) to the model. A learning-aggregation technique uses past observations and past model forecasts to calculate a weight for each model. The aggregated forecasts are compared to observation data to quantify the performance of the model ensemble and aggregation techniques. The appropriately weighted ensemble model outperforms an individual ensemble member with regard to forecasting wave conditions.
Forecast of Frost Days Based on Monthly Temperatures
NASA Astrophysics Data System (ADS)
Castellanos, M. T.; Tarquis, A. M.; Morató, M. C.; Saa-Requejo, A.
2009-04-01
Although frost can cause considerable crop damage and mitigation practices against forecasted frost exist, frost forecasting technologies have not changed for many years. The paper reports a new method to forecast the monthly number of frost days (FD) for several meteorological stations in the Community of Madrid (Spain) based on the successive application of two models. The first is a stochastic model, an autoregressive integrated moving average (ARIMA), that forecasts the monthly absolute minimum temperature (tmin) and the monthly average of minimum temperatures (tminav) following the Box-Jenkins methodology. The second model relates these monthly temperatures to the distribution of daily minimum temperature within the month. Three ARIMA models were identified for the time series analyzed, with a seasonal period corresponding to one year. They present the same seasonal behavior (moving average differenced model) and different non-seasonal parts: an autoregressive model (Model 1), a moving average differenced model (Model 2) and an autoregressive and moving average model (Model 3). At the same time, the results point out that the daily minimum temperature (tdmin) at the meteorological stations studied followed a normal distribution each month, with a very similar standard deviation across years. The standard deviation obtained for each station and each month could be used as a risk index for cold months. The application of Model 1 to predict minimum monthly temperatures gave the best FD forecast. This procedure provides a tool for crop managers and crop insurance companies to assess the risk of frost frequency and intensity, so that they can take steps to mitigate frost damage and estimate the losses that frost would cause. This research was supported by Comunidad de Madrid Research Project 076/92. The cooperation of the Spanish National Meteorological Institute and the Spanish Ministerio de Agricultura, Pesca y Alimentación (MAPA) is gratefully acknowledged.
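Once a monthly mean and standard deviation of daily minimum temperature are available, the expected number of frost days follows directly from the normal-distribution result reported above. The sketch below is illustrative; the temperature values are assumptions, not the Madrid station data.

```python
# Sketch: expected monthly frost-day count from a normal distribution of daily minima.
import numpy as np
from scipy.stats import norm

def expected_frost_days(tminav_c, sd_c, days_in_month=31, frost_threshold_c=0.0):
    """Expected count of days with daily minimum below the frost threshold."""
    p_frost = norm.cdf(frost_threshold_c, loc=tminav_c, scale=sd_c)
    return days_in_month * p_frost

for tminav, sd in [(2.5, 3.0), (0.5, 3.0), (-1.0, 2.5)]:   # assumed winter-month cases
    fd = expected_frost_days(tminav, sd)
    print(f"tminav={tminav:+.1f} C, sd={sd:.1f} C -> expected FD = {fd:.1f}")
```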
Mixture EMOS model for calibrating ensemble forecasts of wind speed.
Baran, S; Lerch, S
2016-03-01
Ensemble model output statistics (EMOS) is a statistical tool for post-processing forecast ensembles of weather variables obtained from multiple runs of numerical weather prediction models in order to produce calibrated predictive probability density functions. The EMOS predictive probability density function is given by a parametric distribution with parameters depending on the ensemble forecasts. We propose an EMOS model for calibrating wind speed forecasts based on weighted mixtures of truncated normal (TN) and log-normal (LN) distributions where model parameters and component weights are estimated by optimizing the values of proper scoring rules over a rolling training period. The new model is tested on wind speed forecasts of the 50-member European Centre for Medium-Range Weather Forecasts ensemble, the 11-member Aire Limitée Adaptation dynamique Développement International-Hungary Ensemble Prediction System ensemble of the Hungarian Meteorological Service, and the eight-member University of Washington mesoscale ensemble, and its predictive performance is compared with that of various benchmark EMOS models based on single parametric families and combinations thereof. The results indicate improved calibration of probabilistic forecasts and accuracy of point forecasts in comparison with the raw ensemble and climatological forecasts. The mixture EMOS model significantly outperforms the TN and LN EMOS methods; moreover, it provides better calibrated forecasts than the TN-LN combination model and offers increased flexibility while avoiding covariate selection problems. © 2016 The Authors. Environmetrics published by John Wiley & Sons Ltd.
NASA Astrophysics Data System (ADS)
O'Brien, Enda; McKinstry, Alastair; Ralph, Adam
2015-04-01
Building on previous work presented at EGU 2013 (http://www.sciencedirect.com/science/article/pii/S1876610213016068 ), more results are available now from a different wind-farm in complex terrain in southwest Ireland. The basic approach is to interpolate wind-speed forecasts from an operational weather forecast model (i.e., HARMONIE in the case of Ireland) to the precise location of each wind-turbine, and then use Bayes Model Averaging (BMA; with statistical information collected from a prior training-period of e.g., 25 days) to remove systematic biases. Bias-corrected wind-speed forecasts (and associated power-generation forecasts) are then provided twice daily (at 5am and 5pm) out to 30 hours, with each forecast validation fed back to BMA for future learning. 30-hr forecasts from the operational Met Éireann HARMONIE model at 2.5km resolution have been validated against turbine SCADA observations since Jan. 2014. An extra high-resolution (0.5km grid-spacing) HARMONIE configuration has been run since Nov. 2014 as an extra member of the forecast "ensemble". A new version of HARMONIE with extra filters designed to stabilize high-resolution configurations has been run since Jan. 2015. Measures of forecast skill and forecast errors will be provided, and the contributions made by the various physical and computational enhancements to HARMONIE will be quantified.
Wind power application research on the fusion of the determination and ensemble prediction
NASA Astrophysics Data System (ADS)
Lan, Shi; Lina, Xu; Yuzhu, Hao
2017-07-01
A fused wind speed product for the wind farm is designed using ensemble wind speed prediction products from the European Centre for Medium-Range Weather Forecasts (ECMWF) and dedicated numerical wind power model products based on Mesoscale Model 5 (MM5) and the Beijing Rapid Update Cycle (BJ-RUC), which are suitable for short-term wind power forecasting and electric dispatch. A single-valued forecast is formed by calculating ensemble statistics of the Bayesian probabilistic forecast that represents the uncertainty of the ECMWF ensemble prediction. Using an autoregressive integrated moving average (ARIMA) model to improve the time resolution of the single-valued forecast, and based on Bayesian model averaging (BMA) and the deterministic numerical model prediction, the optimal wind speed forecasting curve and its confidence interval are provided. The results show that the fused forecast clearly improves accuracy relative to the existing numerical forecasting products. Compared with the existing 0-24 h deterministic forecast over the validation period, the mean absolute error (MAE) is decreased by 24.3 % and the correlation coefficient (R) is increased by 12.5 %. In comparison with the ECMWF ensemble forecast, the MAE is reduced by 11.7 % and R is increased by 14.5 %. Additionally, the MAE did not grow as the forecast lead time lengthened.
Forecasting malaria in a highly endemic country using environmental and clinical predictors.
Zinszer, Kate; Kigozi, Ruth; Charland, Katia; Dorsey, Grant; Brewer, Timothy F; Brownstein, John S; Kamya, Moses R; Buckeridge, David L
2015-06-18
Malaria thrives in poor tropical and subtropical countries where local resources are limited. Accurate disease forecasts can provide public and clinical health services with the information needed to implement targeted approaches for malaria control that make effective use of limited resources. The objective of this study was to determine the relevance of environmental and clinical predictors of malaria across different settings in Uganda. Forecasting models were based on health facility data collected by the Uganda Malaria Surveillance Project and satellite-derived rainfall, temperature, and vegetation estimates from 2006 to 2013. Facility-specific forecasting models of confirmed malaria were developed using multivariate autoregressive integrated moving average models and produced weekly forecast horizons over a 52-week forecasting period. The model with the most accurate forecasts varied by site and by forecast horizon. Clinical predictors were retained in the models with the highest predictive power for all facility sites. The average error over the 52 forecasting horizons ranged from 26 to 128% whereas the cumulative burden forecast error ranged from 2 to 22%. Clinical data, such as drug treatment, could be used to improve the accuracy of malaria predictions in endemic settings when coupled with environmental predictors. Further exploration of malaria forecasting is necessary to improve its accuracy and value in practice, including examining other environmental and intervention predictors, including insecticide-treated nets.
NASA Astrophysics Data System (ADS)
Castiglioni, S.; Toth, E.
2009-04-01
In the calibration procedure of continuously-simulating models, the hydrologist has to choose which part of the observed hydrograph is most important to fit, either implicitly, through the visual agreement in manual calibration, or explicitly, through the choice of the objective function(s). By changing the objective functions it is in fact possible to emphasise different kinds of errors, giving them more weight in the calibration phase. The objective functions used for calibrating hydrological models are generally of the quadratic type (mean squared error, correlation coefficient, coefficient of determination, etc.) and are therefore oversensitive to high and extreme error values, which typically correspond to high and extreme streamflow values. This is appropriate when, as in the majority of streamflow forecasting applications, the focus is on the ability to reproduce potentially dangerous flood events; on the contrary, if the aim of the modelling is the reproduction of low and average flows, as is the case in water resource management problems, this may result in a deterioration of the forecasting performance. This contribution presents the results of a series of automatic calibration experiments of a continuously-simulating rainfall-runoff model applied over several real-world case-studies, where the objective function is chosen so as to highlight the fit of average and low flows. In this work a simple conceptual model will be used, of the lumped type, with a relatively low number of parameters to be calibrated. The experiments will be carried out for a set of case-study watersheds in Central Italy, covering an extremely wide range of geo-morphologic conditions and for which at least five years of concurrent daily series of streamflow, precipitation and evapotranspiration estimates are available. Different objective functions will be tested in calibration and the results will be compared, over validation data, against those obtained with traditional squared functions. A companion work presents the results, over the same case-study watersheds and observation periods, of a system-theoretic model, again calibrated for reproducing average and low streamflows.
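To make the effect of the objective function concrete, the sketch below scores the same simulated hydrograph with a squared-error criterion on flows and on log-transformed flows, the latter down-weighting flood peaks and emphasising low and average flows. The flow series are synthetic and the log transform is one common choice, not necessarily the criterion tested in the study.

```python
# Sketch: Nash-Sutcliffe efficiency (NSE) on raw flows vs. on log-transformed flows.
import numpy as np

rng = np.random.default_rng(10)
obs = np.exp(rng.normal(1.0, 0.8, 1000))          # skewed daily flows (m3/s)
sim = obs * rng.lognormal(0.0, 0.25, 1000)        # model output with multiplicative error
sim[obs > np.quantile(obs, 0.98)] *= 0.6          # deliberately poor on flood peaks

def nse(o, s):
    return 1.0 - np.sum((o - s) ** 2) / np.sum((o - o.mean()) ** 2)

print(f"NSE on flows      : {nse(obs, sim):.3f}")       # dominated by the peak errors
print(f"NSE on log(flows) : {nse(np.log(obs), np.log(sim)):.3f}")
```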
Fuzzy logic-based analogue forecasting and hybrid modelling of horizontal visibility
NASA Astrophysics Data System (ADS)
Tuba, Zoltán; Bottyán, Zsolt
2018-04-01
Forecasting visibility is one of the greatest challenges in aviation meteorology. At the same time, highly accurate visibility forecasts can significantly reduce, or even help avoid, weather-related risk in aviation. To improve visibility forecasting, this research links fuzzy logic-based analogue forecasting and post-processed numerical weather prediction model outputs in a hybrid forecast. The performance of the analogue forecasting model was improved by applying the Analytic Hierarchy Process. Then, a linear combination of these outputs was used to create an ultra-short-term hybrid visibility prediction that gradually shifts the focus from statistical to numerical products, taking advantage of each during the forecast period. This makes it possible to bring the numerical visibility forecast closer to the observations even when it is initially wrong. Complete verification of the categorical forecasts was carried out; results are also given for persistence and terminal aerodrome forecasts (TAF) for comparison. The average Heidke Skill Score (HSS) of the analogue and hybrid forecasts over the examined airports is very similar, even at the end of the forecast period, where the weight of the analogue prediction in the final hybrid output is only 0.1-0.2. However, in the case of poor visibility (1000-2500 m), hybrid (0.65) and analogue forecasts (0.64) have a similar average HSS in the first 6 h of the forecast period, and perform better than persistence (0.60) or TAF (0.56). An important achievement is that the hybrid model takes into account the physics and dynamics of the atmosphere through the increasing share of the numerical weather prediction component. In spite of this, its performance is similar to that of the most effective visibility forecasting methods and does not inherit the poor verification results of purely numerical outputs.
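A minimal sketch of the time-varying linear combination is given below. The weighting schedule (analogue weight decaying from 1 toward roughly 0.15 at the end of the period) is an illustrative assumption broadly consistent with the 0.1-0.2 figure quoted above, not the operational coefficients.

```python
import numpy as np

def hybrid_visibility(analogue_vis, nwp_vis, lead_hours, max_lead=24.0, w_end=0.15):
    """Blend analogue and post-processed NWP visibility forecasts.
    The analogue weight decays linearly from 1.0 at analysis time to w_end
    at max_lead (illustrative schedule only)."""
    w_analogue = 1.0 - (1.0 - w_end) * np.clip(lead_hours / max_lead, 0.0, 1.0)
    return w_analogue * analogue_vis + (1.0 - w_analogue) * nwp_vis

leads = np.array([0, 6, 12, 18, 24], dtype=float)       # forecast lead times (h)
analogue = np.array([1800, 2000, 2500, 4000, 6000.0])   # m, statistical analogue forecast
nwp = np.array([4000, 3500, 3000, 4500, 6500.0])        # m, post-processed NWP forecast
print(hybrid_visibility(analogue, nwp, leads))
```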
Building regional early flood warning systems by AI techniques
NASA Astrophysics Data System (ADS)
Chang, F. J.; Chang, L. C.; Amin, M. Z. B. M.
2017-12-01
Building an early flood warning system is essential for protecting residents against flood hazards and for taking action to mitigate losses. This study implements AI technology for forecasting multi-step-ahead regional flood inundation maps during storm events. The methodology includes three major schemes: (1) configuring the self-organizing map (SOM) to categorize a large number of regional inundation maps into a meaningful topology; (2) building dynamic neural networks to forecast multi-step-ahead average inundated depths (AID); and (3) adjusting the weights of the selected neuron in the constructed SOM based on the forecasted AID to obtain real-time regional inundation maps. The proposed models are trained and tested on a large number of inundation data sets collected in the regions with the most frequent and serious flooding in the river basin. The results show that the SOM topological relationships between individual neurons and their neighbouring neurons are visible and clearly distinguishable, and that the hybrid model can continuously provide multi-step-ahead regional inundation maps with high resolution during storm events, with relatively small RMSE values and high R2 compared with numerical simulation data sets. The computing time is only a few seconds, which enables real-time regional flood inundation forecasting and early flood warning. We demonstrate that the proposed hybrid ANN-based model has a robust and reliable predictive ability and can be used for early warning to mitigate flood disasters.
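The third scheme, adjusting the representative inundation map selected from the SOM using the forecasted AID, can be approximated as below. Here the SOM neurons are stood in for by precomputed cluster-mean depth maps, and the linear depth rescaling is an illustrative simplification of the weight-adjustment step, not the authors' algorithm.

```python
import numpy as np

def forecast_inundation_map(cluster_maps, forecast_aid):
    """Pick the representative inundation map whose average inundated depth (AID)
    is closest to the forecasted AID, then rescale it to match that AID.
    cluster_maps: array (n_clusters, ny, nx) of representative depth maps,
    a simplified stand-in for SOM neuron weight maps."""
    aids = cluster_maps.reshape(len(cluster_maps), -1).mean(axis=1)
    best = np.argmin(np.abs(aids - forecast_aid))
    scale = forecast_aid / aids[best] if aids[best] > 0 else 1.0
    return cluster_maps[best] * scale

rng = np.random.default_rng(1)
maps = rng.uniform(0.0, 2.0, size=(9, 20, 20))   # toy library of regional depth maps (m)
print(forecast_inundation_map(maps, forecast_aid=0.8).mean())  # close to 0.8 m by construction
```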
NASA Astrophysics Data System (ADS)
Ye, Jing; Dang, Yaoguo; Li, Bingjun
2018-01-01
The Grey-Markov forecasting model is a combination of a grey prediction model and a Markov chain, and it shows clear advantages for data sequences that are non-stationary and volatile. However, the state division process in the traditional Grey-Markov forecasting model is mostly based on subjectively chosen real numbers, which directly affects the accuracy of the forecast values. To address this, this paper introduces the central-point triangular whitenization weight function into the state division to calculate the possibility of the observed values belonging to each state, reflecting the preference degrees of the different states in an objective way. In addition, background value optimization is applied in the traditional grey model to generate better fitting data. By these means, an improved Grey-Markov forecasting model is built. Finally, taking grain production in Henan Province as an example, the model's validity is verified by comparison with a GM(1,1) model based on background value optimization and with the traditional Grey-Markov forecasting model.
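For readers unfamiliar with the grey prediction component, the sketch below implements a basic GM(1,1) model in its textbook form, without the background-value optimization or the Markov state correction discussed above; the grain series is a toy example.

```python
import numpy as np

def gm11_forecast(x0, n_ahead=3):
    """Basic GM(1,1) grey forecasting (textbook formulation)."""
    x0 = np.asarray(x0, dtype=float)
    x1 = np.cumsum(x0)                               # accumulated generating operation
    z1 = 0.5 * (x1[1:] + x1[:-1])                    # traditional background values
    B = np.column_stack([-z1, np.ones_like(z1)])
    Y = x0[1:]
    a, b = np.linalg.lstsq(B, Y, rcond=None)[0]      # develop coefficient a, grey input b
    k = np.arange(len(x0) + n_ahead)
    x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a
    x0_hat = np.diff(x1_hat, prepend=x1_hat[0])      # inverse accumulation
    x0_hat[0] = x0[0]
    return x0_hat                                    # fitted values followed by forecasts

grain = [55.0, 57.2, 60.1, 61.5, 63.9, 66.4]         # toy yearly production series
print(gm11_forecast(grain, n_ahead=2))
```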
NASA Astrophysics Data System (ADS)
Mahmud, A.; Hixson, M.; Kleeman, M. J.
2012-02-01
The effect of climate change on population-weighted concentrations of particulate matter (PM) during extreme events was studied using the Parallel Climate Model (PCM), the Weather Research and Forecasting (WRF) model and the UCD/CIT 3-D photochemical air quality model. A "business as usual" (B06.44) global emissions scenario was dynamically downscaled for the entire state of California between the years 2000-2006 and 2047-2053. Air quality simulations were carried out for 1008 days in each of the present-day and future climate conditions using year-2000 emissions. Population-weighted concentrations of PM0.1, PM2.5, and PM10 total mass, component species, and primary source contributions were calculated for California and three air basins: the Sacramento Valley air basin (SV), the San Joaquin Valley air basin (SJV) and the South Coast Air Basin (SoCAB). Results over annual-average periods were contrasted with extreme events. Climate change between 2000 and 2050 did not cause a statistically significant change in annual-average population-weighted PM2.5 mass concentrations within any major sub-region of California in the current study. Climate change did alter the annual-average composition of the airborne particles in the SoCAB, with notable reductions of elemental carbon (EC; -3%) and organic carbon (OC; -3%) due to increased annual-average wind speeds that diluted primary concentrations from gasoline combustion (-3%) and food cooking (-4%). In contrast, climate change caused significant increases in population-weighted PM2.5 mass concentrations in central California during extreme events. The maximum 24-h average PM2.5 concentration experienced by an average person during a ten-year period in the SJV increased by 21% due to enhanced production of secondary particulate matter (manifested as NH4NO3). In general, climate change caused increased stagnation during future extreme pollution events, leading to higher exposure to diesel engine particles (+32%) and wood combustion particles (+14%) when averaging across the population of the entire state. Enhanced stagnation also isolated populations from distant sources such as shipping (-61%) during extreme events. The combination of these factors altered the statewide population-averaged composition of particles during extreme events, with EC increasing by 23%, nitrate increasing by 58%, and sulfate decreasing by 46%.
NASA Astrophysics Data System (ADS)
Mahmud, A.; Hixson, M.; Kleeman, M. J.
2012-08-01
The effect of climate change on population-weighted concentrations of particulate matter (PM) during extreme pollution events was studied using the Parallel Climate Model (PCM), the Weather Research and Forecasting (WRF) model and the UCD/CIT 3-D photochemical air quality model. A "business as usual" (B06.44) global emissions scenario was dynamically downscaled for the entire state of California between the years 2000-2006 and 2047-2053. Air quality simulations were carried out for 1008 days in each of the present-day and future climate conditions using year-2000 emissions. Population-weighted concentrations of PM0.1, PM2.5, and PM10 total mass, component species, and primary source contributions were calculated for California and three air basins: the Sacramento Valley air basin (SV), the San Joaquin Valley air basin (SJV) and the South Coast Air Basin (SoCAB). Results over annual-average periods were contrasted with extreme events. The current study found that the change in annual-average population-weighted PM2.5 mass concentrations due to climate change between 2000 and 2050 within any major sub-region in California was not statistically significant. However, climate change did alter the annual-average composition of the airborne particles in the SoCAB, with notable reductions of elemental carbon (EC; -3%) and organic carbon (OC; -3%) due to increased annual-average wind speeds that diluted primary concentrations from gasoline combustion (-3%) and food cooking (-4%). In contrast, climate change caused significant increases in population-weighted PM2.5 mass concentrations in central California during extreme events. The maximum 24-h average PM2.5 concentration experienced by an average person during a ten-year period in the SJV increased by 21% due to enhanced production of secondary particulate matter (manifested as NH4NO3). In general, climate change caused increased stagnation during future extreme pollution events, leading to higher exposure to diesel engine particles (+32%) and wood combustion particles (+14%) when averaging across the population of the entire state. Enhanced stagnation also isolated populations from distant sources such as shipping (-61%) during extreme events. The combination of these factors altered the statewide population-averaged composition of particles during extreme events, with EC increasing by 23%, nitrate increasing by 58%, and sulfate decreasing by 46%.
Ji, Eun Sook; Park, Kyu-Hyun
2012-12-01
This study was conducted to evaluate methane (CH4) and nitrous oxide (N2O) emissions from livestock agriculture in 16 local administrative districts of Korea from 1990 to 2030. The National Inventory Report used a 3-yr-averaged livestock population, but this study used a 1-yr livestock population to capture yearly emission fluctuations. Extrapolation of the livestock population from 1990 to 2009 was used to forecast the future livestock population from 2010 to 2030. Past (yr 1990 to 2009) and forecasted (yr 2010 to 2030) average enteric CH4 emissions and CH4 and N2O emissions from manure treatment were estimated. For enteric fermentation, forecasted average CH4 emissions from the 16 local administrative districts were estimated to increase by 4%-114% compared to the past average, except for Daejeon (-63%), Seoul (-36%) and Gyeonggi (-7%). For manure treatment, forecasted average CH4 emissions from the 16 local administrative districts were estimated to increase by 3%-124% compared to the past average, except for Daejeon (-77%), Busan (-60%), Gwangju (-48%) and Seoul (-8%). Also for manure treatment, forecasted average N2O emissions from the 16 local administrative districts were estimated to increase by 10%-153% compared to the past average, except for Daejeon (-60%), Seoul (-4.0%), and Gwangju (-0.2%). In terms of carbon dioxide equivalent emissions (CO2-Eq), forecasted average CO2-Eq from the 16 local administrative districts was estimated to increase by 31%-120% compared to the past average, except for Daejeon (-65%), Seoul (-24%), Busan (-18%), Gwangju (-8%) and Gyeonggi (-1%). The decrease in CO2-Eq from the 5 local administrative districts was only 34 kt, which was insignificant compared with the increase of 2,809 kt from the other 11 local administrative districts. Annual growth rates of enteric CH4 emissions and of CH4 and N2O emissions from manure management in Korea from 1990 to 2009 were 1.7%, 2.6%, and 3.2%, respectively. The annual growth rate of total CO2-Eq was 2.2%. Efforts by the local administrative offices to improve the accuracy of activity data are essential to improve GHG inventories. Direct measurements of GHG emissions from enteric fermentation and manure treatment systems will further enhance the accuracy of the GHG data. (Key Words: Greenhouse Gas, Methane, Nitrous Oxide, Carbon Dioxide Equivalent Emission, Climate Change).
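The conversion of CH4 and N2O estimates into CO2-equivalent emissions follows the usual global-warming-potential weighting. The sketch below assumes the IPCC Second Assessment Report 100-yr GWPs (21 for CH4, 310 for N2O), which have commonly been used in national inventories; the abstract does not state which factors were applied, so these values and the district totals are illustrative only.

```python
# Illustrative CO2-equivalent aggregation; GWP factors assumed (IPCC SAR
# 100-yr values), not taken from the study itself.
GWP_CH4 = 21.0
GWP_N2O = 310.0

def co2_equivalent(ch4_kt, n2o_kt):
    """Convert CH4 and N2O emissions (kt of gas) into kt CO2-eq."""
    return ch4_kt * GWP_CH4 + n2o_kt * GWP_N2O

# Toy district totals (kt): enteric CH4, manure CH4, manure N2O
districts = {"A": (10.0, 4.0, 0.6), "B": (3.0, 1.5, 0.2)}
for name, (ent_ch4, man_ch4, man_n2o) in districts.items():
    print(name, co2_equivalent(ent_ch4 + man_ch4, man_n2o), "kt CO2-eq")
```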
Coupling News Sentiment with Web Browsing Data Improves Prediction of Intra-Day Price Dynamics.
Ranco, Gabriele; Bordino, Ilaria; Bormetti, Giacomo; Caldarelli, Guido; Lillo, Fabrizio; Treccani, Michele
2016-01-01
The new digital revolution of big data is deeply changing our capability of understanding society and forecasting the outcome of many social and economic systems. Unfortunately, information can be very heterogeneous in the importance, relevance, and surprise it conveys, severely affecting the predictive power of semantic and statistical methods. Here we show that the aggregation of web users' behavior can be leveraged to overcome this problem in a hard-to-predict complex system, namely the financial market. Specifically, our in-sample analysis shows that the combined use of sentiment analysis of news and browsing activity of users of Yahoo! Finance greatly helps in forecasting intra-day and daily price changes of a set of 100 highly capitalized US stocks traded in the period 2012-2013. Sentiment analysis or browsing activity, when taken alone, have very small or no predictive power. Conversely, when considering a news signal where in a given time interval we compute the average sentiment of the clicked news, weighted by the number of clicks, we show that for nearly 50% of the companies such a signal Granger-causes hourly price returns. Our result indicates a "wisdom-of-the-crowd" effect that makes it possible to exploit users' activity to identify and properly weigh the relevant and surprising news, considerably enhancing the forecasting power of the news sentiment.
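The click-weighted sentiment signal and the Granger-causality check can be sketched schematically as follows. The data are synthetic and the lag choice is arbitrary, so this is only an outline of the analysis, not the authors' pipeline.

```python
# Schematic click-weighted sentiment signal and Granger-causality check
# against hourly returns (synthetic data, illustrative lag choice).
import numpy as np
import pandas as pd
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(0)
n_hours, n_news = 500, 3000
df = pd.DataFrame({
    "hour": rng.integers(0, n_hours, n_news),    # hour in which each news item was clicked
    "s": rng.normal(0, 1, n_news),               # sentiment score of each clicked item
    "c": rng.integers(1, 50, n_news),            # number of clicks on each item
})

num = (df["s"] * df["c"]).groupby(df["hour"]).sum()
den = df["c"].groupby(df["hour"]).sum()
signal = (num / den).reindex(range(n_hours)).fillna(0.0)      # click-weighted average sentiment

returns = 0.05 * signal.shift(1).fillna(0.0) + rng.normal(0, 1, n_hours)   # toy hourly returns
data = pd.DataFrame({"returns": returns.values, "signal": signal.values})
grangercausalitytests(data[["returns", "signal"]], maxlag=2)  # does the signal Granger-cause returns?
```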
Watson, Stella C; Liu, Yan; Lund, Robert B; Gettings, Jenna R; Nordone, Shila K; McMahan, Christopher S; Yabsley, Michael J
2017-01-01
This paper models the prevalence of antibodies to Borrelia burgdorferi in domestic dogs in the United States using climate, geographic, and societal factors. We then use this model to forecast the prevalence of antibodies to B. burgdorferi in dogs for 2016. The data available for this study consists of 11,937,925 B. burgdorferi serologic test results collected at the county level within the 48 contiguous United States from 2011-2015. Using the serologic data, a baseline B. burgdorferi antibody prevalence map was constructed through the use of spatial smoothing techniques after temporal aggregation; i.e., head-banging and Kriging. In addition, several covariates purported to be associated with B. burgdorferi prevalence were collected on the same spatio-temporal granularity, and include forestation, elevation, water coverage, temperature, relative humidity, precipitation, population density, and median household income. A Bayesian spatio-temporal conditional autoregressive (CAR) model was used to analyze these data, for the purposes of identifying significant risk factors and for constructing disease forecasts. The fidelity of the forecasting technique was assessed using historical data, and a Lyme disease forecast for dogs in 2016 was constructed. The correlation between the county level model and baseline B. burgdorferi antibody prevalence estimates from 2011 to 2015 is 0.894, illustrating that the Bayesian spatio-temporal CAR model provides a good fit to these data. The fidelity of the forecasting technique was assessed in the usual fashion; i.e., the 2011-2014 data was used to forecast the 2015 county level prevalence, with comparisons between observed and predicted being made. The weighted (to acknowledge sample size) correlation between 2015 county level observed prevalence and 2015 forecasted prevalence is 0.978. A forecast for the prevalence of B. burgdorferi antibodies in domestic dogs in 2016 is also provided. The forecast presented from this model can be used to alert veterinarians in areas likely to see above average B. burgdorferi antibody prevalence in dogs in the upcoming year. In addition, because dogs and humans can be exposed to ticks in similar habitats, these data may ultimately prove useful in predicting areas where human Lyme disease risk may emerge.
Weight and cost forecasting for advanced manned space vehicles
NASA Technical Reports Server (NTRS)
Williams, Raymond
1989-01-01
A computerized mass and cost estimating methodology for predicting advanced manned space vehicle weights and costs was developed. The user-friendly methodology, designated MERCER (Mass Estimating Relationship/Cost Estimating Relationship), organizes the predictive process according to major vehicle subsystem levels. Design, development, test, evaluation, and flight hardware cost forecasting is treated by the study. This methodology consists of a complete set of mass estimating relationships (MERs), which serve as the control components for the model, and cost estimating relationships (CERs), which use MER output as input. To develop this model, numerous MER and CER studies were surveyed and modified where required. Additionally, relationships were regressed from raw data to accommodate the methodology. The models and formulations which estimated the cost of historical vehicles to within 20 percent of the actual cost were selected. The results of the research, along with components of the MERCER program, are reported. On the basis of the analysis, the following conclusions were established: (1) The cost of a spacecraft is best estimated by summing the cost of individual subsystems; (2) No one cost equation can be used for forecasting the cost of all spacecraft; (3) Spacecraft cost is highly correlated with its mass; (4) No study surveyed contained sufficient formulations to autonomously forecast the cost and weight of the entire advanced manned vehicle spacecraft program; (5) No user-friendly program was found that linked MERs with CERs to produce spacecraft cost; and (6) The group accumulation weight estimation method (summing the estimated weights of the various subsystems) proved to be a useful method for finding the total weight and cost of a spacecraft.
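The MER-to-CER chaining and group-accumulation idea can be illustrated with a toy subsystem table. The power-law forms and coefficients below are placeholders for illustration, not the relationships regressed in the MERCER study.

```python
# Toy illustration of chaining mass-estimating relationships (MERs) into
# cost-estimating relationships (CERs); all coefficients are placeholders.
subsystems = {
    # name: ((MER: mass_kg = a * payload_kg**b), (CER: cost_$M = c * mass_kg**d))
    "structure": ((15.0, 0.85), (0.08, 0.70)),
    "power":     ((6.0,  0.80), (0.15, 0.65)),
    "avionics":  ((2.5,  0.75), (0.40, 0.60)),
}

def vehicle_estimate(payload_kg):
    total_mass, total_cost = 0.0, 0.0
    for name, ((a, b), (c, d)) in subsystems.items():
        mass = a * payload_kg ** b          # subsystem MER
        cost = c * mass ** d                # subsystem CER driven by the MER output
        total_mass += mass
        total_cost += cost
    return total_mass, total_cost           # group-accumulation totals

print(vehicle_estimate(2000.0))             # (total mass kg, total cost $M)
```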
NASA Astrophysics Data System (ADS)
Pokhrel, Prafulla; Wang, Q. J.; Robertson, David E.
2013-10-01
Seasonal streamflow forecasts are valuable for planning and allocation of water resources. In Australia, the Bureau of Meteorology employs a statistical method to forecast seasonal streamflows. The method uses predictors that are related to catchment wetness at the start of a forecast period and to climate during the forecast period. For the latter, a predictor is selected among a number of lagged climate indices as candidates to give the "best" model in terms of model performance in cross validation. This study investigates two strategies for further improvement in seasonal streamflow forecasts. The first is to combine, through Bayesian model averaging, multiple candidate models with different lagged climate indices as predictors, to take advantage of the different predictive strengths of the multiple models. The second strategy is to introduce additional candidate models, using rainfall and sea surface temperature predictions from a global climate model as predictors. This is to take advantage of the direct simulations of various dynamic processes. The results show that combining forecasts from multiple statistical models generally yields more skillful forecasts than using only the best model and appears to moderate the worst forecast errors. The use of rainfall predictions from the dynamical climate model marginally improves the streamflow forecasts when viewed over all the study catchments and seasons, but the use of sea surface temperature predictions provides little additional benefit.
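Combining candidate models through Bayesian model averaging amounts to weighting each model by its cross-validated predictive performance and then averaging the forecasts. The sketch below uses a simple Gaussian-likelihood weighting with synthetic leave-one-out errors, so it is only an outline of the idea rather than the Bureau's scheme; the common scale `sigma` is an illustrative simplification.

```python
import numpy as np

def bma_weights(cv_errors, sigma):
    """Approximate BMA weights from cross-validated forecast errors of each
    candidate model, assuming Gaussian predictive densities with a common
    scale sigma (an illustrative simplification)."""
    loglik = np.array([np.sum(-0.5 * (e / sigma) ** 2) for e in cv_errors])
    w = np.exp(loglik - loglik.max())
    return w / w.sum()

rng = np.random.default_rng(2)
obs = rng.normal(100, 20, 30)                                        # synthetic seasonal streamflow (GL)
cv_errors = [obs - (obs + rng.normal(0, s, 30)) for s in (5, 8, 20)]  # 3 candidate models, toy errors
w = bma_weights(cv_errors, sigma=10.0)
forecasts = np.array([102.0, 95.0, 130.0])                           # each model's next-season forecast
print(w, np.dot(w, forecasts))                                       # combined (weighted-average) forecast
```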
Medium-Range Forecast Skill for Extraordinary Arctic Cyclones in Summer of 2008-2016
NASA Astrophysics Data System (ADS)
Yamagami, Akio; Matsueda, Mio; Tanaka, Hiroshi L.
2018-05-01
Arctic cyclones (ACs) are a severe atmospheric phenomenon that affects the Arctic environment. This study assesses the forecast skill of five leading operational medium-range ensemble forecasts for 10 extraordinary ACs that occurred in summer during 2008-2016. Average existence probability of the predicted ACs was >0.9 at lead times of ≤3.5 days. Average central position error of the predicted ACs was less than half of the mean radius of the 10 ACs (469.1 km) at lead times of 2.5-4.5 days. Average central pressure error of the predicted ACs was 5.5-10.7 hPa at such lead times. Therefore, the operational ensemble prediction systems generally predict the position of ACs within 469.1 km 2.5-4.5 days before they mature. The forecast skill for the extraordinary ACs is lower than that for midlatitude cyclones in the Northern Hemisphere but similar to that in the Southern Hemisphere.
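The central position error reported above is a great-circle distance between the predicted and analysed cyclone centres; a standard haversine computation is sketched below (generic formula with toy coordinates, not the authors' code).

```python
import numpy as np

def great_circle_km(lat1, lon1, lat2, lon2, radius_km=6371.0):
    """Haversine great-circle distance (km) between two points given in degrees."""
    p1, p2 = np.radians(lat1), np.radians(lat2)
    dphi = p2 - p1
    dlmb = np.radians(lon2 - lon1)
    a = np.sin(dphi / 2) ** 2 + np.cos(p1) * np.cos(p2) * np.sin(dlmb / 2) ** 2
    return 2 * radius_km * np.arcsin(np.sqrt(a))

# Central position error of a predicted Arctic cyclone vs. the analysis (toy values)
print(great_circle_km(78.0, 160.0, 80.5, 150.0), "km")
```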
NASA Astrophysics Data System (ADS)
Fox, Neil I.; Micheas, Athanasios C.; Peng, Yuqiang
2016-07-01
This paper introduces the use of Bayesian full Procrustes shape analysis in object-oriented meteorological applications. In particular, the Procrustes methodology is used to generate mean forecast precipitation fields from a set of ensemble forecasts. This approach has advantages over other ensemble averaging techniques in that it can produce a forecast that retains the morphological features of the precipitation structures and present the range of forecast outcomes represented by the ensemble. The production of the ensemble mean avoids the problems of smoothing that result from simple pixel or cell averaging, while producing credible sets that retain information on ensemble spread. Also in this paper, the full Bayesian Procrustes scheme is used as an object verification tool for precipitation forecasts. This is an extension of a previously presented Procrustes shape analysis based verification approach into a full Bayesian format designed to handle the verification of precipitation forecasts that match objects from an ensemble of forecast fields to a single truth image. The methodology is tested on radar reflectivity nowcasts produced in the Warning Decision Support System - Integrated Information (WDSS-II) by varying parameters in the K-means cluster tracking scheme.
NASA Astrophysics Data System (ADS)
Rheinheimer, David E.; Bales, Roger C.; Oroza, Carlos A.; Lund, Jay R.; Viers, Joshua H.
2016-05-01
We assessed the potential value of hydrologic forecasting improvements for a snow-dominated high-elevation hydropower system in the Sierra Nevada of California, using a hydropower optimization model. To mimic different forecasting skill levels for inflow time series, rest-of-year inflows from regression-based forecasts were blended in different proportions with representative inflows from a spatially distributed hydrologic model. The statistical approach mimics the simpler, historical forecasting approach that is still widely used. Revenue was calculated using historical electricity prices, with perfect price foresight assumed. With current infrastructure and operations, perfect hydrologic forecasts increased annual hydropower revenue by $0.14 to $1.6 million, with lower values in dry years and higher values in wet years, or about $0.8 million (1.2%) on average, representing overall willingness-to-pay for perfect information. A second sensitivity analysis found a wider range of annual revenue gain or loss using different skill levels in snow measurement in the regression-based forecast, mimicking expected declines in skill as the climate warms and historical snow measurements no longer represent current conditions. The value of perfect forecasts was insensitive to storage capacity for small and large reservoirs, relative to average inflow, and modestly sensitive to storage capacity with medium (current) reservoir storage. The value of forecasts was highly sensitive to powerhouse capacity, particularly for the range of capacities in the northern Sierra Nevada. The approach can be extended to multireservoir, multipurpose systems to help guide investments in forecasting.
Wildfire suppression cost forecasts from the US Forest Service
Karen L. Abt; Jeffrey P. Prestemon; Krista M. Gebert
2009-01-01
The US Forest Service and other land-management agencies seek better tools for anticipating future expenditures for wildfire suppression. We developed regression models for forecasting US Forest Service suppression spending at 1-, 2-, and 3-year lead times. We compared these models to another readily available forecast model, the 10-year moving average model,...
ERIC Educational Resources Information Center
Bobbitt, Larry; Otto, Mark
Three Autoregressive Integrated Moving Average (ARIMA) forecast procedures for Census Bureau X-11 concurrent seasonal adjustment were empirically tested. Forty time series from three Census Bureau economic divisions (business, construction, and industry) were analyzed. Forecasts were obtained from fitted seasonal ARIMA models augmented with…
Short-term forecasts gain in accuracy. [Regression technique using "Box-Jenkins" analysis]
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
Box-Jenkins time-series models offer accuracy for short-term forecasts that compare with large-scale macroeconomic forecasts. Utilities need to be able to forecast peak demand in order to plan their generating, transmitting, and distribution systems. This new method differs from conventional models by not assuming specific data patterns, but by fitting available data into a tentative pattern on the basis of auto-correlations. Three types of models (autoregressive, moving average, or mixed autoregressive/moving average) can be used according to which provides the most appropriate combination of autocorrelations and related derivatives. Major steps in choosing a model are identifying potential models, estimating the parameters of the problem, and running a diagnostic check to see if the model fits the parameters. The Box-Jenkins technique is well suited for seasonal patterns, which makes it possible to have as short as hourly forecasts of load demand. With accuracy up to two years, the method will allow electricity price-elasticity forecasting that can be applied to facility planning and rate design. (DCK)
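The identify/estimate/diagnose cycle described above might look roughly like the following with modern tooling; the hourly load series and candidate orders are synthetic placeholders, and the Ljung-Box call assumes a recent statsmodels version that returns a DataFrame.

```python
# Sketch of the Box-Jenkins identify / estimate / diagnose cycle on a
# synthetic hourly load series (orders and data are illustrative).
import numpy as np
from statsmodels.tsa.arima.model import ARIMA
from statsmodels.stats.diagnostic import acorr_ljungbox

rng = np.random.default_rng(3)
t = np.arange(24 * 60)
load = 100 + 20 * np.sin(2 * np.pi * t / 24) + rng.normal(0, 3, t.size)  # toy hourly demand (MW)

candidates = [(1, 0, 0), (0, 0, 1), (1, 0, 1)]                 # AR, MA, mixed tentative models
fits = {order: ARIMA(load, order=order).fit() for order in candidates}   # estimation step
order, fit = min(fits.items(), key=lambda kv: kv[1].aic)       # pick the best-fitting candidate
lb = acorr_ljungbox(fit.resid, lags=[24])                      # diagnostic check on residuals
print("chosen order:", order, "AIC:", round(fit.aic, 1),
      "Ljung-Box p-value:", float(lb["lb_pvalue"].iloc[0]))
```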
A Case Study on a Combination NDVI Forecasting Model Based on the Entropy Weight Method
DOE Office of Scientific and Technical Information (OSTI.GOV)
Huang, Shengzhi; Ming, Bo; Huang, Qiang
It is critically meaningful to accurately predict NDVI (Normalized Difference Vegetation Index), which helps guide regional ecological remediation and environmental managements. In this study, a combination forecasting model (CFM) was proposed to improve the performance of NDVI predictions in the Yellow River Basin (YRB) based on three individual forecasting models, i.e., the Multiple Linear Regression (MLR), Artificial Neural Network (ANN), and Support Vector Machine (SVM) models. The entropy weight method was employed to determine the weight coefficient for each individual model depending on its predictive performance. Results showed that: (1) ANN exhibits the highest fitting capability among the four forecasting models in the calibration period, whilst its generalization ability becomes weak in the validation period; MLR has a poor performance in both calibration and validation periods; the predicted results of CFM in the calibration period have the highest stability; (2) CFM generally outperforms all individual models in the validation period, and can improve the reliability and stability of predicted results through combining the strengths while reducing the weaknesses of individual models; (3) the performances of all forecasting models are better in dense vegetation areas than in sparse vegetation areas.
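The entropy weight step, which turns each individual model's predictive performance into a combination coefficient, can be sketched as follows. The performance matrix is synthetic and the exact indicator and normalisation used in the paper may differ.

```python
import numpy as np

def entropy_weights(perf):
    """Entropy weight method: perf is an (n_samples, n_models) matrix of a
    positive performance indicator for each individual model. Models whose
    indicator varies more across samples (lower entropy) receive larger weights."""
    p = perf / perf.sum(axis=0, keepdims=True)
    p = np.clip(p, 1e-12, None)
    e = -(p * np.log(p)).sum(axis=0) / np.log(perf.shape[0])   # entropy per model
    d = 1.0 - e                                                # degree of diversification
    return d / d.sum()

rng = np.random.default_rng(4)
perf = rng.uniform(0.6, 0.95, size=(50, 3))       # synthetic accuracies for MLR, ANN, SVM
w = entropy_weights(perf)
member_forecasts = np.array([0.42, 0.47, 0.45])   # toy NDVI predictions from the 3 models
print(w, float(np.dot(w, member_forecasts)))      # combined CFM-style forecast
```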
A stochastic post-processing method for solar irradiance forecasts derived from NWPs models
NASA Astrophysics Data System (ADS)
Lara-Fanego, V.; Pozo-Vazquez, D.; Ruiz-Arias, J. A.; Santos-Alamillos, F. J.; Tovar-Pescador, J.
2010-09-01
Solar irradiance forecasting is an important area of research for the future of solar-based renewable energy systems. Numerical Weather Prediction models (NWPs) have proved to be a valuable tool for solar irradiance forecasting with lead times up to a few days. Nevertheless, these models show low skill in forecasting the solar irradiance under cloudy conditions. Additionally, climatic (averaged over seasons) aerosol loadings are usually considered in these models, leading to considerable errors in the Direct Normal Irradiance (DNI) forecasts during high aerosol load conditions. In this work we propose a post-processing method for the Global Irradiance (GHI) and DNI forecasts derived from NWPs. In particular, the method is based on the use of Autoregressive Moving Average with External Explanatory Variables (ARMAX) stochastic models. These models are applied to the residuals of the NWP forecasts and use as external variables the measured cloud fraction and aerosol loading of the day previous to the forecast. The method is evaluated on a set of one-month-long three-day-ahead forecasts of GHI and DNI, obtained with the WRF mesoscale atmospheric model, for several locations in Andalusia (Southern Spain). The cloud fraction is derived from MSG satellite estimates and the aerosol loading from MODIS platform estimates. Both sources of information are readily available at the time of the forecast. Results showed a considerable improvement in the forecasting skill of the WRF model using the proposed post-processing method. In particular, the relative improvement (in terms of the RMSE) for DNI during summer is about 20%. A similar value is obtained for GHI during the winter.
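The post-processing idea, fitting an ARMAX model to the NWP irradiance residuals with previous-day cloud fraction and aerosol load as exogenous inputs, can be sketched roughly as below. The data are synthetic and the model order is illustrative, so this is only a schematic of the approach.

```python
# Rough sketch of ARMAX post-processing of NWP irradiance forecast residuals
# (synthetic data; order and predictors illustrative, not the paper's setup).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(5)
n = 300
cloud = rng.uniform(0, 1, n)                     # previous-day cloud fraction (satellite)
aerosol = rng.uniform(0, 0.8, n)                 # previous-day aerosol optical depth (MODIS)
resid = 80 * cloud + 120 * aerosol + rng.normal(0, 20, n)   # toy GHI forecast residual (W/m2)

exog = np.column_stack([cloud, aerosol])
armax = sm.tsa.SARIMAX(resid, exog=exog, order=(1, 0, 1)).fit(disp=False)

new_exog = np.array([[0.3, 0.2]])                # tomorrow's known external variables
correction = armax.forecast(steps=1, exog=new_exog)
print("predicted residual correction:", float(correction[0]), "W/m2")
```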
GloFAS-Seasonal: Operational Seasonal Ensemble River Flow Forecasts at the Global Scale
NASA Astrophysics Data System (ADS)
Emerton, Rebecca; Zsoter, Ervin; Smith, Paul; Salamon, Peter
2017-04-01
Seasonal hydrological forecasting has potential benefits for many sectors, including agriculture, water resources management and humanitarian aid. At present, no global scale seasonal hydrological forecasting system exists operationally; although smaller scale systems have begun to emerge around the globe over the past decade, a system providing consistent global scale seasonal forecasts would be of great benefit in regions where no other forecasting system exists, and to organisations operating at the global scale, such as disaster relief. We present here a new operational global ensemble seasonal hydrological forecast, currently under development at ECMWF as part of the Global Flood Awareness System (GloFAS). The proposed system, which builds upon the current version of GloFAS, takes the long-range forecasts from the ECMWF System4 ensemble seasonal forecast system (which incorporates the HTESSEL land surface scheme) and uses this runoff as input to the Lisflood routing model, producing a seasonal river flow forecast out to 4 months lead time, for the global river network. The seasonal forecasts will be evaluated using the global river discharge reanalysis, and observations where available, to determine the potential value of the forecasts across the globe. The seasonal forecasts will be presented as a new layer in the GloFAS interface, which will provide a global map of river catchments, indicating whether the catchment-averaged discharge forecast is showing abnormally high or low flows during the 4-month lead time. Each catchment will display the corresponding forecast as an ensemble hydrograph of the weekly-averaged discharge forecast out to 4 months, with percentile thresholds shown for comparison with the discharge climatology. The forecast visualisation is based on a combination of the current medium-range GloFAS forecasts and the operational EFAS (European Flood Awareness System) seasonal outlook, and aims to effectively communicate the nature of a seasonal outlook while providing useful information to users and partners. We demonstrate the first version of an operational GloFAS seasonal outlook, outlining the model set-up and presenting a first look at the seasonal forecasts that will be displayed in the GloFAS interface, and discuss the initial results of the forecast evaluation.
Neural network versus classical time series forecasting models
NASA Astrophysics Data System (ADS)
Nor, Maria Elena; Safuan, Hamizah Mohd; Shab, Noorzehan Fazahiyah Md; Asrul, Mohd; Abdullah, Affendi; Mohamad, Nurul Asmaa Izzati; Lee, Muhammad Hisyam
2017-05-01
Artificial neural networks (ANN) have an advantage in time series forecasting, as they have the potential to solve complex forecasting problems. This is because ANN is a data-driven approach that can be trained to map past values of a time series. In this study, the forecast performance of a neural network and a classical time series forecasting method, namely the seasonal autoregressive integrated moving average model, was compared using gold price data. Moreover, the effect of different data preprocessing on the forecast performance of the neural network was examined. The forecast accuracy was evaluated using mean absolute deviation, root mean square error and mean absolute percentage error. It was found that the ANN produced the most accurate forecast when Box-Cox transformation was used as data preprocessing.
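The Box-Cox preprocessing step that gave the most accurate ANN forecasts in this comparison can be sketched generically as below; the price series is synthetic and the network itself is omitted.

```python
import numpy as np
from scipy.stats import boxcox
from scipy.special import inv_boxcox

rng = np.random.default_rng(6)
price = 1200 * np.exp(np.cumsum(rng.normal(0.0005, 0.01, 500)))   # synthetic gold-price-like series

transformed, lam = boxcox(price)          # stabilise variance before training the network
# ... train the neural network on `transformed` values here ...
pred_transformed = transformed[-1]        # stand-in for a model prediction
pred_price = inv_boxcox(pred_transformed, lam)   # map the forecast back to the price scale
print(lam, pred_price)
```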
Operational Earthquake Forecasting of Aftershocks for New England
NASA Astrophysics Data System (ADS)
Ebel, J.; Fadugba, O. I.
2015-12-01
Although the forecasting of mainshocks is not possible, recent research demonstrates that probabilistic forecasts of expected aftershock activity following moderate and strong earthquakes are possible. Previous work has shown that aftershock sequences in intraplate regions behave similarly to those in California, and thus the operational aftershock forecasting methods that are currently employed in California can be adopted for use in areas of the eastern U.S. such as New England. In our application, immediately after a felt earthquake in New England, a forecast of expected aftershock activity for the next 7 days will be generated based on a generic aftershock activity model. Approximately 24 hours after the mainshock, the parameters of the aftershock model will be updated using the aftershock activity observed up to that point in time, and a new forecast of expected aftershock activity for the next 7 days will be issued. The forecast will estimate the average number of weak, felt aftershocks and the average expected number of aftershocks based on the aftershock statistics of past New England earthquakes. The forecast also will estimate the probability that an earthquake stronger than the mainshock will take place during the next 7 days. The aftershock forecast will specify the expected aftershock locations as well as the areas over which aftershocks of different magnitudes could be felt. The system will use web pages, email and text messages to distribute the aftershock forecasts. For protracted aftershock sequences, new forecasts will be issued on a regular basis, such as weekly. Initially, the distribution system of the aftershock forecasts will be limited, but later it will be expanded as experience with and confidence in the system grows.
Handique, Bijoy K; Khan, Siraj A; Mahanta, J; Sudhakar, S
2014-09-01
Japanese encephalitis (JE) is one of the dreaded mosquito-borne viral diseases, mostly prevalent in South Asian countries including India. Early warning of the disease in terms of disease intensity is crucial for taking adequate and appropriate intervention measures. The present study was carried out in Dibrugarh district in the state of Assam, located in the northeastern region of India, to assess the accuracy of selected forecasting methods based on historical morbidity patterns of JE incidence during the past 22 years (1985-2006). Four selected forecasting methods, viz. seasonal average (SA), seasonal adjustment with last three observations (SAT), a modified method adjusting for long-term and cyclic trend (MSAT), and autoregressive integrated moving average (ARIMA), were employed to assess the accuracy of each of the forecasting methods. The forecasting methods were validated for five consecutive years from 2007-2012 and the accuracy of each method was assessed. The forecasting method utilising seasonal adjustment with long-term and cyclic trend emerged as the best forecasting method among the four selected methods and outperformed even the statistically more advanced ARIMA method. The peak of the disease incidence could effectively be predicted with all the methods, but there are significant variations in the magnitude of forecast errors among the selected methods. As expected, variation in forecasts at the primary health centre (PHC) level is wide compared to that of district-level forecasts. The study showed that the adopted forecasting techniques could reasonably forecast the intensity of JE cases at PHC level without considering external variables. The results indicate that an understanding of the long-term and cyclic trend of the disease intensity will improve the accuracy of the forecasts, but there is a need to make the forecast models more robust to explain sudden variations in disease intensity through detailed analysis of parasite and host population dynamics.
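The simplest of the four methods, the seasonal average (SA), can be sketched as follows on a toy weekly series; the seasonal-adjustment and trend-corrected variants (SAT, MSAT) build on this baseline.

```python
import numpy as np

def seasonal_average_forecast(history, period=52):
    """Forecast each season (e.g. week of year) as the average of the same
    season over all complete past years -- the 'SA' baseline method."""
    history = np.asarray(history, dtype=float)
    n_years = len(history) // period
    by_year = history[: n_years * period].reshape(n_years, period)
    return by_year.mean(axis=0)

rng = np.random.default_rng(7)
weeks = np.arange(10 * 52)
je_cases = np.maximum(0, 5 + 15 * np.exp(-0.5 * ((weeks % 52 - 30) / 4.0) ** 2)
                         + rng.normal(0, 2, weeks.size))       # synthetic weekly JE cases, seasonal peak
print(seasonal_average_forecast(je_cases).round(1)[25:35])      # forecast around the seasonal peak
```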
NASA Astrophysics Data System (ADS)
Elsberry, Russell L.; Jordan, Mary S.; Vitart, Frederic
2010-05-01
The objective of this study is to provide evidence of predictability on intraseasonal time scales (10-30 days) for western North Pacific tropical cyclone formation and subsequent tracks using the 51-member ECMWF 32-day forecasts made once a week from 5 June through 25 December 2008. Ensemble storms are defined by grouping ensemble member vortices whose positions are within a specified separation distance that is equal to 180 n mi at the initial forecast time t and increases linearly to 420 n mi at Day 14 and then is constant. The 12-h track segments are calculated with a Weighted-Mean Vector Motion technique in which the weighting factor is inversely proportional to the distance from the endpoint of the previous 12-h motion vector. Seventy-six percent of the ensemble storms had five or fewer member vortices. On average, the ensemble storms begin 2.5 days before the first entry of the Joint Typhoon Warning Center (JTWC) best-track file, tend to translate too slowly in the deep tropics, and persist for longer periods over land. A strict objective matching technique with the JTWC storms is combined with a second subjective procedure that is then applied to identify nearby ensemble storms that would indicate a greater likelihood of a tropical cyclone developing in that region with that track orientation. The ensemble storms identified in the ECMWF 32-day forecasts provided guidance on intraseasonal timescales of the formations and tracks of the three strongest typhoons and two other typhoons, but not for two early season typhoons and the late season Dolphin. Four strong tropical storms were predicted consistently over Week-1 through Week-4, as was one weak tropical storm. Two other weak tropical storms, three tropical cyclones that developed from precursor baroclinic systems, and three other tropical depressions were not predicted on intraseasonal timescales. At least for the strongest tropical cyclones during the peak season, the ECMWF 32-day ensemble provides guidance of formation and tracks on 10-30 day timescales.
Kusev, Petko; van Schaik, Paul; Tsaneva-Atanasova, Krasimira; Juliusson, Asgeir; Chater, Nick
2018-01-01
When attempting to predict future events, people commonly rely on historical data. One psychological characteristic of judgmental forecasting of time series, established by research, is that when people make forecasts from series, they tend to underestimate future values for upward trends and overestimate them for downward ones, so-called trend-damping (modeled by anchoring on, and insufficient adjustment from, the average of recent time series values). Events in a time series can be experienced sequentially (dynamic mode), or they can also be retrospectively viewed simultaneously (static mode), not experienced individually in real time. In one experiment, we studied the influence of presentation mode (dynamic and static) on two sorts of judgment: (a) predictions of the next event (forecast) and (b) estimation of the average value of all the events in the presented series (average estimation). Participants' responses in dynamic mode were anchored on more recent events than in static mode for all types of judgment but with different consequences; hence, dynamic presentation improved prediction accuracy, but not estimation. These results are not anticipated by existing theoretical accounts; we develop and present an agent-based model-the adaptive anchoring model (ADAM)-to account for the difference between processing sequences of dynamically and statically presented stimuli (visually presented data). ADAM captures how variation in presentation mode produces variation in responses (and the accuracy of these responses) in both forecasting and judgment tasks. ADAM's model predictions for the forecasting and judgment tasks fit better with the response data than a linear-regression time series model. Moreover, ADAM outperformed autoregressive-integrated-moving-average (ARIMA) and exponential-smoothing models, while neither of these models accounts for people's responses on the average estimation task. Copyright © 2017 The Authors. Cognitive Science published by Wiley Periodicals, Inc. on behalf of Cognitive Science Society.
A new method for determining the optimal lagged ensemble
DelSole, T.; Tippett, M. K.; Pegion, K.
2017-01-01
We propose a general methodology for determining the lagged ensemble that minimizes the mean square forecast error. The MSE of a lagged ensemble is shown to depend only on a quantity called the cross-lead error covariance matrix, which can be estimated from a short hindcast data set and parameterized in terms of analytic functions of time. The resulting parameterization allows the skill of forecasts to be evaluated for an arbitrary ensemble size and initialization frequency. Remarkably, the parameterization also can estimate the MSE of a burst ensemble simply by taking the limit of an infinitely small interval between initialization times. This methodology is applied to forecasts of the Madden Julian Oscillation (MJO) from version 2 of the Climate Forecast System (CFSv2). For leads greater than a week, little improvement is found in the MJO forecast skill when ensembles larger than 5 days are used or initializations more frequent than 4 times per day. We find that if the initialization frequency is too infrequent, important structures of the lagged error covariance matrix are lost. Lastly, we demonstrate that the forecast error at leads ≥10 days can be reduced by optimally weighting the lagged ensemble members. The weights are shown to depend only on the cross-lead error covariance matrix. While the methodology developed here is applied to CFSv2, the technique can be easily adapted to other forecast systems. PMID:28580050
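The optimal weights in the final result follow the standard minimum-variance solution for weights constrained to sum to one, w = C^{-1}1 / (1^T C^{-1} 1), where C is the cross-lead error covariance matrix. A small numerical sketch with a toy covariance matrix is given below; it illustrates the formula rather than the CFSv2 application.

```python
import numpy as np

def optimal_lag_weights(C):
    """Minimum-MSE weights for a weighted-average lagged ensemble, given the
    cross-lead error covariance matrix C (weights constrained to sum to one)."""
    ones = np.ones(C.shape[0])
    Cinv_1 = np.linalg.solve(C, ones)
    return Cinv_1 / ones.dot(Cinv_1)

# Toy cross-lead error covariance: error variance grows with lead time and
# errors are correlated across lead times.
sd = np.array([1.0, 1.3, 1.7, 2.2])
corr = 0.8 ** np.abs(np.subtract.outer(np.arange(4), np.arange(4)))
C = np.outer(sd, sd) * corr
w = optimal_lag_weights(C)
print(w, w.sum())   # older, less skilful members generally receive smaller weight
```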
7 CFR 1710.205 - Minimum approval requirements for all load forecasts.
Code of Federal Regulations, 2014 CFR
2014-01-01
... computer software applications. RUS will evaluate borrower load forecasts for readability, understanding..., distribution costs, other systems costs, average revenue per kWh, and inflation. Also, a borrower's engineering...
Forecasting asthma-related hospital admissions in London using negative binomial models.
Soyiri, Ireneous N; Reidpath, Daniel D; Sarran, Christophe
2013-05-01
Health forecasting can improve health service provision and individual patient outcomes. Environmental factors are known to impact chronic respiratory conditions such as asthma, but little is known about the extent to which these factors can be used for forecasting. Using weather, air quality and hospital asthma admissions in London (2005-2006), two related negative binomial models were developed and compared with a naive seasonal model. In the first approach, predictive forecasting models were fitted with 7-day averages of each potential predictor, and then a subsequent multivariable model was constructed. In the second strategy, an exhaustive search of the best-fitting models among possible combinations of lags (0-14 days) of all the environmental effects on asthma admissions was conducted. Three models were considered: a base model (seasonal effects), contrasted with a 7-day average model and a selected-lags model (weather and air quality effects). Season is the best predictor of asthma admissions. The 7-day average and seasonal models were trivial to implement. The selected-lags model was computationally intensive, but of no real value over the much more easily implemented models. Seasonal factors can predict daily hospital asthma admissions in London, and there is little evidence that additional weather and air quality information would add to forecast accuracy.
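A minimal version of the 7-day-average negative binomial approach could look like the following. The data are synthetic, and the dispersion parameter and predictor set are illustrative, not the fitted London model.

```python
# Minimal sketch of a negative binomial model for daily asthma admissions
# driven by 7-day averages of weather/air-quality predictors (synthetic data).
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(8)
days = pd.date_range("2005-01-01", "2006-12-31", freq="D")
temp = 10 + 8 * np.sin(2 * np.pi * days.dayofyear / 365) + rng.normal(0, 2, len(days))
pm10 = np.maximum(5, 25 + rng.normal(0, 8, len(days)))

X = pd.DataFrame({
    "temp7": pd.Series(temp).rolling(7, min_periods=1).mean(),       # 7-day average temperature
    "pm10_7": pd.Series(pm10).rolling(7, min_periods=1).mean(),      # 7-day average PM10
    "winter": days.month.isin([12, 1, 2]).astype(float),             # crude seasonal term
})
mu = np.exp(2.0 + 0.3 * X["winter"] - 0.01 * X["temp7"] + 0.005 * X["pm10_7"])
admissions = rng.poisson(mu)                                         # toy daily counts

model = sm.GLM(admissions, sm.add_constant(X),
               family=sm.families.NegativeBinomial(alpha=0.5))       # alpha is an assumed dispersion
print(model.fit().summary().tables[1])
```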
Cao, Han; Wang, Jing; Li, Yichen; Li, Dongyang; Guo, Jin; Hu, Yifei; Meng, Kai; He, Dian; Liu, Bin; Liu, Zheng; Qi, Han; Zhang, Ling
2017-09-18
To analyse trends in mortality and causes of death among children aged under 5 years in Beijing, China between 1992 and 2015 and to forecast under-5 mortality rates (U5MRs) for the period 2016-2020. An entire population-based epidemiological study was conducted. Data collection was based on the Child Death Reporting Card of the Beijing Under-5 Mortality Rate Surveillance Network. Trends in mortality and leading causes of death were analysed using the χ 2 test and SPSS 19.0 software. An autoregressive integrated moving average (ARIMA) model was fitted to forecast U5MRs between 2016 and 2020 using the EViews 8.0 software. Mortality in neonates, infants and children aged under 5 years decreased by 84.06%, 80.04% and 80.17% from 1992 to 2015, respectively. However, the U5MR increased by 7.20% from 2013 to 2015. Birth asphyxia, congenital heart disease, preterm/low birth weight and other congenital abnormalities comprised the top five causes of death. The greatest, most rapid reduction was that of pneumonia by 92.26%, with an annual average rate of reduction of 10.53%. The distribution of causes of death differed among children of different ages. Accidental asphyxia and sepsis were among the top five causes of death in children aged 28 days to 1 year and accident was among the top five causes in children aged 1-4 years. The U5MRs in Beijing are projected to be 2.88‰, 2.87‰, 2.90‰, 2.97‰ and 3.09‰ for the period 2016-2020, based on the predictive model. Beijing has made considerable progress in reducing U5MRs from 1992 to 2015. However, U5MRs could show a slight upward trend from 2016 to 2020. Future considerations for child healthcare include the management of birth asphyxia, congenital heart disease, preterm/low birth weight and other congenital abnormalities. Specific preventative measures should be implemented for children of various age groups.
Neural net forecasting for geomagnetic activity
NASA Technical Reports Server (NTRS)
Hernandez, J. V.; Tajima, T.; Horton, W.
1993-01-01
We use neural nets to construct nonlinear models to forecast the AL index given solar wind and interplanetary magnetic field (IMF) data. We follow two approaches: (1) the state space reconstruction approach, which is a nonlinear generalization of autoregressive-moving average models (ARMA) and (2) the nonlinear filter approach, which reduces to a moving average model (MA) in the linear limit. The database used here is that of Bargatze et al. (1985).
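The first approach, state-space reconstruction, amounts to a time-delay embedding of the solar-wind input and AL history followed by a nonlinear map. A generic sketch with a small neural-network regressor is shown below; the data are synthetic (not the Bargatze et al. database) and the embedding dimensions are illustrative.

```python
# Generic time-delay (state-space reconstruction) forecast of a geomagnetic
# index from a driver series (synthetic data; embedding sizes illustrative).
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(9)
n = 3000
vbz = rng.normal(0, 1, n)                      # toy solar-wind coupling input
al = np.zeros(n)
for t in range(1, n):                          # toy nonlinear response standing in for AL
    al[t] = 0.9 * al[t - 1] - 2.0 * max(vbz[t - 1], 0) ** 1.5 + rng.normal(0, 0.1)

m_in, m_out = 5, 3                             # embedding dimensions for input and output histories
X, y = [], []
for t in range(max(m_in, m_out), n - 1):
    X.append(np.r_[vbz[t - m_in:t], al[t - m_out:t]])
    y.append(al[t])
X, y = np.array(X), np.array(y)

net = MLPRegressor(hidden_layer_sizes=(16,), max_iter=500, random_state=0).fit(X[:-500], y[:-500])
print("hold-out correlation:", np.corrcoef(net.predict(X[-500:]), y[-500:])[0, 1])
```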
NASA Technical Reports Server (NTRS)
Barrett, Joe, III; Short, David; Roeder, William
2008-01-01
The expected peak wind speed for the day is an important element in the daily 24-Hour and Weekly Planning Forecasts issued by the 45th Weather Squadron (45 WS) for planning operations at Kennedy Space Center (KSC) and Cape Canaveral Air Force Station (CCAFS). The morning outlook for peak speeds also begins the warning decision process for gusts ≥ 35 kt, ≥ 50 kt, and ≥ 60 kt from the surface to 300 ft. The 45 WS forecasters have indicated that peak wind speeds are a challenging parameter to forecast during the cool season (October-April). The 45 WS requested that the Applied Meteorology Unit (AMU) develop a tool to help them forecast the speed and timing of the daily peak and average wind, from the surface to 300 ft on KSC/CCAFS during the cool season. The tool must only use data available by 1200 UTC to support the issue time of the Planning Forecasts. Based on observations from the KSC/CCAFS wind tower network, surface observations from the Shuttle Landing Facility (SLF), and CCAFS upper-air soundings from the cool season months of October 2002 to February 2007, the AMU created multiple linear regression equations to predict the timing and speed of the daily peak wind speed, as well as the background average wind speed. Several possible predictors were evaluated, including persistence, the temperature inversion depth, strength, and wind speed at the top of the inversion, wind gust factor (ratio of peak wind speed to average wind speed), synoptic weather pattern, occurrence of precipitation at the SLF, and strongest wind in the lowest 3000 ft, 4000 ft, or 5000 ft. Six synoptic patterns were identified: 1) surface high near or over FL, 2) surface high north or east of FL, 3) surface high south or west of FL, 4) surface front approaching FL, 5) surface front across central FL, and 6) surface front across south FL. The following six predictors were selected: 1) inversion depth, 2) inversion strength, 3) wind gust factor, 4) synoptic weather pattern, 5) occurrence of precipitation at the SLF, and 6) strongest wind in the lowest 3000 ft. The forecast tool was developed as a graphical user interface with Microsoft Excel to help the forecaster enter the variables, and run the appropriate regression equations. Based on the forecaster's input and regression equations, a forecast of the day's peak and average wind is generated and displayed. The application also outputs the probability that the peak wind speed will be ≥ 35 kt, 50 kt, and 60 kt.
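The regression component of such a tool can be sketched as an ordinary multiple linear regression of peak wind speed on the listed predictors. The data and coefficients below are synthetic placeholders, not the AMU's equations, and the categorical synoptic-pattern predictor (which would enter as dummy variables) is omitted for brevity.

```python
# Sketch of a multiple linear regression for daily peak wind speed from
# morning predictors (synthetic data; not the AMU's actual equations).
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(10)
n = 400
inv_depth = rng.uniform(100, 1500, n)        # temperature inversion depth (ft)
inv_strength = rng.uniform(0, 6, n)          # inversion strength (deg C)
gust_factor = rng.uniform(1.2, 1.8, n)       # ratio of peak to average wind speed
low_lvl_wind = rng.uniform(5, 45, n)         # strongest wind in lowest 3000 ft (kt)
precip = rng.integers(0, 2, n)               # precipitation at the SLF (0/1)

peak = (0.6 * low_lvl_wind + 4 * gust_factor + 0.8 * inv_strength
        - 0.002 * inv_depth + 2 * precip + rng.normal(0, 3, n))   # toy "observed" peak wind

X = np.column_stack([inv_depth, inv_strength, gust_factor, low_lvl_wind, precip])
reg = LinearRegression().fit(X, peak)
today = np.array([[800, 2.5, 1.5, 30, 0]])   # this morning's predictor values
print("forecast peak wind (kt):", float(reg.predict(today)[0]))
```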
Resolution of Probabilistic Weather Forecasts with Application in Disease Management.
Hughes, G; McRoberts, N; Burnett, F J
2017-02-01
Predictive systems in disease management often incorporate weather data among the disease risk factors, and sometimes this comes in the form of forecast weather data rather than observed weather data. In such cases, it is useful to have an evaluation of the operational weather forecast, in addition to the evaluation of the disease forecasts provided by the predictive system. Typically, weather forecasts and disease forecasts are evaluated using different methodologies. However, the information theoretic quantity expected mutual information provides a basis for evaluating both kinds of forecast. Expected mutual information is an appropriate metric for the average performance of a predictive system over a set of forecasts. Both relative entropy (a divergence, measuring information gain) and specific information (an entropy difference, measuring change in uncertainty) provide a basis for the assessment of individual forecasts.
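Expected mutual information between a categorical forecast and the corresponding observation can be computed directly from their joint contingency table. The sketch below uses a toy 2x2 table and natural-log units; it illustrates the general metric, not the specific forecasts evaluated in the paper.

```python
import numpy as np

def expected_mutual_information(joint_counts):
    """Mutual information (nats) between forecasts and observations, computed
    from the joint table: sum over cells of p(f,o) * log[p(f,o) / (p(f)p(o))]."""
    joint = np.asarray(joint_counts, dtype=float)
    joint = joint / joint.sum()
    pf = joint.sum(axis=1, keepdims=True)
    po = joint.sum(axis=0, keepdims=True)
    with np.errstate(divide="ignore", invalid="ignore"):
        terms = joint * np.log(joint / (pf * po))
    return np.nansum(terms)          # empty cells contribute zero

# Toy contingency table: forecast risk category vs. observed disease status
counts = np.array([[80, 20],    # forecast "low risk":  80 healthy, 20 diseased
                   [15, 35]])   # forecast "high risk": 15 healthy, 35 diseased
print(expected_mutual_information(counts))
```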
A Comparison of Forecast Error Generators for Modeling Wind and Load Uncertainty
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lu, Ning; Diao, Ruisheng; Hafen, Ryan P.
2013-12-18
This paper presents four algorithms to generate random forecast error time series, including a truncated-normal distribution model, a state-space based Markov model, a seasonal autoregressive moving average (ARMA) model, and a stochastic-optimization based model. The error time series are used to create real-time (RT), hour-ahead (HA), and day-ahead (DA) wind and load forecast time series that statistically match historically observed forecasting data sets, used for variable generation integration studies. A comparison is made using historical DA load forecast and actual load values to generate new sets of DA forecasts with similar statistical forecast error characteristics. This paper discusses and compares the capabilities of each algorithm to preserve the characteristics of the historical forecast data sets.
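A minimal sketch of the ARMA-based error generator, assuming hypothetical ARMA(1,1) coefficients fitted to historical day-ahead load forecast errors; it uses statsmodels' ArmaProcess and illustrates the general approach rather than the paper's calibrated models.

```python
import numpy as np
from statsmodels.tsa.arima_process import ArmaProcess

np.random.seed(42)

# Hypothetical ARMA(1,1) error model (coefficients and scale are assumptions).
phi, theta, sigma = 0.7, 0.2, 0.05
ar = np.array([1.0, -phi])                # lag-polynomial convention: 1 - phi*L
ma = np.array([1.0, theta])

process = ArmaProcess(ar, ma)
error_series = process.generate_sample(nsample=24 * 365, scale=sigma)

# Apply the synthetic errors to an actual load profile to obtain a new DA
# forecast series whose errors statistically resemble the historical ones.
actual_load = 1.0 + 0.3 * np.sin(2 * np.pi * np.arange(24 * 365) / 24)   # toy profile
synthetic_da_forecast = actual_load * (1.0 + error_series)
```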
Skilful seasonal forecasts of streamflow over Europe?
NASA Astrophysics Data System (ADS)
Arnal, Louise; Cloke, Hannah L.; Stephens, Elisabeth; Wetterhall, Fredrik; Prudhomme, Christel; Neumann, Jessica; Krzeminski, Blazej; Pappenberger, Florian
2018-04-01
This paper considers whether there is any added value in using seasonal climate forecasts instead of historical meteorological observations for forecasting streamflow on seasonal timescales over Europe. A Europe-wide analysis of the skill of the newly operational EFAS (European Flood Awareness System) seasonal streamflow forecasts (produced by forcing the Lisflood model with the ECMWF System 4 seasonal climate forecasts), benchmarked against the ensemble streamflow prediction (ESP) forecasting approach (produced by forcing the Lisflood model with historical meteorological observations), is undertaken. The results suggest that, on average, the System 4 seasonal climate forecasts improve the streamflow predictability over historical meteorological observations for the first month of lead time only (in terms of hindcast accuracy, sharpness and overall performance). However, the predictability varies in space and time and is greater in winter and autumn. Parts of Europe additionally exhibit a longer predictability, up to 7 months of lead time, for certain months within a season. In terms of hindcast reliability, the EFAS seasonal streamflow hindcasts are on average less skilful than the ESP for all lead times. The results also highlight the potential usefulness of the EFAS seasonal streamflow forecasts for decision-making (measured in terms of the hindcast discrimination for the lower and upper terciles of the simulated streamflow). Although the ESP is the most potentially useful forecasting approach in Europe, the EFAS seasonal streamflow forecasts appear more potentially useful than the ESP in some regions and for certain seasons, especially in winter for almost 40 % of Europe. Patterns in the EFAS seasonal streamflow hindcast skill are however not mirrored in the System 4 seasonal climate hindcasts, hinting at the need for a better understanding of the link between hydrological and meteorological variables on seasonal timescales, with the aim of improving climate-model-based seasonal streamflow forecasting.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Khalil, Mohammad; Salloum, Maher; Lee, Jina
2017-07-10
KARMA4 is a C++ library for autoregressive moving average (ARMA) modeling and forecasting of time-series data while incorporating both process and observation error. KARMA4 is designed for fitting and forecasting of time-series data for predictive purposes.
NASA Astrophysics Data System (ADS)
Kozel, Tomas; Stary, Milos
2017-12-01
The main advantage of stochastic forecasting is the fan of possible values that a deterministic forecasting method cannot provide. The future development of a random process is described better by stochastic than by deterministic forecasting, and discharge in a measurement profile can be treated as a random process. This article presents the construction and application of a forecasting model for a managed large open water reservoir with a supply function. The model is based on neural networks (NS) and zone models, and forecasts values of average monthly flow from input values of average monthly flow, the trained neural network and random numbers. Part of the data was sorted into one moving zone, created around the last measured average monthly flow, and the correlation matrix was assembled only from data belonging to that zone. The model was compiled for forecasts of 1 to 12 months, using 2 to 11 backward monthly flows as NS inputs for model construction. The data were freed of asymmetry with the Box-Cox rule (Box, Cox, 1964), with the transformation parameter found by optimization, and in the next step were transformed to the standard normal distribution. The data have a monthly step and the forecast is not recurring. A 90-year real flow series was used to compile the model: the first 75 years were used for calibration (the matrix of input-output relationships) and the last 15 years only for validation. The outputs of the model were compared with the real flow series. For the comparison between the real flow series (100% successful forecast) and the forecasts, both were applied to the management of an artificially constructed reservoir. The course of reservoir management using a genetic algorithm (GE) with the real flow series was compared with a fuzzy model (Fuzzy) driven by forecasts from the moving zone model. During the evaluation the best zone size was sought. The results show that the highest number of inputs did not give the best results, and the ideal zone size lies in the interval from 25 to 35, where the course of management was almost the same for all values in that interval. The resulting course of management was compared with the course obtained using GE with the real flow series. The comparison showed that the fuzzy model with forecast values was able to manage the main failures, and the artificial disturbances introduced by the model were found to be essential once the water volumes during management were evaluated. The forecasting model in combination with the fuzzy model provides very good results for the management of a water reservoir with a storage function and can be recommended for this purpose.
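The Box-Cox normalization step described above can be sketched as follows; the flow series is synthetic, and the transformation parameter is found by scipy's built-in maximum-likelihood estimate as a stand-in for the optimization used in the paper.

```python
import numpy as np
from scipy import stats
from scipy.special import inv_boxcox

# Hypothetical series of average monthly flows (Box-Cox requires positive values).
rng = np.random.default_rng(1)
monthly_flow = rng.gamma(shape=2.0, scale=15.0, size=90 * 12)   # 90 years of monthly data

# Remove asymmetry with the Box-Cox rule; the transformation parameter is
# estimated here by maximum likelihood.
flow_bc, lam = stats.boxcox(monthly_flow)

# Transform to (approximately) the standard normal distribution.
mu, sd = flow_bc.mean(), flow_bc.std()
flow_std = (flow_bc - mu) / sd

# Back-transform a forecast issued in the standardized space to a flow value.
forecast_std = 0.5
forecast_flow = inv_boxcox(forecast_std * sd + mu, lam)
```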
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ajami, N K; Duan, Q; Gao, X
2005-04-11
This paper examines several multi-model combination techniques: the Simple Multi-model Average (SMA), the Multi-Model Super Ensemble (MMSE), Modified Multi-Model Super Ensemble (M3SE) and the Weighted Average Method (WAM). These model combination techniques were evaluated using the results from the Distributed Model Intercomparison Project (DMIP), an international project sponsored by the National Weather Service (NWS) Office of Hydrologic Development (OHD). All of the multi-model combination results were obtained using uncalibrated DMIP model outputs and were compared against the best uncalibrated as well as the best calibrated individual model results. The purpose of this study is to understand how different combination techniques affect the skill levels of the multi-model predictions. This study revealed that the multi-model predictions obtained from uncalibrated single model predictions are generally better than any single member model predictions, even the best calibrated single model predictions. Furthermore, more sophisticated multi-model combination techniques that incorporated bias correction steps work better than simple multi-model average predictions or multi-model predictions without bias correction.
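A minimal sketch contrasting the Simple Multi-model Average with a weighted combination; the member simulations are synthetic, and the weights here come from an ordinary least-squares fit over a calibration window, which is one common way to build a Weighted Average Method and may differ from the exact DMIP scheme.

```python
import numpy as np

rng = np.random.default_rng(7)
obs = rng.gamma(2.0, 10.0, size=365)                        # observed daily streamflow (toy)
members = np.column_stack([obs * b + rng.normal(0, 3, 365)
                           for b in (0.8, 1.1, 0.95, 1.3)]) # 4 uncalibrated model outputs

# Simple Multi-model Average: equal weights.
sma = members.mean(axis=1)

# Weighted combination: weights from a least-squares fit of the observations
# on the member simulations over a calibration period.
calib = slice(0, 180)
weights, *_ = np.linalg.lstsq(members[calib], obs[calib], rcond=None)
wam = members @ weights

rmse = lambda x: np.sqrt(np.mean((x - obs[180:]) ** 2))
print("SMA RMSE:", rmse(sma[180:]), " WAM RMSE:", rmse(wam[180:]))
```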
NASA Astrophysics Data System (ADS)
Zhang, Shupeng; Yi, Xue; Zheng, Xiaogu; Chen, Zhuoqi; Dan, Bo; Zhang, Xuanze
2014-11-01
In this paper, a global carbon assimilation system (GCAS) is developed for optimizing the global land surface carbon flux at 1° resolution using multiple ecosystem models. In GCAS, three ecosystem models, Boreal Ecosystem Productivity Simulator, Carnegie-Ames-Stanford Approach, and Community Atmosphere Biosphere Land Exchange, produce the prior fluxes, and an atmospheric transport model, Model for OZone And Related chemical Tracers, is used to calculate atmospheric CO2 concentrations resulting from these prior fluxes. A local ensemble Kalman filter is developed to assimilate atmospheric CO2 data observed at 92 stations to optimize the carbon flux for six land regions, and the Bayesian model averaging method is implemented in GCAS to calculate the weighted average of the optimized fluxes based on individual ecosystem models. The weights for the models are found according to the closeness of their forecasted CO2 concentrations to observations. Results of this study show that the model weights vary in time and space, allowing for an optimum utilization of the different strengths of the different ecosystem models. It is also demonstrated that spatial localization is an effective technique to avoid spurious optimization results for regions that are not well constrained by the atmospheric data. Based on the multimodel optimized flux from GCAS, we found that the average global terrestrial carbon sink over the 2002-2008 period is 2.97 ± 1.1 PgC yr-1, and the sinks are 0.88 ± 0.52, 0.27 ± 0.33, 0.67 ± 0.39, 0.90 ± 0.68, 0.21 ± 0.31, and 0.04 ± 0.08 PgC yr-1 for North America, South America, Africa, Eurasia, tropical Asia, and Australia, respectively. This multimodel GCAS can be used to improve global carbon cycle estimation.
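A minimal sketch of Bayesian-model-averaging weights derived from the closeness of each model's forecasted CO2 concentrations to observations, assuming Gaussian errors; the concentrations, fluxes, and error scale are placeholders, not GCAS values.

```python
import numpy as np

# Forecasted CO2 concentrations (ppm) at a few stations from three ecosystem
# models, plus the observed values (all numbers hypothetical).
forecasts = np.array([[395.1, 396.0, 394.7],
                      [395.6, 395.9, 395.2],
                      [396.4, 395.5, 395.8]])      # shape: (stations, models)
observed = np.array([395.4, 395.8, 395.5])
sigma = 0.5                                         # assumed observation error (ppm)

# Gaussian log-likelihood of each model given the observations.
sq_err = ((forecasts - observed[:, None]) ** 2).sum(axis=0)
log_lik = -0.5 * sq_err / sigma**2

# Normalize to BMA weights that sum to one.
weights = np.exp(log_lik - log_lik.max())
weights /= weights.sum()

# Weighted average of the per-model optimized fluxes (PgC/yr, hypothetical).
optimized_flux = np.array([2.6, 3.1, 2.9])
bma_flux = float(weights @ optimized_flux)
```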
Adaptive time-variant models for fuzzy-time-series forecasting.
Wong, Wai-Keung; Bai, Enjian; Chu, Alice Wai-Ching
2010-12-01
A fuzzy time series has been applied to the prediction of enrollment, temperature, stock indices, and other domains. Related studies mainly focus on three factors, namely, the partition of discourse, the content of forecasting rules, and the methods of defuzzification, all of which greatly influence the prediction accuracy of forecasting models. These studies use fixed analysis window sizes for forecasting. In this paper, an adaptive time-variant fuzzy-time-series forecasting model (ATVF) is proposed to improve forecasting accuracy. The proposed model automatically adapts the analysis window size of fuzzy time series based on the prediction accuracy in the training phase and uses heuristic rules to generate forecasting values in the testing phase. The performance of the ATVF model is tested using both simulated and actual time series including the enrollments at the University of Alabama, Tuscaloosa, and the Taiwan Stock Exchange Capitalization Weighted Stock Index (TAIEX). The experiment results show that the proposed ATVF model achieves a significant improvement in forecasting accuracy as compared to other fuzzy-time-series forecasting models.
Men, Zhongxian; Yee, Eugene; Lien, Fue-Sang; Yang, Zhiling; Liu, Yongqian
2014-01-01
Short-term wind speed and wind power forecasts (for a 72 h period) are obtained using a nonlinear autoregressive exogenous artificial neural network (ANN) methodology which incorporates either numerical weather prediction or high-resolution computational fluid dynamics wind field information as an exogenous input. An ensemble approach is used to combine the predictions from many candidate ANNs in order to provide improved forecasts for wind speed and power, along with the associated uncertainties in these forecasts. More specifically, the ensemble ANN is used to quantify the uncertainties arising from the network weight initialization and from the unknown structure of the ANN. All members forming the ensemble of neural networks were trained using an efficient particle swarm optimization algorithm. The results of the proposed methodology are validated using wind speed and wind power data obtained from an operational wind farm located in Northern China. The assessment demonstrates that this methodology for wind speed and power forecasting generally provides an improvement in predictive skills when compared to the practice of using an "optimal" weight vector from a single ANN while providing additional information in the form of prediction uncertainty bounds.
A Hybrid Approach on Tourism Demand Forecasting
NASA Astrophysics Data System (ADS)
Nor, M. E.; Nurul, A. I. M.; Rusiman, M. S.
2018-04-01
Tourism has become one of the important industries that contributes to the country’s economy. Tourism demand forecasting gives valuable information to policy makers, decision makers and organizations related to the tourism industry in order to make crucial decisions and plans. However, it is challenging to produce an accurate forecast since economic data such as tourism data are affected by social, economic and environmental factors. In this study, an equally-weighted hybrid method, which is a combination of Box-Jenkins and Artificial Neural Networks, was applied to forecast Malaysia’s tourism demand. The forecasting performance was assessed by taking each individual method as a benchmark. The results showed that this hybrid approach outperformed the other two models.
Benefits of Sharing Information: Supermodel Ensemble and Applications in South America
NASA Astrophysics Data System (ADS)
Dias, P. L.
2006-05-01
A model intercomparison program involving a large number of academic and operational institutions has been implemented in South America since 2003, motivated by the SALLJEX Intercomparison Program in 2003 (a research program focused on the identification of the role of the Andes low level jet moisture transport from the Amazon to the Plata basin) and the WMO/THORPEX (www.wmo.int/thorpex) goals to improve predictability through the proper combination of numerical weather forecasts. This program also explores the potential predictability associated with the combination of a large number of possible scenarios on the time scale of a few days to up to 15 days. Five academic institutions and five operational forecasting centers in several countries in South America, 1 academic institution in the USA, and the main global forecasting centers (NCEP, UKMO, ECMWF) agreed to provide numerical products based on operational and experimental models. The metric for model validation is concentrated on the fit of the forecast to surface observations. Meteorological data from airports, synoptic stations operated by national weather services, automatic data platforms maintained by different institutions, the PIRATA buoys etc. are all collected through LDM/NCAR or direct transmission. Approximately 40 model outputs are available on a daily basis, twice a day. A simple procedure based on data assimilation principles was quite successful in combining the available forecasts in order to produce temperature, dew point, wind, pressure and precipitation forecasts at station points in S. America. The procedure is based on removing each model bias at the observational point and a weighted average based on the mean square error of the forecasts. The base period for estimating the bias and mean square error is of the order of 15 to 30 days. Products of the intercomparison model program and the optimal statistical combination of the available forecasts are public and available in real time (www.master.iag.usp.br/). Monitoring of the use of the products reveals a growing trend in the last year (reaching about 10,000 accesses per day in recent months). The intercomparison program provides a rich data set for educational products (real time use in Synoptic Meteorology and Numerical Weather Forecasting lectures), operational weather forecasts in national or regional weather centers and for research purposes. During the first phase of the program it was difficult to convince potential participants to share the information on the public homepage. However, as the system evolved, more and more institutions became associated with the program. The general opinion of the participants is that the system provides a unified metric for evaluation, a forum for discussion of the physical origin of the model forecast differences and therefore improvement of the quality of the numerical guidance.
Using Seasonal Forecasts for medium-term Electricity Demand Forecasting on Italy
NASA Astrophysics Data System (ADS)
De Felice, M.; Alessandri, A.; Ruti, P.
2012-12-01
Electricity demand forecasting is an essential tool for energy management and operation scheduling for electric utilities. In power engineering, medium-term forecasting is defined as prediction up to 12 months ahead, and it is commonly performed considering weather climatology rather than actual forecasts. This work aims to analyze the predictability of electricity demand on the seasonal time scale, considering seasonal samples, i.e. averages over three months. Electricity demand data have been provided by the Italian Transmission System Operator for eight different geographical areas; Fig. 1 shows, for each area, the average yearly demand anomaly for each season. This work uses data for each summer during 1990-2010, and all the datasets have been pre-processed to remove trends and reduce the influence of calendar and economic effects. The choice of focusing this research on the summer period is due to the critical peaks of demand to which the power grid is subject during hot days. Weather data have been included considering observations provided by the ECMWF ERA-INTERIM reanalyses. Primitive variables (2-metre temperature, pressure, etc.) and derived variables (cooling and heating degree days) have been averaged over the summer months. Particular attention has been given to the influence of the persistence of positive temperature anomalies, and a derived variable counting the number of consecutive extreme days has been used. Electricity demand forecasts have been performed using linear and nonlinear regression methods, and stepwise model selection procedures have been used to perform variable selection with respect to performance measures. Significance tests on multiple linear regression showed the importance of cooling degree days during summer in the North-East and South of Italy, with an increase of statistical significance after 2003, a result consistent with the diffusion of air conditioning and ventilation equipment in the last decade. Finally, using seasonal climate forecasts we evaluate the performance of electricity demand forecasts made with predicted variables on Italian regions, with encouraging results in the South of Italy. This work gives an initial assessment of the predictability of electricity demand on the seasonal time scale, evaluating the relevance of climate information provided by seasonal forecasts for electricity management during high-demand periods.
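A minimal sketch of two of the derived predictors mentioned above, cooling degree days and the count of consecutive extreme days, computed from a synthetic daily 2-metre temperature series; the 24 °C base temperature and the 90th-percentile threshold are assumptions.

```python
import numpy as np
import pandas as pd

# Hypothetical daily 2-metre temperatures (°C) for one summer.
days = pd.date_range("2010-06-01", "2010-08-31", freq="D")
rng = np.random.default_rng(3)
t2m = pd.Series(26 + 4 * rng.standard_normal(len(days)), index=days)

base = 24.0                                   # assumed base temperature for CDD
cdd = (t2m - base).clip(lower=0)              # daily cooling degree days
seasonal_cdd = cdd.sum()                      # seasonal (JJA) total

# Persistence of positive temperature anomaly: longest run of consecutive
# "extreme" days above a threshold (here the 90th percentile, an assumption).
extreme = t2m > t2m.quantile(0.9)
run_lengths = extreme.groupby((~extreme).cumsum()).cumsum()
max_consecutive_extreme_days = int(run_lengths.max())
```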
Exploiting Domain Knowledge to Forecast Heating Oil Consumption
NASA Astrophysics Data System (ADS)
Corliss, George F.; Sakauchi, Tsuginosuke; Vitullo, Steven R.; Brown, Ronald H.
2011-11-01
The GasDay laboratory at Marquette University provides forecasts of energy consumption. One such service is the Heating Oil Forecaster, a service for a heating oil or propane delivery company. Accurate forecasts can help reduce the number of trucks and drivers while providing efficient inventory management by stretching the time between deliveries. Accurate forecasts help retain valuable customers. If a customer runs out of fuel, the delivery service incurs costs for an emergency delivery and often a service call. Further, the customer probably changes providers. The basic modeling is simple: fit delivery amounts sk to cumulative Heating Degree Days (HDDk = Σ max(0, 60 °F − daily average temperature)), with wind adjustment, for each delivery period: sk ≈ ŝk = β0 + β1 HDDk. For the first few deliveries, there is not enough data to provide a reliable estimate of K = 1/β1, so we use Bayesian techniques with priors constructed from historical data. A fresh model is trained for each customer with each delivery, producing daily consumption forecasts using actual and forecast weather until the next delivery. In practice, a delivery may not fill the oil tank if the delivery truck runs out of oil or the automatic shut-off activates prematurely. Special outlier detection and recovery based on domain knowledge addresses this and other special cases. The error at each delivery is the difference between that delivery and the aggregate of daily forecasts using actual weather since the preceding delivery. Out-of-sample testing yields MAPE = 21.2% and an average error of 6.0% of tank capacity for Company A. The MAPE and average error as a percentage of tank capacity for Company B are 31.5% and 6.6%, respectively. One heating oil delivery company that uses this forecasting service [1] reported that instances of a customer running out of oil were reduced from about 250 in 50,000 deliveries per year before contracting for our service to about 10 with our service. They delivered slightly more oil with 20% fewer trucks and drivers, citing 250,000 in annual operational cost savings.
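A minimal sketch of the Bayesian fit of delivery amounts to cumulative heating degree days, using a conjugate normal prior built from "historical" coefficients; all numbers are hypothetical, and the update shown is a generic Bayesian linear regression, not GasDay's production method.

```python
import numpy as np

# Delivery amounts (gallons) and cumulative heating degree days between deliveries
# for one customer (hypothetical data: only a few deliveries so far).
hdd = np.array([420.0, 510.0, 380.0])
deliveries = np.array([148.0, 172.0, 135.0])
X = np.column_stack([np.ones_like(hdd), hdd])      # design matrix for beta0 + beta1*HDD

# Prior from historical customers: mean coefficients and their covariance (assumed).
prior_mean = np.array([10.0, 0.30])
prior_cov = np.diag([50.0**2, 0.10**2])
sigma2 = 15.0**2                                   # assumed delivery-noise variance

# Conjugate Bayesian update: posterior precision and posterior mean coefficients.
post_prec = np.linalg.inv(prior_cov) + X.T @ X / sigma2
post_mean = np.linalg.solve(post_prec,
                            np.linalg.inv(prior_cov) @ prior_mean + X.T @ deliveries / sigma2)

beta0, beta1 = post_mean
K = 1.0 / beta1                                    # degree days per unit of fuel

# Forecast consumption since the last fill from accumulated actual+forecast HDD.
hdd_since_last = 95.0
consumption_forecast = beta0 + beta1 * hdd_since_last
```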
1992 five year battery forecast
DOE Office of Scientific and Technical Information (OSTI.GOV)
Amistadi, D.
1992-12-01
Five-year trends for automotive and industrial batteries are projected. Topics covered include: SLI shipments; lead consumption; automotive batteries (5-year annual growth rates); industrial batteries (standby power and motive power); estimated average battery life by area/country for 1989; US motor vehicle registrations; replacement battery shipments; potential lead consumption in electric vehicles; BCI recycling rates for lead-acid batteries; US average car/light truck battery life; channels of distribution; replacement battery inventory at end of July; and the 2nd US battery shipment forecast.
A Systems Modeling Approach to Forecast Corn Economic Optimum Nitrogen Rate
Puntel, Laila A.; Sawyer, John E.; Barker, Daniel W.; Thorburn, Peter J.; Castellano, Michael J.; Moore, Kenneth J.; VanLoocke, Andrew; Heaton, Emily A.; Archontoulis, Sotirios V.
2018-01-01
Historically crop models have been used to evaluate crop yield responses to nitrogen (N) rates after harvest when it is too late for the farmers to make in-season adjustments. We hypothesize that the use of a crop model as an in-season forecast tool will improve current N decision-making. To explore this, we used the Agricultural Production Systems sIMulator (APSIM) calibrated with long-term experimental data for central Iowa, USA (16-years in continuous corn and 15-years in soybean-corn rotation) combined with actual weather data up to a specific crop stage and historical weather data thereafter. The objectives were to: (1) evaluate the accuracy and uncertainty of corn yield and economic optimum N rate (EONR) predictions at four forecast times (planting time, 6th and 12th leaf, and silking phenological stages); (2) determine whether the use of analogous historical weather years based on precipitation and temperature patterns as opposed to using a 35-year dataset could improve the accuracy of the forecast; and (3) quantify the value added by the crop model in predicting annual EONR and yields using the site-mean EONR and the yield at the EONR to benchmark predicted values. Results indicated that the mean corn yield predictions at planting time (R2 = 0.77) using 35-years of historical weather was close to the observed and predicted yield at maturity (R2 = 0.81). Across all forecasting times, the EONR predictions were more accurate in corn-corn than soybean-corn rotation (relative root mean square error, RRMSE, of 25 vs. 45%, respectively). At planting time, the APSIM model predicted the direction of optimum N rates (above, below or at average site-mean EONR) in 62% of the cases examined (n = 31) with an average error range of ±38 kg N ha−1 (22% of the average N rate). Across all forecast times, prediction error of EONR was about three times higher than yield predictions. The use of the 35-year weather record was better than using selected historical weather years to forecast (RRMSE was on average 3% lower). Overall, the proposed approach of using the crop model as a forecasting tool could improve year-to-year predictability of corn yields and optimum N rates. Further improvements in modeling and set-up protocols are needed toward more accurate forecast, especially for extreme weather years with the most significant economic and environmental cost. PMID:29706974
Evaluation of Multi-Model Ensemble System for Seasonal and Monthly Prediction
NASA Astrophysics Data System (ADS)
Zhang, Q.; Van den Dool, H. M.
2013-12-01
Since August 2011, the realtime seasonal forecasts of the U.S. National Multi-Model Ensemble (NMME) have been made on the 8th of each month by the NCEP Climate Prediction Center (CPC). During the first year, the participating models were NCEP/CFSv1&2, GFDL/CM2.2, NCAR/U.Miami/COLA/CCSM3, NASA/GEOS5, and IRI/ECHAM-a & ECHAM-f for the realtime NMME forecast. The Canadian Meteorological Center CanCM3 and CM4 replaced the CFSv1 and IRI's models in the second year. The NMME team at CPC collects three variables, including precipitation, 2-meter temperature and sea surface temperature, from each modeling center on a 1x1 global grid, removes systematic errors, makes the grand ensemble mean with equal weight for each model, and constructs a probability forecast with equal weight for each member. The team then provides the NMME forecast to the operational CPC forecaster responsible for the seasonal and monthly outlook each month. Verification of the seasonal and monthly prediction from NMME is conducted by calculating the anomaly correlation (AC) from the 30-year hindcasts (1982-2011) of the individual models and the NMME ensemble. The motivation of this study is to provide skill benchmarks for future improvements of the NMME seasonal and monthly prediction system. The experimental (Phase I) stage of the project already supplies routine guidance to users of the NMME forecasts.
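A minimal sketch of the anomaly correlation computed from a 30-year set of hindcasts and observations; the fields are synthetic, the climatology is each data set's own 30-year mean, and no area weighting is applied.

```python
import numpy as np

# Hindcast and observed seasonal-mean 2-m temperature on a coarse lat-lon grid
# for 30 years (1982-2011); values are synthetic placeholders.
rng = np.random.default_rng(11)
obs = rng.normal(size=(30, 73, 144))
hindcast = 0.6 * obs + 0.8 * rng.normal(size=obs.shape)   # toy model with some skill

# Anomalies with respect to each data set's own 30-year climatology.
obs_anom = obs - obs.mean(axis=0)
fcst_anom = hindcast - hindcast.mean(axis=0)

# Centered anomaly correlation per grid point over the 30 hindcast years.
num = (obs_anom * fcst_anom).sum(axis=0)
den = np.sqrt((obs_anom**2).sum(axis=0) * (fcst_anom**2).sum(axis=0))
ac = num / den
print("grid-mean anomaly correlation:", float(ac.mean()))
```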
Habka, Dany; Mann, David; Landes, Ronald; Soto-Gutierrez, Alejandro
2015-01-01
During the past 20 years liver transplantation has become the definitive treatment for most severe types of liver failure and hepatocellular carcinoma, in both children and adults. In the U.S., roughly 16,000 individuals are on the liver transplant waiting list. Only 38% of them will receive a transplant due to the organ shortage. This paper explores another option: bioengineering an autologous liver graft. We developed a 20-year model projecting future demand for liver transplants, along with costs based on current technology. We compared these cost projections against projected costs to bioengineer autologous liver grafts. The model was divided into: 1) the epidemiology model forecasting the number of wait-listed patients, operated patients and postoperative patients; and 2) the treatment model forecasting costs (pre-transplant-related costs; transplant (admission)-related costs; and 10-year post-transplant-related costs) during the simulation period. The patient population was categorized using the Model for End-Stage Liver Disease score. The number of patients on the waiting list was projected to increase 23% over 20 years while the weighted average treatment costs in the pre-liver transplantation phase were forecast to increase 83% in Year 20. Projected demand for livers will increase 10% in 10 years and 23% in 20 years. Total costs of liver transplantation are forecast to increase 33% in 10 years and 81% in 20 years. By comparison, the projected cost to bioengineer autologous liver grafts is $9.7M based on current catalog prices for iPS-derived liver cells. The model projects a persistent increase in need and cost of donor livers over the next 20 years that’s constrained by a limited supply of donor livers. The number of patients who die while on the waiting list will reflect this ever-growing disparity. Currently, bioengineering autologous liver grafts is cost prohibitive. However, costs will decline rapidly with the introduction of new manufacturing strategies and economies of scale. PMID:26177505
Soyiri, Ireneous N; Reidpath, Daniel D
2013-01-01
Forecasting higher than expected numbers of health events provides potentially valuable insights in its own right, and may contribute to health services management and syndromic surveillance. This study investigates the use of quantile regression to predict higher than expected respiratory deaths. Data taken from 70,830 deaths occurring in New York were used. Temporal, weather and air quality measures were fitted using quantile regression at the 90th percentile with half the data (in-sample). Four QR models were fitted: an unconditional model predicting the 90th percentile of deaths (Model 1), a seasonal/temporal model (Model 2), a seasonal, temporal plus lags of weather and air quality model (Model 3), and a seasonal, temporal model with 7-day moving averages of weather and air quality (Model 4). Models were cross-validated with the out-of-sample data. Performance was measured as the proportionate reduction in the weighted sum of absolute deviations by a conditional over an unconditional model; i.e., the coefficient of determination (R1). The coefficient of determination showed an improvement over the unconditional model between 0.16 and 0.19. The greatest improvement in predictive and forecasting accuracy of daily mortality was associated with the inclusion of seasonal and temporal predictors (Model 2). No gains were made in the predictive models with the addition of weather and air quality predictors (Models 3 and 4). However, forecasting models that included weather and air quality predictors performed slightly better than the seasonal and temporal model alone (i.e., Model 3 > Model 4 > Model 2). This study provided a new approach to predict higher than expected numbers of respiratory-related deaths. The approach, while promising, has limitations and should be treated at this stage as a proof of concept.
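A minimal sketch of a 90th-percentile quantile regression with seasonal/temporal terms, fitted on an in-sample half and applied out of sample, using statsmodels' quantreg; the daily death counts and predictors are simulated placeholders rather than the New York data.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated daily series: respiratory deaths plus simple seasonal/temporal terms.
rng = np.random.default_rng(5)
days = pd.date_range("2000-01-01", periods=2000, freq="D")
doy = days.dayofyear.values
deaths = rng.poisson(20 + 5 * np.cos(2 * np.pi * doy / 365.25))
df = pd.DataFrame({
    "deaths": deaths,
    "sin1": np.sin(2 * np.pi * doy / 365.25),
    "cos1": np.cos(2 * np.pi * doy / 365.25),
    "trend": np.arange(len(days)),
})

# In-sample half for fitting, out-of-sample half for cross-validation.
fit_df, test_df = df.iloc[:1000], df.iloc[1000:]

# 90th-percentile quantile regression (seasonal/temporal model only).
model = smf.quantreg("deaths ~ sin1 + cos1 + trend", fit_df).fit(q=0.9)
pred90 = model.predict(test_df)

# Days forecast to exceed the conditional 90th percentile of deaths.
higher_than_expected = test_df["deaths"] > pred90
```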
NASA Astrophysics Data System (ADS)
dos Santos, A. F.; Freitas, S. R.; de Mattos, J. G. Z.; de Campos Velho, H. F.; Gan, M. A.; da Luz, E. F. P.; Grell, G. A.
2013-09-01
In this paper we consider an optimization problem applying the metaheuristic Firefly algorithm (FY) to weight an ensemble of rainfall forecasts from daily precipitation simulations with the Brazilian developments on the Regional Atmospheric Modeling System (BRAMS) over South America during January 2006. The method is addressed as a parameter estimation problem to weight the ensemble of precipitation forecasts carried out using different options of the convective parameterization scheme. Ensemble simulations were performed using different choices of closures, representing different formulations of dynamic control (the modulation of convection by the environment) in a deep convection scheme. The optimization problem is solved as an inverse problem of parameter estimation. The application and validation of the methodology is carried out using daily precipitation fields, defined over South America and obtained by merging remote sensing estimations with rain gauge observations. The quadratic difference between the model and observed data was used as the objective function to determine the best combination of the ensemble members to reproduce the observations. To reduce the model rainfall biases, the set of weights determined by the algorithm is used to weight members of an ensemble of model simulations in order to compute a new precipitation field that represents the observed precipitation as closely as possible. The validation of the methodology is carried out using classical statistical scores. The algorithm has produced the best combination of the weights, resulting in a new precipitation field closest to the observations.
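A minimal sketch of the weight-estimation step: the quadratic model-observation difference is minimized over non-negative ensemble weights, with scipy's SLSQP optimizer standing in for the Firefly metaheuristic; the sum-to-one constraint and the toy precipitation data are assumptions.

```python
import numpy as np
from scipy.optimize import minimize

# Daily precipitation forecasts from an ensemble of convective closures and the
# merged gauge/satellite analysis, flattened over grid points and days (toy data).
rng = np.random.default_rng(9)
obs = rng.gamma(2.0, 3.0, size=5000)
ensemble = np.column_stack([obs + rng.normal(0, s, obs.size) for s in (1, 2, 4, 6, 8)])

def objective(w):
    """Quadratic difference between the weighted ensemble and the observations."""
    return np.sum((ensemble @ w - obs) ** 2)

n = ensemble.shape[1]
res = minimize(objective, x0=np.full(n, 1.0 / n),
               bounds=[(0.0, 1.0)] * n,
               constraints=[{"type": "eq", "fun": lambda w: w.sum() - 1.0}],
               method="SLSQP")
weights = res.x
new_precip_field = ensemble @ weights     # combined field closest to the observations
```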
NASA Astrophysics Data System (ADS)
Rahmat, R. F.; Nasution, F. R.; Seniman; Syahputra, M. F.; Sitompul, O. S.
2018-02-01
Weather is the condition of the air in a certain region over a relatively short period of time, measured with various parameters such as temperature, air pressure, wind velocity, humidity and other phenomena in the atmosphere. In fact, extreme weather due to global warming can lead to drought, flood, hurricane and other forms of weather events, which directly affect social and economic activities. Hence, a forecasting technique is needed to predict the weather with a distinctive output, particularly a GIS-based mapping process with information about the current weather status at certain coordinates of each region and the capability to forecast for seven days afterward. The data used in this research are retrieved in real time from the openweathermap server and BMKG. In order to obtain a low error rate and high forecasting accuracy, the authors use the Bayesian Model Averaging (BMA) method. The results show that the BMA method has good accuracy. The forecasting error is calculated as the mean square error (MSE). The error for minimum temperature is 0.28 and for maximum temperature 0.15, while the error for minimum humidity is 0.38 and for maximum humidity 0.04. The forecasting error for wind speed is 0.076. The lower the forecasting error rate, the more optimized the accuracy is.
NASA Astrophysics Data System (ADS)
De Felice, Matteo; Petitta, Marcello; Ruti, Paolo
2014-05-01
Photovoltaic capacity is growing steadily in Europe, passing from almost 14 GWp in 2011 to 21.5 GWp in 2012 [1]. Accurate forecasts are needed for planning and operational purposes, with the possibility to model and predict solar variability at different time scales. This study examines the predictability of daily surface solar radiation, comparing ECMWF operational forecasts with CM-SAF satellite measurements on the Meteosat (MSG) full disk domain. The operational forecasts used are the IFS system up to 10 days and the System4 seasonal forecast up to three months. Forecasts are analysed considering the average and variance of errors, showing error maps and averages over specific domains with respect to prediction lead times. In all cases, forecasts are compared with predictions obtained using persistence and state-of-the-art time-series models. We observe a wide range of errors, with the performance of forecasts dramatically affected by orography and season. The lowest errors are over southern Italy and Spain, with errors in some areas consistently under 10% up to ten days during summer (JJA). Finally, we conclude the study with some insight on how to "translate" the error on solar radiation into error on solar power production using available production data from solar power plants. [1] EurObserver, "Baromètre Photovoltaïque, Le journal des énergies renouvelables, April 2012."
DOE Office of Scientific and Technical Information (OSTI.GOV)
Optis, Michael; Scott, George N.; Draxl, Caroline
The goal of this analysis was to assess the wind power forecast accuracy of the Vermont Weather Analytics Center (VTWAC) forecast system and to identify potential improvements to the forecasts. Based on the analysis at Georgia Mountain, the following recommendations for improving forecast performance were made: 1. Resolve the significant negative forecast bias in February-March 2017 (50% underprediction on average); 2. Improve the ability of the forecast model to capture the strong diurnal cycle of wind power; 3. Add the ability for the forecast model to assess internal wake loss, particularly at sites where strong diurnal shifts in wind direction are present. Data availability and quality limited the robustness of this forecast assessment. A more thorough analysis would be possible given a longer period of record for the data (at least one full year), detailed supervisory control and data acquisition data for each wind plant, and more detailed information on the forecast system input data and methodologies.
Identification and synthetic modeling of factors affecting American black duck populations
Conroy, Michael J.; Miller, Mark W.; Hines, James E.
2002-01-01
We reviewed the literature on factors potentially affecting the population status of American black ducks (Anas rubripes). Our review suggests that there is some support for the influence of 4 major, continental-scope factors in limiting or regulating black duck populations: 1) loss in the quantity or quality of breeding habitats; 2) loss in the quantity or quality of wintering habitats; 3) harvest; and 4) interactions (competition, hybridization) with mallards (Anas platyrhynchos) during the breeding and/or wintering periods. These factors were used as the basis of an annual life cycle model in which reproduction rates and survival rates were modeled as functions of the above factors, with parameters of the model describing the strength of these relationships. Variation in the model parameter values allows for consideration of scientific uncertainty as to the degree each of these factors may be contributing to declines in black duck populations, and thus allows for the investigation of the possible effects of management (e.g., habitat improvement, harvest reductions) under different assumptions. We then used available, historical data on black duck populations (abundance, annual reproduction rates, and survival rates) and possible driving factors (trends in breeding and wintering habitats, harvest rates, and abundance of mallards) to estimate model parameters. Our estimated reproduction submodel included parameters describing negative density feedback of black ducks, positive influence of breeding habitat, and negative influence of mallard densities; our survival submodel included terms for positive influence of winter habitat on reproduction rates, and negative influences of black duck density (i.e., compensation to harvest mortality). Individual models within each group (reproduction, survival) involved various combinations of these factors, and each was given an information theoretic weight for use in subsequent prediction. The reproduction model with highest AIC weight (0.70) predicted black duck age ratios increasing as a function of decreasing mallard abundance and increasing acreage of breeding habitat; all models considered involved negative density dependence for black ducks. The survival model with highest AIC weight (0.51) predicted nonharvest survival increasing as a function of increasing acreage of wintering habitat and decreasing harvest rates (additive mortality); models involving compensatory mortality effects received ≈0.12 total weight, vs. 0.88 for additive models. We used the combined model, together with our historical data set, to perform a series of 1-year population forecasts, similar to those that might be performed under adaptive management. Initial model forecasts over-predicted observed breeding populations by ≈25%. Least-squares calibration reduced the bias to ≈0.5% underprediction. After calibration, model-averaged predictions over the 16 alternative models (4 reproduction × 4 survival, weighted by AIC model weights) explained 67% of the variation in annual breeding population abundance for black ducks, suggesting that it might have utility as a predictive tool in adaptive management. We investigated the effects of statistical uncertainty in parameter values on predicted population growth rates for the combined annual model, via sensitivity analyses. Parameter sensitivity varied in relation to the parameter values over the estimated confidence intervals, and in relation to harvest rates and mallard abundance.
Forecasts of black duck abundance were extremely sensitive to variation in parameter values for the coefficients for breeding and wintering habitat effects. Model-averaged forecasts of black duck abundance were also sensitive to changes in harvest rate and mallard abundance, with rapid declines in black duck abundance predicted for a range of harvest rates and mallard abundance higher than current levels of either factor, but easily envisaged, particularly given current rates of growth for mallard populations. Because of concerns about sensitivity to habitat coefficients, and particularly in light of deficiencies in the historical data used to estimate these parameters, we developed a simplified model that excludes habitat effects. We also developed alternative models involving a calibration adjustment for reproduction rates, survival rates, or neither. Calibration of survival rates performed best (AIC weight 0.59, % BIAS = -0.280, R2=0.679), with reproduction calibration somewhat inferior (AIC weight 0.41, % BIAS = -0.267, R2=0.672); models without calibration received virtually no AIC weight and were discarded. We recommend that the simplified model set (4 biological models × 2 alternative calibration factors) be retained as the best working set of alternative models for research and management. Finally, we provide some preliminary guidance for the development of adaptive harvest management for black ducks, using our working set of models.
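A minimal sketch of turning per-model AIC values into Akaike weights and forming a model-averaged one-year forecast, as in the model set described above; the AIC values and per-model forecasts are placeholders.

```python
import numpy as np

# AIC values and 1-year breeding-population forecasts from the 16 alternative
# models (4 reproduction x 4 survival); numbers are placeholders.
rng = np.random.default_rng(2)
aic = rng.uniform(100, 120, size=16)
forecasts = rng.normal(400_000, 30_000, size=16)   # predicted black duck abundance

# Akaike weights: exp(-delta_AIC / 2), normalized to sum to one.
delta = aic - aic.min()
weights = np.exp(-0.5 * delta)
weights /= weights.sum()

# Model-averaged forecast of next year's breeding population.
model_avg_forecast = float(weights @ forecasts)
```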
Forecasting daily patient volumes in the emergency department.
Jones, Spencer S; Thomas, Alun; Evans, R Scott; Welch, Shari J; Haug, Peter J; Snow, Gregory L
2008-02-01
Shifts in the supply of and demand for emergency department (ED) resources make the efficient allocation of ED resources increasingly important. Forecasting is a vital activity that guides decision-making in many areas of economic, industrial, and scientific planning, but has gained little traction in the health care industry. There are few studies that explore the use of forecasting methods to predict patient volumes in the ED. The goals of this study are to explore and evaluate the use of several statistical forecasting methods to predict daily ED patient volumes at three diverse hospital EDs and to compare the accuracy of these methods to the accuracy of a previously proposed forecasting method. Daily patient arrivals at three hospital EDs were collected for the period January 1, 2005, through March 31, 2007. The authors evaluated the use of seasonal autoregressive integrated moving average, time series regression, exponential smoothing, and artificial neural network models to forecast daily patient volumes at each facility. Forecasts were made for horizons ranging from 1 to 30 days in advance. The forecast accuracy achieved by the various forecasting methods was compared to the forecast accuracy achieved when using a benchmark forecasting method already available in the emergency medicine literature. All time series methods considered in this analysis provided improved in-sample model goodness of fit. However, post-sample analysis revealed that time series regression models that augment linear regression models by accounting for serial autocorrelation offered only small improvements in terms of post-sample forecast accuracy, relative to multiple linear regression models, while seasonal autoregressive integrated moving average, exponential smoothing, and artificial neural network forecasting models did not provide consistently accurate forecasts of daily ED volumes. This study confirms the widely held belief that daily demand for ED services is characterized by seasonal and weekly patterns. The authors compared several time series forecasting methods to a benchmark multiple linear regression model. The results suggest that the existing methodology proposed in the literature, multiple linear regression based on calendar variables, is a reasonable approach to forecasting daily patient volumes in the ED. However, the authors conclude that regression-based models that incorporate calendar variables, account for site-specific special-day effects, and allow for residual autocorrelation provide a more appropriate, informative, and consistently accurate approach to forecasting daily ED patient volumes.
Multicomponent ensemble models to forecast induced seismicity
NASA Astrophysics Data System (ADS)
Király-Proag, E.; Gischig, V.; Zechar, J. D.; Wiemer, S.
2018-01-01
In recent years, human-induced seismicity has become a more and more relevant topic due to its economic and social implications. Several models and approaches have been developed to explain underlying physical processes or forecast induced seismicity. They range from simple statistical models to coupled numerical models incorporating complex physics. We advocate the need for forecast testing as currently the best method for ascertaining whether models are capable of reasonably accounting for the key governing physical processes or not. Moreover, operational forecast models are of great interest for helping on-site decision-making in projects entailing induced earthquakes. We previously introduced a standardized framework following the guidelines of the Collaboratory for the Study of Earthquake Predictability, the Induced Seismicity Test Bench, to test, validate, and rank induced seismicity models. In this study, we describe how to construct multicomponent ensemble models based on Bayesian weightings that deliver more accurate forecasts than individual models in the case of the Basel 2006 and Soultz-sous-Forêts 2004 enhanced geothermal stimulation projects. For this, we examine five calibrated variants of two significantly different model groups: (1) Shapiro and Smoothed Seismicity, based on the seismogenic index, a simple modified Omori-law-type seismicity decay, and temporally weighted smoothed seismicity; (2) Hydraulics and Seismicity, based on numerically modelled pore pressure evolution that triggers seismicity using the Mohr-Coulomb failure criterion. We also demonstrate how the individual and ensemble models would perform as part of an operational Adaptive Traffic Light System. Investigating seismicity forecasts based on a range of potential injection scenarios, we use forecast periods of different durations to compute the occurrence probabilities of seismic events M ≥ 3. We show that in the case of the Basel 2006 geothermal stimulation the models forecast hazardous levels of seismicity days before the occurrence of felt events.
NASA Technical Reports Server (NTRS)
Kalnay, Eugenia; Dalcher, Amnon
1987-01-01
It is shown that it is possible to predict the skill of numerical weather forecasts - a quantity which is variable from day to day and region to region. This has been accomplished using as predictor the dispersion (measured by the average correlation) between members of an ensemble of forecasts started from five different analyses. The analyses had been previously derived for satellite-data-impact studies and included, in the Northern Hemisphere, moderate perturbations associated with the use of different observing systems. When the Northern Hemisphere was used as a verification region, the prediction of skill was rather poor. This is due to the fact that such a large area usually contains regions with excellent forecasts as well as regions with poor forecasts, and does not allow for discrimination between them. However, when regional verifications were used, the ensemble forecast dispersion provided a very good prediction of the quality of the individual forecasts.
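A minimal sketch of the dispersion predictor: the average pairwise correlation among ensemble members over a regional verification domain; the forecast fields are synthetic.

```python
import numpy as np
from itertools import combinations

# Five forecasts of a regional anomaly field, started from five different
# analyses (synthetic values standing in for the real forecasts).
rng = np.random.default_rng(4)
truth = rng.normal(size=400)                     # flattened regional grid
members = np.stack([truth + rng.normal(scale=0.7, size=truth.size) for _ in range(5)])

# Ensemble dispersion: average correlation between all pairs of members.
pair_corrs = [np.corrcoef(members[i], members[j])[0, 1]
              for i, j in combinations(range(5), 2)]
dispersion = float(np.mean(pair_corrs))

# High average correlation (low dispersion) is taken to predict a skillful forecast.
print("predicted-skill indicator (mean pairwise correlation):", round(dispersion, 3))
```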
Performance of univariate forecasting on seasonal diseases: the case of tuberculosis.
Permanasari, Adhistya Erna; Rambli, Dayang Rohaya Awang; Dominic, P Dhanapal Durai
2011-01-01
Predicting the annual number of disease incidents worldwide is desirable for adopting appropriate policies to prevent disease outbreaks. This chapter considers the performance of different forecasting methods in predicting the future number of disease incidences, especially for seasonal diseases. Six forecasting methods, namely linear regression, moving average, decomposition, Holt-Winters, ARIMA, and artificial neural network (ANN), were used for disease forecasting on monthly tuberculosis data. The models derived met the requirements of a time series with a seasonal pattern and a downward trend. The forecasting performance was compared using the same error measures on the basis of the last 5 years of forecast results. The findings indicate that the ARIMA model was the most appropriate model since it obtained a lower relative error than the other models.
Analysis and forecasting of municipal solid waste in Nankana City using geo-spatial techniques.
Mahmood, Shakeel; Sharif, Faiza; Rahman, Atta-Ur; Khan, Amin U
2018-04-11
The objective of this study was to analyze and forecast municipal solid waste (MSW) in Nankana City (NC), District Nankana, Province of Punjab, Pakistan. The study is based on primary data acquired through a questionnaire, the Global Positioning System (GPS), and direct waste sampling and analysis. The inverse distance weighting (IDW) technique was applied to geo-visualize the spatial trend of MSW generation. Analysis revealed that the total MSW generated was 12,419,636 kg/annum (12,419.64 t), or 34,026.4 kg/day (34.03 t), or 0.46 kg/capita/day (kg/cap/day). The average waste generated per day by the studied households, clinics, hospitals, and hotels was 3, 7.5, 20, and 15 kg, respectively. The residential sector was the top producer with 95.5% (32,511 kg/day), followed by the commercial sector with 1.9% (665 kg/day). On average, high-income and low-income households were generating 4.2 kg/household/day (kg/hh/day) and 1.7 kg/hh/day of waste, respectively. Similarly, large families were generating more waste (4.4 kg/hh/day) than small families (1.8 kg/hh/day). The physical constituents of MSW generated in the study area, with a population of about 70,000, included paper (7%); compostable matter (61%); plastics (9%); fine earth, ashes, ceramics, and stones (20.4%); and others (2.6%). The spatial trend of MSW generation varies: the city center has a high generation rate, which decreases toward the periphery. Based on the current population growth and MSW generation rate, NC is expected to generate 2.8 times more waste by the year 2050. It is therefore imperative to develop a proper solid waste management plan to reduce the risk of environmental degradation and protect human health. This study provides insights into MSW generation rate, physical composition, and forecasting, which are vital for its management strategies.
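A minimal sketch of inverse distance weighting for geo-visualizing the spatial trend of waste generation from sampled points; the coordinates, rates, and power parameter of 2 are assumptions.

```python
import numpy as np

# Sampled points: (x, y) coordinates and measured waste generation (kg/household/day).
pts = np.array([[0.0, 0.0], [1.0, 0.2], [0.4, 1.1], [1.3, 1.4]])
rate = np.array([4.2, 1.7, 2.9, 3.4])

def idw(target, points, values, power=2.0, eps=1e-12):
    """Inverse distance weighted estimate at a target location."""
    d = np.linalg.norm(points - target, axis=1)
    if d.min() < eps:                      # target coincides with a sample point
        return float(values[d.argmin()])
    w = 1.0 / d**power
    return float(w @ values / w.sum())

# Interpolate onto a regular grid to map the spatial trend of generation.
xs = np.linspace(0.0, 1.5, 50)
ys = np.linspace(0.0, 1.5, 50)
grid = np.array([[idw(np.array([x, y]), pts, rate) for x in xs] for y in ys])
```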
Seasonal Forecasting of Fire Weather Based on a New Global Fire Weather Database
NASA Technical Reports Server (NTRS)
Dowdy, Andrew J.; Field, Robert D.; Spessa, Allan C.
2016-01-01
Seasonal forecasting of fire weather is examined based on a recently produced global database of the Fire Weather Index (FWI) system beginning in 1980. Seasonal average values of the FWI are examined in relation to measures of the El Nino-Southern Oscillation (ENSO) and the Indian Ocean Dipole (IOD). The results are used to examine seasonal forecasts of fire weather conditions throughout the world.
Ensemble averaging and stacking of ARIMA and GSTAR model for rainfall forecasting
NASA Astrophysics Data System (ADS)
Anggraeni, D.; Kurnia, I. F.; Hadi, A. F.
2018-04-01
Unpredictable rainfall changes can affect human activities such as agriculture, aviation, and shipping, which depend on weather forecasts. Therefore, we need forecasting tools with high accuracy in predicting future rainfall. This research focuses on local forecasting of rainfall at Jember from 2005 to 2016, using 77 rainfall stations. The rainfall at a station is related not only to its own previous occurrences but also to those at other stations; this is called the spatial effect. The aim of this research is to apply the GSTAR model to determine whether there are spatial correlations between stations. The GSTAR model is an expansion of the space-time model that combines time-related effects, the time series effects of the locations (stations), and the locations themselves. The GSTAR model is also compared to the ARIMA model, which completely ignores the independent variables. The forecasted values of the ARIMA and GSTAR models are then combined using the ensemble forecasting technique. The averaging and stacking ensemble methods used here provide the best model, with higher accuracy and a smaller RMSE (Root Mean Square Error) value. Finally, with the best model we can offer better local rainfall forecasting for Jember in the future.
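As an illustration of the two combination strategies named above, the following hedged sketch averages and stacks two hold-out forecast series; the synthetic "ARIMA" and "GSTAR" forecasts, the train/test split, and the use of scikit-learn's LinearRegression as the stacker are assumptions for demonstration only.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical hold-out forecasts from two models and the observed rainfall.
rng = np.random.default_rng(1)
obs = rng.gamma(2.0, 50.0, size=120)
f_arima = obs + rng.normal(0, 20, size=120)   # stand-in for ARIMA forecasts
f_gstar = obs + rng.normal(0, 25, size=120)   # stand-in for GSTAR forecasts

# Ensemble averaging: simple mean of the member forecasts.
f_avg = (f_arima + f_gstar) / 2.0

# Stacking: regress the observations on the member forecasts over a training
# window, then apply the fitted combination to new forecasts.
train, test = slice(0, 90), slice(90, 120)
X_train = np.column_stack([f_arima[train], f_gstar[train]])
X_test = np.column_stack([f_arima[test], f_gstar[test]])
stacker = LinearRegression().fit(X_train, obs[train])
f_stack = stacker.predict(X_test)

rmse = lambda f, o: float(np.sqrt(np.mean((f - o) ** 2)))
print("averaging RMSE:", rmse(f_avg[test], obs[test]))
print("stacking  RMSE:", rmse(f_stack, obs[test]))
```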
Alwee, Razana; Hj Shamsuddin, Siti Mariyam; Sallehuddin, Roselina
2013-01-01
Crime forecasting is an important area in the field of criminology. Linear models, such as regression and econometric models, are commonly applied in crime forecasting. However, real crime data commonly consist of both linear and nonlinear components. A single model may not be sufficient to identify all the characteristics of the data. The purpose of this study is to introduce a hybrid model that combines support vector regression (SVR) and autoregressive integrated moving average (ARIMA) for crime rate forecasting. SVR is very robust with small training data and high-dimensional problems. Meanwhile, ARIMA has the ability to model several types of time series. However, the accuracy of the SVR model depends on the values of its parameters, while ARIMA is not robust when applied to small data sets. Therefore, to overcome this problem, particle swarm optimization is used to estimate the parameters of the SVR and ARIMA models. The proposed hybrid model is used to forecast the property crime rates of the United States based on economic indicators. The experimental results show that the proposed hybrid model is able to produce more accurate forecasts than the individual models. PMID:23766729
Page, Morgan T.; Van Der Elst, Nicholas; Hardebeck, Jeanne L.; Felzer, Karen; Michael, Andrew J.
2016-01-01
Following a large earthquake, seismic hazard can be orders of magnitude higher than the long‐term average as a result of aftershock triggering. Because of this heightened hazard, emergency managers and the public demand rapid, authoritative, and reliable aftershock forecasts. In the past, U.S. Geological Survey (USGS) aftershock forecasts following large global earthquakes have been released on an ad hoc basis with inconsistent methods, and in some cases aftershock parameters adapted from California. To remedy this, the USGS is currently developing an automated aftershock product based on the Reasenberg and Jones (1989) method that will generate more accurate forecasts. To better capture spatial variations in aftershock productivity and decay, we estimate regional aftershock parameters for sequences within the García et al. (2012) tectonic regions. We find that regional variations for mean aftershock productivity reach almost a factor of 10. We also develop a method to account for the time‐dependent magnitude of completeness following large events in the catalog. In addition to estimating average sequence parameters within regions, we develop an inverse method to estimate the intersequence parameter variability. This allows for a more complete quantification of the forecast uncertainties and Bayesian updating of the forecast as sequence‐specific information becomes available.
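For readers unfamiliar with the Reasenberg and Jones (1989) formulation referenced above, the sketch below evaluates the standard rate expression lambda(t, M) = 10^(a + b(Mm - M)) * (t + c)^(-p) and the expected aftershock count over a time window; the parameter values are generic illustrative numbers, not the regional estimates derived in this study.

```python
import numpy as np

def rj_rate(t, mag, mainshock_mag, a=-1.67, b=0.91, c=0.05, p=1.08):
    """Reasenberg-Jones style aftershock rate (events/day) of magnitude >= mag
    at time t (days) after a mainshock of magnitude mainshock_mag.
    The default parameters are illustrative generic values only."""
    return 10.0 ** (a + b * (mainshock_mag - mag)) * (t + c) ** (-p)

def rj_expected_count(t1, t2, mag, mainshock_mag,
                      a=-1.67, b=0.91, c=0.05, p=1.08):
    """Expected number of aftershocks with magnitude >= mag in [t1, t2] days,
    obtained by integrating the rate in time (closed form for p != 1)."""
    amp = 10.0 ** (a + b * (mainshock_mag - mag))
    if np.isclose(p, 1.0):
        integral = np.log(t2 + c) - np.log(t1 + c)
    else:
        integral = ((t1 + c) ** (1.0 - p) - (t2 + c) ** (1.0 - p)) / (p - 1.0)
    return amp * integral

# Expected number of M>=5 aftershocks in the week after an M7.0 mainshock.
print(rj_expected_count(0.0, 7.0, mag=5.0, mainshock_mag=7.0))
```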
Ridge Polynomial Neural Network with Error Feedback for Time Series Forecasting.
Waheeb, Waddah; Ghazali, Rozaida; Herawan, Tutut
2016-01-01
Time series forecasting has gained much attention due to its many practical applications. Higher-order neural network with recurrent feedback is a powerful technique that has been used successfully for time series forecasting. It maintains fast learning and the ability to learn the dynamics of the time series over time. Network output feedback is the most common recurrent feedback for many recurrent neural network models. However, not much attention has been paid to the use of network error feedback instead of network output feedback. In this study, we propose a novel model, called Ridge Polynomial Neural Network with Error Feedback (RPNN-EF) that incorporates higher order terms, recurrence and error feedback. To evaluate the performance of RPNN-EF, we used four univariate time series with different forecasting horizons, namely star brightness, monthly smoothed sunspot numbers, daily Euro/Dollar exchange rate, and Mackey-Glass time-delay differential equation. We compared the forecasting performance of RPNN-EF with the ordinary Ridge Polynomial Neural Network (RPNN) and the Dynamic Ridge Polynomial Neural Network (DRPNN). Simulation results showed an average 23.34% improvement in Root Mean Square Error (RMSE) with respect to RPNN and an average 10.74% improvement with respect to DRPNN. That means that using network errors during training helps enhance the overall forecasting performance for the network.
Chang, Li-Chiu; Chen, Pin-An; Chang, Fi-John
2012-08-01
A reliable forecast of future events possesses great value. The main purpose of this paper is to propose an innovative learning technique for reinforcing the accuracy of two-step-ahead (2SA) forecasts. The real-time recurrent learning (RTRL) algorithm for recurrent neural networks (RNNs) can effectively model the dynamics of complex processes and has been used successfully in one-step-ahead forecasts for various time series. A reinforced RTRL algorithm for 2SA forecasts using RNNs is proposed in this paper, and its performance is investigated by two famous benchmark time series and a streamflow during flood events in Taiwan. Results demonstrate that the proposed reinforced 2SA RTRL algorithm for RNNs can adequately forecast the benchmark (theoretical) time series, significantly improve the accuracy of flood forecasts, and effectively reduce time-lag effects.
Influenza forecasting with Google Flu Trends.
Dugas, Andrea Freyer; Jalalpour, Mehdi; Gel, Yulia; Levin, Scott; Torcaso, Fred; Igusa, Takeru; Rothman, Richard E
2013-01-01
We developed a practical influenza forecast model based on real-time, geographically focused, and easy to access data, designed to provide individual medical centers with advanced warning of the expected number of influenza cases, thus allowing for sufficient time to implement interventions. Secondly, we evaluated the effects of incorporating a real-time influenza surveillance system, Google Flu Trends, and meteorological and temporal information on forecast accuracy. Forecast models designed to predict one week in advance were developed from weekly counts of confirmed influenza cases over seven seasons (2004-2011) divided into seven training and out-of-sample verification sets. Forecasting procedures using classical Box-Jenkins, generalized linear models (GLM), and generalized linear autoregressive moving average (GARMA) methods were employed to develop the final model and assess the relative contribution of external variables such as, Google Flu Trends, meteorological data, and temporal information. A GARMA(3,0) forecast model with Negative Binomial distribution integrating Google Flu Trends information provided the most accurate influenza case predictions. The model, on the average, predicts weekly influenza cases during 7 out-of-sample outbreaks within 7 cases for 83% of estimates. Google Flu Trend data was the only source of external information to provide statistically significant forecast improvements over the base model in four of the seven out-of-sample verification sets. Overall, the p-value of adding this external information to the model is 0.0005. The other exogenous variables did not yield a statistically significant improvement in any of the verification sets. Integer-valued autoregression of influenza cases provides a strong base forecast model, which is enhanced by the addition of Google Flu Trends confirming the predictive capabilities of search query based syndromic surveillance. This accessible and flexible forecast model can be used by individual medical centers to provide advanced warning of future influenza cases.
Chen, Yeh-Hsin; Schwartz, Joel D.; Rood, Richard B.; O’Neill, Marie S.
2014-01-01
Background: Heat wave and health warning systems are activated based on forecasts of health-threatening hot weather. Objective: We estimated heat–mortality associations based on forecast and observed weather data in Detroit, Michigan, and compared the accuracy of forecast products for predicting heat waves. Methods: We derived and compared apparent temperature (AT) and heat wave days (with heat waves defined as ≥ 2 days of daily mean AT ≥ 95th percentile of warm-season average) from weather observations and six different forecast products. We used Poisson regression with and without adjustment for ozone and/or PM10 (particulate matter with aerodynamic diameter ≤ 10 μm) to estimate and compare associations of daily all-cause mortality with observed and predicted AT and heat wave days. Results: The 1-day-ahead forecast of a local operational product, Revised Digital Forecast, had about half the number of false positives compared with all other forecasts. On average, controlling for heat waves, days with observed AT = 25.3°C were associated with 3.5% higher mortality (95% CI: –1.6, 8.8%) than days with AT = 8.5°C. Observed heat wave days were associated with 6.2% higher mortality (95% CI: –0.4, 13.2%) than non–heat wave days. The accuracy of predictions varied, but associations between mortality and forecast heat generally tended to overestimate heat effects, whereas associations with forecast heat waves tended to underestimate heat wave effects, relative to associations based on observed weather metrics. Conclusions: Our findings suggest that incorporating knowledge of local conditions may improve the accuracy of predictions used to activate heat wave and health warning systems. Citation: Zhang K, Chen YH, Schwartz JD, Rood RB, O’Neill MS. 2014. Using forecast and observed weather data to assess performance of forecast products in identifying heat waves and estimating heat wave effects on mortality. Environ Health Perspect 122:912–918; http://dx.doi.org/10.1289/ehp.1306858 PMID:24833618
NASA Astrophysics Data System (ADS)
Forster, Caroline; Cooper, Owen; Stohl, Andreas; Eckhardt, Sabine; James, Paul; Dunlea, Edward; Nicks, Dennis K.; Holloway, John S.; Hübler, Gerd; Parrish, David D.; Ryerson, Tom B.; Trainer, Michael
2004-04-01
On the basis of Lagrangian tracer transport simulations this study presents an intercontinental transport climatology and tracer forecasts for the Intercontinental Transport and Chemical Transformation 2002 (ITCT 2K2) aircraft measurement campaign, which took place at Monterey, California, in April-May 2002 to measure Asian pollution arriving at the North American West Coast. For the climatology the average transport of an Asian CO tracer was calculated over a time period of 15 years using the particle dispersion model FLEXPART. To determine by how much the transport from Asia to North America during ITCT 2K2 deviated from the climatological mean, the 15-year average for April and May was compared with the average for April and May 2002 and that for the ITCT 2K2 period. It was found that 8% less Asian CO tracer arrived at the North American West Coast during the ITCT 2K2 period compared to the climatological mean. Below 8-km altitude, the maximum altitude of the research aircraft, 13% less arrived. Nevertheless, pronounced layers of Asian pollution were measured during 3 of the 13 ITCT 2K2 flights. FLEXPART was also successfully used as a forecasting tool for the flight planning during ITCT 2K2. It provided 3-day forecasts for three different anthropogenic CO tracers originating from Asia, North America, and Europe. In two case studies the forecast abilities of FLEXPART are analyzed and discussed by comparing the forecasts with measurement data and infrared satellite images. The model forecasts underestimated the measured CO enhancements by about a factor of 4, mainly because of an underestimation of the Asian emissions in the emission inventory and because of biomass-burning influence that was not modeled. Nevertheless, the intercontinental transport and dispersion of pollution plumes were qualitatively well predicted, and on the basis of the model results the aircraft could successfully be guided into the polluted air masses.
Birth Weight and Social Trust in Adulthood: Evidence for Early Calibration of Social Cognition.
Petersen, Michael Bang; Aarøe, Lene
2015-11-01
Social trust forms the fundamental basis for social interaction within societies. Understanding the cognitive architecture of trust and the roots of individual differences in trust is of key importance. We predicted that one of the factors calibrating individual levels of trust is the intrauterine flow of nutrients from mother to child as indexed by birth weight. Birth weight forecasts both the future external environment and the internal condition of the individual in multiple ways relevant for social cognition. Specifically, we predicted that low birth weight is utilized as a forecast of a harsh environment, vulnerable condition, or both and, consequently, reduces social trust. The results of the study reported here are consistent with this prediction. Controlling for many confounds through sibling and panel designs, we found that lower birth weight reduced social trust in adulthood. Furthermore, we obtained tentative evidence that this effect is mitigated if adult environments do not induce stress. © The Author(s) 2015.
NASA Astrophysics Data System (ADS)
Clark, Elizabeth; Wood, Andy; Nijssen, Bart; Mendoza, Pablo; Newman, Andy; Nowak, Kenneth; Arnold, Jeffrey
2017-04-01
In an automated forecast system, hydrologic data assimilation (DA) performs the valuable function of correcting raw simulated watershed model states to better represent external observations, including measurements of streamflow, snow, soil moisture, and the like. Yet the incorporation of automated DA into operational forecasting systems has been a long-standing challenge due to the complexities of the hydrologic system, which include numerous lags between state and output variations. To help demonstrate that such methods can succeed in operational automated implementations, we present results from the real-time application of an ensemble particle filter (PF) for short-range (7-day lead) ensemble flow forecasts in western US river basins. We use the System for Hydromet Applications, Research and Prediction (SHARP), developed by the National Center for Atmospheric Research (NCAR) in collaboration with the University of Washington, U.S. Army Corps of Engineers, and U.S. Bureau of Reclamation. SHARP is a fully automated platform for short-term to seasonal hydrologic forecasting applications, incorporating uncertainty in initial hydrologic conditions (IHCs) and in hydrometeorological predictions through ensemble methods. In this implementation, IHC uncertainty is estimated by propagating an ensemble of 100 temperature and precipitation time series through conceptual and physically-oriented models. The resulting ensemble of derived IHCs exhibits a broad range of possible soil moisture and snow water equivalent (SWE) states. The PF selects and/or weights and resamples the IHCs that are most consistent with external streamflow observations, and uses the particles to initialize a streamflow forecast ensemble driven by ensemble precipitation and temperature forecasts downscaled from the Global Ensemble Forecast System (GEFS). We apply this method in real time for several basins in the western US that are important for water resources management, and perform a hindcast experiment to evaluate the utility of PF-based data assimilation on streamflow forecast skill. This presentation describes findings, including a comparison of sequential and non-sequential particle weighting methods.
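The particle-filter step described above (weighting initial hydrologic conditions by their consistency with observed streamflow, then resampling) can be sketched as follows; the Gaussian observation likelihood, array shapes, and toy numbers are assumptions, and this is not the SHARP implementation.

```python
import numpy as np

def pf_update(states, simulated_flow, observed_flow, obs_error_std, seed=42):
    """One particle-filter analysis step: weight each ensemble member
    (particle) by the likelihood of the observed streamflow, then resample.

    states: (n_particles, n_states) model states (e.g. soil moisture, SWE)
    simulated_flow: (n_particles,) streamflow simulated by each particle
    """
    rng = np.random.default_rng(seed)
    # Gaussian observation likelihood for each particle.
    resid = observed_flow - simulated_flow
    logw = -0.5 * (resid / obs_error_std) ** 2
    w = np.exp(logw - logw.max())
    w /= w.sum()
    # Resample particles with probability proportional to their weights.
    idx = rng.choice(len(w), size=len(w), p=w)
    return states[idx], w

n = 100
rng = np.random.default_rng(0)
states = rng.normal(size=(n, 2))            # toy initial hydrologic conditions
sim_q = 50 + 10 * rng.standard_normal(n)    # toy simulated streamflow (m3/s)
resampled, weights = pf_update(states, sim_q,
                               observed_flow=55.0, obs_error_std=5.0)
```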
FAA (Federal Aviation Administration) Aviation Forecasts: Fiscal Years 1989-2000
1989-03-01
...predict interim business cycles. [Table header: FAA forecast economic assumptions, fiscal years 1989-2000, historical and forecast percent average annual growth.] During previous economic cycles, changes in the general aviation industry have generally paralleled changes in business activity. Empirical results have... [Remainder is OCR residue from the report cover and Technical Report Documentation Page: FAA-APO 89, March 1989, U.S. Department of Transportation, Federal Aviation Administration.]
Physics-based coastal current tomographic tracking using a Kalman filter.
Wang, Tongchen; Zhang, Ying; Yang, T C; Chen, Huifang; Xu, Wen
2018-05-01
Ocean acoustic tomography can be used, based on measurements of two-way travel-time differences between nodes deployed on the perimeter of the surveying area, to invert/map the ocean current inside the area. Data at different times can be related using a Kalman filter, and given an ocean circulation model, one can in principle nowcast and even forecast the current distribution given an initial distribution and/or the travel-time difference data on the boundary. However, an ocean circulation model requires many inputs (many of them often not available) and is impractical for estimation of the current field. A simplified form of the discretized Navier-Stokes equation is used to show that the future velocity state is just a weighted spatial average of the current state. These weights could be obtained from an ocean circulation model, but here, in a data-driven approach, auto-regressive methods are used to obtain the time- and space-dependent weights from the data. It is shown, based on simulated data, that the current field tracked using a Kalman filter (with an arbitrary initial condition) is more accurate than that estimated by standard methods where data at different times are treated independently. Real data are also examined.
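A minimal sketch of the tracking idea, assuming a generic linear Kalman filter whose transition matrix holds the weighted-spatial-average coefficients, is shown below; the uniform weights, noise covariances, and toy observation operator are placeholders, not the paper's data-derived values.

```python
import numpy as np

def kalman_step(x, P, A, Q, H, R, y):
    """One predict/update cycle of a linear Kalman filter.

    A is the state-transition matrix whose rows express each future grid-cell
    current as a weighted spatial average of the present state (in the paper
    these weights are estimated from data with auto-regressive methods)."""
    # Predict
    x_pred = A @ x
    P_pred = A @ P @ A.T + Q
    # Update with boundary observations y = H x + noise
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)
    x_new = x_pred + K @ (y - H @ x_pred)
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new

n = 4                                        # toy number of current grid cells
A = 0.25 * np.ones((n, n))                   # equal spatial-average weights (illustrative)
Q = 0.01 * np.eye(n)
R = 0.1 * np.eye(2)
H = np.array([[1.0, 0, 0, 0],                # two boundary "observations"
              [0, 0, 0, 1.0]])
x, P = np.zeros(n), np.eye(n)
x, P = kalman_step(x, P, A, Q, H, R, y=np.array([0.3, -0.2]))
print(x)
```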
Forecasting of Water Consumptions Expenditure Using Holt-Winter’s and ARIMA
NASA Astrophysics Data System (ADS)
Razali, S. N. A. M.; Rusiman, M. S.; Zawawi, N. I.; Arbin, N.
2018-04-01
This study is carried out to forecast the water consumption expenditure of a Malaysian university, specifically University Tun Hussein Onn Malaysia (UTHM). The proposed Holt-Winter's and Auto-Regressive Integrated Moving Average (ARIMA) models were applied to forecast the water consumption expenditure in Ringgit Malaysia from year 2006 until year 2014. The two models were compared, and performance was measured using the Mean Absolute Percentage Error (MAPE) and Mean Absolute Deviation (MAD). It is found that the ARIMA model showed better results in terms of forecast accuracy, with lower values of MAPE and MAD. Analysis showed that the ARIMA(2,1,4) model provided a reasonable forecasting tool for university campus water usage.
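To show how such a comparison might be set up, the following sketch fits Holt-Winters and ARIMA(2,1,4) models with statsmodels and scores them with MAPE and MAD; the synthetic monthly series and the 12-month hold-out are assumptions, not the UTHM data.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.holtwinters import ExponentialSmoothing
from statsmodels.tsa.arima.model import ARIMA

# Hypothetical monthly expenditure series standing in for the 2006-2014 data.
rng = np.random.default_rng(3)
y = pd.Series(100 + 5 * np.sin(np.arange(108) * 2 * np.pi / 12)
              + rng.normal(0, 2, 108),
              index=pd.date_range("2006-01-01", periods=108, freq="MS"))
train, test = y[:-12], y[-12:]

hw = ExponentialSmoothing(train, trend="add", seasonal="add",
                          seasonal_periods=12).fit()
arima = ARIMA(train, order=(2, 1, 4)).fit()   # order taken from the abstract

def mape(obs, fc):
    return float(np.mean(np.abs((obs - fc) / obs)) * 100)

def mad(obs, fc):
    return float(np.mean(np.abs(obs - fc)))

for name, fc in [("Holt-Winters", hw.forecast(12)),
                 ("ARIMA(2,1,4)", arima.forecast(12))]:
    print(name, "MAPE:", round(mape(test, fc), 2),
          "MAD:", round(mad(test, fc), 2))
```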
An ensemble forecast of the South China Sea monsoon
NASA Astrophysics Data System (ADS)
Krishnamurti, T. N.; Tewari, Mukul; Bensman, Ed; Han, Wei; Zhang, Zhan; Lau, William K. M.
1999-05-01
This paper presents a generalized ensemble forecast procedure for the tropical latitudes. Here we propose an empirical orthogonal function-based procedure for the definition of a seven-member ensemble. The wind and temperature fields are perturbed over the global tropics. Although the forecasts are made over the global belt with a high-resolution model, the emphasis of this study is on the South China Sea monsoon. The South China Sea domain includes the passage of Tropical Storm Gary, which moved eastwards north of the Philippines. The ensemble forecast handled the precipitation of this storm reasonably well. A global model at a resolution of Triangular Truncation 126 waves is used to carry out these seven forecasts. The evaluation of the ensemble of forecasts is carried out via standard root mean square errors of the precipitation and wind fields. The ensemble average is shown to have higher skill compared to a control experiment based on a first analysis of operational data sets, over both the global tropical and South China Sea domains. All of these experiments were subjected to physical initialization, which provides a spin-up of the model rain close to that obtained from satellite and gauge-based estimates. The results furthermore show that inherently much higher skill resides in the forecast precipitation fields if they are averaged over area elements of the order of 4° latitude by 4° longitude squares.
NASA Astrophysics Data System (ADS)
Tito Arandia Martinez, Fabian
2014-05-01
Adequate uncertainty assessment is an important issue in hydrological modelling. An important issue for hydropower producers is to obtain ensemble forecasts which truly grasp the uncertainty linked to upcoming streamflows. If properly assessed, this uncertainty can lead to optimal reservoir management and energy production (e.g. [1]). The meteorological inputs to the hydrological model account for an important part of the total uncertainty in streamflow forecasting. Since the creation of the THORPEX initiative and the TIGGE database, meteorological ensemble forecasts from nine agencies throughout the world have been made available. This allows for hydrological ensemble forecasts based on multiple meteorological ensemble forecasts. Consequently, both the uncertainty linked to the architecture of the meteorological model and the uncertainty linked to the initial condition of the atmosphere can be accounted for. The main objective of this work is to show that a weighted combination of meteorological ensemble forecasts based on different atmospheric models can lead to improved hydrological ensemble forecasts, for horizons from one to ten days. This experiment is performed for the Baskatong watershed, a head subcatchment of the Gatineau watershed in the province of Quebec, in Canada. The Baskatong watershed is of great importance for hydropower production, as it comprises the main reservoir for the Gatineau watershed, on which there are six hydropower plants managed by Hydro-Québec. Since the 70's, they have been using pseudo-ensemble forecasts based on deterministic meteorological forecasts to which variability derived from past forecasting errors is added. We use a combination of meteorological ensemble forecasts from different models (precipitation and temperature) as the main inputs for the hydrological model HSAMI ([2]). The meteorological ensembles from eight of the nine agencies available through TIGGE are weighted according to their individual performance and combined to form a grand ensemble. Results show that the hydrological forecasts derived from the grand ensemble perform better than the pseudo-ensemble forecasts currently used operationally at Hydro-Québec. References: [1] M. Verbunt, A. Walser, J. Gurtz et al., "Probabilistic flood forecasting with a limited-area ensemble prediction system: Selected case studies," Journal of Hydrometeorology, vol. 8, no. 4, pp. 897-909, Aug. 2007. [2] N. Evora, Valorisation des prévisions météorologiques d'ensemble, Institut de recherche d'Hydro-Québec, 2005. [3] V. Fortin, Le modèle météo-apport HSAMI: historique, théorie et application, Institut de recherche d'Hydro-Québec, 2000.
Profitability of Integrated Management of Fusarium Head Blight in North Carolina Winter Wheat.
Cowger, Christina; Weisz, Randy; Arellano, Consuelo; Murphy, Paul
2016-08-01
Fusarium head blight (FHB) is one of the most difficult small-grain diseases to manage, due to the partial effectiveness of management techniques and the narrow window of time in which to apply fungicides profitably. The most effective management approach is to integrate cultivar resistance with FHB-specific fungicide applications; yet, when forecasted risk is intermediate, it is often unclear whether such an application will be profitable. To model the profitability of FHB management under varying conditions, we conducted a 2-year split-plot field experiment having as main plots high-yielding soft red winter wheat cultivars, four moderately resistant (MR) and three susceptible (S) to FHB. Subplots were sprayed at flowering with Prosaro or Caramba, or left untreated. The experiment was planted in seven North Carolina environments (location-year combinations); three were irrigated to promote FHB development and four were not irrigated. Response variables were yield, test weight, disease incidence, disease severity, deoxynivalenol (DON), Fusarium-damaged kernels, and percent infected kernels. Partial profits were compared in two ways: first, across low-, medium-, or high-DON environments; and second, across environment-cultivar combinations divided by risk forecast into "do spray" and "do not spray" categories. After surveying DON and test weight dockage among 21 North Carolina wheat purchasers, three typical market scenarios were used for modeling profitability: feed-wheat, flexible (feed or flour), and the flour market. A major finding was that, on average, MR cultivars were at least as profitable as S cultivars, regardless of epidemic severity or market. Fungicides were profitable in the feed-grain and flexible markets when DON was high, with MR cultivars in the flexible or flour markets when DON was intermediate, and on S cultivars aimed at the flexible market. The flour market was only profitable when FHB was present if DON levels were intermediate and cultivar resistance was combined with a fungicide. It proved impossible to use the risk forecast to predict profitability of fungicide application. Overall, the results indicated that cultivar resistance to FHB was important for profitability, an FHB-targeted fungicide expanded market options when risk was moderate or high, and the efficacy of fungicide decision-making is reduced by factors that limit the accuracy of risk forecasts.
A Canonical Ensemble Correlation Prediction Model for Seasonal Precipitation Anomaly
NASA Technical Reports Server (NTRS)
Shen, Samuel S. P.; Lau, William K. M.; Kim, Kyu-Myong; Li, Guilong
2001-01-01
This report describes an optimal ensemble forecasting model for seasonal precipitation and its error estimation. Each individual forecast is based on the canonical correlation analysis (CCA) in the spectral spaces whose bases are empirical orthogonal functions (EOF). The optimal weights in the ensemble forecasting crucially depend on the mean square error of each individual forecast. An estimate of the mean square error of a CCA prediction is made also using the spectral method. The error is decomposed onto EOFs of the predictand and decreases linearly according to the correlation between the predictor and predictand. This new CCA model includes the following features: (1) the use of area-factor, (2) the estimation of prediction error, and (3) the optimal ensemble of multiple forecasts. The new CCA model is applied to the seasonal forecasting of the United States precipitation field. The predictor is the sea surface temperature.
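A common way to realize MSE-dependent ensemble weights, sketched below under the simplifying assumption of independent, unbiased individual forecasts, is to make each weight proportional to the inverse of that forecast's mean square error; the numbers are illustrative only, not the CCA model's estimates.

```python
import numpy as np

def inverse_mse_weights(mse):
    """Combination weights for independent, unbiased forecasts: each weight
    is proportional to the inverse of that forecast's mean square error and
    the weights sum to one (a minimal sketch of MSE-dependent weighting)."""
    w = 1.0 / np.asarray(mse, dtype=float)
    return w / w.sum()

# Hypothetical MSEs of three individual seasonal forecasts.
mse = [1.2, 0.8, 2.0]
w = inverse_mse_weights(mse)
forecasts = np.array([0.4, 0.1, 0.7])        # toy seasonal anomaly forecasts
combined = float(np.dot(w, forecasts))
print(np.round(w, 3), round(combined, 3))
```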
Gershunov, A.; Barnett, T.P.; Cayan, D.R.; Tubbs, T.; Goddard, L.
2000-01-01
Three long-range forecasting methods have been evaluated for prediction and downscaling of seasonal and intraseasonal precipitation statistics in California. Full-statistical, hybrid dynamical-statistical and full-dynamical approaches have been used to forecast El Niño-Southern Oscillation (ENSO)-related total precipitation, daily precipitation frequency, and average intensity anomalies during the January-March season. For El Niño winters, the hybrid approach emerges as the best performer, while La Niña forecasting skill is poor. The full-statistical forecasting method features reasonable forecasting skill for both La Niña and El Niño winters. The performance of the full-dynamical approach could not be evaluated as rigorously as that of the other two forecasting schemes. Although the full-dynamical forecasting approach is expected to outperform simpler forecasting schemes in the long run, evidence is presented to conclude that, at present, the full-dynamical forecasting approach is the least viable of the three, at least in California. The authors suggest that operational forecasting of any intraseasonal temperature, precipitation, or streamflow statistic derivable from the available records is possible now for ENSO-extreme years.
Sitepu, Monika S; Kaewkungwal, Jaranit; Luplerdlop, Nathanej; Soonthornworasiri, Ngamphol; Silawan, Tassanee; Poungsombat, Supawadee; Lawpoolsri, Saranath
2013-03-01
This study aimed to describe the temporal patterns of dengue transmission in Jakarta from 2001 to 2010, using data from the national surveillance system. The Box-Jenkins forecasting technique was used to develop a seasonal autoregressive integrated moving average (SARIMA) model for the study period and subsequently applied to forecast DHF incidence in 2011 in Jakarta Utara, Jakarta Pusat, Jakarta Barat, and the municipalities of Jakarta Province. Dengue incidence in 2011, based on the forecasting model was predicted to increase from the previous year.
Shi, Yuan; Liu, Xu; Kok, Suet-Yheng; Rajarethinam, Jayanthi; Liang, Shaohong; Yap, Grace; Chong, Chee-Seng; Lee, Kim-Sung; Tan, Sharon S Y; Chin, Christopher Kuan Yew; Lo, Andrew; Kong, Waiming; Ng, Lee Ching; Cook, Alex R
2016-09-01
With its tropical rainforest climate, rapid urbanization, and changing demography and ecology, Singapore experiences endemic dengue; the last large outbreak in 2013 culminated in 22,170 cases. In the absence of a vaccine on the market, vector control is the key approach for prevention. We sought to forecast the evolution of dengue epidemics in Singapore to provide early warning of outbreaks and to facilitate the public health response to moderate an impending outbreak. We developed a set of statistical models using least absolute shrinkage and selection operator (LASSO) methods to forecast the weekly incidence of dengue notifications over a 3-month time horizon. This forecasting tool used a variety of data streams and was updated weekly, including recent case data, meteorological data, vector surveillance data, and population-based national statistics. The forecasting methodology was compared with alternative approaches that have been proposed to model dengue case data (seasonal autoregressive integrated moving average and step-down linear regression) by fielding them on the 2013 dengue epidemic, the largest on record in Singapore. Operationally useful forecasts were obtained at a 3-month lag using the LASSO-derived models. Based on the mean average percentage error, the LASSO approach provided more accurate forecasts than the other methods we assessed. We demonstrate its utility in Singapore's dengue control program by providing a forecast of the 2013 outbreak for advance preparation of outbreak response. Statistical models built using machine learning methods such as LASSO have the potential to markedly improve forecasting techniques for recurrent infectious disease outbreaks such as dengue. Shi Y, Liu X, Kok SY, Rajarethinam J, Liang S, Yap G, Chong CS, Lee KS, Tan SS, Chin CK, Lo A, Kong W, Ng LC, Cook AR. 2016. Three-month real-time dengue forecast models: an early warning system for outbreak alerts and policy decision support in Singapore. Environ Health Perspect 124:1369-1375; http://dx.doi.org/10.1289/ehp.1509981.
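A minimal stand-in for a LASSO forecast model of this kind, assuming lagged case counts plus a few exogenous covariates as predictors, could look like the following scikit-learn sketch (synthetic data; not the Singapore system or its data streams).

```python
import numpy as np
from sklearn.linear_model import LassoCV

def lagged_matrix(series, covariates, n_lags=8):
    """Build a design matrix of lagged case counts plus exogenous covariates
    (an illustrative stand-in for the multi-stream data used in the study)."""
    X, y = [], []
    for t in range(n_lags, len(series)):
        X.append(np.concatenate([series[t - n_lags:t], covariates[t - 1]]))
        y.append(series[t])
    return np.array(X), np.array(y)

rng = np.random.default_rng(7)
cases = rng.poisson(200, size=260).astype(float)   # weekly notifications (toy)
covs = rng.normal(size=(260, 3))                   # e.g. rainfall, temperature, vector index
X, y = lagged_matrix(cases, covs)

model = LassoCV(cv=5).fit(X[:-12], y[:-12])        # L1 penalty chosen by cross-validation
print(model.predict(X[-12:]))                      # rough check on the last 12 weeks
```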
The GISS sounding temperature impact test
NASA Technical Reports Server (NTRS)
Halem, M.; Ghil, M.; Atlas, R.; Susskind, J.; Quirk, W. J.
1978-01-01
The impact of DST 5 and DST 6 satellite sounding data on mid-range forecasting was studied. The GISS temperature sounding technique, the GISS time-continuous four-dimensional assimilation procedure based on optimal statistical analysis, the GISS forecast model, and the verification techniques developed, including the impact on local precipitation forecasts, are described. It is found that the impact of sounding data was substantial and beneficial for the winter test period, Jan. 29 - Feb. 21, 1976. Forecasts started from initial states obtained with the aid of satellite data showed a mean improvement of about 4 points in the 48- and 72-hour S1 scores, as verified over North America and Europe. This corresponds to an 8 to 12 hour improvement in the forecast range at 48 hours. An automated local precipitation forecast model applied to 128 cities in the United States showed on average a 15% improvement when satellite data were used for the numerical forecasts. The improvement was 75% in the Midwest.
NASA Astrophysics Data System (ADS)
Allawi, Mohammed Falah; Jaafar, Othman; Mohamad Hamzah, Firdaus; Mohd, Nuruol Syuhadaa; Deo, Ravinesh C.; El-Shafie, Ahmed
2017-10-01
Existing forecast models applied to reservoir inflow forecasting encounter several drawbacks, owing to the difficulty the underlying mathematical procedures have in coping with and mimicking the natural variability and stochasticity of the inflow data patterns. In this study, appropriate adjustments to the conventional coactive neuro-fuzzy inference system (CANFIS) method are proposed to improve the mathematical procedure, thus enabling better detection of the highly nonlinear patterns found in the reservoir inflow training data. This modification includes updating the backpropagation algorithm, leading to a consequent update of the membership rules and the induction of the centre-weighted set rather than the globally weighted set used in feature extraction. The modification also aids in constructing an integrated model that is able to detect not only the nonlinearity in the training data but also the wide range of features within the training data records used to simulate the forecasting model. To demonstrate the model's efficacy, the proposed CANFIS method has been applied to forecast monthly inflow data at Aswan High Dam (AHD), located in southern Egypt. Comparative analyses of the forecasting skill of the modified CANFIS and the conventional ANFIS model are carried out with statistical score indicators to assess the reliability of the developed method. The statistical metrics support the better performance of the developed CANFIS model, which significantly outperforms the ANFIS model, attaining a low relative error (23%), mean absolute error (1.4 BCM/month), root mean square error (1.14 BCM/month), and a relatively large coefficient of determination (0.94). The present study ascertains the better utility of the modified CANFIS model with respect to the traditional ANFIS model for reservoir inflow forecasting in a semi-arid region.
Evaluation of the North American Multi-Model Ensemble System for Monthly and Seasonal Prediction
NASA Astrophysics Data System (ADS)
Zhang, Q.
2014-12-01
Since August 2011, the real-time seasonal forecasts of the U.S. National Multi-Model Ensemble (NMME) have been made on the 8th of each month by the NCEP Climate Prediction Center (CPC). The participating models in the first year of the real-time NMME forecast were NCEP/CFSv1&2, GFDL/CM2.2, NCAR/U.Miami/COLA/CCSM3, NASA/GEOS5, and IRI/ECHAM-a & ECHAM-f. Two Canadian coupled models, CMC/CanCM3 and CM4, joined in the second year, while CFSv1 and the IRI models dropped out. The NMME team at CPC collects monthly means of three variables (precipitation, temperature at 2 m, and sea surface temperature) from each modeling center on a 1x1 degree global grid, removes systematic errors, forms the grand ensemble mean with equal weight for each model mean, and forms the probability forecast with equal weight for each member of each model. This provides the NMME forecast, locked into the schedule of the CPC operational seasonal and monthly outlooks. The basic verification metrics of the seasonal and monthly prediction of NMME are calculated as an evaluation of skill, including both deterministic and probabilistic forecasts, for the 3-year real-time period (August 2011 - July 2014) and the 30-year retrospective forecasts (1982-2011) of the individual models as well as the NMME ensemble. The motivation of this study is to provide skill benchmarks for future improvements of the NMME seasonal and monthly prediction system. We also want to establish whether the real-time and hindcast periods (used for bias correction in real time) are consistent. The experimental phase I of the project already supplies routine guidance to users of the NMME forecasts.
Boosting Learning Algorithm for Stock Price Forecasting
NASA Astrophysics Data System (ADS)
Wang, Chengzhang; Bai, Xiaoming
2018-03-01
To tackle the complexity and uncertainty of stock market behavior, more and more studies have introduced machine learning algorithms to forecast stock prices. The ANN (artificial neural network) is one of the most successful and promising applications. We propose a boosting-ANN model in this paper to predict the stock closing price. On the basis of boosting theory, multiple weak predicting machines, i.e. ANNs, are assembled to build a stronger predictor, i.e. the boosting-ANN model. New error criteria for the weak learning machines and new weight-updating rules are adopted in this study. We select technical factors from financial markets as forecasting input variables. The final results demonstrate that the boosting-ANN model works better than other models for stock price forecasting.
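The boosting-of-weak-ANNs idea can be sketched with scikit-learn's AdaBoostRegressor wrapping a small MLP as the weak predictor (recent scikit-learn versions use the `estimator` keyword; older ones used `base_estimator`). The toy indicator matrix and hyperparameters are assumptions, and the paper's custom error criteria and weight-updating rules are not reproduced here.

```python
import numpy as np
from sklearn.ensemble import AdaBoostRegressor
from sklearn.neural_network import MLPRegressor

# Toy technical indicators and next-day close prices (illustrative only).
rng = np.random.default_rng(11)
X = rng.normal(size=(500, 6))                 # e.g. moving averages, RSI, momentum
y = X @ rng.normal(size=6) + 0.1 * rng.normal(size=500)

# A small MLP acts as the weak predictor; boosting assembles many of them.
weak_ann = MLPRegressor(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
booster = AdaBoostRegressor(estimator=weak_ann, n_estimators=25,
                            learning_rate=0.5, random_state=0)
booster.fit(X[:400], y[:400])
print("hold-out R^2:", round(booster.score(X[400:], y[400:]), 3))
```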
Akhtar, Saeed; Rozi, Shafquat
2009-01-01
AIM: To identify the stochastic autoregressive integrated moving average (ARIMA) model for short term forecasting of hepatitis C virus (HCV) seropositivity among volunteer blood donors in Karachi, Pakistan. METHODS: Ninety-six months (1998-2005) of data on HCV seropositive cases (per 1000 per month) among male volunteer blood donors tested at four major blood banks in Karachi, Pakistan were subjected to ARIMA modeling. Subsequently, a fitted ARIMA model was used to forecast HCV seropositive donors for months 91-96 to contrast with the observed series of the same months. To assess the forecast accuracy, the mean absolute error rate (%) between the observed and predicted HCV seroprevalence was calculated. Finally, a fitted ARIMA model was used for short-term forecasts beyond the observed series. RESULTS: The goodness-of-fit test of the optimum ARIMA (2,1,7) model showed non-significant autocorrelations in the residuals of the model. The forecasts by ARIMA for months 91-96 closely followed the pattern of the observed series for the same months, with a mean monthly absolute forecast error over 6 mo of 6.5%. The short-term forecasts beyond the observed series adequately captured the pattern in the data and showed an increasing tendency of HCV seropositivity, with a mean ± SD HCV seroprevalence (per 1000 per month) of 24.3 ± 1.4 over the forecast interval. CONCLUSION: To curtail HCV spread, public health authorities need to educate communities and health care providers about HCV transmission routes based on known HCV epidemiology in Pakistan and its neighboring countries. Future research may focus on factors associated with hyperendemic levels of HCV infection. PMID:19340903
Voukantsis, Dimitris; Karatzas, Kostas; Kukkonen, Jaakko; Räsänen, Teemu; Karppinen, Ari; Kolehmainen, Mikko
2011-03-01
In this paper we propose a methodology consisting of specific computational intelligence methods, i.e. principal component analysis and artificial neural networks, in order to inter-compare air quality and meteorological data, and to forecast the concentration levels for environmental parameters of interest (air pollutants). We demonstrate these methods on data monitored in the urban areas of Thessaloniki and Helsinki in Greece and Finland, respectively. For this purpose, we applied the principal component analysis method in order to inter-compare the patterns of air pollution in the two selected cities. Then, we proceeded with the development of air quality forecasting models for both studied areas. On this basis, we formulated and employed a novel hybrid scheme in the selection process of input variables for the forecasting models, involving a combination of linear regression and artificial neural network (multi-layer perceptron) models. The latter were used for the forecasting of the daily mean concentrations of PM₁₀ and PM₂.₅ for the next day. Results demonstrated an index of agreement between measured and modelled daily averaged PM₁₀ concentrations of between 0.80 and 0.85, while the kappa index for the forecasting of the daily averaged PM₁₀ concentrations reached 60% for both cities. Compared with previous corresponding studies, these statistical parameters indicate improved performance in forecasting air quality parameters. It was also found that the performance of the models for the forecasting of the daily mean concentrations of PM₁₀ was not substantially different for the two cities, despite the major differences between the two urban environments under consideration. Copyright © 2011 Elsevier B.V. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Anghileri, Daniela; Voisin, Nathalie; Castelletti, Andrea F.
In this study, we develop a forecast-based adaptive control framework for Oroville reservoir, California, to assess the value of seasonal and inter-annual forecasts for reservoir operation. We use an Ensemble Streamflow Prediction (ESP) approach to generate retrospective, one-year-long streamflow forecasts based on the Variable Infiltration Capacity hydrology model. The optimal sequence of daily release decisions from the reservoir is then determined by Model Predictive Control, a flexible and adaptive optimization scheme. We assess the forecast value by comparing system performance based on the ESP forecasts with that based on climatology and a perfect forecast. In addition, we evaluate system performance based on a synthetic forecast, which is designed to isolate the contribution of seasonal and inter-annual forecast skill to the overall value of the ESP forecasts. Using the same ESP forecasts, we generalize our results by evaluating forecast value as a function of forecast skill, reservoir features, and demand. Our results show that perfect forecasts are valuable when the water demand is high and the reservoir is sufficiently large to allow for annual carry-over. Conversely, ESP forecast value is highest when the reservoir can shift water on a seasonal basis. On average, for the system evaluated here, the overall ESP value is 35% less than the perfect forecast value. The inter-annual component of the ESP forecast contributes 20-60% of the total forecast value. Improvements in the seasonal component of the ESP forecast would increase the overall ESP forecast value between 15 and 20%.
An Evaluation of the NOAA Climate Forecast System Subseasonal Forecasts
NASA Astrophysics Data System (ADS)
Mass, C.; Weber, N.
2016-12-01
This talk will describe a multi-year evaluation of the 1-5 week forecasts of the NOAA Climate Forecasting System (CFS) over the globe, North America, and the western U.S. Forecasts are evaluated for both specific times and for a variety of time-averaging periods. Initial results show a loss of predictability at approximately three weeks, with sea surface temperature retaining predictability longer than atmospheric variables. It is shown that a major CFS problem is an inability to realistically simulate propagating convection in the tropics, with substantial implications for midlatitude teleconnections and subseasonal predictability. The inability of CFS to deal with tropical convection will be discussed in connection with the prediction of extreme climatic events over the midlatitudes.
Multiobjective hedging rules for flood water conservation
NASA Astrophysics Data System (ADS)
Ding, Wei; Zhang, Chi; Cai, Ximing; Li, Yu; Zhou, Huicheng
2017-03-01
Flood water conservation can be beneficial for water uses especially in areas with water stress but also can pose additional flood risk. The potential of flood water conservation is affected by many factors, especially decision makers' preference for water conservation and reservoir inflow forecast uncertainty. This paper discusses the individual and joint effects of these two factors on the trade-off between flood control and water conservation, using a multiobjective, two-stage reservoir optimal operation model. It is shown that hedging between current water conservation and future flood control exists only when forecast uncertainty or decision makers' preference is within a certain range, beyond which, hedging is trivial and the multiobjective optimization problem is reduced to a single objective problem with either flood control or water conservation. Different types of hedging rules are identified with different levels of flood water conservation preference, forecast uncertainties, acceptable flood risk, and reservoir storage capacity. Critical values of decision preference (represented by a weight) and inflow forecast uncertainty (represented by standard deviation) are identified. These inform reservoir managers with a feasible range of their preference to water conservation and thresholds of forecast uncertainty, specifying possible water conservation within the thresholds. The analysis also provides inputs for setting up an optimization model by providing the range of objective weights and the choice of hedging rule types. A case study is conducted to illustrate the concepts and analyses.
New technique for ensemble dressing combining Multimodel SuperEnsemble and precipitation PDF
NASA Astrophysics Data System (ADS)
Cane, D.; Milelli, M.
2009-09-01
The Multimodel SuperEnsemble technique (Krishnamurti et al., Science 285, 1548-1550, 1999) is a postprocessing method for the estimation of weather forecast parameters that reduces direct model output errors. It differs from other ensemble analysis techniques by using an adequate weighting of the input forecast models to obtain a combined estimate of the meteorological parameters. Weights are calculated by least-squares minimization of the difference between the model and the observed field during a so-called training period. Although it can be applied successfully to continuous parameters like temperature, humidity, wind speed and mean sea level pressure (Cane and Milelli, Meteorologische Zeitschrift, 15, 2, 2006), the Multimodel SuperEnsemble also gives good results when applied to precipitation, a parameter that is quite difficult to handle with standard post-processing methods. Here we present our methodology for Multimodel precipitation forecasts, applied with a wide spectrum of results over the very dense non-GTS weather station network of Piemonte. We focus particularly on an accurate statistical method for bias correction and on ensemble dressing in agreement with the observed, forecast-conditioned precipitation PDF. Acknowledgement: this work is supported by the Italian Civil Defence Department.
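The least-squares weight estimation at the heart of the SuperEnsemble can be sketched as a regression of observed anomalies on model anomalies over the training period; the toy models, training length, and the single global fit below (rather than a gridpoint-by-gridpoint fit) are simplifying assumptions.

```python
import numpy as np

def superensemble_weights(model_anoms, obs_anoms):
    """Least-squares weights that minimize the squared difference between the
    weighted model anomalies and the observed anomalies over the training
    period (sketch of the SuperEnsemble weight estimation).

    model_anoms: (n_times, n_models); obs_anoms: (n_times,)"""
    w, *_ = np.linalg.lstsq(model_anoms, obs_anoms, rcond=None)
    return w

def superensemble_forecast(obs_mean, model_anoms_new, w):
    """Combined forecast: observed training mean plus weighted model anomalies."""
    return obs_mean + model_anoms_new @ w

# Toy training data: three models observed over 200 training times.
rng = np.random.default_rng(5)
truth = rng.normal(size=200)
models = truth[:, None] + rng.normal(0, [0.5, 1.0, 1.5], size=(200, 3))
obs_mean = truth.mean()
w = superensemble_weights(models - models.mean(axis=0), truth - obs_mean)
print("weights:", np.round(w, 2))
print("combined forecast:", superensemble_forecast(obs_mean,
                                                   np.array([0.2, -0.1, 0.4]), w))
```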
2013-03-01
...moving average (ARIMA) model, because the data is not a time series. The best a manpower planner can do at this point is to make an educated assumption... [Remainder is residue from the thesis cover and report documentation page: "Developing a Markov Model for Forecasting End Strength of Selected Marine Corps Reserve (SMCR) Officers," by Anthony D. Licari, Master's Thesis, March 2013.]
NASA Astrophysics Data System (ADS)
Perkins, W. A.; Hakim, G. J.
2016-12-01
In this work, we examine the skill of a new approach to performing climate field reconstructions (CFRs) using a form of online paleoclimate data assimilation (PDA). Many previous studies have foregone climate model forecasts during assimilation due to the computational expense of running coupled global climate models (CGCMs), and the relatively low skill of these forecasts on longer timescales. Here we greatly diminish the computational costs by employing an empirical forecast model (known as a linear inverse model; LIM), which has been shown to have comparable skill to CGCMs. CFRs of annually averaged 2m air temperature anomalies are compared between the Last Millennium Reanalysis framework (no forecasting or "offline"), a persistence forecast, and four LIM forecasting experiments over the instrumental period (1850 - 2000). We test LIM calibrations for observational (Berkeley Earth), reanalysis (20th Century Reanalysis), and CMIP5 climate model (CCSM4 and MPI) data. Generally, we find that the usage of LIM forecasts for online PDA increases reconstruction agreement with the instrumental record for both spatial and global mean temperature (GMT). The detrended GMT skill metrics show the most dramatic increases in skill with coefficient of efficiency (CE) improvements over the no-forecasting benchmark averaging 57%. LIM experiments display a common pattern of spatial field increases in CE skill over northern hemisphere land areas and in the high-latitude North Atlantic - Barents Sea corridor (Figure 1). However, the non-GCM-calibrated LIMs introduce other deficiencies into the spatial skill of these reconstructions, likely due to aspects of the LIM calibration process. Overall, the CMIP5 LIMs have the best performance when considering both spatial fields and GMT. A comparison with the persistence forecast experiment suggests that improvements are associated with the usage of the LIM forecasts, and not simple persistence of temperature anomalies over time. These results show that the use of LIM forecasting can help add further dynamical constraint to CFRs. As we move forward, this will be an important factor in fully utilizing dynamically consistent information from the proxy record while reconstructing the past millennium.
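A generic LIM calibration, assuming the propagator G(tau) = C(tau) C(0)^-1 estimated from the lag and zero-lag covariances of an anomaly state record, might look as follows; the toy record, state dimension, and lag choice are illustrative and this is not the LMR configuration itself.

```python
import numpy as np

def lim_propagator(states, lag=1):
    """Estimate the linear inverse model propagator G(tau) = C(tau) C(0)^-1
    from a (n_times, n_vars) record of (e.g. EOF-truncated) anomaly states."""
    x0 = states[:-lag]                       # state at time t
    x1 = states[lag:]                        # state at time t + tau
    c0 = x0.T @ x0 / len(x0)                 # zero-lag covariance C(0)
    ctau = x1.T @ x0 / len(x0)               # lag-tau covariance C(tau)
    return ctau @ np.linalg.inv(c0)

rng = np.random.default_rng(2)
record = 0.1 * np.cumsum(rng.normal(size=(300, 5)), axis=0)  # toy annual states
G = lim_propagator(record, lag=1)
forecast = record[-1] @ G.T                  # one-step deterministic LIM forecast
print(np.round(forecast, 2))
```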
Analysis of forecasting and inventory control of raw material supplies in PT INDAC INT’L
NASA Astrophysics Data System (ADS)
Lesmana, E.; Subartini, B.; Riaman; Jabar, D. A.
2018-03-01
This study discusses forecasting of carbon electrode sales data at PT. INDAC INT'L using the Winters and double moving average methods, while the Economic Order Quantity (EOQ) model is used to predict the amount of inventory and the cost required to order carbon electrode raw material for the next period. The error analysis (MAE, MSE, and MAPE) shows that the Winters method is the better method for forecasting sales of carbon electrode products. PT. INDAC INT'L is therefore advised to stock the products to be sold according to the sales amounts forecast by the Winters method.
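For reference, the EOQ computation mentioned above reduces to the classic square-root formula Q* = sqrt(2DS/H); the demand, ordering-cost, and holding-cost figures below are hypothetical placeholders, not values from the study.

```python
import math

def economic_order_quantity(annual_demand, order_cost, holding_cost):
    """Classic EOQ: the order size minimizing total ordering plus holding
    cost, Q* = sqrt(2 * D * S / H)."""
    return math.sqrt(2.0 * annual_demand * order_cost / holding_cost)

# Hypothetical figures for the carbon-electrode raw material.
D = 12_000      # units demanded per year (e.g. taken from the sales forecast)
S = 150.0       # cost per order
H = 2.5         # holding cost per unit per year
q_star = economic_order_quantity(D, S, H)
print("order size:", round(q_star), "orders/year:", round(D / q_star, 1))
```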
Intermittent Demand Forecasting in a Tertiary Pediatric Intensive Care Unit.
Cheng, Chen-Yang; Chiang, Kuo-Liang; Chen, Meng-Yin
2016-10-01
Forecasts of the demand for medical supplies both directly and indirectly affect the operating costs and the quality of the care provided by health care institutions. Specifically, overestimating demand induces an inventory surplus, whereas underestimating demand possibly compromises patient safety. Uncertainty in forecasting the consumption of medical supplies generates intermittent demand events. The intermittent demand patterns for medical supplies are generally classified as lumpy, erratic, smooth, and slow-moving demand. This study was conducted with the purpose of advancing a tertiary pediatric intensive care unit's efforts to achieve a high level of accuracy in its forecasting of the demand for medical supplies. On this point, several demand forecasting methods were compared in terms of the forecast accuracy of each. The results confirm that applying Croston's method combined with a single exponential smoothing method yields the most accurate results for forecasting lumpy, erratic, and slow-moving demand, whereas the Simple Moving Average (SMA) method is the most suitable for forecasting smooth demand. In addition, when the classification of demand consumption patterns were combined with the demand forecasting models, the forecasting errors were minimized, indicating that this classification framework can play a role in improving patient safety and reducing inventory management costs in health care institutions.
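A textbook sketch of Croston's method, which smooths non-zero demand sizes and inter-demand intervals separately and forecasts their ratio, is given below; the smoothing constant and toy usage series are assumptions, and the study's combination with an additional single exponential smoothing step is not reproduced here.

```python
def croston(demand, alpha=0.1):
    """Croston's method for intermittent demand: exponential smoothing of
    non-zero demand sizes (z) and of the intervals between them (p); the
    per-period forecast is z / p and stays constant between demand events."""
    z = None        # smoothed demand size
    p = None        # smoothed inter-demand interval
    q = 1           # periods since the last non-zero demand
    forecasts = []
    for d in demand:
        if d > 0:
            if z is None:
                z, p = float(d), float(q)    # initialize at first demand
            else:
                z = alpha * d + (1 - alpha) * z
                p = alpha * q + (1 - alpha) * p
            q = 1
        else:
            q += 1
        forecasts.append(z / p if z is not None else 0.0)
    return forecasts

usage = [0, 0, 4, 0, 0, 0, 6, 0, 2, 0, 0, 5]    # toy lumpy consumption of one item
print([round(f, 2) for f in croston(usage, alpha=0.2)])
```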
NASA Astrophysics Data System (ADS)
Omi, Takahiro; Ogata, Yosihiko; Hirata, Yoshito; Aihara, Kazuyuki
2015-04-01
Because aftershock occurrences can cause significant seismic risks for a considerable time after the main shock, prospective forecasting of the intermediate-term aftershock activity as soon as possible is important. The epidemic-type aftershock sequence (ETAS) model with the maximum likelihood estimate effectively reproduces general aftershock activity including secondary or higher-order aftershocks and can be employed for the forecasting. However, because we cannot always expect the accurate parameter estimation from incomplete early aftershock data where many events are missing, such forecasting using only a single estimated parameter set (plug-in forecasting) can frequently perform poorly. Therefore, we here propose Bayesian forecasting that combines the forecasts by the ETAS model with various probable parameter sets given the data. By conducting forecasting tests of 1 month period aftershocks based on the first 1 day data after the main shock as an example of the early intermediate-term forecasting, we show that the Bayesian forecasting performs better than the plug-in forecasting on average in terms of the log-likelihood score. Furthermore, to improve forecasting of large aftershocks, we apply a nonparametric (NP) model using magnitude data during the learning period and compare its forecasting performance with that of the Gutenberg-Richter (G-R) formula. We show that the NP forecast performs better than the G-R formula in some cases but worse in other cases. Therefore, robust forecasting can be obtained by employing an ensemble forecast that combines the two complementary forecasts. Our proposed method is useful for a stable unbiased intermediate-term assessment of aftershock probabilities.
Federal Register 2010, 2011, 2012, 2013, 2014
2013-02-01
... prices will likely be forecasted using trends from the Energy Information Administration's most recent... forecasted energy prices, using shipment projections and average energy efficiency projections. DOE... DEPARTMENT OF ENERGY 10 CFR Part 431 [Docket No. EERE-2013-BT-STD-0007] RIN 1904-AC95 Energy...
The Field Production of Water for Injection
1985-12-01
Estimated water requirements (L/day): Bedridden Patient 0.75; Average/All Diseased Patients 0.50. (There is no feasible methodology to forecast the number of procedures per...) An estimate of the liters/day needed may be calculated based on a forecasted patient stream, including
Steve McNulty; Jennifer Moore Myers; Peter Caldwell; Ge Sun
2011-01-01
Since 1960, all but two southern capital cities (Montgomery, AL and Oklahoma City, OK) have experienced a statistically significant increase in average annual temperature (approximately 0.016 °C), but none has experienced significant trends in precipitation. The South is forecasted to experience warmer temperatures for the duration of the 21st century; forecasts are...
The Prediction of Teacher Turnover Employing Time Series Analysis.
ERIC Educational Resources Information Center
Costa, Crist H.
The purpose of this study was to combine knowledge of teacher demographic data with time-series forecasting methods to predict teacher turnover. Moving averages and exponential smoothing were used to forecast discrete time series. The study used data collected from the 22 largest school districts in Iowa, designated as FACT schools. Predictions…
NCEP Air Quality Forecast(AQF) Verification. NOAA/NWS/NCEP/EMC
Interactive verification page: selectable plots of Day 1 and Day 2 AOD skill for all thresholds, Day 1 and Day 2 time series for AOD GT 0, and diurnal plots for AOD GT 0, by selectable statistic type.
Monthly streamflow forecasting with auto-regressive integrated moving average
NASA Astrophysics Data System (ADS)
Nasir, Najah; Samsudin, Ruhaidah; Shabri, Ani
2017-09-01
Forecasting of streamflow is one of the many ways that can contribute to better decision making for water resource management. The auto-regressive integrated moving average (ARIMA) model was selected in this research for monthly streamflow forecasting, with enhancement made by pre-processing the data using singular spectrum analysis (SSA). This study also proposed an extension of the SSA technique to include a step where clustering was performed on the eigenvector pairs before reconstruction of the time series. The monthly streamflow data of Sungai Muda at Jeniang, Sungai Muda at Jambatan Syed Omar and Sungai Ketil at Kuala Pegang were gathered from the Department of Irrigation and Drainage Malaysia. A ratio of 9:1 was used to divide the data into training and testing sets. The ARIMA, SSA-ARIMA and Clustered SSA-ARIMA models were all developed in R software. Results from the proposed model are then compared to a conventional auto-regressive integrated moving average model using the root-mean-square error and mean absolute error values. It was found that the proposed model can outperform the conventional model.
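The study's models were developed in R; the following is a minimal Python sketch of the baseline step only, assuming the statsmodels and scikit-learn packages: fit an ARIMA model on a 9:1 train/test split and score the forecast with RMSE and MAE. The synthetic series and the ARIMA order are illustrative assumptions.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA
from sklearn.metrics import mean_absolute_error, mean_squared_error

# Hypothetical monthly streamflow series standing in for the study's data.
rng = np.random.default_rng(0)
flow = pd.Series(np.exp(rng.normal(3.0, 0.4, 240)),
                 index=pd.date_range("1996-01", periods=240, freq="MS"))

n_test = len(flow) // 10                     # 9:1 train/test split
train, test = flow[:-n_test], flow[-n_test:]

fit = ARIMA(train, order=(1, 1, 1)).fit()    # order chosen for illustration only
pred = fit.forecast(steps=n_test)

print("RMSE:", np.sqrt(mean_squared_error(test, pred)))
print("MAE :", mean_absolute_error(test, pred))
```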
Barba, Lida; Rodríguez, Nibaldo; Montt, Cecilia
2014-01-01
Two smoothing strategies combined with autoregressive integrated moving average (ARIMA) and autoregressive neural network (ANN) models to improve the forecasting of time series are presented. The strategy of forecasting is implemented using two stages. In the first stage the time series is smoothed using either 3-point moving average smoothing or singular value decomposition of the Hankel matrix (HSVD). In the second stage, an ARIMA model and two ANNs for one-step-ahead time series forecasting are used. The coefficients of the first ANN are estimated through the particle swarm optimization (PSO) learning algorithm, while the coefficients of the second ANN are estimated with the resilient backpropagation (RPROP) learning algorithm. The proposed models are evaluated using a weekly time series of traffic accidents of Valparaíso, Chilean region, from 2003 to 2012. The best result is given by the combination HSVD-ARIMA, with a MAPE of 0.26%, followed by MA-ARIMA with a MAPE of 1.12%; the worst result is given by the MA-ANN based on PSO with a MAPE of 15.51%.
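As a sketch of the first-stage smoothing step described above (under standard definitions; this is not the authors' code), the snippet below implements both the 3-point moving average and HSVD smoothing, i.e. a low-rank SVD of the Hankel (trajectory) matrix followed by diagonal averaging. The window length and rank are illustrative choices.

```python
import numpy as np

def moving_average_3(x):
    """3-point moving average smoothing."""
    return np.convolve(x, np.ones(3) / 3, mode="same")

def hsvd_smooth(x, window=None, rank=1):
    """Smooth a series via a rank-truncated SVD of its Hankel matrix,
    reconstructing by averaging the anti-diagonals."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    L = window or n // 2
    K = n - L + 1
    H = np.column_stack([x[j:j + L] for j in range(K)])      # L x K Hankel matrix
    U, s, Vt = np.linalg.svd(H, full_matrices=False)
    H_low = (U[:, :rank] * s[:rank]) @ Vt[:rank]             # low-rank component
    smooth, counts = np.zeros(n), np.zeros(n)
    for j in range(K):                                       # diagonal averaging
        smooth[j:j + L] += H_low[:, j]
        counts[j:j + L] += 1
    return smooth / counts

# The smoothed series would then feed the second-stage ARIMA or ANN forecaster.
```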
Time series forecasting of future claims amount of SOCSO's employment injury scheme (EIS)
NASA Astrophysics Data System (ADS)
Zulkifli, Faiz; Ismail, Isma Liana; Chek, Mohd Zaki Awang; Jamal, Nur Faezah; Ridzwan, Ahmad Nur Azam Ahmad; Jelas, Imran Md; Noor, Syamsul Ikram Mohd; Ahmad, Abu Bakar
2012-09-01
The Employment Injury Scheme (EIS) provides protection to employees who are injured in accidents while working, while commuting between home and the workplace, during an authorized recess time, or while travelling in connection with their work. The main purpose of this study is to forecast the claims amount of the EIS for the years 2011 until 2015 by using appropriate models. These models were tested on the actual EIS data from year 1972 until year 2010. Three different forecasting models are chosen for comparison: the Naïve with Trend Model, the Average Percent Change Model and the Double Exponential Smoothing Model. The best model is selected based on the smallest value of the error measures, the Mean Squared Error (MSE) and Mean Absolute Percentage Error (MAPE). From the result, the model that best fits the forecast for the EIS is the Average Percent Change Model. Furthermore, the result also shows the claims amount of the EIS for the year 2011 to year 2015 continues to trend upwards from year 2010.
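For reference, one of the three candidate models above, double exponential smoothing (Holt's linear trend method), can be sketched in a few lines of Python; the smoothing constants and the toy history are illustrative assumptions rather than the study's fitted values.

```python
def double_exponential_smoothing(y, alpha=0.3, beta=0.1, horizon=5):
    """Holt's double exponential smoothing: maintain a level and a trend,
    then extrapolate linearly for `horizon` steps ahead."""
    level, trend = y[0], y[1] - y[0]
    for t in range(1, len(y)):
        previous_level = level
        level = alpha * y[t] + (1 - alpha) * (level + trend)
        trend = beta * (level - previous_level) + (1 - beta) * trend
    return [level + h * trend for h in range(1, horizon + 1)]

# Example: project five years of annual claims from a short toy history.
claims = [120.0, 135.0, 150.0, 170.0, 185.0, 210.0]
print(double_exponential_smoothing(claims, alpha=0.4, beta=0.2, horizon=5))
```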
Forecasting Natural Rubber Price In Malaysia Using Arima
NASA Astrophysics Data System (ADS)
Zahari, Fatin Z.; Khalid, Kamil; Roslan, Rozaini; Sufahani, Suliadi; Mohamad, Mahathir; Saifullah Rusiman, Mohd; Ali, Maselan
2018-04-01
This paper contains an introduction, materials and methods, results and discussion, conclusions and references. The high volatility of the price of natural rubber nowadays poses a significant risk to the producers, traders, consumers, and other parties involved in the production of natural rubber. To help them in making decisions, forecasting is needed to predict the price of natural rubber. The main objective of the research is to forecast the upcoming price of natural rubber by using a reliable statistical method. The data are gathered from the Malaysia Rubber Board and cover January 2000 until December 2015. In this research, the average monthly price of Standard Malaysia Rubber 20 (SMR20) is forecast by using the Box-Jenkins approach. A time series plot is used to determine the pattern of the data. The data have a trend pattern, which indicates that the data are non-stationary and need to be transformed. By using the Box-Jenkins method, the best-fitting model for the time series data is ARIMA (1, 1, 0), which satisfies all the criteria needed. Hence, ARIMA (1, 1, 0) is the best fitted model and is used to forecast the average monthly price of Standard Malaysia Rubber 20 (SMR20) for twelve months ahead.
NASA Technical Reports Server (NTRS)
Keitz, J. F.
1982-01-01
The impact of more timely and accurate weather data on airline flight planning, with emphasis on fuel savings, is studied. This summary report discusses the results of each of the four major tasks of the study. Task 1 compared airline flight plans based on operational forecasts to plans based on the verifying analyses and found that average fuel savings of 1.2 to 2.5 percent are possible with improved forecasts. Task 2 consisted of similar comparisons but used a model developed for the FAA by SRI International that simulated the impact of ATC diversions on the flight plans. While parts of Task 2 confirm the Task 1 findings, inconsistency with other data and the known impact of ATC suggest that other Task 2 findings are the result of errors in the model. Task 3 compares segment weather data from operational flight plans with the weather actually observed by the aircraft and finds the average error could result in fuel burn penalties (or savings) of up to 3.6 percent for the average B747 flight. In Task 4, an in-depth analysis of the weather forecasts for the 33 days included in the study finds that significant errors exist on 15 days. Wind speeds in the area of maximum winds are underestimated by 20 to 50 kts., a finding confirmed in the other three tasks.
A novel hybrid ensemble learning paradigm for tourism forecasting
NASA Astrophysics Data System (ADS)
Shabri, Ani
2015-02-01
In this paper, a hybrid forecasting model based on Empirical Mode Decomposition (EMD) and the Group Method of Data Handling (GMDH) is proposed to forecast tourism demand. This methodology first decomposes the original visitor arrival series into several intrinsic mode function (IMF) components and one residual component by the EMD technique. Then, the IMF components and the residual component are forecast respectively using GMDH models whose input variables are selected by using the Partial Autocorrelation Function (PACF). The final forecast for the tourism series is produced by aggregating all the component forecasts. For evaluating the performance of the proposed EMD-GMDH methodology, the monthly data of tourist arrivals from Singapore to Malaysia are used as an illustrative example. Empirical results show that the proposed EMD-GMDH model outperforms the EMD-ARIMA as well as the GMDH and ARIMA (Autoregressive Integrated Moving Average) models without time series decomposition.
Multilayer Stock Forecasting Model Using Fuzzy Time Series
Javedani Sadaei, Hossein; Lee, Muhammad Hisyam
2014-01-01
After reviewing the vast body of literature on using FTS in stock market forecasting, certain deficiencies are distinguished in the hybridization of findings. In addition, the lack of a constructive systematic framework, which could help indicate the direction of growth of FTS forecasting systems as a whole, is notable. In this study, we propose a multilayer model for stock market forecasting comprising five logically significant layers. Every single layer has its own detailed concern and assists forecast development by addressing certain problems exclusively. To verify the model, a large data set containing the Taiwan Stock Index (TAIEX), the National Association of Securities Dealers Automated Quotations (NASDAQ), the Dow Jones Industrial Average (DJI), and the S&P 500 has been chosen as the experimental data. The results indicate that the proposed methodology has the potential to be accepted as a framework for model development in stock market forecasts using FTS. PMID:24605058
Evaluating probabilistic dengue risk forecasts from a prototype early warning system for Brazil.
Lowe, Rachel; Coelho, Caio As; Barcellos, Christovam; Carvalho, Marilia Sá; Catão, Rafael De Castro; Coelho, Giovanini E; Ramalho, Walter Massa; Bailey, Trevor C; Stephenson, David B; Rodó, Xavier
2016-02-24
Recently, a prototype dengue early warning system was developed to produce probabilistic forecasts of dengue risk three months ahead of the 2014 World Cup in Brazil. Here, we evaluate the categorical dengue forecasts across all microregions in Brazil, using dengue cases reported in June 2014 to validate the model. We also compare the forecast model framework to a null model, based on seasonal averages of previously observed dengue incidence. When considering the ability of the two models to predict high dengue risk across Brazil, the forecast model produced more hits and fewer missed events than the null model, with a hit rate of 57% for the forecast model compared to 33% for the null model. This early warning model framework may be useful to public health services, not only ahead of mass gatherings, but also before the peak dengue season each year, to control potentially explosive dengue epidemics.
Statistical Short-Range Guidance for Peak Wind Speed Forecasts at Edwards Air Force Base, CA
NASA Technical Reports Server (NTRS)
Dreher, Joseph; Crawford, Winifred; Lafosse, Richard; Hoeth, Brian; Burns, Kerry
2008-01-01
The peak winds near the surface are an important forecast element for Space Shuttle landings. As defined in the Shuttle Flight Rules (FRs), there are peak wind thresholds that cannot be exceeded in order to ensure the safety of the shuttle during landing operations. The National Weather Service Spaceflight Meteorology Group (SMG) is responsible for weather forecasts for all shuttle landings. They indicate peak winds are a challenging parameter to forecast. To alleviate the difficulty in making such wind forecasts, the Applied Meteorology Unit (AMU) developed a personal computer-based graphical user interface (GUI) for displaying peak wind climatology and probabilities of exceeding peak-wind thresholds for the Shuttle Landing Facility (SLF) at Kennedy Space Center. However, the shuttle must land at Edwards Air Force Base (EAFB) in southern California when weather conditions at Kennedy Space Center in Florida are not acceptable, so SMG forecasters requested that a similar tool be developed for EAFB. Marshall Space Flight Center (MSFC) personnel archived and performed quality control of 2-minute average and 10-minute peak wind speeds at each tower adjacent to the main runway at EAFB from 1997-2004. They calculated wind climatologies and probabilities of average peak wind occurrence based on the average speed. The climatologies were calculated for each tower and month, and were stratified by hour, direction, and direction/hour. For the probabilities of peak wind occurrence, MSFC calculated empirical and modeled probabilities of meeting or exceeding specific 10-minute peak wind speeds using probability density functions. The AMU obtained and reformatted the data into Microsoft Excel PivotTables, which allow users to display different values with point-click-drag techniques. The GUI was then created from the PivotTables using Visual Basic for Applications code. The GUI is run through a macro within Microsoft Excel and allows forecasters to quickly display and interpret peak wind climatology and likelihoods in a fast-paced operational environment. A summary of how the peak wind climatologies and probabilities were created and an overview of the GUI will be presented.
Statistical Short-Range Guidance for Peak Wind Speed Forecasts at Edwards Air Force Base, CA
NASA Technical Reports Server (NTRS)
Dreher, Joseph G.; Crawford, Winifred; Lafosse, Richard; Hoeth, Brian; Burns, Kerry
2009-01-01
The peak winds near the surface are an important forecast element for space shuttle landings. As defined in the Flight Rules (FR), there are peak wind thresholds that cannot be exceeded in order to ensure the safety of the shuttle during landing operations. The National Weather Service Spaceflight Meteorology Group (SMG) is responsible for weather forecasts for all shuttle landings, and is required to issue surface average and 10-minute peak wind speed forecasts. They indicate peak winds are a challenging parameter to forecast. To alleviate the difficulty in making such wind forecasts, the Applied Meteorology Unit (AMU) developed a PC-based graphical user interface (GUI) for displaying peak wind climatology and probabilities of exceeding peak wind thresholds for the Shuttle Landing Facility (SLF) at Kennedy Space Center (KSC; Lambert 2003). However, the shuttle occasionally may land at Edwards Air Force Base (EAFB) in southern California when weather conditions at KSC in Florida are not acceptable, so SMG forecasters requested a similar tool be developed for EAFB.
Forecasting Natural Gas Prices Using Wavelets, Time Series, and Artificial Neural Networks.
Jin, Junghwan; Kim, Jinsoo
2015-01-01
Following the unconventional gas revolution, the forecasting of natural gas prices has become increasingly important because the association of these prices with those of crude oil has weakened. With this as motivation, we propose some modified hybrid models in which various combinations of the wavelet approximation, detail components, autoregressive integrated moving average, generalized autoregressive conditional heteroskedasticity, and artificial neural network models are employed to predict natural gas prices. We also emphasize the boundary problem in wavelet decomposition, and compare results that consider the boundary problem case with those that do not. The empirical results show that our suggested approach can handle the boundary problem, such that it facilitates the extraction of the appropriate forecasting results. The performance of the wavelet-hybrid approach was superior in all cases, whereas the application of detail components in the forecasting was only able to yield a small improvement in forecasting performance. Therefore, forecasting with only an approximation component would be acceptable, in consideration of forecasting efficiency. PMID:26539722
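A minimal sketch of the wavelet-hybrid idea, under the assumption of the PyWavelets (pywt) and statsmodels packages: decompose the price series, keep only the approximation component (the abstract notes this is often sufficient), and forecast the reconstructed smooth series with a simple ARIMA model. The synthetic series, wavelet, decomposition level and ARIMA order are illustrative choices, not the authors' configuration.

```python
import numpy as np
import pywt
from statsmodels.tsa.arima.model import ARIMA

# Hypothetical daily price series standing in for natural gas prices.
rng = np.random.default_rng(1)
prices = 3.0 + np.cumsum(rng.normal(0.0, 0.05, 500))

# Multi-level wavelet decomposition: one approximation + detail components.
coeffs = pywt.wavedec(prices, "db4", level=3)          # [cA3, cD3, cD2, cD1]

# Zero out the detail components and reconstruct the approximation-only series.
approx_only = [coeffs[0]] + [np.zeros_like(c) for c in coeffs[1:]]
smooth = pywt.waverec(approx_only, "db4")[:len(prices)]

# Forecast the smooth component with a simple time-series model.
forecast = ARIMA(smooth, order=(1, 1, 1)).fit().forecast(steps=10)
print(forecast)
```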
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jiang, Huaiguang; Zhang, Yingchen
This paper proposes an approach for distribution system state forecasting, which aims to provide an accurate and high speed state forecasting with an optimal synchrophasor sensor placement (OSSP) based state estimator and an extreme learning machine (ELM) based forecaster. Specifically, considering the sensor installation cost and measurement error, an OSSP algorithm is proposed to reduce the number of synchrophasor sensors and keep the whole distribution system numerically and topologically observable. Then, the weighted least square (WLS) based system state estimator is used to produce the training data for the proposed forecaster. Traditionally, the artificial neural network (ANN) and support vector regression (SVR) are widely used in forecasting due to their nonlinear modeling capabilities. However, the ANN carries a heavy computation load and the best parameters for SVR are difficult to obtain. In this paper, the ELM, which overcomes these drawbacks, is used to forecast the future system states from the historical system states. The proposed approach is shown to be effective and accurate based on the testing results.
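As a sketch of the forecaster component only (not the authors' OSSP or WLS estimator), an extreme learning machine can be written as a random hidden layer followed by a least-squares readout; the hidden-layer size and activation below are illustrative assumptions.

```python
import numpy as np

class ELM:
    """Minimal extreme learning machine: random hidden weights stay fixed,
    and only the linear readout is fitted by least squares."""
    def __init__(self, n_hidden=100, seed=0):
        self.n_hidden = n_hidden
        self.rng = np.random.default_rng(seed)

    def fit(self, X, y):
        self.W = self.rng.normal(size=(X.shape[1], self.n_hidden))
        self.b = self.rng.normal(size=self.n_hidden)
        H = np.tanh(X @ self.W + self.b)                 # random hidden features
        self.beta, *_ = np.linalg.lstsq(H, y, rcond=None)  # least-squares readout
        return self

    def predict(self, X):
        return np.tanh(X @ self.W + self.b) @ self.beta

# Usage: rows of X are lagged historical state vectors (e.g. voltage magnitudes
# and angles), and y holds the states at the next time step to be forecast.
```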
Forecasting seeing and parameters of long-exposure images by means of ARIMA
NASA Astrophysics Data System (ADS)
Kornilov, Matwey V.
2016-02-01
Atmospheric turbulence is one of the major limiting factors for ground-based astronomical observations. In this paper, the problem of short-term forecasting of seeing is discussed. The real data obtained by atmospheric optical turbulence (OT) measurements above Mount Shatdzhatmaz in 2007-2013 have been analysed. Linear auto-regressive integrated moving average (ARIMA) models are used for the forecasting. A new procedure for forecasting the image characteristics of direct astronomical observations (central image intensity, full width at half maximum, radius encircling 80% of the energy) has been proposed. Probability density functions of the forecasts of these quantities are 1.5-2 times narrower than the respective unconditional probability density functions. Overall, this study found that the described technique can adequately describe temporal stochastic variations of the OT power.
Forecasting the student–professor matches that result in unusually effective teaching
Gross, Jennifer; Lakey, Brian; Lucas, Jessica L; LaCross, Ryan; R Plotkowski, Andrea; Winegard, Bo
2015-01-01
Background: Two important influences on students' evaluations of teaching are relationship and professor effects. Relationship effects reflect unique matches between students and professors such that some professors are unusually effective for some students, but not for others. Professor effects reflect inter-rater agreement that some professors are more effective than others, on average across students. Aims: We attempted to forecast students' evaluations of live lectures from brief, video-recorded teaching trailers. Sample: Participants were 145 college students (74% female) enrolled in introductory psychology courses at a public university in the Great Lakes region of the United States. Methods: Students viewed trailers early in the semester and attended live lectures months later. Because subgroups of students viewed the same professors, statistical analyses could isolate professor and relationship effects. Results: Evaluations were influenced strongly by relationship and professor effects, and students' evaluations of live lectures could be forecasted from students' evaluations of teaching trailers. That is, we could forecast the individual students who would respond unusually well to a specific professor (relationship effects). We could also forecast which professors elicited better evaluations in live lectures, on average across students (professor effects). Professors who elicited unusually good evaluations in some students also elicited better memory for lectures in those students. Conclusions: It appears possible to forecast relationship and professor effects on teaching evaluations by presenting brief teaching trailers to students. Thus, it might be possible to develop online recommender systems to help match students and professors so that unusually effective teaching emerges. PMID:24953773
Towards custom made seasonal/decadal forecasting
NASA Astrophysics Data System (ADS)
Mahlstein, Irina; Spirig, Christoph; Liniger, Mark
2014-05-01
Climate indices offer the possibility to deliver information to the end user that can be easily applied to their field of work. For instance, a 3-monthly mean temperature does not say much about the Heating Degree Days of a season, or how many frost days are to be expected. Hence, delivering aggregated climate information can be more useful to the consumer than just raw data. In order to ensure that the end-users actually get what they need, the providers need to know exactly what to deliver, so the specific user needs have to be identified. In the framework of EUPORIAS, interviews with end-users were conducted in order to learn more about the types of information that are needed, but also to investigate what knowledge exists among the users about seasonal/decadal forecasting and in what way uncertainties are taken into account. It is important that we gain better knowledge of how forecasts/predictions are applied by the end-user to their specific situation and business. EUPORIAS, which is embedded in the framework of EU FP7, aims exactly to improve that knowledge and deliver very specific, custom-made forecasts. Here we present examples of seasonal forecasts and their skill for several climate impact indices with direct relevance for specific economic sectors, such as energy. The results are compared to the conventional depiction of seasonal forecasts, such as 3-monthly average temperature tercile probabilities, and the differences are highlighted.
NASA Astrophysics Data System (ADS)
Chen, Y.; Li, J.; Xu, H.
2015-10-01
Physically based distributed hydrological models discretize the terrain of the whole catchment into a number of grid cells at fine resolution, assimilate different terrain data and precipitation to different cells, and are regarded as having the potential to improve the simulation and prediction of catchment hydrological processes. In the early stage, physically based distributed hydrological models were assumed to derive model parameters from the terrain properties directly, so that there would be no need to calibrate model parameters; unfortunately, the uncertainty associated with this way of deriving parameters is very high, which has limited their application in flood forecasting, so parameter optimization may also be necessary. There are two main purposes for this study: the first is to propose a parameter optimization method for physically based distributed hydrological models in catchment flood forecasting using the PSO algorithm, to test its competence and to improve its performance; the second is to explore the possibility of improving the capability of physically based distributed hydrological models in catchment flood forecasting by parameter optimization. In this paper, based on the scalar concept, a general framework for parameter optimization of PBDHMs for catchment flood forecasting is first proposed that could be used for all PBDHMs. Then, with the Liuxihe model as the study model, which is a physically based distributed hydrological model proposed for catchment flood forecasting, an improved Particle Swarm Optimization (PSO) algorithm is developed for the parameter optimization of the Liuxihe model in catchment flood forecasting; the improvements include adopting the linear decreasing inertia weight strategy to change the inertia weight, and the arccosine function strategy to adjust the acceleration coefficients. This method has been tested in two catchments in southern China with different sizes, and the results show that the improved PSO algorithm can be used for Liuxihe model parameter optimization effectively and can largely improve the model capability in catchment flood forecasting, thus proving that parameter optimization is necessary to improve the flood forecasting capability of physically based distributed hydrological models. It has also been found that the appropriate particle number and maximum evolution number of the PSO algorithm for Liuxihe model catchment flood forecasting are 20 and 30, respectively.
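A minimal Python sketch of the improved PSO described above: a standard particle swarm with a linearly decreasing inertia weight, plus an arccosine-based schedule for the acceleration coefficients. The exact arccosine formula and the bounds/objective below are assumptions for illustration, not the authors' specification; simulate_and_score is a hypothetical helper standing in for a run of the hydrological model.

```python
import numpy as np

def pso(objective, bounds, n_particles=20, n_iter=30, seed=0):
    """Particle swarm optimization with a linearly decreasing inertia weight
    and arccosine-scheduled acceleration coefficients (assumed form)."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, dtype=float).T
    x = rng.uniform(lo, hi, size=(n_particles, len(lo)))
    v = np.zeros_like(x)
    pbest = x.copy()
    pbest_val = np.array([objective(p) for p in x])
    gbest = pbest[pbest_val.argmin()].copy()

    for k in range(n_iter):
        t = k / max(n_iter - 1, 1)
        w = 0.9 - 0.5 * t                                  # inertia: 0.9 -> 0.4
        c1 = 0.5 + 2.0 * np.arccos(2 * t - 1) / np.pi      # cognitive: 2.5 -> 0.5
        c2 = 0.5 + 2.0 * np.arccos(1 - 2 * t) / np.pi      # social:    0.5 -> 2.5
        r1, r2 = rng.random((2,) + x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        vals = np.array([objective(p) for p in x])
        better = vals < pbest_val
        pbest[better], pbest_val[better] = x[better], vals[better]
        gbest = pbest[pbest_val.argmin()].copy()
    return gbest, pbest_val.min()

# Usage (hypothetical): best_params, error = pso(
#     lambda p: simulate_and_score(p), bounds=[(0.0, 1.0)] * 8)
# where simulate_and_score runs the distributed model with parameter vector p
# and returns a flood-forecast error metric such as RMSE at the catchment outlet.
```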
NASA Astrophysics Data System (ADS)
Song, Chen; Zhong-Cheng, Wu; Hong, Lv
2018-03-01
Building energy forecasting plays an important role in energy management and planning. Using a mind evolutionary algorithm to find the optimal network weights and thresholds of a BP neural network can overcome the problem of the BP neural network falling into a local minimum. The optimized network is used for time series prediction and for a same-month forecast, giving two predicted values. These two predicted values are then fed into a neural network to obtain the final forecast value. The effectiveness of the method was verified by experiments with the energy consumption of three buildings in Hefei.
Technical note: Combining quantile forecasts and predictive distributions of streamflows
NASA Astrophysics Data System (ADS)
Bogner, Konrad; Liechti, Katharina; Zappa, Massimiliano
2017-11-01
The enhanced availability of many different hydro-meteorological modelling and forecasting systems raises the issue of how to optimally combine this wealth of information. Especially the use of deterministic and probabilistic forecasts with sometimes widely divergent predicted future streamflow values makes it even more complicated for decision makers to sift out the relevant information. In this study multiple streamflow forecast products are aggregated based on several different predictive distributions and quantile forecasts. For this combination the Bayesian model averaging (BMA) approach, the non-homogeneous Gaussian regression (NGR), also known as the ensemble model output statistics (EMOS) technique, and a novel method called Beta-transformed linear pooling (BLP) are applied. With the help of the quantile score (QS) and the continuous ranked probability score (CRPS), the combination results for the Sihl River in Switzerland with about 5 years of forecast data are compared and the differences between the raw and optimally combined forecasts are highlighted. The results demonstrate the importance of applying proper forecast combination methods for decision makers in the field of flood and water resource management.
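One of the combination/post-processing ingredients named above, non-homogeneous Gaussian regression (EMOS), can be sketched as follows: the predictive distribution is a Gaussian whose mean and variance are affine in the ensemble mean and ensemble variance, with coefficients found by minimum CRPS estimation. This is a generic sketch under standard definitions, not the study's implementation; the starting values are assumptions.

```python
import numpy as np
from scipy.stats import norm
from scipy.optimize import minimize

def crps_gaussian(y, mu, sigma):
    """Closed-form CRPS of a Gaussian predictive distribution N(mu, sigma^2)."""
    z = (y - mu) / sigma
    return sigma * (z * (2 * norm.cdf(z) - 1) + 2 * norm.pdf(z) - 1 / np.sqrt(np.pi))

def fit_emos(ens_mean, ens_var, obs):
    """Fit EMOS/NGR coefficients (a, b, c, d) by minimum CRPS estimation:
    mu = a + b * ensemble mean,  sigma^2 = c + d * ensemble variance."""
    def mean_crps(params):
        a, b, c, d = params
        mu = a + b * ens_mean
        sigma = np.sqrt(np.maximum(c + d * ens_var, 1e-8))
        return crps_gaussian(obs, mu, sigma).mean()
    result = minimize(mean_crps, x0=[0.0, 1.0, 1.0, 0.1], method="Nelder-Mead")
    return result.x
```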
Time-Series Forecast Modeling on High-Bandwidth Network Measurements
Yoo, Wucherl; Sim, Alex
2016-06-24
With the increasing number of geographically distributed scientific collaborations and the growing sizes of scientific data, it has become challenging for users to achieve the best possible network performance on a shared network. In this paper, we have developed a model to forecast expected bandwidth utilization on high-bandwidth wide area networks. The forecast model can improve the efficiency of the resource utilization and scheduling of data movements on high-bandwidth networks to accommodate ever increasing data volume for large-scale scientific data applications. A univariate time-series forecast model is developed with the Seasonal decomposition of Time series by Loess (STL) and the AutoRegressive Integrated Moving Average (ARIMA) on Simple Network Management Protocol (SNMP) path utilization measurement data. Compared with the traditional approach such as the Box-Jenkins methodology to train the ARIMA model, our forecast model reduces computation time by up to 92.6%. It also shows resilience against abrupt network usage changes. Finally, our forecast model conducts a large number of multi-step forecasts, and the forecast errors are within the mean absolute deviation (MAD) of the monitored measurements.
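A minimal sketch of the STL + ARIMA pipeline described above, assuming the statsmodels package: decompose an hourly utilization series with STL, fit an ARIMA model to the seasonally adjusted series, and add the last seasonal cycle back onto the multi-step forecast. The synthetic series, seasonal period and model order are illustrative assumptions.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.seasonal import STL
from statsmodels.tsa.arima.model import ARIMA

# Hypothetical hourly SNMP path-utilization series with a daily cycle.
rng = np.random.default_rng(2)
idx = pd.date_range("2016-01-01", periods=24 * 60, freq="h")
util = pd.Series(50 + 20 * np.sin(2 * np.pi * idx.hour / 24)
                 + rng.normal(0, 5, len(idx)), index=idx)

# Remove the seasonal component, model the remainder, then re-seasonalize.
stl = STL(util, period=24).fit()
deseasonalized = util - stl.seasonal
arima_forecast = ARIMA(deseasonalized, order=(1, 0, 1)).fit().forecast(steps=24)
forecast = arima_forecast + stl.seasonal.iloc[-24:].values
```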
NASA Astrophysics Data System (ADS)
Pan, Yujie; Xue, Ming; Zhu, Kefeng; Wang, Mingjun
2018-05-01
A dual-resolution (DR) version of a regional ensemble Kalman filter (EnKF)-3D ensemble variational (3DEnVar) coupled hybrid data assimilation system is implemented as a prototype for the operational Rapid Refresh forecasting system. The DR 3DEnVar system combines a high-resolution (HR) deterministic background forecast with lower-resolution (LR) EnKF ensemble perturbations used for flow-dependent background error covariance to produce a HR analysis. The computational cost is substantially reduced by running the ensemble forecasts and EnKF analyses at LR. The DR 3DEnVar system is tested with 3-h cycles over a 9-day period using a 40-km/~13-km grid spacing combination. The HR forecasts from the DR hybrid analyses are compared with forecasts launched from HR Gridpoint Statistical Interpolation (GSI) 3D variational (3DVar) analyses, and single LR hybrid analyses interpolated to the HR grid. With the DR 3DEnVar system, a 90% weight for the ensemble covariance yields the lowest forecast errors and the DR hybrid system clearly outperforms the HR GSI 3DVar. Humidity and wind forecasts are also better than those launched from interpolated LR hybrid analyses, but the temperature forecasts are slightly worse. The humidity forecasts are improved most. For precipitation forecasts, the DR 3DEnVar always outperforms HR GSI 3DVar. It also outperforms the LR 3DEnVar, except for the initial forecast period and lower thresholds.
Short-term Power Load Forecasting Based on Balanced KNN
NASA Astrophysics Data System (ADS)
Lv, Xianlong; Cheng, Xingong; YanShuang; Tang, Yan-mei
2018-03-01
To improve the accuracy of load forecasting, a short-term load forecasting model based on a balanced KNN algorithm is proposed. According to the load characteristics, the historical data of massive power loads are divided into scenes by the K-means algorithm; in view of unbalanced load scenes, the balanced KNN algorithm is proposed to classify the scenes accurately; the locally weighted linear regression algorithm is used to fit and predict the load. Adopting the Apache Hadoop programming framework of cloud computing, the proposed algorithm model is parallelized and improved to enhance its ability to deal with massive and high-dimension data. The analysis of the household electricity consumption data for a residential district is done by a 23-node cloud computing cluster, and experimental results show that the load forecasting accuracy and execution time of the proposed model are better than those of traditional forecasting algorithms.
Multiresolution forecasting for futures trading using wavelet decompositions.
Zhang, B L; Coggins, R; Jabri, M A; Dersch, D; Flower, B
2001-01-01
We investigate the effectiveness of a financial time-series forecasting strategy which exploits the multiresolution property of the wavelet transform. A financial series is decomposed into an overcomplete, shift-invariant scale-related representation. In transform space, each individual wavelet series is modeled by a separate multilayer perceptron (MLP). We apply the Bayesian method of automatic relevance determination to choose short past windows (short-term history) for the inputs to the MLPs at lower scales and long past windows (long-term history) at higher scales. To form the overall forecast, the individual forecasts are then recombined by the linear reconstruction property of the inverse transform with the chosen autocorrelation shell representation, or by another perceptron which learns the weight of each scale in the prediction of the original time series. The forecast results are then passed to a money management system to generate trades.
Short-Term State Forecasting-Based Optimal Voltage Regulation in Distribution Systems: Preprint
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yang, Rui; Jiang, Huaiguang; Zhang, Yingchen
2017-05-17
A novel short-term state forecasting-based optimal power flow (OPF) approach for distribution system voltage regulation is proposed in this paper. An extreme learning machine (ELM) based state forecaster is developed to accurately predict system states (voltage magnitudes and angles) in the near future. Based on the forecast system states, a dynamically weighted three-phase AC OPF problem is formulated to minimize the voltage violations, with higher penalization on buses which are forecast to have higher voltage violations in the near future. By solving the proposed OPF problem, the controllable resources in the system are optimally coordinated to alleviate the potential severe voltage violations and improve the overall voltage profile. The proposed approach has been tested in a 12-bus distribution system and simulation results are presented to demonstrate the performance of the proposed approach.
Operational foreshock forecasting: Fifteen years after
NASA Astrophysics Data System (ADS)
Ogata, Y.
2010-12-01
We are concerned with operational forecasting of the probability that events are foreshocks of a forthcoming earthquake that is significantly larger (the mainshock). Specifically, we define foreshocks as preshocks substantially smaller than the mainshock, by a magnitude gap of 0.5 or larger. The probability gain of a foreshock forecast is extremely high compared to long-term forecasts by renewal processes or various alarm-based intermediate-term forecasts, because of a large event's low occurrence rate in a short period and a narrow target region. Thus, it is desirable to establish operational foreshock probability forecasting, as seismologists have done for aftershocks. When a series of earthquakes occurs in a region, we attempt to discriminate foreshocks from a swarm or a mainshock-aftershock sequence. Namely, after real-time identification of an earthquake cluster using methods such as the single-link algorithm, the probability is calculated by applying statistical features that discriminate foreshocks from other types of clusters, by considering the events' stronger proximity in time and space and their tendency towards chronologically increasing magnitudes. These features were modeled for probability forecasting and the coefficients of the model were estimated in Ogata et al. (1996) for the JMA hypocenter data (M≧4, 1926-1993). Fifteen years have now passed since the publication of the above-stated work, so we are able to present the performance and validation of the forecasts (1994-2009) using the same model. Taking isolated events into consideration, the probability of the first event in a potential cluster being a foreshock varies in a range between 0+% and 10+% depending on its location. This conditional forecasting performs significantly better than the unconditional (average) foreshock probability of 3.7% throughout the Japan region. Furthermore, when we have additional events in a cluster, the forecast probabilities range more widely, from nearly 0% to about 40%, depending on the discrimination features among the events in the cluster. This conditional forecasting further performs significantly better than the unconditional foreshock probability of 7.3%, which is the average probability for the plural events in the earthquake clusters. Indeed, the frequency ratios of the actual foreshocks are consistent with the forecasted probabilities. Reference: Ogata, Y., Utsu, T. and Katsura, K. (1996). Statistical discrimination of foreshocks from other earthquake clusters, Geophys. J. Int. 127, 17-30.
Predicting vehicle fuel consumption patterns using floating vehicle data.
Du, Yiman; Wu, Jianping; Yang, Senyan; Zhou, Liutong
2017-09-01
The status of energy consumption and air pollution in China is serious. It is important to analyze and predict the fuel consumption of various types of vehicles under different influence factors. In order to fully describe the relationship between fuel consumption and its impact factors, massive amounts of floating vehicle data were used. The fuel consumption pattern and congestion pattern based on large samples of historical floating vehicle data were explored; drivers' information and vehicles' parameters from different group classifications were probed; and the average velocity and average fuel consumption in the temporal dimension and spatial dimension were analyzed respectively. The fuel consumption forecasting model was established by using a Back Propagation Neural Network. Part of the sample set was used to train the forecasting model and the remaining part of the sample set was used as input to the forecasting model.
NASA Technical Reports Server (NTRS)
Balikhin, M. A.; Rodriguez, J. V.; Boynton, R. J.; Walker, S. N.; Aryan, Homayon; Sibeck, D. G.; Billings, S. A.
2016-01-01
Reliable forecasts of relativistic electrons at geostationary orbit (GEO) are important for the mitigation of their hazardous effects on spacecraft at GEO. For a number of years the Space Weather Prediction Center at NOAA has provided advanced online forecasts of the fluence of electrons with energy >2 MeV at GEO using the Relativistic Electron Forecast Model (REFM). The REFM forecasts are based on real-time solar wind speed observations at L1. The high reliability of this forecasting tool serves as a benchmark for the assessment of other forecasting tools. Since 2012 the Sheffield SNB3GEO model has been operating online, providing a 24 h ahead forecast of the same fluxes. In addition to solar wind speed, the SNB3GEO forecasts use solar wind density and interplanetary magnetic field B(sub z) observations at L1. The period of joint operation of both of these forecasts has been used to compare their accuracy. Daily averaged measurements of electron fluxes by GOES 13 have been used to estimate the prediction efficiency of both forecasting tools. To assess the reliability of both models to forecast infrequent events of very high fluxes, the Heidke skill score was employed. The results obtained indicate that SNB3GEO provides a more accurate 1 day ahead forecast when compared to REFM. It is shown that the correction methodology utilized by REFM potentially can improve the SNB3GEO forecast.
Statistical models for estimating daily streamflow in Michigan
Holtschlag, D.J.; Salehi, Habib
1992-01-01
Statistical models for estimating daily streamflow were analyzed for 25 pairs of streamflow-gaging stations in Michigan. Stations were paired by randomly choosing a station operated in 1989 at which 10 or more years of continuous flow data had been collected and at which flow is virtually unregulated; a nearby station was chosen where flow characteristics are similar. Streamflow data from the 25 randomly selected stations were used as the response variables; streamflow data at the nearby stations were used to generate a set of explanatory variables. Ordinary least-squares regression (OLSR) equations, autoregressive integrated moving-average (ARIMA) equations, and transfer function-noise (TFN) equations were developed to estimate the log transform of flow for the 25 randomly selected stations. The precision of each type of equation was evaluated on the basis of the standard deviation of the estimation errors. OLSR equations produce one set of estimation errors; ARIMA and TFN models each produce l sets of estimation errors corresponding to the forecast lead. The lead-l forecast is the estimate of flow l days ahead of the most recent streamflow used as a response variable in the estimation. In this analysis, the standard deviation of lead-l ARIMA and TFN forecast errors was generally lower than the standard deviation of OLSR errors for l < 2 days and l < 9 days, respectively. Composite estimates were computed as a weighted average of forecasts based on TFN equations and backcasts (forecasts of the reverse-ordered series) based on ARIMA equations. The standard deviation of composite errors varied throughout the length of the estimation interval and generally was at a maximum near the center of the interval. For comparison with OLSR errors, the mean standard deviation of composite errors was computed for intervals of length 1 to 40 days. The mean standard deviation of length-l composite errors was generally less than the standard deviation of the OLSR errors for l < 32 days. In addition, the composite estimates ensure a gradual transition between periods of estimated and measured flows. Model performance among stations of differing model error magnitudes was compared by computing ratios of the mean standard deviation of the length-l composite errors to the standard deviation of OLSR errors. The mean error ratio for the set of 25 selected stations was less than 1 for intervals l < 32 days. Considering the frequency characteristics of the length of intervals of estimated record in Michigan, the effective mean error ratio for intervals < 30 days was 0.52. Thus, for intervals of estimation of 1 month or less, the error of the composite estimate is substantially lower than the error of the OLSR estimate.
NASA Technical Reports Server (NTRS)
Koster, Randal; Walker, Greg; Mahanama, Sarith; Reichle, Rolf
2012-01-01
Continental-scale offline simulations with a land surface model are used to address two important issues in the forecasting of large-scale seasonal streamflow: (i) the extent to which errors in soil moisture initialization degrade streamflow forecasts, and (ii) the extent to which the downscaling of seasonal precipitation forecasts, if it could be done accurately, would improve streamflow forecasts. The reduction in streamflow forecast skill (with forecasted streamflow measured against observations) associated with adding noise to a soil moisture field is found to be, to first order, proportional to the average reduction in the accuracy of the soil moisture field itself. This result has implications for streamflow forecast improvement under satellite-based soil moisture measurement programs. In the second and more idealized ("perfect model") analysis, precipitation downscaling is found to have an impact on large-scale streamflow forecasts only if two conditions are met: (i) evaporation variance is significant relative to the precipitation variance, and (ii) the subgrid spatial variance of precipitation is adequately large. In the large-scale continental region studied (the conterminous United States), these two conditions are met in only a somewhat limited area.
Omar, Hani; Hoang, Van Hai; Liu, Duen-Ren
2016-01-01
Enhancing sales and operations planning through forecasting analysis and business intelligence is demanded in many industries and enterprises. Publishing industries usually pick attractive titles and headlines for their stories to increase sales, since popular article titles and headlines can attract readers to buy magazines. In this paper, information retrieval techniques are adopted to extract words from article titles. The popularity measures of article titles are then analyzed by using the search indexes obtained from Google search engine. Backpropagation Neural Networks (BPNNs) have successfully been used to develop prediction models for sales forecasting. In this study, we propose a novel hybrid neural network model for sales forecasting based on the prediction result of time series forecasting and the popularity of article titles. The proposed model uses the historical sales data, popularity of article titles, and the prediction result of a time series, Autoregressive Integrated Moving Average (ARIMA) forecasting method to learn a BPNN-based forecasting model. Our proposed forecasting model is experimentally evaluated by comparing with conventional sales prediction techniques. The experimental result shows that our proposed forecasting method outperforms conventional techniques which do not consider the popularity of title words. PMID:27313605
The total probabilities from high-resolution ensemble forecasting of floods
NASA Astrophysics Data System (ADS)
Olav Skøien, Jon; Bogner, Konrad; Salamon, Peter; Smith, Paul; Pappenberger, Florian
2015-04-01
Ensemble forecasting has for a long time been used in meteorological modelling, to give an indication of the uncertainty of the forecasts. As meteorological ensemble forecasts often show some bias and dispersion errors, there is a need for calibration and post-processing of the ensembles. Typical methods for this are Bayesian Model Averaging (Raftery et al., 2005) and Ensemble Model Output Statistics (EMOS) (Gneiting et al., 2005). There are also methods for regionalizing these approaches (Berrocal et al., 2007) and for incorporating the correlation between lead times (Hemri et al., 2013). To make optimal predictions of floods along the stream network in hydrology, we can easily use the ensemble members as input to the hydrological models. However, some of the post-processing methods will need modifications when regionalizing the forecasts outside the calibration locations, as done by Hemri et al. (2013). We present a method for spatial regionalization of the post-processed forecasts based on EMOS and top-kriging (Skøien et al., 2006). We will also look into different methods for handling the non-normality of runoff and the effect on forecast skill in general and for floods in particular. Berrocal, V. J., Raftery, A. E. and Gneiting, T.: Combining Spatial Statistical and Ensemble Information in Probabilistic Weather Forecasts, Mon. Weather Rev., 135(4), 1386-1402, doi:10.1175/MWR3341.1, 2007. Gneiting, T., Raftery, A. E., Westveld, A. H. and Goldman, T.: Calibrated Probabilistic Forecasting Using Ensemble Model Output Statistics and Minimum CRPS Estimation, Mon. Weather Rev., 133(5), 1098-1118, doi:10.1175/MWR2904.1, 2005. Hemri, S., Fundel, F. and Zappa, M.: Simultaneous calibration of ensemble river flow predictions over an entire range of lead times, Water Resour. Res., 49(10), 6744-6755, doi:10.1002/wrcr.20542, 2013. Raftery, A. E., Gneiting, T., Balabdaoui, F. and Polakowski, M.: Using Bayesian Model Averaging to Calibrate Forecast Ensembles, Mon. Weather Rev., 133(5), 1155-1174, doi:10.1175/MWR2906.1, 2005. Skøien, J. O., Merz, R. and Blöschl, G.: Top-kriging - Geostatistics on stream networks, Hydrol. Earth Syst. Sci., 10(2), 277-287, 2006.
Shi, Yuan; Liu, Xu; Kok, Suet-Yheng; Rajarethinam, Jayanthi; Liang, Shaohong; Yap, Grace; Chong, Chee-Seng; Lee, Kim-Sung; Tan, Sharon S.Y.; Chin, Christopher Kuan Yew; Lo, Andrew; Kong, Waiming; Ng, Lee Ching; Cook, Alex R.
2015-01-01
Background: With its tropical rainforest climate, rapid urbanization, and changing demography and ecology, Singapore experiences endemic dengue; the last large outbreak in 2013 culminated in 22,170 cases. In the absence of a vaccine on the market, vector control is the key approach for prevention. Objectives: We sought to forecast the evolution of dengue epidemics in Singapore to provide early warning of outbreaks and to facilitate the public health response to moderate an impending outbreak. Methods: We developed a set of statistical models using least absolute shrinkage and selection operator (LASSO) methods to forecast the weekly incidence of dengue notifications over a 3-month time horizon. This forecasting tool used a variety of data streams and was updated weekly, including recent case data, meteorological data, vector surveillance data, and population-based national statistics. The forecasting methodology was compared with alternative approaches that have been proposed to model dengue case data (seasonal autoregressive integrated moving average and step-down linear regression) by fielding them on the 2013 dengue epidemic, the largest on record in Singapore. Results: Operationally useful forecasts were obtained at a 3-month lag using the LASSO-derived models. Based on the mean average percentage error, the LASSO approach provided more accurate forecasts than the other methods we assessed. We demonstrate its utility in Singapore’s dengue control program by providing a forecast of the 2013 outbreak for advance preparation of outbreak response. Conclusions: Statistical models built using machine learning methods such as LASSO have the potential to markedly improve forecasting techniques for recurrent infectious disease outbreaks such as dengue. Citation: Shi Y, Liu X, Kok SY, Rajarethinam J, Liang S, Yap G, Chong CS, Lee KS, Tan SS, Chin CK, Lo A, Kong W, Ng LC, Cook AR. 2016. Three-month real-time dengue forecast models: an early warning system for outbreak alerts and policy decision support in Singapore. Environ Health Perspect 124:1369–1375; http://dx.doi.org/10.1289/ehp.1509981 PMID:26662617
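The LASSO-based forecasting setup described above can be sketched generically as an L1-penalized regression on lagged predictors; the lag structure, predictor names and use of scikit-learn's LassoCV are illustrative assumptions, not the study's exact specification.

```python
import pandas as pd
from sklearn.linear_model import LassoCV

def lagged_design(cases, covariates, horizon=12, max_lag=8):
    """Build lagged predictors for forecasting `cases` `horizon` weeks ahead:
    every predictor at time t only uses information from t - horizon or earlier."""
    cols = [cases.shift(horizon + l).rename(f"cases_lag{l}") for l in range(max_lag)]
    cols += [covariates.shift(horizon + l).add_suffix(f"_lag{l}") for l in range(max_lag)]
    X = pd.concat(cols, axis=1).dropna()
    return X, cases.loc[X.index]

# cases: weekly dengue notifications (pd.Series); covariates: DataFrame of
# e.g. temperature, rainfall and vector indices (all hypothetical stand-ins
# for the study's data streams).
# X, y = lagged_design(cases, covariates, horizon=12)
# model = LassoCV(cv=5).fit(X, y)        # the L1 penalty selects a sparse model
# next_forecast = model.predict(X.iloc[[-1]])
```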
How do I know if my forecasts are better? Using benchmarks in hydrological ensemble prediction
NASA Astrophysics Data System (ADS)
Pappenberger, F.; Ramos, M. H.; Cloke, H. L.; Wetterhall, F.; Alfieri, L.; Bogner, K.; Mueller, A.; Salamon, P.
2015-03-01
The skill of a forecast can be assessed by comparing the relative proximity of both the forecast and a benchmark to the observations. Example benchmarks include climatology or a naïve forecast. Hydrological ensemble prediction systems (HEPS) are currently transforming the hydrological forecasting environment but in this new field there is little information to guide researchers and operational forecasters on how benchmarks can be best used to evaluate their probabilistic forecasts. In this study, it is identified that the forecast skill calculated can vary depending on the benchmark selected and that the selection of a benchmark for determining forecasting system skill is sensitive to a number of hydrological and system factors. A benchmark intercomparison experiment is then undertaken using the continuous ranked probability score (CRPS), a reference forecasting system and a suite of 23 different methods to derive benchmarks. The benchmarks are assessed within the operational set-up of the European Flood Awareness System (EFAS) to determine those that are 'toughest to beat' and so give the most robust discrimination of forecast skill, particularly for the spatial average fields that EFAS relies upon. Evaluating against an observed discharge proxy the benchmark that has most utility for EFAS and avoids the most naïve skill across different hydrological situations is found to be meteorological persistency. This benchmark uses the latest meteorological observations of precipitation and temperature to drive the hydrological model. Hydrological long term average benchmarks, which are currently used in EFAS, are very easily beaten by the forecasting system and the use of these produces much naïve skill. When decomposed into seasons, the advanced meteorological benchmarks, which make use of meteorological observations from the past 20 years at the same calendar date, have the most skill discrimination. They are also good at discriminating skill in low flows and for all catchment sizes. Simpler meteorological benchmarks are particularly useful for high flows. Recommendations for EFAS are to move to routine use of meteorological persistency, an advanced meteorological benchmark and a simple meteorological benchmark in order to provide a robust evaluation of forecast skill. This work provides the first comprehensive evidence on how benchmarks can be used in evaluation of skill in probabilistic hydrological forecasts and which benchmarks are most useful for skill discrimination and avoidance of naïve skill in a large scale HEPS. It is recommended that all HEPS use the evidence and methodology provided here to evaluate which benchmarks to employ; so forecasters can have trust in their skill evaluation and will have confidence that their forecasts are indeed better.
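The skill comparison against a benchmark described above is typically summarized as a skill score; the sketch below computes a sample-based CRPS for an ensemble and the corresponding CRPS skill score relative to a chosen benchmark (climatology, persistency, etc.). The formulas are the standard ones; the function names are only illustrative.

```python
import numpy as np

def crps_ensemble(members, obs):
    """Sample-based CRPS of one ensemble forecast against one observation."""
    members = np.asarray(members, dtype=float)
    term1 = np.mean(np.abs(members - obs))
    term2 = 0.5 * np.mean(np.abs(members[:, None] - members[None, :]))
    return term1 - term2

def crps_skill_score(forecast_crps, benchmark_crps):
    """CRPSS: 1 is perfect, 0 matches the benchmark, negative is worse.
    The reported skill depends on which benchmark is chosen, which is the
    central point of the study above."""
    return 1.0 - np.mean(forecast_crps) / np.mean(benchmark_crps)
```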
NASA Astrophysics Data System (ADS)
Sivavaraprasad, G.; Venkata Ratnam, D.
2017-07-01
Ionospheric delay is one of the major atmospheric effects on the performance of satellite-based radio navigation systems. It limits the accuracy and availability of Global Positioning System (GPS) measurements, which are relied upon for critical societal and safety applications. The temporal and spatial gradients of ionospheric total electron content (TEC) are driven by several a priori unknown geophysical conditions and solar-terrestrial phenomena. The prediction of ionospheric delay is therefore challenging, especially over the Indian subcontinent, and an appropriate short/long-term ionospheric delay forecasting model is necessary. Hence, the intent of this paper is to forecast ionospheric delays by considering day-to-day, monthly and seasonal ionospheric TEC variations. GPS-TEC data (January 2013-December 2013) are extracted from a multi-frequency GPS receiver established at the K L University, Vaddeswaram, Guntur station (geographic: 16.37°N, 80.37°E; geomagnetic: 7.44°N, 153.75°E), India. An evaluation, in terms of forecasting capability, of three ionospheric time delay models - an Auto Regressive Moving Average (ARMA) model, an Auto Regressive Integrated Moving Average (ARIMA) model, and a Holt-Winters model - is presented. The performances of these models are evaluated through error measurement analysis during both geomagnetically quiet and disturbed days. It is found that the ARMA model forecasts the ionospheric delay effectively, with an accuracy of 82-94%, about 10% better than the ARIMA and Holt-Winters models. Moreover, the modeled VTEC derived from the International Reference Ionosphere (IRI-2012) model and the new global TEC model, the Neustrelitz TEC Model (NTCM-GL), have been compared with the forecasted VTEC values of the ARMA, ARIMA and Holt-Winters models during geomagnetically quiet days. The forecast results indicate that the ARMA model would be useful for setting up an early warning system for ionospheric disturbances at low-latitude regions.
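For illustration only, a minimal ARMA forecast of a synthetic VTEC-like series using statsmodels; the order (2, 0, 1), the 30-day horizon and the seasonal toy data are assumptions, not the configuration or data of the study.

```python
# Hedged ARMA forecasting sketch (ARMA = ARIMA with d = 0); data are synthetic.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(2)
t = np.arange(365)
vtec = 20 + 10 * np.sin(2 * np.pi * t / 365) + rng.normal(0, 1.5, t.size)

train, test = vtec[:-30], vtec[-30:]
fit = ARIMA(train, order=(2, 0, 1)).fit()   # illustrative order, not tuned
forecast = fit.forecast(steps=30)
accuracy = 100 - np.mean(np.abs((test - forecast) / test)) * 100
print(f"Forecast accuracy over the last 30 days: {accuracy:.1f}%")
```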
Bayesian flood forecasting methods: A review
NASA Astrophysics Data System (ADS)
Han, Shasha; Coulibaly, Paulin
2017-08-01
Over the past few decades, floods have been among the most common and most widely distributed natural disasters in the world. If floods could be accurately forecasted in advance, their negative impacts could be greatly reduced. It is widely recognized that quantification and reduction of the uncertainty associated with hydrologic forecasts is of great importance for flood estimation and rational decision making. The Bayesian forecasting system (BFS) offers an ideal theoretical framework for uncertainty quantification that can be developed for probabilistic flood forecasting via any deterministic hydrologic model. It provides a suitable theoretical structure, empirically validated models and reasonable analytic-numerical computation methods, and can be developed into various Bayesian forecasting approaches. This paper presents a comprehensive review of Bayesian forecasting approaches applied in flood forecasting from 1999 to the present. The review starts with an overview of the fundamentals of the BFS and recent advances in the BFS, followed by BFS applications in river stage forecasting and real-time flood forecasting; it then moves to a critical analysis evaluating the advantages and limitations of Bayesian forecasting methods and other predictive uncertainty assessment approaches in flood forecasting, and finally discusses future research directions in Bayesian flood forecasting. Results show that the Bayesian flood forecasting approach is an effective and advanced way to perform flood estimation: it considers all sources of uncertainty and produces a predictive distribution of the river stage, river discharge or runoff, and thus gives more accurate and reliable flood forecasts. Some emerging Bayesian forecasting methods (e.g. the ensemble Bayesian forecasting system and Bayesian multi-model combination) were shown to overcome the limitations of a single model or fixed model weights and to effectively reduce predictive uncertainty. In recent years, various Bayesian flood forecasting approaches have been developed and widely applied, but there is still room for improvement. Future research in the context of Bayesian flood forecasting should focus on the assimilation of various sources of newly available information and the improvement of predictive performance assessment methods.
NASA Astrophysics Data System (ADS)
Engin, Doruk; Mathason, Brian; Storm, Mark
2017-08-01
Global wind measurements are critically needed to improve and extend NOAA weather forecasting that impacts U.S. economic activity such as agriculture crop production, as well as hurricane forecasting, flooding, and FEMA disaster planning. NASA and the 2007 National Research Council (NRC) Earth Science Decadal Study have also identified global wind measurements as critical for global change research. NASA has conducted aircraft-based wind lidar measurements using 2 μm Ho:YLF lasers, which have shown that robust wind measurements can be made. Fibertek designed and demonstrated a high-efficiency, 100 W average power continuous wave (CW) 1940 nm thulium (Tm)-doped fiber laser breadboard system meeting all requirements for a NASA Earth Science spaceflight 2 μm Ho:YLF pump laser. Our preliminary design shows that it is possible to package the laser for high-reliability spaceflight operation in an ultra-compact 2″x8″x14″ size at a weight of <8.5 lbs. A spaceflight 100 W polarization-maintaining (PM) Tm laser provides a path to space for a pulsed, Q-switched 2 μm Ho:YLF laser with 30-80 mJ/pulse at 100-200 Hz repetition rates.
Averaging business cycles vs. myopia: Do we need a long term vision when developing IRP?
DOE Office of Scientific and Technical Information (OSTI.GOV)
McDonald, C.; Gupta, P.C.
1995-05-01
Utility demand forecasting is inherently imprecise due to the number of uncertainties resulting from business cycles, policy making, technology breakthroughs, national and international political upheavals and the limitations of the forecasting tools. This implies that revisions based primarily on recent experience could lead to unstable forecasts. Moreover, new planning tools are required that provide an explicit consideration of uncertainty and lead to flexible and robust planning decisions.
NASA Astrophysics Data System (ADS)
Chen, Y.; Li, J.; Xu, H.
2016-01-01
Physically based distributed hydrological models (hereafter referred to as PBDHMs) divide the terrain of the whole catchment into a number of grid cells at fine resolution and assimilate different terrain data and precipitation to different cells. They are regarded as having the potential to improve catchment hydrological process simulation and prediction capability. Early on, physically based distributed hydrological models were assumed to derive their parameters from terrain properties directly, so that there would be no need to calibrate model parameters; unfortunately, the uncertainties associated with this derivation are very high, which has limited their application in flood forecasting, so parameter optimization may still be necessary. There are two main purposes for this study: the first is to propose a parameter optimization method for physically based distributed hydrological models in catchment flood forecasting using a particle swarm optimization (PSO) algorithm, and to test and improve its performance; the second is to explore the possibility of improving the flood forecasting capability of physically based distributed hydrological models through parameter optimization. In this paper, based on the scalar concept, a general framework for parameter optimization of PBDHMs for catchment flood forecasting is first proposed that could be used for all PBDHMs. Then, with the Liuxihe model as the study model, which is a physically based distributed hydrological model proposed for catchment flood forecasting, an improved PSO algorithm is developed for parameter optimization of the Liuxihe model in catchment flood forecasting. The improvements include adoption of a linearly decreasing inertia weight strategy to change the inertia weight and an arccosine function strategy to adjust the acceleration coefficients. This method has been tested in two catchments of different sizes in southern China, and the results show that the improved PSO algorithm can be used for Liuxihe model parameter optimization effectively and can largely improve the model's capability in catchment flood forecasting, thus showing that parameter optimization is necessary to improve the flood forecasting capability of physically based distributed hydrological models. It was also found that the appropriate particle number and maximum evolution number of the PSO algorithm for Liuxihe model catchment flood forecasting are 20 and 30, respectively.
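As a rough illustration of one of the ingredients named above, here is a generic PSO loop with a linearly decreasing inertia weight. The objective is a stand-in sphere function rather than a hydrological error metric, the bounds and coefficients are arbitrary, and the arccosine adjustment of the acceleration coefficients is omitted; this is not the Liuxihe calibration code.

```python
# Illustrative PSO with linearly decreasing inertia weight (toy objective).
import numpy as np

def pso(objective, dim, n_particles=20, max_iter=30,
        w_start=0.9, w_end=0.4, c1=2.0, c2=2.0, bounds=(-5.0, 5.0)):
    rng = np.random.default_rng(3)
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n_particles, dim))
    v = np.zeros_like(x)
    pbest = x.copy()
    pbest_val = np.array([objective(p) for p in x])
    gbest = pbest[np.argmin(pbest_val)].copy()

    for it in range(max_iter):
        w = w_start - (w_start - w_end) * it / max_iter   # linear decrease
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        vals = np.array([objective(p) for p in x])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        gbest = pbest[np.argmin(pbest_val)].copy()
    return gbest, pbest_val.min()

best, best_val = pso(lambda p: np.sum(p ** 2), dim=4)  # stand-in objective
print(best.round(3), round(best_val, 5))
```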
Goode, Daniel J.; Senior, Lisa A.; Subah, Ali; Jaber, Ayman
2013-01-01
Changes in groundwater levels and salinity in six groundwater basins in Jordan were characterized by using linear trends fit to well-monitoring data collected from 1960 to early 2011. On the basis of data for 117 wells, groundwater levels in the six basins were declining, on average about -1 meter per year (m/yr), in 2010. The highest average rate of decline, -1.9 m/yr, occurred in the Jordan Side Valleys basin, and on average no decline occurred in the Hammad basin. The highest rate of decline for an individual well was -9 m/yr. Aquifer saturated thickness, a measure of water storage, was forecast for year 2030 by using linear extrapolation of the groundwater-level trend in 2010. From 30 to 40 percent of the saturated thickness, on average, was forecast to be depleted by 2030. Five percent of the wells evaluated were forecast to have zero saturated thickness by 2030. Electrical conductivity was used as a surrogate for salinity (total dissolved solids). Salinity trends in groundwater were much more variable and less linear than groundwater-level trends. The long-term linear salinity trend at most of the 205 wells evaluated was not increasing, although salinity trends are increasing in some areas. The salinity in about 58 percent of the wells in the Amman-Zarqa basin was substantially increasing, and the salinity in Hammad basin showed a long-term increasing trend. Salinity increases were not always observed in areas with groundwater-level declines. The highest rates of salinity increase were observed in regional discharge areas near groundwater pumping centers.
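A toy version of the trend-and-extrapolation step described above: fit a linear groundwater-level trend and project saturated thickness to 2030. The data, the aquifer-bottom datum and the resulting percentages are synthetic, not the Jordan monitoring records.

```python
# Synthetic illustration of linear-trend extrapolation of saturated thickness.
import numpy as np

rng = np.random.default_rng(4)
years = np.arange(1990, 2011)
water_level = 100.0 - 1.0 * (years - 1990) + rng.normal(0, 0.5, years.size)

slope, intercept = np.polyfit(years, water_level, 1)   # slope in m/yr
level_2030 = slope * 2030 + intercept

aquifer_bottom = 40.0                                   # hypothetical datum
sat_2010 = slope * 2010 + intercept - aquifer_bottom
sat_2030 = max(level_2030 - aquifer_bottom, 0.0)
print(f"Trend: {slope:.2f} m/yr; forecast depletion by 2030: "
      f"{100 * (1 - sat_2030 / sat_2010):.0f}% of 2010 saturated thickness")
```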
Gan, Ryan W; Ford, Bonne; Lassman, William; Pfister, Gabriele; Vaidyanathan, Ambarish; Fischer, Emily; Volckens, John; Pierce, Jeffrey R; Magzamen, Sheryl
2017-03-01
Climate forecasts predict an increase in the frequency and intensity of wildfires. Associations between health outcomes and population exposure to smoke from the 2012 Washington wildfires were compared using surface monitors, chemical-weather models, and a novel method blending three exposure information sources. The association between smoke particulate matter ≤2.5 μm in diameter (PM2.5) and cardiopulmonary hospital admissions occurring in Washington from 1 July to 31 October 2012 was evaluated using a time-stratified case-crossover design. Hospital admissions aggregated by ZIP code were linked with population-weighted daily average concentrations of smoke PM2.5 estimated using three distinct methods: a simulation with the Weather Research and Forecasting with Chemistry (WRF-Chem) model, a kriged interpolation of PM2.5 measurements from surface monitors, and a geographically weighted ridge regression (GWR) that blended inputs from WRF-Chem, satellite observations of aerosol optical depth, and kriged PM2.5. A 10 μg/m3 increase in GWR smoke PM2.5 was associated with an 8% increased risk of asthma-related hospital admissions (odds ratio (OR): 1.076, 95% confidence interval (CI): 1.019-1.136); other smoke estimation methods yielded similar results. However, point estimates for chronic obstructive pulmonary disease (COPD) differed by smoke PM2.5 exposure method: a 10 μg/m3 increase using GWR was significantly associated with increased risk of COPD (OR: 1.084, 95% CI: 1.026-1.145) but not when using WRF-Chem (OR: 0.986, 95% CI: 0.931-1.045). The magnitude (OR) and uncertainty (95% CI) of associations between smoke PM2.5 and hospital admissions depended on the estimation method used and the outcome evaluated. The choice of smoke exposure estimation method can thus impact the overall conclusions of a study.
NASA Technical Reports Server (NTRS)
Warner, Thomas T.; Key, Lawrence E.; Lario, Annette M.
1989-01-01
The effects of horizontal and vertical data resolution, data density, data location, different objective analysis algorithms, and measurement error on mesoscale-forecast accuracy are studied with observing-system simulation experiments. Domain-averaged errors are shown to generally decrease with time. It is found that the vertical distribution of error growth depends on the initial vertical distribution of the error itself. Larger gravity-inertia wave noise is produced in forecasts with coarser vertical data resolution. The use of a low vertical resolution observing system with three data levels leads to more forecast errors than moderate and high vertical resolution observing systems with 8 and 14 data levels. Also, with poor vertical resolution in soundings, the initial and forecast errors are not affected by the horizontal data resolution.
NASA Astrophysics Data System (ADS)
Sirch, Tobias; Bugliaro, Luca; Zinner, Tobias; Möhrlein, Matthias; Vazquez-Navarro, Margarita
2017-02-01
A novel approach for the nowcasting of clouds and direct normal irradiance (DNI) based on the Spinning Enhanced Visible and Infrared Imager (SEVIRI) aboard the geostationary Meteosat Second Generation (MSG) satellite is presented for a forecast horizon of up to 120 min. The basis of the algorithm is an optical flow method used to derive cloud motion vectors for all cloudy pixels. To facilitate forecasts over a relevant time period, a classification of clouds into objects and a weighted triangular interpolation of clear-sky regions are used. Low and high level clouds are forecasted separately because they show different velocities and motion directions. Additionally, a distinction between advective and convective clouds, together with an intensity correction for quickly thinning convective clouds, is integrated. The DNI is calculated from the forecasted optical thickness of the low and high level clouds. In order to quantitatively assess the performance of the algorithm, a forecast validation against MSG/SEVIRI observations is performed for a period of 2 months. Error rates and Hanssen-Kuiper skill scores are derived for the forecasted cloud masks. For a 5 min forecast, more than 95% of all pixels are predicted correctly as cloudy or clear in most cloud situations. This number decreases to 80-95% for a 2 h forecast, depending on cloud type and vertical cloud level. Hanssen-Kuiper skill scores for the cloud mask go down to 0.6-0.7 for a 2 h forecast. Compared to persistence, an improvement of the forecast horizon by a factor of 2 is reached for all forecasts up to 2 h. A comparison of forecasted optical thickness distributions and DNI against observations yields correlation coefficients larger than 0.9 for 15 min forecasts and around 0.65 for 2 h forecasts.
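The Hanssen-Kuiper skill score used above is the true skill statistic, POD minus POFD, computed from a 2x2 contingency table of forecast versus observed cloud mask. The sketch below uses invented pixel data and is only meant to show the computation.

```python
# Sketch of the Hanssen-Kuiper (true skill statistic) for a binary cloud mask.
import numpy as np

def hanssen_kuiper(forecast_cloudy, observed_cloudy):
    """HK = POD - POFD = a/(a+c) - b/(b+d) from the 2x2 contingency table."""
    f = np.asarray(forecast_cloudy, dtype=bool)
    o = np.asarray(observed_cloudy, dtype=bool)
    a = np.sum(f & o)        # hits
    b = np.sum(f & ~o)       # false alarms
    c = np.sum(~f & o)       # misses
    d = np.sum(~f & ~o)      # correct negatives
    return a / (a + c) - b / (b + d)

rng = np.random.default_rng(5)
obs = rng.random(10000) < 0.4                          # 40% cloudy pixels
fcst = np.where(rng.random(10000) < 0.85, obs, ~obs)   # ~85% of pixels correct
print("HK skill score:", round(hanssen_kuiper(fcst, obs), 2))
```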
Unlocking the climate riddle in forested ecosystems
Greg C. Liknes; Christopher W. Woodall; Brian F. Walters; Sara A. Goeking
2012-01-01
Climate information is often used as a predictor in ecological studies, where temporal averages are typically based on climate normals (30-year means) or seasonal averages. While ensemble projections of future climate forecast a higher global average annual temperature, they also predict increased climate variability. It remains to be seen whether forest ecosystems...
Stream-flow forecasting using extreme learning machines: A case study in a semi-arid region in Iraq
NASA Astrophysics Data System (ADS)
Yaseen, Zaher Mundher; Jaafar, Othman; Deo, Ravinesh C.; Kisi, Ozgur; Adamowski, Jan; Quilty, John; El-Shafie, Ahmed
2016-11-01
Monthly stream-flow forecasting can yield important information for hydrological applications, including sustainable design of rural and urban water management systems, optimization of water resource allocations, water use, pricing and water quality assessment, and agriculture and irrigation operations. The motivation for exploring and developing expert predictive models is an ongoing endeavor for hydrological applications. In this study, the potential of a relatively new data-driven method, the extreme learning machine (ELM), was explored for forecasting monthly stream-flow discharge rates in the Tigris River, Iraq. The ELM algorithm is a single-layer feedforward neural network (SLFN) that randomly selects the input weights and hidden-layer biases and analytically determines the output weights of the SLFN. Based on the partial autocorrelation functions of the historical stream-flow data, a set of five input combinations with lagged stream-flow values is employed to establish the best forecasting model. A comparative investigation is conducted to evaluate the performance of the ELM against other data-driven models: support vector regression (SVR) and the generalized regression neural network (GRNN). Forecasting metrics, defined as the correlation coefficient (r), Nash-Sutcliffe efficiency (ENS), Willmott's index (WI), root-mean-square error (RMSE) and mean absolute error (MAE), computed between the observed and forecasted stream-flow data are employed to assess the ELM model's effectiveness. The results revealed that the ELM model outperformed the SVR and GRNN models across a number of statistical measures. In quantitative terms, the superiority of the ELM over the SVR and GRNN models was exhibited by ENS = 0.578, 0.378 and 0.144, r = 0.799, 0.761 and 0.468, and WI = 0.853, 0.802 and 0.689, respectively, and the ELM model attained an RMSE lower by approximately 21.3% (relative to SVR) and approximately 44.7% (relative to GRNN). Based on the findings of this study, several recommendations were made for further exploration of the ELM model in hydrological forecasting problems.
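A minimal ELM regressor in the spirit described above: random input weights and biases, hidden-layer activations, and output weights solved analytically with a pseudo-inverse. The lag structure, hidden-layer size and data are synthetic stand-ins, not the Tigris configuration.

```python
# Minimal extreme learning machine (ELM) for one-step-ahead forecasting.
import numpy as np

class ELM:
    def __init__(self, n_hidden=30, seed=6):
        self.n_hidden = n_hidden
        self.rng = np.random.default_rng(seed)

    def fit(self, X, y):
        self.W = self.rng.normal(size=(X.shape[1], self.n_hidden))  # random input weights
        self.b = self.rng.normal(size=self.n_hidden)                # random biases
        H = np.tanh(X @ self.W + self.b)                            # hidden activations
        self.beta = np.linalg.pinv(H) @ y                           # analytic output weights
        return self

    def predict(self, X):
        return np.tanh(X @ self.W + self.b) @ self.beta

# One-step-ahead forecasting from five lagged values (synthetic flow series).
rng = np.random.default_rng(7)
flow = np.sin(np.arange(300) * 2 * np.pi / 12) + rng.normal(0, 0.1, 300)
X = np.column_stack([flow[i:i - 5] for i in range(5)])   # lags t-5 .. t-1
y = flow[5:]
model = ELM().fit(X[:250], y[:250])
rmse = np.sqrt(np.mean((model.predict(X[250:]) - y[250:]) ** 2))
print("Test RMSE:", round(rmse, 3))
```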
Sea Ice Outlook for September 2015 June Report - NASA Global Modeling and Assimilation Office
NASA Technical Reports Server (NTRS)
Cullather, Richard I.; Keppenne, Christian L.; Marshak, Jelena; Pawson, Steven; Schubert, Siegfried D.; Suarez, Max J.; Vernieres, Guillaume; Zhao, Bin
2015-01-01
The recent decline in perennial sea ice cover in the Arctic Ocean is a topic of enormous scientific interest and has relevance to a broad variety of scientific disciplines and human endeavors, including biological and physical oceanography, atmospheric circulation, high-latitude ecology, the sustainability of indigenous communities, commerce, and resource exploration. A credible seasonal prediction of sea ice extent would be of substantial use to many of the stakeholders in these fields and may also reveal details of the physical processes that drive the current trends in the ice cover. Forecasts are challenging due in part to limitations in the polar observing network, the large variability in the climate system, and an incomplete knowledge of the significant processes. Nevertheless, it is useful to understand the current capabilities of high-latitude seasonal forecasting and to identify areas where such forecasts may be improved. Since 2008 the Arctic Research Consortium of the United States (ARCUS) has conducted a seasonal forecasting contest in which the average Arctic sea ice extent for the month of September (the month of the annual extent minimum) is predicted from available forecasts in early June, July, and August. The competition is known as the Sea Ice Outlook (SIO) and recently came under the auspices of the Sea Ice Prediction Network (SIPN), a multi-agency-funded project to evaluate the SIO. The forecasts are submitted based on modeling, statistical, and heuristic methods. Forecasts of Arctic sea ice extent from the GMAO are derived from the seasonal prediction system of the NASA Goddard Earth Observing System model, version 5 (GEOS-5), a coupled atmosphere and ocean general circulation model (AOGCM). The projections are made in order to understand the relative skill of the forecasting system and to determine the effects of future improvements to the system. This year's prediction is for a September average Arctic ice extent of 5.03 ± 0.41 million km2.
Using total precipitable water anomaly as a forecast aid for heavy precipitation events
NASA Astrophysics Data System (ADS)
VandenBoogart, Lance M.
Heavy precipitation events are of interest to weather forecasters, local government officials, and the Department of Defense. These events can cause flooding which endangers lives and property. Military concerns include decreased trafficability for military vehicles, which hinders both war- and peace-time missions. Even in data-rich areas such as the United States, it is difficult to determine when and where a heavy precipitation event will occur. The challenges are compounded in data-denied regions. The hypothesis that total precipitable water anomaly (TPWA) will be positive and increasing preceding heavy precipitation events is tested in order to establish an understanding of TPWA evolution. Results are then used to create a precipitation forecast aid. The operational, 16 km-gridded, 6-hourly TPWA product developed at the Cooperative Institute for Research in the Atmosphere (CIRA) compares a blended TPW product with a TPW climatology to give a percent of normal TPWA value. TPWA evolution is examined for 84 heavy precipitation events which occurred between August 2010 and November 2011. An algorithm which uses various TPWA thresholds derived from the 84 events is then developed and tested using dichotomous contingency table verification statistics to determine the extent to which satellite-based TPWA might be used to aid in forecasting precipitation over mesoscale domains. The hypothesis of positive and increasing TPWA preceding heavy precipitation events is supported by the analysis. Event-average TPWA rises for 36 hours and peaks at 154% of normal at the event time. The average precipitation event detected by the forecast algorithm is not of sufficient magnitude to be termed a "heavy" precipitation event; however, the algorithm adds skill to a climatological precipitation forecast. Probability of detection is low and false alarm ratios are large, thus qualifying the algorithm's current use as an aid rather than a deterministic forecast tool. The algorithm's ability to be easily modified and quickly run gives it potential for future use in precipitation forecasting.
NASA Astrophysics Data System (ADS)
Kruglova, Ekaterina; Kulikova, Irina; Khan, Valentina; Tischenko, Vladimir
2017-04-01
The subseasonal predictability of low-frequency modes and atmospheric circulation regimes is investigated using outputs from the global semi-Lagrangian (SL-AV) model of the Hydrometcentre of Russia and the Institute of Numerical Mathematics of the Russian Academy of Sciences. Teleconnection indices (AO, WA, EA, NAO, EU, WP, PNA) are used as quantitative characteristics of low-frequency variability to identify zonal and meridional flow regimes, with a focus on the distribution of high-impact weather patterns over Northern Eurasia. The predictability of weekly and monthly averaged indices is estimated by diagnostic verification of forecast and reanalysis data covering the hindcast period, and also using the recommended WMO quantitative criteria. Characteristics of the low-frequency variability are discussed. In particular, it is found that the meridional flow regimes are reproduced by SL-AV better for the summer season than for the winter period. It is shown that the model's deterministic forecast (ensemble mean) skill at week 1 (days 1-7) is noticeably better than that of climatological forecasts. The decrease in skill scores at week 2 (days 8-14) and week 3 (days 15-21) is explained by deficiencies in the modeling system and inaccurate initial conditions. A slight improvement in model skill is noticed at week 4 (days 22-28), when the state of the atmosphere is determined more by the flow of energy from outside. The reliability of forecasts of monthly (days 1-30) averaged indices is comparable to that at week 1 (days 1-7). Numerical experiments demonstrated that forecast accuracy can be improved (and thus the limit of practical predictability extended) through the use of a probabilistic approach based on ensemble forecasts. It is shown that the quality of forecasts of circulation regimes such as blocking is higher than that of zonal flow.
Biggerstaff, Matthew; Johansson, Michael; Alper, David; Brooks, Logan C; Chakraborty, Prithwish; Farrow, David C; Hyun, Sangwon; Kandula, Sasikiran; McGowan, Craig; Ramakrishnan, Naren; Rosenfeld, Roni; Shaman, Jeffrey; Tibshirani, Rob; Tibshirani, Ryan J; Vespignani, Alessandro; Yang, Wan; Zhang, Qian; Reed, Carrie
2018-02-24
Accurate forecasts could enable more informed public health decisions. Since 2013, CDC has worked with external researchers to improve influenza forecasts by coordinating seasonal challenges for the United States and the 10 Health and Human Services regions. Forecasted targets for the 2014-15 challenge were the onset week, peak week, and peak intensity of the season and the weekly percent of outpatient visits due to influenza-like illness (ILI) 1-4 weeks in advance. We used a logarithmic scoring rule to score the weekly forecasts, averaged the scores over an evaluation period, and then exponentiated the resulting logarithmic score. Poor forecasts had a score near 0, and perfect forecasts a score of 1. Five teams submitted forecasts from seven different models. At the national level, the team scores for onset week ranged from <0.01 to 0.41, peak week ranged from 0.08 to 0.49, and peak intensity ranged from <0.01 to 0.17. The scores for predictions of ILI 1-4 weeks in advance ranged from 0.02 to 0.38 and were highest 1 week ahead. Forecast skill varied by HHS region. Forecasts can predict epidemic characteristics that inform public health actions. CDC, state and local health officials, and researchers are working together to improve forecasts. Published by Elsevier B.V.
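The scoring described above (log score of the probability assigned to the eventually observed outcome, averaged and then exponentiated) can be written in a few lines; the probabilities below are invented for illustration.

```python
# Sketch of the exponentiated average logarithmic score used to rank teams.
import numpy as np

def forecast_score(prob_assigned_to_truth):
    """Exponentiated mean log score; near 0 for poor forecasts, 1 for perfect."""
    p = np.clip(np.asarray(prob_assigned_to_truth, dtype=float), 1e-10, 1.0)
    return float(np.exp(np.mean(np.log(p))))

# Probabilities a hypothetical team assigned to the observed peak week,
# over six forecast submissions in the evaluation period.
weekly_probs = [0.35, 0.50, 0.20, 0.45, 0.60, 0.30]
print("Team score:", round(forecast_score(weekly_probs), 2))
```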
Forecasting the quality of water-suppressed 1 H MR spectra based on a single-shot water scan.
Kyathanahally, Sreenath P; Kreis, Roland
2017-08-01
To investigate whether an initial non-water-suppressed acquisition that provides information about the signal-to-noise ratio (SNR) and linewidth is enough to forecast the maximally achievable final spectral quality and thus inform the operator whether the foreseen number of averages and achieved field homogeneity is adequate. A large range of spectra with varying SNR and linewidth was simulated and fitted with popular fitting programs to determine the dependence of fitting errors on linewidth and SNR. A tool to forecast variance based on a single acquisition was developed and its performance evaluated on simulated and in vivo data obtained at 3 Tesla from various brain regions and acquisition settings. A strong correlation to real uncertainties in estimated metabolite contents was found for the forecast values and the Cramer-Rao lower bounds obtained from the water-suppressed spectra. It appears to be possible to forecast the best-case errors associated with specific metabolites to be found in model fits of water-suppressed spectra based on a single water scan. Thus, nonspecialist operators will be able to judge ahead of time whether the planned acquisition can possibly be of sufficient quality to answer the targeted clinical question or whether it needs more averages or improved shimming. Magn Reson Med 78:441-451, 2017. © 2016 International Society for Magnetic Resonance in Medicine.
Forecasting the student-professor matches that result in unusually effective teaching.
Gross, Jennifer; Lakey, Brian; Lucas, Jessica L; LaCross, Ryan; Plotkowski, Andrea R; Winegard, Bo
2015-03-01
Two important influences on students' evaluations of teaching are relationship and professor effects. Relationship effects reflect unique matches between students and professors such that some professors are unusually effective for some students, but not for others. Professor effects reflect inter-rater agreement that some professors are more effective than others, on average across students. We attempted to forecast students' evaluations of live lectures from brief, video-recorded teaching trailers. Participants were 145 college students (74% female) enrolled in introductory psychology courses at a public university in the Great Lakes region of the United States. Students viewed trailers early in the semester and attended live lectures months later. Because subgroups of students viewed the same professors, statistical analyses could isolate professor and relationship effects. Evaluations were influenced strongly by relationship and professor effects, and students' evaluations of live lectures could be forecasted from students' evaluations of teaching trailers. That is, we could forecast the individual students who would respond unusually well to a specific professor (relationship effects). We could also forecast which professors elicited better evaluations in live lectures, on average across students (professor effects). Professors who elicited unusually good evaluations in some students also elicited better memory for lectures in those students. It appears possible to forecast relationship and professor effects on teaching evaluations by presenting brief teaching trailers to students. Thus, it might be possible to develop online recommender systems to help match students and professors so that unusually effective teaching emerges. © 2014 The Authors. British Journal of Educational Psychology published by John Wiley & Sons Ltd on behalf of the British Psychological Society.
ERIC Educational Resources Information Center
Moore, Corey L.; Wang, Ningning; Washington, Janique Tynez
2017-01-01
Purpose: This study assessed and demonstrated the efficacy of two select empirical forecast models (i.e., autoregressive integrated moving average [ARIMA] model vs. grey model [GM]) in accurately predicting state vocational rehabilitation agency (SVRA) rehabilitation success rate trends across six different racial and ethnic population cohorts…
Falat, Lukas; Marcek, Dusan; Durisova, Maria
2016-01-01
This paper deals with the application of quantitative soft computing prediction models in the financial domain, as reliable and accurate prediction models can be very helpful in the management decision-making process. The authors suggest a new hybrid neural network which is a combination of the standard RBF neural network, a genetic algorithm, and a moving average. The moving average is intended to enhance the outputs of the network using the error part of the original neural network. The authors test the suggested model on high-frequency time series data of USD/CAD and examine its ability to forecast exchange rate values for a horizon of one day. To determine the forecasting efficiency, they perform a comparative statistical out-of-sample analysis of the tested model against autoregressive models and the standard neural network. They also incorporate a genetic algorithm as an optimization technique for adapting the parameters of the ANN, which is then compared with standard backpropagation and backpropagation combined with the K-means clustering algorithm. Finally, the authors find that their suggested hybrid neural network is able to produce more accurate forecasts than the standard models and can be helpful in reducing the risk of making bad decisions in the decision-making process.
Forecasting Daily Volume and Acuity of Patients in the Emergency Department.
Calegari, Rafael; Fogliatto, Flavio S; Lucini, Filipe R; Neyeloff, Jeruza; Kuchenbecker, Ricardo S; Schaan, Beatriz D
2016-01-01
This study aimed at analyzing the performance of four forecasting models in predicting the demand for medical care in terms of daily visits in an emergency department (ED) that handles high complexity cases, testing the influence of climatic and calendrical factors on demand behavior. We tested different mathematical models to forecast ED daily visits at Hospital de Clínicas de Porto Alegre (HCPA), which is a tertiary care teaching hospital located in Southern Brazil. Model accuracy was evaluated using mean absolute percentage error (MAPE), considering forecasting horizons of 1, 7, 14, 21, and 30 days. The demand time series was stratified according to patient classification using the Manchester Triage System's (MTS) criteria. Models tested were the simple seasonal exponential smoothing (SS), seasonal multiplicative Holt-Winters (SMHW), seasonal autoregressive integrated moving average (SARIMA), and multivariate autoregressive integrated moving average (MSARIMA). Performance of models varied according to patient classification, such that SS was the best choice when all types of patients were jointly considered, and SARIMA was the most accurate for modeling demands of very urgent (VU) and urgent (U) patients. The MSARIMA models taking into account climatic factors did not improve the performance of the SARIMA models, independent of patient classification.
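To make the Holt-Winters and MAPE steps concrete, here is a hedged sketch on a synthetic daily-visits series with weekly seasonality; statsmodels' ExponentialSmoothing stands in for the seasonal multiplicative Holt-Winters model named above, and the seasonal period, trend choice and data are assumptions rather than the hospital's actual settings.

```python
# Seasonal multiplicative Holt-Winters forecast scored with MAPE (synthetic data).
import numpy as np
from statsmodels.tsa.holtwinters import ExponentialSmoothing

rng = np.random.default_rng(8)
days = 365
weekly = np.tile([1.1, 1.0, 0.95, 0.9, 0.95, 1.05, 1.15], days // 7 + 1)[:days]
visits = 200 * weekly + rng.normal(0, 8, days)          # synthetic ED visits

train, test = visits[:-30], visits[-30:]
fit = ExponentialSmoothing(train, trend="add", seasonal="mul",
                           seasonal_periods=7).fit()
forecast = fit.forecast(30)
mape = np.mean(np.abs((test - forecast) / test)) * 100
print(f"30-day-horizon MAPE: {mape:.1f}%")
```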
Rodríguez, Nibaldo
2014-01-01
Two smoothing strategies combined with autoregressive integrated moving average (ARIMA) and autoregressive neural network (ANN) models to improve the forecasting of time series are presented. The forecasting strategy is implemented in two stages. In the first stage the time series is smoothed using either 3-point moving average smoothing or singular value decomposition of the Hankel matrix (HSVD). In the second stage, an ARIMA model and two ANNs for one-step-ahead time series forecasting are used. The coefficients of the first ANN are estimated through the particle swarm optimization (PSO) learning algorithm, while the coefficients of the second ANN are estimated with the resilient backpropagation (RPROP) learning algorithm. The proposed models are evaluated using a weekly time series of traffic accidents in the Valparaíso region of Chile, from 2003 to 2012. The best result is given by the combination HSVD-ARIMA, with a MAPE of 0.26%, followed by MA-ARIMA with a MAPE of 1.12%; the worst result is given by the MA-ANN based on PSO, with a MAPE of 15.51%. PMID:25243200
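An illustrative version of the HSVD smoothing stage: embed the series in a Hankel matrix, keep the leading singular components, and reconstruct by anti-diagonal averaging. The window length, rank and weekly series are arbitrary choices for the sketch, not those of the paper.

```python
# Hankel-SVD (HSVD) smoothing of a 1-D series, followed by diagonal averaging.
import numpy as np

def hsvd_smooth(x, window=20, rank=2):
    x = np.asarray(x, dtype=float)
    n = x.size
    k = n - window + 1
    H = np.column_stack([x[i:i + window] for i in range(k)])  # Hankel matrix
    U, s, Vt = np.linalg.svd(H, full_matrices=False)
    H_low = (U[:, :rank] * s[:rank]) @ Vt[:rank]              # low-rank part
    # Anti-diagonal (Hankel) averaging back to a 1-D series.
    smooth = np.zeros(n)
    counts = np.zeros(n)
    for j in range(k):
        smooth[j:j + window] += H_low[:, j]
        counts[j:j + window] += 1
    return smooth / counts

rng = np.random.default_rng(9)
t = np.arange(520)                        # ~10 years of weekly counts
series = 50 + 10 * np.sin(2 * np.pi * t / 52) + rng.normal(0, 3, t.size)
trend = hsvd_smooth(series)               # smoothed input for a second-stage model
print(trend[:5].round(1))
```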
NCEP HYSPLIT SMOKE & DUST Verification. NOAA/NWS/NCEP/EMC
NASA Astrophysics Data System (ADS)
Dutton, John A.; James, Richard P.; Ross, Jeremy D.
2013-06-01
Seasonal probability forecasts produced with numerical dynamics on supercomputers offer great potential value in managing risk and opportunity created by seasonal variability. The skill and reliability of contemporary forecast systems can be increased by calibration methods that use the historical performance of the forecast system to improve the ongoing real-time forecasts. Two calibration methods are applied to seasonal surface temperature forecasts of the US National Weather Service, the European Centre for Medium Range Weather Forecasts, and to a World Climate Service multi-model ensemble created by combining those two forecasts with Bayesian methods. As expected, the multi-model is somewhat more skillful and more reliable than the original models taken alone. The potential value of the multimodel in decision making is illustrated with the profits achieved in simulated trading of a weather derivative. In addition to examining the seasonal models, the article demonstrates that calibrated probability forecasts of weekly average temperatures for leads of 2-4 weeks are also skillful and reliable. The conversion of ensemble forecasts into probability distributions of impact variables is illustrated with degree days derived from the temperature forecasts. Some issues related to loss of stationarity owing to long-term warming are considered. The main conclusion of the article is that properly calibrated probabilistic forecasts possess sufficient skill and reliability to contribute to effective decisions in government and business activities that are sensitive to intraseasonal and seasonal climate variability.
Evaluating probabilistic dengue risk forecasts from a prototype early warning system for Brazil
Lowe, Rachel; Coelho, Caio AS; Barcellos, Christovam; Carvalho, Marilia Sá; Catão, Rafael De Castro; Coelho, Giovanini E; Ramalho, Walter Massa; Bailey, Trevor C; Stephenson, David B; Rodó, Xavier
2016-01-01
Recently, a prototype dengue early warning system was developed to produce probabilistic forecasts of dengue risk three months ahead of the 2014 World Cup in Brazil. Here, we evaluate the categorical dengue forecasts across all microregions in Brazil, using dengue cases reported in June 2014 to validate the model. We also compare the forecast model framework to a null model, based on seasonal averages of previously observed dengue incidence. When considering the ability of the two models to predict high dengue risk across Brazil, the forecast model produced more hits and fewer missed events than the null model, with a hit rate of 57% for the forecast model compared to 33% for the null model. This early warning model framework may be useful to public health services, not only ahead of mass gatherings, but also before the peak dengue season each year, to control potentially explosive dengue epidemics. DOI: http://dx.doi.org/10.7554/eLife.11285.001 PMID:26910315
Knowing what to expect, forecasting monthly emergency department visits: A time-series analysis.
Bergs, Jochen; Heerinckx, Philipe; Verelst, Sandra
2014-04-01
To evaluate an automatic forecasting algorithm in order to predict the number of monthly emergency department (ED) visits one year ahead. We collected retrospective data on the number of monthly visiting patients for a 6-year period (2005-2011) from 4 Belgian hospitals. We used an automated exponential smoothing approach to predict monthly visits during the year 2011 based on the first 5 years of the dataset. Several in-sample and post-sample forecasting accuracy measures were calculated. The automatic forecasting algorithm was able to predict monthly visits with a mean absolute percentage error ranging from 2.64% to 4.8%, indicating an accurate prediction. The mean absolute scaled error ranged from 0.53 to 0.68, indicating that, on average, the forecasts were better than in-sample one-step forecasts from the naïve method. The applied automated exponential smoothing approach provided useful predictions of the number of monthly visits a year in advance. Copyright © 2013 Elsevier Ltd. All rights reserved.
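The mean absolute scaled error quoted above scales the out-of-sample errors by the in-sample one-step error of the naïve (last-value) method, so values below 1 beat that naïve benchmark. The numbers below are illustrative, not the Belgian hospital data.

```python
# Sketch of the mean absolute scaled error (MASE).
import numpy as np

def mase(train, test, forecast):
    naive_mae = np.mean(np.abs(np.diff(train)))        # in-sample naïve one-step error
    return np.mean(np.abs(test - forecast)) / naive_mae

rng = np.random.default_rng(10)
monthly = 4000 + 300 * np.sin(np.arange(72) * 2 * np.pi / 12) \
          + rng.normal(0, 60, 72)
train, test = monthly[:60], monthly[60:]
forecast = train[-12:]                                 # seasonal-naïve stand-in forecast
print("MASE:", round(mase(train, test, forecast), 2))  # < 1 beats the naïve method
```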
Liu, Yan; Watson, Stella C; Gettings, Jenna R; Lund, Robert B; Nordone, Shila K; Yabsley, Michael J; McMahan, Christopher S
2017-01-01
This paper forecasts the 2016 canine Anaplasma spp. seroprevalence in the United States from eight climate, geographic and societal factors. The forecast's construction and an assessment of its performance are described. The forecast is based on a spatial-temporal conditional autoregressive model fitted to over 11 million Anaplasma spp. seroprevalence test results for dogs conducted in the 48 contiguous United States during 2011-2015. The forecast uses county-level data on eight predictive factors, including annual temperature, precipitation, relative humidity, county elevation, forestation coverage, surface water coverage, population density and median household income. Non-static factors are extrapolated into the forthcoming year with various statistical methods. The fitted model and factor extrapolations are used to estimate next year's regional prevalence. The correlation between the observed and model-estimated county-by-county Anaplasma spp. seroprevalence for the five-year period 2011-2015 is 0.902, demonstrating reasonable model accuracy. The weighted correlation (accounting for different sample sizes) between 2015 observed and forecasted county-by-county Anaplasma spp. seroprevalence is 0.987, exhibiting that the proposed approach can be used to accurately forecast Anaplasma spp. seroprevalence. The forecast presented herein can a priori alert veterinarians to areas expected to see Anaplasma spp. seroprevalence beyond the accepted endemic range. The proposed methods may prove useful for forecasting other diseases.
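The weighted correlation reported above accounts for different per-county sample sizes; a minimal sample-size-weighted Pearson correlation is sketched below with synthetic county data, not the Anaplasma test records.

```python
# Sample-size-weighted Pearson correlation (synthetic county-level example).
import numpy as np

def weighted_corr(x, y, w):
    w = w / np.sum(w)
    mx, my = np.sum(w * x), np.sum(w * y)
    cov = np.sum(w * (x - mx) * (y - my))
    return cov / np.sqrt(np.sum(w * (x - mx) ** 2) * np.sum(w * (y - my) ** 2))

rng = np.random.default_rng(11)
observed = rng.beta(2, 30, 500)                        # county seroprevalence
forecasted = observed + rng.normal(0, 0.01, 500)       # close, slightly noisy forecast
n_tests = rng.integers(20, 5000, 500)                  # tests per county (weights)
print("Weighted r:", round(weighted_corr(observed, forecasted, n_tests), 3))
```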
Forecasting of global solar radiation using ANFIS and ARMAX techniques
NASA Astrophysics Data System (ADS)
Muhammad, Auwal; Gaya, M. S.; Aliyu, Rakiya; Aliyu Abdulkadir, Rabi'u.; Dauda Umar, Ibrahim; Aminu Yusuf, Lukuman; Umar Ali, Mudassir; Khairi, M. T. M.
2018-01-01
The cost of procuring measuring devices, together with maintenance and instrument calibration costs, contributes to the difficulty of forecasting global solar radiation in underdeveloped countries. Most of the available regression and mathematical models do not capture the behavior of global solar radiation well. This paper presents a comparison of the Adaptive Neuro Fuzzy Inference System (ANFIS) and the Autoregressive Moving Average with eXogenous terms (ARMAX) model for forecasting global solar radiation. Full-scale (experimental) data from the Nigerian Meteorological Agency, Sultan Abubakar III International Airport, Sokoto, were used to validate the models. The simulation results demonstrated that the ANFIS model, having achieved a MAPE of 5.34%, outperformed the ARMAX model. ANFIS could be a valuable tool for forecasting global solar radiation.
Environmentally-driven ensemble forecasts of dengue fever
NASA Astrophysics Data System (ADS)
Yamana, T. K.; Shaman, J. L.
2017-12-01
Dengue fever is a mosquito-borne viral disease prevalent in the tropics and subtropics, with an estimated 2.5 billion people at risk of transmission. In many areas where dengue is found, disease transmission is seasonal but prone to high inter-annual variability with occasional severe epidemics. Predicting and preparing for periods of higher than average transmission remains a significant public health challenge. Recently, we developed a framework for forecasting dengue incidence using a dynamical model of disease transmission coupled with observational data of dengue cases through data-assimilation methods. Here, we investigate the use of environmental data to drive the disease transmission model. We produce retrospective forecasts of the timing and severity of dengue outbreaks and quantify forecast predictive accuracy.
Using NMME in Region-Specific Operational Seasonal Climate Forecasts
NASA Astrophysics Data System (ADS)
Gronewold, A.; Bolinger, R. A.; Fry, L. M.; Kompoltowicz, K.
2015-12-01
The National Oceanic and Atmospheric Administration's Climate Prediction Center (NOAA/CPC) provides access to a suite of real-time monthly climate forecasts that comprise the North American Multi-Model Ensemble (NMME) in an attempt to meet increasing demands for monthly to seasonal climate prediction. While the graphical map forecasts of the NMME are informative, there is a need to provide decision-makers with probabilistic forecasts specific to their region of interest. Here, we demonstrate the potential application of the NMME to address regional climate projection needs by developing new forecasts of temperature and precipitation for the North American Great Lakes, the largest system of lakes on Earth. Regional operational water budget forecasts rely on these outlooks to initiate monthly forecasts not only of the water budget, but of monthly lake water levels as well. More specifically, we present an alternative for improving existing operational protocols that currently involve a relatively time-consuming and subjective procedure based on interpreting the maps of the NMME. In addition, all forecasts are currently presented in the NMME in a probabilistic format, with equal weighting given to each member of the ensemble. In our new evolution of this product, we provide historical context for the forecasts by superimposing them (in an on-line graphical user interface) on the historical range of observations. Implementation of this new tool has already led to noticeable advantages in regional water budget forecasting, and it has the potential to be transferred to other regional decision-making authorities as well.
Forecast model applications of retrieved three dimensional liquid water fields
NASA Technical Reports Server (NTRS)
Raymond, William H.; Olson, William S.
1990-01-01
Forecasts are made for tropical storm Emily using heating rates derived from the SSM/I physical retrievals described in chapters 2 and 3. Average values of the latent heating rates from the convective and stratiform cloud simulations, used in the physical retrieval, are obtained for individual 1.1 km thick vertical layers. Then, the layer-mean latent heating rates are regressed against the slant path-integrated liquid and ice precipitation water contents to determine the best fit two parameter regression coefficients for each layer. The regression formulae and retrieved precipitation water contents are utilized to infer the vertical distribution of heating rates for forecast model applications. In the forecast model, diabatic temperature contributions are calculated and used in a diabatic initialization, or in a diabatic initialization combined with a diabatic forcing procedure. Our forecasts show that the time needed to spin-up precipitation processes in tropical storm Emily is greatly accelerated through the application of the data.
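A toy version of the per-layer regression step described above: for each vertical layer, the layer-mean latent heating is regressed on the two slant path-integrated water contents to obtain two coefficients per layer. All numbers are synthetic placeholders, not the retrieval values.

```python
# Two-parameter regression of layer-mean heating on liquid and ice water paths,
# fitted independently for each vertical layer (synthetic data).
import numpy as np

rng = np.random.default_rng(12)
n_profiles, n_layers = 500, 10
liquid = rng.gamma(2.0, 1.0, n_profiles)               # slant-path liquid water
ice = rng.gamma(1.5, 1.0, n_profiles)                  # slant-path ice water
X = np.column_stack([liquid, ice])                     # two predictors

coeffs = []
for k in range(n_layers):                              # one fit per 1.1 km layer
    heating = 0.6 * liquid + 0.3 * ice + rng.normal(0, 0.2, n_profiles)
    a, _, _, _ = np.linalg.lstsq(X, heating, rcond=None)
    coeffs.append(a)
coeffs = np.array(coeffs)                              # shape (layers, 2)
print(coeffs[:3].round(2))
```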
Water quality in the Schuylkill River, Pennsylvania: the potential for long-lead forecasts
NASA Astrophysics Data System (ADS)
Block, P. J.; Peralez, J.
2012-12-01
Prior analysis of pathogen levels in the Schuylkill River has led to a categorical daily forecast of water quality (denoted as red, yellow, or green flag days). The forecast, available to the public online through the Philadelphia Water Department, is predominantly based on the local precipitation forecast. In this study, we explore the feasibility of extending the forecast to the seasonal scale by associating large-scale climate drivers with local precipitation and water quality parameter levels. This advance information is relevant for recreational activities, ecosystem health, and water treatment (energy, chemicals), as the Schuylkill provides 40% of Philadelphia's water supply. Preliminary results indicate skillful prediction of average summertime water quality parameters and characteristics, including chloride, coliform, turbidity, alkalinity, and others, using season-ahead oceanic and atmospheric variables, predominantly from the North Atlantic. Water quality parameter trends, including historic land use changes along the river, associations with climatic variables, and prediction models will be presented.
Skill in Precipitation Forecasting in the National Weather Service.
NASA Astrophysics Data System (ADS)
Charba, Jerome P.; Klein, William H.
1980-12-01
All known long-term records of forecasting performance for different types of precipitation forecasts in the National Weather Service were examined for relative skill and secular trends in skill. The largest upward trends were achieved by local probability of precipitation (PoP) forecasts for the periods 24-36 h and 36-48 h after 0000 and 1200 GMT. Over the last 13 years, the skill of these forecasts has improved at an average rate of 7.2% per 10-year interval. Over the same period, improvement has been smaller in local PoP skill in the 12-24 h range (2.0% per 10 years) and in the accuracy of "Yes/No" forecasts of measurable precipitation. The overall trend in accuracy of centralized quantitative precipitation forecasts of 0.5 in and 1.0 in has been slightly upward at the 0-24 h range and strongly upward at the 24-48 h range. Most of the improvement in these forecasts has been achieved from the early 1970s to the present. Strong upward accuracy trends in all types of precipitation forecasts within the past eight years are attributed primarily to improvements in numerical and statistical centralized guidance forecasts. The skill and accuracy of both measurable and quantitative precipitation forecasts are 35-55% greater during the cool season than during the warm season. Also, the secular rate of improvement of the cool season precipitation forecasts is 50-110% greater than that of the warm season. This seasonal difference in performance reflects the relative difficulty of forecasting the predominantly stratiform precipitation of the cool season and the convective precipitation of the warm season.
NASA Astrophysics Data System (ADS)
Showstack, Randy
Fourteen tropical storms, nine hurricanes, and four intense hurricanes with winds above 111 mph. That's the forecast for hurricane activity in the Atlantic Basin for the upcoming hurricane season, which extends from June 1 through November 30, 1999, according to a Colorado State hurricane forecast team led by William Gray, professor of atmospheric science. The forecast supports an earlier report by the team. Hurricane activity, said Gray, will be similar to 1998, which yielded 14 tropical storms, 10 hurricanes, and 3 intense storms. These numbers are significantly higher than the long-term statistical averages of 9.3, 5.8, and 2.2 per year, respectively.
2013 Gulf of Mexico Hypoxia Forecast
Scavia, Donald; Evans, Mary Anne; Obenour, Dan
2013-01-01
The Gulf of Mexico annual summer hypoxia forecasts are based on average May total nitrogen loads from the Mississippi River basin for that year. The load estimate, recently released by USGS, is 7,316 metric tons per day. Based on that estimate, we predict the area of this summer’s hypoxic zone to be 18,900 square kilometers (95% credible interval, 13,400 to 24,200), the 7th largest reported and about the size of New Jersey. Our forecast hypoxic volume is 74.5 km3 (95% credible interval, 51.5 to 97.0), also the 7th largest on record.
NASA Astrophysics Data System (ADS)
Hirpa, F. A.; Gebremichael, M.; Hopson, T. M.; Wojick, R.
2011-12-01
We present results from the assimilation of ground discharge observations and remotely sensed soil moisture observations into the Sacramento Soil Moisture Accounting (SACSMA) model in a small watershed (1593 km2) in Minnesota, the United States. Specifically, we perform assimilation experiments with the ensemble Kalman filter (EnKF) and the particle filter (PF) in order to improve streamflow forecast accuracy at a six-hourly time step. The EnKF updates the soil moisture states in the SACSMA from the relative errors of the model and observations, while the PF adjusts the weights of the state ensemble members based on the likelihood of the forecast. Results on the improvements of each filter over the reference model (without data assimilation) will be presented. Finally, the EnKF and PF are coupled together to further improve streamflow forecast accuracy.
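A conceptual sketch of the EnKF analysis step for a soil-moisture-like state vector updated with a single discharge-like observation. The observation operator, error variance and ensemble values are invented; this is not the SACSMA coupling used in the study.

```python
# Minimal stochastic EnKF update (perturbed observations, scalar observation).
import numpy as np

def enkf_update(ensemble, h, obs, obs_var, rng):
    """ensemble: (n_members, n_states); h: observation operator returning a scalar."""
    hx = np.array([h(x) for x in ensemble])                 # predicted observations
    x_mean, hx_mean = ensemble.mean(0), hx.mean(0)
    X = ensemble - x_mean
    Y = hx - hx_mean
    n = ensemble.shape[0]
    pxy = X.T @ Y / (n - 1)                                 # state-obs covariance
    pyy = Y.T @ Y / (n - 1) + obs_var                       # obs variance + error
    gain = pxy / pyy                                        # Kalman gain (vector)
    perturbed = obs + rng.normal(0, np.sqrt(obs_var), n)    # perturbed observations
    return ensemble + np.outer(perturbed - hx, gain)

rng = np.random.default_rng(13)
states = rng.normal(0.30, 0.05, (50, 3))      # 50 members, 3 soil moisture states
analysis = enkf_update(states, lambda x: 40.0 * x[0], 13.0, 1.0, rng)
print(analysis.mean(0).round(3))
```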
Liu, Da; Xu, Ming; Niu, Dongxiao; Wang, Shoukai; Liang, Sai
2016-01-01
Traditional forecasting models fit a function approximation from independent variables to dependent variables. However, they usually run into trouble when data are presented in various formats, such as text, voice and image. This study proposes a novel image-encoded forecasting method in which input and output binary digital two-dimensional (2D) images are transformed from decimal data. Omitting any data analysis or cleansing steps for simplicity, all raw variables were selected and converted to binary digital images as the input of a deep learning model, a convolutional neural network (CNN). Using shared weights, pooling and multiple-layer back-propagation techniques, the CNN was adopted to locate the nexus among variations in local binary digital images. Due to the computing capability that was originally developed for binary digital bitmap manipulation, this model has significant potential for forecasting with vast volumes of data. The model was validated on a power load prediction dataset from the Global Energy Forecasting Competition 2012.
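To make the encoding idea tangible, here is a hedged sketch that turns a vector of decimal values into a binary 2D image (one 16-bit fixed-point row per variable) that a CNN could consume; the bit depth and scaling factor are assumptions, not the paper's specification.

```python
# Sketch: decimal values -> binary 2D image (one bit row per variable).
import numpy as np

def encode_sample(values, bits=16, scale=100):
    """Encode a 1-D array of non-negative decimals as an (n_variables x bits) binary image."""
    ints = np.round(np.asarray(values, dtype=float) * scale).astype(np.int64)
    rows = [np.array([int(c) for c in np.binary_repr(v, width=bits)], dtype=np.uint8)
            for v in ints]
    return np.stack(rows)

sample = [231.57, 18.20, 4.05]            # e.g. load, temperature, price (hypothetical)
image = encode_sample(sample)
print(image.shape)                        # (3, 16) binary image to feed a CNN
print(image[0])
```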
A review of multimodel superensemble forecasting for weather, seasonal climate, and hurricanes
NASA Astrophysics Data System (ADS)
Krishnamurti, T. N.; Kumar, V.; Simon, A.; Bhardwaj, A.; Ghosh, T.; Ross, R.
2016-06-01
This review provides a summary of work in the area of ensemble forecasts for weather, climate, oceans, and hurricanes. This includes a combination of multiple forecast model results that does not dwell on the ensemble mean but uses a unique collective bias reduction procedure. A theoretical framework for this procedure is provided, utilizing a suite of models constructed from the well-known Lorenz low-order nonlinear system. A tutorial that includes a walk-through table and illustrates the inner workings of the multimodel superensemble's principle is provided. Systematic errors in a single deterministic model arise from a host of features that range from the model's initial state (data assimilation), resolution, representation of physics, dynamics, and ocean processes, to local aspects of orography, water bodies, and details of the land surface. Models, in their diversity of representation of such features, end up leaving unique signatures of systematic errors. The multimodel superensemble utilizes as many as 10 million weights to take into account the bias errors arising from these diverse features of the multimodels. The design of a single deterministic forecast model that utilizes multiple features through the use of this large volume of weights is provided here. This has led to a better understanding of error growth and of the collective bias reductions for several of the physical parameterizations within diverse models, such as cumulus convection, planetary boundary layer physics, and radiative transfer. A number of examples of the weather, seasonal climate, hurricane and sub-surface oceanic forecast skills of member models, the ensemble mean, and the superensemble are provided.
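A toy reduction of the superensemble idea to its core least-squares step: during a training period, regress the observed anomaly on each member model's anomaly to obtain weights, then apply them in the forecast period. The full method uses vastly more weights (e.g. per grid point and variable); the member biases and data below are synthetic.

```python
# Toy multimodel superensemble: training-period regression weights vs ensemble mean.
import numpy as np

rng = np.random.default_rng(14)
n_train, n_fcst = 120, 30
truth = np.sin(np.arange(n_train + n_fcst) / 5.0)
# Member models = truth plus model-specific bias and noise.
members = np.column_stack([truth + b + rng.normal(0, 0.3, truth.size)
                           for b in (0.5, -0.4, 0.2, 0.8)])

obs_mean = truth[:n_train].mean()
mem_mean = members[:n_train].mean(0)
anom_obs = truth[:n_train] - obs_mean
anom_mem = members[:n_train] - mem_mean

weights, _, _, _ = np.linalg.lstsq(anom_mem, anom_obs, rcond=None)
super_fcst = obs_mean + (members[n_train:] - mem_mean) @ weights
ens_mean = members[n_train:].mean(1)

rmse = lambda f: np.sqrt(np.mean((f - truth[n_train:]) ** 2))
print("RMSE ensemble mean :", round(rmse(ens_mean), 3))
print("RMSE superensemble :", round(rmse(super_fcst), 3))
```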
Impact of Brexit on the forest products industry of the United Kingdom and the rest of the world
Craig M. T. Johnston; Joseph Buongiorno
2016-01-01
The Global Forest Products Model was applied to forecast the effect of Brexit on the global forest products industry to 2030 under two scenarios: an optimistic and a pessimistic storyline regarding the potential economic effects of Brexit. The forecasts integrated a range of gross domestic product growth rates using an average of the optimistic and...
Demand Forecasting: An Evaluation of DOD's Accuracy Metric and Navy's Procedures
2016-06-01
Keywords: inventory management improvement plan, mean of absolute scaled error, lead-time adjusted squared error, forecast accuracy, benchmarking, naïve method...
Asymmetric affective forecasting errors and their correlation with subjective well-being
2018-01-01
Aims Social scientists have postulated that the discrepancy between achievements and expectations affects individuals' subjective well-being. Still, little has been done to qualify and quantify such a psychological effect. Our empirical analysis assesses the consequences of positive and negative affective forecasting errors—the difference between realized and expected subjective well-being—on the subsequent level of subjective well-being. Data We use longitudinal data on a representative sample of 13,431 individuals from the German Socio-Economic Panel. In our sample, 52% of individuals are females, average age is 43 years, average years of education is 11.4 and 27% of our sample lives in East Germany. Subjective well-being (measured by self-reported life satisfaction) is assessed on a 0–10 discrete scale and its sample average is equal to 6.75 points. Methods We develop a simple theoretical framework to assess the consequences of positive and negative affective forecasting errors—the difference between realized and expected subjective well-being—on the subsequent level of subjective well-being, properly accounting for the endogenous adjustment of expectations to positive and negative affective forecasting errors, and use it to derive testable predictions. Given the theoretical framework, we estimate two panel-data equations, the first depicting the association between positive and negative affective forecasting errors and the successive level of subjective well-being and the second describing the correlation between subjective well-being expectations for the future and hedonic failures and successes. Our models control for individual fixed effects and a large battery of time-varying demographic characteristics, health and socio-economic status. Results and conclusions While surpassing expectations is uncorrelated with subjective well-being, failing to match expectations is negatively associated with subsequent realizations of subjective well-being. Expectations are positively (negatively) correlated to positive (negative) forecasting errors. We speculate that in the first case the positive adjustment in expectations is strong enough to cancel out the potential positive effects on subjective well-being of beaten expectations, while in the second case it is not, and individuals persistently bear the negative emotional consequences of not achieving expectations. PMID:29513685
Medium term municipal solid waste generation prediction by autoregressive integrated moving average
DOE Office of Scientific and Technical Information (OSTI.GOV)
Younes, Mohammad K.; Nopiah, Z. M.; Basri, Noor Ezlin A.
2014-09-12
Generally, solid waste handling and management are performed by the municipality or local authority. In most developing countries, local authorities suffer from serious solid waste management (SWM) problems and insufficient data and strategic planning. Thus it is important to develop a robust solid waste generation forecasting model, which helps to properly manage the generated solid waste and to develop future plans based on relatively accurate figures. In Malaysia, the solid waste generation rate is increasing rapidly due to population growth and the new consumption trends that characterize the modern lifestyle. This paper aims to develop a monthly solid waste forecasting model using the Autoregressive Integrated Moving Average (ARIMA) method; such a model is applicable even when data are scarce and will help the municipality properly establish the annual service plan. The results show that the ARIMA (6,1,0) model predicts monthly municipal solid waste generation with a root mean square error equal to 0.0952, and the model forecast residuals are within the accepted 95% confidence interval.
Statistical Modeling and Prediction for Tourism Economy Using Dendritic Neural Network
Yu, Ying; Wang, Yirui; Tang, Zheng
2017-01-01
With the impact of global internationalization, the tourism economy has also developed rapidly. The increasing interest aroused by more advanced forecasting methods leads us to innovate forecasting methods. In this paper, the seasonal trend autoregressive integrated moving average with dendritic neural network model (SA-D model) is proposed for tourism demand forecasting. First, we use the seasonal trend autoregressive integrated moving average model (SARIMA model) to exclude the long-term linear trend and then train the residual data with the dendritic neural network model to make a short-term prediction. As the results in this paper show, the SA-D model can achieve considerably better predictive performance. To demonstrate the effectiveness of the SA-D model, we also use the data that other authors used with other models and compare the results. The comparison also showed that the SA-D model achieved good predictive performance in terms of the normalized mean square error, absolute percentage of error, and correlation coefficient. PMID:28246527
Wang, Kewei; Song, Wentao; Li, Jinping; Lu, Wu; Yu, Jiangang; Han, Xiaofeng
2016-05-01
The aim of this study is to forecast the incidence of bacillary dysentery with a prediction model. We collected the annual and monthly laboratory data of confirmed cases from January 2004 to December 2014. In this study, we applied an autoregressive integrated moving average (ARIMA) model to forecast bacillary dysentery incidence in Jiangsu, China. The seasonal ARIMA (1, 1, 1) × (1, 1, 2)12 model (with a 12-month seasonal period) closely fitted the number of cases during January 2004 to December 2014. The fitted model was then used to predict bacillary dysentery incidence during the period January to August 2015, and the observed number of cases fell within the model's CI for the predicted number of cases during January-August 2015. This study shows that the ARIMA model fits the fluctuations in bacillary dysentery frequency and can be used for future forecasting when applied to bacillary dysentery prevention and control.
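For readers who want to reproduce this kind of seasonal ARIMA fit, the sketch below uses statsmodels' SARIMAX with the reported (1,1,1)x(1,1,2)12 order. The monthly case series here is a synthetic placeholder standing in for the Jiangsu surveillance data, which are not distributed with this abstract.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.statespace.sarimax import SARIMAX

# Placeholder monthly case counts, 2004-01 through 2014-12 (132 months); the
# real study used laboratory-confirmed bacillary dysentery counts for Jiangsu.
idx = pd.date_range("2004-01", periods=132, freq="MS")
rng = np.random.default_rng(0)
cases = pd.Series(200 + 120 * np.sin(2 * np.pi * idx.month / 12) +
                  rng.normal(0, 25, idx.size), index=idx)

# ARIMA(1,1,1)x(1,1,2) with a 12-month seasonal period, as reported above.
fit = SARIMAX(cases, order=(1, 1, 1), seasonal_order=(1, 1, 2, 12)).fit(disp=False)

# Forecast January-August of the following year with 95% intervals.
fc = fit.get_forecast(steps=8)
print(fc.predicted_mean.round(1))
print(fc.conf_int(alpha=0.05).round(1))
```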
Medium term municipal solid waste generation prediction by autoregressive integrated moving average
NASA Astrophysics Data System (ADS)
Younes, Mohammad K.; Nopiah, Z. M.; Basri, Noor Ezlin A.; Basri, Hassan
2014-09-01
Generally, solid waste handling and management are performed by the municipality or local authority. In most developing countries, local authorities suffer from serious solid waste management (SWM) problems and insufficient data and strategic planning. Thus it is important to develop a robust solid waste generation forecasting model, which helps to properly manage the generated solid waste and to develop future plans based on relatively accurate figures. In Malaysia, the solid waste generation rate is increasing rapidly due to population growth and the new consumption trends that characterize the modern lifestyle. This paper aims to develop a monthly solid waste forecasting model using the Autoregressive Integrated Moving Average (ARIMA) method; such a model is applicable even when data are scarce and will help the municipality properly establish the annual service plan. The results show that the ARIMA (6,1,0) model predicts monthly municipal solid waste generation with a root mean square error equal to 0.0952, and the model forecast residuals are within the accepted 95% confidence interval.
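A minimal sketch of the ARIMA(6,1,0) workflow described above, with a synthetic monthly series as a placeholder for the Malaysian waste-generation data; the holdout RMSE computation mirrors the error measure reported in the abstract.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

# Placeholder monthly waste-generation series (the paper's data are not
# distributed here); replace with the real observations in practice.
idx = pd.date_range("2004-01", periods=96, freq="MS")
rng = np.random.default_rng(1)
waste = pd.Series(1.0 + 0.005 * np.arange(idx.size) +
                  rng.normal(0, 0.05, idx.size), index=idx)

train, test = waste[:-12], waste[-12:]            # hold out the final year
fit = ARIMA(train, order=(6, 1, 0)).fit()         # ARIMA(6,1,0) as in the abstract

pred = fit.forecast(steps=12)
rmse = np.sqrt(np.mean((pred.values - test.values) ** 2))
print(f"12-month holdout RMSE: {rmse:.4f}")
```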
Statistical Modeling and Prediction for Tourism Economy Using Dendritic Neural Network.
Yu, Ying; Wang, Yirui; Gao, Shangce; Tang, Zheng
2017-01-01
With the impact of global internationalization, the tourism economy has also developed rapidly. The increasing interest aroused by more advanced forecasting methods leads us to innovate forecasting methods. In this paper, the seasonal trend autoregressive integrated moving average with dendritic neural network model (SA-D model) is proposed for tourism demand forecasting. First, we use the seasonal trend autoregressive integrated moving average model (SARIMA model) to exclude the long-term linear trend and then train the residual data with the dendritic neural network model to make a short-term prediction. As the results in this paper show, the SA-D model can achieve considerably better predictive performance. To demonstrate the effectiveness of the SA-D model, we also use the data that other authors used with other models and compare the results. The comparison also showed that the SA-D model achieved good predictive performance in terms of the normalized mean square error, absolute percentage of error, and correlation coefficient.
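The SA-D idea is a two-stage hybrid: a seasonal ARIMA captures the linear trend and seasonality, and a neural network is trained on the residuals. The sketch below follows that structure but substitutes a standard multilayer perceptron for the dendritic neuron model, so it illustrates the decomposition rather than the paper's exact network; the toy arrivals series is a placeholder.

```python
import numpy as np
import pandas as pd
from sklearn.neural_network import MLPRegressor
from statsmodels.tsa.statespace.sarimax import SARIMAX

def hybrid_fit_predict(y, order, seasonal_order, n_lags=12, steps=1):
    """Two-stage sketch: SARIMA models trend/seasonality, an MLP models residuals.
    y is a pandas Series of monthly observations."""
    sarima = SARIMAX(y, order=order, seasonal_order=seasonal_order).fit(disp=False)
    resid = sarima.resid

    # Build lagged residual features for the network.
    X = np.column_stack([resid.shift(k) for k in range(1, n_lags + 1)])[n_lags:]
    t = resid.values[n_lags:]
    mlp = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0).fit(X, t)

    # One-step-ahead combined forecast (recursive multi-step omitted for brevity).
    lin = sarima.forecast(steps=steps).values
    last_lags = resid.values[-n_lags:][::-1].reshape(1, -1)   # lag 1 first
    return lin[0] + mlp.predict(last_lags)[0]

# toy monthly arrivals standing in for a tourism demand series
idx = pd.date_range("2008-01", periods=120, freq="MS")
rng = np.random.default_rng(2)
arrivals = pd.Series(1e4 + 50 * np.arange(120) +
                     2e3 * np.sin(2 * np.pi * idx.month / 12) +
                     rng.normal(0, 300, 120), index=idx)
print(hybrid_fit_predict(arrivals, order=(1, 1, 1), seasonal_order=(1, 1, 1, 12)))
```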
Ahmadu, Baba Usman; Yakubu, Nyandaiti; Yusuph, Haruna; Alfred, Marshall; Bazza, Buba; Lamurde, Abdullahi Suleiman
2013-01-01
Maternal malnutrition can lead to low birth weight in babies, which puts them at risk of developing non-communicable diseases later in life. Evidence from developed countries has shown that low birth weight is associated with a predisposition to higher rates of non-communicable diseases later in life. However, information on this is lacking in developing countries. Thus, this work studied the effects of maternal nutritional indicators (hemoglobin and total protein) on the birth weight outcomes of babies to forecast a paradigm shift toward increased levels of non-communicable diseases in children. Mother-baby pairs were enrolled in this study using systematic random sampling. Maternal hemoglobin and total proteins were measured using the micro-hematocrit and biuret methods, and the birth weights of their babies were estimated using a bassinet weighing scale. Of the 168 (100%) babies that participated in this study, 122 (72.6%) were delivered at term and 142 (84.5%) had normal birth weights. The mean comparison of babies' birth weight with maternal hemoglobin was not significant (P = 0.483), and that with maternal total protein was also not significant (P = 0.411). Even though positive correlation coefficients were observed between the birth weight of babies, maternal hemoglobin, and total proteins, these were not significant. Maternal nutrition did not contribute significantly to low birth weight in our babies. Therefore, an association between maternal nutrition and low birth weight that would predict future development of non-communicable diseases in our study group is highly unlikely. However, we recommend further work.
Federal Register 2010, 2011, 2012, 2013, 2014
2011-05-19
...-0015] RIN 2132-AB01 Bus Testing: Calculation of Average Passenger Weight and Test Vehicle Weight, and... of proposed rulemaking (NPRM) regarding the calculation of average passenger weights and test vehicle... passenger weights and actual transit vehicle loads. Specifically, FTA proposed to change the average...
Developing a method for estimating AADT on all Louisiana roads : [tech summary].
DOT National Transportation Integrated Search
2015-12-01
Annual Average Daily Traffic (AADT), the average daily volume of vehicle traffic on a highway or road, is an important measure in transportation engineering. AADT is used in highway geometric design, pavement design, traffic forecasting, and h...
Forecasting Influenza Epidemics in Hong Kong.
Yang, Wan; Cowling, Benjamin J; Lau, Eric H Y; Shaman, Jeffrey
2015-07-01
Recent advances in mathematical modeling and inference methodologies have enabled development of systems capable of forecasting seasonal influenza epidemics in temperate regions in real-time. However, in subtropical and tropical regions, influenza epidemics can occur throughout the year, making routine forecast of influenza more challenging. Here we develop and report forecast systems that are able to predict irregular non-seasonal influenza epidemics, using either the ensemble adjustment Kalman filter or a modified particle filter in conjunction with a susceptible-infected-recovered (SIR) model. We applied these model-filter systems to retrospectively forecast influenza epidemics in Hong Kong from January 1998 to December 2013, including the 2009 pandemic. The forecast systems were able to forecast both the peak timing and peak magnitude for 44 epidemics in 16 years caused by individual influenza strains (i.e., seasonal influenza A(H1N1), pandemic A(H1N1), A(H3N2), and B), as well as 19 aggregate epidemics caused by one or more of these influenza strains. Average forecast accuracies were 37% (for both peak timing and magnitude) at 1-3 week leads, and 51% (peak timing) and 50% (peak magnitude) at 0 lead. Forecast accuracy increased as the spread of a given forecast ensemble decreased; the forecast accuracy for peak timing (peak magnitude) increased up to 43% (45%) for H1N1, 93% (89%) for H3N2, and 53% (68%) for influenza B at 1-3 week leads. These findings suggest that accurate forecasts can be made at least 3 weeks in advance for subtropical and tropical regions.
Forecasting Influenza Epidemics in Hong Kong
Yang, Wan; Cowling, Benjamin J.; Lau, Eric H. Y.; Shaman, Jeffrey
2015-01-01
Recent advances in mathematical modeling and inference methodologies have enabled development of systems capable of forecasting seasonal influenza epidemics in temperate regions in real-time. However, in subtropical and tropical regions, influenza epidemics can occur throughout the year, making routine forecast of influenza more challenging. Here we develop and report forecast systems that are able to predict irregular non-seasonal influenza epidemics, using either the ensemble adjustment Kalman filter or a modified particle filter in conjunction with a susceptible-infected-recovered (SIR) model. We applied these model-filter systems to retrospectively forecast influenza epidemics in Hong Kong from January 1998 to December 2013, including the 2009 pandemic. The forecast systems were able to forecast both the peak timing and peak magnitude for 44 epidemics in 16 years caused by individual influenza strains (i.e., seasonal influenza A(H1N1), pandemic A(H1N1), A(H3N2), and B), as well as 19 aggregate epidemics caused by one or more of these influenza strains. Average forecast accuracies were 37% (for both peak timing and magnitude) at 1-3 week leads, and 51% (peak timing) and 50% (peak magnitude) at 0 lead. Forecast accuracy increased as the spread of a given forecast ensemble decreased; the forecast accuracy for peak timing (peak magnitude) increased up to 43% (45%) for H1N1, 93% (89%) for H3N2, and 53% (68%) for influenza B at 1-3 week leads. These findings suggest that accurate forecasts can be made at least 3 weeks in advance for subtropical and tropical regions. PMID:26226185
A quality assessment of the MARS crop yield forecasting system for the European Union
NASA Astrophysics Data System (ADS)
van der Velde, Marijn; Bareuth, Bettina
2015-04-01
Timely crop production forecasts can become increasingly important as commodity markets grow more and more interconnected. Impacts across large crop production areas due to, e.g., extreme weather or pest outbreaks can create ripple effects that may affect food prices and availability elsewhere. The MARS Unit (Monitoring Agricultural ResourceS), DG Joint Research Centre, European Commission, has been providing forecasts of European crop production levels since 1993. The operational crop production forecasting is carried out with the MARS Crop Yield Forecasting System (M-CYFS). The M-CYFS is used to monitor crop growth development, evaluate short-term effects of anomalous meteorological events, and provide monthly forecasts of crop yield at national and European Union level. The crop production forecasts are published in the so-called MARS bulletins. Forecasting crop yield over large areas in an operational context requires quality benchmarks. Here we present an analysis of the accuracy and skill of past crop yield forecasts of the main crops (e.g. soft wheat, grain maize), throughout the growing season, and specifically for the final forecast before harvest. Two simple benchmarks to assess the skill of the forecasts were defined: comparing the forecasts to 1) a forecast equal to the average yield and 2) a forecast using a linear trend established through the crop yield time series. These reveal variability in performance as a function of crop and Member State. In terms of production, yield forecasts covering 67% of EU-28 soft wheat production and 80% of EU-28 maize production outperformed both benchmarks during the 1993-2013 period. In a changing and increasingly variable climate, crop yield forecasts can become increasingly valuable, provided they are used wisely. We end our presentation by discussing research activities that could contribute to this goal.
NASA Astrophysics Data System (ADS)
Hu, Qi; Pytlik Zillig, Lisa M.; Lynne, Gary D.; Tomkins, Alan J.; Waltman, William J.; Hayes, Michael J.; Hubbard, Kenneth G.; Artikov, Ikrom; Hoffman, Stacey J.; Wilhite, Donald A.
2006-09-01
Although the accuracy of weather and climate forecasts is continuously improving and new information retrieved from climate data is adding to the understanding of climate variation, use of the forecasts and climate information by farmers in farming decisions has changed little. This lack of change may result from knowledge barriers and psychological, social, and economic factors that undermine farmer motivation to use forecasts and climate information. According to the theory of planned behavior (TPB), the motivation to use forecasts may arise from personal attitudes, social norms, and perceived control or ability to use forecasts in specific decisions. These attributes are examined using data from a survey designed around the TPB and conducted among farming communities in the region of eastern Nebraska and the western U.S. Corn Belt. There were three major findings: 1) the utility and value of the forecasts for farming decisions as perceived by farmers are, on average, around 3.0 on a 0-7 scale, indicating much room to improve attitudes toward the forecast value. 2) The use of forecasts by farmers to influence decisions is likely affected by several social groups that can provide “expert viewpoints” on forecast use. 3) A major obstacle, next to forecast accuracy, is the perceived identity and reliability of the forecast makers. Given the rapidly increasing number of forecasts in this growing service business, the ambiguous identity of forecast providers may have left farmers confused and may have prevented them from developing both trust in forecasts and skills to use them. These findings shed light on productive avenues for increasing the influence of forecasts, which may lead to greater farming productivity. In addition, this study establishes a set of reference points that can be used for comparisons with future studies to quantify changes in forecast use and influence.
Guo, Xiang; Wang, Ming Tian; Zhang, Guo Zhi
2017-12-01
The winter reproductive areas of Puccinia striiformis var. striiformis in Sichuan Basin are often the places most affected by wheat stripe rust. With data on the meteorological conditions and stripe rust situation at typical stations in the winter reproductive area in Sichuan Basin from 1999 to 2016, this paper classified the meteorological conditions inducing wheat stripe rust into 5 grades, based on the incidence area ratio of the disease. The meteorological factors which were biologically related to wheat stripe rust were determined through multiple analytical methods, and a meteorological grade model for forecasting wheat stripe rust was created. The result showed that wheat stripe rust in Sichuan Basin was significantly correlated with many meteorological factors, such as the average (maximum and minimum) temperature, precipitation and its anomaly percentage, relative humidity and its anomaly percentage, average wind speed and sunshine duration. Among these, the average temperature and the anomaly percentage of relative humidity were the determining factors. According to a historical retrospective test, the accuracy of the forecast based on the model was 64% for samples in the county-level test, and 89% for samples in the municipal-level test. In a meteorological grade forecast of wheat stripe rust in the winter reproductive areas in Sichuan Basin in 2017, the prediction was accurate for 62.8% of the samples, with 27.9% in error by one grade and only 9.3% in error by two or more grades. As a result, the model could deliver satisfactory forecast results and predict future wheat stripe rust from a meteorological point of view.
Forecasting influenza in Hong Kong with Google search queries and statistical model fusion.
Xu, Qinneng; Gel, Yulia R; Ramirez Ramirez, L Leticia; Nezafati, Kusha; Zhang, Qingpeng; Tsui, Kwok-Leung
2017-01-01
The objective of this study is to investigate the predictive utility of online social media and web search queries, particularly Google search data, to forecast new cases of influenza-like illness (ILI) in general outpatient clinics (GOPC) in Hong Kong. To mitigate the impact of sensitivity to self-excitement (i.e., fickle media interest) and other artifacts of online social media data, in our approach we fuse multiple offline and online data sources. Four individual models: generalized linear model (GLM), least absolute shrinkage and selection operator (LASSO), autoregressive integrated moving average (ARIMA), and deep learning (DL) with Feedforward Neural Networks (FNN) are employed to forecast ILI-GOPC both one week and two weeks in advance. The covariates include Google search queries, meteorological data, and previously recorded offline ILI. To our knowledge, this is the first study that introduces deep learning methodology into surveillance of infectious diseases and investigates its predictive utility. Furthermore, to exploit the strengths of each individual forecasting model, we use statistical model fusion via Bayesian model averaging (BMA), which allows a systematic integration of multiple forecast scenarios. For each model, an adaptive approach is used to capture the recent relationship between ILI and covariates. DL with FNN appears to deliver the most competitive predictive performance among the four considered individual models. Combining all four models in a comprehensive BMA framework further improves predictive evaluation metrics such as root mean squared error (RMSE) and mean absolute predictive error (MAPE). Nevertheless, DL with FNN remains the preferred method for predicting the locations of influenza peaks. The proposed approach can be viewed as a feasible alternative for forecasting ILI in Hong Kong or other countries where ILI has no constant seasonal trend and influenza data resources are limited. The proposed methodology is easily tractable and computationally efficient.
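A rough illustration of how competing ILI forecasts can be fused with BMA-style weights: each model's weight is taken proportional to its Gaussian likelihood over a recent validation window. The full BMA used in the study estimates weights and variances jointly (typically via EM), so this is a simplified stand-in, and the numbers below are placeholders.

```python
import numpy as np

def bma_style_weights(val_preds, val_obs):
    """Crude BMA-style weights: each model's weight is proportional to its
    Gaussian likelihood over a validation window (full BMA would fit weights
    and variances jointly, e.g. with EM)."""
    err = val_preds - val_obs[None, :]                 # (n_models, n_weeks)
    sigma = err.std(axis=1, ddof=1) + 1e-9
    loglik = -0.5 * np.sum((err / sigma[:, None]) ** 2, axis=1) - \
             err.shape[1] * np.log(sigma)
    w = np.exp(loglik - loglik.max())                  # log-sum-exp for stability
    return w / w.sum()

# placeholder validation forecasts from GLM, LASSO, ARIMA, and a neural net
val_preds = np.array([[105, 98, 120, 130],
                      [110, 95, 118, 128],
                      [100, 97, 125, 140],
                      [108, 99, 119, 131]], dtype=float)
val_obs = np.array([107, 96, 121, 132], dtype=float)
w = bma_style_weights(val_preds, val_obs)
next_week = np.array([115.0, 118.0, 112.0, 117.0])     # each model's new forecast
print(w, np.dot(w, next_week))                         # fused one-week-ahead ILI
```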
Real-time forecasts of dengue epidemics
NASA Astrophysics Data System (ADS)
Yamana, T. K.; Shaman, J. L.
2015-12-01
Dengue is a mosquito-borne viral disease prevalent in the tropics and subtropics, with an estimated 2.5 billion people at risk of transmission. In many areas with endemic dengue, disease transmission is seasonal but prone to high inter-annual variability with occasional severe epidemics. Predicting and preparing for periods of higher than average transmission is a significant public health challenge. Here we present a model of dengue transmission and a framework for optimizing model simulations with real-time observational data of dengue cases and environmental variables in order to generate ensemble-based forecasts of the timing and severity of disease outbreaks. The model-inference system is validated using synthetic data and dengue outbreak records. Retrospective forecasts are generated for a number of locations and the accuracy of these forecasts is quantified.
System designed for issuing landslide alerts in the San Francisco Bay area
Finley, D.
1987-01-01
A system for forecasting landslides during major storms has been developed for the San Francisco Bay area by the U.S. Geological Survey and was successfully tested during heavy storms in the Bay area in February 1986. Based on the forecasts provided by the USGS, the National Weather Service (NWS) included landslide warnings in its regular weather forecasts or in special weather statements transmitted to local radio and television stations and other news media. USGS scientists said the landslide forecasting and warning system for the San Francisco Bay area can be used as a prototype in developing similar systems for other parts of the Nation susceptible to landsliding. Studies show damage from landslides in the United States averages an estimated $1.5 billion per year.
Liu, Yan; Watson, Stella C.; Gettings, Jenna R.; Lund, Robert B.; Nordone, Shila K.; McMahan, Christopher S.
2017-01-01
This paper forecasts the 2016 canine Anaplasma spp. seroprevalence in the United States from eight climate, geographic and societal factors. The forecast’s construction and an assessment of its performance are described. The forecast is based on a spatial-temporal conditional autoregressive model fitted to over 11 million Anaplasma spp. seroprevalence test results for dogs conducted in the 48 contiguous United States during 2011–2015. The forecast uses county-level data on eight predictive factors, including annual temperature, precipitation, relative humidity, county elevation, forestation coverage, surface water coverage, population density and median household income. Non-static factors are extrapolated into the forthcoming year with various statistical methods. The fitted model and factor extrapolations are used to estimate next year’s regional prevalence. The correlation between the observed and model-estimated county-by-county Anaplasma spp. seroprevalence for the five-year period 2011–2015 is 0.902, demonstrating reasonable model accuracy. The weighted correlation (accounting for different sample sizes) between 2015 observed and forecasted county-by-county Anaplasma spp. seroprevalence is 0.987, exhibiting that the proposed approach can be used to accurately forecast Anaplasma spp. seroprevalence. The forecast presented herein can a priori alert veterinarians to areas expected to see Anaplasma spp. seroprevalence beyond the accepted endemic range. The proposed methods may prove useful for forecasting other diseases. PMID:28738085
NASA Astrophysics Data System (ADS)
Gronewold, A.; Fry, L. M.; Hunter, T.; Pei, L.; Smith, J.; Lucier, H.; Mueller, R.
2017-12-01
The U.S. Army Corps of Engineers (USACE) has recently operationalized a suite of ensemble forecasts of Net Basin Supply (NBS), water levels, and connecting channel flows that was developed through a collaboration among USACE, NOAA's Great Lakes Environmental Research Laboratory, Ontario Power Generation (OPG), New York Power Authority (NYPA), and the Niagara River Control Center (NRCC). These forecasts are meant to provide reliable projections of potential extremes in daily discharge in the Niagara and St. Lawrence Rivers over a long time horizon (5 years). The suite of forecasts includes eight configurations that vary by (a) NBS model configuration, (b) meteorological forcings, and (c) incorporation of seasonal climate projections through the use of weighting. Forecasts are updated on a weekly basis, and represent the first operational forecasts of Great Lakes water levels and flows that span daily to inter-annual horizons and employ realistic regulation logic and lake-to-lake routing. We will present results from a hindcast assessment conducted during the transition from research to operation, as well as early indications of success rates determined through operational verification of forecasts. Assessment will include an exploration of the relative skill of various forecast configurations at different time horizons and the potential for application to hydropower decision making and Great Lakes water management.
NASA Astrophysics Data System (ADS)
Krishnamurthy, Lakshmi; Muñoz, Ángel G.; Vecchi, Gabriel A.; Msadek, Rym; Wittenberg, Andrew T.; Stern, Bill; Gudgel, Rich; Zeng, Fanrong
2018-05-01
The Caribbean low-level jet (CLLJ) is an important component of the atmospheric circulation over the Intra-Americas Sea (IAS) which impacts the weather and climate both locally and remotely. It influences the rainfall variability in the Caribbean, Central America, northern South America, the tropical Pacific and the continental United States through the transport of moisture. We make use of high-resolution coupled and uncoupled models from the Geophysical Fluid Dynamics Laboratory (GFDL) to investigate the simulation of the CLLJ and its teleconnections and further compare with low-resolution models. The high-resolution coupled model FLOR shows improvements in the simulation of the CLLJ and its teleconnections with rainfall and SST over the IAS compared to the low-resolution coupled model CM2.1. The CLLJ is better represented in uncoupled models (AM2.1 and AM2.5) forced with observed sea-surface temperatures (SSTs), emphasizing the role of SSTs in the simulation of the CLLJ. Further, we determine the forecast skill for observed rainfall using both high- and low-resolution predictions of rainfall and SSTs for the July-August-September season. We determine the role of statistical correction of model biases, coupling and horizontal resolution on the forecast skill. Statistical correction dramatically improves area-averaged forecast skill. But the analysis of the spatial distribution of skill indicates that the improvement in skill after statistical correction is region dependent. Forecast skill is sensitive to coupling in parts of the Caribbean, Central and northern South America, and it is mostly insensitive over North America. Comparison of forecast skill between the high- and low-resolution coupled models does not show any dramatic difference. However, the uncoupled models show an improvement in area-averaged skill in the high-resolution atmospheric model compared to the lower-resolution model. Understanding and improving the forecast skill over the IAS has important implications for highly vulnerable nations in the region.
Reconstructing paleoclimate fields using online data assimilation with a linear inverse model
NASA Astrophysics Data System (ADS)
Perkins, Walter A.; Hakim, Gregory J.
2017-05-01
We examine the skill of a new approach to climate field reconstructions (CFRs) using an online paleoclimate data assimilation (PDA) method. Several recent studies have foregone climate model forecasts during assimilation due to the computational expense of running coupled global climate models (CGCMs) and the relatively low skill of these forecasts on longer timescales. Here we greatly diminish the computational cost by employing an empirical forecast model (linear inverse model, LIM), which has been shown to have skill comparable to CGCMs for forecasting annual-to-decadal surface temperature anomalies. We reconstruct annual-average 2 m air temperature over the instrumental period (1850-2000) using proxy records from the PAGES 2k Consortium Phase 1 database; proxy models for estimating proxy observations are calibrated on GISTEMP surface temperature analyses. We compare results for LIMs calibrated using observational (Berkeley Earth), reanalysis (20th Century Reanalysis), and CMIP5 climate model (CCSM4 and MPI) data relative to a control offline reconstruction method. Generally, we find that the usage of LIM forecasts for online PDA increases reconstruction agreement with the instrumental record for both spatial fields and global mean temperature (GMT). Specifically, the coefficient of efficiency (CE) skill metric for detrended GMT increases by an average of 57 % over the offline benchmark. LIM experiments display a common pattern of skill improvement in the spatial fields over Northern Hemisphere land areas and in the high-latitude North Atlantic-Barents Sea corridor. Experiments for non-CGCM-calibrated LIMs reveal region-specific reductions in spatial skill compared to the offline control, likely due to aspects of the LIM calibration process. Overall, the CGCM-calibrated LIMs have the best performance when considering both spatial fields and GMT. A comparison with the persistence forecast experiment suggests that improvements are associated with the linear dynamical constraints of the forecast and not simply persistence of temperature anomalies.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Inanlouganji, Alireza; Reddy, T. Agami; Katipamula, Srinivas
2018-04-13
Forecasting solar irradiation has acquired immense importance in view of the exponential increase in the number of solar photovoltaic (PV) system installations. In this article, analysis results involving statistical and machine-learning techniques to predict solar irradiation for different forecasting horizons are reported. Yearlong typical meteorological year 3 (TMY3) datasets from three cities in the United States with different climatic conditions have been used in this analysis. A simple forecast approach that assumes consecutive days to be identical serves as a baseline model against which to compare forecasting alternatives. To account for seasonal variability and to capture short-term fluctuations, different variants of the lagged moving average (LMX) model with cloud cover as the input variable are evaluated. Finally, the proposed LMX model is evaluated against an artificial neural network (ANN) model. How the one-hour and 24-hour models can be used in conjunction to predict different short-term rolling horizons is discussed, and this joint application is illustrated for a four-hour rolling horizon forecast scheme. Lastly, the effect of using predicted cloud cover values, instead of measured ones, on the accuracy of the models is assessed. Results show that LMX models do not degrade in forecast accuracy if the models are trained with forecast cloud cover data.
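The abstract does not spell out the LMX formulation, so the sketch below is only one plausible reading: a linear model that predicts hourly irradiance from a lagged moving average of irradiance plus concurrent cloud cover. The toy data stand in for a TMY3 station series.

```python
import numpy as np

def fit_lmx_style(ghi, cloud, window=24, lag=1):
    """One plausible LMX-style sketch: regress hourly irradiance on a lagged
    trailing moving average of irradiance plus cloud cover (the paper's exact
    formulation may differ)."""
    ma = np.convolve(ghi, np.ones(window) / window, mode="valid")   # trailing MA
    # Align so that ghi[t] is predicted from the MA ending at t-lag and cloud[t].
    y = ghi[window - 1 + lag:]
    X = np.column_stack([ma[:-lag], cloud[window - 1 + lag:], np.ones_like(y)])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef

# toy hourly data standing in for a TMY3 station
rng = np.random.default_rng(1)
hours = np.arange(24 * 60)
cloud = rng.uniform(0, 1, hours.size)
ghi = np.clip(800 * np.sin(2 * np.pi * hours / 24) * (1 - 0.7 * cloud), 0, None)
print("MA, cloud, intercept coefficients:", fit_lmx_style(ghi, cloud))
```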
Evaluation of annual, global seismicity forecasts, including ensemble models
NASA Astrophysics Data System (ADS)
Taroni, Matteo; Zechar, Jeremy; Marzocchi, Warner
2013-04-01
In 2009, the Collaboratory for the Study of the Earthquake Predictability (CSEP) initiated a prototype global earthquake forecast experiment. Three models participated in this experiment for 2009, 2010 and 2011—each model forecast the number of earthquakes above magnitude 6 in 1x1 degree cells that span the globe. Here we use likelihood-based metrics to evaluate the consistency of the forecasts with the observed seismicity. We compare model performance with statistical tests and a new method based on the peer-to-peer gambling score. The results of the comparisons are used to build ensemble models that are a weighted combination of the individual models. Notably, in these experiments the ensemble model always performs significantly better than the single best-performing model. Our results indicate the following: i) time-varying forecasts, if not updated after each major shock, may not provide significant advantages with respect to time-invariant models in 1-year forecast experiments; ii) the spatial distribution seems to be the most important feature to characterize the different forecasting performances of the models; iii) the interpretation of consistency tests may be misleading because some good models may be rejected while trivial models may pass consistency tests; iv) a proper ensemble modeling seems to be a valuable procedure to get the best performing model for practical purposes.
Assessing a 3D smoothed seismicity model of induced earthquakes
NASA Astrophysics Data System (ADS)
Zechar, Jeremy; Király, Eszter; Gischig, Valentin; Wiemer, Stefan
2016-04-01
As more energy exploration and extraction efforts cause earthquakes, it becomes increasingly important to control induced seismicity. Risk management schemes must be improved and should ultimately be based on near-real-time forecasting systems. With this goal in mind, we propose a test bench to evaluate models of induced seismicity based on metrics developed by the CSEP community. To illustrate the test bench, we consider a model based on the so-called seismogenic index and a rate decay; to produce three-dimensional forecasts, we smooth past earthquakes in space and time. We explore four variants of this model using the Basel 2006 and Soultz-sous-Forêts 2004 datasets to make short-term forecasts, test their consistency, and rank the model variants. Our results suggest that such a smoothed seismicity model is useful for forecasting induced seismicity within three days, and giving more weight to recent events improves forecast performance. Moreover, the location of the largest induced earthquake is forecast well by this model. Despite the good spatial performance, the model does not estimate the seismicity rate well: it frequently overestimates during stimulation and during the early post-stimulation period, and it systematically underestimates around shut-in. In this presentation, we also describe a robust estimate of information gain, a modification that can also benefit forecast experiments involving tectonic earthquakes.
NASA Astrophysics Data System (ADS)
Rodrigues, Luis R. L.; Doblas-Reyes, Francisco J.; Coelho, Caio A. S.
2018-02-01
A Bayesian method known as the Forecast Assimilation (FA) was used to calibrate and combine monthly near-surface temperature and precipitation outputs from seasonal dynamical forecast systems. The simple multimodel (SMM), a method that combines predictions with equal weights, was used as a benchmark. This research focuses on Europe and adjacent regions for predictions initialized in May and November, covering the boreal summer and winter months. The forecast quality of the FA and SMM as well as the single seasonal dynamical forecast systems was assessed using deterministic and probabilistic measures. A non-parametric bootstrap method was used to account for the sampling uncertainty of the forecast quality measures. We show that the FA performs as well as or better than the SMM in regions where the dynamical forecast systems were able to represent the main modes of climate covariability. An illustration with the near-surface temperature over North Atlantic, the Mediterranean Sea and Middle-East in summer months associated with the well predicted first mode of climate covariability is offered. However, the main modes of climate covariability are not well represented in most situations discussed in this study as the seasonal dynamical forecast systems have limited skill when predicting the European climate. In these situations, the SMM performs better more often.
NASA Astrophysics Data System (ADS)
Tian, D.; Medina, H.
2017-12-01
Post-processing of medium range reference evapotranspiration (ETo) forecasts based on numerical weather prediction (NWP) models has the potential of improving the quality and utility of these forecasts. This work compares the performance of several post-processing methods for correcting ETo forecasts over the continental U.S. generated from The Observing System Research and Predictability Experiment (THORPEX) Interactive Grand Global Ensemble (TIGGE) database using data from Europe (EC), the United Kingdom (MO), and the United States (NCEP). The post-processing techniques considered are: simple bias correction, the use of multimodels, Ensemble Model Output Statistics (EMOS, Gneiting et al., 2005), and Bayesian Model Averaging (BMA, Raftery et al., 2005). ETo estimates based on quality-controlled U.S. Regional Climate Reference Network measurements, computed with the FAO 56 Penman-Monteith equation, are adopted as the baseline. EMOS and BMA are generally the most efficient post-processing techniques for the ETo forecasts. Nevertheless, simple bias correction of the best model is commonly much more rewarding than using multimodel raw forecasts. Our results demonstrate the potential of different forecasting and post-processing frameworks in operational evapotranspiration and irrigation advisory systems at the national scale.
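Of the techniques listed, simple bias correction and the equal-weight multimodel mean are easy to sketch; the example below applies an additive bias correction learned on a training window and then averages the corrected members. The EC/MO/NCEP arrays are synthetic placeholders, and EMOS/BMA are omitted.

```python
import numpy as np

def bias_correct(train_fcst, train_obs, new_fcst):
    """Simple additive bias correction: remove the mean forecast error
    estimated over a training period."""
    bias = np.mean(train_fcst - train_obs)
    return new_fcst - bias

# hypothetical daily ETo (mm/day) from three ensembles vs. station-based ETo
rng = np.random.default_rng(2)
obs = 4 + rng.normal(0, 0.8, 200)
raw = {"EC": obs + 0.6 + rng.normal(0, 0.5, 200),
       "MO": obs - 0.4 + rng.normal(0, 0.7, 200),
       "NCEP": obs + 0.2 + rng.normal(0, 0.9, 200)}

train, test = slice(0, 150), slice(150, 200)
corrected = {k: bias_correct(v[train], obs[train], v[test]) for k, v in raw.items()}
multimodel = np.mean(list(corrected.values()), axis=0)    # equal-weight multimodel
rmse = lambda f: np.sqrt(np.mean((f - obs[test]) ** 2))
for k in raw:
    print(k, round(rmse(raw[k][test]), 3), "->", round(rmse(corrected[k]), 3))
print("multimodel mean:", round(rmse(multimodel), 3))
```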
Sea level forecasts for Pacific Islands based on Satellite Altimetry
NASA Astrophysics Data System (ADS)
Yoon, H.; Merrifield, M. A.; Thompson, P. R.; Widlansky, M. J.; Marra, J. J.
2017-12-01
Coastal flooding at tropical Pacific Islands often occurs when positive sea level anomalies coincide with high tides. To help mitigate this risk, a forecast tool for daily-averaged sea level anomalies is developed that can be added to predicted tides at tropical Pacific Island sites. The forecast takes advantage of the observed westward propagation that sea level anomalies exhibit over a range of time scales. The daily near-real time altimetry gridded data from Archiving, Validation, and Interpretation of Satellite Oceanographic (AVISO) is used to specify upstream sea level at each site, with lead times computed based on mode-one baroclinic Rossby wave speeds. To validate the forecast, hindcasts are compared to tide gauge and nearby AVISO gridded time series. The forecast skills exceed persistence at most stations out to a month or more lead time. The skill is highest at stations where eddy variability is relatively weak. The impacts on the forecasts due to varying propagation speed, decay time, and smoothing of the AVISO data are examined. In addition, the inclusion of forecast winds in a forced wave equation is compared to the freely propagating results. Case studies are presented for seasonally high tide events throughout the Pacific Island region.
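The lead-time logic described above reduces to distance divided by phase speed. The sketch below turns a zonal separation between an upstream altimetry point and an island site into a lead time in days; the phase speed value is purely illustrative, since mode-one baroclinic Rossby wave speeds vary strongly with latitude and stratification.

```python
import numpy as np

def rossby_lead_time_days(lon_upstream, lon_site, lat, phase_speed_ms=0.5):
    """Lead time (days) for a westward-propagating sea level anomaly to travel
    from an upstream longitude (to the east) to the island site.
    phase_speed_ms is an illustrative value, not a fitted one."""
    earth_radius_m = 6.371e6
    dlon = np.radians(lon_upstream - lon_site)
    distance_m = earth_radius_m * np.cos(np.radians(lat)) * dlon
    return distance_m / phase_speed_ms / 86400.0

# e.g. an anomaly observed 10 degrees east of a site at 10 N
print(round(rossby_lead_time_days(lon_upstream=200, lon_site=190, lat=10), 1))
```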
NASA Astrophysics Data System (ADS)
Yuchi, Weiran; Yao, Jiayun; McLean, Kathleen E.; Stull, Roland; Pavlovic, Radenko; Davignon, Didier; Moran, Michael D.; Henderson, Sarah B.
2016-11-01
Fine particulate matter (PM2.5) generated by forest fires has been associated with a wide range of adverse health outcomes, including exacerbation of respiratory diseases and increased risk of mortality. Due to the unpredictable nature of forest fires, it is challenging for public health authorities to reliably evaluate the magnitude and duration of potential exposures before they occur. Smoke forecasting tools are a promising development from the public health perspective, but their widespread adoption is limited by their inherent uncertainties. Observed measurements from air quality monitoring networks and remote sensing platforms are more reliable, but they are inherently retrospective. It would be ideal to reduce the uncertainty in smoke forecasts by integrating any available observations. This study takes spatially resolved PM2.5 estimates from an empirical model that integrates air quality measurements with satellite data, and averages them with PM2.5 predictions from two smoke forecasting systems. Two different indicators of population respiratory health are then used to evaluate whether the blending improved the utility of the smoke forecasts. Among a total of six models, including two single forecasts and four blended forecasts, the blended estimates always performed better than the forecast values alone. Integrating measured observations into smoke forecasts could improve public health preparedness for smoke events, which are becoming more frequent and intense as the climate changes.
NASA Astrophysics Data System (ADS)
Kato, Takeyoshi; Sone, Akihito; Shimakage, Toyonari; Suzuoki, Yasuo
A microgrid (MG) is one of the measures for facilitating the high penetration of renewable energy (RE)-based distributed generators (DGs). For constructing a MG economically, optimizing the capacity of controllable DGs against RE-based DGs is essential. Using a numerical simulation model developed from demonstrative studies of a MG with a PAFC and a NaS battery as controllable DGs and a photovoltaic power generation system (PVS) as an RE-based DG, this study discusses the influence of the forecast accuracy of PVS output on capacity optimization and daily operation, evaluated in terms of cost. The main results are as follows. The required capacity of the NaS battery must be increased by 10-40% relative to the ideal situation with no forecast error in PVS power output. The influence of forecast error on the received grid electricity is not significant on an annual basis, because positive and negative forecast errors vary from day to day. The annual total cost of facilities and operation increases by 2-7% due to the forecast error applied in this study. The impacts of forecast error on facility optimization and on operation optimization are almost the same, at a few percent each, implying that forecast accuracy should be improved in terms of both the number of occurrences of large forecast error and the average error.
Fast Algorithms for Mining Co-evolving Time Series
2011-09-01
Keogh et al., 2001, 2004] and (b) forecasting, like an autoregressive integrated moving average model (ARIMA) and related methods [Box et al., 1994...computing hardware? We develop models to mine time series with missing values, to extract compact representations from time sequences, to segment the...sequences, and to do forecasting. For large scale data, we propose algorithms for learning time series models, in particular, including Linear Dynamical
Lejiang Yu; Shiyuan Zhong; Xindi Bian; Warren E. Heilman
2015-01-01
This study examines the spatial and temporal variability of wind speed at 80m above ground (the average hub height of most modern wind turbines) in the contiguous United States using Climate Forecast System Reanalysis (CFSR) data from 1979 to 2011. The mean 80-m wind exhibits strong seasonality and large spatial variability, with higher (lower) wind speeds in the...
Impact of TRMM and SSM/I-derived Precipitation and Moisture Data on the GEOS Global Analysis
NASA Technical Reports Server (NTRS)
Hou, Arthur Y.; Zhang, Sara Q.; daSilva, Arlindo M.; Olson, William S.
1999-01-01
Current global analyses contain significant errors in primary hydrological fields such as precipitation, evaporation, and related cloud and moisture in the tropics. The Data Assimilation Office at NASA's Goddard Space Flight Center has been exploring the use of space-based rainfall and total precipitable water (TPW) estimates to constrain these hydrological parameters in the Goddard Earth Observing System (GEOS) global data assimilation system. We present results showing that assimilating the 6-hour averaged rain rates and TPW estimates from the Tropical Rainfall Measuring Mission (TRMM) and Special Sensor Microwave/Imager (SSM/I) instruments improves not only the precipitation and moisture estimates but also reduces state-dependent systematic errors in key climate parameters directly linked to convection, such as the outgoing longwave radiation, clouds, and the large-scale circulation. The improved analysis also improves short-range forecasts beyond 1 day, but the impact is relatively modest compared with improvements in the time-averaged analysis. The study shows that, in the presence of biases and other errors of the forecast model, improving the short-range forecast is not necessarily a prerequisite for improving the assimilation as a climate data set. The full impact of a given type of observation on the assimilated data set should not be measured solely in terms of forecast skills.
Weighted south-wide average pulpwood prices
James E. Granskog; Kevin D. Growther
1991-01-01
Weighted average prices provide a more accurate representation of regional pulpwood price trends when production volumes vary widely by state. Unweighted South-wide average delivered prices for pulpwood, as reported by Timber Mart-South, were compared to average annual prices weighted by each state's pulpwood production from 1977 to 1986. Weighted average prices...
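The production-weighted average in question is simply sum(price_i x production_i) / sum(production_i); the snippet below contrasts it with the unweighted mean using made-up state prices and volumes.

```python
import numpy as np

# hypothetical delivered pulpwood prices ($/cord) and production (thousand cords)
prices = np.array([22.0, 25.5, 19.0, 27.0])
production = np.array([900.0, 300.0, 1500.0, 450.0])

unweighted = prices.mean()
weighted = np.average(prices, weights=production)   # sum(p_i * q_i) / sum(q_i)
print(f"unweighted: {unweighted:.2f}  production-weighted: {weighted:.2f}")
```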
Parsons, Thomas E.; Geist, Eric L.
2009-01-01
The idea that faults rupture in repeated, characteristic earthquakes is central to most probabilistic earthquake forecasts. The concept is elegant in its simplicity, and if the same event has repeated itself multiple times in the past, we might anticipate the next. In practice however, assembling a fault-segmented characteristic earthquake rupture model can grow into a complex task laden with unquantified uncertainty. We weigh the evidence that supports characteristic earthquakes against a potentially simpler model made from extrapolation of a Gutenberg–Richter magnitude-frequency law to individual fault zones. We find that the Gutenberg–Richter model satisfies key data constraints used for earthquake forecasting equally well as a characteristic model. Therefore, judicious use of instrumental and historical earthquake catalogs enables large-earthquake-rate calculations with quantifiable uncertainty that should get at least equal weighting in probabilistic forecasting.
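The Gutenberg-Richter extrapolation referred to above amounts to evaluating log10 N(>=M) = a - b*M for a fault zone's catalog fit. A small sketch, with hypothetical a- and b-values standing in for a real catalog fit:

```python
def gr_annual_rate(m, a=3.2, b=1.0):
    """Annual rate of earthquakes with magnitude >= m from a Gutenberg-Richter
    fit log10 N(>=m) = a - b*m (a and b here are hypothetical placeholders)."""
    return 10.0 ** (a - b * m)

rate = gr_annual_rate(6.7)
print(f"rate of M>=6.7: {rate:.4f} per year; mean recurrence ~ {1 / rate:.0f} years")
```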
Zlotnik, Alexander; Gallardo-Antolín, Ascensión; Cuchí Alfaro, Miguel; Pérez Pérez, María Carmen; Montero Martínez, Juan Manuel
2015-08-01
Although emergency department visit forecasting can be of use for nurse staff planning, previous research has focused on models that lacked sufficient resolution and realistic error metrics for these predictions to be applied in practice. Using data from a 1100-bed specialized care hospital with 553,000 patients assigned to its healthcare area, forecasts with different prediction horizons, from 2 to 24 weeks ahead, with an 8-hour granularity, using support vector regression, M5P, and stratified average time-series models were generated with an open-source software package. As overstaffing and understaffing errors have different implications, error metrics and potential personnel monetary savings were calculated with a custom validation scheme, which simulated subsequent generation of predictions during a 4-year period. Results were then compared with a generalized estimating equation regression. Support vector regression and M5P models were found to be superior to the stratified average model with a 95% confidence interval. Our findings suggest that medium and severe understaffing situations could be reduced in more than an order of magnitude and average yearly savings of up to €683,500 could be achieved if dynamic nursing staff allocation was performed with support vector regression instead of the static staffing levels currently in use.
Simultaneous calibration of ensemble river flow predictions over an entire range of lead times
NASA Astrophysics Data System (ADS)
Hemri, S.; Fundel, F.; Zappa, M.
2013-10-01
Probabilistic estimates of future water levels and river discharge are usually simulated with hydrologic models using ensemble weather forecasts as main inputs. As hydrologic models are imperfect and the meteorological ensembles tend to be biased and underdispersed, the ensemble forecasts for river runoff typically are biased and underdispersed, too. Thus, in order to achieve both reliable and sharp predictions, statistical postprocessing is required. In this work, Bayesian model averaging (BMA) is applied to statistically postprocess ensemble runoff raw forecasts for a catchment in Switzerland, at lead times ranging from 1 to 240 h. The raw forecasts have been obtained using deterministic and ensemble forcing meteorological models with different forecast lead time ranges. First, BMA is applied based on mixtures of univariate normal distributions, subject to the assumption of independence between distinct lead times. Then, the independence assumption is relaxed in order to estimate multivariate runoff forecasts over the entire range of lead times simultaneously, based on a BMA version that uses multivariate normal distributions. Since river runoff is a highly skewed variable, Box-Cox transformations are applied in order to achieve approximate normality. Both univariate and multivariate BMA approaches are able to generate well calibrated probabilistic forecasts that are considerably sharper than climatological forecasts. Additionally, multivariate BMA provides a promising approach for incorporating temporal dependencies into the postprocessed forecasts. Its major advantage over univariate BMA is an increase in reliability when the forecast system is changing due to model availability.
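Two ingredients of the approach, the Box-Cox transformation toward normality and a weighted mixture of member-centred normal distributions, can be sketched as below. Real BMA estimates the weights and spreads by maximum likelihood (EM); the skill-based weights, the crude spread estimate, and the synthetic runoff ensemble here are simplifications.

```python
import numpy as np
from scipy.stats import boxcox, norm
from scipy.special import inv_boxcox

# synthetic runoff ensemble (m^3/s): rows = members, columns = past forecast cases
rng = np.random.default_rng(3)
obs = rng.gamma(shape=3.0, scale=20.0, size=300)
ens = obs[None, :] * rng.lognormal(mean=0.0, sigma=0.25, size=(5, 300))

# 1) Box-Cox transform observations, then members with the same lambda.
obs_t, lam = boxcox(obs)
ens_t = (ens ** lam - 1.0) / lam

# 2) BMA-style mixture: skill-based weights and a crude per-member spread
#    (a simplification of the EM-based estimation used in practice).
rmse = np.sqrt(np.mean((ens_t - obs_t) ** 2, axis=1))
w = 1.0 / rmse ** 2
w /= w.sum()
sigma = np.mean(np.abs(ens_t - obs_t), axis=1)

# Predictive 90% interval for the latest case, back-transformed to m^3/s.
new_members_t = ens_t[:, -1]
q = np.linspace(1.0, 400.0, 4000)
qt = (q ** lam - 1.0) / lam
cdf = sum(wi * norm.cdf(qt, loc=mi, scale=si)
          for wi, mi, si in zip(w, new_members_t, sigma))
lo, hi = q[np.searchsorted(cdf, 0.05)], q[np.searchsorted(cdf, 0.95)]
print(f"90% predictive interval: {lo:.1f} - {hi:.1f} m^3/s")
print("back-transform check:", inv_boxcox(obs_t[:3], lam), obs[:3])
```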
A Comparison of Forecast Error Generators for Modeling Wind and Load Uncertainty
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lu, Ning; Diao, Ruisheng; Hafen, Ryan P.
2013-07-25
This paper presents four algorithms to generate random forecast error time series, and compares their performance. The error time series are used to create real-time (RT), hour-ahead (HA), and day-ahead (DA) wind and load forecast time series that statistically match historically observed forecasting data sets used in power grid operation, in order to study the net load balancing need in variable generation integration studies. The four algorithms are truncated-normal distribution models, state-space based Markov models, seasonal autoregressive moving average (ARMA) models, and a stochastic-optimization based approach. The comparison is made using historical DA load forecasts and actual load values to generate new sets of DA forecasts with similar statistical forecast error characteristics (i.e., mean, standard deviation, autocorrelation, and cross-correlation). The results show that all methods generate satisfactory results. One method may preserve one or two required statistical characteristics better than the other methods, but may not preserve other statistical characteristics as well as the other methods. Because the wind and load forecast error generators are used in wind integration studies to produce wind and load forecast time series for stochastic planning processes, it is sometimes critical to use multiple methods to generate the error time series to obtain a statistically robust result. Therefore, this paper discusses and compares the capabilities of each algorithm to preserve the characteristics of the historical forecast data sets.
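One of the four generator families, the (seasonal) ARMA error model, can be sketched as follows: fit an ARMA to historical day-ahead errors and simulate a new series with matching variance and autocorrelation. The ARMA(2,1) order and the omission of the seasonal terms are simplifications, not the paper's exact specification.

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA
from statsmodels.tsa.arima_process import ArmaProcess

def synthetic_da_errors(hist_errors, n_hours):
    """Fit an ARMA(2,1) to historical day-ahead forecast errors and simulate a
    new zero-mean error series with similar variance and autocorrelation
    (a simplified stand-in for the seasonal ARMA generator in the paper)."""
    fit = ARIMA(np.asarray(hist_errors), order=(2, 0, 1), trend="n").fit()
    # With trend="n", the parameters are ordered [ar1, ar2, ma1, sigma2].
    ar_coefs, ma_coefs, sigma2 = fit.params[:2], fit.params[2:3], fit.params[-1]
    process = ArmaProcess(np.r_[1, -ar_coefs], np.r_[1, ma_coefs])
    return process.generate_sample(nsample=n_hours, scale=np.sqrt(sigma2))

# usage sketch: hist = actual_load - da_forecast   (hourly operational data)
# synthetic_da_forecast = future_actual_load + synthetic_da_errors(hist, 8760)
```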
A Functional-Genetic Scheme for Seizure Forecasting in Canine Epilepsy.
Bou Assi, Elie; Nguyen, Dang K; Rihana, Sandy; Sawan, Mohamad
2018-06-01
The objective of this work is the development of an accurate seizure forecasting algorithm that considers the brain's functional connectivity for electrode selection. We start by proposing the Kmeans-directed transfer function, an adaptive functional connectivity method intended for seizure onset zone localization in bilateral intracranial EEG recordings. Electrodes identified as seizure activity sources and sinks are then used to implement a seizure-forecasting algorithm on long-term continuous recordings in dogs with naturally occurring epilepsy. A precision-recall genetic algorithm is proposed for feature selection in line with a probabilistic support vector machine classifier. Epileptic activity generators were focal in all dogs, confirming the diagnosis of focal epilepsy in these animals, while sinks spanned both hemispheres in 2 of 3 dogs. Seizure forecasting results show performance improvement compared to previous studies, achieving an average sensitivity of 84.82% and time in warning of 0.1. The achieved performances highlight the feasibility of seizure forecasting in canine epilepsy. The ability to improve seizure forecasting provides promise for the development of EEG-triggered closed-loop seizure intervention systems for ambulatory implantation in patients with refractory epilepsy.
NASA Astrophysics Data System (ADS)
Durazo, Juan A.; Kostelich, Eric J.; Mahalov, Alex
2017-09-01
We propose a targeted observation strategy, based on the influence matrix diagnostic, that optimally selects where additional observations may be placed to improve ionospheric forecasts. This strategy is applied in data assimilation observing system experiments, where synthetic electron density vertical profiles, which represent those of Constellation Observing System for Meteorology, Ionosphere, and Climate/Formosa satellite 3, are assimilated into the Thermosphere-Ionosphere-Electrodynamics General Circulation Model using the local ensemble transform Kalman filter during the 26 September 2011 geomagnetic storm. During each analysis step, the observation vector is augmented with five synthetic vertical profiles optimally placed to target electron density errors, using our targeted observation strategy. Forecast improvement due to assimilation of augmented vertical profiles is measured with the root-mean-square error (RMSE) of analyzed electron density, averaged over 600 km regions centered around the augmented vertical profile locations. Assimilating vertical profiles with targeted locations yields about 60%-80% reduction in electron density RMSE, compared to a 15% average reduction when assimilating randomly placed vertical profiles. Assimilating vertical profiles whose locations target the zonal component of neutral winds (Un) yields on average a 25% RMSE reduction in Un estimates, compared to a 2% average improvement obtained with randomly placed vertical profiles. These results demonstrate that our targeted strategy can improve data assimilation efforts during extreme events by detecting regions where additional observations would provide the largest benefit to the forecast.
NASA Astrophysics Data System (ADS)
Kasiviswanathan, K.; Sudheer, K.
2013-05-01
Artificial neural network (ANN) based hydrologic models have gained a lot of attention among water resources engineers and scientists, owing to their potential for accurate prediction of flood flows compared to conceptual or physics-based hydrologic models. The ANN approximates the non-linear functional relationship between the complex hydrologic variables in arriving at the river flow forecast values. Despite a large number of applications, there is still some criticism that the ANN's point predictions lack reliability since the uncertainty of the predictions is not quantified, which limits their use in practical applications. A major concern in applying traditional uncertainty analysis techniques to the neural network framework is its parallel computing architecture with a large number of degrees of freedom, which makes uncertainty assessment a challenging task. Very limited studies have considered assessment of the predictive uncertainty of ANN based hydrologic models. In this study, a novel method is proposed that helps construct the prediction interval of an ANN flood forecasting model during calibration itself. The method is designed to have two stages of optimization during calibration: at stage 1, the ANN model is trained with a genetic algorithm (GA) to obtain the optimal set of weights and biases, and during stage 2, the optimal variability of the ANN parameters (obtained in stage 1) is identified so as to create an ensemble of predictions. During the second stage, the optimization is performed with multiple objectives: (i) minimum residual variance for the ensemble mean, (ii) the maximum number of measured data points falling within the estimated prediction interval, and (iii) minimum width of the prediction interval. The method is illustrated using a real-world case study of an Indian basin. The method was able to produce an ensemble with an average prediction interval width of 23.03 m3/s, with 97.17% of the total validation data points (measured) lying within the interval. The derived prediction interval for a selected hydrograph in the validation data set is presented in Fig. 1. It is noted that most of the observed flows lie within the constructed prediction interval, which therefore provides information about the uncertainty of the prediction. One specific advantage of the method is that when the ensemble mean value is taken as the forecast, peak flows are predicted with improved accuracy compared to traditional single-point ANN forecasts.
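The three stage-2 objectives described above can be scored for any candidate ensemble with a few lines of code; the sketch below assumes a hypothetical array of ensemble member simulations and measured flows and is not the authors' GA implementation.

```python
import numpy as np

def interval_objectives(ensemble, observed, lower_q=2.5, upper_q=97.5):
    """Score an ensemble of ANN predictions against observed flows.

    ensemble : array (n_members, n_times) of simulated flows [m^3/s]
    observed : array (n_times,) of measured flows [m^3/s]
    Returns the three quantities optimized in stage 2: residual variance of
    the ensemble mean, fraction of observations inside the interval, and
    average interval width.
    """
    lo = np.percentile(ensemble, lower_q, axis=0)
    hi = np.percentile(ensemble, upper_q, axis=0)
    mean_pred = ensemble.mean(axis=0)
    resid_var = np.var(observed - mean_pred)
    coverage = np.mean((observed >= lo) & (observed <= hi))
    width = np.mean(hi - lo)
    return resid_var, coverage, width
```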
The impact of underwater glider observations in the forecast of Hurricane Gonzalo (2014)
NASA Astrophysics Data System (ADS)
Goni, G. J.; Domingues, R. M.; Kim, H. S.; Domingues, R. M.; Halliwell, G. R., Jr.; Bringas, F.; Morell, J. M.; Pomales, L.; Baltes, R.
2017-12-01
The tropical Atlantic basin is one of seven global regions where tropical cyclones (TC) are commonly observed to originate and intensify from June to November. On average, approximately 12 TCs travel through the region every year, frequently affecting coastal and highly populated areas. In an average year, 2 to 3 of them are categorized as intense hurricanes. Given the appropriate atmospheric conditions, TC intensification has been linked to ocean conditions, such as increased ocean heat content and enhanced salinity stratification near the surface. While errors in hurricane track forecasts have been reduced in recent years, errors in intensity forecasts remain mostly unchanged. Several studies have indicated that the use of in situ observations has the potential to improve the representation of the ocean to correctly initialize coupled hurricane intensity forecast models. However, a sustained in situ ocean observing system in the tropical North Atlantic Ocean and Caribbean Sea dedicated to measuring subsurface thermal and salinity fields in support of TC intensity studies and forecasts has yet to be implemented. Autonomous technologies offer new and cost-effective opportunities to accomplish this objective. We highlight here a partnership effort that utilizes underwater gliders to better understand air-sea processes during high wind events, and that is particularly geared towards improving hurricane intensity forecasts. Results are presented for Hurricane Gonzalo (2014), where glider observations obtained in the tropical Atlantic: helped to provide an accurate description of the upper ocean conditions, which included the presence of a low-salinity barrier layer; allowed a detailed analysis of the upper ocean response to the hurricane-force winds of Gonzalo; improved the initialization of the ocean in a coupled ocean-atmosphere numerical model; and, together with observations from other ocean observing platforms, substantially reduced the error in the intensity forecast using the HYCOM-HWRF model. Data collected by this project are transmitted in real time to the Global Telecommunication System, distributed through the institutional web pages, by the IOOS Glider Data Assembly Center, and by NCEI, and assimilated in real-time numerical weather forecast models.
Pettersen, J M; Rich, K M; Jensen, B Bang; Aunsmo, A
2015-10-01
Pancreas disease (PD) is an important viral disease in Norwegian, Scottish and Irish aquaculture causing biological losses in terms of reduced growth, mortality, increased feed conversion ratio, and carcass downgrading. We developed a bio-economic model to investigate the economic benefits of a disease-triggered early harvesting strategy to control PD losses. In this strategy, the salmon farm adopts a PCR (Polymerase Chain Reaction) diagnostic screening program to monitor the virus levels in stocks. Virus levels are used to forecast a clinical outbreak of pancreas disease, which then initiates a prescheduled harvest of the stock to avoid disease losses. The model is based on data inputs from national statistics, literature, company data, and an expert panel, and uses stochastic simulations to account for the variation and/or uncertainty associated with disease effects and selected production expenditures. With the model, we compared the impacts of a salmon farm undergoing prescheduled harvest versus the salmon farm going through a PD outbreak. We also estimated the direct costs of a PD outbreak as the sum of biological losses, treatment costs, prevention costs, and other additional costs, less the costs of insurance pay-outs. Simulation results suggest that the economic benefit from a prescheduled harvest is positive once the average salmon weight at the farm has reached 3.2 kg or more for an average Norwegian salmon farm stocked with 1,000,000 smolts and using average salmon sales prices for 2013. The direct costs from a PD outbreak occurring nine months (average salmon weight 1.91 kg) after sea transfer and using 2013 sales prices was on average estimated at NOK 55.4 million (5%, 50% and 90% percentiles: 38.0, 55.8 and 72.4) (NOK=€0.128 in 2013). Sensitivity analyses revealed that the losses from a PD outbreak are sensitive to feed and salmon sales prices, and that high 2013 sales prices contributed to substantial losses associated with a PD outbreak. Copyright © 2015 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Hiemer, S.; Woessner, J.; Basili, R.; Danciu, L.; Giardini, D.; Wiemer, S.
2014-08-01
We present a time-independent gridded earthquake rate forecast for the European region including Turkey. The spatial component of our model is based on kernel density estimation techniques, which we applied to both past earthquake locations and fault moment release on mapped crustal faults and subduction zone interfaces with assigned slip rates. Our forecast relies on the assumption that the locations of past seismicity are a good guide to future seismicity, and that future large-magnitude events are more likely to occur in the vicinity of known faults. We show that the optimal weighted sum of the corresponding two spatial densities depends on the magnitude range considered. The kernel bandwidths and density weighting function are optimized using retrospective likelihood-based forecast experiments. We computed earthquake activity rates (a- and b-values) of the truncated Gutenberg-Richter distribution separately for crustal and subduction seismicity based on a maximum likelihood approach that considers the spatial and temporal completeness history of the catalogue. The final annual rate of our forecast is purely driven by the maximum likelihood fit of activity rates to the catalogue data, whereas its spatial component incorporates contributions from both earthquake and fault moment-rate densities. Our model constitutes one branch of the earthquake source model logic tree of the 2013 European seismic hazard model released by the EU-FP7 project `Seismic HAzard haRmonization in Europe' (SHARE) and contributes to the assessment of epistemic uncertainties in earthquake activity rates. We performed retrospective and pseudo-prospective likelihood consistency tests to underline the reliability of our model and SHARE's area source model (ASM), using the testing algorithms applied in the Collaboratory for the Study of Earthquake Predictability (CSEP). We comparatively tested our model's forecasting skill against the ASM and find statistically significantly better performance for testing periods of 10-20 yr. The testing results suggest that our model is a viable candidate model to serve for long-term forecasting on timescales of years to decades for the European region.
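A minimal sketch of the spatial component described above, a weighted sum of a smoothed-seismicity density and a fault moment-rate density evaluated on a forecast grid, is given below using scipy's Gaussian kernel density estimator; the bandwidth choice, array layout and weight are illustrative assumptions, not the optimized values of the model.

```python
import numpy as np
from scipy.stats import gaussian_kde

def combined_spatial_density(eq_xy, fault_xy, fault_moment, w_fault, grid_xy):
    """Weighted sum of two spatial densities on a grid of cell centers.

    eq_xy        : (2, n_eq) past epicenter coordinates
    fault_xy     : (2, n_f) points sampled along mapped faults
    fault_moment : (n_f,) moment-rate weights for the fault points
    w_fault      : weight given to the fault-based density (0..1);
                   in the paper this depends on the magnitude range
    grid_xy      : (2, n_cells) forecast grid cell centers
    """
    eq_density = gaussian_kde(eq_xy)(grid_xy)
    fault_density = gaussian_kde(fault_xy, weights=fault_moment)(grid_xy)
    d = (1.0 - w_fault) * eq_density + w_fault * fault_density
    return d / d.sum()        # normalize so cell probabilities sum to one
```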
NASA Astrophysics Data System (ADS)
Min, Young-Mi; Kryjov, Vladimir N.; Oh, Sang Myeong; Lee, Hyun-Ju
2017-12-01
This paper assesses the real-time 1-month lead forecasts of 3-month (seasonal) mean temperature and precipitation issued on a monthly basis by the Asia-Pacific Economic Cooperation Climate Center (APCC) for 2008-2015 (8 years, 96 forecasts). It shows the current level of performance of the APCC operational multi-model prediction system. The skill of the APCC forecasts depends strongly on season and region: it is higher for the tropics and boreal winter than for the extratropics and boreal summer, owing to direct effects and remote teleconnections from boundary forcings. There is a negative relationship between the forecast skill and its interseasonal variability for both variables, and the forecast skill for precipitation is more seasonally and regionally dependent than that for temperature. The APCC operational probabilistic forecasts during this period show a cold bias (underforecasting of above-normal temperature and overforecasting of below-normal temperature), underestimating the long-term warming trend. A wet bias is evident for precipitation, particularly in the extratropical regions. The skill of both temperature and precipitation forecasts strongly depends on ENSO strength. In particular, the highest forecast skill, noted in the 2015/2016 boreal winter, is associated with the strong forcing of an extreme El Nino event. Meanwhile, the relatively low skill is associated with the transition and/or continuous ENSO-neutral phases of 2012-2014. As a result, the skill of the real-time forecasts for the boreal winter season is higher than that of the hindcast. However, on average, the level of forecast skill during the period 2008-2015 is similar to that of the hindcast.
Forecasting Ocean Chlorophyll in the Equatorial Pacific.
Rousseaux, Cecile S; Gregg, Watson W
2017-01-01
Using a global ocean biogeochemical model combined with a forecast of physical oceanic and atmospheric variables from the NASA Global Modeling and Assimilation Office, we assess the skill of a chlorophyll concentration forecast in the Equatorial Pacific for the period 2012-2015, with a focus on the forecast of the onset of the 2015 El Niño event. Using a series of retrospective 9-month hindcasts, we assess the uncertainties of the forecasted chlorophyll by comparing the monthly total chlorophyll concentration from the forecast with the corresponding monthly ocean chlorophyll data from the Suomi-National Polar-orbiting Partnership Visible Infrared Imaging Radiometer Suite (S-NPP VIIRS) satellite. The forecast was able to reproduce the phasing of the variability in chlorophyll concentration in the Equatorial Pacific, including the beginning of the 2015-2016 El Niño. The anomaly correlation coefficient (ACC) was significant (p < 0.05) for forecasts at 1-month (R = 0.33), 8-month (R = 0.42) and 9-month (R = 0.41) lead times. The root mean square error (RMSE) increased from 0.0399 μg chl L⁻¹ for the 1-month lead forecast to a maximum of 0.0472 μg chl L⁻¹ for the 9-month lead forecast, indicating that the forecast of the amplitude of chlorophyll concentration variability degraded with lead time. Forecasts with a 3-month lead time were on average the closest to the S-NPP VIIRS data (23% or 0.033 μg chl L⁻¹) while the forecasts with a 9-month lead time were the furthest (31% or 0.042 μg chl L⁻¹). These results indicate the potential for forecasting chlorophyll concentration in this region but also highlight various deficiencies and suggest improvements to the current biogeochemical forecasting system. This system provides an initial basis for future applications including the effects of El Niño events on fisheries and other ocean resources, given improvements identified in the analysis of these results.
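The two verification scores quoted above can be reproduced with a short routine; the sketch below assumes hypothetical arrays of monthly forecast, observed and climatological chlorophyll in consistent units, and is not tied to the VIIRS processing used in the paper.

```python
import numpy as np

def acc_and_rmse(forecast, observed, climatology):
    """Anomaly correlation coefficient and RMSE of monthly chlorophyll.

    forecast, observed, climatology : arrays (n_months,) in the same units;
    climatology is the long-term mean for each calendar month.
    """
    f_anom = forecast - climatology
    o_anom = observed - climatology
    acc = np.corrcoef(f_anom, o_anom)[0, 1]
    rmse = np.sqrt(np.mean((forecast - observed) ** 2))
    return acc, rmse
```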
NASA Astrophysics Data System (ADS)
Liu, Y.; Zhang, Y.; Wood, A.; Lee, H. S.; Wu, L.; Schaake, J. C.
2016-12-01
Seasonal precipitation forecasts are a primary driver for seasonal streamflow prediction, which is critical for a range of water resources applications, such as reservoir operations and drought management. However, it is well known that seasonal precipitation forecasts from climate models are often biased and also too coarse in spatial resolution for hydrologic applications. Therefore, post-processing procedures such as downscaling and bias correction are often needed. In this presentation, we discuss results from a recent study that applies a two-step methodology to downscale and correct the ensemble mean precipitation forecasts from the Climate Forecast System (CFS). First, CFS forecasts are downscaled and bias corrected using monthly reforecast analogs: we identify past precipitation forecasts that are similar to the current forecast, and then use the finer-scale observational analysis fields from the corresponding dates to represent the post-processed ensemble forecasts. Second, we construct the posterior distribution of forecast precipitation from the post-processed ensemble by integrating climate indices: a correlation analysis is performed to identify dominant climate indices for the study region, which are then used to weight the analysis analogs selected in the first step using a Bayesian approach. The methodology is applied to the California Nevada River Forecast Center (CNRFC) and the Middle Atlantic River Forecast Center (MARFC) regions for 1982-2015, using the North American Land Data Assimilation System (NLDAS-2) precipitation as the analysis. The results from cross validation show that the post-processed CFS precipitation forecasts are considerably more skillful than the raw CFS with the analog approach alone. Integrating climate indices can further improve the skill if the number of ensemble members considered is large enough; however, the improvement is generally limited to the first couple of months when compared against climatology. Impacts of various factors such as ensemble size, lead time, and choice of climate indices will also be discussed.
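A minimal sketch of the first (analog) step is shown below: the k past reforecasts closest to the current forecast are found, and the matching analysis fields are returned as the post-processed ensemble. Array names, the Euclidean distance metric and the value of k are illustrative assumptions.

```python
import numpy as np

def analog_ensemble(current_fcst, past_fcsts, past_analyses, k=15):
    """Analog downscaling/bias correction of a coarse precipitation forecast.

    current_fcst  : (n_coarse,) current CFS ensemble-mean field, flattened
    past_fcsts    : (n_hist, n_coarse) reforecast fields for past months
    past_analyses : (n_hist, n_fine) matching finer-scale analysis fields
    Returns the k analysis fields whose forecasts were closest to today's,
    which serve as the post-processed ensemble members, plus their indices.
    """
    dist = np.linalg.norm(past_fcsts - current_fcst, axis=1)
    idx = np.argsort(dist)[:k]
    return past_analyses[idx], idx
```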
Linden, Ariel
2018-05-11
Interrupted time series analysis (ITSA) is an evaluation methodology in which a single treatment unit's outcome is studied serially over time and the intervention is expected to "interrupt" the level and/or trend of that outcome. ITSA is commonly evaluated using methods which may produce biased results if model assumptions are violated. In this paper, treatment effects are alternatively assessed by using forecasting methods to closely fit the preintervention observations and then forecast the post-intervention trend. A treatment effect may be inferred if the actual post-intervention observations diverge from the forecasts by some specified amount. The forecasting approach is demonstrated using the effect of California's Proposition 99 for reducing cigarette sales. Three forecast models are fit to the preintervention series: linear regression (REG), Holt-Winters (HW) non-seasonal smoothing, and autoregressive integrated moving average (ARIMA). Forecasts are then generated into the post-intervention period, and the actual observations are compared with the forecasts to assess intervention effects. The preintervention data were fit best by HW, followed closely by ARIMA; REG fit the data poorly. The actual post-intervention observations were above the forecasts in HW and ARIMA, suggesting no intervention effect, but below the forecasts in REG (suggesting a treatment effect), thereby raising doubts about any definitive conclusion of a treatment effect. In a single-group ITSA, treatment effects are likely to be biased if the model is misspecified. Therefore, evaluators should consider using forecast models to accurately fit the preintervention data and generate plausible counterfactual forecasts, thereby improving causal inference of treatment effects in single-group ITSA studies. © 2018 John Wiley & Sons, Ltd.
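The forecasting approach can be sketched with standard time-series tools: fit models to the pre-intervention series only and project them over the post-intervention window as a counterfactual. The example below uses statsmodels' Holt-Winters and ARIMA implementations with placeholder data and model orders, not the study's fitted models.

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA
from statsmodels.tsa.holtwinters import ExponentialSmoothing

def counterfactual_forecasts(pre, n_post, arima_order=(1, 1, 1)):
    """Fit two models to the pre-intervention series and forecast the
    post-intervention period as a counterfactual (no-treatment) trend."""
    hw = ExponentialSmoothing(pre, trend="add").fit()
    arima = ARIMA(pre, order=arima_order).fit()
    return {"HW": hw.forecast(n_post), "ARIMA": arima.forecast(n_post)}

# a divergence of the actual post-intervention observations from these
# forecasts would be read as evidence of a treatment effect
pre_series = np.array([120.5, 118.2, 117.9, 116.0, 114.8, 113.1,
                       112.4, 111.0, 110.2, 109.5, 108.1, 107.4])
print(counterfactual_forecasts(pre_series, n_post=4))
```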
Adaptive use of research aircraft data sets for hurricane forecasts
NASA Astrophysics Data System (ADS)
Biswas, M. K.; Krishnamurti, T. N.
2008-02-01
This study uses an adaptive observational strategy for hurricane forecasting. It shows the impacts of Lidar Atmospheric Sensing Experiment (LASE) and dropsonde data sets from the Convection and Moisture Experiment (CAMEX) field campaigns on hurricane track and intensity forecasts. The following cases are used in this study: Bonnie, Danielle and Georges of 1998 and Erin, Gabrielle and Humberto of 2001. A single model run for each storm is carried out using the Florida State University Global Spectral Model (FSUGSM) with the European Center for Medium Range Weather Forecasts (ECMWF) analysis as initial conditions, in addition to 50 other model runs where the analysis is randomly perturbed for each storm. The centers of maximum variance of the DLM heights are located from the forecast error variance fields at the 84-hr forecast. Back correlations are then performed using the centers of these maximum variances and the fields at the 36-hr forecast. The regions having the highest correlations in the vicinity of the hurricanes are indicative of regions from where the error growth emanates and suggest the need for additional observations. Data sets are next assimilated in those areas that contain high correlations. Forecasts are computed using the new initial conditions for the storm cases, and track and intensity skills are then examined with respect to the control forecast. The adaptive strategy is capable of identifying sensitive areas where additional observations can help in reducing the hurricane track forecast errors. A reduction of position error by approximately 52% for day 3 of the forecast (averaged over 7 storm cases) relative to the control runs is observed. The intensity forecast shows only a slight positive impact due to the model’s coarse resolution.
Forecasting conditional climate-change using a hybrid approach
Esfahani, Akbar Akbari; Friedel, Michael J.
2014-01-01
A novel approach is proposed to forecast the likelihood of climate-change across spatial landscape gradients. This hybrid approach involves reconstructing past precipitation and temperature using the self-organizing map technique; determining quantile trends in the climate-change variables by quantile regression modeling; and computing conditional forecasts of climate-change variables based on self-similarity in quantile trends using the fractionally differenced auto-regressive integrated moving average technique. The proposed modeling approach is applied to states (Arizona, California, Colorado, Nevada, New Mexico, and Utah) in the southwestern U.S., where conditional forecasts of climate-change variables are evaluated against recent (2012) observations, evaluated at a future time period (2030), and evaluated as future trends (2009–2059). These results have broad economic, political, and social implications because they quantify uncertainty in climate-change forecasts affecting various sectors of society. Another benefit of the proposed hybrid approach is that it can be extended to any spatiotemporal scale providing self-similarity exists.
Integrating predictive information into an agro-economic model to guide agricultural management
NASA Astrophysics Data System (ADS)
Zhang, Y.; Block, P.
2016-12-01
Skillful season-ahead climate predictions linked with responsive agricultural planning and management have the potential to reduce losses, if adopted by farmers, particularly for rainfed-dominated agriculture such as in Ethiopia. Precipitation predictions during the growing season in major agricultural regions of Ethiopia are used to generate predicted climate yield factors, which reflect the influence of precipitation amounts on crop yields and serve as inputs into an agro-economic model. The adapted model, originally developed by the International Food Policy Research Institute, produces outputs of economic indices (GDP, poverty rates, etc.) at zonal and national levels. Forecast-based approaches, in which farmers' actions are in response to forecasted conditions, are compared with no-forecast approaches in which farmers follow business as usual practices, expecting "average" climate conditions. The effects of farmer adoption rates, including the potential for reduced uptake due to poor predictions, and increasing forecast lead-time on economic outputs are also explored. Preliminary results indicate superior gains under forecast-based approaches.
Forecast of severe fever with thrombocytopenia syndrome incidence with meteorological factors.
Sun, Ji-Min; Lu, Liang; Liu, Ke-Ke; Yang, Jun; Wu, Hai-Xia; Liu, Qi-Yong
2018-06-01
Severe fever with thrombocytopenia syndrome (SFTS) is emerging, and some studies have reported that SFTS incidence is associated with meteorological factors, but no SFTS forecast model has been reported to date. In this study, we constructed and compared three forecast models using an autoregressive integrated moving average (ARIMA) model, a negative binomial regression model (NBM), and a quasi-Poisson generalized additive model (GAM). The dataset from 2011 to 2015 was used for model construction and the dataset from 2016 was used for external validity assessment. All three models fitted the SFTS cases reasonably well during the training and forecast processes, while the NBM model forecast better than the other two models. Moreover, we demonstrated that temperature and relative humidity played key roles in explaining the temporal dynamics of SFTS occurrence. Our study contributes to a better understanding of SFTS dynamics and provides predictive tools for the control and prevention of SFTS. Copyright © 2018 Elsevier B.V. All rights reserved.
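A minimal sketch of the negative binomial variant, assuming weekly case counts regressed on temperature and relative humidity with statsmodels, is shown below; the covariate layout and dispersion parameter are illustrative assumptions, not the fitted model of the study.

```python
import numpy as np
import statsmodels.api as sm

def fit_nbm(cases, temperature, humidity):
    """Negative binomial regression of weekly SFTS counts on weather.

    cases       : (n_weeks,) observed case counts
    temperature : (n_weeks,) mean temperature, possibly lagged
    humidity    : (n_weeks,) mean relative humidity, possibly lagged
    """
    X = sm.add_constant(np.column_stack([temperature, humidity]))
    model = sm.GLM(cases, X, family=sm.families.NegativeBinomial(alpha=1.0))
    return model.fit()

# result = fit_nbm(cases, temperature, humidity)
# result.predict(X_new) then gives forecast mean counts for new weeks
```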
NASA Astrophysics Data System (ADS)
Narasimha Murthy, K. V.; Saravana, R.; Vijaya Kumar, K.
2018-02-01
Weather forecasting is an important issue in the field of meteorology all over the world. The pattern and amount of rainfall are essential factors that affect agricultural systems. India experiences the precious Southwest monsoon season for four months from June to September. The present paper describes an empirical study for modeling and forecasting the time series of Southwest monsoon rainfall patterns in North-East India. The Box-Jenkins Seasonal Autoregressive Integrated Moving Average (SARIMA) methodology has been adopted for model identification, diagnostic checking and forecasting for this region. The study has shown that the SARIMA (0, 1, 1)(1, 0, 1)4 model is appropriate for analyzing and forecasting future rainfall patterns. The Analysis of Means (ANOM) is a useful alternative to the analysis of variance (ANOVA) for comparing groups of treatments to study the variations and critical comparisons of rainfall patterns in different months of the season.
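The named SARIMA(0, 1, 1)(1, 0, 1)4 specification can be fitted with statsmodels' SARIMAX, reading the subscript 4 as the seasonal period (four monsoon months per year); the rainfall series below is a synthetic placeholder rather than the study's data.

```python
import numpy as np
from statsmodels.tsa.statespace.sarimax import SARIMAX

def fit_sarima(rainfall, order=(0, 1, 1), seasonal_order=(1, 0, 1, 4)):
    """Fit the SARIMA(0,1,1)(1,0,1)_4 model to a monsoon rainfall series."""
    model = SARIMAX(rainfall, order=order, seasonal_order=seasonal_order)
    return model.fit(disp=False)

# placeholder series: June-September monthly rainfall stacked year after year
rain = np.random.default_rng(1).gamma(shape=2.0, scale=150.0, size=80)
res = fit_sarima(rain)
print(res.forecast(steps=4))     # next season's four monthly values
```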
NASA Astrophysics Data System (ADS)
Singh, Navneet K.; Singh, Asheesh K.; Tripathy, Manoj
2012-05-01
For power industries, electricity load forecasting plays an important role in real-time control, security, optimal unit commitment, economic scheduling, maintenance, energy management, and plant structure planning.
Fuzzy Temporal Logic Based Railway Passenger Flow Forecast Model
Dou, Fei; Jia, Limin; Wang, Li; Xu, Jie; Huang, Yakun
2014-01-01
Passenger flow forecasting is of essential importance to the organization of railway transportation and is one of the most important bases for decision-making on transportation patterns and train operation planning. Passenger flow on high-speed railways features quasi-periodic variations over short time spans and complex nonlinear fluctuations because of the existence of many influencing factors. In this study, a fuzzy temporal logic based passenger flow forecast model (FTLPFFM) is presented, based on fuzzy logic relationship recognition techniques, that predicts short-term passenger flow for high-speed railways with significantly improved forecast accuracy. An applied case using real-world data illustrates the precision and accuracy of FTLPFFM. For this applied case, the proposed model performs better than the k-nearest neighbor (KNN) and autoregressive integrated moving average (ARIMA) models. PMID:25431586
[Medium-term forecast of solar cosmic rays radiation risk during a manned Mars mission].
Petrov, V M; Vlasov, A G
2006-01-01
Medium-term forecasting of the radiation hazard from solar cosmic rays (SCR) will be vital in a manned Mars mission. Modern methods of space physics lack acceptable reliability in medium-term forecasting of the SCR onset and parameters. The proposed estimation of the average radiation risk from SCR during a manned Mars mission is made with the use of existing SCR fluence and spectrum models and the correlation of solar particle event frequency with the predicted Wolf number. Radiation risk is considered an additional death probability from acute radiation reactions (ergonomic component) or acute radiation disease in flight. The algorithm for radiation risk calculation is described, and the resulting risk levels for various periods of the 23rd solar cycle are presented. The applicability of this method to advance forecasting and possible improvements are being investigated. Recommendations to the crew based on risk estimation are exemplified.
NASA Astrophysics Data System (ADS)
Mariani, S.; Casaioli, M.; Lastoria, B.; Accadia, C.; Flavoni, S.
2009-04-01
The Institute for Environmental Protection and Research - ISPRA (former Agency for Environmental Protection and Technical Services - APAT) has run operationally since 2000 an integrated meteo-marine forecasting chain, named the Hydro-Meteo-Marine Forecasting System (Sistema Idro-Meteo-Mare - SIMM), formed by a cascade of four numerical models, telescoping from the Mediterranean basin to the Venice Lagoon, and initialized by means of analyses and forecasts from the European Centre for Medium-Range Weather Forecasts (ECMWF). The operational integrated system consists of a meteorological model, the parallel version of the BOlogna Limited Area Model (BOLAM), coupled over the Mediterranean Sea with a WAve Model (WAM), a high-resolution shallow-water model of the Adriatic and Ionian Seas, namely the Princeton Ocean Model (POM), and a finite-element version of the same model (VL-FEM) for the Venice Lagoon, aimed at forecasting the acqua alta events. Recently, the physically based, fully distributed, rainfall-runoff TOPographic Kinematic APproximation and Integration (TOPKAPI) model has been integrated into the system, coupled to BOLAM, over two river basins, located in the central and northeastern parts of Italy, respectively. However, at the present time, this latter part of the forecasting chain is not operational and is used in a research configuration. BOLAM was originally implemented in 2000 on the Quadrics parallel supercomputer (and for this reason also referred to as QBOLAM), and only at the end of 2006 was it ported (together with the other operational marine models of the forecasting chain) onto the Silicon Graphics Inc. (SGI) Altix 8-processor machine. In particular, due to the Quadrics implementation, the Kuo scheme was formerly implemented in QBOLAM for the cumulus convection parameterization. In contrast, when porting SIMM onto the Altix Linux cluster, it became possible to implement in QBOLAM the more advanced convection parameterization by Kain and Fritsch. A fully updated serial version of the BOLAM code has recently been acquired. Code improvements include a more precise advection scheme (Weighted Average Flux); explicit advection of five hydrometeors; and state-of-the-art parameterization schemes for radiation, convection, boundary layer turbulence and soil processes (also with a possible choice among different available schemes). The operational implementation of the new code into the SIMM model chain, which requires the development of a parallel version, will be achieved during 2009. In view of this goal, the comparative verification of the skill of the different model versions represents a fundamental task. For this purpose, it has been decided to evaluate the performance improvement of the new BOLAM code (in the available serial version, hereinafter BOLAM 2007) with respect to the version with the Kain-Fritsch scheme (hereinafter KF version) and to the older one employing the Kuo scheme (hereinafter Kuo version). In the present work, verification of precipitation forecasts from the three BOLAM versions is carried out in a case-study approach. The intense rainfall episode that occurred on 10-17 December 2008 over Italy has been considered; this event indeed produced severe damage in Rome and its surrounding areas. Objective and subjective verification methods have been employed in order to evaluate model performance against an observational dataset including rain gauge observations and satellite imagery.
Subjective comparison of observed and forecast precipitation fields is suitable for giving an overall description of the forecast quality. Spatial errors (e.g., shifting and pattern errors) and the rainfall volume error can be assessed quantitatively by means of object-oriented methods. By comparing satellite images with model forecast fields, it is possible to investigate the differences between the evolution of the observed weather system and the predicted one, and its sensitivity to the improvements in the model code. Finally, the error in forecasting the cyclone evolution can be tentatively related to the precipitation forecast error.
2017-11-22
Weather Research and Forecasting Model Simulations, by John W. Raby and Huaqing Cai, Computational and Information Sciences Directorate, ARL.
Energy Forecasting Models Within the Department of the Navy.
1982-06-01
...standing the climatic conditions responsible for the results. Both models have particular advantages in particular applications and will be examined... and moving average processes. A similar notation for a model with seasonality considerations will be ARIMA(p, d, q)(P, D, Q)s with s = 12, where the upper...
2012-09-30
...and Forecasting Based on Observations, Adaptive Sampling, and Numerical Prediction. Steven R. Ramp, Soliton Ocean Services, Inc., 691 Country Club Drive, Monterey, CA 93924. ...shortwave. The results show that the incoming shortwave radiation was the dominant term, even when averaged over the dark hours, which accounts
Monthly ENSO Forecast Skill and Lagged Ensemble Size
NASA Astrophysics Data System (ADS)
Trenary, L.; DelSole, T.; Tippett, M. K.; Pegion, K.
2018-04-01
The mean square error (MSE) of a lagged ensemble of monthly forecasts of the Niño 3.4 index from the Climate Forecast System (CFSv2) is examined with respect to ensemble size and configuration. Although the real-time forecast is initialized 4 times per day, it is possible to infer the MSE for arbitrary initialization frequency and for burst ensembles by fitting error covariances to a parametric model and then extrapolating to arbitrary ensemble size and initialization frequency. Applying this method to real-time forecasts, we find that the MSE consistently reaches a minimum for a lagged ensemble size between one and eight days, when four initializations per day are included. This ensemble size is consistent with the 8-10 day lagged ensemble configuration used operationally. Interestingly, the skill of both ensemble configurations is close to the estimated skill of the infinite ensemble. The skill of the weighted, lagged, and burst ensembles are found to be comparable. Certain unphysical features of the estimated error growth were tracked down to problems with the climatology and data discontinuities.
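The extrapolation idea can be illustrated in a few lines: once an error covariance matrix across lags has been fitted, the MSE of an equally weighted lagged ensemble mean of any size is a quadratic form in the weights. The parametric covariance model below is a toy example, not the fitted CFSv2 parameters.

```python
import numpy as np

def lagged_ensemble_mse(error_cov, n):
    """MSE of an equally weighted lagged ensemble mean of the n most recent
    initializations, given the (n_lags x n_lags) error covariance matrix
    fitted to a parametric model of bias-corrected forecast errors."""
    C = error_cov[:n, :n]
    w = np.full(n, 1.0 / n)
    return float(w @ C @ w)

# toy parametric covariance: error variance grows with lag, correlation decays
lags = np.arange(12)
var = 0.2 + 0.05 * lags
corr = 0.9 ** np.abs(lags[:, None] - lags[None, :])
cov = np.sqrt(var[:, None] * var[None, :]) * corr
mse_by_size = [lagged_ensemble_mse(cov, n) for n in range(1, 13)]
print(int(np.argmin(mse_by_size)) + 1)   # lagged ensemble size with minimum MSE
```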
Value-at-Risk forecasts by a spatiotemporal model in Chinese stock market
NASA Astrophysics Data System (ADS)
Gong, Pu; Weng, Yingliang
2016-01-01
This paper generalizes a recently proposed spatial autoregressive model and introduces a spatiotemporal model for forecasting stock returns. We support the view that stock returns are affected not only by the absolute values of factors such as firm size, book-to-market ratio and momentum but also by the relative values of factors like trading volume ranking and market capitalization ranking in each period. This article studies a new method for constructing stocks' reference groups, called the quartile method. Applying the method empirically to the Shanghai Stock Exchange 50 Index, we compare the daily volatility forecasting performance and the out-of-sample forecasting performance of Value-at-Risk (VaR) estimated by different models. The empirical results show that the spatiotemporal model performs surprisingly well in terms of capturing spatial dependencies among individual stocks, and it produces more accurate VaR forecasts than the other three models introduced in the previous literature. Moreover, the findings indicate that both allowing for serial correlation in the disturbances and using time-varying spatial weight matrices can greatly improve the predictive accuracy of a spatial autoregressive model.
Forecasting Daily Patient Outflow From a Ward Having No Real-Time Clinical Data
Tran, Truyen; Luo, Wei; Phung, Dinh; Venkatesh, Svetha
2016-01-01
Background: Modeling patient flow is crucial in understanding resource demand and prioritization. We study patient outflow from an open ward in an Australian hospital, where bed allocation is currently carried out by a manager relying on past experience and observed demand. Automatic methods that provide a reasonable estimate of total next-day discharges can aid efficient bed management. The challenges in building such methods lie in dealing with large amounts of discharge noise introduced by the nonlinear nature of hospital procedures, and the nonavailability of real-time clinical information in wards. Objective: Our study investigates different models to forecast the total number of next-day discharges from an open ward having no real-time clinical data. Methods: We compared 5 popular regression algorithms to model total next-day discharges: (1) autoregressive integrated moving average (ARIMA), (2) autoregressive moving average with exogenous variables (ARMAX), (3) k-nearest neighbor regression, (4) random forest regression, and (5) support vector regression. Whereas the ARIMA model relied on the past 3 months of discharges, nearest neighbor forecasting used the median of similar past discharges to estimate next-day discharge. In addition, the ARMAX model used the day of the week and the number of patients currently in the ward as exogenous variables. For the random forest and support vector regression models, we designed a predictor set of 20 patient features and 88 ward-level features. Results: Our data consisted of 12,141 patient visits over 1826 days. Forecasting quality was measured using mean forecast error, mean absolute error, symmetric mean absolute percentage error, and root mean square error. When compared with a moving average prediction model, all 5 models demonstrated superior performance, with random forests achieving a 22.7% improvement in mean absolute error for all days in the year 2014. Conclusions: In the absence of clinical information, our study recommends using patient-level and ward-level data in predicting next-day discharges. Random forest and support vector regression models are able to use all available features from such data, resulting in superior performance over traditional autoregressive methods. An intelligent estimate of available beds in wards plays a crucial role in relieving access block in emergency departments. PMID:27444059
NASA Astrophysics Data System (ADS)
Wei, C.; Cheng, K. S.
Using meteorological radar and satellite imagery has become an efficient tool for rainfall forecasting. However, few studies have aimed to predict quantitative rainfall in small watersheds for flood forecasting using remote sensing data. Due to the terrain shelter and ground clutter effects of the Central Mountain Ridges, the application of meteorological radar data is limited in the mountainous areas of central Taiwan. This study devises a new scheme to predict rainfall of a small upstream watershed by combining GOES-9 geostationary weather satellite imagery and ground rainfall records, which can be applied for local quantitative rainfall forecasting during periods of typhoon and heavy rainfall. Imagery of two typhoon events in 2004 and five corresponding ground rain gauge records from the Chitou Forest Recreational Area, which is located in the upstream region of the Bei-Shi river, were analyzed in this study. The watershed covers 12.7 square kilometers, with altitudes ranging from 1000 m to 1800 m. Basin-wide Average Rainfall (BAR) in the study area was estimated by block kriging. Cloud Top Temperature (CTT) from satellite imagery and ground hourly rainfall records were moderately correlated; the regression coefficient ranges from 0.5 to 0.7, and the value decreases as the altitude of the gauge site increases. The regression coefficient between CTT and the next 2- to 6-hour accumulated BAR decreases as the time scale increases. Rainfall forecasting for BAR was analyzed with a Kalman filtering technique. The correlation coefficient and average hourly deviation between the estimated and observed values of BAR for
Error Estimation of An Ensemble Statistical Seasonal Precipitation Prediction Model
NASA Technical Reports Server (NTRS)
Shen, Samuel S. P.; Lau, William K. M.; Kim, Kyu-Myong; Li, Gui-Long
2001-01-01
This NASA Technical Memorandum describes an optimal ensemble canonical correlation forecasting model for seasonal precipitation. Each individual forecast is based on the canonical correlation analysis (CCA) in the spectral spaces whose bases are empirical orthogonal functions (EOF). The optimal weights in the ensemble forecasting crucially depend on the mean square error of each individual forecast. An estimate of the mean square error of a CCA prediction is made, also using the spectral method. The error is decomposed onto EOFs of the predictand and decreases linearly according to the correlation between the predictor and predictand. Since the new CCA scheme is derived for continuous fields of predictor and predictand, an area factor is automatically included. Thus our model is an improvement on the spectral CCA scheme of Barnett and Preisendorfer. The improvements include (1) the use of the area factor, (2) the estimation of prediction error, and (3) the optimal ensemble of multiple forecasts. The new CCA model is applied to the seasonal forecasting of the United States (US) precipitation field. The predictor is the sea surface temperature (SST). The US Climate Prediction Center's reconstructed SST is used as the predictor's historical data. The US National Center for Environmental Prediction's optimally interpolated precipitation (1951-2000) is used as the predictand's historical data. Our forecast experiments show that the new ensemble canonical correlation scheme renders reasonable forecasting skill. For example, when using September-October-November SST to predict the next season's December-January-February precipitation, the spatial pattern correlations between the observed and predicted fields are positive in 46 of the 50 years of experiments. The positive correlations are close to or greater than 0.4 in 29 years, which indicates excellent performance of the forecasting model. The forecasting skill can be further enhanced when several predictors are used.
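The optimal ensemble step, weighting individual CCA forecasts by their estimated mean square errors, can be illustrated with a simple inverse-MSE weighting of unbiased, approximately independent members; this is a sketch of the general idea, not the memorandum's exact derivation.

```python
import numpy as np

def inverse_mse_weights(member_mse):
    """Weights for an ensemble of independent, unbiased forecasts with
    estimated mean square errors member_mse; weights sum to one and give
    more influence to the more accurate members."""
    w = 1.0 / np.asarray(member_mse, dtype=float)
    return w / w.sum()

def ensemble_forecast(member_forecasts, member_mse):
    """Weighted-average forecast field from individual CCA predictions.

    member_forecasts : (n_members, n_gridpoints) individual predictions
    member_mse       : (n_members,) estimated mean square errors
    """
    w = inverse_mse_weights(member_mse)
    return w @ np.asarray(member_forecasts)
```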
The invisible benefits of exercise.
Ruby, Matthew B; Dunn, Elizabeth W; Perrino, Andrea; Gillis, Randall; Viel, Sasha
2011-01-01
To examine whether--and why--people underestimate how much they enjoy exercise. Across four studies, 279 adults predicted how much they would enjoy exercising, or reported their actual feelings after exercising. Main outcome measures were predicted and actual enjoyment ratings of exercise routines, as well as intention to exercise. Participants significantly underestimated how much they would enjoy exercising; this affective forecasting bias emerged consistently for group and individual exercise, and moderate and challenging workouts spanning a wide range of forms, from yoga and Pilates to aerobic exercise and weight training (Studies 1 and 2). We argue that this bias stems largely from forecasting myopia, whereby people place disproportionate weight on the beginning of a workout, which is typically unpleasant. We demonstrate that forecasting myopia can be harnessed (Study 3) or overcome (Study 4), thereby increasing expected enjoyment of exercise. Finally, Study 4 provides evidence for a mediational model, in which improving people's expected enjoyment of exercise leads to increased intention to exercise. People underestimate how much they enjoy exercise because of a myopic focus on the unpleasant beginning of exercise, but this tendency can be harnessed or overcome, potentially increasing intention to exercise. (PsycINFO Database Record (c) 2010 APA, all rights reserved).
Buitrago, Jaime; Asfour, Shihab
2017-01-01
Short-term load forecasting is crucial for the operations planning of an electrical grid. Forecasting the next 24 h of electrical load in a grid allows operators to plan and optimize their resources. The purpose of this study is to develop a more accurate short-term load forecasting method utilizing non-linear autoregressive artificial neural networks (ANN) with exogenous multi-variable input (NARX). The proposed implementation of the network is new: the neural network is trained in open-loop using actual load and weather data, and then, the network is placed in closed-loop to generate a forecast using the predicted load as the feedback input. Unlike the existing short-term load forecasting methods using ANNs, the proposed method uses its own output as the input in order to improve the accuracy, thus effectively implementing a feedback loop for the load, making it less dependent on external data. Using the proposed framework, mean absolute percent errors in the forecast on the order of 1% have been achieved, which is a 30% improvement on the average error using feedforward ANNs, ARMAX and state space methods, which can result in large savings by avoiding commissioning of unnecessary power plants. Finally, the New England electrical load data are used to train and validate the forecast prediction.
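The open-loop/closed-loop idea generalizes to any autoregressive regressor: train on actual lagged loads plus exogenous weather, then roll the model forward, feeding its own predictions back as lagged inputs. In the sketch below, scikit-learn's MLPRegressor stands in for the NARX network, and the feature layout is an assumption, not the paper's architecture.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def closed_loop_forecast(model, last_loads, future_weather, n_lags):
    """Roll a trained autoregressive model forward hour by hour, feeding each
    predicted load back in as a lagged input (closed loop).

    model          : regressor trained open-loop on [lagged loads, weather]
    last_loads     : the n_lags most recent observed loads
    future_weather : (n_hours, n_weather) forecast weather features per hour
    """
    history = list(last_loads)
    forecast = []
    for h in range(future_weather.shape[0]):
        x = np.concatenate([history[-n_lags:], future_weather[h]])
        y_hat = float(model.predict(x.reshape(1, -1))[0])
        forecast.append(y_hat)
        history.append(y_hat)          # feedback of the prediction
    return np.array(forecast)

# open-loop training would look like:
# model = MLPRegressor(hidden_layer_sizes=(20,), max_iter=2000)
# model.fit(X_train, y_train)  # X_train built from actual lagged loads + weather
```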
Rimaityte, Ingrida; Ruzgas, Tomas; Denafas, Gintaras; Racys, Viktoras; Martuzevicius, Dainius
2012-01-01
Forecasting the generation of municipal solid waste (MSW) in developing countries is often a challenging task due to the lack of data and the selection of a suitable forecasting method. This article aimed to select and evaluate several methods for MSW forecasting in a medium-sized Eastern European city (Kaunas, Lithuania) with rapidly developing economics, with respect to affluence-related and seasonal impacts. MSW generation was forecast with respect to the economic activity of the city (regression modelling) and using time series analysis. The modelling based on socio-economic indicators (regression implemented in the LCA-IWM model) showed particular sensitivity (deviation from actual data in the range from 2.2 to 20.6%) to external factors, such as the synergetic effects of affluence parameters or changes in the MSW collection system. For the time series analysis, the combination of autoregressive integrated moving average (ARIMA) and seasonal exponential smoothing (SES) techniques was found to be the most accurate (mean absolute percentage error equal to 6.5). The time series analysis method was very valuable for forecasting the weekly variation of waste generation data (r² > 0.87), but the forecast yearly increase should be verified against the data obtained by regression modelling. The methods and findings of this study may assist experts, decision-makers and scientists performing forecasts of MSW generation, especially in developing countries.
Experiments with a three-dimensional statistical objective analysis scheme using FGGE data
NASA Technical Reports Server (NTRS)
Baker, Wayman E.; Bloom, Stephen C.; Woollen, John S.; Nestler, Mark S.; Brin, Eugenia
1987-01-01
A three-dimensional (3D), multivariate, statistical objective analysis scheme (referred to as optimum interpolation or OI) has been developed for use in numerical weather prediction studies with the FGGE data. Some novel aspects of the present scheme include: (1) a multivariate surface analysis over the oceans, which employs an Ekman balance instead of the usual geostrophic relationship, to model the pressure-wind error cross correlations, and (2) the capability to use an error correlation function which is geographically dependent. A series of 4-day data assimilation experiments are conducted to examine the importance of some of the key features of the OI in terms of their effects on forecast skill, as well as to compare the forecast skill using the OI with that utilizing a successive correction method (SCM) of analysis developed earlier. For the three cases examined, the forecast skill is found to be rather insensitive to varying the error correlation function geographically. However, significant differences are noted between forecasts from a two-dimensional (2D) version of the OI and those from the 3D OI, with the 3D OI forecasts exhibiting better forecast skill. The 3D OI forecasts are also more accurate than those from the SCM initial conditions. The 3D OI with the multivariate oceanic surface analysis was found to produce forecasts which were slightly more accurate, on the average, than a univariate version.
Random matrix theory filters in portfolio optimisation: A stability and risk assessment
NASA Astrophysics Data System (ADS)
Daly, J.; Crane, M.; Ruskin, H. J.
2008-07-01
Random matrix theory (RMT) filters, applied to covariance matrices of financial returns, have recently been shown to offer improvements to the optimisation of stock portfolios. This paper studies the effect of three RMT filters on the realised portfolio risk, and on the stability of the filtered covariance matrix, using bootstrap analysis and out-of-sample testing. We propose an extension to an existing RMT filter (based on Krzanowski stability), which is observed to reduce risk and increase stability when compared to the other RMT filters tested. We also study a scheme for filtering the covariance matrix directly, as opposed to the standard method of filtering correlation, where the latter is found to lower the realised risk, on average, by up to 6.7%. We consider both equally and exponentially weighted covariance matrices in our analysis, and observe that the overall best method out-of-sample was that of the exponentially weighted covariance, with our Krzanowski stability-based filter applied to the correlation matrix. We also find that the optimal out-of-sample decay factors, for both filtered and unfiltered forecasts, were higher than those suggested by Riskmetrics [J.P. Morgan, Reuters, Riskmetrics technical document, Technical Report, 1996. http://www.riskmetrics.com/techdoc.html], with those for the latter approaching a value of α=1. In conclusion, RMT filtering reduced the realised risk, on average, and in the majority of cases when tested out-of-sample, but increased the realised risk on a marked number of individual days; in some cases it more than doubled it.
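For context, a common eigenvalue-clipping RMT filter (one standard variant, not necessarily the Krzanowski-based filter proposed here) replaces correlation-matrix eigenvalues below the Marchenko-Pastur upper edge with their average while preserving the trace; the sketch below assumes a matrix of standardized returns.

```python
import numpy as np

def rmt_clip_correlation(returns):
    """Filter a correlation matrix by eigenvalue clipping.

    returns : (T, N) matrix of (standardized) asset returns.
    Eigenvalues below the Marchenko-Pastur upper edge are treated as noise
    and replaced by their average so that the trace (= N) is preserved.
    """
    T, N = returns.shape
    corr = np.corrcoef(returns, rowvar=False)
    lam, vec = np.linalg.eigh(corr)
    lam_max = (1.0 + np.sqrt(N / T)) ** 2          # MP upper edge, sigma^2 = 1
    noise = lam < lam_max
    lam_filtered = lam.copy()
    lam_filtered[noise] = lam[noise].mean()
    filtered = vec @ np.diag(lam_filtered) @ vec.T
    np.fill_diagonal(filtered, 1.0)                # keep unit diagonal
    return filtered
```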
Predicting Individuals' Learning Success from Patterns of Pre-Learning MRI Activity
Vo, Loan T. K.; Walther, Dirk B.; Kramer, Arthur F.; Erickson, Kirk I.; Boot, Walter R.; Voss, Michelle W.; Prakash, Ruchika S.; Lee, Hyunkyu; Fabiani, Monica; Gratton, Gabriele; Simons, Daniel J.; Sutton, Bradley P.; Wang, Michelle Y.
2011-01-01
Performance in most complex cognitive and psychomotor tasks improves with training, yet the extent of improvement varies among individuals. Is it possible to forecast the benefit that a person might reap from training? Several behavioral measures have been used to predict individual differences in task improvement, but their predictive power is limited. Here we show that individual differences in patterns of time-averaged T2*-weighted MRI images in the dorsal striatum recorded at the initial stage of training predict subsequent learning success in a complex video game with high accuracy. These predictions explained more than half of the variance in learning success among individuals, suggesting that individual differences in neuroanatomy or persistent physiology predict whether and to what extent people will benefit from training in a complex task. Surprisingly, predictions from white matter were highly accurate, while voxels in the gray matter of the dorsal striatum did not contain any information about future training success. Prediction accuracy was higher in the anterior than the posterior half of the dorsal striatum. The link between trainability and the time-averaged T2*-weighted signal in the dorsal striatum reaffirms the role of this part of the basal ganglia in learning and executive functions, such as task-switching and task coordination processes. The ability to predict who will benefit from training by using neuroimaging data collected in the early training phase may have far-reaching implications for the assessment of candidates for specific training programs as well as the study of populations that show deficiencies in learning new skills. PMID:21264257
Paul, Susannah; Mgbere, Osaro; Arafat, Raouf; Yang, Biru; Santos, Eunice
2017-01-01
Objective The objective was to forecast and validate prediction estimates of influenza activity in Houston, TX using four years of historical influenza-like illness (ILI) from three surveillance data capture mechanisms. Background Using novel surveillance methods and historical data to estimate future trends of influenza-like illness can lead to early detection of influenza activity increases and decreases. Anticipating surges gives public health professionals more time to prepare and increase prevention efforts. Methods Data was obtained from three surveillance systems, Flu Near You, ILINet, and hospital emergency center (EC) visits, with diverse data capture mechanisms. Autoregressive integrated moving average (ARIMA) models were fitted to data from each source for week 27 of 2012 through week 26 of 2016 and used to forecast influenza-like activity for the subsequent 10 weeks. Estimates were then compared to actual ILI percentages for the same period. Results Forecasted estimates had wide confidence intervals that crossed zero. The forecasted trend direction differed by data source, resulting in lack of consensus about future influenza activity. ILINet forecasted estimates and actual percentages had the least differences. ILINet performed best when forecasting influenza activity in Houston, TX. Conclusion Though the three forecasted estimates did not agree on the trend directions, and thus, were considered imprecise predictors of long-term ILI activity based on existing data, pooling predictions and careful interpretations may be helpful for short term intervention efforts. Further work is needed to improve forecast accuracy considering the promise forecasting holds for seasonal influenza prevention and control, and pandemic preparedness.
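As an illustration of the workflow described above, the sketch below fits a nonseasonal ARIMA model to a hypothetical weekly ILI series with statsmodels and produces a 10-week forecast with confidence intervals; the synthetic series and the (2, 0, 1) order are placeholders, not the models fitted to the Houston data.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

# Hypothetical weekly ILI percentage series (week 27 of 2012 onward)
rng = np.random.default_rng(1)
ili = pd.Series(
    2.0 + np.sin(np.arange(208) * 2 * np.pi / 52) + rng.normal(0, 0.3, 208),
    index=pd.date_range("2012-07-01", periods=208, freq="W"),
)

# Fit an ARIMA model (order chosen for illustration only) and forecast 10 weeks ahead
model = ARIMA(ili, order=(2, 0, 1)).fit()
forecast = model.get_forecast(steps=10)
print(forecast.predicted_mean)
print(forecast.conf_int())   # wide intervals flag limited long-range precision
```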
a Bayesian Synthesis of Predictions from Different Models for Setting Water Quality Criteria
NASA Astrophysics Data System (ADS)
Arhonditsis, G. B.; Ecological Modelling Laboratory
2011-12-01
Skeptical views of the scientific value of modelling argue that there is no true model of an ecological system, but rather several adequate descriptions of different conceptual basis and structure. In this regard, rather than picking the single "best-fit" model to predict future system responses, we can use Bayesian model averaging to synthesize the forecasts from different models. Hence, by acknowledging that models from different areas of the complexity spectrum have different strengths and weaknesses, the Bayesian model averaging is an appealing approach to improve the predictive capacity and to overcome the ambiguity surrounding the model selection or the risk of basing ecological forecasts on a single model. Our study addresses this question using a complex ecological model, developed by Ramin et al. (2011; Environ Modell Softw 26, 337-353) to guide the water quality criteria setting process in the Hamilton Harbour (Ontario, Canada), along with a simpler plankton model that considers the interplay among phosphate, detritus, and generic phytoplankton and zooplankton state variables. This simple approach is more easily subjected to detailed sensitivity analysis and also has the advantage of fewer unconstrained parameters. Using Markov Chain Monte Carlo simulations, we calculate the relative mean standard error to assess the posterior support of the two models from the existing data. Predictions from the two models are then combined using the respective standard error estimates as weights in a weighted model average. The model averaging approach is used to examine the robustness of predictive statements made from our earlier work regarding the response of Hamilton Harbour to the different nutrient loading reduction strategies. The two eutrophication models are then used in conjunction with the SPAtially Referenced Regressions On Watershed attributes (SPARROW) watershed model. The Bayesian nature of our work is used: (i) to alleviate problems of spatiotemporal resolution mismatch between watershed and receiving waterbody models; and (ii) to overcome the conceptual or scale misalignment between processes of interest and supporting information. The proposed Bayesian approach provides an effective means of empirically estimating the relation between in-stream measurements of nutrient fluxes and the sources/sinks of nutrients within the watershed, while explicitly accounting for the uncertainty associated with the existing knowledge from the system along with the different types of spatial correlation typically underlying the parameter estimation of watershed models. Our modelling exercise offers the first estimates of the export coefficients and the delivery rates from the different subcatchments and thus generates testable hypotheses regarding the nutrient export "hot spots" in the studied watershed. Finally, we conduct modeling experiments that evaluate the potential improvement of the model parameter estimates and the decrease of the predictive uncertainty, if the uncertainty associated with the contemporary nutrient loading estimates is reduced. The lessons learned from this study will contribute towards the development of integrated modelling frameworks.
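A minimal numeric sketch of the weighted model average described above is given below, assuming the two models' predictions are combined with weights inversely proportional to their squared standard errors; the precise weighting used in the study may differ, and the predictions are illustrative values only.

```python
import numpy as np

def weighted_model_average(predictions, standard_errors):
    """Combine model predictions with weights inversely proportional to
    their squared standard errors (a common precision-weighting choice)."""
    predictions = np.asarray(predictions, dtype=float)
    se = np.asarray(standard_errors, dtype=float)
    weights = 1.0 / se**2
    weights /= weights.sum()
    return float(np.dot(weights, predictions)), weights

# Hypothetical predictions from a complex and a simple eutrophication model
complex_pred, simple_pred = 9.2, 11.5          # illustrative values
avg, w = weighted_model_average([complex_pred, simple_pred], [1.8, 2.6])
print(avg, w)
```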
An application of a multi model approach for solar energy prediction in Southern Italy
NASA Astrophysics Data System (ADS)
Avolio, Elenio; Lo Feudo, Teresa; Calidonna, Claudia Roberta; Contini, Daniele; Torcasio, Rosa Claudia; Tiriolo, Luca; Montesanti, Stefania; Transerici, Claudio; Federico, Stefano
2015-04-01
The accuracy of the short and medium range forecast of solar irradiance is very important for solar energy integration into the grid. This issue is particularly important for Southern Italy, where a significant availability of solar energy is associated with a poor development of the grid. In this work we analyse the performance of two deterministic models for the prediction of surface temperature and short-wavelength radiance for two sites in southern Italy. Both parameters are needed to forecast the power production from solar power plants, so the performance of the forecast for these meteorological parameters is of paramount importance. The models considered in this work are the RAMS (Regional Atmospheric Modeling System) and the WRF (Weather Research and Forecasting Model), and they were run for the summer 2013 at 4 km horizontal resolution over Italy. The forecast lasts three days. Initial and dynamic boundary conditions are given by the 12 UTC deterministic forecast of the ECMWF-IFS (European Centre for Medium-Range Weather Forecasts - Integrated Forecasting System) model, and were available every 6 hours. Verification is given against two surface stations located in Southern Italy, Lamezia Terme and Lecce, and is based on hourly output of the model forecasts. Results for the whole period for temperature show a positive bias for the RAMS model and a negative bias for the WRF model. RMSE is between 1 and 2 °C for both models. Results for the whole period for the short-wavelength radiance show a positive bias for both models (about 30 W/m2 for both models) and a RMSE of 100 W/m2. To reduce the model errors, a statistical post-processing technique, i.e. the multi-model, is adopted. In this approach the two models' outputs are weighted with an adequate set of weights computed for a training period. In general, the performance is improved by the application of the technique, and the RMSE is reduced by a sizeable fraction (i.e. larger than 10% of the initial RMSE) depending on the forecasting time and parameter. The performance of the multi-model is discussed as a function of the length of the training period and is compared with the performance of the MOS (Model Output Statistics) approach. ACKNOWLEDGMENTS This work is partially supported by projects PON04a2E Sinergreen-ResNovae - "Smart Energy Master for the energetic government of the territory" and PONa3_00363 "High Technology Infrastructure for Climate and Environment Monitoring" (I-AMICA) funded by the Italian Ministry of University and Research (MIUR) PON 2007-2013. The ECMWF and CNMCA (Centro Nazionale di Meteorologia e Climatologia Aeronautica) are acknowledged for the use of the MARS (Meteorological Archive and Retrieval System).
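The multi-model idea can be sketched as follows, with weights for the two models estimated by least squares over a training period and then applied to new forecasts; the unconstrained least-squares weights, synthetic data and variable names are assumptions for illustration, not the scheme actually used in the paper.

```python
import numpy as np

rng = np.random.default_rng(2)
obs_train = 25 + 3 * rng.standard_normal(240)             # hourly "observations"
rams_train = obs_train + 1.0 + rng.normal(0, 1.5, 240)    # model with a warm bias
wrf_train = obs_train - 0.8 + rng.normal(0, 1.5, 240)     # model with a cold bias

# Weights (and intercept) estimated from the training period via least squares
X = np.column_stack([np.ones_like(rams_train), rams_train, wrf_train])
coef, *_ = np.linalg.lstsq(X, obs_train, rcond=None)

# Apply the trained weights to new forecasts from the two models
rams_new, wrf_new = 28.4, 26.1
multimodel = coef[0] + coef[1] * rams_new + coef[2] * wrf_new
print(coef, multimodel)
```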
Hybrid model for forecasting time series with trend, seasonal and calendar variation patterns
NASA Astrophysics Data System (ADS)
Suhartono; Rahayu, S. P.; Prastyo, D. D.; Wijayanti, D. G. P.; Juliyanto
2017-09-01
Most monthly time series data in economics and business in Indonesia and other Muslim countries not only contain trend and seasonal patterns, but are also affected by two types of calendar variation effects, i.e. the effect of the number of working or trading days, and holiday effects. The purpose of this research is to develop a hybrid model, or a combination of several forecasting models, to predict time series that contain trend, seasonal and calendar variation patterns. This hybrid model is a combination of classical models (namely time series regression and the ARIMA model) and/or modern methods (an artificial intelligence method, i.e. Artificial Neural Networks). A simulation study was used to show that the proposed procedure for building the hybrid model works well for forecasting time series with trend, seasonal and calendar variation patterns. Furthermore, the proposed hybrid model is applied to forecasting real data, i.e. monthly data on the inflow and outflow of currency at Bank Indonesia. The results show that the hybrid model tends to provide more accurate forecasts than the individual forecasting models. Moreover, this result is in line with the findings of the M3 competition, i.e. that the hybrid model on average provides a more accurate forecast than the individual models.
Zhao, Xin; Han, Meng; Ding, Lili; Calin, Adrian Cantemir
2018-01-01
The accurate forecast of carbon dioxide emissions is critical for policy makers to take proper measures to establish a low carbon society. This paper discusses a hybrid of the mixed data sampling (MIDAS) regression model and BP (back propagation) neural network (MIDAS-BP model) to forecast carbon dioxide emissions. Such analysis uses mixed frequency data to study the effects of quarterly economic growth on annual carbon dioxide emissions. The forecasting ability of MIDAS-BP is remarkably better than MIDAS, ordinary least square (OLS), polynomial distributed lags (PDL), autoregressive distributed lags (ADL), and auto-regressive moving average (ARMA) models. The MIDAS-BP model is suitable for forecasting carbon dioxide emissions for both the short and longer term. This research is expected to influence the methodology for forecasting carbon dioxide emissions by improving the forecast accuracy. Empirical results show that economic growth has both negative and positive effects on carbon dioxide emissions that last 15 quarters. Carbon dioxide emissions are also affected by their own change within 3 years. Therefore, there is a need for policy makers to explore an alternative way to develop the economy, especially applying new energy policies to establish a low carbon society.
Skill of Global Raw and Postprocessed Ensemble Predictions of Rainfall over Northern Tropical Africa
NASA Astrophysics Data System (ADS)
Vogel, Peter; Knippertz, Peter; Fink, Andreas H.; Schlueter, Andreas; Gneiting, Tilmann
2018-04-01
Accumulated precipitation forecasts are of high socioeconomic importance for agriculturally dominated societies in northern tropical Africa. In this study, we analyze the performance of nine operational global ensemble prediction systems (EPSs) relative to climatology-based forecasts for 1- to 5-day accumulated precipitation, based on the monsoon seasons 2007-2014, for three regions within northern tropical Africa. To assess the full potential of raw ensemble forecasts across spatial scales, we apply state-of-the-art statistical postprocessing methods in the form of Bayesian Model Averaging (BMA) and Ensemble Model Output Statistics (EMOS), and verify against station and spatially aggregated, satellite-based gridded observations. Raw ensemble forecasts are uncalibrated, unreliable, and underperform relative to climatology, independently of region, accumulation time, monsoon season, and ensemble. Differences between raw ensemble and climatological forecasts are large, and partly stem from poor prediction for low precipitation amounts. BMA and EMOS postprocessed forecasts are calibrated, reliable, and strongly improve on the raw ensembles, but, somewhat disappointingly, typically do not outperform climatology. Most EPSs exhibit slight improvements over the period 2007-2014, but overall have little added value compared to climatology. We suspect that the parametrization of convection is a potential cause for the sobering lack of ensemble forecast skill in a region dominated by mesoscale convective systems.
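As a rough illustration of EMOS-style postprocessing, the sketch below fits a Gaussian predictive distribution, whose mean and variance are affine functions of the ensemble mean and variance, by minimizing the average closed-form Gaussian CRPS; precipitation applications normally use censored distributions, so this Gaussian version and the synthetic data are simplifications rather than the setup used in the study.

```python
import numpy as np
from scipy.stats import norm
from scipy.optimize import minimize

def crps_normal(mu, sigma, y):
    """Closed-form CRPS of a normal predictive distribution."""
    z = (y - mu) / sigma
    return sigma * (z * (2 * norm.cdf(z) - 1) + 2 * norm.pdf(z) - 1 / np.sqrt(np.pi))

def fit_emos(ens_mean, ens_var, obs):
    """Fit mu = a + b*ens_mean and sigma^2 = c + d*ens_var by minimum CRPS."""
    def avg_crps(p):
        a, b, c, d = p
        sigma = np.sqrt(np.maximum(c + d * ens_var, 1e-6))
        return crps_normal(a + b * ens_mean, sigma, obs).mean()
    return minimize(avg_crps, x0=[0.0, 1.0, 1.0, 1.0], method="Nelder-Mead").x

# Illustrative training data: ensemble mean/variance and verifying observations
rng = np.random.default_rng(3)
truth = rng.gamma(2.0, 3.0, 300)
ens_mean = truth + rng.normal(1.0, 2.0, 300)     # biased, under-dispersive ensemble
ens_var = rng.uniform(1.0, 4.0, 300)
a, b, c, d = fit_emos(ens_mean, ens_var, truth)
print(a, b, c, d)
```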
Short-term electric power demand forecasting based on economic-electricity transmission model
NASA Astrophysics Data System (ADS)
Li, Wenfeng; Bai, Hongkun; Liu, Wei; Liu, Yongmin; Wang, Yubin Mao; Wang, Jiangbo; He, Dandan
2018-04-01
Short-term electricity demand forecasting is basic work for ensuring the safe operation of the power system. In this paper, a practical economic-electricity transmission model (EETM) is built. Using the intelligent adaptive modeling capabilities of Prognoz Platform 7.2, an econometric model consisting of the added value of three industries and income levels is first built, followed by the electricity demand transmission model. By means of multiple regression, moving averages and seasonal decomposition, the problem of multicollinearity between variables is effectively overcome in the EETM. The validity of the EETM is demonstrated by comparison with actual values for Henan Province. Finally, the EETM is used to forecast electricity consumption for the first to fourth quarters of 2018.
Forecasting Electric Power Generation of Photovoltaic Power System for Energy Network
NASA Astrophysics Data System (ADS)
Kudo, Mitsuru; Takeuchi, Akira; Nozaki, Yousuke; Endo, Hisahito; Sumita, Jiro
Recently, there has been an increase in concern about the global environment. Interest is growing in developing an energy network by which new energy systems such as photovoltaics and fuel cells generate power locally, and electric power and heat are controlled with a communications network. We developed a power generation forecast method for photovoltaic power systems in an energy network. The method makes use of weather information and regression analysis. We carried out power output forecasting for the photovoltaic power system installed at Expo 2005 in Aichi, Japan. Comparing measurements with predicted values, the average prediction error per day was about 26% of the measured power.
Prediction on sunspot activity based on fuzzy information granulation and support vector machine
NASA Astrophysics Data System (ADS)
Peng, Lingling; Yan, Haisheng; Yang, Zhigang
2018-04-01
In order to analyze the range of sunspots, a combined prediction method for forecasting the fluctuation range of sunspots based on fuzzy information granulation (FIG) and support vector machine (SVM) was put forward. Firstly, the FIG is employed to granulate the sample data and extract valid information from each window, namely the minimum value, the general average value and the maximum value of each window. Secondly, a forecasting model is built for each granule series with SVM, and cross-validation is used to optimize the parameters. Finally, the fluctuation range of sunspots is forecasted with the optimized SVM model. A case study demonstrates that the model has high accuracy and can effectively predict the fluctuation range of sunspots.
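A simplified sketch of the granulate-then-forecast idea follows: each window of a synthetic sunspot series is reduced to its minimum, mean and maximum, and a separate support vector regression with a small grid search (standing in for the parameter optimisation) is fitted to each granule series; window length, lag order and SVR settings are illustrative only.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import GridSearchCV

rng = np.random.default_rng(4)
sunspots = np.abs(60 + 50 * np.sin(np.arange(600) * 2 * np.pi / 132) + rng.normal(0, 15, 600))

# "Granulate" the series: min / mean / max of non-overlapping windows
win = 12
n_win = len(sunspots) // win
windows = sunspots[:n_win * win].reshape(n_win, win)
granules = {"low": windows.min(axis=1), "avg": windows.mean(axis=1), "up": windows.max(axis=1)}

lags = 4
for name, series in granules.items():
    # Lagged features: predict the next granule value from the last `lags` values
    X = np.column_stack([series[i:len(series) - lags + i] for i in range(lags)])
    y = series[lags:]
    search = GridSearchCV(SVR(kernel="rbf"),
                          {"C": [1, 10, 100], "gamma": ["scale", 0.1]}, cv=3)
    search.fit(X[:-1], y[:-1])                 # hold out the last window
    print(name, search.predict(X[-1:]))        # forecast of the held-out window bound
```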
12 CFR 702.105 - Weighted-average life of investments.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 12 Banks and Banking 6 2011-01-01 2011-01-01 false Weighted-average life of investments. 702.105... PROMPT CORRECTIVE ACTION Net Worth Classification § 702.105 Weighted-average life of investments. Except as provided below (Table 3), the weighted-average life of an investment for purposes of §§ 702.106(c...
12 CFR 702.105 - Weighted-average life of investments.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 12 Banks and Banking 6 2010-01-01 2010-01-01 false Weighted-average life of investments. 702.105... PROMPT CORRECTIVE ACTION Net Worth Classification § 702.105 Weighted-average life of investments. Except as provided below (Table 3), the weighted-average life of an investment for purposes of §§ 702.106(c...
12 CFR 702.105 - Weighted-average life of investments.
Code of Federal Regulations, 2014 CFR
2014-01-01
... 12 Banks and Banking 7 2014-01-01 2014-01-01 false Weighted-average life of investments. 702.105... PROMPT CORRECTIVE ACTION Net Worth Classification § 702.105 Weighted-average life of investments. Except as provided below (Table 3), the weighted-average life of an investment for purposes of §§ 702.106(c...
12 CFR 702.105 - Weighted-average life of investments.
Code of Federal Regulations, 2013 CFR
2013-01-01
... 12 Banks and Banking 7 2013-01-01 2013-01-01 false Weighted-average life of investments. 702.105... PROMPT CORRECTIVE ACTION Net Worth Classification § 702.105 Weighted-average life of investments. Except as provided below (Table 3), the weighted-average life of an investment for purposes of §§ 702.106(c...
12 CFR 702.105 - Weighted-average life of investments.
Code of Federal Regulations, 2012 CFR
2012-01-01
... 12 Banks and Banking 7 2012-01-01 2012-01-01 false Weighted-average life of investments. 702.105... PROMPT CORRECTIVE ACTION Net Worth Classification § 702.105 Weighted-average life of investments. Except as provided below (Table 3), the weighted-average life of an investment for purposes of §§ 702.106(c...
FAA Aviation Forecasts Fiscal Years 1988-1999.
1988-02-01
in the 48 contiguous States, Hawaii, Puerto Rico, and the U.S. Virgin Islands. Excluded from the data base is activity in Alaska, other U.S... passenger miles increased by 17.2 percent. Traffic in Hawaii, Puerto Rico, and the U.S. Virgin Islands, however, had slower growth with passenger... trip length for Hawaii/Puerto Rico/Virgin Islands is expected to remain constant at 98.0 miles over the forecast period. The average industry load
Sea Ice Outlook for September 2017: June Report - NASA Global Modeling and Assimilation Office
NASA Technical Reports Server (NTRS)
Cullather, Richard I.; Borovikov, Anna Y.; Hackert, Eric C.; Kovach, Robin M.; Marshak, Jelena; Molod, Andrea M.; Pawson, Steven; Suarez, Max J.; Vikhliaev, Yury V.; Zhao, Bin
2017-01-01
The GMAO seasonal forecast is produced from coupled model integrations that are initialized every five days, with seven additional ensemble members generated by coupled model breeding and initialized on the date closest to the beginning of the month. The main components of the AOGCM are the GEOS-5 atmospheric model, the MOM4 ocean model, and CICE sea ice model. Forecast fields were re-gridded to the passive microwave grid for averaging.
Sea Ice Outlook for September 2017 July Report - NASA Global Modeling and Assimilation Office
NASA Technical Reports Server (NTRS)
Cullather, Richard I.; Borovikov, Anna Y.; Hackert, Eric C.; Kovach, Robin M.; Marshak, Jelena; Molod, Andrea M.; Pawson, Steven; Suarez, Max J.; Vikhliaev, Yury V.; Zhao, Bin
2017-01-01
The GMAO seasonal forecast is produced from coupled model integrations that are initialized every five days, with seven additional ensemble members generated by coupled model breeding and initialized on the date closest to the beginning of the month. The main components of the AOGCM are the GEOS-5 atmospheric model, the MOM4 ocean model, and CICE sea ice model. Forecast fields were re-gridded to the passive microwave grid for averaging.
Mumbare, Sachin S; Gosavi, Shriram; Almale, Balaji; Patil, Aruna; Dhakane, Supriya; Kadu, Aniruddha
2014-10-01
India's National Family Welfare Programme is dominated by sterilization, particularly tubectomy. Sterilization, being a terminal method of contraception, decides the final number of children for that couple. Many studies have shown a declining trend in the average number of living children at the time of sterilization over a short period of time. This study was therefore planned to perform a time series analysis of the average number of children at the time of terminal contraception, to forecast it up to 2020, and to compare the rates of change in various subgroups of the population. Data were preprocessed in MS Access 2007 by creating and running SQL queries. After testing the stationarity of every series with the augmented Dickey-Fuller test, time series analysis and forecasting were done using the best-fit Box-Jenkins nonseasonal ARIMA (p, d, q) model. To compare the rates of change of the average number of children at sterilization in various subgroups, analysis of covariance (ANCOVA) was applied. Forecasting showed that the replacement level of 2.1 total fertility rate (TFR) will be achieved in 2018 for couples opting for sterilization. The same will be achieved in 2020, 2016, 2018, and 2019 for rural areas, urban areas, Hindu couples, and Buddhist couples, respectively. It will not be achieved by 2020 for Muslim couples. Every stratum of the population showed a declining trend. The decline for male children and in rural areas was significantly faster than the decline for female children and in urban areas, respectively. The decline was not significantly different among Hindu, Muslim, and Buddhist couples.
NASA Astrophysics Data System (ADS)
Vislocky, Robert L.; Fritsch, J. Michael
1997-12-01
A prototype advanced model output statistics (MOS) forecast system that was entered in the 1996-97 National Collegiate Weather Forecast Contest is described and its performance compared to that of widely available objective guidance and to contest participants. The prototype system uses an optimal blend of aviation (AVN) and nested grid model (NGM) MOS forecasts, explicit output from the NGM and Eta guidance, and the latest surface weather observations from the forecast site. The forecasts are totally objective and can be generated quickly on a personal computer. Other "objective" forms of guidance tracked in the contest are 1) the consensus forecast (i.e., the average of the forecasts from all of the human participants), 2) the combination of NGM raw output (for precipitation forecasts) and NGM MOS guidance (for temperature forecasts), and 3) the combination of Eta Model raw output (for precipitation forecasts) and AVN MOS guidance (for temperature forecasts).Results show that the advanced MOS system finished in 20th place out of 737 original entrants, or better than approximately 97% of the human forecasters who entered the contest. Moreover, the advanced MOS system was slightly better than consensus (23d place). The fact that an objective forecast system finished ahead of consensus is a significant accomplishment since consensus is traditionally a very formidable "opponent" in forecast competitions. Equally significant is that the advanced MOS system was superior to the traditional guidance products available from the National Centers for Environmental Prediction (NCEP). Specifically, the combination of NGM raw output and NGM MOS guidance finished in 175th place, and the combination of Eta Model raw output and AVN MOS guidance finished in 266th place. The latter result is most intriguing since the proposed elimination of all NGM products would likely result in a serious degradation of objective products disseminated by NCEP, unless they are replaced with equal or better substitutes. On the other hand, the positive performance of the prototype advanced MOS system shows that it is possible to create a single objective product that is not only superior to currently available objective guidance products, but is also on par with some of the better human forecasters.
Daily air quality index forecasting with hybrid models: A case in China.
Zhu, Suling; Lian, Xiuyuan; Liu, Haixia; Hu, Jianming; Wang, Yuanyuan; Che, Jinxing
2017-12-01
Air quality is closely related to quality of life. Air pollution forecasting plays a vital role in air pollution warnings and controlling. However, it is difficult to attain accurate forecasts for air pollution indexes because the original data are non-stationary and chaotic. The existing forecasting methods, such as multiple linear models, autoregressive integrated moving average (ARIMA) and support vector regression (SVR), cannot fully capture the information from series of pollution indexes. Therefore, new effective techniques need to be proposed to forecast air pollution indexes. The main purpose of this research is to develop effective forecasting models for regional air quality indexes (AQI) to address the problems above and enhance forecasting accuracy. Therefore, two hybrid models (EMD-SVR-Hybrid and EMD-IMFs-Hybrid) are proposed to forecast AQI data. The main steps of the EMD-SVR-Hybrid model are as follows: the data preprocessing technique EMD (empirical mode decomposition) is utilized to sift the original AQI data to obtain one group of smoother IMFs (intrinsic mode functions) and a noise series, where the IMFs contain the important information (level, fluctuations and others) from the original AQI series. LS-SVR is applied to forecast the sum of the IMFs, and then, S-ARIMA (seasonal ARIMA) is employed to forecast the residual sequence of LS-SVR. In addition, EMD-IMFs-Hybrid first separately forecasts the IMFs via statistical models and sums the forecasting results of the IMFs as EMD-IMFs. Then, S-ARIMA is employed to forecast the residuals of EMD-IMFs. To certify the proposed hybrid model, AQI data from June 2014 to August 2015 collected from Xingtai in China are utilized as a test case to investigate the empirical research. In terms of some of the forecasting assessment measures, the AQI forecasting results of Xingtai show that the two proposed hybrid models are superior to ARIMA, SVR, GRNN, EMD-GRNN, Wavelet-GRNN and Wavelet-SVR. Therefore, the proposed hybrid models can be used as effective and simple tools for air pollution forecasting and warning as well as for management. Copyright © 2017 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Chen, L. C.; Mo, K. C.; Zhang, Q.; Huang, J.
2014-12-01
Drought prediction from monthly to seasonal time scales is of critical importance to disaster mitigation, agricultural planning, and multi-purpose reservoir management. Starting in December 2012, the NOAA Climate Prediction Center (CPC) has been providing operational Standardized Precipitation Index (SPI) Outlooks using the North American Multi-Model Ensemble (NMME) forecasts, to support CPC's monthly drought outlooks and briefing activities. The current NMME system consists of six model forecasts from U.S. and Canada modeling centers, including the CFSv2, CM2.1, GEOS-5, CCSM3.0, CanCM3, and CanCM4 models. In this study, we conduct an assessment of the predictive skill of meteorological drought using real-time NMME forecasts for the period from May 2012 to May 2014. The ensemble SPI forecasts are the equally weighted mean of the six model forecasts. Two performance measures, the anomaly correlation coefficient and root-mean-square errors against the observations, are used to evaluate forecast skill. Similar to the assessment based on NMME retrospective forecasts, predictive skill of monthly-mean precipitation (P) forecasts is generally low after the second month and errors vary among models. Although P forecast skill is not large, SPI predictive skill is high and the differences among models are small. The skill mainly comes from the P observations appended to the model forecasts. This factor also contributes to the similarity of SPI prediction among the six models. Still, NMME SPI ensemble forecasts have higher skill than those based on individual models or persistence, and the 6-month SPI forecasts are skillful out to four months. The three major drought events that occurred during the 2012-2014 period (the 2012 Central Great Plains drought, the 2013 Upper Midwest flash drought, and the 2013-2014 California drought) are used as examples to illustrate the system's strengths and limitations. For precipitation-driven drought events, such as the 2012 Central Great Plains drought, NMME SPI forecasts perform well in predicting drought severity and spatial patterns. For fast-developing drought events, such as the 2013 Upper Midwest flash drought, the system failed to capture the onset of the drought.
Shen, Xiao-jun; Sun, Jing-sheng; Li, Ming-si; Zhang, Ji-yang; Wang, Jing-lei; Li, Dong-wei
2015-02-01
It is important to improve real-time irrigation forecasting precision by predicting the real-time water consumption of plastic-film-mulched cotton under drip irrigation, based on meteorological data and cotton growth status. The model parameters for calculating ET0 based on the Hargreaves formula were determined using historical meteorological data from 1953 to 2008 in the Shihezi reclamation area. According to the field experimental data of the growing seasons in 2009-2010, a model for computing the crop coefficient Kc was established based on accumulated temperature. On the basis of the reference evapotranspiration (ET0) and Kc, a real-time irrigation forecast model was finally constructed, and it was verified by the field experimental data in 2011. The results showed that the forecast model had high forecasting precision, and the average absolute values of the relative error between the predicted and measured values were about 3.7%, 2.4% and 1.6% during the seedling, squaring and blossom-boll forming stages, respectively. The forecast model could be used to modify the predicted values in time according to the real-time meteorological data and to guide water management in local film-mulched cotton fields under drip irrigation.
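A small sketch of the forecasting chain described above is given below, combining the textbook Hargreaves formula for reference evapotranspiration with a stage-dependent crop coefficient; the Kc values and inputs are placeholders rather than the calibrated Shihezi parameters.

```python
import numpy as np

def hargreaves_et0(tmax, tmin, ra_mm):
    """Reference evapotranspiration (mm/day) from the Hargreaves formula.
    ra_mm is extraterrestrial radiation expressed as mm/day of evaporation."""
    tmean = (tmax + tmin) / 2.0
    return 0.0023 * ra_mm * (tmean + 17.8) * np.sqrt(tmax - tmin)

# Placeholder crop coefficients by growth stage (not the calibrated values)
KC_BY_STAGE = {"seedling": 0.45, "squaring": 0.80, "blossom-boll": 1.10}

def crop_water_requirement(tmax, tmin, ra_mm, stage):
    """Daily crop water requirement ETc = Kc * ET0 for drip-irrigated cotton."""
    return KC_BY_STAGE[stage] * hargreaves_et0(tmax, tmin, ra_mm)

# Example: a hot mid-season day with Ra of roughly 17 mm/day equivalent
print(crop_water_requirement(tmax=34.0, tmin=20.0, ra_mm=17.0, stage="blossom-boll"))
```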
Sensor network based solar forecasting using a local vector autoregressive ridge framework
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xu, J.; Yoo, S.; Heiser, J.
2016-04-04
The significant improvements and falling costs of photovoltaic (PV) technology make solar energy a promising resource, yet the cloud induced variability of surface solar irradiance inhibits its effective use in grid-tied PV generation. Short-term irradiance forecasting, especially on the minute scale, is critically important for grid system stability and auxiliary power source management. Compared to the trending sky imaging devices, irradiance sensors are inexpensive and easy to deploy but related forecasting methods have not been well researched. The prominent challenge of applying classic time series models on a network of irradiance sensors is to address their varying spatio-temporal correlations due to local changes in cloud conditions. We propose a local vector autoregressive framework with ridge regularization to forecast irradiance without explicitly determining the wind field or cloud movement. By using local training data, our learned forecast model is adaptive to local cloud conditions and by using regularization, we overcome the risk of overfitting from the limited training data. Our systematic experimental results showed an average of 19.7% RMSE and 20.2% MAE improvement over the benchmark Persistent Model for 1-5 minute forecasts on a comprehensive 25-day dataset.
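The local vector-autoregressive idea with ridge regularization can be sketched as below: lagged readings from all sensors within a recent training window are used to predict every sensor one step ahead; the window length, lag order and penalty are illustrative, not the settings used in the paper.

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(5)
n_sensors, n_steps = 6, 400
# Synthetic minute-scale irradiance readings (W/m^2) from a small sensor network
irradiance = np.cumsum(rng.normal(0, 5, (n_steps, n_sensors)), axis=0) + 500

def local_var_ridge_forecast(data, lags=3, horizon=1, window=120, alpha=10.0):
    """Forecast every sensor `horizon` steps ahead with a ridge-regularised
    VAR fitted only on the most recent `window` observations."""
    recent = data[-window:]
    rows, targets = [], []
    for t in range(lags, len(recent) - horizon + 1):
        rows.append(recent[t - lags:t].ravel())      # all sensors, all lags
        targets.append(recent[t + horizon - 1])      # all sensors at the target step
    model = Ridge(alpha=alpha).fit(np.array(rows), np.array(targets))
    latest = recent[-lags:].ravel().reshape(1, -1)
    return model.predict(latest)[0]

print(local_var_ridge_forecast(irradiance))   # one-step-ahead forecast per sensor
```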
Performance of stochastic approaches for forecasting river water quality.
Ahmad, S; Khan, I H; Parida, B P
2001-12-01
This study analysed water quality data collected from the river Ganges in India from 1981 to 1990 for forecasting using stochastic models. Initially the box and whisker plots and Kendall's tau test were used to identify the trends during the study period. For detecting possible interventions in the data, time series plots and cusum charts were used. The three approaches of stochastic modelling which account for the effect of seasonality in different ways, i.e. the multiplicative autoregressive integrated moving average (ARIMA) model, the deseasonalised model and the Thomas-Fiering model, were used to model the observed pattern in water quality. Multiplicative ARIMA models having both nonseasonal and seasonal components were, in general, identified as appropriate. In the deseasonalised modelling approach, lower order ARIMA models were found appropriate for the stochastic component. A set of Thomas-Fiering models was formed for each month for all water quality parameters. These models were then used to forecast future values. The error estimates of forecasts from the three approaches were compared to identify the most suitable approach for a reliable forecast. The deseasonalised modelling approach was recommended for forecasting of water quality parameters of a river.
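For reference, a sketch of the Thomas-Fiering approach is shown below: one recursion per calendar month links the current month's value to the previous month's, with a noise term scaled by the residual standard deviation; the monthly water-quality series here is synthetic, not the Ganges data.

```python
import numpy as np

rng = np.random.default_rng(6)
years, months = 10, 12
# Synthetic monthly water-quality series (e.g. BOD) with a seasonal cycle
data = 6 + 2 * np.sin(np.arange(months) * 2 * np.pi / 12) + rng.normal(0, 0.8, (years, months))

mean = data.mean(axis=0)
std = data.std(axis=0, ddof=1)
# Lag-one correlation between month j and month j+1 (December wraps to January)
r = np.array([np.corrcoef(data[:, j], np.roll(data, -1, axis=1)[:, j])[0, 1]
              for j in range(months)])

def thomas_fiering_step(x_prev, j, rng):
    """Generate month j+1 (mod 12) from month j with the Thomas-Fiering recursion."""
    jn = (j + 1) % months
    b = r[j] * std[jn] / std[j]
    noise = rng.standard_normal() * std[jn] * np.sqrt(max(1 - r[j] ** 2, 0.0))
    return mean[jn] + b * (x_prev - mean[j]) + noise

# Forecast twelve months ahead starting from the last observed December value
x, j = data[-1, -1], months - 1
for _ in range(12):
    x = thomas_fiering_step(x, j, rng)
    j = (j + 1) % months
    print(round(x, 2))
```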
Time series modelling and forecasting of emergency department overcrowding.
Kadri, Farid; Harrou, Fouzi; Chaabane, Sondès; Tahon, Christian
2014-09-01
Efficient management of patient flow (demand) in emergency departments (EDs) has become an urgent issue for many hospital administrations. Today, more and more attention is being paid to hospital management systems to optimally manage patient flow and to improve management strategies, efficiency and safety in such establishments. To this end, EDs require significant human and material resources, but unfortunately these are limited. Within such a framework, the ability to accurately forecast demand in emergency departments has considerable implications for hospitals to improve resource allocation and strategic planning. The aim of this study was to develop models for forecasting daily attendances at the hospital emergency department in Lille, France. The study demonstrates how time-series analysis can be used to forecast, at least in the short term, demand for emergency services in a hospital emergency department. The forecasts were based on daily patient attendances at the paediatric emergency department in Lille regional hospital centre, France, from January 2012 to December 2012. An autoregressive integrated moving average (ARIMA) method was applied separately to each of the two GEMSA categories and total patient attendances. Time-series analysis was shown to provide a useful, readily available tool for forecasting emergency department demand.
Development of S-ARIMA Model for Forecasting Demand in a Beverage Supply Chain
NASA Astrophysics Data System (ADS)
Mircetic, Dejan; Nikolicic, Svetlana; Maslaric, Marinko; Ralevic, Nebojsa; Debelic, Borna
2016-11-01
Demand forecasting is one of the key activities in planning the freight flows in supply chains, and accordingly it is essential for planning and scheduling logistic activities within the observed supply chain. Accurate demand forecasting models directly influence the decrease of logistics costs, since they provide an assessment of customer demand. Customer demand is a key component for planning all logistic processes in the supply chain, and therefore determining levels of customer demand is of great interest for supply chain managers. In this paper we deal with exactly this kind of problem, and we develop a seasonal Autoregressive Integrated Moving Average (SARIMA) model for forecasting demand patterns of a major product of an observed beverage company. The model is easy to understand, flexible to use and appropriate for assisting the expert in the decision-making process about consumer demand in particular periods.
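A minimal sketch of fitting a seasonal ARIMA to monthly demand data with statsmodels and producing a 12-month-ahead forecast is shown below; the (1,1,1)(1,1,1,12) order and the synthetic series are placeholders rather than the model identified for the beverage product.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.statespace.sarimax import SARIMAX

# Hypothetical monthly demand for one product (units), with trend and seasonality
rng = np.random.default_rng(7)
n = 72
demand = (1000 + 5 * np.arange(n)
          + 150 * np.sin(np.arange(n) * 2 * np.pi / 12)
          + rng.normal(0, 40, n))
series = pd.Series(demand, index=pd.date_range("2010-01-01", periods=n, freq="MS"))

# Seasonal ARIMA(1,1,1)x(1,1,1,12); order chosen for illustration only
model = SARIMAX(series, order=(1, 1, 1), seasonal_order=(1, 1, 1, 12)).fit(disp=False)
forecast = model.get_forecast(steps=12)
print(forecast.predicted_mean.round(0))
print(forecast.conf_int().round(0))
```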
Wavelet regression model in forecasting crude oil price
NASA Astrophysics Data System (ADS)
Hamid, Mohd Helmie; Shabri, Ani
2017-05-01
This study presents the performance of the wavelet multiple linear regression (WMLR) technique in daily crude oil forecasting. The WMLR model was developed by integrating the discrete wavelet transform (DWT) and the multiple linear regression (MLR) model. The original time series was decomposed into sub-time series with different scales by wavelet theory. Correlation analysis was conducted to assist in the selection of optimal decomposed components as inputs for the WMLR model. The daily WTI crude oil price series has been used in this study to test the prediction capability of the proposed model. The forecasting performance of the WMLR model was also compared with regular multiple linear regression (MLR), the Autoregressive Integrated Moving Average (ARIMA) and Generalized Autoregressive Conditional Heteroscedasticity (GARCH) models using root mean square errors (RMSE) and mean absolute errors (MAE). Based on the experimental results, it appears that the WMLR model performs better than the other forecasting techniques tested in this study.
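A rough sketch of the WMLR idea follows, assuming the PyWavelets package is available: the price series is decomposed with a discrete wavelet transform, each component is reconstructed at full length, and lagged component values serve as regressors in a multiple linear regression; the correlation-based component selection is omitted for brevity and the data are synthetic.

```python
import numpy as np
import pywt                                   # PyWavelets, assumed installed
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(8)
n = 512
price = 60 + np.cumsum(rng.normal(0, 0.8, n))          # synthetic daily crude-oil price

# Decompose into sub-series and reconstruct each component at full length
coeffs = pywt.wavedec(price, "db4", level=3)
components = []
for k in range(len(coeffs)):
    keep = [c if i == k else np.zeros_like(c) for i, c in enumerate(coeffs)]
    components.append(pywt.waverec(keep, "db4")[:n])

# Regress tomorrow's price on today's value of each wavelet component
X = np.column_stack(components)[:-1]
y = price[1:]
split = int(0.8 * (n - 1))
wmlr = LinearRegression().fit(X[:split], y[:split])
pred = wmlr.predict(X[split:])
rmse = np.sqrt(np.mean((pred - y[split:]) ** 2))
print(round(rmse, 3))
```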
Joint Seasonal ARMA Approach for Modeling of Load Forecast Errors in Planning Studies
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hafen, Ryan P.; Samaan, Nader A.; Makarov, Yuri V.
2014-04-14
To make informed and robust decisions in the probabilistic power system operation and planning process, it is critical to conduct multiple simulations of the generated combinations of wind and load parameters and their forecast errors to handle the variability and uncertainty of these time series. In order for the simulation results to be trustworthy, the simulated series must preserve the salient statistical characteristics of the real series. In this paper, we analyze day-ahead load forecast error data from multiple balancing authority locations and characterize statistical properties such as mean, standard deviation, autocorrelation, correlation between series, time-of-day bias, and time-of-day autocorrelation. We then construct and validate a seasonal autoregressive moving average (ARMA) model to model these characteristics, and use the model to jointly simulate day-ahead load forecast error series for all BAs.
RBF neural network prediction on weak electrical signals in Aloe vera var. chinensis
NASA Astrophysics Data System (ADS)
Wang, Lanzhou; Zhao, Jiayin; Wang, Miao
2008-10-01
A Gaussian radial basis function (RBF) neural network forecast for electrical signals in Aloe vera var. chinensis, using the wavelet soft-threshold denoised signal as the time series and a delayed input window of 50, is set up to forecast backward. The signal in Aloe vera var. chinensis had a maximum amplitude of 310.45 μV, a minimum of -75.15 μV, an average value of -2.69 μV and a frequency below 1.5 Hz. The electrical signal in Aloe vera var. chinensis is thus a weak, unstable and low-frequency signal. The results showed that it is feasible to forecast the timing of plant electrical signals with the RBF network. The forecast data can be used as reference inputs for an intelligent auto-control system, based on the adaptive characteristics of plants, to achieve energy savings in agricultural production in the plastic lookum or greenhouse.
NASA Astrophysics Data System (ADS)
Li, D.; Fang, N. Z.
2017-12-01
The Dallas-Fort Worth Metroplex (DFW) has a population of over 7 million depending on many water supply reservoirs. Reservoir inflow plays a vital role in the water supply decision-making process and long-term strategic planning for the region. This paper demonstrates a method of utilizing deep learning algorithms and a multi-general circulation model (GCM) platform to forecast reservoir inflow for three reservoirs within the DFW: Eagle Mountain Lake, Lake Benbrook and Lake Arlington. Ensemble empirical mode decomposition was first employed to extract the features, which were then represented by deep belief networks (DBNs). The first 75 years of the historical data (1940-2015) were used to train the model, while the last 2 years of the data (2016-2017) were used for model validation. The weights of each DBN gained from the training process were then applied to establish a neural network (NN) able to forecast reservoir inflow. Feature predictors used for the forecasting model were generated from weather forecast results of the downscaled multi-GCM platform for the North Texas region. By comparing root mean square error (RMSE) and mean bias error (MBE) against the observed data, the authors found that deep learning with the downscaled multi-GCM platform is an effective approach for reservoir inflow forecasting.
Peak Wind Tool for General Forecasting
NASA Technical Reports Server (NTRS)
Barrett, Joe H., III
2010-01-01
The expected peak wind speed of the day is an important forecast element in the 45th Weather Squadron's (45 WS) daily 24-Hour and Weekly Planning Forecasts. The forecasts are used for ground and space launch operations at the Kennedy Space Center (KSC) and Cape Canaveral Air Force Station (CCAFS). The 45 WS also issues wind advisories for KSC/CCAFS when they expect wind gusts to meet or exceed 25 kt, 35 kt and 50 kt thresholds at any level from the surface to 300 ft. The 45 WS forecasters have indicated peak wind speeds are challenging to forecast, particularly in the cool season months of October - April. In Phase I of this task, the Applied Meteorology Unit (AMU) developed a tool to help the 45 WS forecast non-convective winds at KSC/CCAFS for the 24-hour period of 0800 to 0800 local time. The tool was delivered as a Microsoft Excel graphical user interface (GUI). The GUI displayed the forecast of peak wind speed, 5-minute average wind speed at the time of the peak wind, timing of the peak wind and probability the peak speed would meet or exceed 25 kt, 35 kt and 50 kt. For the current task (Phase II), the 45 WS requested additional observations be used for the creation of the forecast equations by expanding the period of record (POR). Additional parameters were evaluated as predictors, including wind speeds between 500 ft and 3000 ft, static stability classification, Bulk Richardson Number, mixing depth, vertical wind shear, temperature inversion strength and depth and wind direction. Using a verification data set, the AMU compared the performance of the Phase I and II prediction methods. Just as in Phase I, the tool was delivered as a Microsoft Excel GUI. The 45 WS requested the tool also be available in the Meteorological Interactive Data Display System (MIDDS). The AMU first expanded the POR by two years by adding tower observations, surface observations and CCAFS (XMR) soundings for the cool season months of March 2007 to April 2009. The POR was expanded again by six years, from October 1996 to April 2002, by interpolating 1000-ft sounding data to 100-ft increments. The Phase II developmental data set included observations for the cool season months of October 1996 to February 2007. The AMU calculated 68 candidate predictors from the XMR soundings, including 19 stability parameters, 48 wind speed parameters and one wind shear parameter. Each day in the data set was stratified by synoptic weather pattern, low-level wind direction, precipitation and Richardson Number, for a total of 60 stratification methods. Linear regression equations, using the 68 predictors and 60 stratification methods, were created for the tool's three forecast parameters: the highest peak wind speed of the day (PWSD), 5-minute average speed at the same time (AWSD), and timing of the PWSD. For PWSD and AWSD, 30 Phase II methods were selected for evaluation in the verification data set. For timing of the PWSD, 12 Phase I methods were selected for evaluation. The verification data set contained observations for the cool season months of March 2007 to April 2009. The data set was used to compare the Phase I and II forecast methods to climatology, model forecast winds and wind advisories issued by the 45 WS. The model forecast winds were derived from the 0000 and 1200 UTC runs of the 12-km North American Mesoscale (MesoNAM) model. The forecast methods that performed the best in the verification data set were selected for the Phase II version of the tool.
For PWSD and AWSD, linear regression equations based on MesoNAM forecasts performed significantly better than the Phase I and II methods. For timing of the PWSD, none of the methods performed significantly better than climatology. The AMU then developed the Microsoft Excel and MIDDS GUIs. The GUIs display the forecasts for PWSD, AWSD and the probability the PWSD will meet or exceed 25 kt, 35 kt and 50 kt. Since none of the prediction methods for timing of the PWSD performed significantly better than climatology, the tool no longer displays this predictand. The Excel and MIDDS GUIs display forecasts for Day-1 to Day-3 and Day-1 to Day-5, respectively. The Excel GUI uses MesoNAM forecasts as input, while the MIDDS GUI uses input from the MesoNAM and Global Forecast System model. Based on feedback from the 45 WS, the AMU added the daily average wind speed from 30 ft to 60 ft to the tool, which is one of the parameters in the 24-Hour and Weekly Planning Forecasts issued by the 45 WS. In addition, the AMU expanded the MIDDS GUI to include forecasts out to Day-7.
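As a simplified illustration of this kind of regression guidance, the sketch below regresses peak wind speed on a few MesoNAM-style predictors and derives threshold exceedance probabilities from a normal error model around the regression prediction; the predictors, data and probability model are illustrative assumptions, not the AMU's operational equations.

```python
import numpy as np
from scipy.stats import norm
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(9)
n = 400
# Illustrative predictors: model wind speed near 1000 ft, mixing depth, inversion strength
wind_1000ft = rng.uniform(5, 45, n)
mix_depth = rng.uniform(200, 2500, n)
inversion = rng.uniform(0, 8, n)
peak_wind = 0.8 * wind_1000ft + 0.004 * mix_depth - 0.5 * inversion + rng.normal(0, 4, n)

X = np.column_stack([wind_1000ft, mix_depth, inversion])
reg = LinearRegression().fit(X, peak_wind)
resid_std = np.std(peak_wind - reg.predict(X), ddof=X.shape[1] + 1)

# Forecast for one day and the probability of meeting/exceeding advisory thresholds
x_new = np.array([[30.0, 1200.0, 2.0]])
pwsd = reg.predict(x_new)[0]
for thr in (25, 35, 50):
    print(thr, "kt:", round(norm.sf(thr, loc=pwsd, scale=resid_std), 3))
```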
26 CFR 1.989(b)-1 - Definition of weighted average exchange rate.
Code of Federal Regulations, 2010 CFR
2010-04-01
... 26 Internal Revenue 10 2010-04-01 2010-04-01 false Definition of weighted average exchange rate. 1... (CONTINUED) INCOME TAX (CONTINUED) INCOME TAXES Export Trade Corporations § 1.989(b)-1 Definition of weighted average exchange rate. For purposes of section 989(b)(3) and (4), the term “weighted average exchange rate...
Gan, Ryan W.; Ford, Bonne; Lassman, William; Pfister, Gabriele; Vaidyanathan, Ambarish; Fischer, Emily; Volckens, John; Pierce, Jeffrey R.; Magzamen, Sheryl
2017-01-01
Climate forecasts predict an increase in frequency and intensity of wildfires. Associations between health outcomes and population exposure to smoke from Washington 2012 wildfires were compared using surface monitors, chemical-weather models, and a novel method blending three exposure information sources. The association between smoke particulate matter ≤2.5 μm in diameter (PM2.5) and cardiopulmonary hospital admissions occurring in Washington from 1 July to 31 October 2012 was evaluated using a time-stratified case-crossover design. Hospital admissions aggregated by ZIP code were linked with population-weighted daily average concentrations of smoke PM2.5 estimated using three distinct methods: a simulation with the Weather Research and Forecasting with Chemistry (WRF-Chem) model, a kriged interpolation of PM2.5 measurements from surface monitors, and a geographically weighted ridge regression (GWR) that blended inputs from WRF-Chem, satellite observations of aerosol optical depth, and kriged PM2.5. A 10 μg/m3 increase in GWR smoke PM2.5 was associated with an 8% increased risk in asthma-related hospital admissions (odds ratio (OR): 1.076, 95% confidence interval (CI): 1.019–1.136); other smoke estimation methods yielded similar results. However, point estimates for chronic obstructive pulmonary disease (COPD) differed by smoke PM2.5 exposure method: a 10 μg/m3 increase using GWR was significantly associated with increased risk of COPD (OR: 1.084, 95%CI: 1.026–1.145) and not significant using WRF-Chem (OR: 0.986, 95%CI: 0.931–1.045). The magnitude (OR) and uncertainty (95%CI) of associations between smoke PM2.5 and hospital admissions were dependent on estimation method used and outcome evaluated. Choice of smoke exposure estimation method used can impact the overall conclusion of the study. PMID:28868515
Estimating the budget impact of orphan drugs in Sweden and France 2013–2020
2014-01-01
Background The growth in expenditure on orphan medicinal products (OMP) across Europe has been identified as a concern. Estimates of future expenditure in Europe have suggested that OMPs could account for a significant proportion of total pharmaceutical expenditure in some countries, but few of these forecasts have been well validated. This analysis aims to establish a robust forecast of the future budget impact of OMPs on the healthcare systems in Sweden and France. Methods A dynamic forecasting model was created to estimate the budget impact of OMPs in Sweden and France between 2013 and 2020. The model used historical data on OMP designation and approval rates to predict the number of new OMPs coming to the market. Average OMP sales were estimated for each year post-launch by regression analysis of historical sales data. Total forecast sales were compared with expected sales of all pharmaceuticals in each country to quantify the relative budget impact. Results The model predicts that by 2020, 152 OMPs will have marketing authorization in Europe. The base case OMP budget impacts are forecast to grow from 2.7% in Sweden and 3.2% in France of total drug expenditure in 2013 to 4.1% in Sweden and 4.9% in France by 2020. The principal driver of expenditure growth is the number of new OMPs obtaining OMP designation. This is tempered by the slowing success rate for new approvals and the loss of intellectual property protection on existing orphan medicines. Given the forward-looking nature of the analysis, uncertainty exists around model parameters and sensitivity analysis found peak year budget impact varying between 2% and 11%. Conclusion The budget impact of OMPs in Sweden and France is likely to remain sustainable over time and a relatively small proportion of total pharmaceutical expenditure. This forecast could be affected by changes in the success rate for OMP approvals, average cost of OMPs, and the type of companies developing OMPs. PMID:24524281
Estimating the budget impact of orphan drugs in Sweden and France 2013-2020.
Hutchings, Adam; Schey, Carina; Dutton, Richard; Achana, Felix; Antonov, Karolina
2014-02-13
The growth in expenditure on orphan medicinal products (OMP) across Europe has been identified as a concern. Estimates of future expenditure in Europe have suggested that OMPs could account for a significant proportion of total pharmaceutical expenditure in some countries, but few of these forecasts have been well validated. This analysis aims to establish a robust forecast of the future budget impact of OMPs on the healthcare systems in Sweden and France. A dynamic forecasting model was created to estimate the budget impact of OMPs in Sweden and France between 2013 and 2020. The model used historical data on OMP designation and approval rates to predict the number of new OMPs coming to the market. Average OMP sales were estimated for each year post-launch by regression analysis of historical sales data. Total forecast sales were compared with expected sales of all pharmaceuticals in each country to quantify the relative budget impact. The model predicts that by 2020, 152 OMPs will have marketing authorization in Europe. The base case OMP budget impacts are forecast to grow from 2.7% in Sweden and 3.2% in France of total drug expenditure in 2013 to 4.1% in Sweden and 4.9% in France by 2020. The principal driver of expenditure growth is the number of new OMPs obtaining OMP designation. This is tempered by the slowing success rate for new approvals and the loss of intellectual property protection on existing orphan medicines. Given the forward-looking nature of the analysis, uncertainty exists around model parameters and sensitivity analysis found peak year budget impact varying between 2% and 11%. The budget impact of OMPs in Sweden and France is likely to remain sustainable over time and a relatively small proportion of total pharmaceutical expenditure. This forecast could be affected by changes in the success rate for OMP approvals, average cost of OMPs, and the type of companies developing OMPs.
Forecasting influenza in Hong Kong with Google search queries and statistical model fusion
Ramirez Ramirez, L. Leticia; Nezafati, Kusha; Zhang, Qingpeng; Tsui, Kwok-Leung
2017-01-01
Background The objective of this study is to investigate the predictive utility of online social media and web search queries, particularly Google search data, to forecast new cases of influenza-like-illness (ILI) in general outpatient clinics (GOPC) in Hong Kong. To mitigate the impact of sensitivity to self-excitement (i.e., fickle media interest) and other artifacts of online social media data, in our approach we fuse multiple offline and online data sources. Methods Four individual models: generalized linear model (GLM), least absolute shrinkage and selection operator (LASSO), autoregressive integrated moving average (ARIMA), and deep learning (DL) with Feedforward Neural Networks (FNN) are employed to forecast ILI-GOPC both one week and two weeks in advance. The covariates include Google search queries, meteorological data, and previously recorded offline ILI. To our knowledge, this is the first study that introduces deep learning methodology into surveillance of infectious diseases and investigates its predictive utility. Furthermore, to exploit the strength of each individual forecasting model, we use statistical model fusion, using Bayesian model averaging (BMA), which allows a systematic integration of multiple forecast scenarios. For each model, an adaptive approach is used to capture the recent relationship between ILI and covariates. Results DL with FNN appears to deliver the most competitive predictive performance among the four considered individual models. Combining all four models in a comprehensive BMA framework allows to further improve such predictive evaluation metrics as root mean squared error (RMSE) and mean absolute predictive error (MAPE). Nevertheless, DL with FNN remains the preferred method for predicting locations of influenza peaks. Conclusions The proposed approach can be viewed as a feasible alternative to forecast ILI in Hong Kong or other countries where ILI has no constant seasonal trend and influenza data resources are limited. The proposed methodology is easily tractable and computationally efficient. PMID:28464015
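A heavily simplified sketch of weight-based forecast combination is given below: the weights are proportional to each model's Gaussian likelihood over a recent validation window, which is only a crude stand-in for full Bayesian model averaging fitted by EM; all data and next-week forecasts here are synthetic.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(10)
weeks = 30
truth = 400 + 80 * np.sin(np.arange(weeks) / 4) + rng.normal(0, 20, weeks)  # ILI-GOPC rate

# Validation-window forecasts from four hypothetical models
forecasts = {
    "GLM": truth + rng.normal(5, 30, weeks),
    "LASSO": truth + rng.normal(0, 25, weeks),
    "ARIMA": truth + rng.normal(-10, 35, weeks),
    "DL": truth + rng.normal(0, 18, weeks),
}

# Crude BMA-style weights: Gaussian likelihood of each model's validation errors
loglik = {}
for name, f in forecasts.items():
    err = truth - f
    loglik[name] = norm.logpdf(err, scale=err.std(ddof=1)).sum()
m = max(loglik.values())
weights = {k: np.exp(v - m) for k, v in loglik.items()}
total = sum(weights.values())
weights = {k: v / total for k, v in weights.items()}

# Combine next week's individual forecasts with the estimated weights
next_week = {"GLM": 455.0, "LASSO": 448.0, "ARIMA": 430.0, "DL": 452.0}
combined = sum(weights[k] * next_week[k] for k in next_week)
print(weights, round(combined, 1))
```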
NASA Astrophysics Data System (ADS)
Cai, Y.
2017-12-01
Accurately forecasting crop yields has broad implications for economic trading, food production monitoring, and global food security. However, the variation of environmental variables presents challenges to modelling yields accurately, especially when the lack of highly accurate measurements creates difficulties in creating models that can succeed across space and time. In 2016, we developed a sequence of machine-learning based models forecasting end-of-season corn yields for the US at both the county and national levels. We combined machine learning algorithms in a hierarchical way, and used an understanding of physiological processes in temporal feature selection, to achieve high precision in our intra-season forecasts, including in very anomalous seasons. During the live run, we predicted the national corn yield within 1.40% of the final USDA number as early as August. In backtesting over the 2000-2015 period, our model predicts national yield within 2.69% of the actual yield on average already by mid-August. At the county level, our model predicts 77% of the variation in final yield using data through the beginning of August and improves to 80% by the beginning of October, with the percentage of counties predicted within 10% of the average yield increasing from 68% to 73%. Further, the lowest errors are in the most significant producing regions, resulting in very high precision national-level forecasts. In addition, we identify the changes of important variables throughout the season, specifically early-season land surface temperature, and mid-season land surface temperature and vegetation index. For the 2017 season, we feed 2016 data to the training set, together with additional geospatial data sources, aiming to make the current model even more precise. We will show how our 2017 US corn yield forecasts converge in time, which factors affect the yield the most, as well as present our plans for 2018 model adjustments.
NASA Astrophysics Data System (ADS)
Wu, Q.
2013-12-01
The MM5-SMOKE-CMAQ model system, developed by the United States Environmental Protection Agency (U.S. EPA) as the Models-3 system, has been used for daily air quality forecasting at the Beijing Municipal Environmental Monitoring Center (Beijing MEMC), as part of the Ensemble Air Quality Forecast System for Beijing (EMS-Beijing), since the Olympic Games year 2008. In this study, we collect the daily forecast results of the CMAQ model for the whole year 2010 for model evaluation. The results show that the model performs well on most days but underestimates markedly in some air pollution episodes. A typical air pollution episode from 11th to 20th January 2010 was chosen, in which the air pollution index (API) of particulate matter (PM10) observed by Beijing MEMC reached 180 while the predicted PM10-API was about 100. Taking into account all stations in Beijing, including urban and suburban stations, three numerical methods were used for model improvement: firstly, enhancing the inner domain with 4 km grids, extending the coverage from Beijing alone to an area including its surrounding cities; secondly, updating the Beijing stationary area emission inventory from the statistical county level to the village-town level, which provides more detailed spatial information for area emissions; thirdly, adding industrial point emissions in Beijing's surrounding cities; the latter two are both improvements of the emission inventory. As a result, the peak of the PM10-API averaged over the nine national standard stations, simulated by CMAQ as a daily hindcast, reaches 160 and is much closer to the observation. The new results show better model performance, with a correlation coefficient of 0.93 for the national standard station average and 0.84 for all stations, and a relative error of 15.7% for the national standard station average and 27% for all stations. [Figures: time series of the nine national standard stations in urban Beijing; scatter diagram of all stations in Beijing, with the forecast in red and the new result in blue.]
Forecasting air quality time series using deep learning.
Freeman, Brian S; Taylor, Graham; Gharabaghi, Bahram; Thé, Jesse
2018-04-13
This paper presents one of the first applications of deep learning (DL) techniques to predict air pollution time series. Air quality management relies extensively on time series data captured at air monitoring stations as the basis for identifying population exposure to airborne pollutants and determining compliance with local ambient air standards. In this paper, 8-hr averaged surface ozone (O3) concentrations were predicted using deep learning consisting of a recurrent neural network (RNN) with long short-term memory (LSTM). Hourly air quality and meteorological data were used to train the network and to forecast values up to 72 hours ahead with low error rates. The LSTM was also able to forecast the duration of continuous O3 exceedances. Prior to training the network, the dataset was reviewed for missing data and outliers. Missing data were imputed using a novel technique that filled gaps of fewer than eight time steps with incremental values based on first-order differences of neighboring time periods. The data were then used to train decision trees to evaluate input feature importance over different prediction horizons. The number of features used to train the LSTM model was reduced from 25 to 5, resulting in improved accuracy as measured by mean absolute error (MAE). Parameter sensitivity analysis identified the look-back nodes associated with the RNN as a significant source of error when not aligned with the prediction horizon. Overall, MAEs of less than 2 were obtained for predictions out to 72 hours. Novel deep learning techniques were used to train an 8-hour averaged ozone forecast model. Missing data and outliers within the captured dataset were replaced using a new imputation method that generated values closer to the expected value for the time of day and season. Decision trees were used to identify the input variables with the greatest importance. The methods presented in this paper allow air managers to forecast air pollution concentrations at long range while monitoring only key parameters and without transforming the dataset in its entirety, thus allowing real-time inputs and continuous prediction.
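For readers who want a concrete starting point, the following is a minimal sketch (not the authors' code) of the kind of setup described above: an LSTM that maps a window of past hourly features to a multi-step ozone forecast, written in Python with tf.keras. The window length, horizon, feature count, and synthetic data are illustrative assumptions only.

    # Minimal sketch (not the authors' implementation): an LSTM mapping a window
    # of past hourly features to a multi-step forecast of 8-hr averaged ozone.
    import numpy as np
    import tensorflow as tf

    LOOKBACK = 72    # assumed: 72 hourly steps of history
    HORIZON = 72     # assumed: forecast 72 hours ahead
    N_FEATURES = 5   # assumed: reduced feature set, as reported in the abstract

    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=(LOOKBACK, N_FEATURES)),
        tf.keras.layers.LSTM(64),
        tf.keras.layers.Dense(HORIZON),             # one output per forecast hour
    ])
    model.compile(optimizer="adam", loss="mae")     # MAE, the metric used in the paper

    # Placeholder data; real inputs would be hourly air quality and meteorological
    # observations arranged into sliding windows.
    X = np.random.rand(1000, LOOKBACK, N_FEATURES).astype("float32")
    y = np.random.rand(1000, HORIZON).astype("float32")
    model.fit(X, y, epochs=2, batch_size=32, verbose=0)
    forecast = model.predict(X[:1])                 # shape (1, HORIZON)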
Average Weighted Receiving Time of Weighted Tetrahedron Koch Networks
NASA Astrophysics Data System (ADS)
Dai, Meifeng; Zhang, Danping; Ye, Dandan; Zhang, Cheng; Li, Lei
2015-07-01
We introduce weighted tetrahedron Koch networks with infinite weight factors, which generalize the finite ones. The term weighted time is defined for the first time in this paper. The mean weighted first-passage time (MWFPT) and the average weighted receiving time (AWRT) are defined accordingly in terms of weighted time. We study the AWRT with a weight-dependent walk. Results show that the AWRT for a nontrivial weight factor sequence grows sublinearly with the network order. To investigate the reason for this sublinearity, the average receiving time (ART) is discussed for four cases.
Total probabilities of ensemble runoff forecasts
NASA Astrophysics Data System (ADS)
Olav Skøien, Jon; Bogner, Konrad; Salamon, Peter; Smith, Paul; Pappenberger, Florian
2017-04-01
Ensemble forecasting has a long history in meteorological modelling as an indication of the uncertainty of the forecasts. However, because the ensembles often exhibit both bias and dispersion errors, it is necessary to calibrate and post-process them. Two of the most common methods for this are Bayesian Model Averaging (Raftery et al., 2005) and Ensemble Model Output Statistics (EMOS) (Gneiting et al., 2005). There are also methods for regionalizing these approaches (Berrocal et al., 2007) and for incorporating the correlation between lead times (Hemri et al., 2013). Engeland and Steinsland (2014) developed a framework which can estimate post-processing parameters varying in space and time while giving a spatially and temporally consistent output. However, their method is computationally too complex for our larger number of stations, which makes it unsuitable for our purpose. Our post-processing method for the ensembles is developed in the framework of the European Flood Awareness System (EFAS - http://www.efas.eu), where we make forecasts for the whole of Europe based on observations from around 700 catchments. As the target is flood forecasting, we are more interested in improving the forecast skill for high flows than in a good prediction of the entire flow regime. EFAS uses a combination of ensemble forecasts and deterministic forecasts from different meteorological forecasters to force a distributed hydrologic model and to compute runoff ensembles for each river pixel within the model domain. Instead of showing the mean and the variability of each forecast ensemble individually, we now post-process all model outputs to estimate the total probability, the post-processed mean, and the uncertainty of all ensembles. The post-processing parameters are first calibrated for each calibration location, but we add a spatial penalty in the calibration process to force a spatial correlation of the parameters. The penalty takes distance, stream connectivity, and the size of the catchment areas into account. This can in some cases have a slight negative impact on the calibration error, but it avoids large differences between parameters of nearby locations, whether stream connected or not. The spatial calibration also makes it easier to interpolate the post-processing parameters to uncalibrated locations. We also look into different methods for handling the non-normal distributions of runoff data and the effect of different data transformations on forecast skill in general and for floods in particular. Berrocal, V. J., Raftery, A. E. and Gneiting, T.: Combining Spatial Statistical and Ensemble Information in Probabilistic Weather Forecasts, Mon. Weather Rev., 135(4), 1386-1402, doi:10.1175/MWR3341.1, 2007. Engeland, K. and Steinsland, I.: Probabilistic postprocessing models for flow forecasts for a system of catchments and several lead times, Water Resour. Res., 50(1), 182-197, doi:10.1002/2012WR012757, 2014. Gneiting, T., Raftery, A. E., Westveld, A. H. and Goldman, T.: Calibrated Probabilistic Forecasting Using Ensemble Model Output Statistics and Minimum CRPS Estimation, Mon. Weather Rev., 133(5), 1098-1118, doi:10.1175/MWR2904.1, 2005. Hemri, S., Fundel, F. and Zappa, M.: Simultaneous calibration of ensemble river flow predictions over an entire range of lead times, Water Resour. Res., 49(10), 6744-6755, doi:10.1002/wrcr.20542, 2013. Raftery, A. E., Gneiting, T., Balabdaoui, F. and Polakowski, M.: Using Bayesian Model Averaging to Calibrate Forecast Ensembles, Mon. Weather Rev., 133(5), 1155-1174, doi:10.1175/MWR2906.1, 2005.
Total probabilities of ensemble runoff forecasts
NASA Astrophysics Data System (ADS)
Olav Skøien, Jon; Bogner, Konrad; Salamon, Peter; Smith, Paul; Pappenberger, Florian
2016-04-01
Ensemble forecasting has long been used in meteorological modelling to indicate the uncertainty of the forecasts. However, as the ensembles often exhibit both bias and dispersion errors, it is necessary to calibrate and post-process them. Two of the most common methods for this are Bayesian Model Averaging (Raftery et al., 2005) and Ensemble Model Output Statistics (EMOS) (Gneiting et al., 2005). There are also methods for regionalizing these approaches (Berrocal et al., 2007) and for incorporating the correlation between lead times (Hemri et al., 2013). Engeland and Steinsland (2014) developed a framework which can estimate post-processing parameters that differ in space and time while still giving a spatially and temporally consistent output. However, their method is computationally complex for our larger number of stations and cannot directly be regionalized in the way we would like, so we suggest a different path below. The target of our work is to create a mean forecast with uncertainty bounds for a large number of locations in the framework of the European Flood Awareness System (EFAS - http://www.efas.eu). We are therefore more interested in improving the forecast skill for high flows than the forecast skill at lower runoff levels. EFAS uses a combination of ensemble forecasts and deterministic forecasts from different forecasters to force a distributed hydrologic model and to compute runoff ensembles for each river pixel within the model domain. Instead of showing the mean and the variability of each forecast ensemble individually, we now post-process all model outputs to find a total probability, the post-processed mean, and the uncertainty of all ensembles. The post-processing parameters are first calibrated for each calibration location, while assuring that they have some spatial correlation by adding a spatial penalty in the calibration process. This can in some cases have a slight negative impact on the calibration error, but it makes it easier to interpolate the post-processing parameters to uncalibrated locations. We also look into different methods for handling the non-normal distributions of runoff data and the effect of different data transformations on forecast skill in general and for floods in particular. Berrocal, V. J., Raftery, A. E. and Gneiting, T.: Combining Spatial Statistical and Ensemble Information in Probabilistic Weather Forecasts, Mon. Weather Rev., 135(4), 1386-1402, doi:10.1175/MWR3341.1, 2007. Engeland, K. and Steinsland, I.: Probabilistic postprocessing models for flow forecasts for a system of catchments and several lead times, Water Resour. Res., 50(1), 182-197, doi:10.1002/2012WR012757, 2014. Gneiting, T., Raftery, A. E., Westveld, A. H. and Goldman, T.: Calibrated Probabilistic Forecasting Using Ensemble Model Output Statistics and Minimum CRPS Estimation, Mon. Weather Rev., 133(5), 1098-1118, doi:10.1175/MWR2904.1, 2005. Hemri, S., Fundel, F. and Zappa, M.: Simultaneous calibration of ensemble river flow predictions over an entire range of lead times, Water Resour. Res., 49(10), 6744-6755, doi:10.1002/wrcr.20542, 2013. Raftery, A. E., Gneiting, T., Balabdaoui, F. and Polakowski, M.: Using Bayesian Model Averaging to Calibrate Forecast Ensembles, Mon. Weather Rev., 133(5), 1155-1174, doi:10.1175/MWR2906.1, 2005.
NASA Astrophysics Data System (ADS)
ÁLvarez, A.; Orfila, A.; Tintoré, J.
2004-03-01
Satellites are the only systems able to provide continuous information on the spatiotemporal variability of vast areas of the ocean. Relatively long time series of satellite data are nowadays available. These spatiotemporal time series of satellite observations can be employed to build empirical models, called satellite-based ocean forecasting (SOFT) systems, to forecast certain aspects of future ocean states. SOFT systems can predict satellite-observed fields at different timescales. The forecast skill of SOFT systems predicting the sea surface temperature (SST) at monthly timescales has been extensively explored in previous works. In this work we study the performance of two SOFT systems forecasting, respectively, the SST and the sea level anomaly (SLA) at weekly timescales, that is, providing forecasts of the weekly averaged SST and SLA fields one week in advance. The SOFT systems were implemented in the Ligurian Sea (Western Mediterranean Sea). Predictions from the SOFT systems are compared with observations and with the predictions obtained from persistence models. Results indicate that the SOFT system forecasting the SST field is always superior to persistence in terms of predictability. Minimum prediction errors in the SST are obtained during the winter and spring seasons. On the other hand, the biggest differences between the performance of the SOFT and persistence models are found during summer and autumn. These changes in predictability are explained on the basis of the particular variability of the SST field in the Ligurian Sea. Concerning the SLA field, the SOFT system shows no improvement with respect to persistence.
NASA Astrophysics Data System (ADS)
Foster, Kean; Bertacchi Uvo, Cintia; Olsson, Jonas
2018-05-01
Hydropower makes up nearly half of Sweden's electrical energy production. However, the distribution of the water resources is not aligned with demand, as most of the inflows to the reservoirs occur during the spring flood period. This means that carefully planned reservoir management is required to help redistribute water resources to ensure optimal production, and accurate forecasts of the spring flood volume (SFV) are essential for this. The current operational SFV forecasts use a historical ensemble approach in which the HBV model is forced with historical observations of precipitation and temperature. In this work we develop and test a multi-model prototype, building on previous work, and evaluate its ability to forecast the SFV in 84 sub-basins in northern Sweden. The hypothesis explored in this work is that a multi-model seasonal forecast system incorporating different modelling approaches is generally more skilful at forecasting the SFV in snow-dominated regions than a forecast system that utilises only one approach. The testing is done using cross-validated hindcasts for the period 1981-2015, and the results are evaluated against both climatology and the current system to determine skill. Both multi-model methods considered showed skill over the reference forecasts. The version that combined the historical, dynamical, and statistical modelling chains performed better than the other and was chosen for the prototype. The prototype was able to outperform the current operational system 57% of the time on average and reduce the error in the SFV by about 6% across all sub-basins and forecast dates.
A 30-day-ahead forecast model for grass pollen in north London, United Kingdom.
Smith, Matt; Emberlin, Jean
2006-03-01
A 30-day-ahead forecast method has been developed for grass pollen in north London. The total period of the grass pollen season is covered by eight multiple regression models, each covering a 10-day period running consecutively from 21 May to 8 August. This means that three models were used for each 30-day forecast. The forecast models were produced using grass pollen and environmental data from 1961 to 1999 and tested on data from 2000 and 2002. Model accuracy was judged in two ways: the number of times the forecast model was able to successfully predict the severity (relative to the 1961-1999 dataset as a whole) of grass pollen counts in each of the eight forecast periods on a scale of 1 to 4; the number of times the forecast model was able to predict whether grass pollen counts were higher or lower than the mean. The models achieved 62.5% accuracy in both assessment years when predicting the relative severity of grass pollen counts on a scale of 1 to 4, which equates to six of the eight 10-day periods being forecast correctly. The models attained 87.5% and 100% accuracy in 2000 and 2002, respectively, when predicting whether grass pollen counts would be higher or lower than the mean. Attempting to predict pollen counts during distinct 10-day periods throughout the grass pollen season is a novel approach. The models also employed original methodology in the use of winter averages of the North Atlantic Oscillation to forecast 10-day means of allergenic pollen counts.
NASA Technical Reports Server (NTRS)
Hou, Arthur Y.; Zhang, Sara Q.; deSilva, Arlindo M.
2000-01-01
Global reanalyses currently contain significant errors in the primary fields of the hydrological cycle such as precipitation, evaporation, moisture, and the related cloud fields, especially in the tropics. The Data Assimilation Office (DAO) at the NASA Goddard Space Flight Center has been exploring the use of tropical rainfall and total precipitable water (TPW) observations from the TRMM Microwave Imager (TMI) and the Special Sensor Microwave/Imager (SSM/I) instruments to improve short-range forecasts and reanalyses. We describe a "1+1"D procedure for assimilating 6-hr averaged rainfall and TPW in the Goddard Earth Observing System (GEOS) Data Assimilation System (DAS). The algorithm is based on a 6-hr time integration of a column version of the GEOS DAS, hence the "1+1"D designation. The scheme minimizes the least-squares differences between the observed TPW and rain rates and those produced by the column model over the 6-hr analysis window. This "1+1"D scheme, in its generalization to four dimensions, is related to standard 4D variational assimilation but uses analysis increments instead of the initial condition as the control variable. Results show that assimilating the TMI and SSM/I rainfall and TPW observations improves not only the precipitation and moisture fields but also key climate parameters such as clouds, radiation, upper-tropospheric moisture, and the large-scale circulation in the tropics. In particular, assimilating these data reduces the state-dependent systematic errors in the assimilated products. The improved analysis also provides better initial conditions for short-range forecasts, but the improvements in the forecasts are smaller than the improvements in the time-averaged assimilation fields, indicating that using these data types is effective in correcting biases and other errors of the forecast model in data assimilation.
NASA Technical Reports Server (NTRS)
Hou, Arthur Y.; Zhang, Sara Q.; daSilva, Arlindo M.
1999-01-01
Global reanalyses currently contain significant errors in the primary fields of the hydrological cycle such as precipitation, evaporation, moisture, and the related cloud fields, especially in the tropics. The Data Assimilation Office (DAO) at the NASA Goddard Space Flight Center has been exploring the use of tropical rainfall and total precipitable water (TPW) observations from the TRMM Microwave Imager (TMI) and the Special Sensor Microwave/Imager (SSM/I) instruments to improve short-range forecasts and reanalyses. We describe a 1+1D procedure for assimilating 6-hr averaged rainfall and TPW in the Goddard Earth Observing System (GEOS) Data Assimilation System (DAS). The algorithm is based on a 6-hr time integration of a column version of the GEOS DAS, hence the 1+1D designation. The scheme minimizes the least-squares differences between the observed TPW and rain rates and those produced by the column model over the 6-hr analysis window. This 1+1D scheme, in its generalization to four dimensions, is related to standard 4D variational assimilation but uses analysis increments instead of the initial condition as the control variable. Results show that assimilating the TMI and SSM/I rainfall and TPW observations improves not only the precipitation and moisture fields but also key climate parameters such as clouds, radiation, upper-tropospheric moisture, and the large-scale circulation in the tropics. In particular, assimilating these data reduces the state-dependent systematic errors in the assimilated products. The improved analysis also provides better initial conditions for short-range forecasts, but the improvements in the forecasts are smaller than the improvements in the time-averaged assimilation fields, indicating that using these data types is effective in correcting biases and other errors of the forecast model in data assimilation.
A hybrid procedure for MSW generation forecasting at multiple time scales in Xiamen City, China.
Xu, Lilai; Gao, Peiqing; Cui, Shenghui; Liu, Chun
2013-06-01
Accurate forecasting of municipal solid waste (MSW) generation is crucial and fundamental for the planning, operation and optimization of any MSW management system. Comprehensive information on waste generation at month-scale, medium-term and long-term time scales is especially needed, given the MSW management upgrades facing many developing countries. Several existing models are available but are of little use in forecasting MSW generation at multiple time scales. The goal of this study is to propose a hybrid model that combines the seasonal autoregressive integrated moving average (SARIMA) model and grey system theory to forecast MSW generation at multiple time scales without needing to consider other variables such as demographics and socioeconomic factors. To demonstrate its applicability, a case study of Xiamen City, China was performed. Results show that the model is robust enough to fit and forecast seasonal and annual dynamics of MSW generation at month-scale, medium- and long-term time scales with the desired accuracy. At the month scale, MSW generation in Xiamen City will peak at 132.2 thousand tonnes in July 2015 - 1.5 times the volume in July 2010. In the medium term, annual MSW generation will increase to 1518.1 thousand tonnes by 2015 at an average growth rate of 10%. In the long term, a large volume of MSW will be generated annually, increasing to 2486.3 thousand tonnes by 2020 - 2.5 times the value for 2010. The hybrid model proposed in this paper can enable decision makers to develop integrated policies and measures for waste management over the long term. Copyright © 2013 Elsevier Ltd. All rights reserved.
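As a rough illustration of the two ingredients named above, the sketch below fits a seasonal ARIMA to a monthly series with statsmodels and a classic GM(1,1) grey model to the annual totals. The model orders, the placeholder data, and the split of roles between the two components are assumptions; the paper's actual hybridization is not reproduced here.

    # Sketch only: SARIMA for the month-scale series plus a basic GM(1,1) grey
    # forecast for annual totals. Orders and data are illustrative assumptions.
    import numpy as np
    from statsmodels.tsa.statespace.sarimax import SARIMAX

    def gm11_forecast(x, steps):
        """Classic GM(1,1): fit x(k) + a*z1(k) = b on the accumulated series."""
        x = np.asarray(x, dtype=float)
        x1 = np.cumsum(x)                              # accumulated generating operation
        z1 = 0.5 * (x1[1:] + x1[:-1])                  # background values
        B = np.column_stack([-z1, np.ones(len(z1))])
        a, b = np.linalg.lstsq(B, x[1:], rcond=None)[0]
        k = np.arange(len(x) + steps)
        x1_hat = (x[0] - b / a) * np.exp(-a * k) + b / a
        x_hat = np.diff(x1_hat, prepend=x1_hat[0])     # back to the original series
        return x_hat[len(x):]

    monthly = np.random.gamma(5.0, 20.0, size=120)     # placeholder monthly tonnages (10 years)

    sarima = SARIMAX(monthly, order=(1, 1, 1), seasonal_order=(1, 1, 1, 12)).fit(disp=False)
    month_scale = sarima.forecast(steps=12)            # month-scale forecast

    annual = monthly.reshape(-1, 12).sum(axis=1)       # annual totals
    long_term = gm11_forecast(annual, steps=5)         # medium/long-term forecast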
NASA Astrophysics Data System (ADS)
Zhou, Rurui; Li, Yu; Lu, Di; Liu, Haixing; Zhou, Huicheng
2016-09-01
This paper investigates the use of an epsilon-dominance non-dominated sorted genetic algorithm II (ɛ-NSGAII) as a sampling approach, with the aim of improving sampling efficiency for multiple-metric uncertainty analysis using Generalized Likelihood Uncertainty Estimation (GLUE). The effectiveness of ɛ-NSGAII based sampling is demonstrated against Latin hypercube sampling (LHS) by analyzing sampling efficiency, multiple-metric performance, parameter uncertainty and flood forecasting uncertainty, with a case study of flood forecasting uncertainty evaluation based on the Xinanjiang model (XAJ) for the Qing River reservoir, China. The results demonstrate the following advantages of the ɛ-NSGAII based sampling approach in comparison with LHS: (1) it is more effective and efficient than LHS; for example, the simulation time required to generate 1000 behavioral parameter sets is roughly nine times shorter; (2) the Pareto tradeoffs between metrics are demonstrated clearly by the solutions from ɛ-NSGAII based sampling, and their Pareto-optimal values are better than those of LHS, indicating better forecasting accuracy of the ɛ-NSGAII parameter sets; (3) the parameter posterior distributions from ɛ-NSGAII based sampling are concentrated in appropriate ranges rather than being uniform, which accords with their physical significance, and the parameter uncertainties are reduced significantly; (4) the forecasted floods are close to the observations as evaluated by three measures: the normalized total flow outside the uncertainty intervals (FOUI), the average relative band-width (RB) and the average deviation amplitude (D). The flood forecasting uncertainty is also reduced substantially with ɛ-NSGAII based sampling. This study provides a new sampling approach to improve multiple-metric uncertainty analysis under the framework of GLUE, and could be used to reveal the underlying mechanisms of parameter sets under multiple conflicting metrics in the uncertainty analysis process.
FAA Long-Range Aviation Forecasts Fiscal Years 2005-2020
1993-09-01
Excerpt (abstract snippets): the assumptions translate into somewhat slower growth of aviation activity and FAA workload measures during the extended 16-year period (2004 to 2020) than was ... [table fragment of average annual growth rates: operations 1.8 / 1.2, instrument operations 2.0 / 1.3, IFR aircraft handled 2.0 / 1.3, flight service stations (0.2) / 0.1] ... II. LONG-RANGE FORECAST ASSUMPTIONS: gross domestic product (GDP), adjusted for price changes and expressed in 1987 dollars, will average 1.9 percent annually over the extended 16-year forecast period.
Methodology for the Assessment of the Macroeconomic Impacts of Stricter CAFE Standards - Addendum
2002-01-01
This assessment of the economic impacts of Corporate Average Fuel Economy (CAFE) standards marks the first time the Energy Information Administration has used the new direct linkage of the DRI-WEFA Macroeconomic Model to the National Energy Modeling System (NEMS) in a policy setting. This methodology assures an internally consistent solution between the energy market concepts forecast by NEMS and the aggregate economy as forecast by the DRI-WEFA Macroeconomic Model of the U.S. Economy.
Domestic & International Air Cargo Activity: National and Selected Hub Forecasts.
1979-11-01
Excerpt (abstract snippets): ... Forecast utilizes 1972-dollar GNP from Wharton's annual model, December 6, 1978, Post-Meeting Control Solution ... per ton-mile based on 1973 revenue ton-miles reported in the DOT/CAB Air Carrier Traffic Statistics. South America - RSA - simple average of American (Latin ... [regression fragments: (a) R^2 = 0.9518, F(2/11) = 129.347, DW = 1.41; (b) South America (ESA): ESA = 11.8926 + 18.2908*(GDPSA.C) - 8.94307*(RSA), t-statistics (0.14), (6.08), (-0.46), R^2 = 0.8717, F(2/11) = ...]
Comparative study of four time series methods in forecasting typhoid fever incidence in China.
Zhang, Xingyu; Liu, Yuanyuan; Yang, Min; Zhang, Tao; Young, Alistair A; Li, Xiaosong
2013-01-01
Accurate incidence forecasting of infectious disease is critical for early prevention and for better government strategic planning. In this paper, we present a comprehensive study of different forecasting methods based on the monthly incidence of typhoid fever. The seasonal autoregressive integrated moving average (SARIMA) model and three different models inspired by neural networks, namely, back propagation neural networks (BPNN), radial basis function neural networks (RBFNN), and Elman recurrent neural networks (ERNN) were compared. The differences among the SARIMA model and the neural networks, as well as their advantages and disadvantages, were summarized and discussed. The data obtained for 2005 to 2009 and for 2010 from the Chinese Center for Disease Control and Prevention were used as modeling and forecasting samples, respectively. The performances were evaluated based on three metrics: mean absolute error (MAE), mean absolute percentage error (MAPE), and mean square error (MSE). The results showed that RBFNN obtained the smallest MAE, MAPE and MSE in both the modeling and forecasting processes. The performances of the four models ranked in descending order were: RBFNN, ERNN, BPNN and the SARIMA model.
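Since the comparison above hinges on three error metrics, here is a small sketch of how they are typically computed; the arrays are placeholders, not the study's data.

    # MAE, MAPE, and MSE, the three metrics used to compare the models.
    import numpy as np

    def mae(y_true, y_pred):
        return np.mean(np.abs(y_true - y_pred))

    def mape(y_true, y_pred):
        return np.mean(np.abs((y_true - y_pred) / y_true)) * 100.0

    def mse(y_true, y_pred):
        return np.mean((y_true - y_pred) ** 2)

    y_true = np.array([120.0, 95.0, 130.0, 110.0])   # placeholder monthly incidence
    y_pred = np.array([115.0, 100.0, 125.0, 118.0])
    print(mae(y_true, y_pred), mape(y_true, y_pred), mse(y_true, y_pred))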
Short time ahead wind power production forecast
NASA Astrophysics Data System (ADS)
Sapronova, Alla; Meissner, Catherine; Mana, Matteo
2016-09-01
An accurate prediction of wind power output is crucial for efficient coordination of cooperative energy production from different sources. Long lead-time prediction (from 6 to 24 hours) of wind power for onshore parks can be achieved by using a coupled model that bridges mesoscale weather prediction data and computational fluid dynamics. When a forecast for a shorter time horizon (less than one hour ahead) is required, the accuracy of a predictive model that relies on hourly weather data decreases, because the higher-frequency fluctuations of the wind speed are lost when the data are averaged over an hour. Since the wind speed can vary by up to 50% in magnitude over a period of 5 minutes, these higher-frequency variations of wind speed and direction have to be taken into account for an accurate short-term energy production forecast. In this work, a new model for wind power production forecasts 5 to 30 minutes ahead is presented. The model is based on machine learning techniques and a categorization approach, and uses the historical park production time series and the hourly numerical weather forecast.
Monthly monsoon rainfall forecasting using artificial neural networks
NASA Astrophysics Data System (ADS)
Ganti, Ravikumar
2014-10-01
The Indian agriculture sector depends heavily on monsoon rainfall for successful harvests. In the past, rainfall prediction was mainly performed using regression models, which provide reasonable accuracy in the modelling and forecasting of complex physical systems. Recently, Artificial Neural Networks (ANNs) have been proposed as efficient tools for modelling and forecasting. A feed-forward multi-layer perceptron type of ANN architecture trained using the popular back-propagation algorithm was employed in this study. Other techniques investigated for modelling monthly monsoon rainfall include linear and non-linear regression models, for comparison purposes. The data employed in this study comprise monthly rainfall and the monthly average of the daily maximum temperature in the North Central region of India. Specifically, four regression models and two ANN models were developed. The performance of the various models was evaluated using a wide variety of standard statistical parameters and scatter plots. The results obtained in this study for forecasting monsoon rainfall using ANNs are encouraging. India's economy and agricultural activities can be managed more effectively with the help of accurate monsoon rainfall forecasts.
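A feed-forward network of the kind described can be sketched with scikit-learn's MLPRegressor as a stand-in for a hand-coded back-propagation network; the predictors (previous-month rainfall and monthly mean maximum temperature), the hidden-layer size, and the synthetic data are assumptions, not the study's configuration.

    # Sketch of a feed-forward ANN for monthly monsoon rainfall.
    import numpy as np
    from sklearn.neural_network import MLPRegressor
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(0)
    n = 300
    # Assumed predictors: previous-month rainfall and monthly mean of the daily
    # maximum temperature; the target is the current month's rainfall.
    X = np.column_stack([rng.gamma(2.0, 50.0, n), 30.0 + 5.0 * rng.standard_normal(n)])
    y = 0.8 * X[:, 0] + 10.0 * np.sin(X[:, 1] / 5.0) + rng.normal(0.0, 20.0, n)

    model = make_pipeline(
        StandardScaler(),
        MLPRegressor(hidden_layer_sizes=(10,), max_iter=2000, random_state=0),
    )
    model.fit(X[:250], y[:250])
    print(model.score(X[250:], y[250:]))   # R^2 on a held-out block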
A Modified LS+AR Model to Improve the Accuracy of the Short-term Polar Motion Prediction
NASA Astrophysics Data System (ADS)
Wang, Z. W.; Wang, Q. X.; Ding, Y. Q.; Zhang, J. J.; Liu, S. S.
2017-03-01
There are two problems with the LS (Least Squares) + AR (AutoRegressive) model in polar motion forecasting: the residuals of the LS fit are reasonable within the fitting interval but poor in extrapolation, and the LS fitting residual sequence is non-linear, so it is not appropriate to build the AR model for the residuals to be forecast solely from the residual sequence before the forecast epoch. In this paper, we address these two problems in two steps. First, constraints are added at the two endpoints of the LS fitting data to fix them on the LS fitting curve, so that the fitted values next to the two endpoints are very close to the observations. Second, we select the interpolation residual sequence of an inward LS fitting curve, which has a variation trend similar to that of the LS extrapolation residual sequence, as the modelling object of the AR residual forecast. Worked examples show that this solution can effectively improve the short-term polar motion prediction accuracy of the LS+AR model. In addition, comparisons with the RLS (Robustified Least Squares)+AR, RLS+ARIMA (AutoRegressive Integrated Moving Average), and LS+ANN (Artificial Neural Network) forecast models confirm the feasibility and effectiveness of the solution for polar motion forecasting. The results, especially for polar motion forecasts at 1-10 days, show that the forecast accuracy of the proposed model can reach the level of the best international results.
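The paper's specific contributions (endpoint constraints on the LS fit and the modified choice of residual sequence for the AR model) are not reproduced here; the sketch below only shows a baseline LS+AR pipeline on a synthetic polar-motion-like series, with assumed periods, AR order, and forecast horizon.

    # Baseline LS+AR sketch: least-squares fit of a trend plus annual and
    # Chandler-like harmonics, then an AR model on the fitting residuals.
    import numpy as np
    from statsmodels.tsa.ar_model import AutoReg

    t = np.arange(3000.0)                    # days (synthetic)
    P1, P2 = 365.25, 433.0                   # assumed annual and Chandler periods

    series = (0.1 + 1e-4 * t
              + 0.20 * np.sin(2 * np.pi * t / P1)
              + 0.15 * np.cos(2 * np.pi * t / P2)
              + 0.01 * np.random.standard_normal(t.size))

    def design(tt):
        return np.column_stack([np.ones_like(tt), tt,
                                np.sin(2 * np.pi * tt / P1), np.cos(2 * np.pi * tt / P1),
                                np.sin(2 * np.pi * tt / P2), np.cos(2 * np.pi * tt / P2)])

    coef, *_ = np.linalg.lstsq(design(t), series, rcond=None)
    residuals = series - design(t) @ coef

    ar = AutoReg(residuals, lags=20).fit()   # AR order is an assumption
    h = 10                                   # 10-day forecast
    t_future = np.arange(t[-1] + 1, t[-1] + 1 + h)
    forecast = design(t_future) @ coef + ar.predict(start=len(residuals),
                                                    end=len(residuals) + h - 1)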
Sub-seasonal predictability of water scarcity at global and local scale
NASA Astrophysics Data System (ADS)
Wanders, N.; Wada, Y.; Wood, E. F.
2016-12-01
Forecasting the water demand and availability for agriculture and energy production has been neglected in previous research, partly because most large-scale hydrological models lack the skill to forecast human water demands at the sub-seasonal time scale. We study the potential of a sub-seasonal water scarcity forecasting system for improved water management decision making and improved estimates of water demand and availability. We have generated 32 years of global sub-seasonal multi-model water availability, demand and scarcity forecasts. The quality of the forecasts is compared to a reference forecast derived from resampling historic weather observations. The newly developed system has been evaluated both at the global scale and in a real-time local application in the Sacramento valley for the Trinity, Shasta and Oroville reservoirs, where the water demand for agriculture and hydropower is high. At the global scale we find that the reference forecast shows high initial forecast skill (up to 8 months) for water scarcity in the eastern US, Central Asia and Sub-Saharan Africa. Adding dynamical sub-seasonal forecasts results in a clear improvement for most regions of the world, increasing the forecasts' lead time by 2 or more months on average. The strongest improvements are found in the US, Brazil, Central Asia and Australia. For the Sacramento valley we can accurately predict anomalies in the reservoir inflow, hydropower potential and the downstream irrigation water demand 6 months in advance. This allows us to forecast potential water scarcity in the Sacramento valley and adjust the reservoir management to prevent deficits in energy or irrigation water availability. The newly developed forecast system shows that it is possible to reduce the vulnerability to upcoming water scarcity events and allows optimization of the distribution of the available water between the agricultural and energy sectors half a year in advance.
NASA Technical Reports Server (NTRS)
Caluori, V. A.; Conrad, R. T.; Jenkins, J. C.
1980-01-01
Technological requirements and forecasts of rocket engine parameters and launch vehicles for future Earth to geosynchronous orbit transportation systems are presented. The parametric performance, weight, and envelope data for the LOX/CH4, fuel-cooled, staged combustion cycle and the hydrogen-cooled, expander bleed cycle engine concepts are discussed. The costing methodology and ground rules used to develop the engine study are summarized. The weight estimating methodology for winged launch vehicles is described, and summary data used to evaluate and compare weights for dedicated and integrated O2/H2 subsystems for the SSTO, HLLV and POTV are presented. Detailed weights, comparisons, and weight scaling equations are provided.
Huang, Daizheng; Wu, Zhihui
2017-01-01
Accurately predicting the trend of outpatient visits by mathematical modeling can help policy makers manage hospitals effectively, reasonably organize schedules for human resources and finances, and appropriately distribute hospital material resources. In this study, a hybrid method based on empirical mode decomposition and back-propagation artificial neural networks optimized by particle swarm optimization is developed to forecast outpatient visits on the basis of monthly numbers. The outpatient visit data are retrieved from January 2005 to December 2013 and first taken as the original time series. Second, the original time series is decomposed into a finite and often small number of intrinsic mode functions by the empirical mode decomposition technique. Third, a three-layer back-propagation artificial neural network is constructed to forecast each intrinsic mode function. To improve network performance and avoid falling into a local minimum, particle swarm optimization is employed to optimize the weights and thresholds of the back-propagation artificial neural networks. Finally, the superposition of the forecasting results of the intrinsic mode functions is regarded as the ultimate forecasting value. Simulation indicates that the proposed method attains a better performance index than the other four methods. PMID:28222194
Using ensembles in water management: forecasting dry and wet episodes
NASA Astrophysics Data System (ADS)
van het Schip-Haverkamp, Tessa; van den Berg, Wim; van de Beek, Remco
2015-04-01
Extreme weather situations such as droughts and extreme precipitation are becoming more frequent, which makes it more important to obtain accurate weather forecasts for the short and long term. Ensembles can provide a solution in terms of scenario forecasts. MeteoGroup uses ensembles in a new forecasting technique which presents a number of weather scenarios for a dynamical water management project, called Water-Rijk, in which water storage and water retention play a large role. The Water-Rijk is part of Park Lingezegen, which is located between Arnhem and Nijmegen in the Netherlands. In collaboration with the University of Wageningen, Alterra and Eijkelkamp, a forecasting system has been developed for this area which can provide water boards with a number of weather and hydrology scenarios in order to assist in the decision whether or not water retention or water storage is necessary in the near future. In order to make a forecast for drought and extensive precipitation, the difference 'precipitation - evaporation' is used as a measure of drought in the weather forecasts. In case of an upcoming drought this difference will take larger negative values; in case of a wet episode, this difference will be positive. The Makkink potential evaporation is used, which gives the most accurate potential evaporation values during the summer, when evaporation plays an important role in the availability of surface water. Scenarios are determined by reducing the large number of forecasts in the ensemble to a number of averaged members, each with its own likelihood of occurrence. For the Water-Rijk project five scenario forecasts are calculated: extreme dry, dry, normal, wet and extreme wet. These scenarios are constructed for two forecasting periods, each using its own ensemble technique: up to 48 hours ahead and up to 15 days ahead. The 48-hour forecast uses an ensemble constructed from forecasts of multiple high-resolution regional models: UKMO's Euro4 model, the ECMWF model, WRF and Hirlam. Using multiple model runs and additional post-processing, an ensemble can be created from non-ensemble models. The 15-day forecast uses the ECMWF Ensemble Prediction System forecast, from which scenarios can be deduced directly. A combination of the ensembles from the two forecasting periods is used in order to have the highest possible resolution for the first 48 hours, followed by the lower-resolution long-term forecast.
Do Sell-Side Stock Analysts Exhibit Escalation of Commitment?
Milkman, Katherine L.
2010-01-01
This paper presents evidence that when an analyst makes an out-of-consensus forecast of a company’s quarterly earnings that turns out to be incorrect, she escalates her commitment to maintaining an out-of-consensus view on the company. Relative to an analyst who was close to the consensus, the out-of-consensus analyst adjusts her forecasts for the current fiscal year’s earnings less in the direction of the quarterly earnings surprise. On average, this type of updating behavior reduces forecasting accuracy, so it does not seem to reflect superior private information. Further empirical results suggest that analysts do not have financial incentives to stand by extreme stock calls in the face of contradictory evidence. Managerial and financial market implications are discussed. PMID:21516220
NASA Astrophysics Data System (ADS)
Sitohang, Yosep Oktavianus; Darmawan, Gumgum
2017-08-01
This research compares two time series forecasting models for predicting the sales volume of motorcycles in Indonesia. The first forecasting model is the Autoregressive Fractionally Integrated Moving Average (ARFIMA). ARFIMA can handle non-stationary data and forecasts long-memory data more accurately than ARIMA, because the fractional difference parameter can capture correlation structure in data with short memory, long memory, or both structures simultaneously. The second forecasting model is Singular Spectrum Analysis (SSA). The advantage of this technique is that it can decompose time series data into the classic components, i.e., trend, cyclical, seasonal and noise components, which makes its forecasting accuracy significantly better. Furthermore, SSA is a model-free technique, so it is likely to have a very wide range of applications. Selection of the best model is based on the lowest MAPE value. Based on the calculations, the best ARFIMA model is ARFIMA(3, d = 0.63, 0) with a MAPE of 22.95 percent. For SSA, with a window length of 53 and 4 groups of reconstructed data, the resulting MAPE is 13.57 percent. Based on these results it is concluded that SSA produces better forecasting accuracy.
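The decomposition step of SSA can be written compactly in numpy (embedding into a trajectory matrix, SVD, grouping, and diagonal averaging); the window length of 53 and the use of the first four components mirror the numbers quoted above, but the synthetic series and the grouping choice are assumptions, and the SSA forecasting step itself is not shown.

    # Basic SSA reconstruction sketch: embed, SVD, group, diagonally average.
    import numpy as np

    def ssa_reconstruct(x, window, components):
        x = np.asarray(x, dtype=float)
        n = x.size
        k = n - window + 1
        # Trajectory (Hankel) matrix: columns are lagged windows of the series.
        X = np.column_stack([x[i:i + window] for i in range(k)])
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        # Keep only the selected elementary matrices (e.g. trend + seasonality).
        X_hat = sum(s[j] * np.outer(U[:, j], Vt[j]) for j in components)
        # Diagonal averaging (Hankelization) back to a series of length n.
        rec = np.zeros(n)
        counts = np.zeros(n)
        for i in range(window):
            for j in range(k):
                rec[i + j] += X_hat[i, j]
                counts[i + j] += 1.0
        return rec / counts

    t = np.arange(200)
    sales = 100 + 0.5 * t + 20 * np.sin(2 * np.pi * t / 12) + np.random.normal(0, 5, t.size)
    smooth = ssa_reconstruct(sales, window=53, components=range(4))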
Shafizadeh-Moghadam, Hossein; Valavi, Roozbeh; Shahabi, Himan; Chapi, Kamran; Shirzadi, Ataollah
2018-07-01
In this research, eight individual machine learning and statistical models are implemented and compared, and based on their results seven ensemble models for flood susceptibility assessment are introduced. The individual models are artificial neural networks, classification and regression trees, flexible discriminant analysis, the generalized linear model, the generalized additive model, boosted regression trees, multivariate adaptive regression splines, and maximum entropy; the ensemble models are Ensemble Model committee averaging (EMca), Ensemble Model confidence interval Inferior (EMciInf), Ensemble Model confidence interval Superior (EMciSup), Ensemble Model to estimate the coefficient of variation (EMcv), Ensemble Model to estimate the mean (EMmean), Ensemble Model to estimate the median (EMmedian), and Ensemble Model based on weighted mean (EMwmean). The data set covered 201 flood events in the Haraz watershed (Mazandaran province, Iran) and 10,000 randomly selected non-occurrence points. Among the individual models, the highest Area Under the Receiver Operating Characteristic curve (AUROC) belonged to boosted regression trees (0.975) and the lowest value was recorded for the generalized linear model (0.642). On the other hand, the proposed EMmedian resulted in the highest accuracy (0.976) among all models. Despite the outstanding performance of some individual models, the variability among their predictions was considerable. Therefore, to reduce uncertainty and to create more generalizable, more stable, and less sensitive models, ensemble forecasting approaches, and in particular EMmedian, are recommended for flood susceptibility assessment. Copyright © 2018 Elsevier Ltd. All rights reserved.
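The ensemble products themselves reduce to simple combinations of the individual model outputs; the sketch below shows mean, median, and weighted-mean combinations analogous to EMca, EMmedian, and EMwmean. Using AUROC values as weights is an assumption for illustration, and the prediction array is a placeholder.

    # Combining individual susceptibility predictions into ensemble products.
    import numpy as np

    preds = np.random.rand(8, 1000)     # rows: 8 individual models; columns: locations
    auroc = np.array([0.90, 0.88, 0.80, 0.64, 0.85, 0.975, 0.93, 0.87])

    em_mean = preds.mean(axis=0)                         # committee averaging (EMca-like)
    em_median = np.median(preds, axis=0)                 # EMmedian-like product
    weights = auroc / auroc.sum()
    em_wmean = (weights[:, None] * preds).sum(axis=0)    # weighted mean (EMwmean-like)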
Code of Federal Regulations, 2011 CFR
2011-01-01
... reliability of wind and weather forecasting. (2) The location and kinds of navigation aids. (3) The prevailing... power available; (5) The airplane is operating in standard atmosphere; and (6) The weight of the...
Page, R.A.; Lahr, J.C.; Chouet, B.A.; Power, J.A.; Stephens, C.D.
1994-01-01
The waning phase of the 1989-1990 eruption of Redoubt Volcano in the Cook Inlet region of south-central Alaska comprised a quasi-regular pattern of repetitious dome growth and destruction that lasted from February 15 to late April 1990. The dome failures produced ash plumes hazardous to airline traffic. In response to this hazard, the Alaska Volcano Observatory sought to forecast these ash-producing events using two approaches. One approach built on early successes in issuing warnings before major eruptions on December 14, 1989 and January 2, 1990. These warnings were based largely on changes in seismic activity related to the occurrence of precursory swarms of long-period seismic events. The search for precursory swarms of long-period seismicity was continued through the waning phase of the eruption and led to warnings before tephra eruptions on March 23 and April 6. The observed regularity of dome failures after February 15 suggested that a statistical forecasting method based on a constant-rate failure model might also be successful. The first statistical forecast was issued on March 16 after seven events had occurred, at an average interval of 4.5 days. At this time, the interval between dome failures abruptly lengthened. Accordingly, the forecast was unsuccessful and further forecasting was suspended until the regularity of subsequent failures could be confirmed. Statistical forecasting resumed on April 12, after four dome failure episodes separated by an average of 7.8 days. One dome failure (April 15) was successfully forecast using a 70% confidence window, and a second event (April 21) was narrowly missed before the end of the activity. The cessation of dome failures after April 21 resulted in a concluding false alarm. Although forecasting success during the eruption was limited, retrospective analysis shows that early and consistent application of the statistical method using a constant-rate failure model and a 90% confidence window could have yielded five successful forecasts and two false alarms; no events would have been missed. On closer examination, the intervals between successive dome failures are not uniform but tend to increase with time. This increase attests to the continuous, slowly decreasing supply of magma to the surface vent during the waning phase of the eruption. The domes formed in a precarious position in a breach in the summit crater rim where they were susceptible to gravitational collapse. The instability of the February 15-April 21 domes relative to the earlier domes is attributed to reaming the lip of the vent by a laterally directed explosion during the major dome-destroying eruption of February 15, a process which would leave a less secure foundation for subsequent domes. © 1994.
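The statistical forecasting idea described above can be illustrated with a small sketch; it assumes an exponential (constant-rate) waiting-time model fitted to the mean repose interval and computes a central 90% window for the next event. The placeholder intervals and the choice of distribution are assumptions; the paper's exact formulation may differ.

    # Sketch assuming an exponential (constant-rate) waiting-time model.
    import numpy as np

    repose_days = np.array([7.0, 8.5, 7.5, 8.2])    # placeholder intervals between dome failures
    mean_repose = repose_days.mean()

    lower_q, upper_q = 0.05, 0.95                   # central 90% confidence window
    t_low = -mean_repose * np.log(1.0 - lower_q)    # exponential quantiles, F(t) = 1 - exp(-t/mean)
    t_high = -mean_repose * np.log(1.0 - upper_q)
    print(f"next failure forecast window: {t_low:.1f} to {t_high:.1f} days after the last event")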
Do we need demographic data to forecast plant population dynamics?
Tredennick, Andrew T.; Hooten, Mevin B.; Adler, Peter B.
2017-01-01
Rapid environmental change has generated growing interest in forecasts of future population trajectories. Traditional population models built with detailed demographic observations from one study site can address the impacts of environmental change at particular locations, but are difficult to scale up to the landscape and regional scales relevant to management decisions. An alternative is to build models using population-level data that are much easier to collect over broad spatial scales than individual-level data. However, it is unknown whether models built using population-level data adequately capture the effects of density-dependence and environmental forcing that are necessary to generate skillful forecasts. Here, we test the consequences of aggregating individual responses when forecasting the population states (percent cover) and trajectories of four perennial grass species in a semi-arid grassland in Montana, USA. We parameterized two population models for each species, one based on individual-level data (survival, growth and recruitment) and one on population-level data (percent cover), and compared their forecasting accuracy and forecast horizons with and without the inclusion of climate covariates. For both models, we used Bayesian ridge regression to weight the influence of climate covariates for optimal prediction. In the absence of climate effects, we found no significant difference between the forecast accuracy of models based on individual-level data and models based on population-level data. Climate effects were weak, but increased forecast accuracy for two species. Increases in accuracy with climate covariates were similar between model types. In our case study, percent cover models generated forecasts as accurate as those from a demographic model. For the goal of forecasting, models based on aggregated individual-level data may offer a practical alternative to data-intensive demographic models. Long time series of percent cover data already exist for many plant species. Modelers should exploit these data to predict the impacts of environmental change.
Silva-Palacios, Inmaculada; Fernández-Rodríguez, Santiago; Durán-Barroso, Pablo; Tormo-Molina, Rafael; Maya-Manzano, José María; Gonzalo-Garijo, Ángela
2016-02-01
Cupressaceae includes species cultivated as ornamentals in the urban environment. This study aims to investigate airborne pollen data for Cupressaceae on the southwestern Iberian Peninsula over a 21-year period and to analyse the trends in these data and their relationship with meteorological parameters using time series analysis. Aerobiological sampling was conducted from 1993 to 2013 in Badajoz (SW Spain). The main pollen season for Cupressaceae lasted, on average, 58 days, ranging from 55 to 112 days, from 24 January to 22 March. Furthermore, a short-term forecasting model has been developed for daily pollen concentrations. The model proposed to forecast the airborne pollen concentration is described by one equation composed of two terms: the first term represents the pollen concentration trend in the air according to the average concentration of the previous 10 days; the second term is obtained from the actual pollen concentration value, which is calculated from the most representative meteorological parameters multiplied by a fitting coefficient. Temperature was the main meteorological factor influencing the daily pollen forecast, with rainfall the second most important factor. This model represents a good approach to a continuous balance model of Cupressaceae pollen concentration and is supported by close agreement between the observed and predicted mean concentrations. The novelty of the proposed model is the analysis of meteorological parameters that are not frequently used in aerobiology.
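The structure of the forecast equation (a 10-day-average trend term plus a meteorology-driven term scaled by a fitted coefficient) can be sketched as follows; the equal weighting of the two terms, the choice of temperature and rainfall as the only predictors, and all numerical values are assumptions made for illustration.

    # Structural sketch of the two-term daily pollen forecast described above.
    import numpy as np

    def forecast_next_day(pollen_history, temp_next, rain_next, met_coefs, alpha=0.5):
        trend_term = np.mean(pollen_history[-10:])       # average of the previous 10 days
        met_term = met_coefs[0] + met_coefs[1] * temp_next + met_coefs[2] * rain_next
        return alpha * trend_term + (1.0 - alpha) * met_term

    # met_coefs would come from regressing observed concentrations on the
    # meteorological variables over the training years; values here are placeholders.
    met_coefs = np.array([5.0, 2.0, -1.5])
    history = np.array([10, 12, 18, 25, 30, 28, 35, 40, 38, 42, 45], dtype=float)
    print(forecast_next_day(history, temp_next=14.0, rain_next=0.0, met_coefs=met_coefs))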
Regional forecast model for the Olea pollen season in Extremadura (SW Spain).
Fernández-Rodríguez, Santiago; Durán-Barroso, Pablo; Silva-Palacios, Inmaculada; Tormo-Molina, Rafael; Maya-Manzano, José María; Gonzalo-Garijo, Ángela
2016-10-01
The olive tree (Olea europaea) is a predominantly Mediterranean anemophilous species. The pollen allergens from this tree are an important cause of allergic problems. Olea pollen may be relevant in relation to climate change, because its flowering phenology is related to meteorological parameters. This study aims to investigate airborne Olea pollen data from a city on the SW Iberian Peninsula, to analyse the trends in these data and their relationships with meteorological parameters using time series analysis. Aerobiological sampling was conducted from 1994 to 2013 in Badajoz (SW Spain) using a 7-day Hirst-type volumetric sampler. The main Olea pollen season lasted an average of 34 days, from May 4th to June 7th. The model proposed to forecast airborne pollen concentrations is described by one equation composed of two terms: the first term represents the resilience of the pollen concentration trend in the air according to the average concentration of the previous 10 days; the second term is obtained from the actual pollen concentration value, which is calculated from the most representative meteorological variables multiplied by a fitting coefficient. Due to the allergenic characteristics of this pollen type, it is necessary to forecast its short-term prevalence using a long record of data in a city with a Mediterranean climate. The model obtained provides a suitable level of confidence for forecasting Olea airborne pollen concentrations.
Regional forecast model for the Olea pollen season in Extremadura (SW Spain)
NASA Astrophysics Data System (ADS)
Fernández-Rodríguez, Santiago; Durán-Barroso, Pablo; Silva-Palacios, Inmaculada; Tormo-Molina, Rafael; Maya-Manzano, José María; Gonzalo-Garijo, Ángela
2016-10-01
The olive tree (Olea europaea) is a predominantly Mediterranean anemophilous species. The pollen allergens from this tree are an important cause of allergic problems. Olea pollen may be relevant in relation to climate change, because its flowering phenology is related to meteorological parameters. This study aims to investigate airborne Olea pollen data from a city on the SW Iberian Peninsula, to analyse the trends in these data and their relationships with meteorological parameters using time series analysis. Aerobiological sampling was conducted from 1994 to 2013 in Badajoz (SW Spain) using a 7-day Hirst-type volumetric sampler. The main Olea pollen season lasted an average of 34 days, from May 4th to June 7th. The model proposed to forecast airborne pollen concentrations is described by one equation composed of two terms: the first term represents the resilience of the pollen concentration trend in the air according to the average concentration of the previous 10 days; the second term is obtained from the actual pollen concentration value, which is calculated from the most representative meteorological variables multiplied by a fitting coefficient. Due to the allergenic characteristics of this pollen type, it is necessary to forecast its short-term prevalence using a long record of data in a city with a Mediterranean climate. The model obtained provides a suitable level of confidence for forecasting Olea airborne pollen concentrations.
Trends in the predictive performance of raw ensemble weather forecasts
NASA Astrophysics Data System (ADS)
Hemri, Stephan; Scheuerer, Michael; Pappenberger, Florian; Bogner, Konrad; Haiden, Thomas
2015-04-01
Over the last two decades the paradigm in weather forecasting has shifted from being deterministic to probabilistic. Accordingly, numerical weather prediction (NWP) models have been run increasingly as ensemble forecasting systems. The goal of such ensemble forecasts is to approximate the forecast probability distribution by a finite sample of scenarios. Global ensemble forecast systems, like the European Centre for Medium-Range Weather Forecasts (ECMWF) ensemble, are prone to probabilistic biases, and are therefore not reliable. They particularly tend to be underdispersive for surface weather parameters. Hence, statistical post-processing is required in order to obtain reliable and sharp forecasts. In this study we apply statistical post-processing to ensemble forecasts of near-surface temperature, 24-hour precipitation totals, and near-surface wind speed from the global ECMWF model. Our main objective is to evaluate the evolution of the difference in skill between the raw ensemble and the post-processed forecasts. The ECMWF ensemble is under continuous development, and hence its forecast skill improves over time. Parts of these improvements may be due to a reduction of probabilistic bias. Thus, we first hypothesize that the gain by post-processing decreases over time. Based on ECMWF forecasts from January 2002 to March 2014 and corresponding observations from globally distributed stations we generate post-processed forecasts by ensemble model output statistics (EMOS) for each station and variable. Parameter estimates are obtained by minimizing the Continuous Ranked Probability Score (CRPS) over rolling training periods that consist of the n days preceding the initialization dates. Given the higher average skill in terms of CRPS of the post-processed forecasts for all three variables, we analyze the evolution of the difference in skill between raw ensemble and EMOS forecasts. The fact that the gap in skill remains almost constant over time, especially for near-surface wind speed, suggests that improvements to the atmospheric model have an effect quite different from what calibration by statistical post-processing is doing. That is, they are increasing potential skill. Thus this study indicates that (a) further model development is important even if one is just interested in point forecasts, and (b) statistical post-processing is important because it will keep adding skill in the foreseeable future.
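A minimal Gaussian EMOS (nonhomogeneous regression) sketch is shown below for a temperature-like variable: the predictive mean is linear in the ensemble mean, the predictive variance is linear in the ensemble variance, and the four parameters are chosen by minimizing the mean CRPS of a normal predictive distribution over a training sample. The synthetic data, the simple parameterization, and the training setup are assumptions, not the configuration used in the study.

    # Minimal Gaussian EMOS sketch with CRPS minimization (closed-form normal CRPS).
    import numpy as np
    from scipy.optimize import minimize
    from scipy.stats import norm

    def crps_normal(mu, sigma, obs):
        z = (obs - mu) / sigma
        return sigma * (z * (2.0 * norm.cdf(z) - 1.0) + 2.0 * norm.pdf(z) - 1.0 / np.sqrt(np.pi))

    def mean_crps(params, ens_mean, ens_var, obs):
        a, b, c, d = params
        mu = a + b * ens_mean
        sigma = np.sqrt(np.maximum(c + d * ens_var, 1e-6))   # keep the variance positive
        return np.mean(crps_normal(mu, sigma, obs))

    # Placeholder training data: an underdispersive ensemble with a warm bias.
    rng = np.random.default_rng(1)
    truth = 10.0 + 5.0 * rng.standard_normal(500)
    ens = truth[:, None] + 1.0 + 0.5 * rng.standard_normal((500, 50))
    ens_mean, ens_var = ens.mean(axis=1), ens.var(axis=1)

    res = minimize(mean_crps, x0=[0.0, 1.0, 1.0, 1.0],
                   args=(ens_mean, ens_var, truth), method="Nelder-Mead")
    a, b, c, d = res.x    # apply to new cases: mu = a + b*ens_mean, var = c + d*ens_var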
NASA Technical Reports Server (NTRS)
Koster, Randal D.; Mahanama, P. P.
2012-01-01
Key to translating soil moisture memory into subseasonal precipitation and air temperature forecast skill is a realistic treatment of evaporation in the forecast system used - in particular, a realistic treatment of how evaporation responds to variations in soil moisture. The inherent soil moisture-evaporation relationships used in today's land surface models (LSMs), however, arguably reflect little more than guesswork given the lack of evaporation and soil moisture data at the spatial scales represented by regional and global models. Here we present a new approach for evaluating this critical aspect of LSMs. Seasonally averaged precipitation is used as a proxy for seasonally averaged soil moisture, and seasonally averaged air temperature is used as a proxy for seasonally averaged evaporation (e.g., more evaporative cooling leads to cooler temperatures); the relationship between historical precipitation and temperature measurements accordingly mimics, in certain important ways, nature's relationship between soil moisture and evaporation. Additional information on the relationship is gleaned from joint analysis of precipitation and streamflow measurements. An experimental framework that utilizes these ideas to guide the development of an improved soil moisture-evaporation relationship is described and demonstrated.
Evaluating Downscaling Methods for Seasonal Climate Forecasts over East Africa
NASA Technical Reports Server (NTRS)
Roberts, J. Brent; Robertson, Franklin R.; Bosilovich, Michael; Lyon, Bradfield; Funk, Chris
2013-01-01
The U.S. National Multi-Model Ensemble seasonal forecasting system is providing hindcast and real-time data streams to be used in assessing and improving seasonal predictive capacity. The NASA / USAID SERVIR project, which leverages satellite and modeling-based resources for environmental decision making in developing nations, is focusing on the evaluation of NMME forecasts specifically for use in impact modeling within hub regions including East Africa, the Hindu Kush-Himalayan (HKH) region and Mesoamerica. One of the participating models in NMME is the NASA Goddard Earth Observing System (GEOS5). This work will present an intercomparison of downscaling methods using the GEOS5 seasonal forecasts of temperature and precipitation over East Africa. The current seasonal forecasting system provides monthly averaged forecast anomalies. These anomalies must be spatially downscaled and temporally disaggregated for use in application modeling (e.g. hydrology, agriculture). There are several available downscaling methodologies that can be implemented to accomplish this goal. Selected methods include both a nonhomogeneous hidden Markov model and an analogue-based approach. A particular emphasis will be placed on quantifying the ability of different methods to capture the intermittency of precipitation within both the short and long rain seasons. Further, the ability to capture spatial covariances will be assessed. Both probabilistic and deterministic skill measures will be evaluated over the hindcast period.
Arima model and exponential smoothing method: A comparison
NASA Astrophysics Data System (ADS)
Wan Ahmad, Wan Kamarul Ariffin; Ahmad, Sabri
2013-04-01
This study shows the comparison between the Autoregressive Integrated Moving Average (ARIMA) model and the Exponential Smoothing Method in making a prediction. The comparison is focused on the ability of both methods to make forecasts with different numbers of data sources and different lengths of forecasting period. For this purpose, the data from The Price of Crude Palm Oil (RM/tonne), Exchange Rates of Ringgit Malaysia (RM) in comparison to Great Britain Pound (GBP) and also The Price of SMR 20 Rubber Type (cents/kg), with three different time series, are used in the comparison process. Then, the forecasting accuracy of each model is measured by examining the prediction errors produced, using the Mean Squared Error (MSE), Mean Absolute Percentage Error (MAPE), and Mean Absolute Deviation (MAD). The study shows that the ARIMA model can produce a better prediction for long-term forecasting with limited data sources, but cannot produce a better prediction for a time series with a narrow range from one point to another, as in the time series for Exchange Rates. On the contrary, the Exponential Smoothing Method can produce a better forecast for Exchange Rates, which has a narrow range from one point to another in its time series, while it cannot produce a better prediction for a longer forecasting period.
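A minimal sketch of the kind of comparison described, not the authors' code, using statsmodels on a synthetic series; the ARIMA order, the hold-out length and the data are illustrative assumptions.

# Sketch of the ARIMA vs. exponential smoothing comparison (illustrative data and orders).
import numpy as np
from statsmodels.tsa.arima.model import ARIMA
from statsmodels.tsa.holtwinters import ExponentialSmoothing

def mse(y, f):  return np.mean((y - f) ** 2)
def mape(y, f): return np.mean(np.abs((y - f) / y)) * 100
def mad(y, f):  return np.mean(np.abs(y - f))

rng = np.random.default_rng(1)
series = np.cumsum(rng.normal(0.5, 2.0, 120)) + 100     # toy price-like series
train, test = series[:108], series[108:]                # hold out 12 points

arima_fc = ARIMA(train, order=(1, 1, 1)).fit().forecast(steps=len(test))
holt_fc = ExponentialSmoothing(train, trend="add").fit().forecast(len(test))

for name, fc in [("ARIMA(1,1,1)", arima_fc), ("Holt", holt_fc)]:
    print(name, "MSE=%.2f MAPE=%.2f%% MAD=%.2f" % (mse(test, fc), mape(test, fc), mad(test, fc)))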
Fast Demand Forecast of Electric Vehicle Charging Stations for Cell Phone Application
DOE Office of Scientific and Technical Information (OSTI.GOV)
Majidpour, Mostafa; Qiu, Charlie; Chung, Ching-Yen
This paper describes the core cellphone application algorithm which has been implemented for the prediction of energy consumption at Electric Vehicle (EV) Charging Stations at UCLA. For this interactive user application, the total time of accessing the database, processing the data and making the prediction needs to be within a few seconds. We analyze four relatively fast Machine Learning based time series prediction algorithms for our prediction engine: Historical Average, k-Nearest Neighbor, Weighted k-Nearest Neighbor, and Lazy Learning. The Nearest Neighbor algorithm (k-Nearest Neighbor with k=1) shows better performance and is selected to be the prediction algorithm implemented for the cellphone application. Two applications have been designed on top of the prediction algorithm: one predicts the expected available energy at the station and the other one predicts the expected charging finishing time. The total time, including accessing the database, data processing, and prediction, is about one second for both applications.
A hybrid neurogenetic approach for stock forecasting.
Kwon, Yung-Keun; Moon, Byung-Ro
2007-05-01
In this paper, we propose a hybrid neurogenetic system for stock trading. A recurrent neural network (NN) having one hidden layer is used for the prediction model. The input features are generated from a number of technical indicators being used by financial experts. The genetic algorithm (GA) optimizes the NN's weights under a 2-D encoding and crossover. We devised a context-based ensemble method of NNs which dynamically changes on the basis of the test day's context. To reduce the time in processing mass data, we parallelized the GA on a Linux cluster system using message passing interface. We tested the proposed method with 36 companies in NYSE and NASDAQ for 13 years from 1992 to 2004. The neurogenetic hybrid showed notable improvement on the average over the buy-and-hold strategy and the context-based ensemble further improved the results. We also observed that some companies were more predictable than others, which implies that the proposed neurogenetic hybrid can be used for financial portfolio construction.
Mao, Qiang; Zhang, Kai; Yan, Wu; Cheng, Chaonan
2018-05-02
The aims of this study were to develop a forecasting model for the incidence of tuberculosis (TB) and analyze the seasonality of infections in China; and to provide a useful tool for formulating intervention programs and allocating medical resources. Data for the monthly incidence of TB from January 2004 to December 2015 were obtained from the National Scientific Data Sharing Platform for Population and Health (China). The Box-Jenkins method was applied to fit a seasonal auto-regressive integrated moving average (SARIMA) model to forecast the incidence of TB over the subsequent six months. During the study period of 144 months, 12,321,559 TB cases were reported in China, with an average monthly incidence of 6.4426 per 100,000 of the population. The monthly incidence of TB showed a clear 12-month cycle, and a seasonality with two peaks occurring in January and March and a trough in December. The best-fit model was SARIMA(1,0,0)(0,1,1)12, which demonstrated adequate information extraction (white noise test, p>0.05). Based on the analysis, the forecasted incidence of TB from January to June 2016 was 6.6335, 4.7208, 5.8193, 5.5474, 5.2202 and 4.9156 per 100,000 of the population, respectively. According to the seasonal pattern of TB incidence in China, the SARIMA model was proposed as a useful tool for monitoring epidemics. Copyright © 2018 The Authors. Published by Elsevier Ltd. All rights reserved.
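For readers who want to reproduce this kind of model structure, the sketch below fits a SARIMA(1,0,0)(0,1,1)12 model with statsmodels and produces a six-step-ahead forecast; the incidence series is synthetic, so only the model orders follow the abstract.

# Sketch: fitting a SARIMA(1,0,0)(0,1,1)12 model and forecasting six months ahead.
import numpy as np
import pandas as pd
from statsmodels.tsa.statespace.sarimax import SARIMAX

rng = np.random.default_rng(2)
months = pd.date_range("2004-01", periods=144, freq="MS")
seasonal = 1.5 * np.sin(2 * np.pi * np.arange(144) / 12)
incidence = pd.Series(6.4 + seasonal + rng.normal(0, 0.3, 144), index=months)  # synthetic

model = SARIMAX(incidence, order=(1, 0, 0), seasonal_order=(0, 1, 1, 12))
fit = model.fit(disp=False)
print(fit.forecast(steps=6))     # incidence per 100,000 for the next six months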
Operational Impact of Data Collected from the Global Hawk Unmanned Aircraft During SHOUT
NASA Astrophysics Data System (ADS)
Wick, G. A.; Dunion, J. P.; Sippel, J.; Cucurull, L.; Aksoy, A.; Kren, A.; Christophersen, H.; Black, P.
2017-12-01
The primary scientific goal of the Sensing Hazards with Operational Unmanned Technology (SHOUT) Project was to determine the potential utility of observations from high-altitude, long-endurance unmanned aircraft systems such as the Global Hawk (GH) aircraft to improve operational forecasts of high-impact weather events or mitigate potential degradation of forecasts in the event of a future gap in satellite coverage. Hurricanes and tropical cyclones are among the most potentially destructive high-impact weather events and pose a major forecasting challenge to NOAA. Major winter storms over the Pacific Ocean, including atmospheric river events, which make landfall and bring strong winds and extreme precipitation to the West Coast and Alaska are also important to forecast accurately because of their societal impact in those parts of the country. In response, the SHOUT project supported three field campaigns with the GH aircraft and dedicated data impact studies exploring the potential for the real-time data from the aircraft to improve the forecasting of both tropical cyclones and landfalling Pacific storms. Dropsonde observations from the GH aircraft were assimilated into the operational Hurricane Weather Research and Forecasting (HWRF) and Global Forecast System (GFS) models. The results from several diverse but complementary studies consistently demonstrated significant positive forecast benefits spanning the regional and global models. Forecast skill improvements within HWRF reached up to about 9% for track and 14% for intensity. Within GFS, track skill improvements for multi-storm averages exceeded 10% and improvements for individual storms reached over 20% depending on forecast lead time. Forecasted precipitation was also improved. Impacts for Pacific winter storms were smaller but still positive. The results are highly encouraging and support the potential for operational utilization of data from a platform like the GH. This presentation summarizes the observations collected and highlights the multiple impact studies completed.
Natural Gas Prices Forecast Comparison--AEO vs. Natural Gas Markets
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wong-Parodi, Gabrielle; Lekov, Alex; Dale, Larry
This paper evaluates the accuracy of two methods of forecasting natural gas prices: the Energy Information Administration's ''Annual Energy Outlook'' forecasted price (AEO) and the ''Henry Hub'' futures price, each compared to the U.S. Wellhead price. A statistical analysis is performed to determine the relative accuracy of the two measures in the recent past. The analysis suggests that the Henry Hub futures price provides a more accurate average forecast of natural gas prices than the AEO. For example, the Henry Hub futures price underestimated the natural gas price by 35 cents per thousand cubic feet (11.5 percent) between 1996 and 2003, while the AEO underestimated it by 71 cents per thousand cubic feet (23.4 percent). Upon closer inspection, a linear regression analysis reveals that two distinct time periods exist: 1996 to 1999 and 2000 to 2003. For 1996 to 1999, the AEO showed a weak negative correlation (R-square = 0.19) between forecasted price and actual U.S. Wellhead natural gas price, versus the Henry Hub, which showed a weak positive correlation (R-square = 0.20) between forecasted price and U.S. Wellhead natural gas price. For 2000 to 2003, the AEO showed a moderate positive correlation (R-square = 0.37) between forecasted natural gas price and U.S. Wellhead natural gas price, versus the Henry Hub, which showed a moderate positive correlation (R-square = 0.36) between forecasted price and U.S. Wellhead natural gas price. These results suggest that agencies forecasting natural gas prices should consider incorporating the Henry Hub natural gas futures price into their forecasting models along with the AEO forecast. Our analysis is very preliminary and is based on a very small data set. Naturally the results of the analysis may change as more data are made available.
Towards seasonal forecasting of malaria in India.
Lauderdale, Jonathan M; Caminade, Cyril; Heath, Andrew E; Jones, Anne E; MacLeod, David A; Gouda, Krushna C; Murty, Upadhyayula Suryanarayana; Goswami, Prashant; Mutheneni, Srinivasa R; Morse, Andrew P
2014-08-10
Malaria presents a public health challenge despite extensive intervention campaigns. A 30-year hindcast of the climatic suitability for malaria transmission in India is presented, using meteorological variables from a state-of-the-art seasonal forecast model to drive a process-based, dynamic disease model. The spatial distribution and seasonal cycles of temperature and precipitation from the forecast model are compared to three observationally-based meteorological datasets. These time series are then used to drive the disease model, producing a simulated forecast of malaria and three synthetic malaria time series that are qualitatively compared to contemporary and pre-intervention malaria estimates. The area under the Relative Operating Characteristic (ROC) curve is calculated as a quantitative metric of forecast skill, comparing the forecast to the meteorologically-driven synthetic malaria time series. The forecast shows probabilistic skill in predicting the spatial distribution of Plasmodium falciparum incidence when compared to the simulated meteorologically-driven malaria time series, particularly where modelled incidence shows high seasonal and interannual variability, such as in Orissa, West Bengal, and Jharkhand (North-east India), and Gujarat, Rajasthan, Madhya Pradesh and Maharashtra (North-west India). Focusing on these two regions, the malaria forecast is able to distinguish between years of "high", "above average" and "low" malaria incidence in the peak malaria transmission seasons, with more than 70% sensitivity and a statistically significant area under the ROC curve. These results are encouraging given that the three-month forecast lead time used is well in excess of the target for early warning systems adopted by the World Health Organization. This approach could form the basis of an operational system to identify the probability of regional malaria epidemics, allowing advanced and targeted allocation of resources for combatting malaria in India.
Anticipating Cycle 24 Minimum and its Consequences: An Update
NASA Technical Reports Server (NTRS)
Wilson, Robert M.; Hathaway, David H.
2008-01-01
This Technical Publication updates estimates for cycle 24 minimum and discusses consequences associated with cycle 23 being a longer than average period cycle and cycle 24 having parametric minimum values smaller (or larger for the case of spotless days) than long term medians. Through December 2007, cycle 23 has persisted 140 mo from its 12-mo moving average (12-mma) minimum monthly mean sunspot number occurrence date (May 1996). Longer than average period cycles of the modern era (since cycle 12) have minimum-to-minimum periods of about 139.0+/-6.3 mo (the 90-percent prediction interval), implying that cycle 24's minimum monthly mean sunspot number should be expected before July 2008. The major consequence of this is that, unless cycle 24 is a statistical outlier (like cycle 21), its maximum amplitude (RM) likely will be smaller than previously forecast. If, however, in the course of its rise cycle 24's 12-mma of the weighted mean latitude (L) of spot groups exceeds 24 deg, then one expects RM >131, and if its 12-mma of highest latitude (H) spot groups exceeds 38 deg, then one expects RM >127. High-latitude new cycle spot groups, while first reported in January 2008, have not, as yet, become the dominant form of spot groups. Minimum values in L and H were observed in mid 2007 and values are now slowly increasing, a precondition for the imminent onset of the new sunspot cycle.
Penile measurements in Tanzanian males: guiding circumcision device design and supply forecasting.
Chrouser, Kristin; Bazant, Eva; Jin, Linda; Kileo, Baldwin; Plotkin, Marya; Adamu, Tigistu; Curran, Kelly; Koshuma, Sifuni
2013-08-01
Voluntary medical male circumcision decreases the risk in males of HIV infection through heterosexual intercourse by about 60% in clinical trials and 73% at post-trial followup. In 2007 WHO and the Joint United Nations Programme on HIV/AIDS (UNAIDS) recommended that countries with a low circumcision rate and high HIV prevalence expand voluntary medical male circumcision programs as part of a national HIV prevention strategy. Devices for adult/adolescent male circumcision could accelerate the pace of scaling up voluntary medical male circumcision. Detailed penile measurements of African males are required for device development and supply size forecasting. Consenting males undergoing voluntary medical male circumcision at 3 health facilities in the Iringa region, Tanzania, underwent measurement of the penile glans, shaft and foreskin. Age, Tanner stage, height and weight were recorded. Measurements were analyzed by age categories. Correlations of penile parameters with height, weight and body mass index were calculated. In 253 Tanzanian males 10 to 47 years old mean ± SD penile length in adults was 11.5 ± 1.6 cm, mean shaft circumference was 8.7 ± 0.9 cm and mean glans circumference was 8.8 ± 0.9 cm. As expected, given the variability of puberty, measurements in younger males varied significantly. Glans circumference highly correlated with height (r = 0.80) and weight (r = 0.81, each p <0.001). Stretched foreskin diameter moderately correlated with height (r = 0.68) and weight (r = 0.71, each p <0.001). Our descriptive study provides penile measurements of males who sought voluntary medical male circumcision services in Iringa, Tanzania. To our knowledge this is the first study in a sub-Saharan African population that provides sufficiently detailed glans and foreskin dimensions to inform voluntary medical male circumcision device development and size forecasting. Copyright © 2013 American Urological Association Education and Research, Inc. Published by Elsevier Inc. All rights reserved.
Evaluation of the NCEP CFSv2 45-day Forecasts for Predictability of Intraseasonal Tropical Storm Activities
NASA Astrophysics Data System (ADS)
Schemm, J. E.; Long, L.; Baxter, S.
2013-12-01
Predictability of intraseasonal tropical storm (TS) activities is assessed using the 1999-2010 CFSv2 hindcast suite. Weekly TS activities in the CFSv2 45-day forecasts were determined using the TS detection and tracking method devised by Camargo and Zebiak (2002). The forecast periods are divided into weekly intervals for Week 1 through Week 6, and also the 30-day mean. The TS activities in those intervals are compared to the observed activities based on the NHC HURDAT and JTWC Best Track datasets. The CFSv2 45-day hindcast suite is made up of forecast runs initialized at 00, 06, 12 and 18Z every day during the 1999 - 2010 period. For predictability evaluation, forecast TS activities are analyzed based on 20-member ensemble forecasts comprised of 45-day runs made during the most recent 5 days prior to the verification period. The forecast TS activities are evaluated in terms of the number of storms, genesis locations and storm tracks during the weekly periods. The CFSv2 forecasts are shown to have a fair level of skill in predicting the number of storms over the Atlantic Basin, with temporal correlation scores ranging from 0.73 for Week 1 forecasts to 0.63 for Week 6, and average RMS errors ranging from 0.86 to 1.07 during the 1999-2010 hurricane seasons. Also, the forecast track density distribution and false alarm statistics are compiled using the hindcast analyses. In real-time applications of the intraseasonal TS activity forecasts, the climatological TS forecast statistics will be used to make the model bias corrections in terms of the storm counts, track distribution and removal of false alarms. An operational implementation of the weekly TS activity prediction is planned for early 2014 to provide an objective input for the CPC's Global Tropical Hazards Outlooks.
Assessing skill of a global bimonthly streamflow ensemble prediction system
NASA Astrophysics Data System (ADS)
van Dijk, A. I.; Peña-Arancibia, J.; Sheffield, J.; Wood, E. F.
2011-12-01
Ideally, a seasonal streamflow forecasting system might be conceived of as a system that ingests skillful climate forecasts from general circulation models and propagates these through thoroughly calibrated hydrological models that are initialised using hydrometric observations. In practice, there are practical problems with each of these aspects. Instead, we analysed whether a comparatively simple hydrological model-based Ensemble Prediction System (EPS) can provide global bimonthly streamflow forecasts with some skill and, if so, under what circumstances the greatest skill may be expected. The system tested produces ensemble forecasts for each of six annual bimonthly periods based on the previous 30 years of global daily gridded 1° resolution climate variables and an initialised global hydrological model. To incorporate some of the skill derived from ocean conditions, a post-EPS analog method was used to sample from the ensemble based on El Niño Southern Oscillation (ENSO), Indian Ocean Dipole (IOD), North Atlantic Oscillation (NAO) and Pacific Decadal Oscillation (PDO) index values observed prior to the forecast. Forecast skill was assessed through a hindcasting experiment for the period 1979-2008. Potential skill was calculated with reference to a model run with the actual forcing for the forecast period (the 'perfect' model) and was compared to actual forecast skill calculated for each of the six forecast times for an average of 411 Australian and 51 pan-tropical catchments. Significant potential skill in bimonthly forecasts was largely limited to northern regions during the snow melt period, seasonally wet tropical regions at the transition of wet to dry season, and the Indonesian region where rainfall is well correlated to ENSO. The actual skill was approximately 34-50% of the potential skill. We attribute this primarily to limitations in the model structure, parameterisation and global forcing data. Use of better climate forecasts and remote sensing observations of initial catchment conditions should help to increase actual skill in future. Future work also could address the potential skill gain from using weather and climate forecasts and from a calibrated and/or alternative hydrological model or model ensemble. The approach and data might be useful as a benchmark for joint seasonal forecasting experiments planned under GEWEX.
NASA Astrophysics Data System (ADS)
Abernethy, Jennifer A.
Pilots' ability to avoid clear-air turbulence (CAT) during flight affects the safety of the millions of people who fly commercial airlines and other aircraft, and turbulence costs millions in injuries and aircraft maintenance every year. Forecasting CAT is not straightforward, however; microscale features like the turbulence eddies that affect aircraft (100m) are below the current resolution of operational numerical weather prediction (NWP) models, and the only evidence of CAT episodes, until recently, has been sparse, subjective reports from pilots known as PIREPs. To forecast CAT, researchers use a simple weighted sum of top-performing turbulence indicators derived from NWP model outputs---termed diagnostics---based on their agreement with current PIREPs. However, a new, quantitative source of observation data---high-density measurements made by sensor equipment and software on aircraft, called in-situ measurements---is now available. The main goal of this thesis is to develop new data analysis and processing techniques to apply to the model and new observation data, in order to improve CAT forecasting accuracy. This thesis shows that using in-situ data improves forecasting accuracy and that automated machine learning algorithms such as support vector machines (SVM), logistic regression, and random forests, can match current performance while eliminating almost all hand-tuning. Feature subset selection is paired with the new algorithms to choose diagnostics that predict well as a group rather than individually. Specializing forecasts and choice of diagnostics by geographic region further improves accuracy because of the geographic variation in turbulence sources. This work uses random forests to find climatologically-relevant regions based on these variations and implements a forecasting system testbed which brings these techniques together to rapidly prototype new, regionalized versions of operational CAT forecasting systems.
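The machine-learning step described above can be sketched roughly as follows, assuming scikit-learn, a set of hypothetical turbulence diagnostics as features, and binary turbulence labels; the recursive feature elimination used here is just one possible feature subset selection strategy and is not necessarily the one used in the thesis.

# Sketch: selecting a subset of NWP-derived turbulence diagnostics and classifying
# turbulence occurrence with a random forest.  Feature names and data are illustrative.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import RFECV
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)
n = 2000
X = rng.normal(size=(n, 8))        # 8 hypothetical diagnostics (e.g. Richardson number,
                                   # deformation, vertical wind shear, ...)
y = (0.8 * X[:, 0] - 0.6 * X[:, 3] + rng.normal(0, 1, n) > 1.0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
selector = RFECV(RandomForestClassifier(n_estimators=200, random_state=0), cv=3)
selector.fit(X_tr, y_tr)
print("diagnostics kept:", np.flatnonzero(selector.support_))
print("hold-out accuracy:", selector.score(X_te, y_te))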
NASA Astrophysics Data System (ADS)
Engeland, Kolbjorn; Steinsland, Ingelin
2014-05-01
This study introduces a methodology for the construction of probabilistic inflow forecasts for multiple catchments and lead times, and investigates criteria for the evaluation of multivariate forecasts. A post-processing approach is used, and a Gaussian model is applied for transformed variables. The post-processing model has two main components, the mean model and the dependency model. The mean model is used to estimate the marginal distributions of forecasted inflow for each catchment and lead time, whereas the dependency model was used to estimate the full multivariate distribution of forecasts, i.e., covariances between catchments and lead times. In operational situations, it is a straightforward task to use the models to sample inflow ensembles which inherit the dependencies between catchments and lead times. The methodology was tested and demonstrated in the river systems linked to the Ulla-Førre hydropower complex in southern Norway, where simultaneous probabilistic forecasts for five catchments and ten lead times were constructed. The methodology exhibits sufficient flexibility to utilize deterministic flow forecasts from a numerical hydrological model as well as statistical forecasts such as persistent forecasts and sliding-window climatology forecasts. It also deals with variation in the relative weights of these forecasts with both catchment and lead time. When evaluating predictive performance in original space using cross validation, the case study found that it is important to include the persistent forecast for the initial lead times and the hydrological forecast for medium-term lead times. Sliding-window climatology forecasts become more important for the latest lead times. Furthermore, operationally important features in this case study, such as heteroscedasticity, lead-time-varying between-lead-time dependency and lead-time-varying between-catchment dependency, are captured. Two criteria were used for evaluating the added value of the dependency model. The first was the Energy score (ES), which is a multi-dimensional generalization of the continuous ranked probability score (CRPS). ES was calculated for all lead times and catchments together, for each catchment across all lead times and for each lead time across all catchments. The second criterion was to use the CRPS for forecasted inflows accumulated over several lead times and catchments. The results showed that ES was not very sensitive to a correct covariance structure, whereas the CRPS for accumulated flows was more suitable for evaluating the dependency model. This indicates that it is more appropriate to evaluate relevant univariate variables that depend on the dependency structure than to evaluate the multivariate forecast directly.
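The two evaluation criteria can be written down compactly. The sketch below implements the standard sample-based estimators of the energy score and the ensemble CRPS (the latter applied to inflow accumulated over catchments and lead times); the ensemble sizes and data are illustrative, not from the study.

# Sketch of the two criteria: sample-based energy score for a multivariate ensemble,
# and ensemble CRPS for a scalar such as accumulated inflow.  Data are illustrative.
import numpy as np

def energy_score(ens, obs):
    """ens: (m, d) ensemble of d-dimensional forecasts; obs: (d,) observation."""
    m = ens.shape[0]
    term1 = np.mean(np.linalg.norm(ens - obs, axis=1))
    diffs = ens[:, None, :] - ens[None, :, :]
    term2 = np.sum(np.linalg.norm(diffs, axis=2)) / (2.0 * m * m)
    return term1 - term2

def ensemble_crps(ens, obs):
    """One-dimensional special case of the energy score."""
    ens = np.asarray(ens, dtype=float)
    return np.mean(np.abs(ens - obs)) - 0.5 * np.mean(np.abs(ens[:, None] - ens[None, :]))

rng = np.random.default_rng(4)
ens = rng.normal(100, 15, size=(50, 10))   # 50 members; 5 catchments x 2 lead times, flattened
obs = rng.normal(100, 15, size=10)
print("ES:", energy_score(ens, obs))
print("CRPS of accumulated inflow:", ensemble_crps(ens.sum(axis=1), obs.sum()))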
Evaluation of Flood Forecast and Warning in Elbe river basin - Impact of Forecaster's Strategy
NASA Astrophysics Data System (ADS)
Danhelka, Jan; Vlasak, Tomas
2010-05-01
The Czech Hydrometeorological Institute (CHMI) is responsible for flood forecasting and warning in the Czech Republic. To this end, CHMI operates hydrological forecasting systems and publishes flow forecasts for selected profiles. Flood forecasts and warnings are the output of a system that links observations (flow and atmosphere), data processing, weather forecasts (especially NWP QPF), hydrological modeling, and the evaluation and interpretation of model outputs by a forecaster. Forecast users are interested in the final output without separating the uncertainties of the individual steps of the described process. Therefore, an evaluation of the final operational forecasts produced by the AquaLog forecasting system was done for profiles within the Elbe river basin for the period 2002 to 2008. The effects of uncertainties in observations, data processing and especially meteorological forecasts were not accounted for separately. Forecasting the exceedance of flood levels (peak over threshold) during the forecasting period was the main criterion, as forecasting flow increases is of the highest importance. Other evaluation criteria included peak flow and volume differences. In addition, the Nash-Sutcliffe efficiency was computed separately for each time step (1 to 48 h) of the forecasting period to identify its change with lead time. Textual flood warnings are issued for administrative regions to initiate flood protection actions when there is a danger of flooding. The flood warning hit rate was evaluated at the regional and national levels. The evaluation found significant differences in model forecast skill between forecasting profiles; in particular, less skill was found at small headwater basins due to the domination of QPF uncertainty in these basins. The average hit rate was 0.34 (miss rate = 0.33, false alarm rate = 0.32). However, the observed spatial differences are likely also influenced by the differing fit of parameter sets (due to different basin characteristics) and, importantly, by the differing impact of the human factor. The results suggest that the practice of interactive model operation, experience and forecasting strategy differ between the responsible forecasting offices. Warnings are based on the interpretation of model outputs by hydrologist-forecasters. The warning hit rate reached 0.60 for the threshold set to the lowest flood stage, of which 0.11 was underestimation of the flood degree (miss 0.22, false alarm 0.28). The critical success index of the model forecast was 0.34, while the same criterion for warnings reached 0.55. We assume that the increase is due not only to the change of scale from a single forecasting point to a region for the warning, but partly also to the forecaster's added value. There is no officially preferred warning strategy in the Czech Republic (e.g., tolerance towards a higher false alarm rate), so the forecaster's decisions and personal strategy are of great importance. The results show quite successful warning for 1st flood level exceedance, over-warning for the 2nd flood level, but under-warning for the 3rd (highest) flood level. This suggests a general forecaster preference for medium-level warnings (the 2nd flood level is legally determined to be the start of the flood and of flood protection activities). In conclusion, the human forecaster's experience and analysis skill increase flood warning performance notably. However, society's preferences should be specifically addressed in the warning strategy definition to support the forecaster's decision making.
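A small sketch of the categorical verification scores mentioned above (hit rate, a false alarm measure and the critical success index) computed from threshold-exceedance flags; note that the exact conventions used in the CHMI evaluation may differ from the textbook definitions assumed here.

# Sketch of contingency-table verification scores from forecast/observed exceedance flags.
import numpy as np

def contingency_scores(forecast_exceed, observed_exceed):
    f = np.asarray(forecast_exceed, dtype=bool)
    o = np.asarray(observed_exceed, dtype=bool)
    hits = np.sum(f & o)
    misses = np.sum(~f & o)
    false_alarms = np.sum(f & ~o)
    return {
        "hit_rate": hits / (hits + misses),
        "false_alarm_ratio": false_alarms / (hits + false_alarms),
        "csi": hits / (hits + misses + false_alarms),   # critical success index
    }

# Illustrative flags for 20 forecast events.
f = [1, 1, 0, 1, 0, 0, 1, 1, 0, 1, 0, 1, 1, 0, 0, 1, 0, 1, 0, 1]
o = [1, 0, 0, 1, 0, 1, 1, 1, 0, 0, 0, 1, 1, 0, 1, 1, 0, 0, 0, 1]
print(contingency_scores(f, o))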
Evaluation of weather forecast systems for storm surge modeling in the Chesapeake Bay
NASA Astrophysics Data System (ADS)
Garzon, Juan L.; Ferreira, Celso M.; Padilla-Hernandez, Roberto
2018-01-01
Accurate forecast of sea-level heights in coastal areas depends, among other factors, upon a reliable coupling of a meteorological forecast system to a hydrodynamic and wave system. This study evaluates the predictive skills of the coupled circulation and wind-wave model system (ADCIRC+SWAN) for simulating storm tides in the Chesapeake Bay, forced by six different products: (1) Global Forecast System (GFS), (2) Climate Forecast System (CFS) version 2, (3) North American Mesoscale Forecast System (NAM), (4) Rapid Refresh (RAP), (5) European Center for Medium-Range Weather Forecasts (ECMWF), and (6) the Atlantic hurricane database (HURDAT2). This evaluation is based on the hindcasting of four events: Irene (2011), Sandy (2012), Joaquin (2015), and Jonas (2016). By comparing the simulated water levels to observations at 13 monitoring stations, we have found that, for the ADCIRC+SWAN system forced by these products: (1) the HURDAT2-based system exhibited the weakest statistical skills owing to a noteworthy overprediction of the simulated wind speed; (2) the ECMWF, RAP, and NAM products captured the moment of the peak and moderately its magnitude during all storms, with a correlation coefficient ranging between 0.98 and 0.77; (3) the CFS system exhibited the worst averaged root-mean-square difference (excepting HURDAT2); (4) the GFS system (the lowest horizontal resolution product tested) resulted in a clear underprediction of the maximum water elevation. Overall, the simulations forced by the NAM and ECMWF systems produced the most accurate results to support water level forecasting in the Chesapeake Bay during both tropical and extra-tropical storms.
NASA Astrophysics Data System (ADS)
de Weger, Letty A.; Beerthuizen, Thijs; Hiemstra, Pieter S.; Sont, Jacob K.
2014-08-01
One-third of the Dutch population suffers from allergic rhinitis, including hay fever. In this study, a 5-day-ahead hay fever forecast was developed and validated for grass pollen allergic patients in the Netherlands. Using multiple regression analysis, a two-step pollen and hay fever symptom prediction model was developed using actual and forecasted weather parameters, grass pollen data and patient symptom diaries. Therefore, 80 patients with a grass pollen allergy rated the severity of their hay fever symptoms during the grass pollen season in 2007 and 2008. First, a grass pollen forecast model was developed using the following predictors: (1) daily means of grass pollen counts of the previous 10 years; (2) grass pollen counts of the previous 2-week period of the current year; and (3) maximum, minimum and mean temperature (R² = 0.76). The second modeling step concerned the forecasting of hay fever symptom severity and included the following predictors: (1) forecasted grass pollen counts; (2) day number of the year; (3) moving average of the grass pollen counts of the previous 2-week periods; and (4) maximum and mean temperatures (R² = 0.81). Since the daily hay fever forecast is reported in three categories (low-, medium- and high-symptom risk), we assessed the agreement between the observed and the 1- to 5-day-ahead predicted risk categories by kappa, which ranged from 65 % to 77 %. These results indicate that a model based on forecasted temperature and grass pollen counts performs well in predicting symptoms of hay fever up to 5 days ahead.
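The two-step regression idea can be sketched as follows with scikit-learn; the predictor names follow the abstract, but the synthetic data, coefficients and single-season setup are assumptions for illustration only.

# Sketch of the two-step regression: (1) predict grass pollen counts from climatology,
# recent counts and temperature; (2) predict symptom severity from the forecasted pollen
# plus calendar and temperature terms.  All data below are synthetic.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(5)
n = 120                                                   # days in a toy pollen season
clim_pollen = 50 + 30 * np.sin(np.linspace(0, np.pi, n))  # 10-year daily mean counts
recent_pollen = clim_pollen + rng.normal(0, 10, n)        # counts of the previous 2 weeks
t_max = 20 + rng.normal(0, 3, n)                          # (forecasted) maximum temperature
pollen_obs = 0.6 * clim_pollen + 0.3 * recent_pollen + 1.5 * t_max + rng.normal(0, 8, n)

X1 = np.column_stack([clim_pollen, recent_pollen, t_max])
step1 = LinearRegression().fit(X1, pollen_obs)            # step 1: pollen forecast model
pollen_fc = step1.predict(X1)

day_number = np.arange(n)
symptoms = 0.02 * pollen_obs + 0.01 * t_max + rng.normal(0, 0.3, n)
X2 = np.column_stack([pollen_fc, day_number, t_max])
step2 = LinearRegression().fit(X2, symptoms)              # step 2: symptom severity model
print("R^2 step 1: %.2f, R^2 step 2: %.2f" % (step1.score(X1, pollen_obs), step2.score(X2, symptoms)))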
Use of temperature to improve West Nile virus forecasts
Schneider, Zachary D.; Caillouet, Kevin A.; Campbell, Scott R.; Damian, Dan; Irwin, Patrick; Jones, Herff M. P.; Townsend, John
2018-01-01
Ecological and laboratory studies have demonstrated that temperature modulates West Nile virus (WNV) transmission dynamics and spillover infection to humans. Here we explore whether inclusion of temperature forcing in a model depicting WNV transmission improves WNV forecast accuracy relative to a baseline model depicting WNV transmission without temperature forcing. Both models are optimized using a data assimilation method and two observed data streams: mosquito infection rates and reported human WNV cases. Each coupled model-inference framework is then used to generate retrospective ensemble forecasts of WNV for 110 outbreak years from among 12 geographically diverse United States counties. The temperature-forced model improves forecast accuracy for much of the outbreak season. From the end of July until the beginning of October, a timespan during which 70% of human cases are reported, the temperature-forced model generated forecasts of the total number of human cases over the next 3 weeks, total number of human cases over the season, the week with the highest percentage of infectious mosquitoes, and the peak percentage of infectious mosquitoes that on average increased absolute forecast accuracy 5%, 10%, 12%, and 6%, respectively, over the non-temperature forced baseline model. These results indicate that use of temperature forcing improves WNV forecast accuracy and provide further evidence that temperature influences rates of WNV transmission. The findings provide a foundation for implementation of a statistically rigorous system for real-time forecast of seasonal WNV outbreaks and their use as a quantitative decision support tool for public health officials and mosquito control programs. PMID:29522514
NASA Astrophysics Data System (ADS)
van Dijk, Albert I. J. M.; Peña-Arancibia, Jorge L.; Wood, Eric F.; Sheffield, Justin; Beck, Hylke E.
2013-05-01
Ideally, a seasonal streamflow forecasting system would ingest skilful climate forecasts and propagate these through calibrated hydrological models initialized with observed catchment conditions. At global scale, practical problems exist in each of these aspects. For the first time, we analyzed theoretical and actual skill in bimonthly streamflow forecasts from a global ensemble streamflow prediction (ESP) system. Forecasts were generated six times per year for 1979-2008 by an initialized hydrological model and an ensemble of 1° resolution daily climate estimates for the preceding 30 years. A post-ESP conditional sampling method was applied to 2.6% of forecasts, based on predictive relationships between precipitation and 1 of 21 climate indices prior to the forecast date. Theoretical skill was assessed against a reference run with historic forcing. Actual skill was assessed against streamflow records for 6192 small (<10,000 km2) catchments worldwide. The results show that initial catchment conditions provide the main source of skill. Post-ESP sampling enhanced skill in equatorial South America and Southeast Asia, particularly in terms of tercile probability skill, due to the persistence and influence of the El Niño Southern Oscillation. Actual skill was on average 54% of theoretical skill but considerably more for selected regions and times of year. The realized fraction of the theoretical skill probably depended primarily on the quality of precipitation estimates. Forecast skill could be predicted as the product of theoretical skill and historic model performance. Increases in seasonal forecast skill are likely to require improvement in the observation of precipitation and initial hydrological conditions.
Automated time series forecasting for biosurveillance.
Burkom, Howard S; Murphy, Sean Patrick; Shmueli, Galit
2007-09-30
For robust detection performance, traditional control chart monitoring for biosurveillance is based on input data free of trends, day-of-week effects, and other systematic behaviour. Time series forecasting methods may be used to remove this behaviour by subtracting forecasts from observations to form residuals for algorithmic input. We describe three forecast methods and compare their predictive accuracy on each of 16 authentic syndromic data streams. The methods are (1) a non-adaptive regression model using a long historical baseline, (2) an adaptive regression model with a shorter, sliding baseline, and (3) the Holt-Winters method for generalized exponential smoothing. Criteria for comparing the forecasts were the root-mean-square error, the median absolute per cent error (MedAPE), and the median absolute deviation. The median-based criteria showed best overall performance for the Holt-Winters method. The MedAPE measures over the 16 test series averaged 16.5, 11.6, and 9.7 for the non-adaptive regression, adaptive regression, and Holt-Winters methods, respectively. The non-adaptive regression forecasts were degraded by changes in the data behaviour in the fixed baseline period used to compute model coefficients. The mean-based criterion was less conclusive because of the effects of poor forecasts on a small number of calendar holidays. The Holt-Winters method was also most effective at removing serial autocorrelation, with most 1-day-lag autocorrelation coefficients below 0.15. The forecast methods were compared without tuning them to the behaviour of individual series. We achieved improved predictions with such tuning of the Holt-Winters method, but practical use of such improvements for routine surveillance will require reliable data classification methods.
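As a rough illustration of the residual-forming step (not the authors' implementation), the sketch below fits a Holt-Winters model with a weekly seasonal component to a synthetic syndromic series, subtracts its forecasts from the observations, and reports the median absolute percent error.

# Sketch: Holt-Winters forecasts as a detrending step for biosurveillance input.
import numpy as np
import pandas as pd
from statsmodels.tsa.holtwinters import ExponentialSmoothing

rng = np.random.default_rng(6)
days = pd.date_range("2006-01-01", periods=365, freq="D")
dow = 5 * (days.dayofweek < 5)                         # simple weekday effect
counts = pd.Series(30 + dow + rng.poisson(4, 365), index=days).astype(float)

train, test = counts[:300], counts[300:]
fit = ExponentialSmoothing(train, trend="add", seasonal="add", seasonal_periods=7).fit()
forecast = fit.forecast(len(test))
residuals = test.values - forecast.values              # algorithmic input for control charts
medape = np.median(np.abs(residuals / test.values)) * 100
print("MedAPE: %.1f%%" % medape)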
NASA Astrophysics Data System (ADS)
Dreano, Denis; Tsiaras, Kostas; Triantafyllou, George; Hoteit, Ibrahim
2017-07-01
Forecasting the state of large marine ecosystems is important for many economic and public health applications. However, advanced three-dimensional (3D) ecosystem models, such as the European Regional Seas Ecosystem Model (ERSEM), are computationally expensive, especially when implemented within an ensemble data assimilation system requiring several parallel integrations. As an alternative to 3D ecological forecasting systems, we propose to implement a set of regional one-dimensional (1D) water-column ecological models that run at a fraction of the computational cost. The 1D model domains are determined using a Gaussian mixture model (GMM)-based clustering method and satellite chlorophyll-a (Chl-a) data. Regionally averaged Chl-a data is assimilated into the 1D models using the singular evolutive interpolated Kalman (SEIK) filter. To laterally exchange information between subregions and improve the forecasting skills, we introduce a new correction step to the assimilation scheme, in which we assimilate a statistical forecast of future Chl-a observations based on information from neighbouring regions. We apply this approach to the Red Sea and show that the assimilative 1D ecological models can forecast surface Chl-a concentration with high accuracy. The statistical assimilation step further improves the forecasting skill by as much as 50%. This general approach of clustering large marine areas and running several interacting 1D ecological models is very flexible. It allows many combinations of clustering, filtering and regression techniques to be used and can be applied to build efficient forecasting systems in other large marine ecosystems.
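A minimal sketch of the clustering step, assuming scikit-learn and synthetic seasonal chlorophyll-a profiles; the number of mixture components and the feature construction are illustrative choices, not those of the study.

# Sketch: group grid cells into subregions by fitting a Gaussian mixture model to
# their seasonal chlorophyll-a profiles (one 12-month profile per cell).
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(7)
n_cells, n_months = 500, 12
base_profiles = np.array([np.sin(2 * np.pi * (np.arange(n_months) + s) / 12) for s in (0, 3, 6)])
labels_true = rng.integers(0, 3, n_cells)
chl = 0.5 + 0.2 * base_profiles[labels_true] + rng.normal(0, 0.05, (n_cells, n_months))

gmm = GaussianMixture(n_components=3, covariance_type="full", random_state=0).fit(chl)
regions = gmm.predict(chl)                 # subregion index of each grid cell
print(np.bincount(regions))                # cells per 1D model domain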
A hybrid least squares support vector machines and GMDH approach for river flow forecasting
NASA Astrophysics Data System (ADS)
Samsudin, R.; Saad, P.; Shabri, A.
2010-06-01
This paper proposes a novel hybrid forecasting model, which combines the group method of data handling (GMDH) and the least squares support vector machine (LSSVM), known as GLSSVM. The GMDH is used to determine the useful input variables for the LSSVM model, and the LSSVM model performs the time series forecasting. In this study the application of GLSSVM for monthly river flow forecasting of the Selangor and Bernam Rivers is investigated. The results of the proposed GLSSVM approach are compared with conventional artificial neural network (ANN) models, the Autoregressive Integrated Moving Average (ARIMA) model, and the GMDH and LSSVM models using long-term observations of monthly river flow discharge. The standard statistical measures, the root mean square error (RMSE) and the coefficient of correlation (R), are employed to evaluate the performance of the various models developed. Experimental results indicate that the hybrid model is a powerful tool for modeling discharge time series and can be applied successfully in complex hydrological modeling.
Impact of data assimilation on ocean current forecasts in the Angola Basin
NASA Astrophysics Data System (ADS)
Phillipson, Luke; Toumi, Ralf
2017-06-01
The ocean current predictability in the data limited Angola Basin was investigated using the Regional Ocean Modelling System (ROMS) with four-dimensional variational data assimilation. Six experiments were undertaken comprising a baseline case of the assimilation of salinity/temperature profiles and satellite sea surface temperature, with the subsequent addition of altimetry, OSCAR (satellite-derived sea surface currents), drifters, altimetry and drifters combined, and OSCAR and drifters combined. The addition of drifters significantly improves Lagrangian predictability in comparison to the baseline case as well as the addition of either altimetry or OSCAR. OSCAR assimilation only improves Lagrangian predictability as much as altimetry assimilation. On average the assimilation of either altimetry or OSCAR with drifter velocities does not significantly improve Lagrangian predictability compared to the drifter assimilation alone, even degrading predictability in some cases. When the forecast current speed is large, it is more likely that the combination improves trajectory forecasts. Conversely, when the currents are weaker, it is more likely that the combination degrades the trajectory forecast.
Short-term forecasting of emergency inpatient flow.
Abraham, Gad; Byrnes, Graham B; Bain, Christopher A
2009-05-01
Hospital managers have to manage resources effectively, while maintaining a high quality of care. For hospitals where admissions from the emergency department to the wards represent a large proportion of admissions, the ability to forecast these admissions and the resultant ward occupancy is especially useful for resource planning purposes. Since emergency admissions often compete with planned elective admissions, modeling emergency demand may result in improved elective planning as well. We compare several models for forecasting daily emergency inpatient admissions and occupancy. The models are applied to three years of daily data. By measuring their mean square error in a cross-validation framework, we find that emergency admissions are largely random, and hence, unpredictable, whereas emergency occupancy can be forecasted using a model combining regression and autoregressive integrated moving average (ARIMA) model, or a seasonal ARIMA model, for up to one week ahead. Faced with variable admissions and occupancy, hospitals must prepare a reserve capacity of beds and staff. Our approach allows estimation of the required reserve capacity.
Application of recurrent neural networks for drought projections in California
NASA Astrophysics Data System (ADS)
Le, J. A.; El-Askary, H. M.; Allali, M.; Struppa, D. C.
2017-05-01
We use recurrent neural networks (RNNs) to investigate the complex interactions between the long-term trend in dryness and a projected, short but intense, period of wetness due to the 2015-2016 El Niño. Although it was forecasted that this El Niño season would bring significant rainfall to the region, our long-term projections of the Palmer Z Index (PZI) showed a continuing drought trend, contrasting with the 1998-1999 El Niño event. RNN training considered PZI data during 1896-2006 that was validated against the 2006-2015 period to evaluate the potential of extreme precipitation forecast. We achieved a statistically significant correlation of 0.610 between forecasted and observed PZI on the validation set for a lead time of 1 month. This gives strong confidence to the forecasted precipitation indicator. The 2015-2016 El Niño season proved to be relatively weak as compared with the 1997-1998, with a peak PZI anomaly of 0.242 standard deviations below historical averages, continuing drought conditions.
NASA Technical Reports Server (NTRS)
Shafer, B. A.; Leaf, C. F.; Danielson, J. A.; Moravec, G. F.
1981-01-01
The study was conducted on six watersheds ranging in size from 277 km² to 3460 km² in the Rio Grande and Arkansas River basins of southwestern Colorado. Six years of satellite data in the period 1973-78 were analyzed and snowcover maps prepared for all available image dates. Seven snowmapping techniques were explored; the photointerpretative method was selected as the most accurate. Three schemes to forecast snowmelt runoff employing satellite snowcover observations were investigated. They included a conceptual hydrologic model, a statistical model, and a graphical method. A reduction of 10% in the current average forecast error is estimated when snowcover data are included, and the use of snowcover data in snowmelt runoff forecasting is shown to be extremely promising. Inability to obtain repetitive coverage due to the 18-day cycle of LANDSAT, the occurrence of cloud cover and slow image delivery are obstacles to the immediate implementation of satellite-derived snowcover in operational streamflow forecasting programs.
Gomez-Elipe, Alberto; Otero, Angel; van Herp, Michel; Aguirre-Jaime, Armando
2007-01-01
Background The objective of this work was to develop a model to predict malaria incidence in an area of unstable transmission by studying the association between environmental variables and disease dynamics. Methods The study was carried out in Karuzi, a province in the Burundi highlands, using time series of monthly notifications of malaria cases from local health facilities, data from rain and temperature records, and the normalized difference vegetation index (NDVI). Using autoregressive integrated moving average (ARIMA) methodology, a model showing the relation between monthly notifications of malaria cases and the environmental variables was developed. Results The best forecasting model (R2adj = 82%, p < 0.0001 and 93% forecasting accuracy in the range ± 4 cases per 100 inhabitants) included the NDVI, mean maximum temperature, rainfall and number of malaria cases in the preceding month. Conclusion This model is a simple and useful tool for producing reasonably reliable forecasts of the malaria incidence rate in the study area. PMID:17892540
Forecasting dengue hemorrhagic fever cases using ARIMA model: a case study in Asahan district
NASA Astrophysics Data System (ADS)
Siregar, Fazidah A.; Makmur, Tri; Saprin, S.
2018-01-01
Time series analysis has been increasingly used to forecast the number of dengue hemorrhagic fever cases in many studies. Since no vaccine exists and public health infrastructure is poor, predicting the occurrence of dengue hemorrhagic fever (DHF) is crucial. This study was conducted to determine the trend and forecast the occurrence of DHF in Asahan district, North Sumatera Province. Monthly reported dengue cases for the years 2012-2016 were obtained from the district health offices. A time series analysis was conducted using Autoregressive Integrated Moving Average (ARIMA) modeling to forecast the occurrence of DHF. The results demonstrated that the reported DHF cases showed a seasonal variation. The SARIMA(1,0,0)(0,1,1)12 model was the best model and adequate for the data. The SARIMA model for DHF is needed and could be applied to predict the incidence of DHF in Asahan district and assist with the design of public health measures to prevent and control the disease.
Forecasting hotspots in East Kutai, Kutai Kartanegara, and West Kutai as early warning information
NASA Astrophysics Data System (ADS)
Wahyuningsih, S.; Goejantoro, R.; Rizki, N. A.
2018-04-01
The aims of this research are to model hotspots and to forecast 2017 hotspots in East Kutai, Kutai Kartanegara and West Kutai. The methods used in this research were Holt's exponential smoothing, Holt's additive damped trend method, Holt-Winters' additive method, the additive decomposition method, the multiplicative decomposition method, the Loess decomposition method and the Box-Jenkins method. Among the smoothing techniques, additive decomposition performed better than Holt's exponential smoothing. The hotspot models obtained using the Box-Jenkins method were Autoregressive Integrated Moving Average (ARIMA) models: ARIMA(1,1,0), ARIMA(0,2,1), and ARIMA(0,1,0). Comparing the results from all methods used in this research on the basis of the Root Mean Squared Error (RMSE) shows that the Loess decomposition method is the best time series model, because it has the smallest RMSE. Thus the Loess decomposition model was used to forecast the number of hotspots. The forecasting results indicate that hotspot patterns tend to increase at the end of 2017 in Kutai Kartanegara and West Kutai, but remain stationary in East Kutai.
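A small sketch of the Loess-based decomposition and the RMSE criterion, using the STL implementation in statsmodels on a synthetic monthly hotspot series; the in-sample comparison shown here is only a stand-in for the full out-of-sample comparison performed in the study.

# Sketch: Loess (STL) decomposition of a monthly hotspot series and an RMSE criterion.
import numpy as np
import pandas as pd
from statsmodels.tsa.seasonal import STL

rng = np.random.default_rng(8)
months = pd.date_range("2010-01", periods=84, freq="MS")
season = 40 * (np.sin(2 * np.pi * (np.arange(84) - 8) / 12) > 0.6)   # dry-season peaks
hotspots = pd.Series(20 + season + rng.poisson(5, 84).astype(float), index=months)

stl = STL(hotspots, period=12, robust=True).fit()
fitted = stl.trend + stl.seasonal                      # in-sample reconstruction
rmse = np.sqrt(np.mean((hotspots - fitted) ** 2))
print("in-sample RMSE of the STL fit: %.2f" % rmse)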
A new approach to the convective parameterization of the regional atmospheric model BRAMS
NASA Astrophysics Data System (ADS)
Dos Santos, A. F.; Freitas, S. R.; de Campos Velho, H. F.; Luz, E. F.; Gan, M. A.; de Mattos, J. Z.; Grell, G. A.
2013-05-01
A simulation of the summer characteristics of January 2010 was performed using the Brazilian developments on the Regional Atmospheric Modeling System (BRAMS) atmospheric model. The convective parameterization scheme of Grell and Dévényi was used to represent clouds and their interaction with the large-scale environment. As a result, the precipitation forecasts can be combined in several ways, generating a numerical representation of precipitation and of atmospheric heating and moistening rates. The purpose of this study was to generate a set of weights to compute the best combination of the hypotheses of the convective scheme. This is an inverse problem of parameter estimation, and it is solved as an optimization problem. To minimize the difference between observed data and forecasted precipitation, the objective function was computed as the quadratic difference between five simulated precipitation fields and the observations. The precipitation field estimated by the Tropical Rainfall Measuring Mission satellite was used as the observed data. Weights were obtained using the firefly algorithm, and the mass fluxes of each closure of the convective scheme were weighted, generating a new set of mass fluxes. The results indicated a better skill of the model with the new methodology compared with the old ensemble mean calculation.
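The weight-estimation idea can be illustrated with a much simpler optimizer than the firefly algorithm used in the study: the sketch below finds non-negative weights for five synthetic closure-specific precipitation fields by non-negative least squares against a synthetic observed field. All data and the optimizer choice are assumptions for illustration.

# Sketch: estimate closure weights that minimize the squared difference between the
# weighted combination of precipitation fields and an observed (TRMM-like) field.
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(9)
npix, nclosure = 10000, 5
closures = rng.gamma(2.0, 2.0, size=(npix, nclosure))       # precip from each closure
true_w = np.array([0.4, 0.1, 0.2, 0.2, 0.1])
observed = closures @ true_w + rng.normal(0, 0.5, npix)      # synthetic observed field

weights, residual_norm = nnls(closures, observed)
weights /= weights.sum()                                     # optional: normalize to sum to one
print("estimated closure weights:", np.round(weights, 3))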
Forecasting sex differences in mortality in high income nations: The contribution of smoking
Pampel, Fred
2011-01-01
To address the question of whether sex differences in mortality will rise, fall, or stay the same in the future, this study uses relative smoking prevalence among males and females to forecast future changes in relative smoking-attributed mortality. Data on 21 high income nations from 1975 to 2000 and a lag between smoking prevalence and mortality allow forecasts up to 2020. Averaged across nations, the results for logged male/female ratios in smoking mortality reveal equalization of the sex differential. However, continued divergence in non-smoking mortality rates would counter convergence in smoking mortality rates and lead to future increases in the female advantage overall, particularly in nations at late stages of the cigarette epidemic (such as the United States and the United Kingdom). PMID:21874120
26 CFR 1.989(b)-1 - Definition of weighted average exchange rate.
Code of Federal Regulations, 2011 CFR
2011-04-01
26 CFR, Internal Revenue, Income Taxes, Export Trade Corporations, § 1.989(b)-1 Definition of weighted average exchange rate: For purposes of section 989(b)(3) and (4), the term “weighted...
Demand forecasting of electricity in Indonesia with limited historical data
NASA Astrophysics Data System (ADS)
Dwi Kartikasari, Mujiati; Rohmad Prayogi, Arif
2018-03-01
Demand forecasting of electricity is an important activity for electricity providers seeking to understand future electricity demand. Electricity demand can be predicted using time series models. In this paper, the double moving average model, Holt's exponential smoothing model, and the grey model GM(1,1) are used to predict electricity demand in Indonesia under the condition of limited historical data. The results show that the grey model GM(1,1) has the smallest values of MAE (mean absolute error), MSE (mean squared error), and MAPE (mean absolute percentage error).
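For reference, the grey model GM(1,1) can be written in a few lines: an accumulated generating operation (AGO) smooths the series, the development coefficient a and grey input b are estimated by least squares, and forecasts are obtained by inverting the accumulation. The sketch below follows that standard formulation and uses a made-up annual demand series purely for illustration.

```python
# Sketch: a standard GM(1,1) grey model for a short annual demand series.
import numpy as np

def gm11_forecast(x0, horizon):
    """Forecast `horizon` steps ahead from the positive series x0 with GM(1,1)."""
    x0 = np.asarray(x0, dtype=float)
    x1 = np.cumsum(x0)                                   # accumulated generating operation (AGO)
    z1 = 0.5 * (x1[1:] + x1[:-1])                        # background values (means of consecutive AGO terms)
    B = np.column_stack((-z1, np.ones_like(z1)))
    Y = x0[1:]
    a, b = np.linalg.lstsq(B, Y, rcond=None)[0]          # least-squares estimate of the grey parameters
    k = np.arange(1, len(x0) + horizon)
    x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a    # time-response function for the AGO series
    x0_hat = np.diff(np.concatenate(([x0[0]], x1_hat)))  # inverse AGO gives the fitted/forecast series
    return x0_hat[-horizon:]

demand = [155, 160, 168, 176, 183, 192, 199]             # hypothetical annual demand values
print(gm11_forecast(demand, horizon=3))
```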
Future body mass index modelling based on macronutrient profiles and physical activity
2012-01-01
Background: An accurate system for determining the relationship of the macronutrient profiles of foods and beverages to the long-term weight impacts of foods is necessary for evidence-based, unbiased front-of-the-package food labels. Methods: Data sets on diet, physical activity, and BMI came from the Food and Agriculture Organization (FAO), the World Health Organization (WHO), the Diabetes Control and Complications Trial (DCCT), and the Epidemiology of Diabetes Interventions and Complications (EDIC) study. To predict the future BMI of individuals, multiple-regression-derived FAO/WHO and DCCT/EDIC formulas related macronutrient profiles and physical activity (independent variables) to BMI change per year (dependent variable). Similar formulas without physical activity related the macronutrient profiles of individual foods and beverages to the four-year weight impacts of those items, and those forecasts were compared to published food group profiling estimates from three large prospective studies by Harvard nutritional epidemiologists. Results: FAO/WHO food and beverage formula: four-year weight impact (pounds) = (0.07710 alcohol g + 11.95 (381.7 + carbohydrates g per serving)*4/(2,613 + kilocalories per serving) - 304.9 (30.38 + dietary fiber g per serving)/(2,613 + kilocalories per serving) + 19.73 (84.44 + total fat g)*9/(2,613 + kilocalories per serving) - 68.57 (20.45 + PUFA g per serving)*9/(2,613 + kilocalories per serving))*2.941 - 12.78 (n=334, R2=0.29, P < 0.0001). DCCT/EDIC formula: four-year weight impact (pounds) = (0.898 (102.2 + protein g per serving)*4/(2,297 + kilocalories per serving) + 1.063 (264.2 + carbohydrates g per serving)*4/(2,297 + kilocalories per serving) - 13.19 (24.29 + dietary fiber g per serving)/(2,297 + kilocalories per serving) + 0.973 (74.59 + (total fat g per serving - PUFA g per serving)*9/(2,297 + kilocalories per serving)))*85.82 - 68.11 (n=1,055, R2=0.03, P < 0.0001). The averaged FAO/WHO and DCCT/EDIC formula forecasts correlated strongly with published food group profiling findings except for potatoes and dairy foods (n=12, r=0.85, P=0.0004). Formula predictions did not correlate with food group profiling findings for potatoes and dairy products (n=10, r=-0.33, P=0.36). A formula-based diet and exercise analysis tool is available to researchers and individuals: http://thehealtheconomy.com/healthTool/. Conclusions: Two multiple-regression-derived formulas from dissimilar databases produced markedly similar estimates of future BMI for 1,055 individuals with type 1 diabetes and for female and male cohorts from 167 countries. These formulas predicted the long-term weight impacts of foods and beverages, closely corresponding with most food group profiling estimates from three other databases. If the discrepancies for potatoes and dairy products can be resolved, these formulas offer a potential basis for a front-of-the-package weight impact rating system. PMID:23106911
DOT National Transportation Integrated Search
2011-02-01
The New Hampshire Department of Transportation Pavement Management Section's scope of work includes monitoring, evaluating, and sometimes forecasting the condition of New Hampshire's 4,560 miles of roadway network in order to provide guidance o...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Xuesong; Liang, Faming; Yu, Beibei
2011-11-09
Estimating uncertainty of hydrologic forecasting is valuable to water resources and other relevant decision making processes. Recently, Bayesian Neural Networks (BNNs) have been proved powerful tools for quantifying uncertainty of streamflow forecasting. In this study, we propose a Markov Chain Monte Carlo (MCMC) framework to incorporate the uncertainties associated with input, model structure, and parameter into BNNs. This framework allows the structure of the neural networks to change by removing or adding connections between neurons and enables scaling of input data by using rainfall multipliers. The results show that the new BNNs outperform the BNNs that only consider uncertainties associated with parameter and model structure. Critical evaluation of posterior distribution of neural network weights, number of effective connections, rainfall multipliers, and hyper-parameters show that the assumptions held in our BNNs are not well supported. Further understanding of characteristics of different uncertainty sources and including output error into the MCMC framework are expected to enhance the application of neural networks for uncertainty analysis of hydrologic forecasting.
[A preliminary study on dental-manpower forecasting model of Miyun County in Beijing].
Huang, H; Wang, H; Yang, S
1999-01-01
To explore a dental-manpower forecasting model for Chinese rural regions and provide references for Chinese dental-manpower research, rural Miyun County in Beijing was chosen as a sample. According to the need-based and demand-weighted forecasting method, the WHO-CH protocol model and the corresponding JWG-6-M package developed by the authors were used to calculate the present and future need and demand for dental manpower in Miyun County. Further predictions were also made of the effects of four modeling parameters on the demand for dental manpower. The present need and demand for oral care personnel in Miyun were 114.5 and 29.1, respectively. At present, Miyun has 43 oral care providers, who can satisfy the demand but not the need. The change in oral health demand had a major effect on the manpower forecast. Dental-manpower planning should consider the need as the prime factor but must be modified by the demand. It was suggested that the corresponding factors of oral care personnel need should be discussed further.
Forecasting the mortality rates using Lee-Carter model and Heligman-Pollard model
NASA Astrophysics Data System (ADS)
Ibrahim, R. I.; Ngataman, N.; Abrisam, W. N. A. Wan Mohd
2017-09-01
Improvements in life expectancy have driven further declines in mortality. The sustained reduction in mortality rates and its systematic underestimation have attracted significant interest from researchers in recent years because of their potential impact on population size and structure, social security systems, and (from an actuarial perspective) the life insurance and pensions industry worldwide. Among forecasting methods, the Lee-Carter model has been widely accepted by the actuarial community and the Heligman-Pollard model has been widely used by researchers in modelling and forecasting future mortality. Therefore, this paper focuses only on the Lee-Carter and Heligman-Pollard models. The main objective of this paper is to investigate how accurately these two models perform on Malaysian data. Since these models involve nonlinear equations that are difficult to solve explicitly, the Matrix Laboratory Version 8.0 (MATLAB 8.0) software is used to estimate the parameters of the models. An autoregressive integrated moving average (ARIMA) procedure is applied to forecast the parameters of both models, and the forecasted mortality rates are obtained from these forecasted parameter values. To investigate the accuracy of the estimation, the forecasted results are compared against actual mortality data. The results indicate that both models provide better results for the male population. However, for the elderly female population, the Heligman-Pollard model seems to underestimate the mortality rates while the Lee-Carter model seems to overestimate them.
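As an illustration of the Lee-Carter step, the model log m(x,t) = a_x + b_x k_t + e(x,t) can be estimated with a singular value decomposition of the centred log-mortality matrix, after which the period index k_t is extrapolated with an ARIMA model (typically a random walk with drift). The sketch below assumes a synthetic age-by-year matrix of central death rates, not the Malaysian data, and is not the paper's MATLAB implementation.

```python
# Sketch: Lee-Carter estimation via SVD and extrapolation of k_t with ARIMA(0,1,0) plus drift.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

# Synthetic matrix of central death rates: ages in rows, calendar years in columns.
rng = np.random.default_rng(2)
ages, years = 20, 40
m = np.exp(-5 + 0.08 * np.arange(ages)[:, None] - 0.01 * np.arange(years)[None, :]
           + 0.02 * rng.standard_normal((ages, years)))

log_m = np.log(m)
a_x = log_m.mean(axis=1)                          # age pattern (mean log rate per age)
U, s, Vt = np.linalg.svd(log_m - a_x[:, None], full_matrices=False)
b_x = U[:, 0] / U[:, 0].sum()                     # normalise so that sum(b_x) = 1
k_t = s[0] * Vt[0, :] * U[:, 0].sum()             # period index, rescaled to match b_x

k_fc = ARIMA(k_t, order=(0, 1, 0), trend="t").fit().forecast(10)   # random walk with drift
forecast_log_m = a_x[:, None] + b_x[:, None] * k_fc[None, :]
print(np.exp(forecast_log_m).shape)               # forecast death rates for the next 10 years
```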
Use of Temperature to Improve West Nile Virus Forecasts
NASA Astrophysics Data System (ADS)
Shaman, J. L.; DeFelice, N.; Schneider, Z.; Little, E.; Barker, C.; Caillouet, K.; Campbell, S.; Damian, D.; Irwin, P.; Jones, H.; Townsend, J.
2017-12-01
Ecological and laboratory studies have demonstrated that temperature modulates West Nile virus (WNV) transmission dynamics and spillover infection to humans. Here we explore whether the inclusion of temperature forcing in a model depicting WNV transmission improves WNV forecast accuracy relative to a baseline model depicting WNV transmission without temperature forcing. Both models are optimized using a data assimilation method and two observed data streams: mosquito infection rates and reported human WNV cases. Each coupled model-inference framework is then used to generate retrospective ensemble forecasts of WNV for 110 outbreak years from among 12 geographically diverse United States counties. The temperature-forced model improves forecast accuracy for much of the outbreak season. From the end of July until the beginning of October, a timespan during which 70% of human cases are reported, the temperature-forced model generated forecasts of the total number of human cases over the next 3 weeks, total number of human cases over the season, the week with the highest percentage of infectious mosquitoes, and the peak percentage of infectious mosquitoes that were on average 5%, 10%, 12%, and 6% more accurate, respectively, than the baseline model. These results indicate that use of temperature forcing improves WNV forecast accuracy and provide further evidence that temperatures influence rates of WNV transmission. The findings help build a foundation for implementation of a statistically rigorous system for real-time forecast of seasonal WNV outbreaks and their use as a quantitative decision support tool for public health officials and mosquito control programs.
NASA Astrophysics Data System (ADS)
Song, Yiliao; Qin, Shanshan; Qu, Jiansheng; Liu, Feng
2015-10-01
The issue of air quality, and PM pollution levels in particular, is a focus of public attention in China. To address that issue, a series of studies is in progress, including PM monitoring programs, PM source apportionment, and the enactment of new ambient air quality index standards. However, related research on computer modeling for estimating future PM trends is rare, despite its significance for forecasting and early warning systems. A study of deterministic and interval forecasts of PM is therefore performed. In this study, data on hourly and 12 h-averaged air pollutants are used to forecast PM concentrations within the Yangtze River Delta (YRD) region of China. The characteristics of PM emissions are first examined and analyzed using different distribution functions. To improve the distribution fitting that is crucial for estimating PM levels, an artificial intelligence algorithm is incorporated to select the optimal parameters. Following that step, an ANF model is used to conduct deterministic forecasts of PM. With the identified distributions and deterministic forecasts, different levels of PM intervals are estimated. The results indicate that the lognormal or gamma distributions are highly representative of the recorded PM data, with a goodness-of-fit R2 of approximately 0.998. Furthermore, the evaluation metrics (MSE, MAPE and CP, AW) also show high accuracy for the deterministic and interval forecasts of PM, indicating that this method enables informative and effective quantification of future PM trends.
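The distribution-fitting step can be sketched as follows: candidate lognormal and gamma distributions are fitted to the PM record and compared with a goodness-of-fit measure, after which the better distribution supplies prediction intervals. The placeholder concentrations, the fixed location parameter and the Kolmogorov-Smirnov statistic used here are illustrative choices, not the paper's exact procedure (which tunes the fit with an artificial-intelligence algorithm).

```python
# Sketch: fitting lognormal and gamma distributions to PM concentrations and
# comparing their goodness of fit.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
pm = rng.lognormal(mean=4.0, sigma=0.5, size=2000)      # placeholder hourly PM2.5 values (ug/m3)

fits = {
    "lognorm": stats.lognorm(*stats.lognorm.fit(pm, floc=0)),
    "gamma": stats.gamma(*stats.gamma.fit(pm, floc=0)),
}
for name, dist in fits.items():
    ks = stats.kstest(pm, dist.cdf)                     # Kolmogorov-Smirnov goodness of fit
    print(f"{name}: KS statistic = {ks.statistic:.4f}, p = {ks.pvalue:.3f}")

# The better-fitting distribution can then turn a deterministic forecast into intervals,
# e.g. a 90% interval from the 5th and 95th percentiles.
best = min(fits, key=lambda n: stats.kstest(pm, fits[n].cdf).statistic)
print("90% interval under", best, ":", fits[best].ppf([0.05, 0.95]))
```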
Michael J. Erickson; Brian A. Colle; Joseph J. Charney
2012-01-01
The performance of a multimodel ensemble over the northeast United States is evaluated before and after applying bias correction and Bayesian model averaging (BMA). The 13-member Stony Brook University (SBU) ensemble at 0000 UTC is combined with the 21-member National Centers for Environmental Prediction (NCEP) Short-Range Ensemble Forecast (SREF) system at 2100 UTC....
NASA Technical Reports Server (NTRS)
Rukhovets, Leonid; Sienkiewicz, M.; Tenenbaum, J.; Kondratyeva, Y.; Owens, T.; Oztunali, M.; Atlas, Robert (Technical Monitor)
2001-01-01
British Airways flight data recorders can provide valuable meteorological information, but they are not available in real time on the Global Telecommunication System. Information from the flight recorders was used in the Global Aircraft Data Set (GADS) experiment as independent observations to estimate errors in wind analyses produced by major operational centers. The GADS impact on the Goddard Earth Observing System Data Assimilation System (GEOS DAS) analyses was first investigated using the GEOS-1 DAS version. Recently, a new data assimilation system (fvDAS) has been developed at the Data Assimilation Office, NASA Goddard. Using fvDAS, the GADS impact on analyses and forecasts was investigated. It was shown that the GADS data intensify the wind speed analyses of jet streams in some cases. Five-day forecast anomaly correlations and root mean squares were calculated at 300 hPa, 500 hPa and sea level pressure for six different areas: the Northern and Southern Hemispheres, North America, Europe, Asia, and the USA. These scores were obtained as averages over 21 forecasts from January 1998. Comparisons with scores for control experiments without GADS showed a positive impact of the GADS data on forecasts beyond 2-3 days at all levels in most areas.
The psychology of intelligence analysis: drivers of prediction accuracy in world politics.
Mellers, Barbara; Stone, Eric; Atanasov, Pavel; Rohrbaugh, Nick; Metz, S Emlen; Ungar, Lyle; Bishop, Michael M; Horowitz, Michael; Merkle, Ed; Tetlock, Philip
2015-03-01
This article extends psychological methods and concepts into a domain that is as profoundly consequential as it is poorly understood: intelligence analysis. We report findings from a geopolitical forecasting tournament that assessed the accuracy of more than 150,000 forecasts of 743 participants on 199 events occurring over 2 years. Participants were above average in intelligence and political knowledge relative to the general population. Individual differences in performance emerged, and forecasting skills were surprisingly consistent over time. Key predictors were (a) dispositional variables of cognitive ability, political knowledge, and open-mindedness; (b) situational variables of training in probabilistic reasoning and participation in collaborative teams that shared information and discussed rationales (Mellers, Ungar, et al., 2014); and (c) behavioral variables of deliberation time and frequency of belief updating. We developed a profile of the best forecasters; they were better at inductive reasoning, pattern detection, cognitive flexibility, and open-mindedness. They had greater understanding of geopolitics, training in probabilistic reasoning, and opportunities to succeed in cognitively enriched team environments. Last but not least, they viewed forecasting as a skill that required deliberate practice, sustained effort, and constant monitoring of current affairs. PsycINFO Database Record (c) 2015 APA, all rights reserved.
Ballarin, Antonio; Posteraro, Brunella; Demartis, Giuseppe; Gervasi, Simona; Panzarella, Fabrizio; Torelli, Riccardo; Paroni Sterbini, Francesco; Morandotti, Grazia; Posteraro, Patrizia; Ricciardi, Walter; Gervasi Vidal, Kristian A; Sanguinetti, Maurizio
2014-12-06
Mathematical and statistical tools can provide valuable help in improving surveillance systems for healthcare- and non-healthcare-associated bacterial infections. The aim of this work is to evaluate the use of a clinical microbiology laboratory database with a time-varying auto-adaptive (TVA) algorithm to forecast medically important drug-resistant bacterial infections. Using the TVA algorithm, six distinct time series were modelled, each one representing the number of episodes per single 'ESKAPE' (Enterococcus faecium, Staphylococcus aureus, Klebsiella pneumoniae, Acinetobacter baumannii, Pseudomonas aeruginosa and Enterobacter species) infecting pathogen that had occurred monthly between the 2002 and 2011 calendar years at the Università Cattolica del Sacro Cuore general hospital. The monthly moving-averaged numbers of observed and forecasted ESKAPE infectious episodes showed complete overlap of their respective smoothed time series curves. Overall good forecast accuracy was observed, with percentages ranging from 82.14% for E. faecium infections to 90.36% for S. aureus infections. Our approach may regularly provide physicians with forecasted bacterial infection rates to alert them to the spread of antibiotic-resistant bacterial species, especially when clinical microbiological results of patients' specimens are delayed.
Alamaniotis, Miltiadis; Bargiotas, Dimitrios; Tsoukalas, Lefteri H
2016-01-01
The integration of energy systems with information technologies has facilitated the realization of smart energy systems that utilize information to optimize system operation. To that end, accurate, ahead-of-time forecasting of load demand is crucial for optimizing energy system operation. In particular, load forecasting allows planning of system expansion and supports decision making for enhancing system safety and reliability. In this paper, the application of two types of kernel machines to medium term load forecasting (MTLF) is presented and their performance is recorded on a set of historical electricity load demand data. The two kernel machine models, Gaussian process regression (GPR) and relevance vector regression (RVR), are utilized for making predictions of future load demand. Both models, i.e., GPR and RVR, are equipped with a Gaussian kernel and are tested on daily predictions over a 30-day-ahead horizon for data taken from the New England Area. Furthermore, their performance is compared to that of the ARMA(2,2) model with respect to mean absolute percentage error and squared correlation coefficient. Results demonstrate the superiority of RVR over the other forecasting models in performing MTLF.
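A minimal sketch of the Gaussian-process-regression half of such a comparison using scikit-learn (which does not ship a relevance vector machine, so RVR is omitted here). The synthetic daily load series, the time-index feature, the kernel settings and the MAPE evaluation over a 30-day horizon are all assumptions for illustration, not the paper's setup.

```python
# Sketch: Gaussian process regression with an RBF (Gaussian) kernel for
# medium-term daily load forecasting, scored with MAPE over 30 held-out days.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(4)
days = np.arange(365)
load = 15000 + 2000 * np.sin(2 * np.pi * days / 7) + 200 * rng.standard_normal(365)  # placeholder MW

X_train, y_train = days[:-30, None], load[:-30]
X_test, y_test = days[-30:, None], load[-30:]          # 30-day-ahead evaluation horizon

kernel = 1.0 * RBF(length_scale=7.0) + WhiteKernel()   # Gaussian kernel plus a noise term
gpr = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X_train, y_train)
y_pred, y_std = gpr.predict(X_test, return_std=True)   # mean prediction and its uncertainty

mape = 100 * np.mean(np.abs((y_test - y_pred) / y_test))
print(f"30-day MAPE: {mape:.2f}%")
```

In practice richer features (calendar effects, temperature) would replace the bare time index used here; the point of the sketch is only the kernel-machine workflow and the MAPE scoring.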
NASA Astrophysics Data System (ADS)
E, Jianwei; Bao, Yanling; Ye, Jimin
2017-10-01
As one of the most vital energy resources in the world, crude oil plays a significant role in the international economic market, and fluctuations in the crude oil price have attracted academic and commercial attention. Many methods exist for forecasting the trend of the crude oil price; however, traditional models have failed to predict it accurately. For that reason, a hybrid method is proposed in this paper that combines variational mode decomposition (VMD), independent component analysis (ICA) and the autoregressive integrated moving average (ARIMA) model, called VMD-ICA-ARIMA. The purpose of this study is to analyze the factors influencing the crude oil price and to predict its future values. The major steps are as follows. First, the VMD model is applied to the original signal (the crude oil price) to decompose it adaptively into mode functions. Second, independent components are separated by ICA, and the way each independent component affects the crude oil price is analyzed. Finally, the crude oil price is forecast with the ARIMA model; the forecast trend shows that the crude oil price declines periodically. Compared with the benchmark ARIMA and EEMD-ICA-ARIMA models, VMD-ICA-ARIMA forecasts the crude oil price more accurately.
Study on the medical meteorological forecast of the number of hypertension inpatient based on SVR
NASA Astrophysics Data System (ADS)
Zhai, Guangyu; Chai, Guorong; Zhang, Haifeng
2017-06-01
The purpose of this study is to build a hypertension prediction model by examining the meteorological factors associated with hypertension incidence. Standardized data on relative humidity, air temperature, visibility, wind speed and air pressure in Lanzhou from 2010 to 2012 (with the maximum, minimum and average values calculated over 5-day units) were selected as the input variables of support vector regression (SVR), and standardized data on hypertension incidence over the same period were used as the output variable. The optimal prediction parameters were obtained by a cross-validation algorithm, and an SVR forecast model for hypertension incidence was then built through SVR learning and training. The result shows that the hypertension prediction model is composed of 15 input independent variables, with a training accuracy of 0.005 and a final error of 0.0026389. The forecast accuracy of the SVR model is 97.1429%, which is higher than that of a statistical forecast equation and a neural network prediction method. It is concluded that the SVR model provides a new method for hypertension prediction, with simple calculation, small error, good historical sample fitting and good independent-sample forecast capability.
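A hedged sketch of the SVR step: 15 meteorological summary features are standardized, hyper-parameters are chosen by cross-validation, and the fitted model predicts admissions. The synthetic data, the parameter grid and the scoring rule below are illustrative assumptions, not the study's settings.

```python
# Sketch: SVR with cross-validated hyper-parameters for forecasting hypertension
# admissions from 5-day meteorological summaries (synthetic placeholder data).
import numpy as np
from sklearn.svm import SVR
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import GridSearchCV

rng = np.random.default_rng(5)
n = 200                                               # number of 5-day periods
X = rng.normal(size=(n, 15))                          # max/min/mean of 5 meteorological variables
y = 30 + 3 * X[:, 0] - 2 * X[:, 5] + rng.normal(scale=2, size=n)   # admissions per period

pipeline = make_pipeline(StandardScaler(), SVR(kernel="rbf"))
grid = GridSearchCV(pipeline,
                    param_grid={"svr__C": [1, 10, 100], "svr__epsilon": [0.01, 0.1, 1.0]},
                    cv=5, scoring="neg_mean_absolute_error")
grid.fit(X, y)                                        # cross-validation picks C and epsilon

print("best parameters:", grid.best_params_)
print("predicted admissions for a new period:", grid.predict(X[:1]))
```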
Using a Hybrid Model to Forecast the Prevalence of Schistosomiasis in Humans.
Zhou, Lingling; Xia, Jing; Yu, Lijing; Wang, Ying; Shi, Yun; Cai, Shunxiang; Nie, Shaofa
2016-03-23
We previously proposed a hybrid model combining the autoregressive integrated moving average (ARIMA) and nonlinear autoregressive neural network (NARNN) models for forecasting schistosomiasis. Our purpose in the current study was to forecast the annual prevalence of human schistosomiasis in Yangxin County using our ARIMA-NARNN model, thereby further verifying the reliability of the hybrid model. We used the ARIMA, NARNN and ARIMA-NARNN models to fit and forecast the annual prevalence of schistosomiasis. The modeling period covered the annual prevalence from 1956 to 2008, while the testing period covered 2009 to 2012. The mean square error (MSE), mean absolute error (MAE) and mean absolute percentage error (MAPE) were used to measure model performance. We then reconstructed the hybrid model to forecast the annual prevalence from 2013 to 2016. The modeling and testing errors generated by the ARIMA-NARNN model were lower than those obtained from either the single ARIMA or the single NARNN model. The predicted annual prevalence from 2013 to 2016 demonstrated an initial decreasing trend, followed by an increase. The ARIMA-NARNN model can thus be applied to analyze surveillance data for early warning systems for the control and elimination of schistosomiasis.
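The hybrid idea can be sketched as: fit an ARIMA model to the prevalence series, train a nonlinear autoregressive network on the ARIMA residuals, and add its predicted residual back to the ARIMA forecast. In the sketch below, scikit-learn's MLPRegressor stands in for the NARNN, and the data, lag length and network size are assumptions, not the study's configuration.

```python
# Sketch: ARIMA plus neural-network hybrid, with a small MLP modelling the
# nonlinear structure left in the ARIMA residuals.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(6)
prevalence = 5 + np.cumsum(rng.normal(scale=0.1, size=57))      # placeholder annual prevalence (%)

arima_fit = ARIMA(prevalence, order=(1, 1, 1)).fit()
residuals = arima_fit.resid

lags = 3                                                        # autoregressive lags fed to the network
X = np.column_stack([residuals[i:len(residuals) - lags + i] for i in range(lags)])
y = residuals[lags:]
nn = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0).fit(X, y)

arima_forecast = arima_fit.forecast(1)[0]                       # linear part of the forecast
residual_forecast = nn.predict(residuals[-lags:][None, :])[0]   # nonlinear correction
print("hybrid one-step forecast:", arima_forecast + residual_forecast)
```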
NASA Astrophysics Data System (ADS)
Papacharalampous, Georgia; Tyralis, Hristos; Koutsoyiannis, Demetris
2017-04-01
Machine learning (ML) is considered to be a promising approach to forecasting hydrological processes. We conduct a comparison between several stochastic and ML point estimation methods by performing large-scale computational experiments based on simulations. The purpose is to provide generalized results, while the respective comparisons in the literature are usually based on case studies. The stochastic methods used include simple methods and models from the frequently used families of Autoregressive Moving Average (ARMA), Autoregressive Fractionally Integrated Moving Average (ARFIMA) and Exponential Smoothing models. The ML methods used are Random Forests (RF), Support Vector Machines (SVM) and Neural Networks (NN). The comparison refers to the multi-step ahead forecasting properties of the methods. A total of 20 methods are used, among which 9 are ML methods. Twelve simulation experiments are performed, each using 2,000 simulated time series of 310 observations. The time series are simulated using stochastic processes from the families of ARMA and ARFIMA models. Each time series is split into a fitting set (first 300 observations) and a testing set (last 10 observations). The comparative assessment of the methods is based on 18 metrics that quantify the methods' performance according to several criteria related to the accurate forecasting of the testing set, the capturing of its variation and the correlation between the testing and forecasted values. The most important outcome of this study is that there is not a uniformly better or worse method. However, there are methods that are regularly better or worse than others with respect to specific metrics. It appears that, although a general ranking of the methods is not possible, their classification based on their similar or contrasting performance in the various metrics is possible to some extent. Another important conclusion is that more sophisticated methods do not necessarily provide better forecasts compared to simpler methods. It is pointed out that the ML methods do not differ dramatically from the stochastic methods, while it is interesting that the NN, RF and SVM algorithms used in this study offer potentially very good performance in terms of accuracy. It should be noted that, although this study focuses on hydrological processes, the results are of general scientific interest. Another important point in this study is the use of several methods and metrics. Using fewer methods and fewer metrics would have led to a very different overall picture, particularly if those fewer metrics corresponded to fewer criteria. For this reason, we consider that the proposed methodology is appropriate for the evaluation of forecasting methods.
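The evaluation logic of such an experiment can be sketched as: simulate a series of 310 values, hold out the last 10, produce multi-step forecasts with a simple and a more elaborate method, and score each against several criteria (here RMSE, mean absolute error, and the correlation between forecasted and held-out values). The method choices and the small metric subset below are illustrative, not the 20 methods and 18 metrics of the study.

```python
# Sketch: scoring multi-step forecasts from a simple and a more elaborate method
# with several metrics, on one simulated AR(1) series.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(7)
series = np.empty(310)
series[0] = rng.standard_normal()
for t in range(1, 310):                                   # AR(1) simulation with phi = 0.7
    series[t] = 0.7 * series[t - 1] + rng.standard_normal()

fit_set, test_set = series[:300], series[300:]            # 300 fitting, 10 testing observations

trend_coefs = np.polyfit(np.arange(300), fit_set, 1)      # simple benchmark: linear-trend extrapolation
forecasts = {
    "linear_trend": np.polyval(trend_coefs, np.arange(300, 310)),
    "arma": ARIMA(fit_set, order=(1, 0, 1)).fit().forecast(10),
}
for name, fc in forecasts.items():
    rmse = np.sqrt(np.mean((test_set - fc) ** 2))
    mae = np.mean(np.abs(test_set - fc))
    r = np.corrcoef(test_set, fc)[0, 1]                   # correlation between testing and forecasted values
    print(f"{name}: RMSE={rmse:.3f}, MAE={mae:.3f}, r={r:.3f}")
```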
Peak Wind Tool for General Forecasting
NASA Technical Reports Server (NTRS)
Barrett, Joe H., III; Short, David
2008-01-01
This report describes work done by the Applied Meteorology Unit (AMU) in predicting peak winds at Kennedy Space Center (KSC) and Cape Canaveral Air Force Station (CCAFS). The 45th Weather Squadron requested the AMU develop a tool to help them forecast the speed and timing of the daily peak and average wind, from the surface to 300 ft on KSC/CCAFS, during the cool season. Based on observations from the KSC/CCAFS wind tower network, Shuttle Landing Facility (SLF) surface observations, and CCAFS soundings from the cool season months of October 2002 to February 2007, the AMU created multiple linear regression equations to predict the timing and speed of the daily peak wind speed, as well as the background average wind speed. Several possible predictors were evaluated, including persistence, the temperature inversion depth and strength, wind speed at the top of the inversion, wind gust factor (ratio of peak wind speed to average wind speed), synoptic weather pattern, occurrence of precipitation at the SLF, and strongest wind in the lowest 3000 ft, 4000 ft, or 5000 ft.
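The regression step can be sketched as an ordinary multiple linear regression of daily peak wind speed on a few of the candidate predictors named above; the predictor values, units and coefficients in the sketch are synthetic placeholders, not the AMU's dataset or fitted equations.

```python
# Sketch: multiple linear regression for daily peak wind speed from candidate
# cool-season predictors (synthetic placeholder data).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(8)
n = 500                                                   # cool-season days
predictors = {
    "avg_wind_kt": rng.uniform(5, 20, n),                 # background average wind speed
    "inversion_depth_ft": rng.uniform(500, 3000, n),
    "inversion_strength_c": rng.uniform(0, 8, n),
    "wind_top_inversion_kt": rng.uniform(10, 40, n),
}
X = sm.add_constant(np.column_stack(list(predictors.values())))
peak_wind = (5 + 1.3 * predictors["avg_wind_kt"] + 0.3 * predictors["wind_top_inversion_kt"]
             + rng.normal(scale=2, size=n))               # synthetic response

ols = sm.OLS(peak_wind, X).fit()
print(ols.params)                                         # intercept and one coefficient per predictor
print("predicted peak for first day:", ols.predict(X[:1]))
```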
Dettinger, M.D.; Cayan, D.R.; McCabe, G.J.; Redmond, K.T.
2000-01-01
An analysis of historical floods and seasonal streamflows during years with neutral El Niño-Southern Oscillation (ENSO) conditions in the tropical Pacific and “negative” states of the North Pacific Oscillation (NPO) in the North Pacific, like those expected next year, indicates that (1) the chances of having maximum-daily flows next year that are near the long-term averages in many rivers are enhanced, especially in the western states, (2) the chances of having near-average seasonal-average flows may also be enhanced across the country, and (3) locally, the chances of large floods and winter-season flows may be enhanced in the extreme Northwest, the chances of large winter flows may be diminished in rivers in and around Wisconsin, and the chances of large spring flows may be diminished in the interior southwest and the southeastern coastal plain. The background, methods, and forecast results that lead to these statements are detailed below, followed by a summary of the successes and failures of last year's streamflow forecast by Dettinger et al. (1999).