Sample records for evaluating point forecasts

  1. Applications of the gambling score in evaluating earthquake predictions and forecasts

    NASA Astrophysics Data System (ADS)

    Zhuang, Jiancang; Zechar, Jeremy D.; Jiang, Changsheng; Console, Rodolfo; Murru, Maura; Falcone, Giuseppe

    2010-05-01

    This study presents a new method, the gambling score, for scoring the performance of earthquake forecasts or predictions. Unlike most other scoring procedures, which require a regular forecasting scheme and treat each earthquake equally regardless of its magnitude, this scoring method compensates for the risk that the forecaster has taken. Starting with a certain number of reputation points, a forecaster who makes a prediction or forecast is assumed to have bet some of those points. The reference model, which plays the role of the house, determines according to a fair rule how many reputation points the forecaster gains on a success, and takes away the points bet by the forecaster on a failure. The method is also extended to the continuous case of point-process models, where the reputation points bet by the forecaster become a continuous mass over the space-time-magnitude range of interest. For discrete predictions, we apply the method to evaluate the performance of Shebalin's predictions made with the Reverse Tracing of Precursors (RTP) algorithm and of the predictions issued by the Annual Consultation Meeting on Earthquake Tendency held by the China Earthquake Administration. For the continuous case, we use it to compare probability forecasts of seismicity in the Abruzzo region before and after the L'Aquila earthquake based on the ETAS model and the PPE model.

  2. Forecasting longitudinal changes in oropharyngeal tumor morphology throughout the course of head and neck radiation therapy

    PubMed Central

    Yock, Adam D.; Rao, Arvind; Dong, Lei; Beadle, Beth M.; Garden, Adam S.; Kudchadker, Rajat J.; Court, Laurence E.

    2014-01-01

    Purpose: To create models that forecast longitudinal trends in changing tumor morphology and to evaluate and compare their predictive potential throughout the course of radiation therapy. Methods: Two morphology feature vectors were used to describe 35 gross tumor volumes (GTVs) throughout the course of intensity-modulated radiation therapy for oropharyngeal tumors. The feature vectors comprised the coordinates of the GTV centroids and a description of GTV shape using either interlandmark distances or a spherical harmonic decomposition of these distances. The change in the morphology feature vector observed at 33 time points throughout the course of treatment was described using static, linear, and mean models. Models were adjusted at 0, 1, 2, 3, or 5 different time points (adjustment points) to improve prediction accuracy. The potential of these models to forecast GTV morphology was evaluated using leave-one-out cross-validation, and the accuracy of the models was compared using Wilcoxon signed-rank tests. Results: Adding a single adjustment point to the static model without any adjustment points decreased the median error in forecasting the position of GTV surface landmarks by the largest amount (1.2 mm). Additional adjustment points further decreased the forecast error by about 0.4 mm each. Selection of the linear model decreased the forecast error for both the distance-based and spherical harmonic morphology descriptors (0.2 mm), while the mean model decreased the forecast error for the distance-based descriptor only (0.2 mm). The magnitude and statistical significance of these improvements decreased with each additional adjustment point, and the effect from model selection was not as large as that from adding the initial points. Conclusions: The authors present models that anticipate longitudinal changes in tumor morphology using various models and model adjustment schemes. The accuracy of these models depended on their form, and the utility of these models includes the characterization of patient-specific response with implications for treatment management and research study design. PMID:25086518
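
    A minimal sketch of the leave-one-out evaluation idea described above, using made-up one-dimensional stand-ins for the morphology feature vectors (the variable names and data are hypothetical, not taken from the study); it compares a static model against a mean-change model, assuming Python with NumPy:

      import numpy as np

      # Hypothetical data: feature[i, t] is a 1-D stand-in for patient i's morphology
      # feature vector at treatment fraction t (e.g. a centroid coordinate in mm).
      rng = np.random.default_rng(0)
      n_patients, n_fractions = 35, 33
      feature = rng.normal(0, 1, (n_patients, 1)) + 0.1 * np.arange(n_fractions)

      static_err, mean_err = [], []
      for i in range(n_patients):                      # leave-one-out cross-validation
          train = np.delete(feature, i, axis=0)
          # Static model: the left-out patient's fraction-0 value persists unchanged.
          static_pred = np.full(n_fractions, feature[i, 0])
          # Mean model: fraction-0 value plus the mean change observed in the
          # training patients at each fraction.
          mean_pred = feature[i, 0] + (train - train[:, :1]).mean(axis=0)
          static_err.append(np.abs(static_pred - feature[i]).mean())
          mean_err.append(np.abs(mean_pred - feature[i]).mean())

      print(np.median(static_err), np.median(mean_err))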

  3. Forecasting longitudinal changes in oropharyngeal tumor morphology throughout the course of head and neck radiation therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yock, Adam D.; Kudchadker, Rajat J.; Rao, Arvind

    2014-08-15

    Purpose: To create models that forecast longitudinal trends in changing tumor morphology and to evaluate and compare their predictive potential throughout the course of radiation therapy. Methods: Two morphology feature vectors were used to describe 35 gross tumor volumes (GTVs) throughout the course of intensity-modulated radiation therapy for oropharyngeal tumors. The feature vectors comprised the coordinates of the GTV centroids and a description of GTV shape using either interlandmark distances or a spherical harmonic decomposition of these distances. The change in the morphology feature vector observed at 33 time points throughout the course of treatment was described using static, linear, and mean models. Models were adjusted at 0, 1, 2, 3, or 5 different time points (adjustment points) to improve prediction accuracy. The potential of these models to forecast GTV morphology was evaluated using leave-one-out cross-validation, and the accuracy of the models was compared using Wilcoxon signed-rank tests. Results: Adding a single adjustment point to the static model without any adjustment points decreased the median error in forecasting the position of GTV surface landmarks by the largest amount (1.2 mm). Additional adjustment points further decreased the forecast error by about 0.4 mm each. Selection of the linear model decreased the forecast error for both the distance-based and spherical harmonic morphology descriptors (0.2 mm), while the mean model decreased the forecast error for the distance-based descriptor only (0.2 mm). The magnitude and statistical significance of these improvements decreased with each additional adjustment point, and the effect from model selection was not as large as that from adding the initial points. Conclusions: The authors present models that anticipate longitudinal changes in tumor morphology using various models and model adjustment schemes. The accuracy of these models depended on their form, and the utility of these models includes the characterization of patient-specific response with implications for treatment management and research study design.

  4. A new scoring method for evaluating the performance of earthquake forecasts and predictions

    NASA Astrophysics Data System (ADS)

    Zhuang, J.

    2009-12-01

    This study presents a new method, the gambling score, for scoring the performance of earthquake forecasts or predictions. Unlike most other scoring procedures, which require a regular forecasting scheme and treat each earthquake equally regardless of its magnitude, this scoring method compensates for the risk that the forecaster has taken. A fair scoring scheme should reward success in a way that is compatible with the risk taken. Suppose we have a reference model, usually the Poisson model in ordinary cases or the Omori-Utsu formula when forecasting aftershocks, which gives the probability p0 that at least one event occurs in a given space-time-magnitude window. The forecaster, like a gambler, starts with a certain number of reputation points and bets 1 reputation point on "Yes" or "No" according to his forecast, or bets nothing if he makes an NA-prediction. If the forecaster bets 1 reputation point on "Yes" and loses, his reputation is reduced by 1 point; if his forecast is successful, he is rewarded (1-p0)/p0 reputation points. The quantity (1-p0)/p0 is the return (reward/bet) ratio for bets on "Yes". In this way, if the reference model is correct, the expected return from the bet is 0. The rule also applies to probability forecasts. Suppose p is the occurrence probability of an earthquake given by the forecaster. We can regard the forecaster as splitting 1 reputation point by betting p on "Yes" and 1-p on "No"; the forecaster's expected pay-off under the reference model is then still 0. From the viewpoints of both the reference model and the forecaster, the rule for reward and punishment is fair. The method is also extended to the continuous case of point-process models, where the reputation points bet by the forecaster become a continuous mass over the space-time-magnitude range of interest. We also calculate the upper bound of the gambling score when the true model is a renewal process, a stress release model, or the ETAS model and the reference model is the Poisson model.
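
    The reward rule described above ((1-p0)/p0 return for a successful "Yes" bet, with a probability forecast p treated as splitting one reputation point between "Yes" and "No") can be written as a short sketch; the function below is hypothetical illustration code, not code from the paper, assuming Python:

      # Sketch of the gambling-score pay-off rule described above (hypothetical
      # helper, not code from the paper). p0 is the reference-model probability
      # that at least one event occurs in the window; p is the forecaster's
      # probability; occurred indicates whether an event actually happened.

      def gambling_score(p0, p, occurred):
          # The forecaster splits 1 reputation point: p on "Yes", 1 - p on "No".
          # Return ratios are fair with respect to the reference model (the "No"
          # ratio p0/(1-p0) follows from the same fairness argument), so the
          # expected pay-off under the reference model is zero.
          if occurred:
              return p * (1.0 - p0) / p0 - (1.0 - p)   # "Yes" bet rewarded, "No" bet lost
          else:
              return (1.0 - p) * p0 / (1.0 - p0) - p   # "No" bet rewarded, "Yes" bet lost

      # Example: reference model says p0 = 0.1; a deterministic "Yes" (p = 1)
      # gains 9 points if an event occurs and loses 1 point otherwise.
      print(gambling_score(0.1, 1.0, True), gambling_score(0.1, 1.0, False))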

  5. State-space adjustment of radar rainfall and skill score evaluation of stochastic volume forecasts in urban drainage systems.

    PubMed

    Löwe, Roland; Mikkelsen, Peter Steen; Rasmussen, Michael R; Madsen, Henrik

    2013-01-01

    Merging of radar rainfall data with rain gauge measurements is a common approach to overcome problems in deriving rain intensities from radar measurements. We extend an existing approach for adjustment of C-band radar data using state-space models and use the resulting rainfall intensities as input for forecasting outflow from two catchments in the Copenhagen area. Stochastic grey-box models are applied to create the runoff forecasts, providing us with not only a point forecast but also a quantification of the forecast uncertainty. The evaluation shows that using the adjusted radar data improves runoff forecasts compared with using the original radar data, and that it also outperforms rain gauge measurements used as forecast input. Combining the data merging approach with short-term rainfall forecasting algorithms may yield further improved runoff forecasts that can be used in real-time control.

  6. MO-C-17A-04: Forecasting Longitudinal Changes in Oropharyngeal Tumor Morphology Throughout the Course of Head and Neck Radiation Therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yock, A; UT Graduate School of Biomedical Sciences, Houston, TX; Rao, A

    2014-06-15

    Purpose: To generate, evaluate, and compare models that predict longitudinal changes in tumor morphology throughout the course of radiation therapy. Methods: Two morphology feature vectors were used to describe the size, shape, and position of 35 oropharyngeal GTVs at each treatment fraction during intensity-modulated radiation therapy. The feature vectors comprised the coordinates of the GTV centroids and one of two shape descriptors. One shape descriptor was based on radial distances between the GTV centroid and 614 GTV surface landmarks. The other was based on a spherical harmonic decomposition of these distances. Feature vectors over the course of therapy were described using static, linear, and mean models. The error of these models in forecasting GTV morphology was evaluated with leave-one-out cross-validation, and their accuracy was compared using Wilcoxon signed-rank tests. The effect of adjusting model parameters at 1, 2, 3, or 5 time points (adjustment points) was also evaluated. Results: The addition of a single adjustment point to the static model decreased the median error in forecasting the position of GTV surface landmarks by 1.2 mm (p<0.001). Additional adjustment points further decreased forecast error by about 0.4 mm each. The linear model decreased forecast error compared to the static model for feature vectors based on both shape descriptors (0.2 mm), while the mean model did so only for those based on the inter-landmark distances (0.2 mm). The decrease in forecast error due to adding adjustment points was greater than that due to model selection. Both effects diminished with subsequent adjustment points. Conclusion: Models of tumor morphology that include information from prior patients and/or prior treatment fractions are able to predict the tumor surface at each treatment fraction during radiation therapy. The predicted tumor morphology can be compared with patient anatomy or dose distributions, opening the possibility of anticipatory re-planning. American Legion Auxiliary Fellowship; The University of Texas Graduate School of Biomedical Sciences at Houston.

  7. A Comparison of the Forecast Skills among Three Numerical Models

    NASA Astrophysics Data System (ADS)

    Lu, D.; Reddy, S. R.; White, L. J.

    2003-12-01

    Three numerical weather forecast models, MM5, COAMPS, and WRF, operated jointly by NOAA HU-NCAS and Jackson State University (JSU) during summer 2003, were chosen to study their forecast skill against observations. The models forecast over the same region with the same initialization, boundary conditions, forecast length, and spatial resolution. The AVN global dataset was ingested to provide initial conditions, and a grid resolution of 27 km was chosen to represent a current mesoscale model. Forecasts of 36-h length were produced with output at 12-h intervals. The key parameters used to evaluate forecast skill include 12-h accumulated precipitation, sea level pressure, wind, surface temperature, and dew point. Precipitation is evaluated statistically using conventional skill scores, the Threat Score (TS) and Bias Score (BS), for different threshold values based on 12-h rainfall observations, whereas other statistics such as Mean Error (ME), Mean Absolute Error (MAE), and Root Mean Square Error (RMSE) are applied to the other forecast parameters.
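
    For reference, the conventional scores named above can be computed as in the following sketch (the contingency-table counts and forecast/observation pairs are made up for illustration):

      import numpy as np

      # Hypothetical contingency-table counts for 12-h precipitation exceeding a
      # threshold: hits (forecast yes / observed yes), misses, false alarms.
      hits, misses, false_alarms = 30, 10, 15

      threat_score = hits / (hits + misses + false_alarms)          # TS (a.k.a. CSI)
      bias_score   = (hits + false_alarms) / (hits + misses)        # BS (frequency bias)

      # Continuous scores for, e.g., surface temperature (paired arrays, made up here).
      fcst = np.array([301.2, 299.8, 303.5, 298.9])
      obs  = np.array([300.4, 300.1, 302.2, 299.7])
      me   = np.mean(fcst - obs)                    # Mean Error (bias)
      mae  = np.mean(np.abs(fcst - obs))            # Mean Absolute Error
      rmse = np.sqrt(np.mean((fcst - obs) ** 2))    # Root Mean Square Error
      print(threat_score, bias_score, me, mae, rmse)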

  8. HexSim: A flexible simulation model for forecasting wildlife responses to multiple interacting stressors

    EPA Science Inventory

    With SERDP funding, we have improved upon a popular life history simulator (PATCH), and in doing so produced a powerful new forecasting tool (HexSim). PATCH, our starting point, was spatially explicit and individual-based, and was useful for evaluating a range of terrestrial lif...

  9. An Extended Objective Evaluation of the 29-km Eta Model for Weather Support to the United States Space Program

    NASA Technical Reports Server (NTRS)

    Nutter, Paul; Manobianco, John

    1998-01-01

    This report describes the Applied Meteorology Unit's objective verification of the National Centers for Environmental Prediction 29-km eta model during separate warm and cool season periods from May 1996 through January 1998. The verification of surface and upper-air point forecasts was performed at three stations selected for their importance to the operational weather concerns of the 45th Weather Squadron, the Spaceflight Meteorology Group, and the National Weather Service office in Melbourne. The statistical evaluation identified model biases that may result from inadequate parameterization of physical processes. Since model biases are relatively small compared to the random error component, most of the total model error results from day-to-day variability in the forecasts and/or observations. To some extent, these nonsystematic errors reflect the variability in point observations that sample spatial and temporal scales of atmospheric phenomena that cannot be resolved by the model. On average, Meso-Eta point forecasts provide useful guidance for predicting the evolution of the larger scale environment. A more substantial challenge facing model users in real time is the discrimination of nonsystematic errors that tend to inflate the total forecast error. It is important that model users maintain awareness of ongoing model changes. Such changes are likely to modify the basic error characteristics, particularly near the surface.
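
    The split above between systematic (bias) and nonsystematic (random) error follows the usual decomposition of mean squared error into squared bias plus error variance; a minimal illustration with hypothetical forecast/observation pairs (not data from the report):

      import numpy as np

      # Hypothetical point forecasts and verifying observations at one station.
      fcst = np.array([22.1, 24.3, 21.7, 25.0, 23.4])
      obs  = np.array([21.5, 25.1, 20.9, 24.2, 24.0])

      err      = fcst - obs
      bias     = err.mean()                 # systematic component
      rand_var = err.var()                  # nonsystematic (day-to-day) component
      mse      = np.mean(err ** 2)

      # mse == bias**2 + rand_var (up to floating-point rounding)
      print(bias, rand_var, mse, bias**2 + rand_var)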

  10. HexSim: A flexible simulation model for forecasting wildlife responses to multiple interacting stressors - ESRP Meeting

    EPA Science Inventory

    With SERDP funding, we have improved upon a popular life history simulator (PATCH), and in doing so produced a powerful new forecasting tool (HexSim). PATCH, our starting point, was spatially explicit and individual-based, and was useful for evaluating a range of terrestrial life...

  11. Marine Point Forecasts

    Science.gov Websites

    Marine Point Forecasts provide forecasts for a specific point and are designed for use with smartphones and other mobile platforms; maps provide zone/point marine forecasts for coastal offices such as Mobile, AL; Eureka, CA; San Francisco, CA; and Los Angeles, CA.

  12. Purposes and methods of scoring earthquake forecasts

    NASA Astrophysics Data System (ADS)

    Zhuang, J.

    2010-12-01

    Studies of earthquake prediction or forecasting serve two kinds of purposes: one is to give a systematic estimate of earthquake risk in a particular region and period in order to advise governments and enterprises on disaster reduction; the other is to search for reliable precursors that can be used to improve earthquake predictions or forecasts. The first case requires a complete score, while the latter requires a partial score, which can be used to evaluate whether the forecasts or predictions have some advantage over a well-known model. This study reviews different scoring methods for evaluating the performance of earthquake predictions and forecasts. In particular, the recently developed gambling scoring method shows its capacity to find good points in an earthquake prediction algorithm or model that are not in a reference model, even if its overall performance is no better than that of the reference model.

  13. Gambling scores for earthquake predictions and forecasts

    NASA Astrophysics Data System (ADS)

    Zhuang, Jiancang

    2010-04-01

    This paper presents a new method, the gambling score, for scoring the performance of earthquake forecasts or predictions. Unlike most other scoring procedures, which require a regular forecasting scheme and treat each earthquake equally regardless of its magnitude, this scoring method compensates for the risk that the forecaster has taken. Starting with a certain number of reputation points, a forecaster who makes a prediction or forecast is assumed to have bet some of those points. The reference model, which plays the role of the house, determines according to a fair rule how many reputation points the forecaster gains on a success, and takes away the points bet by the forecaster on a failure. The method is also extended to the continuous case of point-process models, where the reputation points bet by the forecaster become a continuous mass over the space-time-magnitude range of interest. We also calculate the upper bound of the gambling score when the true model is a renewal process, a stress release model, or the ETAS model and the reference model is the Poisson model.

  14. Statistical Earthquake Focal Mechanism Forecasts

    NASA Astrophysics Data System (ADS)

    Kagan, Y. Y.; Jackson, D. D.

    2013-12-01

    A new whole-Earth focal mechanism forecast, based on the GCMT catalog, has been created. In the present forecast, the sum of normalized seismic moment tensors within a 1000 km radius is calculated, and the P- and T-axes of the focal mechanism are evaluated on the basis of the sum. Simultaneously, we calculate an average rotation angle between the forecasted mechanism and all the surrounding mechanisms. This average angle reflects the tectonic complexity of a region and indicates the accuracy of the prediction. The method was originally proposed by Kagan and Jackson (1994, JGR). Recent interest by CSEP and GEM has motivated some improvements, particularly the extension of the previous forecast to polar and near-polar regions. The major problem in extending the forecast is the calculation of focal mechanisms on a spherical surface. In the previous forecast, the average focal mechanism was computed under the assumption that longitude lines are approximately parallel within a 1000 km radius. This is largely accurate in equatorial and near-equatorial areas; however, approaching 75 degrees latitude, the longitude lines are no longer parallel, and the bearing (azimuthal) difference between points separated by 1000 km reaches about 35 degrees. In most situations a forecast point where we calculate an average focal mechanism is surrounded by earthquakes, so the bias should not be strong because the bearing differences largely cancel. But moving into the polar regions, the bearing difference can approach 180 degrees. In a modified program, focal mechanisms are projected onto a plane tangent to the sphere at the forecast point, and new longitude axes, parallel in the tangent plane, are corrected for the bearing difference. A comparison with the old 75S-75N forecast shows that in equatorial regions the forecasted focal mechanisms are almost the same and the difference in the forecasted rotation angle is close to zero. Closer to 75 degrees latitude, however, although the forecasted focal mechanisms remain similar, the difference in the rotation angle becomes large (around a factor of 1.5 in some places). The Gamma-index was calculated for the average focal mechanism moment: a non-zero index indicates that the earthquake focal mechanisms around the forecast point have different orientations. Thus, deformation complexity displays itself both in the average rotation angle and in the index; however, sometimes the rotation angle is close to zero while the index is large, testifying to a large CLVD presence. Both new 0.5x0.5 and 0.1x0.1 degree forecasts are posted at http://eq.ess.ucla.edu/~kagan/glob_gcmt_index.html.

  15. Statistical Correction of Air Temperature Forecasts for City and Road Weather Applications

    NASA Astrophysics Data System (ADS)

    Mahura, Alexander; Petersen, Claus; Sass, Bent; Gilet, Nicolas

    2014-05-01

    A method for statistical correction of air and road surface temperature forecasts was developed based on analysis of long-term time series of meteorological observations and forecasts (from the HIgh Resolution Limited Area Model and the Road Conditions Model; 3 km horizontal resolution). It was tested for May-Aug 2012 and Oct 2012 - Mar 2013, respectively. The method relies mostly on forecasted meteorological parameters, with a minimal inclusion of observations (covering only a pre-history period). Although the first-iteration correction takes relevant temperature observations into account, the further adjustment of air and road temperature forecasts is based purely on forecasted meteorological parameters. The method is model independent, i.e. it can be applied for temperature correction with other types of models having different horizontal resolutions. It is relatively fast because the singular value decomposition method is used to solve the matrix system for the correction coefficients. Moreover, there is always a possibility for additional improvement through extra tuning of the temperature forecasts for some locations (stations), in particular where the MAEs are generally higher than elsewhere (see Gilet et al., 2014). For city weather applications, a new operational procedure for statistical correction of air temperature forecasts was elaborated and implemented for the HIRLAM-SKA model runs at 00, 06, 12, and 18 UTC, covering forecast lengths up to 48 hours. The procedure includes segments for extraction of observation and forecast data, assignment to forecast lengths, statistical correction of temperature, one- and multi-day statistical evaluation of model performance, decision-making on the use of corrections by station, interpolation, visualisation, and storage/backup. Pre-operational air temperature correction runs have been performed for mainland Denmark since mid-April 2013 and have shown good results. Tests also showed that the CPU time required for the operational procedure is relatively short (less than 15 minutes, much of it spent on interpolation), and that starting the correction does not require long-term pre-historical data (forecasts and observations): a couple of weeks is sufficient when a new observational station is added to a forecast point. For the road weather application, the statistical correction of road surface temperature forecasts (for the RWM system's daily hourly runs covering forecast lengths up to 5 hours ahead) was also operationalized for the Danish road network (about 400 road stations) and has been running in test mode since Sep 2013. The method can also be applied to correct dew point temperature and wind speed (as part of the observations/forecasts at synoptic stations), since both of these meteorological parameters are part of the proposed system of equations. Evaluation of the method's performance in improving wind speed forecasts is planned as well, along with consideration of possible wind direction improvements (which are more complex owing to the multi-modal distribution of such data).
    The method worked over the entire domain of mainland Denmark (tested for 60 synoptic and 395 road stations) and hence can be applied to any geographical point within this domain, for example through interpolation to about 100 city locations (for the Danish national byvejr forecasts). Moreover, we can assume that the same method can be used in other geographical areas; evaluation for other domains (with a focus on Greenland and the Nordic countries) is planned. In addition, a similar approach might be tested for statistical correction of concentrations of chemical species, but that will require additional elaboration and evaluation.
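
    The abstract states that the correction coefficients are found via singular value decomposition; a minimal sketch of that kind of regression step, using numpy.linalg.lstsq (which is SVD-based) and a hypothetical predictor set, is:

      import numpy as np

      # Hypothetical pre-history: rows are past cases, columns are forecasted
      # parameters (e.g. 2-m temperature, wind speed, cloud cover) plus a constant.
      X_hist = np.array([[271.3, 4.2, 0.8, 1.0],
                         [268.9, 6.1, 0.3, 1.0],
                         [273.4, 2.5, 1.0, 1.0],
                         [270.1, 5.0, 0.6, 1.0]])
      t_obs  = np.array([270.2, 268.1, 273.0, 269.5])   # observed temperatures

      # Least-squares coefficients via SVD (numpy.linalg.lstsq uses SVD internally).
      coef, *_ = np.linalg.lstsq(X_hist, t_obs, rcond=None)

      # Correct a new forecast case with the same predictors.
      x_new = np.array([272.0, 3.8, 0.7, 1.0])
      t_corrected = x_new @ coef
      print(coef, t_corrected)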

  16. The Nature and Variability of Ensemble Sensitivity Fields that Diagnose Severe Convection

    NASA Astrophysics Data System (ADS)

    Ancell, B. C.

    2017-12-01

    Ensemble sensitivity analysis (ESA) is a statistical technique that uses information from an ensemble of forecasts to reveal relationships between chosen forecast metrics and the larger atmospheric state at various forecast times. A number of studies have employed ESA from the perspectives of dynamical interpretation, observation targeting, and ensemble subsetting toward improved probabilistic prediction of high-impact events, mostly at synoptic scales. We tested ESA using convective forecast metrics at the 2016 HWT Spring Forecast Experiment to understand the utility of convective ensemble sensitivity fields in improving forecasts of severe convection and its individual hazards. The main purpose of this evaluation was to understand the temporal coherence and general characteristics of convective sensitivity fields toward future use in improving ensemble predictability within an operational framework. The magnitude and coverage of simulated reflectivity, updraft helicity, and surface wind speed were used as response functions, and the sensitivity of these functions to winds, temperatures, geopotential heights, and dew points at different atmospheric levels and at different forecast times was evaluated on a daily basis throughout the HWT Spring Forecast Experiment. These sensitivities were calculated within the Texas Tech real-time ensemble system, which possesses 42 members that run twice daily to 48-hr forecast time. Here we summarize both the findings regarding the nature of the sensitivity fields and the evaluation by the participants that reflects their opinions of the utility of operational ESA. The future direction of ESA for operational use will also be discussed.
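
    Ensemble sensitivity is conventionally estimated as the linear-regression slope of the response function onto each state variable across the ensemble members; a minimal sketch with hypothetical ensemble data (not output from the Texas Tech system):

      import numpy as np

      # Hypothetical 42-member ensemble: J is a scalar response function per member
      # (e.g. domain coverage of updraft helicity), X holds an earlier-time state
      # variable (e.g. 2-m dew point) at each grid point for each member.
      n_members, n_points = 42, 1000
      rng = np.random.default_rng(0)
      X = rng.normal(size=(n_members, n_points))
      J = rng.normal(size=n_members)

      # Ensemble sensitivity: regression slope of J on each state variable,
      # dJ/dx_i ~ cov(J, x_i) / var(x_i), computed member-wise.
      Xa = X - X.mean(axis=0)
      Ja = J - J.mean()
      sensitivity = (Xa * Ja[:, None]).sum(axis=0) / (Xa ** 2).sum(axis=0)
      print(sensitivity.shape)   # one sensitivity value per grid point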

  17. Evaluation of Flood Forecast and Warning in Elbe river basin - Impact of Forecaster's Strategy

    NASA Astrophysics Data System (ADS)

    Danhelka, Jan; Vlasak, Tomas

    2010-05-01

    The Czech Hydrometeorological Institute (CHMI) is responsible for flood forecasting and warning in the Czech Republic. To this end, CHMI operates hydrological forecasting systems and publishes flow forecasts for selected profiles. Flood forecasting and warning is the output of a system that links observation (flow and atmosphere), data processing, weather forecasting (especially NWP QPF), hydrological modeling, and the evaluation and interpretation of model output by a forecaster. Forecast users are interested in the final output without separating the uncertainties of the individual steps of the described process. Therefore, an evaluation of the final operational forecasts produced by the AquaLog forecasting system during 2002 to 2008 was carried out for profiles within the Elbe river basin. The effects of uncertainties in observation, data processing, and especially meteorological forecasts were not accounted for separately. The forecast of flood level exceedance (peak over threshold) during the forecasting period was the main criterion, as the forecast of flow increase is of the highest importance. Other evaluation criteria included peak flow and volume difference. In addition, the Nash-Sutcliffe efficiency was computed separately for each time step (1 to 48 h) of the forecasting period to identify its change with lead time. Textual flood warnings are issued for administrative regions to initiate flood protection actions when a flood threatens. The flood warning hit rate was evaluated at the regional and national levels. The evaluation found significant differences in model forecast skill between forecasting profiles; in particular, lower skill was found for small headwater basins because QPF uncertainty dominates there. The average hit rate was 0.34 (miss rate = 0.33, false alarm rate = 0.32). However, the observed spatial differences are likely also influenced by the different fit of parameter sets (due to different basin characteristics) and, importantly, by the different impact of the human factor. The results suggest that the practice of interactive model operation, experience, and forecasting strategy differ between the responsible forecasting offices. Warnings are based on the interpretation of model output by a hydrologist-forecaster. The warning hit rate reached 0.60 for a threshold set at the lowest flood stage, of which 0.11 was underestimation of the flood degree (miss 0.22, false alarm 0.28). The critical success index of the model forecast was 0.34, while the same criterion for the warnings reached 0.55. We assume that this increase is due not only to the change of scale from a single forecasting point to a region for warning, but partly also to the forecaster's added value. There is no officially preferred warning strategy in the Czech Republic (e.g. tolerance of a higher false alarm rate); therefore the forecaster's decisions and personal strategy are of great importance. The results show quite successful warning for exceedance of the 1st flood level, over-warning for the 2nd flood level, but under-warning for the 3rd (highest) flood level. This suggests a general forecaster preference for medium-level warnings (the 2nd flood level is legally defined as the start of a flood and of flood protection activities). In conclusion, the human forecaster's experience and analytical skill increase flood warning performance notably. However, society's preferences should be specifically addressed in the definition of the warning strategy to support the forecaster's decision-making.

  18. A two-stage method of quantitative flood risk analysis for reservoir real-time operation using ensemble-based hydrologic forecasts

    NASA Astrophysics Data System (ADS)

    Liu, P.

    2013-12-01

    Quantitative analysis of the risk of reservoir real-time operation is a hard task owing to the difficulty of accurately describing inflow uncertainties. Ensemble-based hydrologic forecasts directly depict the inflows, capturing not only the marginal distributions but also their persistence via scenarios. This motivates us to analyze the reservoir real-time operating risk with ensemble-based hydrologic forecasts as inputs. A method is developed that uses the forecast horizon point to divide the future time into two stages, the forecast lead time and the unpredicted time. The risk within the forecast lead time is computed by counting the number of failing forecast scenarios, and the risk in the unpredicted time is estimated using reservoir routing with the design floods and the reservoir water levels at the forecast horizon point. As a result, a two-stage risk analysis method is set up to quantify the entire flood risk, defined as the ratio of the number of scenarios that exceed the critical value to the total number of scenarios. China's Three Gorges Reservoir (TGR) is selected as a case study, where parameter and precipitation uncertainties are used to produce the ensemble-based hydrologic forecasts. Bayesian inference with Markov chain Monte Carlo is used to account for the parameter uncertainty. Two reservoir operation schemes, the actual operation and scenario optimization, are evaluated in terms of flood risk and hydropower profit. For the 2010 flood, it is found that improving the hydrologic forecast accuracy does not necessarily decrease the reservoir real-time operation risk, and that most of the risk comes from within the forecast lead time. It is therefore valuable to decrease the variance of ensemble-based hydrologic forecasts, with less bias, for reservoir operational purposes.
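
    A minimal sketch of the lead-time part of the two-stage risk calculation, i.e., the fraction of forecast scenarios that exceed a critical value; the scenario trajectories and critical level below are hypothetical:

      import numpy as np

      # Hypothetical ensemble of forecasted reservoir water levels (m) within the
      # forecast lead time, one trajectory per hydrologic scenario.
      scenarios = np.array([[158.2, 160.1, 161.4],
                            [157.9, 159.0, 160.2],
                            [158.5, 161.0, 163.8],
                            [158.0, 159.5, 160.9]])
      critical_level = 163.0

      # Stage 1: risk within the lead time = fraction of scenarios whose level
      # exceeds the critical value at any forecast time step.
      exceed = (scenarios > critical_level).any(axis=1)
      risk_lead_time = exceed.mean()

      # Stage 2 (not shown): each scenario's end-of-horizon level would be routed
      # against the design floods to estimate risk in the unpredicted time.
      print(risk_lead_time)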

  19. Research and Application of an Air Quality Early Warning System Based on a Modified Least Squares Support Vector Machine and a Cloud Model.

    PubMed

    Wang, Jianzhou; Niu, Tong; Wang, Rui

    2017-03-02

    The worsening atmospheric pollution increases the need for air quality early warning systems (EWSs). Although numerous researchers have investigated EWSs in theory and in practice, studies concerning the quantification of uncertain information and comprehensive evaluation are still lacking, which impedes further development in the area. In this paper, a comprehensive warning system is proposed that consists of two indispensable modules: effective forecasting and scientific evaluation. For the forecasting module, a novel hybrid model combining data preprocessing and numerical optimization is developed to forecast air pollutant concentrations. In particular, to further enhance the accuracy and robustness of the warning system, interval forecasting is implemented to quantify the uncertainty around the point forecasts, providing significant risk signals for decision-makers. For the evaluation module, a cloud model, based on probability and fuzzy set theory, is developed to perform comprehensive evaluations of air quality, realizing the transformation between qualitative concepts and quantitative data. To verify the effectiveness and efficiency of the warning system, extensive simulations based on air pollutant data from Dalian, China were carried out; they illustrate that the warning system not only performs remarkably well but is also widely applicable.
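
    As one common way to illustrate interval forecasting around a point forecast (not the hybrid model proposed in the paper), an interval can be formed from empirical quantiles of past forecast errors; a hedged sketch with made-up concentrations:

      import numpy as np

      # Hypothetical historical point forecasts and observed pollutant
      # concentrations (ug/m3), used to build an empirical error distribution.
      fcst_hist = np.array([55.0, 62.0, 48.0, 70.0, 66.0, 59.0])
      obs_hist  = np.array([58.0, 60.0, 52.0, 75.0, 63.0, 61.0])
      residuals = obs_hist - fcst_hist

      # A simple 90% interval around a new point forecast: add the 5th and 95th
      # percentiles of past residuals (one common way to quantify point-forecast
      # uncertainty, not the paper's method).
      new_point = 64.0
      lower, upper = new_point + np.percentile(residuals, [5, 95])
      print(lower, upper)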

  20. Research and Application of an Air Quality Early Warning System Based on a Modified Least Squares Support Vector Machine and a Cloud Model

    PubMed Central

    Wang, Jianzhou; Niu, Tong; Wang, Rui

    2017-01-01

    The worsening atmospheric pollution increases the need for air quality early warning systems (EWSs). Although numerous researchers have investigated EWSs in theory and in practice, studies concerning the quantification of uncertain information and comprehensive evaluation are still lacking, which impedes further development in the area. In this paper, a comprehensive warning system is proposed that consists of two indispensable modules: effective forecasting and scientific evaluation. For the forecasting module, a novel hybrid model combining data preprocessing and numerical optimization is developed to forecast air pollutant concentrations. In particular, to further enhance the accuracy and robustness of the warning system, interval forecasting is implemented to quantify the uncertainty around the point forecasts, providing significant risk signals for decision-makers. For the evaluation module, a cloud model, based on probability and fuzzy set theory, is developed to perform comprehensive evaluations of air quality, realizing the transformation between qualitative concepts and quantitative data. To verify the effectiveness and efficiency of the warning system, extensive simulations based on air pollutant data from Dalian, China were carried out; they illustrate that the warning system not only performs remarkably well but is also widely applicable. PMID:28257122

  1. Quantitative precipitation forecasts in the Alps - an assessment from the Forecast Demonstration Project MAP D-PHASE

    NASA Astrophysics Data System (ADS)

    Ament, F.; Weusthoff, T.; Arpagaus, M.; Rotach, M.

    2009-04-01

    The main aim of the WWRP Forecast Demonstration Project MAP D-PHASE is to demonstrate the ability of today's models to forecast heavy precipitation and flood events in the Alpine region. To this end, an end-to-end, real-time forecasting system was installed and operated during the D-PHASE Operations Period from June to November 2007. Part of this system is a set of 30 numerical weather prediction models (deterministic as well as ensemble systems) operated by weather services and research institutes, which issue alerts if predicted precipitation accumulations exceed critical thresholds. In addition to the real-time alerts, all relevant model fields of these simulations are stored in a central data archive. This comprehensive data set allows a detailed assessment of today's quantitative precipitation forecast (QPF) performance in the Alpine region. We will present results of QPF verification against Swiss radar and rain gauge data, both from a qualitative point of view, in terms of alerts, and from a quantitative perspective, in terms of precipitation rate. Various influencing factors such as lead time, accumulation time, selection of warning thresholds, and bias corrections will be discussed. In addition to traditional verification of area-averaged precipitation amounts, the ability of the models to predict the correct precipitation statistics without requiring a point-to-point match will be assessed using modern fuzzy verification techniques. Both analyses reveal significant advantages of deep-convection-resolving models compared to coarser models with parameterized convection. An intercomparison of the model forecasts themselves reveals remarkably high variability between different models, and makes it worthwhile to evaluate the potential of a multi-model ensemble. Various multi-model ensemble strategies will be tested by combining D-PHASE models into virtual ensemble systems.
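
    Fuzzy verification compares neighborhood statistics rather than requiring a point-to-point match; one simple neighborhood-based variant (not necessarily the specific technique used in the study) can be sketched as follows, with hypothetical rain fields:

      import numpy as np

      def neighborhood_fractions(field, threshold, size):
          # Fraction of grid points exceeding the threshold inside each
          # (size x size) neighborhood, computed with a simple moving window.
          exceed = (field > threshold).astype(float)
          frac = np.zeros_like(exceed)
          h = size // 2
          for i in range(field.shape[0]):
              for j in range(field.shape[1]):
                  win = exceed[max(i - h, 0):i + h + 1, max(j - h, 0):j + h + 1]
                  frac[i, j] = win.mean()
          return frac

      # Hypothetical forecast and observed rain-rate fields (mm/h).
      rng = np.random.default_rng(1)
      fcst_field = rng.gamma(2.0, 2.0, size=(20, 20))
      obs_field  = rng.gamma(2.0, 2.0, size=(20, 20))

      f_frac = neighborhood_fractions(fcst_field, threshold=5.0, size=5)
      o_frac = neighborhood_fractions(obs_field,  threshold=5.0, size=5)
      fuzzy_mse = np.mean((f_frac - o_frac) ** 2)   # smaller is better
      print(fuzzy_mse)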

  2. Evaluation of the 29-km Eta Model for Weather Support to the United States Space Program

    NASA Technical Reports Server (NTRS)

    Manobianco, John; Nutter, Paul

    1997-01-01

    The Applied Meteorology Unit (AMU) conducted a year-long evaluation of NCEP's 29-km mesoscale Eta (meso-eta) weather prediction model in order to identify added value to forecast operations in support of the United States space program. The evaluation was stratified over warm and cool seasons and considered both objective and subjective verification methodologies. Objective verification results generally indicate that meso-eta model point forecasts at selected stations exhibit minimal error growth in terms of RMS errors and are reasonably unbiased. Conversely, results from the subjective verification demonstrate that model forecasts of developing weather events such as thunderstorms, sea breezes, and cold fronts are not always as accurate as implied by the seasonal error statistics. Sea-breeze case studies reveal that the model generates a dynamically consistent thermally direct circulation over the Florida peninsula, although at a larger scale than observed. Thunderstorm verification reveals that the meso-eta model is capable of predicting areas of organized convection, particularly during the late afternoon hours, but is not capable of forecasting individual thunderstorms. Verification of cold fronts during the cool season reveals that the model is capable of forecasting a majority of cold frontal passages through east central Florida to within ±1 h of observed frontal passage.

  3. Assessing High-Resolution Weather Research and Forecasting (WRF) Forecasts Using an Object-Based Diagnostic Evaluation

    DTIC Science & Technology

    2014-02-01

    ... Operational Model Archive and Distribution System (NOMADS). The RTMA product was generated using a 2-D variational method to assimilate point weather ... observations and satellite-derived measurements (National Weather Service, 2013). The products were downloaded using the NOMADS General Regularly ...

  4. Evaluation of TIGGE Ensemble Forecasts of Precipitation in Distinct Climate Regions in Iran

    NASA Astrophysics Data System (ADS)

    Aminyavari, Saleh; Saghafian, Bahram; Delavar, Majid

    2018-04-01

    The application of numerical weather prediction (NWP) products is increasing dramatically. Existing reports indicate that ensemble predictions have better skill than deterministic forecasts. In this study, numerical ensemble precipitation forecasts in the TIGGE database were evaluated using deterministic, dichotomous (yes/no), and probabilistic techniques over Iran for the period 2008-16. Thirteen rain gauges spread over eight homogeneous precipitation regimes were selected for evaluation. The Inverse Distance Weighting and Kriging methods were adopted for interpolation of the prediction values, downscaled to the stations at lead times of one to three days. To enhance the forecast quality, NWP values were post-processed via Bayesian Model Averaging. The results showed that ECMWF had better scores than other products. However, products of all centers underestimated precipitation in high precipitation regions while overestimating precipitation in other regions. This points to a systematic bias in forecasts and demands application of bias correction techniques. Based on dichotomous evaluation, NCEP did better at most stations, although all centers overpredicted the number of precipitation events. Compared to those of ECMWF and NCEP, UKMO yielded higher scores in mountainous regions, but performed poorly at other selected stations. Furthermore, the evaluations showed that all centers had better skill in wet than in dry seasons. The quality of post-processed predictions was better than those of the raw predictions. In conclusion, the accuracy of the NWP predictions made by the selected centers could be classified as medium over Iran, while post-processing of predictions is recommended to improve the quality.
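
    A minimal sketch of the Inverse Distance Weighting step used to downscale gridded forecasts to gauge locations (coordinates and values below are hypothetical):

      import numpy as np

      def idw(xy_known, values, xy_target, power=2.0):
          # Inverse Distance Weighting: weights proportional to 1 / distance**power.
          d = np.linalg.norm(xy_known - xy_target, axis=1)
          if np.any(d == 0):                     # target coincides with a grid point
              return values[d == 0][0]
          w = 1.0 / d ** power
          return np.sum(w * values) / np.sum(w)

      # Hypothetical grid-point coordinates (km) and forecast precipitation (mm)
      # downscaled to a rain-gauge location.
      grid_xy   = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
      grid_prec = np.array([4.0, 6.5, 3.2, 5.8])
      gauge_xy  = np.array([3.0, 4.0])
      print(idw(grid_xy, grid_prec, gauge_xy))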

  5. Using simplified Chaos Theory to manage nursing services.

    PubMed

    Haigh, Carol A

    2008-04-01

    The purpose of this study was to evaluate the part simplified chaos theory could play in the management of nursing services. As nursing care becomes more complex, practitioners need to become familiar with business planning and objective time management. There are many time-limited methods that facilitate this type of planning but few that can help practitioners to forecast the end-point outcome of the service they deliver. A growth model was applied to a specialist service to plot service trajectory. Components of chaos theory can play a role in forecasting service outcomes and consequently the impact upon the management of such services. The ability to (1) track the trajectory of a service and (2) manipulate that trajectory by introducing new variables can allow managers to forward plan for service development and to evaluate the effectiveness of a service by plotting its end-point state.

  6. Engaging Earth- and Environmental-Science Undergraduates Through Weather Discussions and an eLearning Weather Forecasting Contest

    NASA Astrophysics Data System (ADS)

    Schultz, David M.; Anderson, Stuart; Seo-Zindy, Ryo

    2013-06-01

    For students who major in meteorology, engaging in weather forecasting can motivate learning, develop critical-thinking skills, improve their written communication, and yield better forecasts. Whether such advances apply to students who are not meteorology majors has been less demonstrated. To test this idea, a weather discussion and an eLearning weather forecasting contest were devised for a meteorology course taken by third-year undergraduate earth- and environmental-science students. The discussion consisted of using the recent, present, and future weather to amplify the topics of the week's lectures. Then, students forecasted the next day's high temperature and the probability of precipitation for Woodford, the closest official observing site to Manchester, UK. The contest ran for 10 weeks, and the students received credit for participation. The top students at the end of the contest received bonus points on their final grade. A Web-based forecast contest application was developed to register the students, receive their forecasts, and calculate weekly standings. Students who were successful in the forecast contest were not necessarily those who achieved the highest scores on the tests, demonstrating that the contest was possibly testing different skills than traditional learning. Student evaluations indicate that the weather discussion and contest were reasonably successful in engaging students to learn about the weather outside of the classroom, synthesize their knowledge from the lectures, and improve their practical understanding of the weather. Therefore, students taking a meteorology class, but not majoring in meteorology, can derive academic benefits from weather discussions and forecast contests. Nevertheless, student evaluations also indicate that better integration of the lectures, weather discussions, and the forecasting contests is necessary.

  7. Probabilistic empirical prediction of seasonal climate: evaluation and potential applications

    NASA Astrophysics Data System (ADS)

    Dieppois, B.; Eden, J.; van Oldenborgh, G. J.

    2017-12-01

    Preparing for episodes with risks of anomalous weather a month to a year ahead is an important challenge for governments, non-governmental organisations, and private companies and is dependent on the availability of reliable forecasts. The majority of operational seasonal forecasts are made using process-based dynamical models, which are complex, computationally challenging and prone to biases. Empirical forecast approaches built on statistical models to represent physical processes offer an alternative to dynamical systems and can provide either a benchmark for comparison or independent supplementary forecasts. Here, we present a new evaluation of an established empirical system used to predict seasonal climate across the globe. Forecasts for surface air temperature, precipitation and sea level pressure are produced by the KNMI Probabilistic Empirical Prediction (K-PREP) system every month and disseminated via the KNMI Climate Explorer (climexp.knmi.nl). K-PREP is based on multiple linear regression and built on physical principles to the fullest extent with predictive information taken from the global CO2-equivalent concentration, large-scale modes of variability in the climate system and regional-scale information. K-PREP seasonal forecasts for the period 1981-2016 will be compared with corresponding dynamically generated forecasts produced by operational forecast systems. While there are many regions of the world where empirical forecast skill is extremely limited, several areas are identified where K-PREP offers comparable skill to dynamical systems. We discuss two key points in the future development and application of the K-PREP system: (a) the potential for K-PREP to provide a more useful basis for reference forecasts than those based on persistence or climatology, and (b) the added value of including K-PREP forecast information in multi-model forecast products, at least for known regions of good skill. We also discuss the potential development of stakeholder-driven applications of the K-PREP system, including empirical forecasts for circumboreal fire activity.

  8. Marine Layer Stratus Study

    NASA Astrophysics Data System (ADS)

    Wells, Leonard A.

    2007-06-01

    The intent of this study is to develop a better understanding of the behavior of late spring through early fall marine layer stratus and fog at Vandenberg Air Force Base, which accounts for a majority of aviation forecasting difficulties. The main objective was to use the Leipper (1995) study as a starting point to evaluate the synoptic and mesoscale processes involved and to identify specific meteorological parameters that affected the behavior of marine layer stratus and fog. After identifying those parameters, the study evaluates how well the various weather models forecast them. The main conclusion of this study is that weak upper-air dynamic features work with boundary layer motions to influence marine layer behavior. It highlights the importance of correctly forecasting the surface temperature by showing how it ties directly to the wind field. That wind field, modified by the local terrain, establishes the low-level convergence and divergence pattern and the resulting marine layer cloud thicknesses and visibilities.

  9. A Diagnostics Tool to detect ensemble forecast system anomaly and guide operational decisions

    NASA Astrophysics Data System (ADS)

    Park, G. H.; Srivastava, A.; Shrestha, E.; Thiemann, M.; Day, G. N.; Draijer, S.

    2017-12-01

    The hydrologic community is moving toward using ensemble forecasts to take uncertainty into account during the decision-making process. The New York City Department of Environmental Protection (DEP) implements several types of ensemble forecasts in its decision-making process: ensemble products for a statistical model (Hirsch and enhanced Hirsch); the National Weather Service (NWS) Advanced Hydrologic Prediction Service (AHPS) forecasts based on the classical Ensemble Streamflow Prediction (ESP) technique; and the new NWS Hydrologic Ensemble Forecasting Service (HEFS) forecasts. To remove structural error and apply the forecasts to additional forecast points, the DEP post-processes both the AHPS and the HEFS forecasts. These ensemble forecasts provide large quantities of complex data, and drawing conclusions from them is time-consuming and difficult. The complexity of these forecasts also makes it difficult to identify system failures resulting from poor data, missing forecasts, and server breakdowns. To address these issues, we developed a diagnostic tool that summarizes ensemble forecasts and provides additional information such as historical forecast statistics, forecast skill, and model forcing statistics. This additional information highlights the key information that enables operators to evaluate the forecast in real time, dynamically interact with the data, and review additional statistics, if needed, to make better decisions. We used Bokeh, a Python interactive visualization library, and a multi-database management system to create this interactive tool. The tool compiles and stores data into HTML pages that allow operators to readily analyze the data with built-in user interaction features. This paper will present a brief description of the ensemble forecasts, forecast verification results, and the intended applications for the diagnostic tool.

  10. Evaluation of the 29-km Eta Model. Part 1; Objective Verification at Three Selected Stations

    NASA Technical Reports Server (NTRS)

    Nutter, Paul A.; Manobianco, John; Merceret, Francis J. (Technical Monitor)

    1998-01-01

    This paper describes an objective verification of the National Centers for Environmental Prediction (NCEP) 29-km eta model from May 1996 through January 1998. The evaluation was designed to assess the model's surface and upper-air point forecast accuracy at three selected locations during separate warm (May - August) and cool (October - January) season periods. In order to enhance sample sizes available for statistical calculations, the objective verification includes two consecutive warm and cool season periods. Systematic model deficiencies comprise the larger portion of the total error in most of the surface forecast variables that were evaluated. The error characteristics for both surface and upper-air forecasts vary widely by parameter, season, and station location. At upper levels, a few characteristic biases are identified. Overall however, the upper-level errors are more nonsystematic in nature and could be explained partly by observational measurement uncertainty. With a few exceptions, the upper-air results also indicate that 24-h model error growth is not statistically significant. In February and August 1997, NCEP implemented upgrades to the eta model's physical parameterizations that were designed to change some of the model's error characteristics near the surface. The results shown in this paper indicate that these upgrades led to identifiable and statistically significant changes in forecast accuracy for selected surface parameters. While some of the changes were expected, others were not consistent with the intent of the model updates and further emphasize the need for ongoing sensitivity studies and localized statistical verification efforts. Objective verification of point forecasts is a stringent measure of model performance, but when used alone, is not enough to quantify the overall value that model guidance may add to the forecast process. Therefore, results from a subjective verification of the meso-eta model over the Florida peninsula are discussed in the companion paper by Manobianco and Nutter. Overall verification results presented here and in part two should establish a reasonable benchmark from which model users and developers may pursue the ongoing eta model verification strategies in the future.

  11. A travel time forecasting model based on change-point detection method

    NASA Astrophysics Data System (ADS)

    LI, Shupeng; GUANG, Xiaoping; QIAN, Yongsheng; ZENG, Junwei

    2017-06-01

    Travel time parameters obtained from road traffic sensor data play an important role in traffic management practice. In this paper, a travel time forecasting model for urban road traffic sensor data is proposed based on a change-point detection method. A first-order differencing operation is used to preprocess the actual loop data; a change-point detection algorithm is designed to classify the large number of travel time data items in the sequence into several patterns; then a travel time forecasting model is established based on the autoregressive integrated moving average (ARIMA) model. In computer simulations, different control parameters are chosen for the adaptive change-point search over the travel time series, which is divided into several sections of similar state. A linear weight function is then used to fit the travel time sequence and to forecast travel time. The results show that the model has high accuracy in travel time forecasting.
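
    A minimal sketch of the differencing and change-point segmentation described above, with a simple linear fit standing in for the ARIMA/linear-weight forecasting step (the series and threshold are hypothetical):

      import numpy as np

      # Hypothetical travel-time series (seconds) from a loop detector.
      tt = np.array([120, 122, 119, 121, 160, 158, 162, 161, 159, 163], float)

      # First-order differencing, then flag change points where the jump between
      # consecutive observations exceeds a chosen control parameter.
      diff = np.diff(tt)
      threshold = 20.0
      change_points = np.where(np.abs(diff) > threshold)[0] + 1

      # Forecast the next value from the last homogeneous segment with a simple
      # linear fit (a stand-in for the ARIMA / linear-weight step described above).
      start = change_points[-1] if change_points.size else 0
      seg = tt[start:]
      t = np.arange(seg.size)
      slope, intercept = np.polyfit(t, seg, 1)
      print(change_points, slope * seg.size + intercept)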

  12. Prospective Evaluation of the Global Earthquake Activity Rate Model (GEAR1) Earthquake Forecast: Preliminary Results

    NASA Astrophysics Data System (ADS)

    Strader, Anne; Schorlemmer, Danijel; Beutin, Thomas

    2017-04-01

    The Global Earthquake Activity Rate Model (GEAR1) is a hybrid seismicity model, constructed from a loglinear combination of smoothed seismicity from the Global Centroid Moment Tensor (CMT) earthquake catalog and geodetic strain rates (Global Strain Rate Map, version 2.1). For the 2005-2012 retrospective evaluation period, GEAR1 outperformed both its parent strain rate and smoothed seismicity forecasts. Since 1 October 2015, GEAR1 has been prospectively evaluated by the Collaboratory for the Study of Earthquake Predictability (CSEP) testing center. Here, we present initial one-year test results for GEAR1, GSRM and GSRM2.1, as well as a localized evaluation of GEAR1 performance. The models were evaluated on the consistency in number (N-test), spatial (S-test) and magnitude (M-test) distribution of forecasted and observed earthquakes, as well as on overall data consistency (CL- and L-tests). Performance at target earthquake locations was compared between models using the classical paired T-test and its non-parametric equivalent, the W-test, to determine whether one model could be rejected in favor of another at the 0.05 significance level. For the evaluation period from 1 October 2015 to 1 October 2016, the GEAR1, GSRM and GSRM2.1 forecasts pass all CSEP likelihood tests. Comparative test results show a statistically significant improvement of GEAR1 over both strain-rate-based forecasts, both of which can be rejected in favor of GEAR1. Using point-process residual analysis, we investigate the spatial distribution of differences in GEAR1, GSRM and GSRM2.1 model performance to identify regions where the GEAR1 model should be adjusted, which could not be inferred from the CSEP test results. Furthermore, we investigate whether the optimal combination of smoothed seismicity and strain rates remains stable over space and time.
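
    GEAR1 is described as a loglinear combination of a smoothed-seismicity forecast and a strain-rate forecast; a minimal sketch of such a combination, with hypothetical per-cell rates and an assumed weight w:

      import numpy as np

      # Hypothetical per-cell rates from a smoothed-seismicity forecast (s) and a
      # strain-rate-based forecast (g), both normalized to the same total rate.
      s = np.array([0.02, 0.10, 0.30, 0.58])
      g = np.array([0.05, 0.15, 0.25, 0.55])

      # Loglinear (multiplicative) combination with weight w, renormalized so the
      # hybrid forecast keeps the original total earthquake rate.
      w = 0.6
      hybrid = s ** w * g ** (1.0 - w)
      hybrid *= s.sum() / hybrid.sum()
      print(hybrid)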

  13. Long-Lead Prediction of the 2015 Fire and Haze Episode in Indonesia

    NASA Astrophysics Data System (ADS)

    Shawki, Dilshad; Field, Robert D.; Tippett, Michael K.; Saharjo, Bambang Hero; Albar, Israr; Atmoko, Dwi; Voulgarakis, Apostolos

    2017-10-01

    We conducted a case study of National Centers for Environmental Prediction Climate Forecast System version 2 seasonal model forecast performance over Indonesia in predicting the dry conditions in 2015 that led to severe fire, in comparison to the non-El Niño dry season conditions of 2016. Forecasts of the Drought Code (DC) component of Indonesia's Fire Danger Rating System were examined across the entire equatorial Asia region and for the primary burning regions within it. Our results show that early warning lead times for the high observed DC in September and October 2015 varied considerably between regions. High DC over Southern Kalimantan and Southern New Guinea was predicted with 180-day lead times, whereas Southern Sumatra had lead times of up to only 60 days, which we attribute to the absence in the forecasts of an eastward decrease in Indian Ocean sea surface temperatures. This case study provides the starting point for longer-term evaluation of seasonal fire danger rating forecasts over Indonesia.

  14. Design and development of surface rainfall forecast products on GRAPES_MESO model

    NASA Astrophysics Data System (ADS)

    Zhili, Liu

    2016-04-01

    In this paper, we designed and developed surface (areal) rainfall forecast products based on precipitation forecasts from the mesoscale GRAPES_MESO model. The horizontal resolution of the GRAPES_MESO model is 10 km x 10 km, with 751 x 501 grid points and 26 vertical levels, covering the range 70°E-145.15°E, 15°N-64.35°N. We divided the basin into 7 major watersheds, and each watershed was divided into a number of sub-regions, giving 95 sub-regions in all. The Thiessen polygon method was adopted to calculate the surface rainfall. We used the 24-hour forecast precipitation data of the GRAPES_MESO model to calculate the surface rainfall: according to the station information and the boundary information of the 95 sub-regions, the forecast surface rainfall of each sub-region was calculated, and real-time surface rainfall forecast products can be provided every day. We used a fuzzy evaluation method to carry out a preliminary verification of the surface rainfall forecast products. The results show that the fuzzy scores of the heavy rain, rainstorm and downpour categories were higher, while the fuzzy score of the light rain category was lower. The forecast performance for heavy rain, rainstorm and downpour surface rainfall was better, whereas the light rain category suffered from higher rates of missed and false-alarm forecasts and therefore lower fuzzy scores.
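    The Thiessen-polygon areal rainfall step can be pictured as nearest-station assignment on a fine grid: every grid cell inside a sub-region is attributed to its closest station, and the station weights are the resulting area fractions. The coordinates and rainfall amounts below are invented for illustration and do not come from the GRAPES_MESO products.

```python
import numpy as np

def thiessen_areal_rainfall(stations_xy, station_rain, region_mask, grid_x, grid_y):
    """Approximate Thiessen (Voronoi) weights by nearest-station assignment on a grid."""
    gx, gy = np.meshgrid(grid_x, grid_y)                 # grid cell centres
    pts = np.column_stack([gx.ravel(), gy.ravel()])[region_mask.ravel()]
    # Distance from every in-region grid cell to every station
    d = np.linalg.norm(pts[:, None, :] - stations_xy[None, :, :], axis=2)
    nearest = d.argmin(axis=1)                           # index of the closest station
    weights = np.bincount(nearest, minlength=len(stations_xy)) / len(pts)
    return float(np.dot(weights, station_rain)), weights

# Three hypothetical stations (x, y in km) and their 24-h forecast rainfall (mm)
stations = np.array([[10.0, 10.0], [40.0, 15.0], [25.0, 40.0]])
rain = np.array([12.0, 30.0, 5.0])
x = np.linspace(0, 50, 101)
y = np.linspace(0, 50, 101)
mask = np.ones((101, 101), dtype=bool)                   # whole square as the sub-region
areal, w = thiessen_areal_rainfall(stations, rain, mask, x, y)
print(f"areal rainfall = {areal:.1f} mm, station weights = {np.round(w, 2)}")
```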

  15. Short-Term Load Forecasting Based Automatic Distribution Network Reconfiguration

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jiang, Huaiguang; Ding, Fei; Zhang, Yingchen

    In a traditional dynamic network reconfiguration study, the optimal topology is determined at every scheduled time point by using the real load data measured at that time. The development of the load forecasting technique can provide an accurate prediction of the load power that will happen in a future time and provide more information about load changes. With the inclusion of load forecasting, the optimal topology can be determined based on the predicted load conditions during a longer time period instead of using a snapshot of the load at the time when the reconfiguration happens; thus, the distribution system operator can use this information to better operate the system reconfiguration and achieve optimal solutions. This paper proposes a short-term load forecasting approach to automatically reconfigure distribution systems in a dynamic and pre-event manner. Specifically, a short-term and high-resolution distribution system load forecasting approach is proposed with a forecaster based on support vector regression and parallel parameters optimization. The network reconfiguration problem is solved by using the forecasted load continuously to determine the optimal network topology with the minimum amount of loss at the future time. The simulation results validate and evaluate the proposed approach.
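    To make the forecasting component concrete, here is a minimal scikit-learn sketch of a support-vector-regression load forecaster with a parallelized grid search over its hyperparameters, trained on lagged load values. The lag features, parameter grid and synthetic data are assumptions for illustration, not the configuration used in the paper.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import GridSearchCV, TimeSeriesSplit

# Synthetic hourly load with a daily cycle plus noise (kW)
rng = np.random.default_rng(1)
hours = np.arange(24 * 60)
load = 100 + 20 * np.sin(2 * np.pi * hours / 24) + rng.normal(0, 3, hours.size)

# Features: the previous 24 hourly loads; target: the current hour's load
lags = 24
X = np.array([load[i - lags:i] for i in range(lags, load.size)])
y = load[lags:]

model = make_pipeline(StandardScaler(), SVR(kernel="rbf"))
grid = {"svr__C": [1, 10, 100], "svr__gamma": ["scale", 0.01, 0.1]}
search = GridSearchCV(model, grid, cv=TimeSeriesSplit(n_splits=3),
                      scoring="neg_mean_absolute_error", n_jobs=-1)  # parallel parameter search

search.fit(X[:-24], y[:-24])                      # hold out the final day for testing
pred = search.predict(X[-24:])
mae = np.mean(np.abs(pred - y[-24:]))
print("best parameters:", search.best_params_, f"| test MAE = {mae:.2f} kW")
```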

  16. Short-Term Load Forecasting Based Automatic Distribution Network Reconfiguration: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jiang, Huaiguang; Ding, Fei; Zhang, Yingchen

    In the traditional dynamic network reconfiguration study, the optimal topology is determined at every scheduled time point by using the real load data measured at that time. The development of load forecasting techniques can provide accurate predictions of the load power that will occur at future times and more information about load changes. With the inclusion of load forecasting, the optimal topology can be determined based on the predicted load conditions during a longer time period instead of using a snapshot of the load at the time when the reconfiguration happens, and thus it can provide information to the distribution system operator (DSO) to better operate the system reconfiguration and achieve optimal solutions. This paper therefore proposes a short-term load forecasting based approach for automatically reconfiguring distribution systems in a dynamic and pre-event manner. Specifically, a short-term and high-resolution distribution system load forecasting approach is proposed with a support vector regression (SVR) based forecaster and parallel parameters optimization. The network reconfiguration problem is solved by using the forecasted load continuously to determine the optimal network topology with the minimum loss at the future time. The simulation results validate and evaluate the proposed approach.

  17. Short-Term Load Forecasting-Based Automatic Distribution Network Reconfiguration

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jiang, Huaiguang; Ding, Fei; Zhang, Yingchen

    In a traditional dynamic network reconfiguration study, the optimal topology is determined at every scheduled time point by using the real load data measured at that time. The development of the load forecasting technique can provide an accurate prediction of the load power that will happen in a future time and provide more information about load changes. With the inclusion of load forecasting, the optimal topology can be determined based on the predicted load conditions during a longer time period instead of using a snapshot of the load at the time when the reconfiguration happens; thus, the distribution system operator can use this information to better operate the system reconfiguration and achieve optimal solutions. This paper proposes a short-term load forecasting approach to automatically reconfigure distribution systems in a dynamic and pre-event manner. Specifically, a short-term and high-resolution distribution system load forecasting approach is proposed with a forecaster based on support vector regression and parallel parameters optimization. The network reconfiguration problem is solved by using the forecasted load continuously to determine the optimal network topology with the minimum amount of loss at the future time. The simulation results validate and evaluate the proposed approach.

  18. Against all odds -- Probabilistic forecasts and decision making

    NASA Astrophysics Data System (ADS)

    Liechti, Katharina; Zappa, Massimiliano

    2015-04-01

    In the city of Zurich (Switzerland) the damage potential due to flooding of the river Sihl is estimated at about 5 billion US dollars. The flood forecasting system that is used by the administration for decision making has run continuously since 2007. It has a maximum time horizon of five days and operates at hourly time steps. The flood forecasting system includes three different model chains. Two of these are driven by the deterministic NWP models COSMO-2 and COSMO-7 and one is driven by the probabilistic NWP COSMO-LEPS. The model chains have been consistent since February 2010, so five full years are available for the evaluation of the system. The system was evaluated continuously and is a very good example of the added value that lies in probabilistic forecasts. The forecasts are available to the decision makers on an online platform. Several graphical representations of the forecasts and the forecast history are available to support decision making and to rate the current situation. The communication between forecasters and decision makers is quite close. In short, an ideal situation. However, an event, or rather a non-event, in summer 2014 showed that knowledge about the general superiority of probabilistic forecasts does not necessarily mean that the decisions taken in a specific situation will be based on that probabilistic forecast. Some years of experience allow confidence in the system to grow, both for the forecasters and for the decision makers. Even if, from the theoretical point of view, the handling during a crisis situation is well designed, a first event demonstrated that the dialogue with the decision makers still lacks exercise during such situations. We argue that a false alarm is a needed experience to consolidate real-time emergency procedures relying on ensemble predictions. A missed event would probably also serve, but, in our case, we are very happy not to have to report on that option.

  19. EVALUATION OF METEOROLOGICAL ALERT CHAIN IN CASTILLA Y LEÓN (SPAIN): How can the meteorological risk managers help researchers?

    NASA Astrophysics Data System (ADS)

    López, Laura; Guerrero-Higueras, Ángel Manuel; Sánchez, José Luis; Matía, Pedro; Ortiz de Galisteo, José Pablo; Rodríguez, Vicente; Lorente, José Manuel; Merino, Andrés; Hermida, Lucía; García-Ortega, Eduardo; Fernández-Manso, Oscar

    2013-04-01

    Evaluating the meteorological alert chain, that is, how information is transmitted from the meteorological forecasters to the final users, passing through risk managers, is a useful exercise that benefits all the links of the chain, especially the meteorology researchers and forecasters. In fact, the risk managers can help significantly to improve meteorological forecasts in different ways. Firstly, by pointing out the most appropriate format for meteorological information, and its characteristics, consequently improving the interpretation of the already-existing forecasts. Secondly, by pointing out the specific predictive needs in their workplaces, related to the type of significant meteorological parameters, the temporal or spatial range necessary, meteorological products "custom-made" for each type of risk manager, etc. In order to carry out an evaluation of the alert chain in Castilla y León, we opted for the creation of a Panel of Experts made up of personnel specialized in risk management (principally those responsible for civil protection, for alert services and hydrological planning at the hydrographic confederations, for highway maintenance, and for fire management). This panel evaluated a total of twenty online questions, the majority of which were multiple choice or open-ended. Some of the results show that the risk managers think it would be interesting, or very interesting, to carry out environmental education campaigns about the meteorological risks in Castilla y León. Another result is the high importance that the risk managers attach to real-time observation data (real-time wind, lightning, relative humidity, combined risk indices for avalanches and snowslides, indices of fire risk due to convective activity, etc.). Acknowledgements The authors would like to thank the Junta de Castilla y León for its financial support through the project LE220A11-2.

  20. Near real time wind energy forecasting incorporating wind tunnel modeling

    NASA Astrophysics Data System (ADS)

    Lubitz, William David

    A series of experiments and investigations were carried out to inform the development of a day-ahead wind power forecasting system. An experimental near-real time wind power forecasting system was designed and constructed that operates on a desktop PC and forecasts 12--48 hours in advance. The system uses model output of the Eta regional scale forecast (RSF) to forecast the power production of a wind farm in the Altamont Pass, California, USA from 12 to 48 hours in advance. It is of modular construction and designed to also allow diagnostic forecasting using archived RSF data, thereby allowing different methods of completing each forecasting step to be tested and compared using the same input data. Wind-tunnel investigations of the effect of wind direction and hill geometry on wind speed-up above a hill were conducted. Field data from an Altamont Pass, California site was used to evaluate several speed-up prediction algorithms, both with and without wind direction adjustment. These algorithms were found to be of limited usefulness for the complex terrain case evaluated. Wind-tunnel and numerical simulation-based methods were developed for determining a wind farm power curve (the relation between meteorological conditions at a point in the wind farm and the power production of the wind farm). Both methods, as well as two methods based on fits to historical data, ultimately showed similar levels of accuracy: mean absolute errors predicting power production of 5 to 7 percent of the wind farm power capacity. The downscaling of RSF forecast data to the wind farm was found to be complicated by the presence of complex terrain. Poor results using the geostrophic drag law and regression methods motivated the development of a database search method that is capable of forecasting not only wind speeds but also power production with accuracy better than persistence.

  1. Evaluation of model-based seasonal streamflow and water allocation forecasts for the Elqui Valley, Chile

    NASA Astrophysics Data System (ADS)

    Delorit, Justin; Cristian Gonzalez Ortuya, Edmundo; Block, Paul

    2017-09-01

    In many semi-arid regions, multisectoral demands often stress available water supplies. Such is the case in the Elqui River valley of northern Chile, which draws on a limited-capacity reservoir to allocate 25 000 water rights. Delayed infrastructure investment forces water managers to address demand-based allocation strategies, particularly in dry years, which are realized through reductions in the volume associated with each water right. Skillful season-ahead streamflow forecasts have the potential to inform managers with an indication of future conditions to guide reservoir allocations. This work evaluates season-ahead statistical prediction models of October-January (growing season) streamflow at multiple lead times associated with manager and user decision points, and links predictions with a reservoir allocation tool. Skillful results (streamflow forecasts outperform climatology) are produced for short lead times (1 September: ranked probability skill score (RPSS) of 0.31, categorical hit skill score of 61 %). At longer lead times, climatological skill exceeds forecast skill due to fewer observations of precipitation. However, coupling the 1 September statistical forecast model with a sea surface temperature phase and strength statistical model allows for equally skillful categorical streamflow forecasts to be produced for a 1 May lead, triggered for 60 % of years (1950-2015), suggesting forecasts need not be strictly deterministic to be useful for water rights holders. An early (1 May) categorical indication of expected conditions is reinforced with a deterministic forecast (1 September) as more observations of local variables become available. The reservoir allocation model is skillful at the 1 September lead (categorical hit skill score of 53 %); skill improves to 79 % when categorical allocation prediction certainty exceeds 80 %. This result implies that allocation efficiency may improve when forecasts are integrated into reservoir decision frameworks. The methods applied here advance the understanding of the mechanisms and timing responsible for moisture transport to the Elqui Valley and provide a unique application of streamflow forecasting in the prediction of water right allocations.
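    For reference, the ranked probability skill score quoted above compares the ranked probability score of the categorical forecasts with that of a climatological forecast. The sketch below computes an RPSS for tercile streamflow forecasts; the probabilities and observed categories are fabricated for illustration.

```python
import numpy as np

def rps(prob_forecasts, obs_categories):
    """Mean ranked probability score for categorical probabilistic forecasts.

    prob_forecasts: (n_years, n_categories) forecast probabilities.
    obs_categories: (n_years,) index of the observed category.
    """
    n, k = prob_forecasts.shape
    obs = np.zeros((n, k))
    obs[np.arange(n), obs_categories] = 1.0
    cum_f = np.cumsum(prob_forecasts, axis=1)
    cum_o = np.cumsum(obs, axis=1)
    return np.mean(np.sum((cum_f - cum_o) ** 2, axis=1))

# Hypothetical tercile forecasts (below/near/above normal) for 6 years
fcst = np.array([[0.6, 0.3, 0.1], [0.2, 0.3, 0.5], [0.1, 0.3, 0.6],
                 [0.5, 0.3, 0.2], [0.3, 0.4, 0.3], [0.2, 0.2, 0.6]])
obs = np.array([0, 2, 2, 0, 1, 1])
clim = np.full_like(fcst, 1 / 3)                     # climatological reference forecast
rpss = 1.0 - rps(fcst, obs) / rps(clim, obs)
print(f"RPSS = {rpss:.2f}  (positive means more skilful than climatology)")
```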

  2. Earthquake Forecasting Through Semi-periodicity Analysis of Labeled Point Processes

    NASA Astrophysics Data System (ADS)

    Quinteros Cartaya, C. B. M.; Nava Pichardo, F. A.; Glowacka, E.; Gomez-Trevino, E.

    2015-12-01

    Large earthquakes show semi-periodic behavior as a result of critically self-organized processes of stress accumulation and release in a seismogenic region. Thus, large earthquakes in a region constitute semi-periodic sequences with recurrence times that vary slightly from strict periodicity. Nava et al. (2013) and Quinteros et al. (2013) realized that not all earthquakes in a given region need belong to the same sequence, since there can be more than one process of stress accumulation and release in it; they also proposed a method to identify semi-periodic sequences through analytic Fourier analysis. This work presents improvements to the above-mentioned method: the influence of earthquake size is included in the spectral analysis and in the identification of semi-periodic events, which means that earthquake occurrence times are treated as a labeled point process; appropriate upper-limit uncertainties are estimated for use in forecasts; and Bayesian analysis is used to evaluate forecast performance. The improved method is applied to specific regions: the southwestern coast of Mexico, the northeastern Japan Arc, the San Andreas Fault zone at Parkfield, and northeastern Venezuela.
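    The spectral step can be sketched as a Schuster-type periodogram of the event times in which each event is weighted by a label derived from its magnitude, so that larger earthquakes dominate the search for a recurrence period. The weighting, trial periods and event list below are illustrative assumptions, not the authors' exact procedure.

```python
import numpy as np

def weighted_period_spectrum(event_times, weights, trial_periods):
    """Schuster-style spectrum: weighted phasor sum of events for each trial period.

    A peak indicates that the (label-weighted) events tend to recur with that
    period, i.e. semi-periodic behaviour.
    """
    t = np.asarray(event_times)[:, None]
    w = np.asarray(weights)[:, None]
    phases = 2j * np.pi * t / np.asarray(trial_periods)[None, :]
    return np.abs(np.sum(w * np.exp(phases), axis=0)) / np.sum(weights)

# Hypothetical large-earthquake occurrence years and magnitudes in one region
years = np.array([1868.9, 1900.2, 1931.7, 1963.1, 1995.0])
mags = np.array([7.4, 7.6, 7.2, 7.8, 7.5])
periods = np.linspace(10, 60, 501)
spectrum = weighted_period_spectrum(years, 10 ** (mags - 7.0), periods)
print(f"dominant recurrence period ~ {periods[spectrum.argmax()]:.1f} yr")
```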

  3. Forecasting Global Point Rainfall using ECMWF's Ensemble Forecasting System

    NASA Astrophysics Data System (ADS)

    Pillosu, Fatima; Hewson, Timothy; Zsoter, Ervin; Baugh, Calum

    2017-04-01

    ECMWF (the European Centre for Medium range Weather Forecasts), in collaboration with the EFAS (European Flood Awareness System) and GLOFAS (GLObal Flood Awareness System) teams, has developed a new operational system that post-processes grid box rainfall forecasts from its ensemble forecasting system to provide global probabilistic point-rainfall predictions. The project attains a higher forecasting skill by applying an understanding of how different rainfall generation mechanisms lead to different degrees of sub-grid variability in rainfall totals. In turn this approach facilitates identification of cases in which very localized extreme totals are much more likely. This approach also aims to improve the rainfall input required in different hydro-meteorological applications. Flash flood forecasting, in particular in urban areas, is a good example. In flash flood scenarios precipitation is typically characterised by high spatial variability and response times are short. In this case, to move beyond radar-based nowcasting, the classical approach has been to use very high resolution hydro-meteorological models. Of course these models are valuable but they can represent only very limited areas, may not be spatially accurate and may give reasonable results only for limited lead times. On the other hand, our method aims to use a very cost-effective approach to downscale global rainfall forecasts to a point scale. It needs only rainfall totals from standard global reporting stations and forecasts over a relatively short period to train it, and it can give good results even up to day 5. For these reasons we believe that this approach better satisfies user needs around the world. This presentation aims to describe two phases of the project: The first phase, already completed, is the implementation of this new system to provide 6- and 12-hourly point-rainfall accumulation probabilities. To do this we use a limited number of physically relevant global model parameters (i.e. convective precipitation ratio, speed of steering winds, CAPE - Convective Available Potential Energy - and solar radiation), alongside the rainfall forecasts themselves, to define the "weather types" that in turn define the expected sub-grid variability. The calibration and computational strategy intrinsic to the system will be illustrated. The quality of the global point rainfall forecasts is also illustrated by analysing recent case studies in which extreme totals and a greatly elevated flash flood risk could be foreseen some days in advance, but especially by a longer-term verification that arises out of retrospective global point rainfall forecasting for 2016. The second phase, currently in development, is focussing on the relationships with other relevant geographical aspects, for instance orography and coastlines. Preliminary results will be presented. These are promising but need further study to fully understand their impact on the spatial distribution of point rainfall totals.

  4. Space Weather Products at the Community Coordinated Modeling Center

    NASA Technical Reports Server (NTRS)

    Hesse, Michael; Kuznetsova, M.; Pulkkinen, A.; Maddox, M.; Rastaetter, L.; Berrios, D.; MacNeice, P.

    2010-01-01

    The Community Coordinated Modeling Center (CCMC) is a US inter-agency activity aiming at research in support of the generation of advanced space weather models. As one of its main functions, the CCMC provides to researchers the use of space science models, even if they are not model owners themselves. The second CCMC activity is to support Space Weather forecasting at national Space Weather Forecasting Centers. This second activity involves model evaluations, model transitions to operations, and the development of space weather forecasting tools. Owing to the pace of development in the science community, new model capabilities emerge frequently. Consequently, space weather products and tools involve not only increased validity, but often entirely new capabilities. This presentation will review the present state of space weather tools as well as point out emerging future capabilities.

  5. Benefits of Sharing Information: Supermodel Ensemble and Applications in South America

    NASA Astrophysics Data System (ADS)

    Dias, P. L.

    2006-05-01

    A model intercomparison program involving a large number of academic and operational institutions has been implemented in South America since 2003, motivated by the SALLJEX Intercomparison Program in 2003 (a research program focused on identifying the role of the Andes low-level jet in moisture transport from the Amazon to the Plata basin) and by the WMO/THORPEX (www.wmo.int/thorpex) goals of improving predictability through the proper combination of numerical weather forecasts. This program also explores the potential predictability associated with the combination of a large number of possible scenarios on time scales of a few days up to 15 days. Five academic institutions and five operational forecasting centers in several countries in South America, one academic institution in the USA, and the main global forecasting centers (NCEP, UKMO, ECMWF) agreed to provide numerical products based on operational and experimental models. The metric for model validation concentrates on the fit of the forecasts to surface observations. Meteorological data from airports, synoptic stations operated by national weather services, automatic data platforms maintained by different institutions, the PIRATA buoys, etc., are collected through LDM/NCAR or direct transmission. Approximately 40 model outputs are available on a daily basis, twice a day. A simple procedure based on data assimilation principles was quite successful in combining the available forecasts to produce temperature, dew point, wind, pressure and precipitation forecasts at station points in South America. The procedure is based on removing each model's bias at the observation point and computing a weighted average with weights based on the mean square error of the forecasts. The base period for estimating the bias and mean square error is of the order of 15 to 30 days. Products of the intercomparison program and the optimal statistical combination of the available forecasts are public and available in real time (www.master.iag.usp.br/). Monitoring of the use of the products reveals a growing trend in the last year (reaching about 10,000 accesses per day in recent months). The intercomparison program provides a rich data set for educational products (real-time use in Synoptic Meteorology and Numerical Weather Forecasting lectures), for operational weather forecasts in national or regional weather centers, and for research purposes. During the first phase of the program it was difficult to convince potential participants to share the information on the public homepage. However, as the system evolved, more and more institutions became associated with the program. The general opinion of the participants is that the system provides a unified metric for evaluation and a forum for discussion of the physical origin of the differences among the model forecasts, and therefore improves the quality of the numerical guidance.
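    The combination procedure described above (remove each model's recent bias at the station, then weight the bias-corrected forecasts by the inverse of their recent mean square error) can be sketched as follows; the 15-day window and all numbers are invented placeholders.

```python
import numpy as np

def blend_forecasts(past_fcsts, past_obs, new_fcsts, window=15):
    """Bias-correct each model with its recent errors and weight by inverse MSE.

    past_fcsts: (n_days, n_models) forecasts at one station over recent days.
    past_obs:   (n_days,) verifying observations.
    new_fcsts:  (n_models,) today's raw forecasts from each model.
    """
    f, o = past_fcsts[-window:], past_obs[-window:]
    bias = (f - o[:, None]).mean(axis=0)               # per-model systematic error
    mse = ((f - o[:, None]) ** 2).mean(axis=0)
    weights = 1.0 / (mse + 1e-9)
    weights /= weights.sum()
    return float(np.dot(weights, new_fcsts - bias)), weights

# Hypothetical 2-m temperature forecasts from 3 models over the last 20 days
rng = np.random.default_rng(2)
truth = 20 + rng.normal(0, 2, 20)
models = truth[:, None] + np.array([1.5, -0.5, 0.2]) + rng.normal(0, [0.5, 1.5, 1.0], (20, 3))
today = np.array([22.8, 20.1, 21.0])
blend, w = blend_forecasts(models, truth, today)
print(f"blended forecast = {blend:.1f} C, weights = {np.round(w, 2)}")
```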

  6. Forecasting the absolute and relative shortage of physicians in Japan using a system dynamics model approach

    PubMed Central

    2013-01-01

    Background In Japan, a shortage of physicians, who serve a key role in healthcare provision, has been pointed out as a major medical issue. Healthcare workforce policy planners should consider future dynamic changes in physician numbers. The purpose of this study was to propose a physician supply forecasting methodology by applying system dynamics modeling to estimate future absolute and relative numbers of physicians. Method We constructed a forecasting model using a system dynamics approach. The number of physicians was forecast for all clinical physicians and for OB/GYN specialists. Moreover, we evaluated the sufficiency of physician numbers and conducted a sensitivity analysis. Result & conclusion The number of physicians was forecast to increase during 2008-2030, with the shortage of all clinical physicians resolving by 2026. For OB/GYN specialists, however, the shortage would not resolve within the period covered. This suggests a need for measures to reconsider the allocation system for newly entering physicians, in order to resolve maldistribution between medical departments, in addition to increasing the overall number of clinical physicians. PMID:23981198

  7. Forecasting the absolute and relative shortage of physicians in Japan using a system dynamics model approach.

    PubMed

    Ishikawa, Tomoki; Ohba, Hisateru; Yokooka, Yuki; Nakamura, Kozo; Ogasawara, Katsuhiko

    2013-08-27

    In Japan, a shortage of physicians, who serve a key role in healthcare provision, has been pointed out as a major medical issue. Healthcare workforce policy planners should consider future dynamic changes in physician numbers. The purpose of this study was to propose a physician supply forecasting methodology by applying system dynamics modeling to estimate future absolute and relative numbers of physicians. We constructed a forecasting model using a system dynamics approach. The number of physicians was forecast for all clinical physicians and for OB/GYN specialists. Moreover, we evaluated the sufficiency of physician numbers and conducted a sensitivity analysis. The number of physicians was forecast to increase during 2008-2030, with the shortage of all clinical physicians resolving by 2026. For OB/GYN specialists, however, the shortage would not resolve within the period covered. This suggests a need for measures to reconsider the allocation system for newly entering physicians, in order to resolve maldistribution between medical departments, in addition to increasing the overall number of clinical physicians.
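    A stripped-down stock-and-flow version of this kind of system dynamics model is sketched below: the physician stock is increased by new entrants and depleted by retirements, and supply is compared with a demand trajectory to find the year the shortage resolves. Every parameter value is a hypothetical placeholder, not a figure from the study.

```python
import numpy as np

def simulate_physician_supply(years, initial_stock, annual_entrants,
                              retirement_rate, initial_demand, demand_growth):
    """Simple stock-and-flow simulation: stock(t+1) = stock + entrants - retirements."""
    supply, demand = [initial_stock], [initial_demand]
    for _ in years[1:]:
        supply.append(supply[-1] + annual_entrants - retirement_rate * supply[-1])
        demand.append(demand[-1] * (1 + demand_growth))
    return np.array(supply), np.array(demand)

# Hypothetical parameters, chosen only to illustrate the mechanics
years = np.arange(2008, 2031)
supply, demand = simulate_physician_supply(
    years, initial_stock=280_000, annual_entrants=9_000,
    retirement_rate=0.02, initial_demand=300_000, demand_growth=0.005)

balance = supply - demand
crossover = years[balance >= 0][0] if (balance >= 0).any() else None
print("year when supply first meets demand:", crossover)
```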

  8. A cross impact methodology for the assessment of US telecommunications system with application to fiber optics development: Executive summary

    NASA Technical Reports Server (NTRS)

    Martino, J. P.; Lenz, R. C., Jr.; Chen, K. L.

    1979-01-01

    A cross impact model of the U.S. telecommunications system was developed. For this model, it was necessary to prepare forecasts of the major segments of the telecommunications system, such as satellites, telephone, TV, CATV, radio broadcasting, etc. In addition, forecasts were prepared of the traffic generated by a variety of new or expanded services, such as electronic check clearing and point of sale electronic funds transfer. Finally, the interactions among the forecasts were estimated (the cross impacts). Both the forecasts and the cross impacts were used as inputs to the cross impact model, which could then be used to simulate the future growth of the entire U.S. telecommunications system. By varying the inputs, technology changes or policy decisions with regard to any segment of the system could be evaluated in the context of the remainder of the system. To illustrate the operation of the model, a specific study was made of the deployment of fiber optics throughout the telecommunications system.

  9. A cross impact methodology for the assessment of US telecommunications system with application to fiber optics development, volume 1

    NASA Technical Reports Server (NTRS)

    Martino, J. P.; Lenz, R. C., Jr.; Chen, K. L.; Kahut, P.; Sekely, R.; Weiler, J.

    1979-01-01

    A cross impact model of the U.S. telecommunications system was developed. It was necessary to prepare forecasts of the major segments of the telecommunications system, such as satellites, telephone, TV, CATV, radio broadcasting, etc. In addition, forecasts were prepared of the traffic generated by a variety of new or expanded services, such as electronic check clearing and point of sale electronic funds transfer. Finally, the interactions among the forecasts were estimated (the cross impacts). Both the forecasts and the cross impacts were used as inputs to the cross impact model, which could then be used to simulate the future growth of the entire U.S. telecommunications system. By varying the inputs, technology changes or policy decisions with regard to any segment of the system could be evaluated in the context of the remainder of the system. To illustrate the operation of the model, a specific study was made of the deployment of fiber optics throughout the telecommunications system.
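    Cross-impact models of this kind are often run as a Monte Carlo exercise in which each event has a baseline probability and the occurrence of one event shifts the probabilities of the others through an impact matrix. The sketch below shows that generic mechanic with invented events and numbers; it is not the report's model of the telecommunications system.

```python
import numpy as np

def cross_impact_simulation(base_probs, impact, n_runs=20_000, seed=0):
    """Monte Carlo cross-impact run.

    base_probs: (n,) baseline probability of each event occurring.
    impact:     (n, n) additive shift applied to the other events' probabilities
                when event i occurs.
    Returns the adjusted occurrence frequency of each event.
    """
    rng = np.random.default_rng(seed)
    n = len(base_probs)
    counts = np.zeros(n)
    for _ in range(n_runs):
        p = base_probs.copy()
        occurred = np.zeros(n, dtype=bool)
        for i in rng.permutation(n):              # evaluate events in random order
            if rng.random() < np.clip(p[i], 0, 1):
                occurred[i] = True
                p = p + impact[i]                 # event i shifts the remaining events
        counts += occurred
    return counts / n_runs

# Hypothetical events: 0 = widespread fiber deployment, 1 = cheap broadband, 2 = video traffic growth
base = np.array([0.5, 0.4, 0.6])
impact = np.array([[0.0, 0.3, 0.2],               # fiber makes the other two more likely
                   [0.1, 0.0, 0.2],
                   [0.2, 0.1, 0.0]])
print("adjusted probabilities:", np.round(cross_impact_simulation(base, impact), 2))
```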

  10. Do quantitative decadal forecasts from GCMs provide decision relevant skill?

    NASA Astrophysics Data System (ADS)

    Suckling, E. B.; Smith, L. A.

    2012-04-01

    It is widely held that only physics-based simulation models can capture the dynamics required to provide decision-relevant probabilistic climate predictions. This fact in itself provides no evidence that predictions from today's GCMs are fit for purpose. Empirical (data-based) models are employed to make probability forecasts on decadal timescales, where it is argued that these 'physics free' forecasts provide a quantitative 'zero skill' target for the evaluation of forecasts based on more complicated models. It is demonstrated that these zero-skill models are competitive with GCMs on decadal scales for probability forecasts evaluated over the last 50 years. Complications of statistical interpretation due to the 'hindcast' nature of this experiment, and the likely relevance of arguments that the lack of hindcast skill is irrelevant as the signal will soon 'come out of the noise', are discussed. A lack of decision-relevant quantitative skill does not bring the science-based insights of anthropogenic warming into doubt, but it does call for a clear quantification of limits, as a function of lead time, for the spatial and temporal scales on which decisions based on such model output are expected to prove maladaptive. Failing to do so may risk the credibility of science in support of policy in the long term. The performance of a collection of simulation models is evaluated, having transformed ensembles of point forecasts into probability distributions through the kernel dressing procedure [1], according to a selection of proper skill scores [2], and contrasted with purely data-based empirical models. Data-based models are unlikely to yield realistic forecasts for future climate change if the Earth system moves away from the conditions observed in the past, upon which the models are constructed; in this sense the empirical model defines zero skill. When should a decision-relevant simulation model be expected to significantly outperform such empirical models? Probability forecasts up to ten years ahead (decadal forecasts) are considered, both on global and regional spatial scales for surface air temperature. Such decadal forecasts are not only important in terms of providing information on the impacts of near-term climate change, but also from the perspective of climate model validation, as hindcast experiments and a sufficient database of historical observations allow standard forecast verification methods to be used. Simulation models from the ENSEMBLES hindcast experiment [3] are evaluated and contrasted with static forecasts of the observed climatology, persistence forecasts and simple statistical models, called dynamic climatology (DC). It is argued that DC is a more appropriate benchmark in the case of a non-stationary climate. It is found that the ENSEMBLES models do not demonstrate a significant increase in skill relative to the empirical models, even at global scales, over any lead time up to a decade ahead. It is suggested that the construction of, and co-evaluation against, such data-based models become a regular component of the reporting of large simulation model forecasts. The methodology presented may easily be adapted to other forecasting experiments and is expected to influence the design of future experiments. The inclusion of comparisons with dynamic climatology and other data-based approaches provides important information to both scientists and decision makers on which aspects of state-of-the-art simulation forecasts are likely to be fit for purpose. [1] J. Bröcker and L. A. Smith. From ensemble forecasts to predictive distributions, Tellus A, 60(4), 663-678 (2007). [2] J. Bröcker and L. A. Smith. Scoring probabilistic forecasts: The importance of being proper, Weather and Forecasting, 22, 382-388 (2006). [3] F. J. Doblas-Reyes, A. Weisheimer, T. N. Palmer, J. M. Murphy and D. Smith. Forecast quality assessment of the ENSEMBLES seasonal-to-decadal stream 2 hindcasts, ECMWF Technical Memorandum, 621 (2010).
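    The kernel dressing procedure cited above ([1]) turns an ensemble of point forecasts into a continuous predictive density by placing a kernel on each member; a proper score such as the ignorance (logarithmic) score can then be evaluated at the verifying observation. The sketch below shows the basic mechanic with Gaussian kernels and a rule-of-thumb bandwidth; the bandwidth choice and the data are illustrative assumptions, not the ENSEMBLES evaluation.

```python
import numpy as np

def kernel_dressed_density(ensemble, x, bandwidth):
    """Predictive density from an ensemble: mean of Gaussian kernels on each member."""
    z = (x - ensemble[:, None]) / bandwidth
    kernels = np.exp(-0.5 * z ** 2) / (bandwidth * np.sqrt(2 * np.pi))
    return kernels.mean(axis=0)

def ignorance_score(ensemble, obs, bandwidth):
    """Proper logarithmic score: -log2 of the predictive density at the observation."""
    p = kernel_dressed_density(ensemble, np.array([obs]), bandwidth)[0]
    return -np.log2(p + 1e-12)

# Hypothetical decadal ensemble of global-mean temperature anomalies (K) and the outcome
ensemble = np.array([0.32, 0.40, 0.28, 0.45, 0.36, 0.39, 0.31, 0.44])
obs = 0.38
bw = 1.06 * ensemble.std() * ensemble.size ** (-1 / 5)   # Silverman's rule-of-thumb bandwidth
print(f"ignorance score = {ignorance_score(ensemble, obs, bw):.2f} bits")
```

    Lower ignorance scores over many forecast-observation pairs indicate sharper, better-calibrated predictive distributions, which is how a simulation model would have to beat the data-based "zero skill" benchmark.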

  11. Evaluation of the Impact of an Innovative Immunization Practice Model Designed to Improve Population Health: Results of the Project IMPACT Immunizations Pilot.

    PubMed

    Bluml, Benjamin M; Brock, Kelly A; Hamstra, Scott; Tonrey, Lisa

    2018-02-01

    The goal of the initiative was to evaluate the impact of an innovative practice model on the identification of unmet vaccination needs and on vaccination rates. This was accomplished through a prospective, multisite, observational study in 8 community pharmacy practices of adults who received an influenza vaccine with a documented vaccination forecast review from October 22, 2015 through March 22, 2016. When patients presented for influenza vaccinations, pharmacists utilized immunization information systems (IIS) data at the point of care to identify unmet vaccination needs, educate patients, and improve vaccination rates. The main outcome measures were the number of vaccination forecast reviews, patients educated, unmet vaccination needs identified and resolved, and vaccines administered. Pharmacists reviewed vaccination forecasts generated by clinical decision-support technology, based on patient information documented in the IIS, for 1080 patients receiving influenza vaccinations. The vaccination forecasts predicted that there were 1566 additional vaccinations due at the time patients were receiving the influenza vaccine. Pharmacist assessments identified 36 contraindications and 196 potential duplications, leaving a net of 1334 unmet vaccination needs eligible for vaccination. In all, 447 of the 1334 unmet vaccination needs were resolved during the 6-month study period, and the remaining patients received information about their vaccination needs and recommendations to follow up for their vaccinations. Integration of streamlined, principle-centered processes of care in immunization practices that allow pharmacists to utilize actionable point-of-care data resulted in the identification of unmet vaccination needs, education of patients about their vaccination needs, a 41.4% increase in the number of vaccines administered, and significant improvements in routinely recommended adult vaccination rates.

  12. Tsunami Forecasting in the Atlantic Basin

    NASA Astrophysics Data System (ADS)

    Knight, W. R.; Whitmore, P.; Sterling, K.; Hale, D. A.; Bahng, B.

    2012-12-01

    The mission of the West Coast and Alaska Tsunami Warning Center (WCATWC) is to provide advance tsunami warning and guidance to coastal communities within its Area-of-Responsibility (AOR). Predictive tsunami models, based on the shallow water wave equations, are an important part of the Center's guidance support. An Atlantic-based counterpart to the long-standing forecasting ability in the Pacific, known as the Alaska Tsunami Forecast Model (ATFM), has now been developed. The Atlantic forecasting method is based on ATFM version 2, which contains advanced capabilities over the original model, including better handling of the dynamic interactions between grids, inundation over dry land, new forecast model products, an optional non-hydrostatic approach, and the ability to pre-compute larger and more finely gridded regions using parallel computational techniques. The wide and nearly continuous Atlantic shelf region presents a challenge for forecast models. Our solution to this problem has been to develop a single unbroken high resolution sub-mesh (currently 30 arc-seconds), trimmed to the shelf break. This allows for edge wave propagation and for kilometer-scale bathymetric feature resolution. Terminating the fine mesh at the 2000 m isobath keeps the number of grid points manageable while allowing a coarse (4 minute) mesh to adequately resolve deep water tsunami dynamics. Higher resolution sub-meshes are then included around coastal forecast points of interest. The WCATWC Atlantic AOR includes the eastern U.S. and Canada, the U.S. Gulf of Mexico, Puerto Rico, and the Virgin Islands. Puerto Rico and the Virgin Islands are in very close proximity to well-known tsunami sources. Because travel times are under an hour and response must be immediate, our focus is on pre-computing many tsunami source "scenarios" and compiling those results into a database that can be accessed and calibrated with observations during an event. Seismic source evaluation determines the order of model pre-computation, starting with those sources that carry the highest risk. Model computation zones are confined to regions at risk to save computation time. For example, Atlantic sources have been shown not to propagate into the Gulf of Mexico; therefore, fine grid computations are not performed in the Gulf for Atlantic sources. Outputs from the Atlantic model include forecast marigrams at selected sites, maximum amplitudes, drawdowns, and currents for all coastal points. The maximum amplitude maps will be supplemented with contoured energy flux maps, which show more clearly the effects of bathymetric features on tsunami wave propagation. During an event, forecast marigrams will be compared to observations to adjust the model results. The modified forecasts will then be used to set alert levels between coastal breakpoints, and provided to emergency management.

  13. An Evaluation of Alternatives for Processing of Administrative Pay Vouchers: A Simulation Approach.

    DTIC Science & Technology

    1982-09-01

    Keywords: Finance; Travel Voucher; Q-GERT; Productivity; Personnel Forecasts; Simulation Model. Abstract (truncated): ...Finance Office (ACF) has devised a Point System for use in determining the productivity of the ACF Travel Section (ACFTT). This Point System sets values...5 to 5+) to be assigned to incoming travel vouchers based on voucher complexity. This research had set objectives of (1) building an ACFTT model that...

  14. Validation of Seasonal Forecast of Indian Summer Monsoon Rainfall

    NASA Astrophysics Data System (ADS)

    Das, Sukanta Kumar; Deb, Sanjib Kumar; Kishtawal, C. M.; Pal, Pradip Kumar

    2015-06-01

    The experimental seasonal forecasting of Indian summer monsoon (ISM) rainfall during June through September using the Community Atmosphere Model (CAM) version 3 has been carried out at the Space Applications Centre, Ahmedabad, since 2009. The forecasts, based on a number of ensemble members (ten at minimum) of CAM, are generated in several phases and updated on a regular basis. On completion of 5 years of experimental seasonal forecasts in operational mode, it is necessary to quantify the overall correctness of the forecast system and to assess the scope for further improvement of the forecasts over time, if any. The ensemble model climatology generated by a set of 20 identical CAM simulations is considered as the model control simulation, and the performance of the forecasts has been evaluated by taking the control simulation as the model reference. The forecast improvement factor shows positive improvements, with higher values for the more recent forecast years compared to the control experiment over the Indian landmass. The Taylor diagram representation of the Pearson correlation coefficient (PCC), standard deviation and centered root mean square difference has been used to demonstrate the best PCC, of the order of 0.74-0.79, recorded for the seasonal forecast made during 2013. Further, the bias scores of the different phases of the experiment reveal that the ISM rainfall forecast is affected by overestimation of low rain rates (less than 7 mm/day) but by underestimation of medium and high rain rates (higher than 11 mm/day). Overall, the analysis shows significant improvement of the ISM forecast over the last 5 years, viz. 2009-2013, due to several important modifications that have been implemented in the forecast system. The validation exercise has also pointed out a number of shortcomings in the forecast system; these will be addressed in the upcoming years of experiments to improve the quality of the ISM prediction.
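    The three quantities summarized on a Taylor diagram (Pearson correlation, standard deviations and centered root mean square difference) are linked by E'^2 = sigma_f^2 + sigma_o^2 - 2 sigma_f sigma_o rho, where E' is the centered RMS difference. The sketch below computes them for made-up rainfall fields, not the CAM forecasts.

```python
import numpy as np

def taylor_statistics(forecast, reference):
    """Pearson correlation, standard deviations and centered RMS difference."""
    f = forecast - forecast.mean()
    r = reference - reference.mean()
    sigma_f, sigma_r = f.std(), r.std()
    rho = np.mean(f * r) / (sigma_f * sigma_r)
    crmsd = np.sqrt(np.mean((f - r) ** 2))   # equals sqrt(sf^2 + sr^2 - 2*sf*sr*rho)
    return rho, sigma_f, sigma_r, crmsd

# Hypothetical seasonal-mean rainfall (mm/day) at a set of grid points
rng = np.random.default_rng(3)
obs = 6 + rng.normal(0, 2, 500)
fcst = 0.8 * obs + rng.normal(0, 1.2, 500) + 1.0      # correlated but imperfect forecast
rho, sf, so, crmsd = taylor_statistics(fcst, obs)
print(f"PCC={rho:.2f}  sigma_f={sf:.2f}  sigma_o={so:.2f}  centered RMSD={crmsd:.2f}")
```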

  15. Adaptive Blending of Model and Observations for Automated Short-Range Forecasting: Examples from the Vancouver 2010 Olympic and Paralympic Winter Games

    NASA Astrophysics Data System (ADS)

    Bailey, Monika E.; Isaac, George A.; Gultepe, Ismail; Heckman, Ivan; Reid, Janti

    2014-01-01

    An automated short-range forecasting system, adaptive blending of observations and model (ABOM), was tested in real time during the 2010 Vancouver Olympic and Paralympic Winter Games in British Columbia. Data at 1-min time resolution were available from a newly established, dense network of surface observation stations. Climatological data were not available at these new stations. This, combined with output from new high-resolution numerical models, provided a unique and exciting setting to test nowcasting systems in mountainous terrain during winter weather conditions. The ABOM method blends extrapolations in time of recent local observations with numerical weather prediction (NWP) model forecasts to generate short-range point forecasts of surface variables out to 6 h. The relative weights of the model forecast and the observation extrapolation are based on performance over recent history. The average performance of ABOM nowcasts during February and March 2010 was evaluated using standard scores and thresholds important for Olympic events. Significant improvements over the model forecasts alone were obtained for continuous variables such as temperature, relative humidity and wind speed. The small improvements to forecasts of variables such as visibility and ceiling, which are subject to discontinuous changes, are attributed to the persistence component of ABOM.
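    The core blending idea, weighting the NWP forecast against an extrapolation of recent observations according to how each has performed lately, can be sketched in a few lines. The inverse-MAE weighting rule and the numbers below are illustrative assumptions rather than the ABOM specification.

```python
import numpy as np

def blend_nowcast(obs_trend_fcst, nwp_fcst, recent_obs_err, recent_nwp_err):
    """Weight two forecasts by the inverse of their recent mean absolute errors."""
    w_obs = 1.0 / (np.mean(np.abs(recent_obs_err)) + 1e-9)
    w_nwp = 1.0 / (np.mean(np.abs(recent_nwp_err)) + 1e-9)
    w_obs, w_nwp = w_obs / (w_obs + w_nwp), w_nwp / (w_obs + w_nwp)
    return w_obs * obs_trend_fcst + w_nwp * nwp_fcst, (w_obs, w_nwp)

# Hypothetical temperature nowcast for +2 h at one venue station
extrapolated = -3.2          # persistence/trend extrapolation of the 1-min observations
nwp = -1.5                   # high-resolution NWP forecast for the same time
errs_extrap = np.array([0.3, 0.5, 0.4, 0.6])   # its errors over recent hours (K)
errs_nwp = np.array([1.2, 0.9, 1.5, 1.1])
fcst, (w_o, w_n) = blend_nowcast(extrapolated, nwp, errs_extrap, errs_nwp)
print(f"blended T = {fcst:.1f} C  (weights: extrapolation {w_o:.2f}, NWP {w_n:.2f})")
```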

  16. Adaptation of Mesoscale Weather Models to Local Forecasting

    NASA Technical Reports Server (NTRS)

    Manobianco, John T.; Taylor, Gregory E.; Case, Jonathan L.; Dianic, Allan V.; Wheeler, Mark W.; Zack, John W.; Nutter, Paul A.

    2003-01-01

    Methodologies have been developed for (1) configuring mesoscale numerical weather-prediction models for execution on high-performance computer workstations to make short-range weather forecasts for the vicinity of the Kennedy Space Center (KSC) and the Cape Canaveral Air Force Station (CCAFS) and (2) evaluating the performances of the models as configured. These methodologies have been implemented as part of a continuing effort to improve weather forecasting in support of operations of the U.S. space program. The models, methodologies, and results of the evaluations also have potential value for commercial users who could benefit from tailoring their operations and/or marketing strategies based on accurate predictions of local weather. More specifically, the purpose of developing the methodologies for configuring the models to run on computers at KSC and CCAFS is to provide accurate forecasts of winds, temperature, and such specific thunderstorm-related phenomena as lightning and precipitation. The purpose of developing the evaluation methodologies is to maximize the utility of the models by providing users with assessments of the capabilities and limitations of the models. The models used in this effort thus far include the Mesoscale Atmospheric Simulation System (MASS), the Regional Atmospheric Modeling System (RAMS), and the National Centers for Environmental Prediction Eta Model ( Eta for short). The configuration of the MASS and RAMS is designed to run the models at very high spatial resolution and incorporate local data to resolve fine-scale weather features. Model preprocessors were modified to incorporate surface, ship, buoy, and rawinsonde data as well as data from local wind towers, wind profilers, and conventional or Doppler radars. The overall evaluation of the MASS, Eta, and RAMS was designed to assess the utility of these mesoscale models for satisfying the weather-forecasting needs of the U.S. space program. The evaluation methodology includes objective and subjective verification methodologies. Objective (e.g., statistical) verification of point forecasts is a stringent measure of model performance, but when used alone, it is not usually sufficient for quantifying the value of the overall contribution of the model to the weather-forecasting process. This is especially true for mesoscale models with enhanced spatial and temporal resolution that may be capable of predicting meteorologically consistent, though not necessarily accurate, fine-scale weather phenomena. Therefore, subjective (phenomenological) evaluation, focusing on selected case studies and specific weather features, such as sea breezes and precipitation, has been performed to help quantify the added value that cannot be inferred solely from objective evaluation.

  17. A global flash flood forecasting system

    NASA Astrophysics Data System (ADS)

    Baugh, Calum; Pappenberger, Florian; Wetterhall, Fredrik; Hewson, Tim; Zsoter, Ervin

    2016-04-01

    The sudden and devastating nature of flash flood events means it is imperative to provide early warnings such as those derived from Numerical Weather Prediction (NWP) forecasts. Currently such systems exist on basin, national and continental scales in Europe, North America and Australia but rely on high resolution NWP forecasts or rainfall-radar nowcasting, neither of which have global coverage. To produce global flash flood forecasts this work investigates the possibility of using forecasts from a global NWP system. In particular we: (i) discuss how global NWP can be used for flash flood forecasting and discuss strengths and weaknesses; (ii) demonstrate how a robust evaluation can be performed given the rarity of the event; (iii) highlight the challenges and opportunities in communicating flash flood uncertainty to decision makers; and (iv) explore future developments which would significantly improve global flash flood forecasting. The proposed forecast system uses ensemble surface runoff forecasts from the ECMWF H-TESSEL land surface scheme. A flash flood index is generated using the ERIC (Enhanced Runoff Index based on Climatology) methodology [Raynaud et al., 2014]. This global methodology is applied to a series of flash floods across southern Europe. Results from the system are compared against warnings produced using the higher resolution COSMO-LEPS limited area model. The global system is evaluated by comparing forecasted warning locations against a flash flood database of media reports created in partnership with floodlist.com. To deal with the lack of objectivity in media reports we carefully assess the suitability of different skill scores and apply spatial uncertainty thresholds to the observations. To communicate the uncertainties of the flash flood system output we experiment with a dynamic region-growing algorithm. This automatically clusters regions of similar return period exceedence probabilities, thus presenting the at-risk areas at a spatial resolution appropriate to the NWP system. We then demonstrate how these warning areas could eventually complement existing global systems such as the Global Flood Awareness System (GloFAS), to give warnings of flash floods. This work demonstrates the possibility of creating a global flash flood forecasting system based on forecasts from existing global NWP systems. Future developments, in post-processing for example, will need to address an under-prediction bias, for extreme point rainfall, that is innate to current-generation global models.

  18. Parametric analysis of parameters for electrical-load forecasting using artificial neural networks

    NASA Astrophysics Data System (ADS)

    Gerber, William J.; Gonzalez, Avelino J.; Georgiopoulos, Michael

    1997-04-01

    Accurate total system electrical load forecasting is a necessary part of resource management for power generation companies. The better the hourly load forecast, the more closely the power generation assets of the company can be configured to minimize the cost. Automating this process is a profitable goal and neural networks should provide an excellent means of doing the automation. However, prior to developing such a system, the optimal set of input parameters must be determined. The approach of this research was to determine what those inputs should be through a parametric study of potentially good inputs. Input parameters tested were ambient temperature, total electrical load, the day of the week, humidity, dew point temperature, daylight savings time, length of daylight, season, forecast light index and forecast wind velocity. For testing, a limited number of temperatures and total electrical loads were used as a basic reference input parameter set. Most parameters showed some forecasting improvement when added individually to the basic parameter set. Significantly, major improvements were exhibited with the day of the week, dew point temperatures, additional temperatures and loads, forecast light index and forecast wind velocity.
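    The parametric study described above amounts to training the same network repeatedly with different candidate input sets and comparing the resulting forecast errors. Below is a minimal scikit-learn version of that experiment on synthetic hourly data; the candidate input sets and the data generator are invented for illustration, not those of the study.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split

# Synthetic hourly data: load depends on temperature, weekday and the previous hour's load
rng = np.random.default_rng(4)
n = 2000
hour = np.arange(n)
temp = 20 + 10 * np.sin(2 * np.pi * hour / 24) + rng.normal(0, 2, n)
dow = (hour // 24) % 7                                   # day of the week (0-6)
load = 1000 + 15 * temp + 50 * (dow < 5) + rng.normal(0, 20, n)
load_prev = np.roll(load, 1)                             # previous hour's load

candidate_inputs = {
    "temperature only": np.column_stack([temp]),
    "temperature + previous load": np.column_stack([temp, load_prev]),
    "temperature + previous load + day of week": np.column_stack([temp, load_prev, dow]),
}

for name, X in candidate_inputs.items():
    # drop the first row, whose lagged load wrapped around via np.roll
    X_tr, X_te, y_tr, y_te = train_test_split(X[1:], load[1:], test_size=0.25, random_state=0)
    model = make_pipeline(StandardScaler(),
                          MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0))
    model.fit(X_tr, y_tr)
    mae = np.mean(np.abs(model.predict(X_te) - y_te))
    print(f"{name:42s} MAE = {mae:6.1f}")
```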

  19. Comparison between stochastic and machine learning methods for hydrological multi-step ahead forecasting: All forecasts are wrong!

    NASA Astrophysics Data System (ADS)

    Papacharalampous, Georgia; Tyralis, Hristos; Koutsoyiannis, Demetris

    2017-04-01

    Machine learning (ML) is considered to be a promising approach to hydrological processes forecasting. We conduct a comparison between several stochastic and ML point estimation methods by performing large-scale computational experiments based on simulations. The purpose is to provide generalized results, while the respective comparisons in the literature are usually based on case studies. The stochastic methods used include simple methods, models from the frequently used families of Autoregressive Moving Average (ARMA), Autoregressive Fractionally Integrated Moving Average (ARFIMA) and Exponential Smoothing models. The ML methods used are Random Forests (RF), Support Vector Machines (SVM) and Neural Networks (NN). The comparison refers to the multi-step ahead forecasting properties of the methods. A total of 20 methods are used, among which 9 are the ML methods. 12 simulation experiments are performed, while each of them uses 2 000 simulated time series of 310 observations. The time series are simulated using stochastic processes from the families of ARMA and ARFIMA models. Each time series is split into a fitting (first 300 observations) and a testing set (last 10 observations). The comparative assessment of the methods is based on 18 metrics, that quantify the methods' performance according to several criteria related to the accurate forecasting of the testing set, the capturing of its variation and the correlation between the testing and forecasted values. The most important outcome of this study is that there is not a uniformly better or worse method. However, there are methods that are regularly better or worse than others with respect to specific metrics. It appears that, although a general ranking of the methods is not possible, their classification based on their similar or contrasting performance in the various metrics is possible to some extent. Another important conclusion is that more sophisticated methods do not necessarily provide better forecasts compared to simpler methods. It is pointed out that the ML methods do not differ dramatically from the stochastic methods, while it is interesting that the NN, RF and SVM algorithms used in this study offer potentially very good performance in terms of accuracy. It should be noted that, although this study focuses on hydrological processes, the results are of general scientific interest. Another important point in this study is the use of several methods and metrics. Using fewer methods and fewer metrics would have led to a very different overall picture, particularly if those fewer metrics corresponded to fewer criteria. For this reason, we consider that the proposed methodology is appropriate for the evaluation of forecasting methods.
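    A scaled-down version of this experimental design, simulating many series from a known stochastic process, fitting competing methods to the first part and scoring multi-step forecasts of the last part with several metrics, is sketched below with two simple methods and three metrics; the full study uses 20 methods and 18 metrics.

```python
import numpy as np

rng = np.random.default_rng(5)

def simulate_ar1(n, phi=0.7):
    """Generate one realization of an AR(1) process."""
    x = np.zeros(n)
    for t in range(1, n):
        x[t] = phi * x[t - 1] + rng.normal()
    return x

def naive_forecast(train, horizon):
    return np.full(horizon, train[-1])          # repeat the last observed value

def ar1_forecast(train, horizon):
    y, ylag = train[1:], train[:-1]
    phi = np.dot(ylag, y) / np.dot(ylag, ylag)  # least-squares AR(1) coefficient
    return train[-1] * phi ** np.arange(1, horizon + 1)

def metrics(obs, pred):
    err = pred - obs
    return {"RMSE": np.sqrt(np.mean(err ** 2)),
            "MAE": np.mean(np.abs(err)),
            "bias": np.mean(err)}

results = {"naive": [], "AR(1)": []}
for _ in range(200):                            # 200 simulated series of 310 values
    series = simulate_ar1(310)
    train, test = series[:300], series[300:]    # last 10 values form the testing set
    results["naive"].append(metrics(test, naive_forecast(train, 10)))
    results["AR(1)"].append(metrics(test, ar1_forecast(train, 10)))

for method, scores in results.items():
    summary = {k: round(float(np.mean([s[k] for s in scores])), 2) for k in scores[0]}
    print(method, summary)
```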

  20. Optimising seasonal streamflow forecast lead time for operational decision making in Australia

    NASA Astrophysics Data System (ADS)

    Schepen, Andrew; Zhao, Tongtiegang; Wang, Q. J.; Zhou, Senlin; Feikema, Paul

    2016-10-01

    Statistical seasonal forecasts of 3-month streamflow totals are released in Australia by the Bureau of Meteorology and updated on a monthly basis. The forecasts are often released in the second week of the forecast period, due to the onerous forecast production process. The current service relies on models built using data for complete calendar months, meaning the forecast production process cannot begin until the first day of the forecast period. Somehow, the bureau needs to transition to a service that provides forecasts before the beginning of the forecast period; timelier forecast release will become critical as sub-seasonal (monthly) forecasts are developed. Increasing the forecast lead time to one month ahead is not considered a viable option for Australian catchments that typically lack any predictability associated with snowmelt. The bureau's forecasts are built around Bayesian joint probability models that have antecedent streamflow, rainfall and climate indices as predictors. In this study, we adapt the modelling approach so that forecasts have any number of days of lead time. Daily streamflow and sea surface temperatures are used to develop predictors based on 28-day sliding windows. Forecasts are produced for 23 forecast locations with 0-14- and 21-day lead time. The forecasts are assessed in terms of continuous ranked probability score (CRPS) skill score and reliability metrics. CRPS skill scores, on average, reduce monotonically with increase in days of lead time, although both positive and negative differences are observed. Considering only skilful forecast locations, CRPS skill scores at 7-day lead time are reduced on average by 4 percentage points, with differences largely contained within +5 to -15 percentage points. A flexible forecasting system that allows for any number of days of lead time could benefit Australian seasonal streamflow forecast users by allowing more time for forecasts to be disseminated, comprehended and made use of prior to the commencement of a forecast season. The system would allow for forecasts to be updated if necessary.
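    The adaptation described above, building predictors from 28-day sliding windows that end a chosen number of days before the forecast period, can be illustrated with a small pandas sketch. The antecedent-flow predictor and the synthetic daily streamflow below are placeholders, not part of the Bayesian joint probability model itself.

```python
import numpy as np
import pandas as pd

def sliding_window_predictor(daily_series, season_start, lead_days, window=28):
    """Mean of the `window`-day period ending `lead_days` before the season start."""
    end = season_start - pd.Timedelta(days=lead_days)
    start = end - pd.Timedelta(days=window)
    return daily_series.loc[start:end - pd.Timedelta(days=1)].mean()

# Hypothetical daily streamflow (ML/day) for one catchment
idx = pd.date_range("2015-01-01", "2016-12-31", freq="D")
rng = np.random.default_rng(6)
flow = pd.Series(50 + 10 * np.sin(2 * np.pi * idx.dayofyear / 365) +
                 rng.normal(0, 5, idx.size), index=idx)

season_start = pd.Timestamp("2016-07-01")           # the 3-month forecast period begins here
for lead in (0, 7, 14, 21):
    x = sliding_window_predictor(flow, season_start, lead)
    print(f"lead {lead:2d} days: antecedent-flow predictor = {x:.1f} ML/day")
```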

  1. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mendes, J.; Bessa, R.J.; Keko, H.

    Wind power forecasting (WPF) provides important inputs to power system operators and electricity market participants. It is therefore not surprising that WPF has attracted increasing interest within the electric power industry. In this report, we document our research on improving statistical WPF algorithms for point, uncertainty, and ramp forecasting. Below, we provide a brief introduction to the research presented in the following chapters. For a detailed overview of the state-of-the-art in wind power forecasting, we refer to [1]. Our related work on the application of WPF in operational decisions is documented in [2]. Point forecasts of wind power are highly dependent on the training criteria used in the statistical algorithms that are used to convert weather forecasts and observational data to a power forecast. In Chapter 2, we explore the application of information theoretic learning (ITL) as opposed to the classical minimum square error (MSE) criterion for point forecasting. In contrast to the MSE criterion, ITL criteria do not assume a Gaussian distribution of the forecasting errors. We investigate to what extent ITL criteria yield better results. In addition, we analyze time-adaptive training algorithms and how they enable WPF algorithms to cope with non-stationary data and, thus, to adapt to new situations without requiring additional offline training of the model. We test the new point forecasting algorithms on two wind farms located in the U.S. Midwest. Although there have been advancements in deterministic WPF, a single-valued forecast cannot provide information on the dispersion of observations around the predicted value. We argue that it is essential to generate, together with (or as an alternative to) point forecasts, a representation of the wind power uncertainty. Wind power uncertainty representation can take the form of probabilistic forecasts (e.g., probability density function, quantiles), risk indices (e.g., prediction risk index) or scenarios (with spatial and/or temporal dependence). Statistical approaches to uncertainty forecasting basically consist of estimating the uncertainty based on observed forecasting errors. Quantile regression (QR) is currently a commonly used approach in uncertainty forecasting. In Chapter 3, we propose new statistical approaches to the uncertainty estimation problem by employing kernel density forecast (KDF) methods. We use two estimators in both offline and time-adaptive modes, namely, the Nadaraya-Watson (NW) and Quantile-copula (QC) estimators. We conduct detailed tests of the new approaches using QR as a benchmark. One of the major issues in wind power generation is sudden and large changes of wind power output over a short period of time, namely ramping events. In Chapter 4, we perform a comparative study of existing definitions and methodologies for ramp forecasting. We also introduce a new probabilistic method for ramp event detection. The method starts with a stochastic algorithm that generates wind power scenarios, which are passed through a high-pass filter for ramp detection and estimation of the likelihood of ramp events to happen. The report is organized as follows: Chapter 2 presents the results of the application of ITL training criteria to deterministic WPF; Chapter 3 reports the study on probabilistic WPF, including new contributions to wind power uncertainty forecasting; Chapter 4 presents a new method to predict and visualize ramp events, comparing it with state-of-the-art methodologies; Chapter 5 briefly summarizes the main findings and contributions of this report.

  2. Evaluation of ensemble forecast uncertainty using a new proper score: application to medium-range and seasonal forecasts

    NASA Astrophysics Data System (ADS)

    Christensen, Hannah; Moroz, Irene; Palmer, Tim

    2015-04-01

    Forecast verification is important across scientific disciplines as it provides a framework for evaluating the performance of a forecasting system. In the atmospheric sciences, probabilistic skill scores are often used for verification as they provide a way of unambiguously ranking the performance of different probabilistic forecasts. In order to be useful, a skill score must be proper -- it must encourage honesty in the forecaster, and reward forecasts which are reliable and which have good resolution. A new score, the Error-spread Score (ES), is proposed which is particularly suitable for evaluation of ensemble forecasts. It is formulated with respect to the moments of the forecast. The ES is confirmed to be a proper score, and is therefore sensitive to both resolution and reliability. The ES is tested on forecasts made using the Lorenz '96 system, and found to be useful for summarising the skill of the forecasts. The European Centre for Medium-Range Weather Forecasts (ECMWF) ensemble prediction system (EPS) is evaluated using the ES. Its performance is compared to a perfect statistical probabilistic forecast -- the ECMWF high resolution deterministic forecast dressed with the observed error distribution. This generates a forecast that is perfectly reliable if considered over all time, but which does not vary from day to day with the predictability of the atmospheric flow. The ES distinguishes between the dynamically reliable EPS forecasts and the statically reliable dressed deterministic forecasts. Other skill scores are tested and found to be comparatively insensitive to this desirable forecast quality. The ES is used to evaluate seasonal range ensemble forecasts made with the ECMWF System 4. The ensemble forecasts are found to be skilful when compared with climatological or persistence forecasts, though this skill is dependent on region and time of year.

  3. Business Planning in the Light of Neuro-fuzzy and Predictive Forecasting

    NASA Astrophysics Data System (ADS)

    Chakrabarti, Prasun; Basu, Jayanta Kumar; Kim, Tai-Hoon

    In this paper we have pointed out gain sensing based on forecasting techniques. We have cited an idea of neural-network-based gain forecasting. Testing of the sequence of gain patterns is also verified using statistical analysis of fuzzy value assignment. The paper also suggests realizing a stable gain condition using K-Means clustering from data mining. A new concept of 3D-based gain sensing has been pointed out. The paper also reveals what type of trend analysis can be observed for probabilistic gain prediction.

  4. Development and Evaluation of a Gridded CrIS/ATMS Visualization for Operational Forecasting

    NASA Astrophysics Data System (ADS)

    Zavodsky, B.; Smith, N.; Dostalek, J.; Stevens, E.; Nelson, K.; Weisz, E.; Berndt, E.; Line, W.; Barnet, C.; Gambacorta, A.; Reale, A.; Hoese, D.

    2016-12-01

    Upper-air observations from radiosondes are limited in spatial coverage and are primarily launched only at synoptic times, potentially missing evolving air masses. For forecast challenges which require diagnosis of the three-dimensional extent of the atmosphere, these observations may not be enough for forecasters. Currently, forecasters rely on model output alongside the sparse network of radiosondes for characterizing the three-dimensional atmosphere. However, satellite information can help fill in the spatial and temporal gaps in radiosonde observations. In particular, the NOAA-Unique Combined Atmospheric Processing System (NUCAPS) combines infrared soundings from the Cross-track Infrared Sounder (CrIS) with the Advanced Technology Microwave Sounder (ATMS) to retrieve profiles of temperature and moisture. NUCAPS retrievals are available in a wide swath of observations with approximately 45-km spatial resolution at nadir and a local Equator crossing time of 1:30 A.M./P.M., enabling three-dimensional observations at asynoptic times. For forecasters to make the best use of these observations, these satellite-based soundings must be displayed in the National Weather Service's decision support system, the Advanced Weather Interactive Processing System (AWIPS). NUCAPS profiles are currently available in AWIPS as point observations that can be displayed on Skew-T diagrams. This presentation discusses the development of a new visualization capability for NUCAPS within AWIPS that will allow the data to be viewed in gridded horizontal maps or as vertical cross sections, giving forecasters additional tools for diagnosing atmospheric features. Forecaster feedback and examples of operational applications from two testbed activities will be highlighted. First is a product evaluation at the Hazardous Weather Testbed for severe weather, such as high winds, large hail, and tornadoes, where the vertical distribution of temperature and moisture ahead of frontal boundaries was assessed. Second is a product evaluation with the Alaska Center Weather Service Unit for cold air aloft, where the detection of the three-dimensional extent of outside air temperatures lower than -65°C (temperatures at which jet fuel may begin to freeze) was assessed.

  5. Resolution of Probabilistic Weather Forecasts with Application in Disease Management.

    PubMed

    Hughes, G; McRoberts, N; Burnett, F J

    2017-02-01

    Predictive systems in disease management often incorporate weather data among the disease risk factors, and sometimes this comes in the form of forecast weather data rather than observed weather data. In such cases, it is useful to have an evaluation of the operational weather forecast, in addition to the evaluation of the disease forecasts provided by the predictive system. Typically, weather forecasts and disease forecasts are evaluated using different methodologies. However, the information theoretic quantity expected mutual information provides a basis for evaluating both kinds of forecast. Expected mutual information is an appropriate metric for the average performance of a predictive system over a set of forecasts. Both relative entropy (a divergence, measuring information gain) and specific information (an entropy difference, measuring change in uncertainty) provide a basis for the assessment of individual forecasts.
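    A minimal sketch of the two single-forecast measures mentioned above, relative entropy (information gain) and specific information (change in uncertainty), for a binary disease-risk event; the prior and posterior probabilities are illustrative numbers only.

```python
import numpy as np

def entropy(p):
    """Shannon entropy (bits) of a binary event with probability p."""
    return -sum(x * np.log2(x) for x in (p, 1.0 - p) if x > 0)

def relative_entropy(posterior, prior):
    """Kullback-Leibler divergence (bits) of posterior from prior: information gain."""
    total = 0.0
    for post, pri in zip((posterior, 1 - posterior), (prior, 1 - prior)):
        if post > 0:
            total += post * np.log2(post / pri)
    return total

def specific_information(posterior, prior):
    """Entropy difference: reduction in uncertainty produced by the forecast."""
    return entropy(prior) - entropy(posterior)

prior = 0.2        # climatological probability of a disease outbreak (illustrative)
posterior = 0.7    # probability after seeing the risk-factor forecast (illustrative)
print(f"relative entropy:     {relative_entropy(posterior, prior):.3f} bits")
print(f"specific information: {specific_information(posterior, prior):.3f} bits")
```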

  6. Smoothing strategies combined with ARIMA and neural networks to improve the forecasting of traffic accidents.

    PubMed

    Barba, Lida; Rodríguez, Nibaldo; Montt, Cecilia

    2014-01-01

    Two smoothing strategies combined with autoregressive integrated moving average (ARIMA) and autoregressive neural networks (ANNs) models to improve the forecasting of time series are presented. The strategy of forecasting is implemented using two stages. In the first stage the time series is smoothed using either 3-point moving average smoothing or Singular Value Decomposition of the Hankel matrix (HSVD). In the second stage, an ARIMA model and two ANNs for one-step-ahead time series forecasting are used. The coefficients of the first ANN are estimated through the particle swarm optimization (PSO) learning algorithm, while the coefficients of the second ANN are estimated with the resilient backpropagation (RPROP) learning algorithm. The proposed models are evaluated using a weekly time series of traffic accidents of the Valparaíso region, Chile, from 2003 to 2012. The best result is given by the combination HSVD-ARIMA, with a MAPE of 0.26%, followed by MA-ARIMA with a MAPE of 1.12%; the worst result is given by the MA-ANN based on PSO with a MAPE of 15.51%.
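    A minimal sketch of the first combination described above (3-point moving-average smoothing followed by rolling one-step-ahead ARIMA forecasts); the synthetic weekly series, the ARIMA order and the test window are illustrative, and the HSVD and PSO/RPROP neural variants are not reproduced here.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

def moving_average_3(series):
    """Stage one: 3-point centred moving average smoothing."""
    return pd.Series(series).rolling(window=3, center=True).mean().bfill().ffill().values

def one_step_forecasts(series, order=(1, 1, 1), n_test=52):
    """Stage two: rolling one-step-ahead ARIMA forecasts over the last n_test points."""
    preds = []
    for t in range(len(series) - n_test, len(series)):
        fit = ARIMA(series[:t], order=order).fit()
        preds.append(fit.forecast(steps=1)[0])
    return np.array(preds)

rng = np.random.default_rng(0)
# Synthetic weekly accident counts with an annual cycle (placeholder data).
accidents = 40 + 5 * np.sin(np.arange(520) * 2 * np.pi / 52) + rng.normal(0, 3, 520)

smoothed = moving_average_3(accidents)
pred = one_step_forecasts(smoothed)
actual = accidents[-52:]
mape = 100 * np.mean(np.abs((actual - pred) / actual))
print(f"MA-ARIMA MAPE over the last year: {mape:.2f} %")
```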

  7. Multivariate time series modeling of short-term system scale irrigation demand

    NASA Astrophysics Data System (ADS)

    Perera, Kushan C.; Western, Andrew W.; George, Biju; Nawarathna, Bandara

    2015-12-01

    Travel time limits the ability of irrigation system operators to react to short-term irrigation demand fluctuations that result from variations in weather, including very hot periods and rainfall events, as well as the various other pressures and opportunities that farmers face. Short-term system-wide irrigation demand forecasts can assist in system operation. Here we developed a multivariate time series (ARMAX) model to forecast irrigation demands for aggregated service point flows (IDCGi,ASP) and off-take regulator flows (IDCGi,OTR) across 5 command areas, which included the areas covered by four irrigation channels and the whole study area. These command-area-specific ARMAX models forecast daily IDCGi,ASP and IDCGi,OTR 1-5 days ahead using real-time flow data recorded at the service points and the uppermost regulators, together with observed meteorological data collected from automatic weather stations. The model efficiency and the predictive performance were quantified using the root mean squared error (RMSE), Nash-Sutcliffe model efficiency coefficient (NSE), anomaly correlation coefficient (ACC) and mean square skill score (MSSS). During the evaluation period, NSE for IDCGi,ASP and IDCGi,OTR across the 5 command areas ranged from 0.78 to 0.98. These models were capable of generating skillful forecasts (MSSS ⩾ 0.5 and ACC ⩾ 0.6) of IDCGi,ASP and IDCGi,OTR for all 5 lead days, and the forecasts were better than using the long-term monthly mean irrigation demand. Overall, the predictive performance of the ARMAX time series models was higher than in almost all previous studies we are aware of. Further, the IDCGi,ASP and IDCGi,OTR forecasts improved the operators' ability to react to near-future irrigation demand fluctuations, as the developed ARMAX time series models were self-adaptive and reflect short-term changes in irrigation demand driven by the various pressures and opportunities that farmers face, such as changing water policy, continued development of water markets, drought and changing technology.
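    A minimal sketch of an ARMAX-type forecast with exogenous weather inputs, here implemented with statsmodels' SARIMAX; the variable names, lag orders and synthetic data are placeholders, not the paper's calibrated command-area models.

```python
import numpy as np
from statsmodels.tsa.statespace.sarimax import SARIMAX

rng = np.random.default_rng(7)
n = 400
temperature = 25 + 5 * np.sin(np.arange(n) * 2 * np.pi / 365) + rng.normal(0, 1, n)
rainfall = np.clip(rng.normal(0, 2, n), 0, None)
# Synthetic irrigation demand responding to weather (placeholder for the IDCG series).
demand = 100 + 3 * temperature - 8 * rainfall + rng.normal(0, 5, n)

train, test = slice(0, 395), slice(395, 400)
exog = np.column_stack([temperature, rainfall])

# ARMAX model: ARMA(2,1) errors plus exogenous weather regressors.
model = SARIMAX(demand[train], exog=exog[train], order=(2, 0, 1)).fit(disp=False)
forecast = model.forecast(steps=5, exog=exog[test])     # 1-5 day ahead forecasts

obs = demand[test]
nse = 1 - np.sum((obs - forecast) ** 2) / np.sum((obs - obs.mean()) ** 2)
rmse = np.sqrt(np.mean((obs - forecast) ** 2))
print(f"RMSE: {rmse:.2f}   NSE: {nse:.2f}")
```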

  8. Probabilistic forecasting for extreme NO2 pollution episodes.

    PubMed

    Aznarte, José L

    2017-10-01

    In this study, we investigate the suitability of quantile regression for predicting extreme concentrations of NO2. Contrary to usual point forecasting, where a single value is forecast for each horizon, probabilistic forecasting through quantile regression allows the prediction of the full probability distribution, which in turn allows building models specifically fitted to the tails of this distribution. Using data from the city of Madrid, including NO2 concentrations as well as meteorological measures, we build models that predict extreme NO2 concentrations, outperforming point-forecasting alternatives, and we show that the predictions are accurate, reliable and sharp. Besides, we study the relative importance of the independent variables involved, and show how the important variables for the median quantile differ from those important for the upper quantiles. Furthermore, we present a method to compute the probability of exceedance of thresholds, which is a simple and comprehensible way to present probabilistic forecasts while maximizing their usefulness. Copyright © 2017 Elsevier Ltd. All rights reserved.
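    A minimal sketch of quantile regression for upper-tail NO2 concentrations and of turning a set of predicted quantiles into a threshold-exceedance probability; the predictors, the quantile levels, the synthetic data and the interpolation step are illustrative assumptions rather than the paper's models.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
n = 2000
wind = rng.gamma(2.0, 2.0, n)                    # illustrative meteorological predictor
traffic = rng.uniform(0, 1, n)                   # illustrative traffic-intensity proxy
no2 = 30 - 3 * wind + 60 * traffic + rng.gumbel(0, 8, n)   # heavy upper tail

X = sm.add_constant(np.column_stack([wind, traffic]))
quantiles = [0.5, 0.9, 0.95, 0.99]
fits = {q: sm.QuantReg(no2, X).fit(q=q) for q in quantiles}

# Predicted quantiles for one new situation (low wind, heavy traffic).
x_new = sm.add_constant(np.array([[1.0, 0.9]]), has_constant='add')
predicted = np.array([fits[q].predict(x_new)[0] for q in quantiles])

# Probability of exceeding a threshold, by interpolating the predicted quantile curve.
threshold = 90.0
prob_exceed = 1.0 - np.interp(threshold, predicted, quantiles)
print(dict(zip(quantiles, predicted.round(1))), f"P(NO2 > {threshold}) ~ {prob_exceed:.2f}")
```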

  9. Testing hypotheses of earthquake occurrence

    NASA Astrophysics Data System (ADS)

    Kagan, Y. Y.; Jackson, D. D.; Schorlemmer, D.; Gerstenberger, M.

    2003-12-01

    We present a relatively straightforward likelihood method for testing those earthquake hypotheses that can be stated as vectors of earthquake rate density in defined bins of area, magnitude, and time. We illustrate the method as it will be applied to the Regional Earthquake Likelihood Models (RELM) project of the Southern California Earthquake Center (SCEC). Several earthquake forecast models are being developed as part of this project, and additional contributed forecasts are welcome. Various models are based on fault geometry and slip rates, seismicity, geodetic strain, and stress interactions. We would test models in pairs, requiring that both forecasts in a pair be defined over the same set of bins. Thus we offer a standard "menu" of bins and ground rules to encourage standardization. One menu category includes five-year forecasts of magnitude 5.0 and larger. Forecasts would be in the form of a vector of yearly earthquake rates on a 0.05 degree grid at the beginning of the test. Focal mechanism forecasts, when available, would also be archived and used in the tests. The five-year forecast category may be appropriate for testing hypotheses of stress shadows from large earthquakes. Interim progress will be evaluated yearly, but final conclusions would be made on the basis of cumulative five-year performance. The second category includes forecasts of earthquakes above magnitude 4.0 on a 0.05 degree grid, evaluated and renewed daily. Final evaluation would be based on cumulative performance over five years. Other types of forecasts with different magnitude, space, and time sampling are welcome and will be tested against other models with shared characteristics. All earthquakes would be counted, and no attempt made to separate foreshocks, main shocks, and aftershocks. Earthquakes would be considered as point sources located at the hypocenter. For each pair of forecasts, we plan to compute alpha, the probability that the first would be wrongly rejected in favor of the second, and beta, the probability that the second would be wrongly rejected in favor of the first. Computing alpha and beta requires knowing the theoretical distribution of likelihood scores under each hypothesis, which we will estimate by simulations. Each forecast is given equal status; there is no "null hypothesis" which would be accepted by default. Forecasts and test results would be archived and posted on the RELM web site. The same methods can be applied to any region with adequate monitoring and sufficient earthquakes. If fewer than ten events are forecasted, the likelihood tests may not give definitive results. The tests do force certain requirements on the forecast models. Because the tests are based on absolute rates, stress models must be explicit about how stress increments affect past seismicity rates. Aftershocks of triggered events must be accounted for. Furthermore, the tests are sensitive to magnitude, so forecast models must specify the magnitude distribution of triggered events. Models should account for probable errors in magnitude and location by appropriate smoothing of the probabilities, as the tests will be "cold hearted": near misses won't count.
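    A minimal sketch of the kind of binned likelihood comparison described above, assuming Poisson-distributed counts per bin; the two toy rate forecasts, the simulation-based estimates of alpha and beta, and the bin layout are placeholders, and the actual RELM tests include further ingredients (magnitude binning, conditioning on total counts, and so on).

```python
import numpy as np
from scipy.stats import poisson

rng = np.random.default_rng(11)

def log_likelihood(rates, counts):
    """Joint Poisson log-likelihood of observed bin counts given forecast rates."""
    return np.sum(poisson.logpmf(counts, rates))

def simulate_score_difference(gen_rates, rates_a, rates_b, n_sim=5000):
    """Distribution of L(A) - L(B) for catalogs simulated from gen_rates."""
    sims = rng.poisson(gen_rates, size=(n_sim, len(gen_rates)))
    return np.array([log_likelihood(rates_a, c) - log_likelihood(rates_b, c) for c in sims])

# Two competing rate forecasts over the same spatial bins (events per bin per period).
rates_a = np.full(100, 0.05)
rates_b = np.concatenate([np.full(50, 0.08), np.full(50, 0.02)])
observed = rng.poisson(rates_b)                      # pretend nature follows model B

diff_obs = log_likelihood(rates_a, observed) - log_likelihood(rates_b, observed)
# alpha: chance of wrongly rejecting A in favour of B when A is true, at this threshold.
alpha = np.mean(simulate_score_difference(rates_a, rates_a, rates_b) <= diff_obs)
# beta: chance of wrongly rejecting B in favour of A when B is true.
beta = np.mean(simulate_score_difference(rates_b, rates_a, rates_b) >= diff_obs)
print(f"observed score difference: {diff_obs:.2f}  alpha: {alpha:.3f}  beta: {beta:.3f}")
```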

  10. Integrating Wind Profiling Radars and Radiosonde Observations with Model Point Data to Develop a Decision Support Tool to Assess Upper-Level Winds for Space Launch

    NASA Technical Reports Server (NTRS)

    Bauman, William H., III; Flinn, Clay

    2013-01-01

    On the day of launch, the 45th Weather Squadron (45 WS) Launch Weather Officers (LWOs) monitor the upper-level winds for their launch customers. During launch operations, the payload/launch team sometimes asks the LWOs if they expect the upper-level winds to change during the countdown. The LWOs used numerical weather prediction model point forecasts to provide the information, but did not have the capability to quickly retrieve or adequately display the upper-level observations and compare them directly in the same display to the model point forecasts to help them determine which model performed the best. The LWOs requested the Applied Meteorology Unit (AMU) develop a graphical user interface (GUI) that will plot upper-level wind speed and direction observations from the Cape Canaveral Air Force Station (CCAFS) Automated Meteorological Profiling System (AMPS) rawinsondes with point forecast wind profiles from the National Centers for Environmental Prediction (NCEP) North American Mesoscale (NAM), Rapid Refresh (RAP) and Global Forecast System (GFS) models to assess the performance of these models. The AMU suggested adding observations from the NASA 50 MHz wind profiler and one of the US Air Force 915 MHz wind profilers, both located near the Kennedy Space Center (KSC) Shuttle Landing Facility, to supplement the AMPS observations with more frequent upper-level profiles. Figure 1 shows a map of KSC/CCAFS with the locations of the observation sites and the model point forecasts.

  11. Use of wind data in global modelling

    NASA Technical Reports Server (NTRS)

    Pailleux, J.

    1985-01-01

    The European Centre for Medium Range Weather Forecasts (ECMWF) is producing operational global analyses every 6 hours and operational global forecasts every day from the 12Z analysis. How the wind data are used in the ECMWF global analysis is described. For each current wind observing system, its ability to provide initial conditions for the forecast model is discussed, as well as its weaknesses. An assessment of the impact of each individual system on the quality of the analysis and the forecast is given whenever possible. Sometimes the deficiencies which are pointed out are related not only to the observing system itself but also to the optimum interpolation (OI) analysis scheme; then some improvements are generally possible through ad hoc modifications of the analysis scheme and especially tunings of the structure functions. Examples are given. The future observing network over the North Atlantic is examined. Several countries, coordinated by WMO, are working to set up an 'Operational WWW System Evaluation' (OWSE), in order to evaluate the operational aspects of the deployment of new systems (ASDAR, ASAP). Most of the new systems are expected to be deployed before January 1987, and in order to make the best use of the available resources during the deployment phase, some network studies are currently being carried out using simulated data for the ASDAR and ASAP systems. These are summarized.

  12. Evaluating the extreme precipitation events using a mesoscale atmosphere model

    NASA Astrophysics Data System (ADS)

    Yucel, I.; Onen, A.

    2012-04-01

    Evidence shows that global warming, or climate change, has a direct influence on changes in precipitation and the hydrological cycle. Extreme weather events such as heavy rainfall and flooding are projected to become much more frequent as the climate warms. Mesoscale atmospheric models coupled with land surface models provide efficient forecasts of meteorological events at long lead times, and therefore they should be used for flood forecasting and warning, as they provide more continuous monitoring of precipitation over large areas. This study examines the performance of the Weather Research and Forecasting (WRF) model in reproducing the temporal and spatial characteristics of a number of extreme precipitation events observed in the West Black Sea Region of Turkey. Extreme precipitation events usually resulted in flood conditions as an associated hydrologic response of the basin. The performance of the WRF system is further investigated by using the three-dimensional variational (3D-VAR) data assimilation scheme within WRF. WRF performance with and without data assimilation at high spatial resolution (4 km) is evaluated by comparison with gauge precipitation and satellite-estimated rainfall data from Multi Precipitation Estimates (MPE). WRF-derived precipitation showed capabilities in capturing the timing of the precipitation extremes and, to some extent, the spatial distribution and magnitude of the heavy rainfall events. These precipitation characteristics are enhanced with the use of the 3D-VAR scheme in the WRF system. Data assimilation improved area-averaged precipitation forecasts by 9 percent, and at some points there is a quantitative match in precipitation events, which is critical for hydrologic forecast applications.

  13. Scenario studies as a synthetic and integrative research activity for Long-Term Ecological Research

    Treesearch

    Jonathan R. Thompson; Arnim Wiek; Frederick J. Swanson; Stephen R. Carpenter; Nancy Fresco; Teresa Hollingsworth; Thomas A. Spies; David R. Foster

    2012-01-01

    Scenario studies have emerged as a powerful approach for synthesizing diverse forms of research and for articulating and evaluating alternative socioecological futures. Unlike predictive modeling, scenarios do not attempt to forecast the precise or probable state of any variable at a given point in the future. Instead, comparisons among a set of contrasting scenarios...

  14. Forecasting biodiversity in breeding birds using best practices

    PubMed Central

    Taylor, Shawn D.; White, Ethan P.

    2018-01-01

    Biodiversity forecasts are important for conservation, management, and evaluating how well current models characterize natural systems. While the number of forecasts for biodiversity is increasing, there is little information available on how well these forecasts work. Most biodiversity forecasts are not evaluated to determine how well they predict future diversity, fail to account for uncertainty, and do not use time-series data that captures the actual dynamics being studied. We addressed these limitations by using best practices to explore our ability to forecast the species richness of breeding birds in North America. We used hindcasting to evaluate six different modeling approaches for predicting richness. Hindcasts for each method were evaluated annually for a decade at 1,237 sites distributed throughout the continental United States. All models explained more than 50% of the variance in richness, but none of them consistently outperformed a baseline model that predicted constant richness at each site. The best practices implemented in this study directly influenced the forecasts and evaluations. Stacked species distribution models and “naive” forecasts produced poor estimates of uncertainty and accounting for this resulted in these models dropping in the relative performance compared to other models. Accounting for observer effects improved model performance overall, but also changed the rank ordering of models because it did not improve the accuracy of the “naive” model. Considering the forecast horizon revealed that the prediction accuracy decreased across all models as the time horizon of the forecast increased. To facilitate the rapid improvement of biodiversity forecasts, we emphasize the value of specific best practices in making forecasts and evaluating forecasting methods. PMID:29441230

  15. Providing the Fire Risk Map in Forest Areas Using a Geographically Weighted Regression Model with Gaussian Kernel and MODIS Images, a Case Study: Golestan Province

    NASA Astrophysics Data System (ADS)

    Shah-Heydari pour, A.; Pahlavani, P.; Bigdeli, B.

    2017-09-01

    With the industrialization of cities and the apparent increase in pollutants and greenhouse gases, the importance of forests, the natural lungs of the earth, for cleaning these pollutants is felt more than ever. Annually, a large part of these forests is destroyed due to the lack of timely action during fires. Knowledge of areas with a high risk of fire, and equipping these areas by constructing access routes and allocating fire-fighting equipment, can help reduce the destruction of the forest. In this research, the fire risk of the region was forecast and a risk map was produced from MODIS images by applying a geographically weighted regression model with a Gaussian kernel, as well as ordinary least squares, to the parameters influencing forest fire, including distance from residential areas, distance from the river, distance from the road, height, slope, aspect, soil type, land use, average temperature, wind speed, and rainfall. After the evaluation, it was found that the geographically weighted regression model with the Gaussian kernel correctly forecast 93.4% of all fire points, whereas the ordinary least squares method could correctly forecast only 66% of the fire points.
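    A minimal sketch of a geographically weighted regression prediction at a single location, using a Gaussian distance kernel and weighted least squares; the bandwidth, the two covariates and the synthetic data are illustrative, and the actual study works with the full set of MODIS-derived and terrain layers listed above.

```python
import numpy as np

def gwr_predict(coords, X, y, target_xy, target_x, bandwidth):
    """Locally weighted linear regression at target_xy using a Gaussian distance kernel."""
    d = np.linalg.norm(coords - target_xy, axis=1)
    w = np.exp(-0.5 * (d / bandwidth) ** 2)              # Gaussian kernel weights
    Xc = np.column_stack([np.ones(len(X)), X])           # intercept + covariates
    XtW = Xc.T * w                                       # equivalent to Xc.T @ diag(w)
    beta = np.linalg.solve(XtW @ Xc, XtW @ y)            # weighted least squares
    return float(np.concatenate(([1.0], target_x)) @ beta)

rng = np.random.default_rng(5)
n = 500
coords = rng.uniform(0, 100, size=(n, 2))                # grid-cell coordinates (km)
dist_road = rng.uniform(0, 10, n)                        # illustrative covariates
temperature = 20 + 0.05 * coords[:, 0] + rng.normal(0, 1, n)
X = np.column_stack([dist_road, temperature])
# Spatially varying fire-risk index: the road effect changes across the region.
fire_risk = (0.5 + 0.01 * coords[:, 1]) * -dist_road + 0.3 * temperature + rng.normal(0, 1, n)

target_xy = np.array([25.0, 75.0])
target_x = np.array([2.0, 22.0])
pred = gwr_predict(coords, X, fire_risk, target_xy, target_x, bandwidth=15.0)
print(f"GWR-predicted fire-risk index at {target_xy.tolist()}: {pred:.2f}")
```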

  16. MINERVE flood warning and management project. What is computed, what is required and what is visualized?

    NASA Astrophysics Data System (ADS)

    Garcia Hernandez, J.; Boillat, J.-L.; Schleiss, A.

    2010-09-01

    During the last decades, several flood events caused major inundations in the Upper Rhone River basin in Switzerland. As a response to such disasters, the MINERVE project aims to improve security by reducing damage in this basin. The main goal of this project is to predict floods in advance in order to obtain better flow control during flood peaks by taking advantage of the multireservoir system of the existing hydropower schemes. The MINERVE system evaluates the hydro-meteorological situation on the watershed and provides hydrological forecasts with a horizon of three to five days. It exploits flow measurements, data from reservoirs and hydropower plants, as well as deterministic (COSMO-7 and COSMO-2) and ensemble (COSMO-LEPS) meteorological forecasts from MeteoSwiss. The hydrological model is based on a semi-distributed concept, dividing the watershed into 239 sub-catchments, themselves decomposed into elevation bands in order to describe the temperature-driven processes related to snow and glacier melt. The model is completed by rivers and hydraulic works such as water intakes, reservoirs, turbines and pumps. Once the hydrological forecasts are calculated, a report provides the warning level at selected control points over time, providing support to decision-making for preventive actions. A Notice, Alert or Alarm is then activated depending on the discharge thresholds defined by the Valais Canton. Preventive operation scenarios are then generated based on observed discharge at control points, meteorological forecasts from MeteoSwiss, hydrological forecasts from MINERVE and retention possibilities in the reservoirs. The situation is updated every time new data or new forecasts are provided, keeping the latest observations and forecasts in the warning report. The forecasts can also be used for the evaluation of priority decisions concerning the management of hydropower plants for security purposes. Considering future inflows and reservoir levels, turbine and bottom outlet preventive operations can be proposed to the hydropower plant operators in order to store water inflows and to stop turbining during the peak flow. Appropriate operations can thus reduce the peak discharges in the Rhone River and its tributaries, limiting or avoiding damage. Presenting the results in a clear and understandable way is an important goal of the project and is considered one of the main focuses. The MINERVE project is developed in partnership by the Swiss Federal Office for Environment (FOEV), Services of Roads and Water courses as well as Water Power and Energy of the Wallis Canton and Service of Water, Land and Sanitation of the Vaud Canton. The Swiss Weather Service (MeteoSwiss) provides the weather forecasts and hydroelectric companies communicate specific information regarding the hydropower plants. Scientific developments are entrusted to two entities of the Ecole Polytechnique Fédérale de Lausanne (EPFL), the Hydraulic Constructions Laboratory (LCH) and the Ecohydrology Laboratory (ECHO), as well as to the Institute of Geomatics and Analysis of Risk (IGAR) of Lausanne University (UNIL).

  17. Hawaiian Marine Reports

    Science.gov Websites


  18. Trends in the predictive performance of raw ensemble weather forecasts

    NASA Astrophysics Data System (ADS)

    Hemri, Stephan; Scheuerer, Michael; Pappenberger, Florian; Bogner, Konrad; Haiden, Thomas

    2015-04-01

    Over the last two decades the paradigm in weather forecasting has shifted from being deterministic to probabilistic. Accordingly, numerical weather prediction (NWP) models have been run increasingly as ensemble forecasting systems. The goal of such ensemble forecasts is to approximate the forecast probability distribution by a finite sample of scenarios. Global ensemble forecast systems, like the European Centre for Medium-Range Weather Forecasts (ECMWF) ensemble, are prone to probabilistic biases, and are therefore not reliable. They particularly tend to be underdispersive for surface weather parameters. Hence, statistical post-processing is required in order to obtain reliable and sharp forecasts. In this study we apply statistical post-processing to ensemble forecasts of near-surface temperature, 24-hour precipitation totals, and near-surface wind speed from the global ECMWF model. Our main objective is to evaluate the evolution of the difference in skill between the raw ensemble and the post-processed forecasts. The ECMWF ensemble is under continuous development, and hence its forecast skill improves over time. Parts of these improvements may be due to a reduction of probabilistic bias. Thus, we first hypothesize that the gain by post-processing decreases over time. Based on ECMWF forecasts from January 2002 to March 2014 and corresponding observations from globally distributed stations we generate post-processed forecasts by ensemble model output statistics (EMOS) for each station and variable. Parameter estimates are obtained by minimizing the Continuous Ranked Probability Score (CRPS) over rolling training periods that consist of the n days preceding the initialization dates. Given the higher average skill in terms of CRPS of the post-processed forecasts for all three variables, we analyze the evolution of the difference in skill between raw ensemble and EMOS forecasts. The fact that the gap in skill remains almost constant over time, especially for near-surface wind speed, suggests that improvements to the atmospheric model have an effect quite different from what calibration by statistical post-processing is doing. That is, they are increasing potential skill. Thus this study indicates that (a) further model development is important even if one is just interested in point forecasts, and (b) statistical post-processing is important because it will keep adding skill in the foreseeable future.
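    A minimal sketch of Gaussian EMOS of the kind described above: the predictive mean and variance are affine functions of the ensemble mean and variance, and the coefficients are found by minimizing the average CRPS over a training set using the closed-form CRPS of a normal distribution; the synthetic ensemble data and the optimizer settings are illustrative.

```python
import numpy as np
from scipy.stats import norm
from scipy.optimize import minimize

def crps_normal(mu, sigma, y):
    """Closed-form CRPS of a Gaussian forecast N(mu, sigma^2) for observation y."""
    z = (y - mu) / sigma
    return sigma * (z * (2 * norm.cdf(z) - 1) + 2 * norm.pdf(z) - 1 / np.sqrt(np.pi))

def emos_fit(ens_mean, ens_var, obs):
    """Fit mu = a + b*ens_mean, sigma^2 = c + d*ens_var by minimizing the mean CRPS."""
    def objective(params):
        a, b, c, d = params
        sigma = np.sqrt(np.maximum(c + d * ens_var, 1e-6))
        return np.mean(crps_normal(a + b * ens_mean, sigma, obs))
    return minimize(objective, x0=[0.0, 1.0, 1.0, 1.0], method="Nelder-Mead").x

# Synthetic training data: a biased, underdispersive 50-member temperature ensemble.
rng = np.random.default_rng(2)
signal = rng.normal(15, 5, 1000)                         # predictable part of the truth
obs = signal + rng.normal(0, 3, 1000)                    # observations (error std = 3)
ensemble = signal[:, None] + 1.0 + rng.normal(0, 1.5, (1000, 50))   # bias +1, spread too small
ens_mean, ens_var = ensemble.mean(axis=1), ensemble.var(axis=1)

a, b, c, d = emos_fit(ens_mean, ens_var, obs)
raw_crps = np.mean(crps_normal(ens_mean, np.sqrt(ens_var), obs))
emos_crps = np.mean(crps_normal(a + b * ens_mean, np.sqrt(c + d * ens_var), obs))
print(f"raw-ensemble CRPS (Gaussian approx.): {raw_crps:.3f}   EMOS CRPS: {emos_crps:.3f}")
```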

  19. A New Objective Technique for Verifying Mesoscale Numerical Weather Prediction Models

    NASA Technical Reports Server (NTRS)

    Case, Jonathan L.; Manobianco, John; Lane, John E.; Immer, Christopher D.

    2003-01-01

    This report presents a new objective technique to verify predictions of the sea-breeze phenomenon over east-central Florida by the Regional Atmospheric Modeling System (RAMS) mesoscale numerical weather prediction (NWP) model. The Contour Error Map (CEM) technique identifies sea-breeze transition times in objectively-analyzed grids of observed and forecast wind, verifies the forecast sea-breeze transition times against the observed times, and computes the mean post-sea-breeze wind direction and speed to compare the observed and forecast winds behind the sea-breeze front. The CEM technique is superior to traditional objective verification techniques and previously-used subjective verification methodologies because it is automated, requiring little manual intervention; it accounts for both spatial and temporal scales and variations; it accurately identifies and verifies the sea-breeze transition times; and it provides verification contour maps and simple statistical parameters for easy interpretation. The CEM uses a parallel lowpass boxcar filter and a high-order bandpass filter to identify the sea-breeze transition times in the observed and model grid points. Once the transition times are identified, the CEM fits a Gaussian histogram function to the actual histogram of transition time differences between the model and observations. The fitted parameters of the Gaussian function subsequently explain the timing bias and variance of the timing differences across the valid comparison domain. Once the transition times are all identified at each grid point, the CEM computes the mean wind direction and speed during the remainder of the day for all times and grid points after the sea-breeze transition time. The CEM technique performed quite well when compared to independent meteorological assessments of the sea-breeze transition times and results from a previously published subjective evaluation. The algorithm correctly identified a forecast or observed sea-breeze occurrence or absence 93% of the time during the two-month evaluation period of July and August 2000. Nearly all failures in the CEM were the result of complex precipitation features (observed or forecast) that contaminated the wind field, resulting in a false identification of a sea-breeze transition. A qualitative comparison between the CEM timing errors and the subjectively determined observed and forecast transition times indicates that the algorithm performed very well overall. Most discrepancies between the CEM results and the subjective analysis were again caused by observed or forecast areas of precipitation that led to complex wind patterns. The CEM also failed on a day when the observed sea-breeze transition affected only a very small portion of the verification domain. Based on the results of the CEM, the RAMS tended to predict the onset and movement of the sea-breeze transition too early and/or too quickly. The domain-wide timing biases provided by the CEM indicated an early bias on 30 out of 37 days when both an observed and forecast sea breeze occurred over the same portions of the analysis domain. These results are consistent with previous subjective verifications of the RAMS sea-breeze predictions. A comparison of the mean post-sea-breeze winds indicates that RAMS has a positive wind-speed bias for all days, which is also consistent with the early bias in the sea-breeze transition time, since the higher wind speeds resulted in a faster inland penetration of the sea breeze compared to reality.
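    A minimal sketch of the Gaussian-histogram step described above: a Gaussian function is fitted to the histogram of model-minus-observed transition times with scipy's curve_fit, and the fitted mean and width summarize the timing bias and spread; the synthetic timing differences are placeholders for the objectively identified transition times.

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian(t, amplitude, mean, sigma):
    """Gaussian histogram function; the fitted mean and sigma give timing bias and spread."""
    return amplitude * np.exp(-0.5 * ((t - mean) / sigma) ** 2)

# Synthetic timing differences (minutes, model minus observed) across grid points;
# a negative mean mimics the early bias reported for the RAMS sea-breeze transition.
rng = np.random.default_rng(4)
timing_diff = rng.normal(-45.0, 30.0, 800)

counts, edges = np.histogram(timing_diff, bins=30)
centers = 0.5 * (edges[:-1] + edges[1:])
(amplitude, mean, sigma), _ = curve_fit(gaussian, centers, counts,
                                        p0=[counts.max(), 0.0, 30.0])
print(f"fitted timing bias: {mean:.1f} min   spread: {abs(sigma):.1f} min")
```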

  20. Post-processing of global model output to forecast point rainfall

    NASA Astrophysics Data System (ADS)

    Hewson, Tim; Pillosu, Fatima

    2016-04-01

    ECMWF (the European Centre for Medium range Weather Forecasts) has recently embarked upon a new project to post-process gridbox rainfall forecasts from its ensemble prediction system, to provide probabilistic forecasts of point rainfall. The new post-processing strategy relies on understanding how different rainfall generation mechanisms lead to different degrees of sub-grid variability in rainfall totals. We use a number of simple global model parameters, such as the convective rainfall fraction, to anticipate the sub-grid variability, and then post-process each ensemble forecast into a pdf (probability density function) for a point-rainfall total. The final forecast will comprise the sum of the different pdfs from all ensemble members. The post-processing is essentially a re-calibration exercise, which needs only rainfall totals from standard global reporting stations (and forecasts) to train it. High density observations are not needed. This presentation will describe results from the initial 'proof of concept' study, which has been remarkably successful. Reference will also be made to other useful outcomes of the work, such as gaining insights into systematic model biases in different synoptic settings. The special case of orographic rainfall will also be discussed. Work ongoing this year will also be described. This involves further investigations of which model parameters can provide predictive skill, and will then move on to development of an operational system for predicting point rainfall across the globe. The main practical benefit of this system will be a greatly improved capacity to predict extreme point rainfall, and thereby provide early warnings, for the whole world, of flash flood potential for lead times that extend beyond day 5. This will be incorporated into the suite of products output by GLOFAS (the GLObal Flood Awareness System) which is hosted at ECMWF. As such this work offers a very cost-effective approach to satisfying user needs right around the world. This field has hitherto relied on using very expensive high-resolution ensembles; by their very nature these can only run over small regions, and only for lead times up to about 2 days.

  1. Forecasting in foodservice: model development, testing, and evaluation.

    PubMed

    Miller, J L; Thompson, P A; Orabella, M M

    1991-05-01

    This study was designed to develop, test, and evaluate mathematical models appropriate for forecasting menu-item production demand in foodservice. Data were collected from residence and dining hall foodservices at Ohio State University. Objectives of the study were to collect, code, and analyze the data; develop and test models using actual operation data; and compare forecasting results with current methods in use. Customer count was forecast using deseasonalized simple exponential smoothing. Menu-item demand was forecast by multiplying the count forecast by a predicted preference statistic. Forecasting models were evaluated using mean squared error, mean absolute deviation, and mean absolute percentage error techniques. All models were more accurate than current methods. A broad spectrum of forecasting techniques could be used by foodservice managers with access to a personal computer and spread-sheet and database-management software. The findings indicate that mathematical forecasting techniques may be effective in foodservice operations to control costs, increase productivity, and maximize profits.
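    A minimal sketch of the two-step approach described above: a deseasonalized simple exponential smoothing forecast of customer count, multiplied by a menu-item preference statistic; the seasonal indices, smoothing constant and preference proportions are illustrative values, not the study's fitted parameters.

```python
import numpy as np

def simple_exponential_smoothing(series, alpha=0.3):
    """One-step-ahead simple exponential smoothing forecast of the next value."""
    level = series[0]
    for value in series[1:]:
        level = alpha * value + (1 - alpha) * level
    return level

# Daily customer counts with a weekly seasonal pattern (illustrative data).
rng = np.random.default_rng(6)
day_of_week = np.arange(70) % 7
seasonal_index = np.array([0.9, 1.0, 1.0, 1.05, 1.2, 1.25, 0.6])
counts = 500 * seasonal_index[day_of_week] * (1 + rng.normal(0, 0.05, 70))

deseasonalized = counts / seasonal_index[day_of_week]
next_day = 70 % 7
count_forecast = simple_exponential_smoothing(deseasonalized) * seasonal_index[next_day]

preference = {"lasagna": 0.22, "salad": 0.35, "soup": 0.18}   # predicted preference statistics
item_demand = {item: p * count_forecast for item, p in preference.items()}
print({item: round(d) for item, d in item_demand.items()})
```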

  2. The Global Precipitation Measurement (GPM) Mission contributions to hydrology and societal applications

    NASA Astrophysics Data System (ADS)

    Kirschbaum, D.; Huffman, G. J.; Skofronick Jackson, G.

    2016-12-01

    Too much or too little rain can serve as a tipping point for triggering catastrophic flooding and landslides or widespread drought. Knowing when, where and how much rain is falling globally is vital to understanding how vulnerable areas may be more or less impacted by these disasters. The Global Precipitation Measurement (GPM) mission provides near real-time precipitation data worldwide that is used by a broad range of end users, from tropical cyclone forecasters to agricultural modelers to researchers evaluating the spread of diseases. The GPM constellation provides merged, multi-satellite data products at three latencies that are critical for research and societal applications around the world. This presentation will outline current capabilities in using accurate and timely information of precipitation to directly benefit society, including examples of end user applications within the tropical cyclone forecasting, disasters response, agricultural forecasting, and disease tracking communities, among others. The presentation will also introduce some of the new visualization and access tools developed by the GPM team.

  3. Probabilistic precipitation nowcasting based on an extrapolation of radar reflectivity and an ensemble approach

    NASA Astrophysics Data System (ADS)

    Sokol, Zbyněk; Mejsnar, Jan; Pop, Lukáš; Bližňák, Vojtěch

    2017-09-01

    A new method for the probabilistic nowcasting of instantaneous rain rates (ENS) based on the ensemble technique and extrapolation along Lagrangian trajectories of current radar reflectivity is presented. Assuming inaccurate forecasts of the trajectories, an ensemble of precipitation forecasts is calculated and used to estimate the probability that rain rates will exceed a given threshold in a given grid point. Although the extrapolation neglects the growth and decay of precipitation, their impact on the probability forecast is taken into account by the calibration of forecasts using the reliability component of the Brier score (BS). ENS forecasts the probability that the rain rates will exceed thresholds of 0.1, 1.0 and 3.0 mm/h in squares of 3 km by 3 km. The lead times were up to 60 min, and the forecast accuracy was measured by the BS. The ENS forecasts were compared with two other methods: combined method (COM) and neighbourhood method (NEI). NEI considered the extrapolated values in the square neighbourhood of 5 by 5 grid points of the point of interest as ensemble members, and the COM ensemble was comprised of united ensemble members of ENS and NEI. The results showed that the calibration technique significantly improves bias of the probability forecasts by including additional uncertainties that correspond to neglected processes during the extrapolation. In addition, the calibration can also be used for finding the limits of maximum lead times for which the forecasting method is useful. We found that ENS is useful for lead times up to 60 min for thresholds of 0.1 and 1 mm/h and approximately 30 to 40 min for a threshold of 3 mm/h. We also found that a reasonable size of the ensemble is 100 members, which provided better scores than ensembles with 10, 25 and 50 members. In terms of the BS, the best results were obtained by ENS and COM, which are comparable. However, ENS is better calibrated and thus preferable.

  4. Smoothing Strategies Combined with ARIMA and Neural Networks to Improve the Forecasting of Traffic Accidents

    PubMed Central

    Rodríguez, Nibaldo

    2014-01-01

    Two smoothing strategies combined with autoregressive integrated moving average (ARIMA) and autoregressive neural networks (ANNs) models to improve the forecasting of time series are presented. The strategy of forecasting is implemented using two stages. In the first stage the time series is smoothed using either 3-point moving average smoothing or Singular Value Decomposition of the Hankel matrix (HSVD). In the second stage, an ARIMA model and two ANNs for one-step-ahead time series forecasting are used. The coefficients of the first ANN are estimated through the particle swarm optimization (PSO) learning algorithm, while the coefficients of the second ANN are estimated with the resilient backpropagation (RPROP) learning algorithm. The proposed models are evaluated using a weekly time series of traffic accidents of the Valparaíso region, Chile, from 2003 to 2012. The best result is given by the combination HSVD-ARIMA, with a MAPE of 0.26%, followed by MA-ARIMA with a MAPE of 1.12%; the worst result is given by the MA-ANN based on PSO with a MAPE of 15.51%. PMID:25243200

  5. Statistical and Hydrological evaluation of precipitation forecasts from IMD MME and ECMWF numerical weather forecasts for Indian River basins

    NASA Astrophysics Data System (ADS)

    Mohite, A. R.; Beria, H.; Behera, A. K.; Chatterjee, C.; Singh, R.

    2016-12-01

    Flood forecasting using hydrological models is an important and cost-effective non-structural flood management measure. For forecasting at short lead times, empirical models using real-time precipitation estimates have proven to be reliable. However, their skill depreciates with increasing lead time. Coupling a hydrologic model with real-time rainfall forecasts issued from numerical weather prediction (NWP) systems could increase the lead time substantially. In this study, we compared 1-5 day precipitation forecasts from the India Meteorological Department (IMD) Multi-Model Ensemble (MME) with European Centre for Medium-Range Weather Forecasts (ECMWF) NWP forecasts for over 86 major river basins in India. We then evaluated the hydrologic utility of these forecasts over the Basantpur catchment (approx. 59,000 km2) of the Mahanadi River basin. Coupled MIKE 11 RR (NAM) and MIKE 11 hydrodynamic (HD) models were used for the development of the flood forecast system (FFS). The RR model was calibrated using IMD station rainfall data. Cross-sections extracted from SRTM 30 were used as input to the MIKE 11 HD model. IMD started issuing operational MME forecasts in 2008, and hence both the statistical and the hydrologic evaluation were carried out for 2008-2014. The performance of the FFS was evaluated using both NWP datasets separately for the year 2011, which was a large flood year in the Mahanadi River basin. We will present figures and metrics for the statistical evaluation (threshold-based statistics, skill in terms of correlation and bias) and the hydrologic evaluation (Nash-Sutcliffe efficiency, mean and peak error statistics). The statistical evaluation will be at the pan-India scale for all major river basins and the hydrologic evaluation will be for the Basantpur catchment of the Mahanadi River basin.
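    A minimal sketch of the hydrologic evaluation metrics named above (Nash-Sutcliffe efficiency plus mean and peak error), applied to a simulated versus observed hydrograph; the toy hydrograph values are placeholders.

```python
import numpy as np

def nash_sutcliffe(observed, simulated):
    """Nash-Sutcliffe efficiency: 1 is perfect, 0 is no better than the observed mean."""
    observed, simulated = np.asarray(observed), np.asarray(simulated)
    return 1 - np.sum((observed - simulated) ** 2) / np.sum((observed - observed.mean()) ** 2)

def peak_error_percent(observed, simulated):
    """Relative error (%) in the simulated flood peak."""
    return 100 * (np.max(simulated) - np.max(observed)) / np.max(observed)

# Toy flood hydrographs (m^3/s) for one monsoon event.
observed = np.array([800, 1500, 4200, 9000, 12500, 10000, 7000, 4500, 3000, 2000])
simulated = np.array([900, 1800, 5000, 9800, 11500, 9500, 6600, 4300, 3100, 2200])

print(f"NSE: {nash_sutcliffe(observed, simulated):.2f}")
print(f"mean error: {np.mean(simulated - observed):.0f} m^3/s")
print(f"peak error: {peak_error_percent(observed, simulated):.1f} %")
```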

  6. Hydro-meteorological evaluation of downscaled global ensemble rainfall forecasts

    NASA Astrophysics Data System (ADS)

    Gaborit, Étienne; Anctil, François; Fortin, Vincent; Pelletier, Geneviève

    2013-04-01

    Ensemble rainfall forecasts are of high interest for decision making, as they provide an explicit and dynamic assessment of the uncertainty in the forecast (Ruiz et al. 2009). However, for hydrological forecasting, their low resolution currently limits their use to large watersheds (Maraun et al. 2010). In order to bridge this gap, various implementations of the statistic-stochastic multi-fractal downscaling technique presented by Perica and Foufoula-Georgiou (1996) were compared, bringing Environment Canada's global ensemble rainfall forecasts from a 100 by 70-km resolution down to 6 by 4-km, while increasing each pixel's rainfall variance and preserving its original mean. For comparison purposes, simpler methods were also implemented such as the bi-linear interpolation, which disaggregates global forecasts without modifying their variance. The downscaled meteorological products were evaluated using different scores and diagrams, from both a meteorological and a hydrological view points. The meteorological evaluation was conducted comparing the forecasted rainfall depths against nine days of observed values taken from Québec City rain gauge database. These 9 days present strong precipitation events occurring during the summer of 2009. For the hydrologic evaluation, the hydrological models SWMM5 and (a modified version of) GR4J were implemented on a small 6 km2 urban catchment located in the Québec City region. Ensemble hydrologic forecasts with a time step of 3 hours were then performed over a 3-months period of the summer of 2010 using the original and downscaled ensemble rainfall forecasts. The most important conclusions of this work are that the overall quality of the forecasts was preserved during the disaggregation procedure and that the disaggregated products using this variance-enhancing method were of similar quality than bi-linear interpolation products. However, variance and dispersion of the different members were, of course, much improved for the variance-enhanced products, compared to the bi-linear interpolation, which is a decisive advantage. The disaggregation technique of Perica and Foufoula-Georgiou (1996) hence represents an interesting way of bridging the gap between the meteorological models' resolution and the high degree of spatial precision sometimes required by hydrological models in their precipitation representation. References Maraun, D., Wetterhall, F., Ireson, A. M., Chandler, R. E., Kendon, E. J., Widmann, M., Brienen, S., Rust, H. W., Sauter, T., Themeßl, M., Venema, V. K. C., Chun, K. P., Goodess, C. M., Jones, R. G., Onof, C., Vrac, M., and Thiele-Eich, I. 2010. Precipitation downscaling under climate change: recent developments to bridge the gap between dynamical models and the end user. Reviews of Geophysics, 48 (3): RG3003, [np]. Doi: 10.1029/2009RG000314. Perica, S., and Foufoula-Georgiou, E. 1996. Model for multiscale disaggregation of spatial rainfall based on coupling meteorological and scaling descriptions. Journal Of Geophysical Research, 101(D21): 26347-26361. Ruiz, J., Saulo, C. and Kalnay, E. 2009. Comparison of Methods Used to Generate Probabilistic Quantitative Precipitation Forecasts over South America. Weather and forecasting, 24: 319-336. DOI: 10.1175/2008WAF2007098.1 This work is distributed under the Creative Commons Attribution 3.0 Unported License together with an author copyright. This license does not conflict with the regulations of the Crown Copyright.

  7. HESS Opinions "On forecast (in)consistency in a hydro-meteorological chain: curse or blessing?"

    NASA Astrophysics Data System (ADS)

    Pappenberger, F.; Cloke, H. L.; Persson, A.; Demeritt, D.

    2011-07-01

    Flood forecasting increasingly relies on numerical weather prediction forecasts to achieve longer lead times. One of the key difficulties that is emerging in constructing a decision framework for these flood forecasts is what to do when consecutive forecasts are so different that they lead to different conclusions regarding the issuing of warnings or the triggering of other action. In this opinion paper we explore some of the issues surrounding such forecast inconsistency (also known as "Jumpiness", "Turning points", "Continuity" or the number of "Swings"). We define forecast inconsistency; discuss the reasons why forecasts might be inconsistent; consider how we should analyse inconsistency, what we should do about it and how we should communicate it; and ask whether it is a totally undesirable property. The property of consistency is increasingly emerging as a hot topic in many forecasting environments.

  8. Value of long-term streamflow forecast to reservoir operations for water supply in snow-dominated catchments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Anghileri, Daniela; Voisin, Nathalie; Castelletti, Andrea F.

    In this study, we develop a forecast-based adaptive control framework for Oroville reservoir, California, to assess the value of seasonal and inter-annual forecasts for reservoir operation. We use an Ensemble Streamflow Prediction (ESP) approach to generate retrospective, one-year-long streamflow forecasts based on the Variable Infiltration Capacity hydrology model. The optimal sequence of daily release decisions from the reservoir is then determined by Model Predictive Control, a flexible and adaptive optimization scheme. We assess the forecast value by comparing system performance based on the ESP forecasts with that based on climatology and a perfect forecast. In addition, we evaluate system performance based on a synthetic forecast, which is designed to isolate the contribution of seasonal and inter-annual forecast skill to the overall value of the ESP forecasts. Using the same ESP forecasts, we generalize our results by evaluating forecast value as a function of forecast skill, reservoir features, and demand. Our results show that perfect forecasts are valuable when the water demand is high and the reservoir is sufficiently large to allow for annual carry-over. Conversely, ESP forecast value is highest when the reservoir can shift water on a seasonal basis. On average, for the system evaluated here, the overall ESP value is 35% less than the perfect forecast value. The inter-annual component of the ESP forecast contributes 20-60% of the total forecast value. Improvements in the seasonal component of the ESP forecast would increase the overall ESP forecast value between 15 and 20%.

  9. Bayesian analyses of seasonal runoff forecasts

    NASA Astrophysics Data System (ADS)

    Krzysztofowicz, R.; Reese, S.

    1991-12-01

    Forecasts of seasonal snowmelt runoff volume provide indispensable information for rational decision making by water project operators, irrigation district managers, and farmers in the western United States. Bayesian statistical models and communication frames have been researched in order to enhance the forecast information disseminated to the users, and to characterize forecast skill from the decision maker's point of view. Four products are presented: (i) a Bayesian Processor of Forecasts, which provides a statistical filter for calibrating the forecasts, and a procedure for estimating the posterior probability distribution of the seasonal runoff; (ii) the Bayesian Correlation Score, a new measure of forecast skill, which is related monotonically to the ex ante economic value of forecasts for decision making; (iii) a statistical predictor of monthly cumulative runoffs within the snowmelt season, conditional on the total seasonal runoff forecast; and (iv) a framing of the forecast message that conveys the uncertainty associated with the forecast estimates to the users. All analyses are illustrated with numerical examples of forecasts for six gauging stations from the period 1971-1988.
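    A minimal sketch of a Bayesian processor of forecasts under Gaussian assumptions: a climatological prior on seasonal runoff is updated with a forecast whose conditional distribution given the runoff is Gaussian, yielding a posterior distribution for the runoff. The numbers and the normality assumptions are illustrative, not the paper's fitted models.

```python
import numpy as np
from scipy.stats import norm

def bayes_posterior(prior_mean, prior_sd, a, b, resid_sd, forecast):
    """Posterior of seasonal runoff W given forecast F, with W ~ N(prior_mean, prior_sd^2)
    and F | W = w ~ N(a + b*w, resid_sd^2)."""
    precision = 1 / prior_sd**2 + b**2 / resid_sd**2
    mean = (prior_mean / prior_sd**2 + b * (forecast - a) / resid_sd**2) / precision
    return mean, np.sqrt(1 / precision)

# Illustrative calibration of the forecast against runoff (regression of F on W).
prior_mean, prior_sd = 900.0, 300.0       # climatological seasonal runoff (thousand acre-feet)
a, b, resid_sd = 50.0, 0.9, 150.0         # forecast bias, slope and residual spread

post_mean, post_sd = bayes_posterior(prior_mean, prior_sd, a, b, resid_sd, forecast=1200.0)
lower, upper = norm.ppf([0.1, 0.9], loc=post_mean, scale=post_sd)
print(f"posterior runoff: {post_mean:.0f} +/- {post_sd:.0f}; 80% interval [{lower:.0f}, {upper:.0f}]")
```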

  10. Grey-Markov prediction model based on background value optimization and central-point triangular whitenization weight function

    NASA Astrophysics Data System (ADS)

    Ye, Jing; Dang, Yaoguo; Li, Bingjun

    2018-01-01

    The Grey-Markov forecasting model is a combination of a grey prediction model and a Markov chain, and it shows clear advantages for data sequences that are non-stationary and volatile. However, the state division process in the traditional Grey-Markov forecasting model is mostly based on subjective real numbers, which directly affects the accuracy of the forecast values. To address this, this paper introduces the central-point triangular whitenization weight function into the state division to calculate the possibility of the research values belonging to each state, reflecting preference degrees across states in an objective way. In addition, background value optimization is applied in the traditional grey model to generate a better fit. By these means, the improved Grey-Markov forecasting model is built. Finally, taking grain production in Henan Province as an example, the model's validity is verified by comparison with GM(1,1) based on background value optimization and with the traditional Grey-Markov forecasting model.
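    A minimal sketch of the underlying GM(1,1) grey model with the conventional background value (the trapezoidal mean of the accumulated series); the background value optimization and the Markov state correction described above are not reproduced, and the grain production figures are placeholders.

```python
import numpy as np

def gm11_forecast(x0, n_ahead=3):
    """Classical GM(1,1): accumulate the series, fit the development coefficient a and
    grey input b by least squares on the background value z1, then forecast and de-accumulate."""
    x0 = np.asarray(x0, dtype=float)
    x1 = np.cumsum(x0)                                    # accumulated generating operation
    z1 = 0.5 * (x1[1:] + x1[:-1])                         # conventional background value
    B = np.column_stack([-z1, np.ones(len(z1))])
    a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]      # grey development/control coefficients
    k = np.arange(1, len(x0) + n_ahead)
    x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a     # time response function
    x0_hat = np.diff(np.concatenate([[x0[0]], x1_hat]))   # inverse accumulation
    return x0_hat[-n_ahead:]

# Placeholder annual grain production figures (million tonnes).
grain = [55.2, 57.1, 59.6, 60.8, 63.0, 64.9, 66.5]
print(gm11_forecast(grain, n_ahead=3).round(1))
```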

  11. Satellite Altimetry based River Forecasting of Transboundary Flow

    NASA Astrophysics Data System (ADS)

    Hossain, F.; Siddique-E-Akbor, A.; Lee, H.; Shum, C.; Biancamaria, S.

    2012-12-01

    Forecasting of this transboundary flow in downstream nations, however, remains notoriously difficult due to the lack of basin-wide in-situ hydrologic measurements or their real-time sharing among nations. In addition, human regulation of upstream flow through diversion projects and dams makes hydrologic models less effective for forecasting on their own. Using the Ganges-Brahmaputra (GB) basin as an example, this study assesses the feasibility of using JASON-2 satellite altimetry for forecasting such transboundary flow at locations further inside the downstream nation of Bangladesh by propagating forecasts derived from upstream (Indian) locations through a hydrodynamic river model. The 5-day forecasts of river levels at upstream boundary points inside Bangladesh are used to initialize daily simulations of the hydrodynamic river model and yield 5-day forecast river levels further downstream inside Bangladesh. The forecast river levels are then compared with the 5-day-later "now cast" simulation by the river model based on in-situ river levels at the upstream boundary points in Bangladesh. Future directions for satellite-based forecasting of flow are also briefly overviewed. (Figure: ground tracks, or virtual stations, of the JASON-2 (J2) altimeter over the GB basin are shown in yellow lines; locations where a track crosses a river and is used for deriving forecasting rating curves are marked with a circle and station number (magenta - Brahmaputra basin; blue - Ganges basin); circles without a station number represent the broader sampling by JASON-2 if all ground tracks on the main-stem rivers and neighboring tributaries of the Ganges and Brahmaputra are considered.)

  12. Investigating NWP initialization sensitivities in heavy precipitation events

    NASA Astrophysics Data System (ADS)

    Frediani, M. E. B.; Anagnostou, E. N.; Papadopoulos, A.

    2010-09-01

    This study aims to investigate the effect of different types of model initialization on extreme storm simulations. Storms with extreme precipitation can produce flash floods that cause extensive damage to society; lives and property are lost to the resulting floods and landslides that could be spared if forecasted a few hours in advance. The forecasts depend on several factors; among them, the initialization fields play an important role. These fields are the starting point for the simulation and therefore control the quality of the forecast. This study evaluates the sensitivities of WRF to the initialization from two perspectives: (1) resolution and (2) initial atmospheric fields. Two storms that led to flash floods are simulated. The first occurred in Northeast Italy on 04/09/2009 (NI), and the second in Germany on 02/06/2008 (GE). These storms present contrasting characteristics: NI was a maritime-originated storm enhanced by local orography, while GE was a typical summer convection event. Three different sources of atmospheric fields defining the initial conditions are applied: (a) ECMWF operational analysis at a resolution of 0.25 deg, (b) GFS operational analysis at 0.5 deg and (c) LAPS analysis at ~15 km, produced operationally at HCMR. The forecasted rainfall is compared against in-situ ground radar and surface rain gauge observations through a set of quantitative precipitation forecast scores.

  13. Thirty Years of Improving the NCEP Global Forecast System

    NASA Astrophysics Data System (ADS)

    White, G. H.; Manikin, G.; Yang, F.

    2014-12-01

    Current eight day forecasts by the NCEP Global Forecast System are as accurate as five day forecasts 30 years ago. This revolution in weather forecasting reflects increases in computer power, improvements in the assimilation of observations, especially satellite data, improvements in model physics, improvements in observations and international cooperation and competition. One important component has been and is the diagnosis, evaluation and reduction of systematic errors. The effect of proposed improvements in the GFS on systematic errors is one component of the thorough testing of such improvements by the Global Climate and Weather Modeling Branch. Examples of reductions in systematic errors in zonal mean temperatures and winds and other fields will be presented. One challenge in evaluating systematic errors is uncertainty in what reality is. Model initial states can be regarded as the best overall depiction of the atmosphere, but can be misleading in areas of few observations or for fields not well observed such as humidity or precipitation over the oceans. Verification of model physics is particularly difficult. The Environmental Modeling Center emphasizes the evaluation of systematic biases against observations. Recently EMC has placed greater emphasis on synoptic evaluation and on precipitation, 2-meter temperatures and dew points and 10 meter winds. A weekly EMC map discussion reviews the performance of many models over the United States and has helped diagnose and alleviate significant systematic errors in the GFS, including a near surface summertime evening cold wet bias over the eastern US and a multi-week period when the GFS persistently developed bogus tropical storms off Central America. The GFS exhibits a wet bias for light rain and a dry bias for moderate to heavy rain over the continental United States. Significant changes to the GFS are scheduled to be implemented in the fall of 2014. These include higher resolution, improved physics and improvements to the assimilation. These changes significantly improve the tropospheric flow and reduce a tropical upper tropospheric warm bias. One important error remaining is the failure of the GFS to maintain deep convection over Indonesia and in the tropical west Pacific. This and other current systematic errors will be presented.

  14. Evaluation of NASA SPoRT's Pseudo-Geostationary Lightning Mapper Products in the 2011 Spring Program

    NASA Technical Reports Server (NTRS)

    Stano, Geoffrey T.; Carcione, Brian; Siewert, Christopher; Kuhlman, Kristin M.

    2012-01-01

    NASA's Short-term Prediction Research and Transition (SPoRT) program is a contributing partner with the GOES-R Proving Ground (PG), preparing forecasters to understand and utilize the unique products that will be available in the GOES-R era. This presentation emphasizes SPoRT's actions to prepare the end user community for the Geostationary Lightning Mapper (GLM). This preparation is a collaborative effort with SPoRT's National Weather Service partners, the National Severe Storms Laboratory (NSSL), and the Hazardous Weather Testbed's Spring Program. SPoRT continues to use its effective paradigm of matching capabilities to forecast problems through collaborations with our end users and working with the developers at NSSL to create effective evaluations and visualizations. Furthermore, SPoRT continues to develop software plug-ins so that these products will be available to forecasters in their own decision support system, AWIPS and eventually AWIPS II. In 2009, the SPoRT program developed the original pseudo-geostationary lightning mapper (PGLM) flash extent product to demonstrate what forecasters may see with GLM. The PGLM replaced the previous GLM product and serves as a stepping-stone until the AWG's official GLM proxy is ready. The PGLM algorithm is simple and can be applied to any ground-based total lightning network. For 2011, the PGLM used observations from four ground-based networks (North Alabama, Kennedy Space Center, Oklahoma, and Washington D.C.). While the PGLM is not a true proxy product, it is intended as a tool to train forecasters about total lightning as well as foster discussions on product visualizations and incorporating GLM-resolution data into forecast operations. The PGLM has been used in 2010 and 2011 and is likely to remain the primary lightning training tool for the GOES-R program for the near future. This presentation emphasizes the feedback received during the 2011 Spring Program and covers several topics. Based on feedback from the 2010 Spring Program, SPoRT created two variant PGLM products, which NSSL produced locally and provided in real time within AWIPS for 2011. The first is the flash initiation density (FID) product, which creates a gridded display showing the number of flashes that originated in each 8 × 8 km grid box. The second product is the maximum flash density (MFD), which shows the highest PGLM value for each grid point over a specific period of time, ranging from 30 to 120 minutes. In addition to the evaluation of these two new products, the evaluation of the PGLM itself will be covered. The presentation will conclude with forecaster feedback on additional improvements requested for future evaluations, such as within the 2012 Spring Program.

  15. Using phenomenological models for forecasting the 2015 Ebola challenge.

    PubMed

    Pell, Bruce; Kuang, Yang; Viboud, Cecile; Chowell, Gerardo

    2018-03-01

    The rising number of novel pathogens threatening the human population has motivated the application of mathematical modeling for forecasting the trajectory and size of epidemics. We summarize the real-time forecasting results of the logistic equation during the 2015 Ebola challenge, which focused on predicting synthetic data derived from a detailed individual-based model of Ebola transmission dynamics and control. We also carry out a post-challenge comparison of two simple phenomenological models. In particular, we systematically compare the logistic growth model and a recently introduced generalized Richards model (GRM) that captures a range of early epidemic growth profiles ranging from sub-exponential to exponential growth. Specifically, we assess the performance of each model for estimating the reproduction number, generating short-term forecasts of the epidemic trajectory, and predicting the final epidemic size. During the challenge, the logistic equation consistently underestimated the final epidemic size, peak timing and the number of cases at peak timing, with average mean absolute percentage errors (MAPE) of 0.49, 0.36 and 0.40, respectively. Post-challenge, the GRM, which has the flexibility to reproduce a range of epidemic growth profiles from early sub-exponential to exponential growth dynamics, outperformed the logistic growth model in ascertaining the final epidemic size as more incidence data was made available, while the logistic model underestimated the final epidemic size even with an increasing amount of data on the evolving epidemic. Incidence forecasts provided by the generalized Richards model performed better across all scenarios and time points than the logistic growth model, with the mean RMS decreasing from 78.00 (logistic) to 60.80 (GRM). Both models provided reasonable predictions of the effective reproduction number, but the GRM slightly outperformed the logistic growth model with a MAPE of 0.08 compared to 0.10, averaged across all scenarios and time points. Our findings further support the consideration of transmission models that incorporate flexible early epidemic growth profiles in the forecasting toolkit. Such models are particularly useful for quickly evaluating a developing infectious disease outbreak using only the case incidence time series of its early phase. Copyright © 2016 The Authors. Published by Elsevier B.V. All rights reserved.
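
    The two phenomenological models compared in the paper can be written as ordinary differential equations for cumulative case counts; the sketch below integrates both with SciPy. The parameter values are illustrative assumptions only, not fitted to the challenge data.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Sketch of the two phenomenological growth models, written as ODEs for the
# cumulative case count C(t). Parameter values are illustrative, not fitted.

def logistic(t, C, r, K):
    return r * C * (1.0 - C / K)

def generalized_richards(t, C, r, K, p, a):
    # p < 1 gives sub-exponential early growth; p = a = 1 recovers the logistic.
    return r * C**p * (1.0 - (C / K)**a)

t_eval = np.linspace(0.0, 150.0, 151)
log_sol = solve_ivp(logistic, (0, 150), [5.0], args=(0.15, 8000.0), t_eval=t_eval)
grm_sol = solve_ivp(generalized_richards, (0, 150), [5.0],
                    args=(0.5, 8000.0, 0.7, 1.0), t_eval=t_eval)

print("logistic final size:", log_sol.y[0, -1])
print("GRM final size:     ", grm_sol.y[0, -1])
```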

  16. Comparative analysis of GPS-derived TEC estimates and foF2 observations during storm conditions towards the expansion of ionospheric forecasting capabilities over Europe

    NASA Astrophysics Data System (ADS)

    Tsagouri, Ioanna; Belehaki, Anna; Elias, Panagiotis

    2017-04-01

    This paper builds the discussion on a comparative analysis of the variations in the peak electron density of the F2 layer and the TEC parameter during a significant number of geomagnetic storm events that occurred in the present solar cycle 24. The ionospheric disturbances are determined through the comparison of actual observations of the foF2 critical frequency and GPS-TEC estimates obtained over European locations with the corresponding median estimates, and they are analysed in conjunction with the solar wind conditions at the L1 point monitored by the ACE spacecraft. The quantification of the storm impact on the TEC parameter, in terms of possible limitations introduced by different TEC derivation methods, is carefully addressed. The results reveal similarities and differences in the response of the two parameters with respect to the solar wind drivers of the storms, as well as to the local time and the latitude of the observation point. The aforementioned dependences drive the storm-time forecasts of the SWIF model (Solar Wind driven autoregressive model for Ionospheric short-term Forecast), which is operationally implemented in the DIAS system (http://dias.space.noa.gr) and has been extensively tested in performance on several occasions. In its present version, the model provides alerts and warnings for upcoming ionospheric disturbances, as well as single-site and regional forecasts of the foF2 characteristic over Europe up to 24 hours ahead, based on the assessment of the solar wind conditions at the ACE location. In that respect, the results obtained here support the upgrade of SWIF's modeling technique for forecasting the storm-time TEC variation within an operational environment several hours in advance. Preliminary results on the evaluation of the model's efficiency in TEC prediction are also discussed, giving special attention to the assessment of the capabilities given the TEC-derivation uncertainties, for future discussion.

  17. Forecast Method of Solar Irradiance with Just-In-Time Modeling

    NASA Astrophysics Data System (ADS)

    Suzuki, Takanobu; Goto, Yusuke; Terazono, Takahiro; Wakao, Shinji; Oozeki, Takashi

    PV power output mainly depends on the solar irradiance, which is affected by various meteorological factors. Therefore, predicting future solar irradiance is required for the efficient operation of PV systems. In this paper, we develop a novel approach for solar irradiance forecasting in which a black-box model (Just-In-Time modeling) is combined with a physical model (GPV data). We investigate the predictive accuracy of solar irradiance over the wide controlled area of each electric power company by utilizing measured data from the 44 observation points throughout Japan offered by JMA and the 64 points around Kanto offered by NEDO. Finally, we propose a method for applying the solar irradiance forecast to points for which compiling a database is difficult, and we consider the influence of different GPV default times on solar irradiance prediction.
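
    The sketch below gives a generic flavour of the just-in-time (lazy, local-regression) idea referred to above: for each query, the k most similar historical cases are retrieved and a local linear model is fitted on the fly. The data, feature choices and neighbourhood size are illustrative assumptions, not the JMA/NEDO datasets or the authors' actual model.

```python
import numpy as np

# Rough sketch of just-in-time (lazy) modeling: retrieve the k nearest
# historical situations for each query and fit a local linear model on the fly.
# Data are synthetic stand-ins, not the JMA/NEDO observations.

def jit_predict(X_db, y_db, x_query, k=20):
    dist = np.linalg.norm(X_db - x_query, axis=1)
    idx = np.argsort(dist)[:k]                       # k nearest historical cases
    A = np.column_stack([X_db[idx], np.ones(k)])     # local linear model with bias
    coef, *_ = np.linalg.lstsq(A, y_db[idx], rcond=None)
    return np.append(x_query, 1.0) @ coef

rng = np.random.default_rng(0)
X_db = rng.uniform(0, 1, size=(500, 3))              # e.g., GPV-derived predictors
y_db = 800 * X_db[:, 0] * (1 - 0.5 * X_db[:, 1]) + 50 * X_db[:, 2]  # irradiance-like target
print(jit_predict(X_db, y_db, np.array([0.6, 0.3, 0.5])))
```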

  18. Using a Software Tool in Forecasting: a Case Study of Sales Forecasting Taking into Account Data Uncertainty

    NASA Astrophysics Data System (ADS)

    Fabianová, Jana; Kačmáry, Peter; Molnár, Vieroslav; Michalik, Peter

    2016-10-01

    Forecasting is one of the logistics activities, and a sales forecast is the starting point for the elaboration of business plans. Forecast accuracy affects business outcomes and ultimately may significantly affect the economic stability of the company. The accuracy of the prediction depends on the suitability of the forecasting methods used, experience, the quality of input data, the time period, and other factors. The input data are usually not deterministic but are often of a random nature; they are affected by uncertainties of the market environment and many other factors. By taking the input data uncertainty into account, the forecast error can be reduced. This article deals with the use of a software tool for incorporating data uncertainty into forecasting. A forecasting approach is proposed, and the impact of uncertain input parameters on the target forecasted value is simulated with a case study model. A statistical analysis and risk analysis of the forecast results are carried out, including sensitivity analysis and variables impact analysis.
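
    A minimal Monte Carlo sketch of the general idea (propagating uncertain inputs through a sales-forecast relation so that the target value becomes a distribution) is shown below. The distributions, parameters and revenue relation are illustrative assumptions, not the case-study model or the commercial software tool used in the paper.

```python
import numpy as np

# Minimal Monte Carlo sketch: uncertain inputs to a sales forecast are drawn
# from assumed distributions and propagated to the target value, so the
# forecast becomes a distribution rather than a point. All distributions and
# parameters are illustrative assumptions only.

rng = np.random.default_rng(42)
n = 100_000

base_demand = rng.normal(10_000, 800, n)        # units per month
growth = rng.triangular(-0.02, 0.03, 0.08, n)   # uncertain market growth
price = rng.uniform(9.5, 11.0, n)               # uncertain selling price

revenue = base_demand * (1.0 + growth) * price  # forecasted monthly revenue

print("mean forecast:", revenue.mean())
print("5th-95th percentile:", np.percentile(revenue, [5, 95]))
```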

  19. Evaluation of Real-Time Convection-Permitting Precipitation Forecasts in China During the 2013-2014 Summer Season

    NASA Astrophysics Data System (ADS)

    Zhu, Kefeng; Xue, Ming; Zhou, Bowen; Zhao, Kun; Sun, Zhengqi; Fu, Peiling; Zheng, Yongguang; Zhang, Xiaoling; Meng, Qingtao

    2018-01-01

    Forecasts at a 4 km convection-permitting resolution over China during the summer season have been produced with the Weather Research and Forecasting model at Nanjing University since 2013. Precipitation forecasts from 2013 to 2014 are evaluated with dense rain gauge observations and compared with operational global model forecasts. Overall, the 4 km forecasts show very good agreement with observations over most parts of China, outperforming global forecasts in terms of spatial distribution, intensity, and diurnal variation. Quantitative evaluations with the Gilbert skill score further confirm the better performance of the 4 km forecasts over global forecasts for heavy precipitation, especially for the thresholds of 100 and 150 mm d-1. Besides bulk characteristics, the representations of some unique features of summer precipitation in China under the influence of the East Asian summer monsoon are further evaluated. These include the northward progression and southward retreat of the main rainband through the summer season, the diurnal variations of precipitation, and the meridional and zonal propagation of precipitation episodes associated with background synoptic flow and the embedded mesoscale convective systems. The 4 km forecast is able to faithfully reproduce most of the features while overprediction of afternoon convection near the southern China coast is found to be a main deficiency that requires further investigations.

  20. Toward improving hurricane forecasts using the JPL Tropical Cyclone Information System (TCIS): A framework to address the issues of Big Data

    NASA Astrophysics Data System (ADS)

    Hristova-Veleva, S. M.; Boothe, M.; Gopalakrishnan, S.; Haddad, Z. S.; Knosp, B.; Lambrigtsen, B.; Li, P.; montgomery, M. T.; Niamsuwan, N.; Tallapragada, V. S.; Tanelli, S.; Turk, J.; Vukicevic, T.

    2013-12-01

    Accurate forecasting of extreme weather requires the use of both regional models as well as global General Circulation Models (GCMs). The regional models have higher resolution and more accurate physics - two critical components needed for properly representing the key convective processes. GCMs, on the other hand, have better depiction of the large-scale environment and, thus, are necessary for properly capturing the important scale interactions. But how do we evaluate the models, understand their shortcomings and improve them? Satellite observations can provide invaluable information. And this is where the issues of Big Data come in: satellite observations are very complex and highly varied, while model forecasts are very voluminous. We are developing a system - TCIS - that addresses the issues of model evaluation and process understanding with the goal of improving the accuracy of hurricane forecasts. This NASA/ESTO/AIST-funded project aims at bringing satellite/airborne observations and model forecasts into a common system and developing on-line tools for joint analysis. To properly evaluate the models we go beyond the comparison of the geophysical fields. We input the model fields into instrument simulators (NEOS3, CRTM, etc.) and compute synthetic observations for a more direct comparison to the observed parameters. In this presentation we will start by describing the scientific questions. We will then outline our current framework to provide fusion of models and observations. Next, we will illustrate how the system can be used to evaluate several models (HWRF, GFS, ECMWF) by applying a couple of our analysis tools to several hurricanes observed during the 2013 season. Finally, we will outline our future plans. Our goal is to go beyond the image comparison and point-by-point statistics, by focusing instead on understanding multi-parameter correlations and providing robust statistics. By developing on-line analysis tools, our framework will allow for consistent model evaluation, providing results that are much more robust than those produced by case studies - the current paradigm imposed by the Big Data issues (voluminous data and incompatible analysis tools). We believe that this collaborative approach, with contributions of models, observations and analysis approaches used by the research and operational communities, will help untangle the complex interactions that lead to hurricane genesis and rapid intensity changes - two processes that still pose many unanswered questions. The developed framework for evaluation of the global models will also have implications for the improvement of the climate models, which output only a limited amount of information, making them difficult to evaluate. Our TCIS will help by investigating the GCMs under current weather scenarios and with much more detailed model output, making it possible to compare the models to multiple observed parameters to help narrow down the uncertainty in their performance. This knowledge could then be transferred to the climate models to lower the uncertainty in their predictions. The work described here was performed at the Jet Propulsion Laboratory, California Institute of Technology, under contract with the National Aeronautics and Space Administration.

  1. Predictability of Bristol Bay, Alaska, sockeye salmon returns one to four years in the future

    USGS Publications Warehouse

    Adkison, Milo D.; Peterson, R.M.

    2000-01-01

    Historically, forecast error for returns of sockeye salmon Oncorhynchus nerka to Bristol Bay, Alaska, has been large. Using cross-validation forecast error as our criterion, we selected forecast models for each of the nine principal Bristol Bay drainages. Competing forecast models included stock-recruitment relationships, environmental variables, prior returns of siblings, or combinations of these predictors. For most stocks, we found prior returns of siblings to be the best single predictor of returns; however, forecast accuracy was low even when multiple predictors were considered. For a typical drainage, an 80% confidence interval ranged from one half to double the point forecast. These confidence intervals appeared to be appropriately wide.

  2. Statistical Analysis of Model Data for Operational Space Launch Weather Support at Kennedy Space Center and Cape Canaveral Air Force Station

    NASA Technical Reports Server (NTRS)

    Bauman, William H., III

    2010-01-01

    The 12-km resolution North American Mesoscale (NAM) model (MesoNAM) is used by the 45th Weather Squadron (45 WS) Launch Weather Officers at Kennedy Space Center (KSC) and Cape Canaveral Air Force Station (CCAFS) to support space launch weather operations. The 45 WS tasked the Applied Meteorology Unit to conduct an objective statistics-based analysis of MesoNAM output compared to wind tower mesonet observations and then develop an operational tool to display the results. The National Centers for Environmental Prediction began running the current version of the MesoNAM in mid-August 2006. The period of record for the dataset was 1 September 2006 - 31 January 2010. The AMU evaluated MesoNAM hourly forecasts from 0 to 84 hours based on model initialization times of 00, 06, 12 and 18 UTC. The MesoNAM forecast winds, temperature and dew point were compared to the observed values of these parameters from the sensors in the KSC/CCAFS wind tower network. The data sets were stratified by model initialization time, month and onshore/offshore flow for each wind tower. Statistics computed included bias (mean difference), standard deviation of the bias, root mean square error (RMSE) and a hypothesis test for bias = 0. Twelve wind towers located in close proximity to key launch complexes were used for the statistical analysis, with the sensors on the towers positioned at varying heights including 6 ft, 30 ft, 54 ft, 60 ft, 90 ft, 162 ft, 204 ft and 230 ft, depending on the launch vehicle and associated weather launch commit criteria being evaluated. These twelve wind towers support activities for the Space Shuttle (launch and landing), Delta IV, Atlas V and Falcon 9 launch vehicles. For all twelve towers, the results indicate a diurnal signal in the bias of temperature (T) and a weaker but discernible diurnal signal in the bias of dewpoint temperature (T(sub d)) in the MesoNAM forecasts. Also, the standard deviation of the bias and the RMSE of T, T(sub d), wind speed and wind direction indicated that the model error increased with the forecast period for all four parameters. Hypothesis testing uses statistics to determine the probability that a given hypothesis is true. The goal of using the hypothesis test was to determine if the model bias of any of the parameters assessed throughout the model forecast period was statistically zero. For this dataset, if the test produced a value between -1.96 and 1.96 for a data point, then the bias at that point was effectively zero and the model forecast for that point was considered to have no error. A graphical user interface (GUI) was developed so the 45 WS would have an operational tool at their disposal that would be easy to navigate among the multiple stratifications of information, including tower locations, month, model initialization times, sensor heights and onshore/offshore flow. The AMU developed the GUI using HyperText Markup Language (HTML) so the tool could be used in most popular web browsers with computers running different operating systems such as Microsoft Windows and Linux.
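
    The statistics named above (bias, standard deviation of the bias, RMSE, and a test of bias = 0) can be computed as in the sketch below for one tower, sensor height and parameter. The arrays are synthetic stand-ins, and the z-type test shown is an assumption about the form of the hypothesis test, not a description of the AMU's exact procedure.

```python
import numpy as np

# Sketch of the verification statistics for one tower/parameter pair: bias,
# its standard deviation, RMSE, and a z-type test of bias = 0. Arrays are
# synthetic stand-ins for model forecasts and tower observations.

def bias_stats(forecast: np.ndarray, observed: np.ndarray):
    diff = forecast - observed
    bias = diff.mean()
    sd = diff.std(ddof=1)
    rmse = np.sqrt(np.mean(diff**2))
    z = bias / (sd / np.sqrt(len(diff)))   # |z| <= 1.96 -> bias statistically zero (95%)
    return bias, sd, rmse, z

rng = np.random.default_rng(1)
obs = 25.0 + rng.normal(0.0, 2.0, 500)           # e.g., 2 m temperature (deg C)
fcst = obs + 0.3 + rng.normal(0.0, 1.5, 500)     # model with a small warm bias
print(bias_stats(fcst, obs))
```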

  3. Evaluation of the Regional Atmospheric Modeling System in the Eastern Range Dispersion Assessment System

    NASA Technical Reports Server (NTRS)

    Case, Jonathan

    2000-01-01

    The Applied Meteorology Unit is conducting an evaluation of the Regional Atmospheric Modeling System (RAMS) contained within the Eastern Range Dispersion Assessment System (ERDAS). ERDAS provides emergency response guidance for operations at the Cape Canaveral Air Force Station and the Kennedy Space Center in the event of an accidental hazardous material release or aborted vehicle launch. The prognostic data from RAMS is available to ERDAS for display and is used to initialize the 45th Range Safety (45 SW/SE) dispersion model. Thus, the accuracy of the 45 SW/SE dispersion model is dependent upon the accuracy of RAMS forecasts. The RAMS evaluation task consists of an objective and subjective component for the Florida warm and cool seasons of 1999-2000. The objective evaluation includes gridded and point error statistics at surface and upper-level observational sites, a comparison of the model errors to a coarser grid configuration of RAMS, and a benchmark of RAMS against the widely accepted Eta model. The warm-season subjective evaluation involves a verification of the onset and movement of the Florida east coast sea breeze and RAMS forecast precipitation. This interim report provides a summary of the RAMS objective and subjective evaluation for the 1999 Florida warm season only.

  4. Verification of FLYSAFE Clear Air Turbulence (CAT) objects against aircraft turbulence measurements

    NASA Astrophysics Data System (ADS)

    Lunnon, R.; Gill, P.; Reid, L.; Mirza, A.

    2009-09-01

    Prediction of gridded CAT fields: The main causes of CAT are (a) vertical wind shear (low Richardson number), (b) mountain waves and (c) convection. All three causes contribute roughly equally to CAT occurrences globally. Prediction of shear-induced CAT: The prediction of shear-induced CAT has a longer history than that of either mountain-wave-induced or convectively induced CAT. Both Global Aviation Forecasting Centres are currently using the Ellrod TI1 algorithm (Ellrod and Knapp, 1992). This predictor is the product of deformation and vertical wind shear. More sophisticated algorithms can amplify errors in non-linear, differentiated quantities, so it is very likely that Ellrod will out-perform other algorithms when verified globally. Prediction of mountain wave CAT: The Global Aviation Forecasting Centre in the UK has been generating automated forecasts of mountain wave CAT since the late 1990s, based on the diagnosis of gravity wave drag. Generation of CAT objects: In the FLYSAFE project it was decided at an early stage that short-range forecasts of meteorological hazards, i.e. icing, clear air turbulence and cumulonimbus clouds, should be represented as weather objects, that is, descriptions of individual hazardous volumes of airspace. For CAT, the forecast information on which the weather objects were based was gridded: it comprised a representation of a hazard level for all points in a pre-defined 3-D grid, for a range of forecast times. A "grid-to-objects" capability was generated; this is discussed further in Mirza and Drouin (this conference). Verification of CAT forecasts: Verification was performed using digital accelerometer data from aircraft in the British Airways Boeing 747 fleet. A preliminary processing of the aircraft data was performed to generate a truth field on a scale similar to that used to provide gridded forecasts to airlines. This truth field was binary, i.e. each flight segment was characterised as being either "turbulent" or "benign". A gridded forecast field is a continuously varying quantity; in contrast, a simple weather object must be characterised by a specific threshold. For a gridded forecast and a binary truth measure it is possible to generate Relative Operating Characteristic (ROC) curves. For weather objects, a single point in the hit-rate/false-alarm-rate space can be generated. If this point is plotted on a ROC curve graph, then the skill of the forecast using weather objects can be compared with the skill of the gridded forecast.
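
    The verification step described above reduces, for each weather-object threshold, to a single point in hit-rate/false-alarm-rate space, whereas a gridded forecast evaluated at several thresholds traces out a ROC curve. The sketch below computes such points from 2x2 counts; the counts themselves are made up for illustration.

```python
# Sketch of how a single weather-object forecast becomes one point in
# hit-rate / false-alarm-rate space, while a graded (gridded) forecast
# evaluated at several thresholds traces out a ROC curve. Counts are invented.

def roc_point(hits, misses, false_alarms, correct_negatives):
    pod = hits / (hits + misses)                               # hit rate
    pofd = false_alarms / (false_alarms + correct_negatives)   # false-alarm rate
    return pofd, pod

# CAT objects verified against "turbulent"/"benign" flight segments:
print(roc_point(hits=42, misses=18, false_alarms=130, correct_negatives=2810))

# A gridded forecast evaluated at several thresholds gives several ROC points:
for thr_counts in [(55, 5, 600, 2340), (48, 12, 310, 2630), (30, 30, 90, 2850)]:
    print(roc_point(*thr_counts))
```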

  5. Multiple "buy buttons" in the brain: Forecasting chocolate sales at point-of-sale based on functional brain activation using fMRI.

    PubMed

    Kühn, Simone; Strelow, Enrique; Gallinat, Jürgen

    2016-08-01

    We set out to forecast consumer behaviour in a supermarket based on functional magnetic resonance imaging (fMRI). Data were collected while participants viewed six chocolate bar communications and product pictures before and after each communication. Self-reported liking judgements were then collected. fMRI data were extracted from a priori selected brain regions: the nucleus accumbens, medial orbitofrontal cortex, amygdala, hippocampus, inferior frontal gyrus and dorsomedial prefrontal cortex were assumed to contribute positively to sales, while the dorsolateral prefrontal cortex and insula were hypothesized to contribute negatively. The resulting values were rank ordered. After our fMRI-based forecast, an in-store test was conducted in a supermarket on n = 63,617 shoppers. Changes in sales were best forecasted by the fMRI signal during communication viewing, second best by a comparison of the brain signal during product viewing before and after communication, and least by explicit liking judgements. The results demonstrate the feasibility of applying neuroimaging methods in a relatively small sample to correctly forecast sales changes at the point-of-sale. Copyright © 2016. Published by Elsevier Inc.

  6. How is the weather? Forecasting inpatient glycemic control

    PubMed Central

    Saulnier, George E; Castro, Janna C; Cook, Curtiss B; Thompson, Bithika M

    2017-01-01

    Aim: Apply methods of damped trend analysis to forecast inpatient glycemic control. Method: Observed and calculated point-of-care blood glucose data trends were determined over 62 weeks. Mean absolute percent error was used to calculate differences between observed and forecasted values. Comparisons were drawn between model results and linear regression forecasting. Results: The forecasted mean glucose trends observed during the first 24 and 48 weeks of projections compared favorably to the results provided by linear regression forecasting. However, in some scenarios, the damped trend method changed inferences compared with linear regression. In all scenarios, mean absolute percent error values remained below the 10% accepted by demand industries. Conclusion: Results indicate that forecasting methods historically applied within demand industries can project future inpatient glycemic control. Additional study is needed to determine if forecasting is useful in the analyses of other glucometric parameters and, if so, how to apply the techniques to quality improvement. PMID:29134125
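
    A minimal damped-trend (Holt-type) exponential smoothing forecaster, together with the mean absolute percent error used as the accuracy criterion, is sketched below. The smoothing parameters and the synthetic weekly glucose series are illustrative assumptions, not the study's data or fitted model.

```python
import numpy as np

# Minimal damped-trend (Holt) exponential smoothing sketch plus the MAPE used
# to judge accuracy. Smoothing parameters and the weekly series are
# illustrative assumptions, not the study's data.

def damped_trend_forecast(y, alpha=0.4, beta=0.1, phi=0.9, horizon=24):
    level, trend = y[0], y[1] - y[0]
    for t in range(1, len(y)):
        prev_level = level
        level = alpha * y[t] + (1 - alpha) * (level + phi * trend)
        trend = beta * (level - prev_level) + (1 - beta) * phi * trend
    damp = np.cumsum(phi ** np.arange(1, horizon + 1))   # damped trend extrapolation
    return level + damp * trend

def mape(actual, predicted):
    actual, predicted = np.asarray(actual), np.asarray(predicted)
    return np.mean(np.abs((actual - predicted) / actual))

weekly_glucose = 160 - 0.2 * np.arange(62) + np.random.default_rng(3).normal(0, 2, 62)
train, test = weekly_glucose[:48], weekly_glucose[48:]
fc = damped_trend_forecast(train, horizon=len(test))
print("MAPE:", mape(test, fc))   # typically a few percent for this synthetic series
```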

  7. Supplier Short Term Load Forecasting Using Support Vector Regression and Exogenous Input

    NASA Astrophysics Data System (ADS)

    Matijaš, Marin; Vukićcević, Milan; Krajcar, Slavko

    2011-09-01

    In power systems, the task of load forecasting is important for keeping the equilibrium between production and consumption. With the liberalization of electricity markets, the task of load forecasting has changed because each market participant has to forecast its own load. The consumption of end-consumers is stochastic in nature. Due to competition, suppliers are not in a position to transfer their costs to end-consumers; it is therefore essential to keep the forecasting error as low as possible. Numerous papers investigate load forecasting from the perspective of the grid or production planning. We research forecasting models from the perspective of a supplier. In this paper, we investigate different combinations of exogenous input on simulated supplier loads and show that using points of delivery as a feature for Support Vector Regression leads to lower forecasting error, while adding customer numbers to the different datasets does the opposite.

  8. A temporal-spatial postprocessing model for probabilistic run-off forecast. With a case study from Ulla-Førre with five catchments and ten lead times

    NASA Astrophysics Data System (ADS)

    Engeland, K.; Steinsland, I.

    2012-04-01

    This work is driven by the needs of next-generation short-term optimization methodology for hydro power production. Stochastic optimization is about to be introduced, i.e. optimizing when available resources (water) and utility (prices) are uncertain. In this paper we focus on the available resources, i.e. water, where uncertainty mainly comes from uncertainty in future runoff. When optimizing a water system, all catchments and several lead times have to be considered simultaneously. Depending on the system of hydropower reservoirs, it might be a set of headwater catchments, a system of upstream/downstream reservoirs where water used from one catchment/dam arrives in a lower catchment maybe days later, or a combination of both. The aim of this paper is therefore to construct a simultaneous probabilistic forecast for several catchments and lead times, i.e. to provide a predictive distribution for the forecasts. Stochastic optimization methods need samples/ensembles of run-off forecasts as input. Hence, it should also be possible to sample from our probabilistic forecast. A post-processing approach is taken, and an error model based on a Box-Cox power transformation and a temporal-spatial copula model is used. It accounts for both between-catchment and between-lead-time dependencies. In operational use it is straightforward to sample run-off ensembles from this model that inherit the catchment and lead-time dependencies. The methodology is tested and demonstrated in the Ulla-Førre river system, and simultaneous probabilistic forecasts for five catchments and ten lead times are constructed. The methodology has enough flexibility to model operationally important features in this case study such as heteroscedasticity, lead-time-varying temporal dependency and lead-time-varying inter-catchment dependency. Our model is evaluated using the CRPS for the marginal predictive distributions and the energy score for the joint predictive distribution. It is tested against a deterministic run-off forecast, a climatology forecast and a persistent forecast, and is found to be the better probabilistic forecast for lead times greater than two. From an operational point of view the results are interesting, as the between-catchment dependency gets stronger with longer lead times.

  9. Evaluation and economic value of winter weather forecasts

    NASA Astrophysics Data System (ADS)

    Snyder, Derrick W.

    State and local highway agencies spend millions of dollars each year to deploy winter operation teams to plow snow and de-ice roadways. Accurate and timely weather forecast information is critical for effective decision making. Students from Purdue University partnered with the Indiana Department of Transportation to create an experimental winter weather forecast service for the 2012-2013 winter season in Indiana to assist in achieving these goals. One forecast product, an hourly timeline of winter weather hazards produced daily, was evaluated for quality and economic value. Verification of the forecasts was performed with data from the Rapid Refresh numerical weather model. Two objective verification criteria were developed to evaluate the performance of the timeline forecasts. Using both criteria, the timeline forecasts had issues with reliability and discrimination, systematically over-forecasting the amount of winter weather that was observed while also missing significant winter weather events. Despite these quality issues, the forecasts still showed significant, but varied, economic value compared to climatology. The economic value of the forecasts was estimated to be $29.5 million or $4.1 million, depending on the verification criteria used. Limitations of this valuation system are discussed and a framework is developed for more thorough studies in the future.

  10. Integrating Wind Profiling Radars and Radiosonde Observations with Model Point Data to Develop a Decision Support Tool to Assess Upper-Level Winds for Space Launch

    NASA Technical Reports Server (NTRS)

    Bauman, William H., III; Flinn, Clay

    2013-01-01

    On the day of launch, the 45th Weather Squadron (45 WS) Launch Weather Officers (LWOs) monitor the upper-level winds for their launch customers, including NASA's Launch Services Program and NASA's Ground Systems Development and Operations Program. They currently do not have the capability to display and overlay profiles of upper-level observations and numerical weather prediction model forecasts. The LWOs requested the Applied Meteorology Unit (AMU) develop a tool in the form of a graphical user interface (GUI) that will allow them to plot upper-level wind speed and direction observations from the Kennedy Space Center (KSC) 50 MHz tropospheric wind profiling radar, the KSC Shuttle Landing Facility 915 MHz boundary layer wind profiling radar and the Cape Canaveral Air Force Station (CCAFS) Automated Meteorological Processing System (AMPS) radiosondes, and then overlay forecast wind profiles from model point data, including the North American Mesoscale (NAM) model, the Rapid Refresh (RAP) model and the Global Forecast System (GFS) model, to assess the performance of these models. The AMU developed an Excel-based tool that provides an objective method for the LWOs to compare the model-forecast upper-level winds to the KSC wind profiling radar and CCAFS AMPS observations to assess the models' potential to accurately forecast changes in the upper-level profile through the launch count. The AMU wrote Excel Visual Basic for Applications (VBA) scripts to automatically retrieve model point data for CCAFS (XMR) from the Iowa State University Archive Data Server (http://mtarchive.qeol.iastate.edu) and the 50 MHz, 915 MHz and AMPS observations from the NASA/KSC Spaceport Weather Data Archive web site (http://trmm.ksc.nasa.gov). The AMU then developed code in Excel VBA to automatically ingest and format the observations and model point data in Excel to ready the data for generating Excel charts for the LWOs. The resulting charts allow the LWOs to independently compare the three models' 0-hour forecasts (initializations) against the observations to determine which is the best-performing model, and then overlay the model forecasts on time-matched observations during the launch countdown to further assess the model performance and forecasts. This paper will demonstrate the integration of observed and predicted atmospheric conditions into a decision support tool and demonstrate how the GUI is implemented in operations.

  11. MAG4 versus alternative techniques for forecasting active region flare productivity

    PubMed Central

    Falconer, David A; Moore, Ronald L; Barghouty, Abdulnasser F; Khazanov, Igor

    2014-01-01

    MAG4 is a technique of forecasting an active region's rate of production of major flares in the coming few days from a free magnetic energy proxy. We present a statistical method of measuring the difference in performance between MAG4 and comparable alternative techniques that forecast an active region's major-flare productivity from alternative observed aspects of the active region. We demonstrate the method by measuring the difference in performance between the "Present MAG4" technique and each of three alternative techniques, called "McIntosh Active-Region Class," "Total Magnetic Flux," and "Next MAG4." We do this by using (1) the MAG4 database of magnetograms and major flare histories of sunspot active regions, (2) the NOAA table of the major-flare productivity of each of 60 McIntosh active-region classes of sunspot active regions, and (3) five technique performance metrics (Heidke Skill Score, True Skill Score, Percent Correct, Probability of Detection, and False Alarm Rate) evaluated from 2000 random two-by-two contingency tables obtained from the databases. We find that (1) Present MAG4 far outperforms both McIntosh Active-Region Class and Total Magnetic Flux, (2) Next MAG4 significantly outperforms Present MAG4, (3) the performance of Next MAG4 is insensitive to the forward and backward temporal windows used, in the range of one to a few days, and (4) forecasting from the free-energy proxy in combination with either any broad category of McIntosh active-region classes or any Mount Wilson active-region class gives no significant performance improvement over forecasting from the free-energy proxy alone (Present MAG4). Key Points: (1) quantitative comparison of the performance of pairs of forecasting techniques; (2) Next MAG4 forecasts major flares more accurately than Present MAG4; (3) Present MAG4 forecasts outperform McIntosh AR Class and total magnetic flux. PMID:26213517
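
    The five performance metrics listed above are all functions of a two-by-two contingency table. The sketch below computes them from such a table; the counts are invented for illustration, and the False Alarm Rate is taken here as the false-alarm ratio b/(a+b), which may differ from the paper's exact definition.

```python
# Sketch of the five performance metrics named above, computed from a 2x2
# contingency table (a = hits, b = false alarms, c = misses, d = correct
# nulls). The counts are invented, not from the MAG4 database.

def skill_scores(a, b, c, d):
    n = a + b + c + d
    pod = a / (a + c)                               # Probability of Detection
    far = b / (a + b)                               # False Alarm Rate (ratio); an assumed definition
    pc = (a + d) / n                                # Percent Correct
    tss = a / (a + c) - b / (b + d)                 # True Skill Score
    hss = 2.0 * (a * d - b * c) / ((a + c) * (c + d) + (a + b) * (b + d))  # Heidke Skill Score
    return {"POD": pod, "FAR": far, "PC": pc, "TSS": tss, "HSS": hss}

print(skill_scores(a=60, b=25, c=15, d=400))
```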

  12. Probabilistic postprocessing models for flow forecasts for a system of catchments and several lead times

    NASA Astrophysics Data System (ADS)

    Engeland, Kolbjorn; Steinsland, Ingelin

    2014-05-01

    This study introduces a methodology for the construction of probabilistic inflow forecasts for multiple catchments and lead times, and investigates criteria for the evaluation of multivariate forecasts. A post-processing approach is used, and a Gaussian model is applied to transformed variables. The post-processing model has two main components, the mean model and the dependency model. The mean model is used to estimate the marginal distributions of forecasted inflow for each catchment and lead time, whereas the dependency model is used to estimate the full multivariate distribution of forecasts, i.e. the co-variances between catchments and lead times. In operational situations, it is a straightforward task to use the models to sample inflow ensembles which inherit the dependencies between catchments and lead times. The methodology was tested and demonstrated in the river systems linked to the Ulla-Førre hydropower complex in southern Norway, where simultaneous probabilistic forecasts for five catchments and ten lead times were constructed. The methodology exhibits sufficient flexibility to utilize deterministic flow forecasts from a numerical hydrological model as well as statistical forecasts such as persistent forecasts and sliding-window climatology forecasts. It also deals with variation in the relative weights of these forecasts with both catchment and lead time. When evaluating predictive performance in the original space using cross-validation, the case study found that it is important to include the persistent forecast for the initial lead times and the hydrological forecast for medium-term lead times. Sliding-window climatology forecasts become more important for the latest lead times. Furthermore, operationally important features in this case study, such as heteroscedasticity, lead-time-varying between-lead-time dependency and lead-time-varying between-catchment dependency, are captured. Two criteria were used for evaluating the added value of the dependency model. The first was the energy score (ES), which is a multi-dimensional generalization of the continuous ranked probability score (CRPS). ES was calculated for all lead times and catchments together, for each catchment across all lead times, and for each lead time across all catchments. The second criterion was to use the CRPS for forecasted inflows accumulated over several lead times and catchments. The results showed that ES was not very sensitive to the correct covariance structure, whereas the CRPS for accumulated flows was more suitable for evaluating the dependency model. This indicates that it is more appropriate to evaluate relevant univariate variables that depend on the dependency structure than to evaluate the multivariate forecast directly.
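
    Ensemble estimators of the two criteria discussed above are sketched below: the energy score for a multivariate forecast, and the CRPS, its one-dimensional special case, applied to an accumulated quantity. The ensemble values are synthetic and the estimator is the standard sample form, not necessarily the exact implementation used in the study.

```python
import numpy as np

# Ensemble estimators of the energy score (multivariate) and the CRPS (its
# one-dimensional special case). Ensemble values below are synthetic.

def energy_score(ensemble: np.ndarray, obs: np.ndarray) -> float:
    """ensemble: (m, d) array of members; obs: (d,) observation."""
    m = ensemble.shape[0]
    term1 = np.mean(np.linalg.norm(ensemble - obs, axis=1))
    diffs = ensemble[:, None, :] - ensemble[None, :, :]
    term2 = np.sum(np.linalg.norm(diffs, axis=2)) / (2.0 * m * m)
    return term1 - term2

def crps(ensemble: np.ndarray, obs: float) -> float:
    return energy_score(ensemble.reshape(-1, 1), np.array([obs]))

rng = np.random.default_rng(7)
ens = rng.normal(loc=[10.0, 4.0, 7.0, 3.0, 12.0], scale=1.5, size=(50, 5))  # 5 catchments
print("ES:  ", energy_score(ens, np.array([11.0, 3.5, 6.0, 2.8, 13.0])))
print("CRPS:", crps(ens.sum(axis=1), ens.sum(axis=1).mean() + 1.0))  # accumulated flow
```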

  13. Evaluation of the stability indices for the thunderstorm forecasting in the region of Belgrade, Serbia

    NASA Astrophysics Data System (ADS)

    Vujović, D.; Paskota, M.; Todorović, N.; Vučković, V.

    2015-07-01

    The pre-convective atmosphere over Serbia during the ten-year period (2001-2010) was investigated using the radiosonde data from one meteorological station and the thunderstorm observations from thirteen SYNOP meteorological stations. In order to verify their ability to forecast a thunderstorm, several stability indices were examined. Rank sum scores (RSSs) were used to segregate indices and parameters which can differentiate between a thunderstorm and no-thunderstorm event. The following indices had the best RSS values: Lifted index (LI), K index (KI), Showalter index (SI), Boyden index (BI), Total totals (TT), dew-point temperature and mixing ratio. The threshold value test was used in order to determine the appropriate threshold values for these variables. The threshold with the best skill scores was chosen as the optimal. The thresholds were validated in two ways: through the control data set, and comparing the calculated indices thresholds with the values of indices for a randomly chosen day with an observed thunderstorm. The index with the highest skill for thunderstorm forecasting was LI, and then SI, KI and TT. The BI had the poorest skill scores.

  14. Disruption Event Characterization and Forecasting in Tokamaks

    NASA Astrophysics Data System (ADS)

    Berkery, J. W.; Sabbagh, S. A.; Park, Y. S.; Ahn, J. H.; Jiang, Y.; Riquezes, J. D.; Gerhardt, S. P.; Myers, C. E.

    2017-10-01

    The Disruption Event Characterization and Forecasting (DECAF) code, being developed to meet the challenging goal of high reliability disruption prediction in tokamaks, automates data analysis to determine chains of events that lead to disruptions and to forecast their evolution. The relative timing of magnetohydrodynamic modes and other events including plasma vertical displacement, loss of boundary control, proximity to density limits, reduction of safety factor, and mismatch of the measured and desired plasma current are considered. NSTX/-U databases are examined with analysis expanding to DIII-D, KSTAR, and TCV. Characterization of tearing modes has determined mode bifurcation frequency and locking points. In an NSTX database exhibiting unstable resistive wall modes (RWM), the RWM event and loss of boundary control event were found in 100%, and the vertical displacement event in over 90% of cases. A reduced kinetic RWM stability physics model is evaluated to determine the proximity of discharges to marginal stability. The model shows high success as a disruption predictor (greater than 85%) with relatively low false positive rate. Supported by US DOE Contracts DE-FG02-99ER54524, DE-AC02-09CH11466, and DE-SC0016614.

  15. Florida Model Information eXchange System (MIXS).

    DOT National Transportation Integrated Search

    2013-08-01

    Transportation planning largely relies on travel demand forecasting, which estimates the number and type of vehicles that will use a roadway at some point in the future. Forecasting estimates are made by computer models that use a wide variety of dat...

  16. The Comparison of Point Data Models for the Output of WRF Hydro Model in the IDV

    NASA Astrophysics Data System (ADS)

    Ho, Y.; Weber, J.

    2017-12-01

    WRF Hydro netCDF output files contain streamflow, flow depth, longitude, latitude, altitude and stream order values for each forecast point. However, the data are not CF compliant. The total number of forecast points for the US CONUS is approximately 2.7 million and it is a big challenge for any visualization and analysis tool. The IDV point cloud display shows point data as a set of points colored by parameter. This display is very efficient compared to a standard point type display for rendering a large number of points. The one problem we have is that the data I/O can be a bottleneck issue when dealing with a large collection of point input files. In this presentation, we will experiment with different point data models and their APIs to access the same WRF Hydro model output. The results will help us construct a CF compliant netCDF point data format for the community.

  17. Arima model and exponential smoothing method: A comparison

    NASA Astrophysics Data System (ADS)

    Wan Ahmad, Wan Kamarul Ariffin; Ahmad, Sabri

    2013-04-01

    This study shows the comparison between the Autoregressive Integrated Moving Average (ARIMA) model and the Exponential Smoothing Method in making predictions. The comparison is focused on the ability of both methods to make forecasts with different numbers of data sources and different lengths of forecasting period. For this purpose, data on the Price of Crude Palm Oil (RM/tonne), the Exchange Rate of the Ringgit Malaysia (RM) against the Great Britain Pound (GBP), and the Price of SMR 20 Rubber Type (cents/kg), giving three different time series, are used in the comparison process. The forecasting accuracy of each model is then measured by examining the prediction error produced, using the Mean Squared Error (MSE), Mean Absolute Percentage Error (MAPE), and Mean Absolute Deviation (MAD). The study shows that the ARIMA model can produce a better prediction for long-term forecasting with limited data sources, but cannot produce a better prediction for a time series with a narrow range from one point to another, as in the time series for the Exchange Rate. On the contrary, the Exponential Smoothing Method can produce a better forecast for the Exchange Rate, which has a narrow range from one point to another in its time series, while it cannot produce a better prediction for a longer forecasting period.
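
    The comparison workflow described above can be sketched as follows using statsmodels: fit both models on a training window, forecast a hold-out period, and compare MSE, MAPE and MAD. The series, model orders and split are illustrative assumptions, not the palm oil, exchange rate or rubber data.

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA
from statsmodels.tsa.holtwinters import ExponentialSmoothing

# Sketch of the comparison: fit both models to a training window, forecast a
# hold-out period, and compare MSE, MAPE and MAD. The series is synthetic.

rng = np.random.default_rng(0)
series = 2500 + np.cumsum(rng.normal(5.0, 40.0, 120))   # monthly price-like series
train, test = series[:108], series[108:]

arima_fc = ARIMA(train, order=(1, 1, 1)).fit().forecast(steps=len(test))
es_fc = ExponentialSmoothing(train, trend="add").fit().forecast(len(test))

def errors(actual, pred):
    e = actual - pred
    return {"MSE": np.mean(e**2),
            "MAPE": np.mean(np.abs(e / actual)) * 100,
            "MAD": np.mean(np.abs(e))}

print("ARIMA:", errors(test, arima_fc))
print("ES:   ", errors(test, es_fc))
```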

  18. Using forecast modelling to evaluate treatment effects in single-group interrupted time series analysis.

    PubMed

    Linden, Ariel

    2018-05-11

    Interrupted time series analysis (ITSA) is an evaluation methodology in which a single treatment unit's outcome is studied serially over time and the intervention is expected to "interrupt" the level and/or trend of that outcome. ITSA is commonly evaluated using methods which may produce biased results if model assumptions are violated. In this paper, treatment effects are alternatively assessed by using forecasting methods to closely fit the preintervention observations and then forecast the post-intervention trend. A treatment effect may be inferred if the actual post-intervention observations diverge from the forecasts by some specified amount. The forecasting approach is demonstrated using the effect of California's Proposition 99 for reducing cigarette sales. Three forecast models are fit to the preintervention series: linear regression (REG), Holt-Winters (HW) non-seasonal smoothing, and autoregressive integrated moving average (ARIMA). Forecasts are then generated into the post-intervention period, and the actual observations are compared with the forecasts to assess intervention effects. The preintervention data were fit best by HW, followed closely by ARIMA. REG fit the data poorly. The actual post-intervention observations were above the forecasts in HW and ARIMA, suggesting no intervention effect, but below the forecasts in REG (suggesting a treatment effect), thereby raising doubts about any definitive conclusion of a treatment effect. In a single-group ITSA, treatment effects are likely to be biased if the model is misspecified. Therefore, evaluators should consider using forecast models to accurately fit the preintervention data and generate plausible counterfactual forecasts, thereby improving causal inference of treatment effects in single-group ITSA studies. © 2018 John Wiley & Sons, Ltd.
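
    A bare-bones version of the forecasting approach (fit the pre-intervention series only, forecast past the intervention point, and read the treatment effect off the gap between observations and forecasts) is sketched below with a Holt model from statsmodels. The synthetic series and the size of the level drop are assumptions, not the Proposition 99 data.

```python
import numpy as np
from statsmodels.tsa.holtwinters import Holt

# Sketch of the forecasting approach to single-group ITSA: fit a model to the
# pre-intervention series only, forecast past the intervention point, and
# treat the gap between observed and forecasted values as the effect estimate.
# The series is synthetic, not the California cigarette-sales data.

rng = np.random.default_rng(5)
pre = 120 - 1.5 * np.arange(20) + rng.normal(0, 3, 20)                 # pre-intervention trend
post = 120 - 1.5 * np.arange(20, 30) - 12 + rng.normal(0, 3, 10)       # level drop of ~12

counterfactual = Holt(pre).fit().forecast(len(post))   # forecasted "no-intervention" path
effect = post - counterfactual
print("mean estimated effect:", effect.mean())         # roughly -12 for this example
```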

  19. EVALUATION OF ETA-CMAQ O3 FORECAST OVER DIFFERENT REGIONS OF THE CONTINENTAL US AND USING NEW CATEGORICAL EVALUATION METRICS

    EPA Science Inventory

    Developmental forecast simulations with the Eta-CMAQ modeling system over the continental U.S. were initiated in 2005. This paper presents an evaluation of surface O3 forecasts over different regions of the continental U.S. In addition to the traditional operational e...

  20. Probability fire weather forecasts .. show promise in 3-year trial

    Treesearch

    Paul G. Scowcroft

    1970-01-01

    Probability fire weather forecasts were compared with categorical and climatological forecasts in a trial in southern California during the 1965-1967 fire seasons. Equations were developed to express the reliability of forecasts and degree of skill shown by the forecaster. Evaluation of 336 daily reports suggests that probability forecasts were more reliable. For...

  1. Operational early warning of shallow landslides in Norway: Evaluation of landslide forecasts and associated challenges

    NASA Astrophysics Data System (ADS)

    Dahl, Mads-Peter; Colleuille, Hervé; Boje, Søren; Sund, Monica; Krøgli, Ingeborg; Devoli, Graziella

    2015-04-01

    The Norwegian Water Resources and Energy Directorate (NVE) runs a national early warning system (EWS) for shallow landslides in Norway. The slope failures included in the EWS are debris slides, debris flows, debris avalanches and slush flows. The EWS has been operational on a national scale since 2013 and consists of (a) quantitative landslide thresholds and daily hydro-meteorological prognoses; (b) a daily qualitative expert evaluation of the prognoses and additional data in the decision to determine warning levels; and (c) publication of warning levels through various custom-built internet platforms. The effectiveness of an EWS depends on both the quality of the forecasts being issued and the communication of the forecasts to the public. In this analysis, a preliminary evaluation of landslide forecasts from the Norwegian EWS within the period 2012-2014 is presented. Criteria for categorizing forecasts as correct, missed events or false alarms are discussed, and concrete examples of forecasts falling into the latter two categories are presented. The evaluation shows a rate of correct forecasts exceeding 90%. However, correct forecast categorization is sometimes difficult, particularly due to poorly documented landslide events. Several challenges have to be met in the process of further lowering the rates of missed events and false alarms in the EWS. Among others, these include better implementation of susceptibility maps in landslide forecasting, more detailed regionalization of hydro-meteorological landslide thresholds, improved prognoses of precipitation, snowmelt and soil water content, as well as the build-up of more experience among the people performing landslide forecasting.

  2. Evaluation of Mesoscale Model Phenomenological Verification Techniques

    NASA Technical Reports Server (NTRS)

    Lambert, Winifred

    2006-01-01

    Forecasters at the Spaceflight Meteorology Group, 45th Weather Squadron, and National Weather Service in Melbourne, FL use mesoscale numerical weather prediction model output in creating their operational forecasts. These models aid in forecasting weather phenomena that could compromise the safety of launch, landing, and daily ground operations and must produce reasonable weather forecasts in order for their output to be useful in operations. Considering the importance of model forecasts to operations, their accuracy in forecasting critical weather phenomena must be verified to determine their usefulness. The currently used traditional verification techniques involve an objective point-by-point comparison of model output and observations valid at the same time and location. The resulting statistics can unfairly penalize high-resolution models that make realistic forecasts of a certain phenomenon but are offset from the observations in small time and/or space increments. Manual subjective verification can provide a more valid representation of model performance, but is time-consuming and prone to personal biases. An objective technique that verifies specific meteorological phenomena, much in the way a human would in a subjective evaluation, would likely produce a more realistic assessment of model performance. Such techniques are being developed in the research community. The Applied Meteorology Unit (AMU) was tasked to conduct a literature search to identify phenomenological verification techniques being developed, determine if any are ready to use operationally, and outline the steps needed to implement any operationally ready techniques into the Advanced Weather Information Processing System (AWIPS). The AMU conducted a search of all literature on the topic of phenomenological-based mesoscale model verification techniques and found 10 different techniques in various stages of development. Six of the techniques were developed to verify precipitation forecasts, one to verify sea breeze forecasts, and three were capable of verifying several phenomena. The AMU also determined the feasibility of transitioning each technique into operations and rated the operational capability of each technique on a subjective 1-10 scale: (1) 1 indicates that the technique is only in the initial stages of development, (2) 2-5 indicates that the technique is still undergoing modifications and is not ready for operations, (3) 6-8 indicates a higher probability of integrating the technique into AWIPS with code modifications, and (4) 9-10 indicates that the technique was created for AWIPS and is ready for implementation. Eight of the techniques were assigned a rating of 5 or below. The other two received ratings of 6 and 7, and none of the techniques received a rating of 9-10. At the current time, there are no phenomenological model verification techniques ready for operational use. However, several of the techniques described in this report may become viable techniques in the future and should be monitored for updates in the literature. The desire to use a phenomenological verification technique is widespread in the modeling community, and it is likely that other techniques besides those described herein are being developed, but the work has not yet been published. Therefore, the AMU recommends that the literature continue to be monitored for updates to the techniques described in this report and for new techniques being developed whose results have not yet been published.

  3. Combining SVM and flame radiation to forecast BOF end-point

    NASA Astrophysics Data System (ADS)

    Wen, Hongyuan; Zhao, Qi; Xu, Lingfei; Zhou, Munchun; Chen, Yanru

    2009-05-01

    Because of the complex reactions in the Basic Oxygen Furnace (BOF) used for steelmaking, the main end-point control methods face serious difficulties. To address these problems, a support vector machine (SVM) method for forecasting the BOF steelmaking end-point is presented based on flame radiation information. The rationale is that the furnace flame reflects the carbon-oxygen reaction, which is the major reaction in the steelmaking furnace. The system can acquire spectrum and image data quickly in the adverse steelmaking environment. The structure of the SVM and that of the multilayer feed-forward neural network are similar, but the SVM model can overcome the inherent defects of the latter. The model is trained and used for forecasting with SVM and appropriate variables drawn from the light and image characteristic information. The training process follows the structural risk minimization (SRM) criterion, and the design parameters are adjusted automatically according to the sampled data during training. Experimental results indicate that both the prediction precision and the execution time of the SVM model meet the requirements of online end-point judgment.
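
    The record above describes an SVM trained on flame spectrum and image features under the structural risk minimization criterion. As a rough illustration only (not the authors' system), the following Python sketch fits a support vector regression to hypothetical flame-radiation features and lets a grid search stand in for the automatic parameter adjustment; every feature name and number is invented.

        # Minimal sketch (not the authors' code): an SVM regression that maps flame
        # spectrum/image features to an end-point quantity such as carbon content.
        # Feature values and the target relationship are hypothetical placeholders.
        import numpy as np
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.svm import SVR
        from sklearn.model_selection import GridSearchCV

        rng = np.random.default_rng(0)
        # Hypothetical training set: rows = heats, columns = flame radiation features
        # (e.g. band intensities, image brightness/texture statistics).
        X = rng.normal(size=(200, 6))
        y = 0.1 + 0.02 * X[:, 0] - 0.015 * X[:, 2] + 0.005 * rng.normal(size=200)  # end-point carbon, %

        # Grid search stands in for the "automatic parameter adjustment" mentioned above.
        model = GridSearchCV(
            make_pipeline(StandardScaler(), SVR(kernel="rbf")),
            param_grid={"svr__C": [1, 10, 100], "svr__epsilon": [0.001, 0.01]},
            cv=5,
        )
        model.fit(X, y)
        print("predicted end-point carbon (%):", model.predict(X[:3]))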

  4. Bulk electric system reliability evaluation incorporating wind power and demand side management

    NASA Astrophysics Data System (ADS)

    Huang, Dange

    Electric power systems are experiencing dramatic changes with respect to structure, operation and regulation and are facing increasing pressure due to environmental and societal constraints. Bulk electric system reliability is an important consideration in power system planning, design and operation, particularly in the new competitive environment. A wide range of methods have been developed to perform bulk electric system reliability evaluation. Theoretically, sequential Monte Carlo simulation can include all aspects and contingencies in a power system and can be used to produce an informative set of reliability indices. It has become a practical and viable technique for large-system reliability assessment due to the growth in computing power, and it is used in the studies described in this thesis. The well-being approach used in this research provides the opportunity to integrate an accepted deterministic criterion into a probabilistic framework. This research work includes the investigation of important factors that impact bulk electric system adequacy evaluation and security constrained adequacy assessment using the well-being analysis framework. Load forecast uncertainty is an important consideration in an electrical power system. This research includes load forecast uncertainty considerations in bulk electric system reliability assessment, and the effects on system, load point and well-being indices and on reliability index probability distributions are examined. There has been increasing worldwide interest in the utilization of wind power as a renewable energy source over the last two decades due to enhanced public awareness of the environment. Increasing penetration of wind power has significant impacts on power system reliability, and security analyses become more uncertain due to the unpredictable nature of wind power. The effects of wind power additions in generating and bulk electric system reliability assessment considering site wind speed correlations, and the interactive effects of wind power and load forecast uncertainty on system reliability, are examined. The concept of the security cost associated with operating in the marginal state in the well-being framework is incorporated in the economic analyses associated with system expansion planning including wind power and load forecast uncertainty. Overall reliability cost/worth analyses including security cost concepts are applied to select an optimal wind power injection strategy in a bulk electric system. The effects of the various demand side management measures on system reliability are illustrated using the system, load point, and well-being indices, and the reliability index probability distributions. The reliability effects of demand side management procedures in a bulk electric system including wind power and load forecast uncertainty considerations are also investigated. The system reliability effects due to specific demand side management programs are quantified and examined in terms of their reliability benefits.
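
    Since the thesis relies on sequential Monte Carlo simulation for adequacy indices, a toy sketch of that general technique may help: it samples chronological up/down histories for a few invented generating units against an hourly load curve and accumulates loss-of-load statistics. The unit data, load model, and indices below are illustrative only and are not taken from the thesis.

        # Toy sequential Monte Carlo sketch of adequacy assessment: sample chronological
        # up/down histories for generating units and accumulate loss-of-load statistics.
        # Unit data, load model and simulation length are illustrative only.
        import numpy as np

        rng = np.random.default_rng(1)
        units = [  # (capacity in MW, failure rate per hour, repair rate per hour)
            (400, 1 / 2000.0, 1 / 50.0),
            (300, 1 / 1500.0, 1 / 40.0),
            (300, 1 / 1500.0, 1 / 40.0),
            (200, 1 / 1000.0, 1 / 30.0),
        ]
        hours = 8760
        load = 700 + 200 * np.sin(2 * np.pi * np.arange(hours) / 24)  # simple daily cycle, MW

        def simulate_year():
            """One chronological sample year: (loss-of-load hours, energy not supplied)."""
            up = np.ones(len(units), dtype=bool)
            lolh, ens = 0, 0.0
            for h in range(hours):
                for i, (_, lam, mu) in enumerate(units):
                    # Two-state Markov transition for each unit in this hour.
                    p = lam if up[i] else mu
                    if rng.random() < p:
                        up[i] = not up[i]
                available = sum(cap for (cap, _, _), u in zip(units, up) if u)
                if available < load[h]:
                    lolh += 1
                    ens += load[h] - available
            return lolh, ens

        samples = [simulate_year() for _ in range(100)]
        lole = np.mean([s[0] for s in samples])   # loss-of-load expectation, h/yr
        loee = np.mean([s[1] for s in samples])   # loss-of-energy expectation, MWh/yr
        print(f"LOLE ~ {lole:.2f} h/yr, LOEE ~ {loee:.1f} MWh/yr")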

  5. Short-Term Forecasting of Loads and Wind Power for Latvian Power System: Accuracy and Capacity of the Developed Tools

    NASA Astrophysics Data System (ADS)

    Radziukynas, V.; Klementavičius, A.

    2016-04-01

    The paper analyses the performance results of the recently developed short-term forecasting suite for the Latvian power system. The system load and wind power are forecasted using ANN and ARIMA models, respectively, and the forecasting accuracy is evaluated in terms of errors, mean absolute errors and mean absolute percentage errors. The influence of additional input variables on load forecasting errors is also investigated. The interplay of hourly load and wind power forecasting errors is evaluated for the Latvian power system with historical loads (the year 2011) and planned wind power capacities (the year 2023).
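
    The accuracy measures named above, mean absolute error and mean absolute percentage error, are standard; a minimal sketch with invented hourly loads:

        # Error measures used above for load and wind forecasts: mean absolute error (MAE)
        # and mean absolute percentage error (MAPE). Values below are illustrative.
        import numpy as np

        def mae(obs, fcst):
            obs, fcst = np.asarray(obs, float), np.asarray(fcst, float)
            return np.mean(np.abs(fcst - obs))

        def mape(obs, fcst):
            obs, fcst = np.asarray(obs, float), np.asarray(fcst, float)
            return 100.0 * np.mean(np.abs((fcst - obs) / obs))  # undefined where obs == 0

        hourly_load_obs  = [980, 1010, 1105, 1220]   # MW, illustrative
        hourly_load_fcst = [995, 1000, 1150, 1180]
        print(f"MAE  = {mae(hourly_load_obs, hourly_load_fcst):.1f} MW")
        print(f"MAPE = {mape(hourly_load_obs, hourly_load_fcst):.2f} %")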

  6. Status of the NASA GMAO Observing System Simulation Experiment

    NASA Technical Reports Server (NTRS)

    Prive, Nikki C.; Errico, Ronald M.

    2014-01-01

    An Observing System Simulation Experiment (OSSE) is a pure modeling study used when actual observations are too expensive or difficult to obtain. OSSEs are valuable tools for determining the potential impact of new observing systems on numerical weather forecasts and for evaluation of data assimilation systems (DAS). An OSSE has been developed at the NASA Global Modeling and Assimilation Office (GMAO, Errico et al 2013). The GMAO OSSE uses a 13-month integration of the European Centre for Medium-Range Weather Forecasts 2005 operational model at T511/L91 resolution for the Nature Run (NR). Synthetic observations have been updated so that they are based on real observations during the summer of 2013. The emulated observation types include AMSU-A, MHS, IASI, AIRS, and HIRS4 radiance data, GPS-RO, and conventional types including aircraft, rawinsonde, profiler, surface, and satellite winds. The synthetic satellite wind observations are colocated with the NR cloud fields, and the rawinsondes are advected during ascent using the NR wind fields. Data counts for the synthetic observations are matched as closely as possible to real data counts, as shown in Figure 2. Errors are added to the synthetic observations to emulate representativeness and instrument errors. The synthetic errors are calibrated so that the statistics of observation innovation and analysis increments in the OSSE are similar to the same statistics for assimilation of real observations, in an iterative method described by Errico et al (2013). The standard deviations of observation minus forecast (x_o - H(x_b)) are compared for the OSSE and real data in Figure 3. The synthetic errors include both random, uncorrelated errors, and an additional correlated error component for some observational types. Vertically correlated errors are included for conventional sounding data and GPS-RO, and channel correlated errors are introduced to AIRS and IASI (Figure 4). HIRS, AMSU-A, and MHS have a component of horizontally correlated error. The forecast model used by the GMAO OSSE is the Goddard Earth Observing System Model, Version 5 (GEOS-5) with Gridpoint Statistical Interpolation (GSI) DAS. The model version has been updated to v. 5.13.3, corresponding to the current operational model. Forecasts are run on a cube-sphere grid with 180 points along each edge of the cube (approximately 0.5 degree horizontal resolution) with 72 vertical levels. The DAS is cycled at 6-hour intervals, with 240-hour forecasts launched daily at 0000 UTC. Evaluation of the forecasting skill for July and August is currently underway. Prior versions of the GMAO OSSE have been found to have greater forecasting skill than real world forecasts. It is anticipated that similar forecast skill will be found in the updated OSSE.

  7. Operational forecasting of human-biometeorological conditions

    NASA Astrophysics Data System (ADS)

    Giannaros, T. M.; Lagouvardos, K.; Kotroni, V.; Matzarakis, A.

    2018-03-01

    This paper presents the development of an operational forecasting service focusing on human-biometeorological conditions. The service is based on the coupling of numerical weather prediction models with an advanced human-biometeorological model. Human thermal perception and stress forecasts are issued on a daily basis for Greece, in both point and gridded format. A user-friendly presentation approach is adopted for communicating the forecasts to the public via the worldwide web. The development of the presented service highlights the feasibility of replacing standard meteorological parameters and/or indices used in operational weather forecasting activities for assessing the thermal environment. This is of particular significance for providing effective, human-biometeorology-oriented, warnings for both heat waves and cold outbreaks.

  8. Retrospective Evaluation of the Long-Term CSEP-Italy Earthquake Forecasts

    NASA Astrophysics Data System (ADS)

    Werner, M. J.; Zechar, J. D.; Marzocchi, W.; Wiemer, S.

    2010-12-01

    On 1 August 2009, the global Collaboratory for the Study of Earthquake Predictability (CSEP) launched a prospective and comparative earthquake predictability experiment in Italy. The goal of the CSEP-Italy experiment is to test earthquake occurrence hypotheses that have been formalized as probabilistic earthquake forecasts over temporal scales that range from days to years. In the first round of forecast submissions, members of the CSEP-Italy Working Group presented eighteen five-year and ten-year earthquake forecasts to the European CSEP Testing Center at ETH Zurich. We considered the twelve time-independent earthquake forecasts among this set and evaluated them with respect to past seismicity data from two Italian earthquake catalogs. Here, we present the results of tests that measure the consistency of the forecasts with the past observations. Besides being an evaluation of the submitted time-independent forecasts, this exercise provided insight into a number of important issues in predictability experiments with regard to the specification of the forecasts, the performance of the tests, and the trade-off between the robustness of results and experiment duration.

  9. Ensemble Flow Forecasts for Risk Based Reservoir Operations of Lake Mendocino in Mendocino County, California

    NASA Astrophysics Data System (ADS)

    Delaney, C.; Hartman, R. K.; Mendoza, J.; Evans, K. M.; Evett, S.

    2016-12-01

    Forecast informed reservoir operations (FIRO) is a methodology that incorporates short- to mid-range precipitation or flow forecasts to inform the flood operations of reservoirs. Previous research and modeling for flood control reservoirs has shown that FIRO can reduce flood risk and increase water supply for many reservoirs. The risk-based method of FIRO presents a unique approach that incorporates flow forecasts made by NOAA's California-Nevada River Forecast Center (CNRFC) to model and assess the risk of meeting or exceeding identified management targets or thresholds. Forecasted risk is evaluated against set risk tolerances to set reservoir flood releases. A water management model was developed for Lake Mendocino, a 116,500 acre-foot reservoir located near Ukiah, California. Lake Mendocino is a dual-use reservoir, which is owned and operated for flood control by the United States Army Corps of Engineers and is operated by the Sonoma County Water Agency for water supply. Due to recent changes in the operations of an upstream hydroelectric facility, this reservoir has been plagued with water supply reliability issues since 2007. FIRO is applied to Lake Mendocino by simulating daily hydrologic conditions from 1985 to 2010 in the Upper Russian River from Lake Mendocino to the City of Healdsburg, approximately 50 miles downstream. The risk-based method is simulated using a 15-day, 61-member streamflow hindcast from the CNRFC. Model simulation results of risk-based flood operations demonstrate a 23% increase in average end of water year (September 30) storage levels over current operations. Model results show no increase in the occurrence of flood damages for points downstream of Lake Mendocino. This investigation demonstrates that FIRO may be a viable flood control operations approach for Lake Mendocino and warrants further investigation through additional modeling and analysis.
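
    The core risk-based step described above, comparing forecasted risk against a tolerance to set releases, can be sketched as follows; the ensemble values, threshold, and tolerance are hypothetical, and only the 116,500 acre-foot figure comes from the abstract.

        # Hedged sketch of the risk-based idea: the "risk" for a given horizon is the
        # fraction of ensemble forecast members that meet or exceed a management
        # threshold; a release adjustment is triggered when risk exceeds the tolerance.
        # Storage numbers, threshold and tolerance below are hypothetical.
        import numpy as np

        def forecast_risk(ensemble_peak_storage, threshold):
            """Fraction of members at or above the threshold (probability of exceedance)."""
            members = np.asarray(ensemble_peak_storage, float)
            return np.mean(members >= threshold)

        # 61 hypothetical members of forecasted peak storage (acre-feet) over 15 days.
        rng = np.random.default_rng(7)
        members = rng.normal(loc=108_000, scale=6_000, size=61)

        flood_pool_threshold = 116_500   # acre-feet (reservoir size quoted in the abstract)
        risk_tolerance = 0.10            # hypothetical: act if >10% of members exceed the pool

        risk = forecast_risk(members, flood_pool_threshold)
        action = "increase flood release" if risk > risk_tolerance else "hold current schedule"
        print(f"forecasted exceedance risk = {risk:.2%} -> {action}")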

  10. How do I know if my forecasts are better? Using benchmarks in hydrological ensemble prediction

    NASA Astrophysics Data System (ADS)

    Pappenberger, F.; Ramos, M. H.; Cloke, H. L.; Wetterhall, F.; Alfieri, L.; Bogner, K.; Mueller, A.; Salamon, P.

    2015-03-01

    The skill of a forecast can be assessed by comparing the relative proximity of both the forecast and a benchmark to the observations. Example benchmarks include climatology or a naïve forecast. Hydrological ensemble prediction systems (HEPS) are currently transforming the hydrological forecasting environment but in this new field there is little information to guide researchers and operational forecasters on how benchmarks can be best used to evaluate their probabilistic forecasts. In this study, it is identified that the forecast skill calculated can vary depending on the benchmark selected and that the selection of a benchmark for determining forecasting system skill is sensitive to a number of hydrological and system factors. A benchmark intercomparison experiment is then undertaken using the continuous ranked probability score (CRPS), a reference forecasting system and a suite of 23 different methods to derive benchmarks. The benchmarks are assessed within the operational set-up of the European Flood Awareness System (EFAS) to determine those that are 'toughest to beat' and so give the most robust discrimination of forecast skill, particularly for the spatial average fields that EFAS relies upon. Evaluating against an observed discharge proxy the benchmark that has most utility for EFAS and avoids the most naïve skill across different hydrological situations is found to be meteorological persistency. This benchmark uses the latest meteorological observations of precipitation and temperature to drive the hydrological model. Hydrological long term average benchmarks, which are currently used in EFAS, are very easily beaten by the forecasting system and the use of these produces much naïve skill. When decomposed into seasons, the advanced meteorological benchmarks, which make use of meteorological observations from the past 20 years at the same calendar date, have the most skill discrimination. They are also good at discriminating skill in low flows and for all catchment sizes. Simpler meteorological benchmarks are particularly useful for high flows. Recommendations for EFAS are to move to routine use of meteorological persistency, an advanced meteorological benchmark and a simple meteorological benchmark in order to provide a robust evaluation of forecast skill. This work provides the first comprehensive evidence on how benchmarks can be used in evaluation of skill in probabilistic hydrological forecasts and which benchmarks are most useful for skill discrimination and avoidance of naïve skill in a large scale HEPS. It is recommended that all HEPS use the evidence and methodology provided here to evaluate which benchmarks to employ; so forecasters can have trust in their skill evaluation and will have confidence that their forecasts are indeed better.
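
    A minimal sketch of the benchmark comparison underlying this kind of study: compute the empirical CRPS of an ensemble forecast and of a benchmark ensemble against observations, then form a skill score. The synthetic data and the particular skill-score form (one minus the CRPS ratio) are assumptions, not the EFAS implementation.

        # Minimal sketch of benchmark-relative skill: empirical CRPS for an ensemble
        # forecast and a skill score of the form 1 - CRPS_forecast / CRPS_benchmark.
        # The ensembles and observations below are synthetic.
        import numpy as np

        def crps_ensemble(members, obs):
            """Empirical CRPS for one ensemble forecast and one scalar observation."""
            x = np.asarray(members, float)
            term1 = np.mean(np.abs(x - obs))
            term2 = 0.5 * np.mean(np.abs(x[:, None] - x[None, :]))
            return term1 - term2

        rng = np.random.default_rng(3)
        n_days, n_members = 365, 51
        obs = rng.gamma(shape=2.0, scale=50.0, size=n_days)                    # "observed" discharge
        forecast = obs[:, None] + rng.normal(0, 20, size=(n_days, n_members))  # skilful system
        benchmark = rng.gamma(2.0, 50.0, size=(n_days, n_members))             # climatology-like benchmark

        crps_f = np.mean([crps_ensemble(forecast[t], obs[t]) for t in range(n_days)])
        crps_b = np.mean([crps_ensemble(benchmark[t], obs[t]) for t in range(n_days)])
        skill = 1.0 - crps_f / crps_b   # > 0 means the forecast beats the benchmark
        print(f"CRPS forecast={crps_f:.2f}, benchmark={crps_b:.2f}, skill score={skill:.2f}")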

  11. Small area population forecasting: some experience with British models.

    PubMed

    Openshaw, S; Van Der Knaap, G A

    1983-01-01

    This study is concerned with the evaluation of the various models including time-series forecasts, extrapolation, and projection procedures, that have been developed to prepare population forecasts for planning purposes. These models are evaluated using data for the Netherlands. "As part of a research project at the Erasmus University, space-time population data has been assembled in a geographically consistent way for the period 1950-1979. These population time series are of sufficient length for the first 20 years to be used to build models and then evaluate the performance of the model for the next 10 years. Some 154 different forecasting models for 832 municipalities have been evaluated. It would appear that the best forecasts are likely to be provided by either a Holt-Winters model, or a ratio-correction model, or a low order exponential-smoothing model." excerpt
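
    One of the better-performing model families mentioned above is exponential smoothing with a trend (Holt/Holt-Winters). A hedged sketch using statsmodels on an invented small-area population series, fitted on the first 20 years and checked on the next 10:

        # Sketch of the kind of model compared above: a trend exponential-smoothing
        # forecast of a small-area annual population series, fitted on the first 20 years
        # and extrapolated 10 years ahead. The series is invented for illustration.
        import numpy as np
        from statsmodels.tsa.holtwinters import ExponentialSmoothing

        years = np.arange(1950, 1980)
        population = 12_000 + 150 * (years - 1950) + np.random.default_rng(5).normal(0, 120, len(years))

        train = population[:20]                      # 1950-1969: fit the model
        model = ExponentialSmoothing(train, trend="add", seasonal=None).fit()
        forecast = model.forecast(10)                # 1970-1979: evaluate against the held-out decade

        errors = forecast - population[20:]
        print("mean absolute forecast error:", np.mean(np.abs(errors)).round(1))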

  12. A PERFORMANCE EVALUATION OF THE ETA- CMAQ AIR QUALITY FORECAST SYSTEM FOR THE SUMMER OF 2005

    EPA Science Inventory

    This poster presents an evaluation of the Eta-CMAQ Air Quality Forecast System's experimental domain using O3 observations obtained from EPA's AIRNOW program and a suite of statistical metrics examining both discrete and categorical forecasts.

  13. Results of the Clarus demonstrations : evaluation of enhanced road weather forecasting enabled by Clarus.

    DOT National Transportation Integrated Search

    2011-06-14

    This document is the final report of an evaluation of Clarus-enabled enhanced road weather forecasting used in the Clarus Demonstrations. This report examines the use of Clarus data to enhance four types of weather models and forecasts: The Local Ana...

  14. Effect of high latitude filtering on NWP skill

    NASA Technical Reports Server (NTRS)

    Kalnay, E.; Takacs, L. L.; Hoffman, R. N.

    1984-01-01

    The high-latitude filtering techniques commonly employed in global grid point models to eliminate the high-frequency waves associated with the convergence of meridians can introduce serious distortions which ultimately affect the solution at all latitudes. Experiments completed so far with the 4 deg x 5 deg, 9-level GLAS Fourth Order Model indicate that the high-latitude filter currently in operation has only a minimal effect on its forecasting skill. In one case, however, the use of a pressure gradient filter significantly improved the forecast. Three-day forecasts with the pressure gradient and operational filters are compared, as are 5-day forecasts with no filter.

  15. The GISS sounding temperature impact test

    NASA Technical Reports Server (NTRS)

    Halem, M.; Ghil, M.; Atlas, R.; Susskind, J.; Quirk, W. J.

    1978-01-01

    The impact of DST 5 and DST 6 satellite sounding data on mid-range forecasting was studied. The GISS temperature sounding technique, the GISS time-continuous four-dimensional assimilation procedure based on optimal statistical analysis, the GISS forecast model, and the verification techniques developed, including the impact on local precipitation forecasts, are described. It is found that the impact of sounding data was substantial and beneficial for the winter test period, Jan. 29 - Feb. 21, 1976. Forecasts started from initial states obtained with the aid of satellite data showed a mean improvement of about 4 points in the 48- and 72-hour S1 scores as verified over North America and Europe. This corresponds to an 8- to 12-hour improvement in forecast range at 48 hours. An automated local precipitation forecast model applied to 128 cities in the United States showed on average a 15% improvement when satellite data were used for the numerical forecasts. The improvement was 75% in the Midwest.

  16. When idols look into the future: fair treatment modulates the affective forecasting error in talent show candidates.

    PubMed

    Feys, Marjolein; Anseel, Frederik

    2015-03-01

    People's affective forecasts are often inaccurate because they tend to overestimate how they will feel after an event. As life decisions are often based on affective forecasts, it is crucial to find ways to manage forecasting errors. We examined the impact of a fair treatment on forecasting errors in candidates in a Belgian reality TV talent show. We found that perceptions of fair treatment increased the forecasting error for losers (a negative audition decision) but decreased it for winners (a positive audition decision). For winners, this effect was even more pronounced when candidates were highly invested in their self-view as a future pop idol whereas for losers, the effect was more pronounced when importance was low. The results in this study point to a potential paradox between maximizing happiness and decreasing forecasting errors. A fair treatment increased the forecasting error for losers, but actually made them happier. © 2014 The British Psychological Society.

  17. Estimating the snowfall limit in alpine and pre-alpine valleys: A local evaluation of operational approaches

    NASA Astrophysics Data System (ADS)

    Fehlmann, Michael; Gascón, Estíbaliz; Rohrer, Mario; Schwarb, Manfred; Stoffel, Markus

    2018-05-01

    The snowfall limit has important implications for different hazardous processes associated with prolonged or heavy precipitation such as flash floods, rain-on-snow events and freezing precipitation. To increase preparedness and to reduce risk in such situations, early warning systems are frequently used to monitor and predict precipitation events at different temporal and spatial scales. However, in alpine and pre-alpine valleys, the estimation of the snowfall limit remains rather challenging. In this study, we characterize uncertainties related to snowfall limit for different lead times based on local measurements of a vertically pointing micro rain radar (MRR) and a disdrometer in the Zulg valley, Switzerland. Regarding the monitoring, we show that the interpolation of surface temperatures tends to overestimate the altitude of the snowfall limit and can thus lead to highly uncertain estimates of liquid precipitation in the catchment. This bias is much smaller in the Integrated Nowcasting through Comprehensive Analysis (INCA) system, which integrates surface station and remotely sensed data as well as outputs of a numerical weather prediction model. To reduce systematic error, we perform a bias correction based on local MRR measurements and thereby demonstrate the added value of such measurements for the estimation of liquid precipitation in the catchment. Regarding the nowcasting, we show that the INCA system provides good estimates up to 6 h ahead and is thus considered promising for operational hydrological applications. Finally, we explore the medium-range forecasting of precipitation type, especially with respect to rain-on-snow events. We show for a selected case study that the probability for a certain precipitation type in an ensemble-based forecast is more persistent than the respective type in the high-resolution forecast (HRES) of the European Centre for Medium Range Weather Forecasts Integrated Forecasting System (ECMWF IFS). In this case study, the ensemble-based forecast could be used to anticipate such an event up to 7-8 days ahead, whereas the use of the HRES is limited to a lead time of 4-5 days. For the different lead times investigated, we point out possibilities of considering uncertainties in snowfall limit and precipitation type estimates so as to increase preparedness to risk situations.

  18. An Evaluation of the NOAA Climate Forecast System Subseasonal Forecasts

    NASA Astrophysics Data System (ADS)

    Mass, C.; Weber, N.

    2016-12-01

    This talk will describe a multi-year evaluation of the 1-5 week forecasts of the NOAA Climate Forecasting System (CFS) over the globe, North America, and the western U.S. Forecasts are evaluated for both specific times and for a variety of time-averaging periods. Initial results show a loss of predictability at approximately three weeks, with sea surface temperature retaining predictability longer than atmospheric variables. It is shown that a major CFS problem is an inability to realistically simulate propagating convection in the tropics, with substantial implications for midlatitude teleconnections and subseasonal predictability. The inability of CFS to deal with tropical convection will be discussed in connection with the prediction of extreme climatic events over the midlatitudes.

  19. Metrics for Evaluating the Accuracy of Solar Power Forecasting: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, J.; Hodge, B. M.; Florita, A.

    2013-10-01

    Forecasting solar energy generation is a challenging task due to the variety of solar power systems and weather regimes encountered. Forecast inaccuracies can result in substantial economic losses and power system reliability issues. This paper presents a suite of generally applicable and value-based metrics for solar forecasting for a comprehensive set of scenarios (i.e., different time horizons, geographic locations, applications, etc.). In addition, a comprehensive framework is developed to analyze the sensitivity of the proposed metrics to three types of solar forecasting improvements using a design of experiments methodology, in conjunction with response surface and sensitivity analysis methods. The results show that the developed metrics can efficiently evaluate the quality of solar forecasts, and assess the economic and reliability impact of improved solar forecasting.

  20. Forecast skill impact of drifting buoys in the Southern Hemisphere

    NASA Technical Reports Server (NTRS)

    Kalnay, E.; Atlas, R.; Baker, W.; Halem, M.

    1984-01-01

    Two analyses are performed to evaluate the effect of drift buoys and the FGGE's special observing system (SOS) on forecasting. The FGGE analysis utilizes all level II-b conventional and special data, and the Nosat analysis employs only surface and conventional upper air data. Twelve five-day forecasts are produced from these data. An additional experiment utilizing the FGGE data base minus buoys data, and the Nosat data base including buoys data is being conducted. The forecasts are compared and synoptic evaluation of the effect of buoys data is described. The results reveal that the FGGE data base with the SOS significantly improves forecasting in the Southern Hemisphere and the loss of buoys data does not have a great effect on forecasting. The Nosat data has less impact on forecasting; however, the addition of buoys data provides an improvement in forecast skills.

  1. Improving Seasonal Crop Monitoring and Forecasting for Soybean and Corn in Iowa

    NASA Astrophysics Data System (ADS)

    Togliatti, K.; Archontoulis, S.; Dietzel, R.; VanLoocke, A.

    2016-12-01

    Accurately forecasting crop yield in advance of harvest could greatly benefit farmers; however, few evaluations have been conducted to determine the effectiveness of forecasting methods. We tested one such method that used short-term weather forecasts from the Weather Research and Forecasting (WRF) model to predict in-season weather variables, such as maximum and minimum temperature, precipitation, and radiation, at 4 different forecast lengths (2 weeks, 1 week, 3 days, and 0 days). These forecasted weather data, along with current and historic (previous 35 years) data from the Iowa Environmental Mesonet, were combined to drive Agricultural Production Systems sIMulator (APSIM) simulations to forecast soybean and corn yields in 2015 and 2016. The goal of this study is to find the forecast length that reduces the variability of simulated yield predictions while also increasing the accuracy of those predictions. APSIM simulations of crop variables were evaluated against bi-weekly field measurements of phenology, biomass, and leaf area index from early and late planted soybean plots located at the Agricultural Engineering and Agronomy Research Farm in central Iowa as well as the Northwest Research Farm in northwestern Iowa. WRF model predictions were evaluated against observed weather data collected at the experimental fields. Maximum temperature was the most accurately predicted variable, followed by minimum temperature and radiation; precipitation was least accurate according to RMSE values and the number of days that were forecasted within a 20% error of the observed weather. Our analysis indicated that for the majority of months in the growing season the 3-day forecast performed best, the 1-week forecast was second, and the 2-week forecast was least accurate. Preliminary results for yield indicate that the 2-week forecast is the least variable of the forecast lengths but also the least accurate; the 3-day and 1-week forecasts have better accuracy, with an increase in variability.
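
    The two weather-evaluation checks used above, RMSE and the share of days forecast within 20% of the observed value, can be written compactly; the temperatures below are invented.

        # Sketch of the two forecast checks described above: RMSE and the share of days
        # whose forecast falls within 20% of the observed value. Data are illustrative.
        import numpy as np

        def rmse(obs, fcst):
            obs, fcst = np.asarray(obs, float), np.asarray(fcst, float)
            return np.sqrt(np.mean((fcst - obs) ** 2))

        def fraction_within(obs, fcst, tol=0.20):
            """Fraction of days with |forecast - observed| <= tol * |observed|."""
            obs, fcst = np.asarray(obs, float), np.asarray(fcst, float)
            return np.mean(np.abs(fcst - obs) <= tol * np.abs(obs))

        tmax_obs  = [29.5, 31.0, 27.8, 30.2, 33.1]   # deg C, illustrative
        tmax_fcst = [30.1, 30.2, 29.0, 29.8, 35.5]
        print(f"RMSE = {rmse(tmax_obs, tmax_fcst):.2f} C, within 20%: {fraction_within(tmax_obs, tmax_fcst):.0%}")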

  2. Short-term Drought Prediction in India.

    NASA Astrophysics Data System (ADS)

    Shah, R.; Mishra, V.

    2014-12-01

    Medium-range soil moisture drought forecasts help decision making in agriculture and water resources management. Part of the skill in medium-range drought forecasts comes from precipitation, so proper evaluation and correction of precipitation forecasts may improve drought predictions. Here, we evaluate the skill of ensemble-mean precipitation forecasts from the Global Ensemble Forecast System (GEFS) for medium-range drought predictions over India. The climatological mean (CLIM) of historic data (OBS) is used as the reference forecast to evaluate the GEFS precipitation forecasts. The analysis was conducted for forecasts initiated on the 1st and 15th of each month, for leads up to 7 days. Correlation and RMSE were used to estimate skill scores of accumulated GEFS precipitation forecasts from 1- to 7-day leads. Volumetric indices based on the 2x2 contingency table were used to check the missed and falsely predicted volumes of daily precipitation from GEFS in different regions and at different thresholds. GEFS showed an improvement in correlation over CLIM of 0.44 during the monsoon season and 0.55 during the winter season, and lower RMSE than CLIM. The ratio of RMSE in GEFS to that in CLIM is 0.82 and 0.4 (perfect skill is zero) during the monsoon and winter seasons, respectively. We finally used the corrected GEFS forecasts to drive the Variable Infiltration Capacity (VIC) model, which was used to develop short-term forecasts of hydrologic and agricultural (soil moisture) droughts in India.
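
    The contingency-table evaluation mentioned above can be sketched with standard count-based categorical indices; the volumetric variants referred to in the abstract weight each event by precipitation amount rather than counting events. The data and threshold below are synthetic.

        # Sketch of a 2x2 contingency evaluation at a daily precipitation threshold.
        # Count-based indices are shown; volumetric variants weight events by amount.
        import numpy as np

        def contingency_scores(obs, fcst, threshold):
            obs, fcst = np.asarray(obs, float), np.asarray(fcst, float)
            hits         = np.sum((fcst >= threshold) & (obs >= threshold))
            false_alarms = np.sum((fcst >= threshold) & (obs <  threshold))
            misses       = np.sum((fcst <  threshold) & (obs >= threshold))
            pod = hits / (hits + misses)                 # probability of detection
            far = false_alarms / (hits + false_alarms)   # false alarm ratio
            csi = hits / (hits + misses + false_alarms)  # critical success index
            return pod, far, csi

        rng = np.random.default_rng(11)
        obs  = rng.gamma(0.6, 8.0, size=365)             # daily precipitation, mm (synthetic)
        fcst = np.clip(obs + rng.normal(0, 4, 365), 0, None)
        pod, far, csi = contingency_scores(obs, fcst, threshold=10.0)
        print(f"POD={pod:.2f}, FAR={far:.2f}, CSI={csi:.2f}")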

  3. What If We Had A Magnetograph at Lagrangian L5?

    NASA Technical Reports Server (NTRS)

    Pevtsov, Alexei A.; Bertello, Luca; MacNeice, Peter; Petrie, Gordon

    2016-01-01

    Synoptic Carrington charts of magnetic field are routinely used as an input for modelings of solar wind and other aspects of space weather forecast. However, these maps are constructed using only the observations from the solar hemisphere facing Earth. The evolution of magnetic flux on the "farside" of the Sun, which may affect the topology of coronal field in the "nearside," is largely ignored. It is commonly accepted that placing a magnetograph in Lagrangian L5 point would improve the space weather forecast. However, the quantitative estimates of anticipated improvements have been lacking. We use longitudinal magnetograms from the Synoptic Optical Long-term Investigations of the Sun (SOLIS) to investigate how adding data from L5 point would affect the outcome of two major models used in space weather forecast.

  4. HESS Opinions "On forecast (in)consistency in a hydro-meteorological chain: curse or blessing?"

    NASA Astrophysics Data System (ADS)

    Pappenberger, F.; Cloke, H. L.; Persson, A.; Demeritt, D.

    2011-01-01

    Flood forecasting increasingly relies on Numerical Weather Prediction (NWP) forecasts to achieve longer lead times (see Cloke et al., 2009; Cloke and Pappenberger, 2009). One of the key difficulties that is emerging in constructing a decision framework for these flood forecasts is when consecutive forecasts are different, leading to different conclusions regarding the issuing of forecasts, and hence inconsistent. In this opinion paper we explore some of the issues surrounding such forecast inconsistency (also known as "jumpiness", "turning points", "continuity" or number of "swings"; Zoster et al., 2009; Mills and Pepper, 1999; Lashley et al., 2008). We begin by defining what forecast inconsistency is; why forecasts might be inconsistent; how we should analyse it; what we should do about it; how we should communicate it and whether it is a totally undesirable property. The property of consistency is increasingly emerging as a hot topic in many forecasting environments (for a limited discussion on NWP inconsistency see Persson, 2011). However, in this opinion paper we restrict the discussion to a hydro-meteorological forecasting chain in which river discharge forecasts are produced using inputs from NWP. In this area of research (in)consistency is receiving recent interest and application (see e.g., Bartholmes et al., 2008; Pappenberger et al., 2011).

  5. Forecasting Tools Point to Fishing Hotspots

    NASA Technical Reports Server (NTRS)

    2009-01-01

    Private weather forecaster WorldWinds Inc. of Slidell, Louisiana, has employed satellite-gathered oceanic data from Marshall Space Flight Center to create a service that is every fishing enthusiast's dream. The company's FishBytes system uses information about sea surface temperature and chlorophyll levels to forecast favorable conditions for certain fish populations. Transmitting the data to satellite radio subscribers, FishBytes provides maps that guide anglers to the areas where they are most likely to make their favorite catch.

  6. Sources of Wind Variability at a Single Station in Complex Terrain During Tropical Cyclone Passage

    DTIC Science & Technology

    2013-12-01

    ... Mesoscale Prediction System; CPA, closest point of approach; ET, extratropical transition; FNMOC, Fleet Numerical Meteorology and Oceanography Center ... forecasts. However, the TC forecast tracks and warnings they issue necessarily focus on the large-scale structure of the storm, and are not ... winds at one station. Also, this technique is a storm-centered forecast and even if the grid spacing is on the order of one kilometer, it is unlikely ...

  7. Forecasting Global Rainfall for Points Using ECMWF's Global Ensemble and Its Applications in Flood Forecasting

    NASA Astrophysics Data System (ADS)

    Pillosu, F. M.; Hewson, T.; Mazzetti, C.

    2017-12-01

    Prediction of local extreme rainfall has historically been the remit of nowcasting and high-resolution limited area modelling, which represent only limited areas, may not be spatially accurate, give reasonable results only for limited lead times (<2 days) and become prohibitively expensive at global scale. ECMWF/EFAS/GLOFAS have developed a novel, cost-effective and physically based statistical post-processing software ("ecPoint-Rainfall", ecPR; operational in 2017) that uses ECMWF Ensemble (ENS) output to deliver global probabilistic rainfall forecasts for points up to day 10. Firstly, ecPR applies a new notion of "remote calibration", which 1) allows us to replicate a multi-centennial training period using only one year of data, and 2) provides forecasts for anywhere in the world. Secondly, the software applies an understanding of how different rainfall generation mechanisms lead to different degrees of sub-grid variability in rainfall totals, and of where biases in the model can be improved upon. A long-term verification has shown that the post-processed rainfall has better reliability and resolution at every lead time compared with ENS, and for large totals, ecPR outputs have the same skill at day 5 that the raw ENS has at day 1 (ROC area metric). ecPR could be used as input for hydrological models if its probabilistic output is adapted to their input requirements. Indeed, ecPR does not provide information on where the highest total is likely to occur inside the gridbox, nor on the spatial distribution of rainfall values nearby. "Scenario forecasts" could be a solution. They are derived by locating the rainfall peak in sensitive positions (e.g. urban areas), and then redistributing the remaining quantities in the gridbox by modifying traditional spatial correlation characterization methodologies (e.g. variogram analysis) in order to take account, for instance, of the type of rainfall forecast (stratiform, convective). Such an approach could be a turning point in the field of medium-range global real-time riverine flood forecasts. This presentation will illustrate for ecPR 1) system calibration, 2) operational implementation, 3) long-term verification, 4) future developments, and 5) early ideas for the application of ecPR outputs in hydrological models.

  8. A new short-term forecasting model for the total electron content storm time disturbances

    NASA Astrophysics Data System (ADS)

    Tsagouri, Ioanna; Koutroumbas, Konstantinos; Elias, Panagiotis

    2018-06-01

    This paper aims to introduce a new model for the short-term forecast of the vertical Total Electron Content (vTEC). The basic idea of the proposed model lies on the concept of the Solar Wind driven autoregressive model for Ionospheric short-term Forecast (SWIF). In its original version, the model is operationally implemented in the DIAS system (http://dias.space.noa.gr) and provides alerts and warnings for upcoming ionospheric disturbances, as well as single site and regional forecasts of the foF2 critical frequency over Europe up to 24 h in advance. The forecasts are driven by the real time assessment of the solar wind conditions at ACE location. The comparative analysis of the variations in foF2 and vTEC during eleven geomagnetic storm events that occurred in the present solar cycle 24 reveals similarities but also differences in the storm-time response of the two characteristics with respect to the local time and the latitude of the observation point. Since the aforementioned dependences drive the storm-time forecasts of the SWIF model, the results obtained here support the upgrade of the SWIF's modeling technique in forecasting the storm-time vTEC variation from its onset to full development and recovery. According to the proposed approach, the vTEC storm-time response can be forecasted from 1 to 12-13 h before its onset, depending on the local time of the observation point at storm onset at L1. Preliminary results on the assessment of the performance of the proposed model and further considerations on its potential implementation in operational mode are also discussed.

  9. Statistical earthquake focal mechanism forecasts

    NASA Astrophysics Data System (ADS)

    Kagan, Yan Y.; Jackson, David D.

    2014-04-01

    Forecasts of the focal mechanisms of future shallow (depth 0-70 km) earthquakes are important for seismic hazard estimates and Coulomb stress, and other models of earthquake occurrence. Here we report on a high-resolution global forecast of earthquake rate density as a function of location, magnitude and focal mechanism. In previous publications we reported forecasts of 0.5° spatial resolution, covering the latitude range from -75° to +75°, based on the Global Central Moment Tensor earthquake catalogue. In the new forecasts we have improved the spatial resolution to 0.1° and the latitude range from pole to pole. Our focal mechanism estimates require distance-weighted combinations of observed focal mechanisms within 1000 km of each gridpoint. Simultaneously, we calculate an average rotation angle between the forecasted mechanism and all the surrounding mechanisms, using the method of Kagan & Jackson proposed in 1994. This average angle reveals the level of tectonic complexity of a region and indicates the accuracy of the prediction. The procedure becomes problematical where longitude lines are not approximately parallel, and where shallow earthquakes are so sparse that an adequate sample spans very large distances. North or south of 75°, the azimuths of points 1000 km away may vary by about 35°. We solved this problem by calculating focal mechanisms on a plane tangent to the Earth's surface at each forecast point, correcting for the rotation of the longitude lines at the locations of earthquakes included in the averaging. The corrections are negligible between -30° and +30° latitude, but outside that band uncorrected rotations can be significantly off. Improved forecasts at 0.5° and 0.1° resolution are posted at http://eq.ess.ucla.edu/kagan/glob_gcmt_index.html.

  10. Development of an Adaptable Display and Diagnostic System for the Evaluation of Tropical Cyclone Forecasts

    NASA Astrophysics Data System (ADS)

    Kucera, P. A.; Burek, T.; Halley-Gotway, J.

    2015-12-01

    NCAR's Joint Numerical Testbed Program (JNTP) focuses on the evaluation of experimental forecasts of tropical cyclones (TCs) with the goal of developing new research tools and diagnostic evaluation methods that can be transitioned to operations. Recent activities include the development of new TC forecast verification methods and the development of an adaptable TC display and diagnostic system. The next generation display and diagnostic system is being developed to support the evaluation needs of the U.S. National Hurricane Center (NHC) and the broader TC research community. The new hurricane display and diagnostic capabilities allow forecasters and research scientists to more deeply examine the performance of operational and experimental models. The system is built upon modern and flexible technology that includes OpenLayers Mapping tools that are platform independent. The forecast track and intensity, along with associated observed track information, are stored in an efficient MySQL database. The system provides an easy-to-use interactive display and diagnostic tools to examine forecast tracks stratified by intensity. Consensus forecasts can be computed and displayed interactively. The system is designed to display information for both real-time and historical TCs. The display configurations are easily adaptable to meet end-user preferences. Ongoing enhancements include improving capabilities for stratification and evaluation of historical best tracks, development and implementation of additional methods to stratify and compute consensus hurricane track and intensity forecasts, and improved graphical display tools. The display is also being enhanced to incorporate gridded forecast, satellite, and sea surface temperature fields. The presentation will provide an overview of the display and diagnostic system development and a demonstration of the current capabilities.

  11. Predicting and downscaling ENSO impacts on intraseasonal precipitation statistics in California: The 1997/98 event

    USGS Publications Warehouse

    Gershunov, A.; Barnett, T.P.; Cayan, D.R.; Tubbs, T.; Goddard, L.

    2000-01-01

    Three long-range forecasting methods have been evaluated for prediction and downscaling of seasonal and intraseasonal precipitation statistics in California. Full-statistical, hybrid dynamical-statistical and full-dynamical approaches have been used to forecast El Niño-Southern Oscillation (ENSO)-related total precipitation, daily precipitation frequency, and average intensity anomalies during the January-March season. For El Niño winters, the hybrid approach emerges as the best performer, while La Niña forecasting skill is poor. The full-statistical forecasting method features reasonable forecasting skill for both La Niña and El Niño winters. The performance of the full-dynamical approach could not be evaluated as rigorously as that of the other two forecasting schemes. Although the full-dynamical forecasting approach is expected to outperform simpler forecasting schemes in the long run, evidence is presented to conclude that, at present, the full-dynamical forecasting approach is the least viable of the three, at least in California. The authors suggest that operational forecasting of any intraseasonal temperature, precipitation, or streamflow statistic derivable from the available records is possible now for ENSO-extreme years.

  12. Intermittent Demand Forecasting in a Tertiary Pediatric Intensive Care Unit.

    PubMed

    Cheng, Chen-Yang; Chiang, Kuo-Liang; Chen, Meng-Yin

    2016-10-01

    Forecasts of the demand for medical supplies both directly and indirectly affect the operating costs and the quality of the care provided by health care institutions. Specifically, overestimating demand induces an inventory surplus, whereas underestimating demand possibly compromises patient safety. Uncertainty in forecasting the consumption of medical supplies generates intermittent demand events. The intermittent demand patterns for medical supplies are generally classified as lumpy, erratic, smooth, and slow-moving demand. This study was conducted to advance a tertiary pediatric intensive care unit's efforts to achieve a high level of accuracy in forecasting the demand for medical supplies. To this end, several demand forecasting methods were compared in terms of forecast accuracy. The results confirm that applying Croston's method combined with a single exponential smoothing method yields the most accurate results for forecasting lumpy, erratic, and slow-moving demand, whereas the Simple Moving Average (SMA) method is the most suitable for forecasting smooth demand. In addition, when the classification of demand consumption patterns was combined with the demand forecasting models, the forecasting errors were minimized, indicating that this classification framework can play a role in improving patient safety and reducing inventory management costs in health care institutions.
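
    A minimal sketch of Croston's method as commonly described (not the hospital's implementation): exponentially smooth the nonzero demand sizes and the intervals between them separately, and forecast the per-period demand rate as their ratio. The demand series and smoothing constant are invented.

        # Minimal sketch of Croston's method for intermittent demand.
        import numpy as np

        def croston(demand, alpha=0.1):
            """Return the per-period demand forecast after the last observation."""
            demand = np.asarray(demand, float)
            nonzero = np.flatnonzero(demand)
            if nonzero.size == 0:
                return 0.0
            # Initialise with the first nonzero demand and its position (first interval).
            z = demand[nonzero[0]]          # smoothed demand size
            p = float(nonzero[0] + 1)       # smoothed inter-demand interval
            q = 1                           # periods since the last demand
            for d in demand[nonzero[0] + 1:]:
                if d > 0:
                    z = alpha * d + (1 - alpha) * z
                    p = alpha * q + (1 - alpha) * p
                    q = 1
                else:
                    q += 1
            return z / p

        weekly_usage = [0, 0, 3, 0, 0, 0, 5, 0, 2, 0, 0, 4]   # units of a supply item, invented
        print(f"forecast demand per period: {croston(weekly_usage):.2f}")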

  13. Improving Radar QPE's in Complex Terrain for Improved Flash Flood Monitoring and Prediction

    NASA Astrophysics Data System (ADS)

    Cifelli, R.; Streubel, D. P.; Reynolds, D.

    2010-12-01

    Quantitative Precipitation Estimation (QPE) is extremely challenging in regions of complex terrain due to a combination of issues related to sampling. In particular, radar beams are often blocked or scan above the liquid precipitation zone while rain gauge density is often too low to properly characterize the spatial distribution of precipitation. Due to poor radar coverage, rain gauge networks are used by the National Weather Service (NWS) River Forecast Centers as the principal source for QPE across the western U.S. The California Nevada River Forecast Center (CNRFC) uses point rainfall measurements and historical rainfall runoff relationships to derive river stage forecasts. The point measurements are interpolated to a 4 km grid using Parameter-elevation Regressions on Independent Slopes Model (PRISM) data to develop a gridded 6-hour QPE product (hereafter referred to as RFC QPE). Local forecast offices can utilize the Multi-sensor Precipitation Estimator (MPE) software to improve local QPE’s and thus local flash flood monitoring and prediction. MPE uses radar and rain gauge data to develop a combined QPE product at 1-hour intervals. The rain gauge information is used to bias correct the radar precipitation estimates so that, in situations where the rain gauge density and radar coverage are adequate, MPE can take advantage of the spatial coverage of the radar and the “ground truth” of the rain gauges to provide an accurate QPE. The MPE 1-hour QPE analysis should provide better spatial and temporal resolution for short duration hydrologic events as compared to 6-hour analyses. These hourly QPEs are then used to correct radar derived rain rates used by the Flash Flood Monitoring and Prediction (FFMP) software in forecast offices for issuance of flash flood warnings. Although widely used by forecasters across the eastern U.S., MPE is not used extensively by the NWS in the west. Part of the reason for the lack of use of MPE across the west is that there has been little quantitative evaluation of MPE performance in this region compared to simply using a gage only analysis. In this study, an evaluation of MPE and RFC QPE is performed in a portion of the CNRFC (including the Russian and American River basins) using an independent set of rain gauge data from the Hydrometeorology Testbed (HMT). Data from a precipitation event in January 2010 are used to establish the comparison methodology and for preliminary evaluation. For this multi-day event, it is shown that the RFC QPE shows generally better agreement with the HMT gauges compared to MPE in terms of storm total precipitation. However, the bias in RFC:MPE is shown to vary as a function of terrain and time. Moreover, for a subset of the HMT gauges in Sonoma county, the 1-hour MPE precipitation totals are found to be generally well correlated to the HMT gauge totals with correlation coefficients ranging from 0.6-0.9. For the Sonoma county gauges, the MPE product generally underestimates rainfall compared to HMT, probably as a consequence of low-level, orographically forced precipitation that was not well captured by the MPE radar analysis.
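
    The gauge-based correction of radar estimates described above is often implemented as a mean-field bias adjustment; the sketch below shows that general idea with invented accumulations and is not the MPE algorithm itself.

        # Hedged sketch of gauge-corrected radar QPE: compute a mean-field bias from
        # gauge/radar pairs at gauge locations and apply it to the radar grid.
        import numpy as np

        def mean_field_bias(gauge_totals, radar_at_gauges, min_total=1.0):
            """Ratio of summed gauge to summed collocated radar accumulations."""
            g = np.asarray(gauge_totals, float)
            r = np.asarray(radar_at_gauges, float)
            if r.sum() < min_total:          # avoid dividing by a near-zero radar total
                return 1.0
            return g.sum() / r.sum()

        gauge_1h = np.array([4.2, 6.8, 3.1, 5.5])             # mm, gauge accumulations (invented)
        radar_1h_at_gauges = np.array([2.9, 4.6, 2.4, 3.8])   # mm, radar estimate at those points
        radar_grid = np.full((4, 4), 3.0)                     # toy 1-hour radar QPE grid, mm

        bias = mean_field_bias(gauge_1h, radar_1h_at_gauges)
        corrected_grid = radar_grid * bias
        print(f"mean-field bias = {bias:.2f}; corrected grid mean = {corrected_grid.mean():.2f} mm")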

  14. Central Pacific Hurricane Center - Honolulu, Hawai`i

    Science.gov Websites


  15. EVALUATION OF SEVERAL PM 2.5 FORECAST MODELS USING DATA COLLECTED DURING THE ICARTT/NEAQS 2004 FIELD STUDY

    EPA Science Inventory

    Real-time forecasts of PM2.5 aerosol mass from seven air-quality forecast models (AQFMs) are statistically evaluated against observations collected in the northeastern U.S. and southeastern Canada from two surface networks and aircraft data during the summer of 2004 IC...

  16. The Texas Children's Hospital immunization forecaster: conceptualization to implementation.

    PubMed

    Cunningham, Rachel M; Sahni, Leila C; Kerr, G Brady; King, Laura L; Bunker, Nathan A; Boom, Julie A

    2014-12-01

    Immunization forecasting systems evaluate patient vaccination histories and recommend the dates and vaccines that should be administered. We described the conceptualization, development, implementation, and distribution of a novel immunization forecaster, the Texas Children's Hospital (TCH) Forecaster. In 2007, TCH convened an internal expert team that included a pediatrician, immunization nurse, software engineer, and immunization subject matter experts to develop the TCH Forecaster. Our team developed the design of the model, wrote the software, populated the Excel tables, integrated the software, and tested the Forecaster. We created a table of rules that contained each vaccine's recommendations, minimum ages and intervals, and contraindications, which served as the basis for the TCH Forecaster. We created 15 vaccine tables that incorporated 79 unique dose states and 84 vaccine types to operationalize the entire United States recommended immunization schedule. The TCH Forecaster was implemented throughout the TCH system, the Indian Health Service, and the Virginia Department of Health. The TCH Forecast Tester is currently being used nationally. Immunization forecasting systems might positively affect adherence to vaccine recommendations. Efforts to support health care provider utilization of immunization forecasting systems and to evaluate their impact on patient care are needed.
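
    The rules-table design described above can be illustrated with a tiny, hypothetical example: each row holds a dose number, a minimum age, and a minimum interval, and the forecaster walks the table against the recorded doses. The vaccine name, ages, and intervals below are placeholders, not the recommended immunization schedule or the TCH tables.

        # Illustrative sketch of a rules-table-driven forecaster. The vaccine name,
        # ages and intervals are hypothetical, NOT an actual schedule.
        from datetime import date, timedelta

        RULES = {  # hypothetical three-dose series
            "VaccineX": [
                {"dose": 1, "min_age_days": 42,  "min_interval_days": 0},
                {"dose": 2, "min_age_days": 70,  "min_interval_days": 28},
                {"dose": 3, "min_age_days": 180, "min_interval_days": 56},
            ]
        }

        def next_dose_due(vaccine, birth_date, doses_given):
            """Return (dose number, earliest valid date) or None if the series is complete."""
            rules = RULES[vaccine]
            n = len(doses_given)
            if n >= len(rules):
                return None
            rule = rules[n]
            earliest = birth_date + timedelta(days=rule["min_age_days"])
            if doses_given:
                earliest = max(earliest, doses_given[-1] + timedelta(days=rule["min_interval_days"]))
            return rule["dose"], earliest

        print(next_dose_due("VaccineX", date(2024, 1, 15), [date(2024, 3, 1)]))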

  17. Diagnostic Evaluation of Nmme Precipitation and Temperature Forecasts for the Continental United States

    NASA Astrophysics Data System (ADS)

    Karlovits, G. S.; Villarini, G.; Bradley, A.; Vecchi, G. A.

    2014-12-01

    Forecasts of seasonal precipitation and temperature can provide information in advance of potentially costly disruptions caused by flood and drought conditions. The consequences of these adverse hydrometeorological conditions may be mitigated through informed planning and response, given useful and skillful forecasts of these conditions. However, the potential value and applicability of these forecasts is unavoidably linked to their forecast quality. In this work we evaluate the skill of four global circulation models (GCMs) that are part of the North American Multi-Model Ensemble (NMME) project in forecasting seasonal precipitation and temperature over the continental United States. The GCMs we consider are the Geophysical Fluid Dynamics Laboratory (GFDL) CM2.1, the NASA Global Modeling and Assimilation Office (NASA-GMAO) GEOS-5, the Center for Ocean-Land-Atmosphere Studies - Rosenstiel School of Marine & Atmospheric Science (COLA-RSMAS) CCSM3, and the Canadian Centre for Climate Modeling and Analysis (CCCma) CanCM4. These models are available at 1-degree spatial and monthly temporal resolution, with forecast lead times of at least nine months and up to one year. These model ensembles are compared against gridded monthly temperature and precipitation data created by the PRISM Climate Group, which represent the reference observation dataset in this work. Aspects of forecast quality are quantified using a diagnostic skill score decomposition that allows the evaluation of the potential skill and the conditional and unconditional biases associated with these forecasts. The evaluation of the decomposed GCM forecast skill over the continental United States, by season and by lead time, allows for a better understanding of the utility of these models for flood and drought predictions. Moreover, it also represents a diagnostic tool that could provide model developers feedback about strengths and weaknesses of their models.
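
    One common form of the skill score decomposition referred to above is Murphy's (1988) decomposition of the MSE-based skill score into potential skill and conditional and unconditional bias terms; whether the study uses exactly this form is an assumption. For forecasts f and observations o with means f-bar and o-bar, standard deviations s_f and s_o, and correlation r_fo:

        % MSE skill score relative to climatology, decomposed (Murphy, 1988)
        \[
          SS = 1 - \frac{\mathrm{MSE}(f,o)}{\mathrm{MSE}(\bar{o},o)}
             = \underbrace{r_{fo}^{2}}_{\text{potential skill}}
             - \underbrace{\left( r_{fo} - \frac{s_f}{s_o} \right)^{2}}_{\text{conditional bias}}
             - \underbrace{\left( \frac{\bar{f} - \bar{o}}{s_o} \right)^{2}}_{\text{unconditional bias}}
        \]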

  18. Empirical prediction intervals improve energy forecasting

    PubMed Central

    Kaack, Lynn H.; Apt, Jay; Morgan, M. Granger; McSharry, Patrick

    2017-01-01

    Hundreds of organizations and analysts use energy projections, such as those contained in the US Energy Information Administration (EIA)’s Annual Energy Outlook (AEO), for investment and policy decisions. Retrospective analyses of past AEO projections have shown that observed values can differ from the projection by several hundred percent, and thus a thorough treatment of uncertainty is essential. We evaluate the out-of-sample forecasting performance of several empirical density forecasting methods, using the continuous ranked probability score (CRPS). The analysis confirms that a Gaussian density, estimated on past forecasting errors, gives comparatively accurate uncertainty estimates over a variety of energy quantities in the AEO, in particular outperforming scenario projections provided in the AEO. We report probabilistic uncertainties for 18 core quantities of the AEO 2016 projections. Our work frames how to produce, evaluate, and rank probabilistic forecasts in this setting. We propose a log transformation of forecast errors for price projections and a modified nonparametric empirical density forecasting method. Our findings give guidance on how to evaluate and communicate uncertainty in future energy outlooks. PMID:28760997
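
    A hedged sketch of the general approach: fit a Gaussian to past projection errors and score the resulting density with the closed-form CRPS for a normal distribution. The error values and projected quantity below are invented, not AEO retrospectives.

        # Sketch: estimate a Gaussian density from past projection errors and score it
        # with the closed-form CRPS for a normal distribution. Numbers are invented.
        import numpy as np
        from scipy.stats import norm

        past_errors = np.array([-0.12, 0.05, 0.20, -0.30, 0.08, 0.15, -0.02])  # relative errors
        mu, sigma = past_errors.mean(), past_errors.std(ddof=1)

        def crps_gaussian(y, mu, sigma):
            """Closed-form CRPS of N(mu, sigma^2) against a realized value y."""
            z = (y - mu) / sigma
            return sigma * (z * (2 * norm.cdf(z) - 1) + 2 * norm.pdf(z) - 1 / np.sqrt(np.pi))

        point_projection = 100.0                        # some projected quantity
        density_mean = point_projection * (1 + mu)      # shift by the mean historical error
        density_sd = point_projection * sigma
        realized = 104.0                                # later-observed value, invented
        print(f"CRPS = {crps_gaussian(realized, density_mean, density_sd):.2f}")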

  19. Evaluating the performance of infectious disease forecasts: A comparison of climate-driven and seasonal dengue forecasts for Mexico.

    PubMed

    Johansson, Michael A; Reich, Nicholas G; Hota, Aditi; Brownstein, John S; Santillana, Mauricio

    2016-09-26

    Dengue viruses, which infect millions of people per year worldwide, cause large epidemics that strain healthcare systems. Despite diverse efforts to develop forecasting tools including autoregressive time series, climate-driven statistical, and mechanistic biological models, little work has been done to understand the contribution of different components to improved prediction. We developed a framework to assess and compare dengue forecasts produced from different types of models and evaluated the performance of seasonal autoregressive models with and without climate variables for forecasting dengue incidence in Mexico. Climate data did not significantly improve the predictive power of seasonal autoregressive models. Short-term and seasonal autocorrelation were key to improving short-term and long-term forecasts, respectively. Seasonal autoregressive models captured a substantial amount of dengue variability, but better models are needed to improve dengue forecasting. This framework contributes to the sparse literature of infectious disease prediction model evaluation, using state-of-the-art validation techniques such as out-of-sample testing and comparison to an appropriate reference model.

  20. Evaluating the performance of infectious disease forecasts: A comparison of climate-driven and seasonal dengue forecasts for Mexico

    PubMed Central

    Johansson, Michael A.; Reich, Nicholas G.; Hota, Aditi; Brownstein, John S.; Santillana, Mauricio

    2016-01-01

    Dengue viruses, which infect millions of people per year worldwide, cause large epidemics that strain healthcare systems. Despite diverse efforts to develop forecasting tools including autoregressive time series, climate-driven statistical, and mechanistic biological models, little work has been done to understand the contribution of different components to improved prediction. We developed a framework to assess and compare dengue forecasts produced from different types of models and evaluated the performance of seasonal autoregressive models with and without climate variables for forecasting dengue incidence in Mexico. Climate data did not significantly improve the predictive power of seasonal autoregressive models. Short-term and seasonal autocorrelation were key to improving short-term and long-term forecasts, respectively. Seasonal autoregressive models captured a substantial amount of dengue variability, but better models are needed to improve dengue forecasting. This framework contributes to the sparse literature of infectious disease prediction model evaluation, using state-of-the-art validation techniques such as out-of-sample testing and comparison to an appropriate reference model. PMID:27665707

  1. Regional Air Quality forecAST (RAQAST) Over the U.S

    NASA Astrophysics Data System (ADS)

    Yoshida, Y.; Choi, Y.; Zeng, T.; Wang, Y.

    2005-12-01

    A regional chemistry and transport modeling system is used to provide 48-hour forecasts of the concentrations of ozone and its precursors over the United States. The meteorological forecast is conducted using the NCAR/Penn State MM5 model. The regional chemistry and transport model simulates the sources, transport, chemistry, and deposition of 24 chemical tracers. The lateral and upper boundary conditions of trace gas concentrations are specified using the monthly mean output from the global GEOS-CHEM model. The initial and boundary conditions for meteorological fields are taken from the NOAA AVN forecast. The forecast has been operational since August 2003. Model simulations are evaluated using surface, aircraft, and satellite measurements in 'hindcast' mode. The next step is an automated forecast evaluation system.

  2. An evaluation of the impact of biomass burning smoke aerosol particles on near surface temperature forecasts

    NASA Astrophysics Data System (ADS)

    Zhang, J.; Reid, J. S.; Benedetti, A.; Christensen, M.; Marquis, J. W.

    2016-12-01

    Currently, with the improvements in aerosol forecast accuracy through aerosol data assimilation, the community is unavoidably facing a scientific question: is it worth the computational time to insert real-time aerosol analyses into numerical models for weather forecasts? In this study, by analyzing a significant biomass burning aerosol event that occurred in 2015 over the northern part of the central US, the impact of aerosol particles on near-surface temperature forecasts is evaluated. The aerosol direct surface cooling efficiency, which links surface temperature changes to aerosol loading, is derived from observation-based data for the first time. The potential of including real-time aerosol analyses in weather forecasting models for near-surface temperature forecasts is also investigated.

  3. Application research for 4D technology in flood forecasting and evaluation

    NASA Astrophysics Data System (ADS)

    Li, Ziwei; Liu, Yutong; Cao, Hongjie

    1998-08-01

    In order to monitor regions of China where disastrous floods occur frequently, satisfy the great need of provincial governments for high-accuracy monitoring and disaster-evaluation data, and improve the efficiency of disaster response, a method for flood forecasting and evaluation using satellite and aerial remotely sensed imagery and ground monitoring data was researched under the Ninth Five-Year National Key Technologies Programme. An effective and practicable flood forecasting and evaluation system was established, with DongTing Lake selected as the test site. Modern digital photogrammetry, remote sensing and GIS technology were used in this system, so that disastrous floods could be forecasted and losses evaluated based on a '4D' (DEM -- Digital Elevation Model, DOQ -- Digital OrthophotoQuads, DRG -- Digital Raster Graph, DTI -- Digital Thematic Information) disaster background database. The methods for gathering and establishing the '4D' disaster environment background database, the application of '4D' background data to flood forecasting and evaluation, and experimental results for the DongTing Lake test site are introduced in detail in this paper.

  4. Seasat data applications in ocean industries

    NASA Technical Reports Server (NTRS)

    Montgomery, D. R.

    1985-01-01

    It is pointed out that the world population expansion and resulting shortages of food, minerals, and fuel have focused additional attention on the world's oceans. In this context, aspects of weather prediction and the monitoring/prediction of long-range climatic anomalies become more important. In spite of technological advances, the commercial ocean industry and the naval forces suffer now from inadequate data and forecast products related to the oceans. The Seasat Program and the planned Navy-Remote Oceanographic Satellite System (N-ROSS) represent major contributions to improved observational coverage and the processing needed to achieve better forecasts. The Seasat Program was initiated to evaluate the effectiveness of the remote sensing of oceanographic phenomena from a satellite platform. Possible oceanographic satellite applications are presented in a table, and the impact of Seasat data on industry sectors is discussed. Attention is given to offshore oil development, deep-ocean mining, fishing, and marine transportation.

  5. A Bayesian modelling method for post-processing daily sub-seasonal to seasonal rainfall forecasts from global climate models and evaluation for 12 Australian catchments

    NASA Astrophysics Data System (ADS)

    Schepen, Andrew; Zhao, Tongtiegang; Wang, Quan J.; Robertson, David E.

    2018-03-01

    Rainfall forecasts are an integral part of hydrological forecasting systems at sub-seasonal to seasonal timescales. In seasonal forecasting, global climate models (GCMs) are now the go-to source for rainfall forecasts. For hydrological applications however, GCM forecasts are often biased and unreliable in uncertainty spread, and calibration is therefore required before use. There are sophisticated statistical techniques for calibrating monthly and seasonal aggregations of the forecasts. However, calibration of seasonal forecasts at the daily time step typically uses very simple statistical methods or climate analogue methods. These methods generally lack the sophistication to achieve unbiased, reliable and coherent forecasts of daily amounts and seasonal accumulated totals. In this study, we propose and evaluate a Rainfall Post-Processing method for Seasonal forecasts (RPP-S), which is based on the Bayesian joint probability modelling approach for calibrating daily forecasts and the Schaake Shuffle for connecting the daily ensemble members of different lead times. We apply the method to post-process ACCESS-S forecasts for 12 perennial and ephemeral catchments across Australia and for 12 initialisation dates. RPP-S significantly reduces bias in raw forecasts and improves both skill and reliability. RPP-S forecasts are also more skilful and reliable than forecasts derived from ACCESS-S forecasts that have been post-processed using quantile mapping, especially for monthly and seasonal accumulations. Several opportunities to improve the robustness and skill of RPP-S are identified. The new RPP-S post-processed forecasts will be used in ensemble sub-seasonal to seasonal streamflow applications.

  6. Relationship between the Prediction Accuracy of Tsunami Inundation and Relative Distribution of Tsunami Source and Observation Arrays: A Case Study in Tokyo Bay

    NASA Astrophysics Data System (ADS)

    Takagawa, T.

    2017-12-01

    A rapid and precise tsunami forecast based on offshore monitoring is getting attention as a way to reduce human losses due to devastating tsunami inundation. We developed a forecast method based on the combination of hierarchical Bayesian inversion with a pre-computed database and rapid post-computation of tsunami inundation. The method was applied to Tokyo Bay to evaluate the efficiency of observation arrays against three tsunamigenic earthquakes: a scenario earthquake at the Nankai trough and the two historic Genroku (1703) and Enpo (1677) earthquakes. In general, a rich observation array near the tsunami source has an advantage in both the accuracy and rapidness of the tsunami forecast. To examine the effect of observation time length, we used four types of data with lengths of 5, 10, 20 and 45 minutes after the earthquake occurrence. Prediction accuracy of tsunami inundation was evaluated against the simulated tsunami inundation areas around Tokyo Bay due to the target earthquakes. The shortest time length for accurate prediction varied with the target earthquake; here, accurate prediction means that the simulated values fall within the 95% credible intervals of the prediction. In the Enpo case, a 5-minute observation is enough for accurate prediction for Tokyo Bay, but 10 minutes and 45 minutes are needed in the Nankai trough and Genroku cases, respectively. The difference in the shortest time length for accurate prediction shows a strong relationship with the relative distance between the tsunami source and the observation arrays. In the Enpo case, offshore tsunami observation points are densely distributed even in the source region, so accurate prediction can be achieved within 5 minutes; this rapid, precise prediction is useful for early warnings. Even in the worst case of Genroku, where fewer observation points are available near the source, accurate prediction can be obtained within 45 minutes, which can still be useful for outlining the hazard at an early stage of the response.

  7. Forecasting risk along a river basin using a probabilistic and deterministic model for environmental risk assessment of effluents through ecotoxicological evaluation and GIS.

    PubMed

    Gutiérrez, Simón; Fernandez, Carlos; Barata, Carlos; Tarazona, José Vicente

    2009-12-20

    This work presents a computer model for Risk Assessment of Basins by Ecotoxicological Evaluation (RABETOX). The model is based on whole-effluent toxicity testing and water flows along a specific river basin. It is capable of estimating the risk along a river segment using deterministic and probabilistic approaches. The Henares River Basin was selected as a case study to demonstrate the importance of seasonal hydrological variations in Mediterranean regions. As model inputs, two different ecotoxicity tests (the miniaturized Daphnia magna acute test and the D. magna feeding test) were performed on grab samples from five wastewater treatment plant effluents. Also used as model inputs were flow data from the past 25 years, water velocity measurements, and precise distance measurements obtained using Geographical Information Systems (GIS). The model was implemented in a spreadsheet and the results were interpreted and represented using GIS in order to facilitate risk communication. To better understand the bioassay results, the effluents were screened through SPME-GC/MS analysis. The deterministic model, run for each month of one calendar year, showed a significant seasonal variation of risk and revealed that September represents the worst-case scenario, with values up to 950 Risk Units. This classifies the entire area of study for the month of September as "sublethal significant risk for standard species". The probabilistic approach, using Monte Carlo analysis, was performed at 7 different forecast points distributed along the Henares River. A 0% probability of finding "low risk" was found at all forecast points, with a more than 50% probability of finding "potential risk for sensitive species". The values obtained through both the deterministic and probabilistic approximations reveal the presence of certain substances which might be causing sublethal effects in the aquatic species present in the Henares River.

  8. Short-term energy outlook. Volume 2. Methodology

    NASA Astrophysics Data System (ADS)

    1983-05-01

    Recent changes in forecasting methodology for nonutility distillate fuel oil demand and for the near-term petroleum forecasts are discussed. The accuracy of previous short-term forecasts of most of the major energy sources published in the last 13 issues of the Outlook is evaluated. Macroeconomic and weather assumptions are included in this evaluation. Energy forecasts for 1983 are compared. Structural change in US petroleum consumption, the use of appropriate weather data in energy demand modeling, and petroleum inventories, imports, and refinery runs are discussed.

  9. Developing a Markov Model for Forecasting End Strength of Selected Marine Corps Reserve (SMCR) Officers

    DTIC Science & Technology

    2013-03-01

    ...moving average (ARIMA) model because the data is not a time series. The best a manpower planner can do at this point is to make an educated assumption... [DTIC record excerpt: "Developing a Markov Model for Forecasting End Strength of Selected Marine Corps Reserve (SMCR) Officers," Master's Thesis by Anthony D. Licari, March 2013.]
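
    The record above is fragmentary, but the core idea of a Markov manpower model is to project the distribution of officers across grades (plus attrition) with a transition matrix. The sketch below uses illustrative, made-up transition probabilities and accession counts, not SMCR data:

        import numpy as np

        # States: three notional officer grades plus an absorbing "separated" state.
        # Transition probabilities are illustrative only.
        P = np.array([
            [0.70, 0.20, 0.00, 0.10],   # grade 1 -> stay, promote, -, attrite
            [0.00, 0.75, 0.15, 0.10],   # grade 2
            [0.00, 0.00, 0.85, 0.15],   # grade 3
            [0.00, 0.00, 0.00, 1.00],   # separated (absorbing)
        ])
        accessions = np.array([120, 0, 0, 0])          # new officers entering grade 1 each year

        stock = np.array([800, 500, 300, 0], float)    # current inventory by state
        for year in range(1, 6):
            stock = stock @ P + accessions             # one-year projection
            end_strength = stock[:3].sum()             # everyone not yet separated
            print(f"year {year}: projected end strength = {end_strength:,.0f}")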

  10. Evaluation of the NCEP CFSv2 45-day Forecasts for Predictability of Intraseasonal Tropical Storm Activities

    NASA Astrophysics Data System (ADS)

    Schemm, J. E.; Long, L.; Baxter, S.

    2013-12-01

    Predictability of intraseasonal tropical storm (TS) activities is assessed using the 1999-2010 CFSv2 hindcast suite. Weekly TS activities in the CFSv2 45-day forecasts were determined using the TS detection and tracking method devised by Camargo and Zebiak (2002). The forecast periods are divided into weekly intervals for Week 1 through Week 6, and also the 30-day mean. The TS activities in those intervals are compared to the observed activities based on the NHC HURDAT and JTWC Best Track datasets. The CFSv2 45-day hindcast suite consists of forecast runs initialized at 00, 06, 12 and 18Z every day during the 1999-2010 period. For the predictability evaluation, forecast TS activities are analyzed based on 20-member ensemble forecasts comprised of 45-day runs made during the most recent 5 days prior to the verification period. The forecast TS activities are evaluated in terms of the number of storms, genesis locations and storm tracks during the weekly periods. The CFSv2 forecasts are shown to have a fair level of skill in predicting the number of storms over the Atlantic Basin, with temporal correlation scores ranging from 0.73 for Week 1 forecasts to 0.63 for Week 6, and average RMS errors ranging from 0.86 to 1.07 during the 1999-2010 hurricane seasons. Also, the forecast track density distribution and false alarm statistics are compiled using the hindcast analyses. In real-time applications of the intraseasonal TS activity forecasts, the climatological TS forecast statistics will be used to make model bias corrections in terms of storm counts, track distribution and removal of false alarms. An operational implementation of the weekly TS activity prediction is planned for early 2014 to provide an objective input for CPC's Global Tropical Hazards Outlooks.

  11. Quantifying the Usefulness of Ensemble-Based Precipitation Forecasts with Respect to Water Use and Yield during a Field Trial

    NASA Astrophysics Data System (ADS)

    Christ, E.; Webster, P. J.; Collins, G.; Byrd, S.

    2014-12-01

    Recent droughts and the continuing water wars between the states of Georgia, Alabama and Florida have made agricultural producers more aware of the importance of managing their irrigation systems more efficiently. Many southeastern states are beginning to consider laws that will require monitoring and regulation of water used for irrigation. Recently, Georgia suspended issuing irrigation permits in some areas of the southwestern portion of the state to try to limit the amount of water being used for irrigation. However, even in southern Georgia, which receives on average between 23 and 33 inches of rain during the growing season, irrigation can significantly impact crop yields. In fact, studies have shown that when fields do not receive rainfall at the most critical stages in the life of cotton, yields for irrigated fields can be up to twice those of non-irrigated cotton. This is the motivation for this study: to produce a forecast tool that will enable producers to make more efficient irrigation management decisions. We will use the ECMWF (European Centre for Medium-Range Weather Forecasts) EPS (Ensemble Prediction System) precipitation forecasts for the grid points included in the 1° x 1° lat/lon square surrounding the point of interest. We will then apply quantile-to-quantile (q-to-q) bias corrections to the forecasts. Once we have applied the bias corrections, we will use the checkbook method of irrigation scheduling to determine the probability of receiving the required amount of rainfall for each week of the growing season. These forecasts will be used during a field trial conducted at the CM Stripling Irrigation Research Park in Camilla, Georgia. This research will compare differences in yield and water use among the standard checkbook method of irrigation (which uses no precipitation forecast knowledge), the weather.com forecast, a dry land plot, and the ensemble-based forecasts mentioned above.
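
    The q-to-q bias correction referred to above is a quantile-mapping step: the raw forecast is mapped to the observed climatology at the same empirical quantile. A minimal empirical version, with placeholder rainfall climatologies rather than the authors' data, might look like:

        import numpy as np

        def quantile_map(forecast_value, model_climatology, obs_climatology):
            """Empirical quantile (q-to-q) mapping: find the quantile of the raw
            forecast in the model's own climatology, then read off the value at
            the same quantile of the observed climatology."""
            model_sorted = np.sort(np.asarray(model_climatology, float))
            obs_sorted = np.sort(np.asarray(obs_climatology, float))
            # empirical non-exceedance probability of the raw forecast value
            q = np.searchsorted(model_sorted, forecast_value) / len(model_sorted)
            q = np.clip(q, 0.0, 1.0)
            return np.quantile(obs_sorted, q)

        # illustrative weekly rainfall totals (mm); numbers are placeholders
        rng = np.random.default_rng(1)
        model_clim = rng.gamma(shape=1.5, scale=12.0, size=500)   # model tends too wet
        obs_clim = rng.gamma(shape=1.5, scale=9.0, size=500)
        print(quantile_map(30.0, model_clim, obs_clim))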

  12. Forecasting the Emergency Department Patients Flow.

    PubMed

    Afilal, Mohamed; Yalaoui, Farouk; Dugardin, Frédéric; Amodeo, Lionel; Laplanche, David; Blua, Philippe

    2016-07-01

    Emergency departments (ED) have become the patient's main point of entrance into modern hospitals, causing frequent overcrowding; hospital managers are therefore increasingly paying attention to the ED in order to provide better quality service for patients. One of the key elements of a good management strategy is demand forecasting, in this case forecasting patient flow, which will help decision makers optimize human (doctors, nurses, …) and material (beds, boxes, …) resource allocation. The main interest of this research is forecasting daily attendance at an emergency department. The study was conducted at the Emergency Department of the Troyes city hospital center, France, for which we propose a new practical ED patient classification that consolidates the CCMU and GEMSA categories into one category, and innovative time-series based models to forecast long- and short-term daily attendance. The models we developed for this case study show very good performance (up to 91.24% for the annual total flow forecast) and robustness to epidemic periods.

  13. A simplified real time method to forecast semi-enclosed basins storm surge

    NASA Astrophysics Data System (ADS)

    Pasquali, D.; Di Risio, M.; De Girolamo, P.

    2015-11-01

    Semi-enclosed basins are often prone to storm surge events. Indeed, their meteorological exposition, the presence of large continental shelf and their shape can lead to strong sea level set-up. A real time system aimed at forecasting storm surge may be of great help to protect human activities (i.e. to forecast flooding due to storm surge events), to manage ports and to safeguard coasts safety. This paper aims at illustrating a simple method able to forecast storm surge events in semi-enclosed basins in real time. The method is based on a mixed approach in which the results obtained by means of a simplified physics based model with low computational costs are corrected by means of statistical techniques. The proposed method is applied to a point of interest located in the Northern part of the Adriatic Sea. The comparison of forecasted levels against observed values shows the satisfactory reliability of the forecasts.

  14. Metrics for the Evaluation of the Utility of Air Quality Forecasting

    NASA Astrophysics Data System (ADS)

    Sumo, T. M.; Stockwell, W. R.

    2013-12-01

    Global warming is expected to lead to higher levels of air pollution, and therefore the forecasting of both long-term and daily air quality is an important component of assessing the costs of climate change and its impact on human health. The risks associated with poor air quality days (where the Air Pollution Index is greater than 100) include hospital visits and mortality. Accurate air quality forecasting has the potential to allow sensitive groups to take appropriate precautions. This research builds metrics for evaluating the utility of air quality forecasting in terms of its potential impacts. Our analysis of air quality models focuses on the Washington, DC/Baltimore, MD region over the summertime ozone seasons between 2010 and 2012. The metrics relevant to our analysis include: (1) the number of times that a high ozone or particulate matter (PM) episode is correctly forecasted, (2) the number of times that a high ozone or PM episode is forecasted but does not occur, and (3) the number of times when the air quality forecast predicts a cleaner air episode while the air was observed to have high ozone or PM. Our evaluation of the performance of air quality forecasts includes forecasts of ozone and particulate matter and data available from the U.S. Environmental Protection Agency (EPA)'s AIRNOW. We also examined observational ozone and particulate matter data available from Clean Air Partners. Overall, the forecast models perform well for our region and time interval.
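
    The three counts listed above are the cells of a standard 2x2 contingency table, from which the usual categorical scores follow. A small sketch with invented daily exceedance flags (not the AIRNOW data) is:

        import numpy as np

        def categorical_scores(forecast_exceed, observed_exceed):
            """Hits, misses, false alarms and the usual summary scores for
            yes/no exceedance forecasts (e.g. AQI > 100 days)."""
            f = np.asarray(forecast_exceed, bool)
            o = np.asarray(observed_exceed, bool)
            hits = np.sum(f & o)
            false_alarms = np.sum(f & ~o)
            misses = np.sum(~f & o)
            pod = hits / (hits + misses) if hits + misses else np.nan           # probability of detection
            far = false_alarms / (hits + false_alarms) if hits + false_alarms else np.nan
            return {"hits": int(hits), "false_alarms": int(false_alarms),
                    "misses": int(misses), "POD": pod, "FAR": far}

        # invented season of daily forecasts vs. observations (1 = exceedance)
        forecast = [1, 0, 1, 1, 0, 0, 1, 0, 1, 0]
        observed = [1, 0, 0, 1, 0, 1, 1, 0, 0, 0]
        print(categorical_scores(forecast, observed))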

  15. Hydro-economic assessment of hydrological forecasting systems

    NASA Astrophysics Data System (ADS)

    Boucher, M.-A.; Tremblay, D.; Delorme, L.; Perreault, L.; Anctil, F.

    2012-01-01

    An increasing number of publications show that ensemble hydrological forecasts exhibit good performance when compared to observed streamflow. Many studies also conclude that ensemble forecasts lead to better performance than deterministic ones. This investigation takes one step further by not only comparing ensemble and deterministic forecasts to observed values, but by employing the forecasts in a stochastic decision-making assistance tool for hydroelectricity production during a flood event on the Gatineau River in Canada. This allows a comparison between different types of forecasts according to their value in terms of energy, spillage and storage in a reservoir. The motivation is to adopt the point of view of an end-user, here a hydroelectricity producer. We show that ensemble forecasts exhibit excellent performance when compared to observations and are also satisfactory when used in operation management for electricity production. Further improvement in productivity can be reached through the use of a simple post-processing method.

  16. Forecasting the student–professor matches that result in unusually effective teaching

    PubMed Central

    Gross, Jennifer; Lakey, Brian; Lucas, Jessica L; LaCross, Ryan; Plotkowski, Andrea R; Winegard, Bo

    2015-01-01

    Background: Two important influences on students' evaluations of teaching are relationship and professor effects. Relationship effects reflect unique matches between students and professors such that some professors are unusually effective for some students, but not for others. Professor effects reflect inter-rater agreement that some professors are more effective than others, on average across students. Aims: We attempted to forecast students' evaluations of live lectures from brief, video-recorded teaching trailers. Sample: Participants were 145 college students (74% female) enrolled in introductory psychology courses at a public university in the Great Lakes region of the United States. Methods: Students viewed trailers early in the semester and attended live lectures months later. Because subgroups of students viewed the same professors, statistical analyses could isolate professor and relationship effects. Results: Evaluations were influenced strongly by relationship and professor effects, and students' evaluations of live lectures could be forecasted from students' evaluations of teaching trailers. That is, we could forecast the individual students who would respond unusually well to a specific professor (relationship effects). We could also forecast which professors elicited better evaluations in live lectures, on average across students (professor effects). Professors who elicited unusually good evaluations in some students also elicited better memory for lectures in those students. Conclusions: It appears possible to forecast relationship and professor effects on teaching evaluations by presenting brief teaching trailers to students. Thus, it might be possible to develop online recommender systems to help match students and professors so that unusually effective teaching emerges. PMID:24953773

  17. Forecast skill of a high-resolution real-time mesoscale model designed for weather support of operations at Kennedy Space Center and Cape Canaveral Air Station

    NASA Technical Reports Server (NTRS)

    Taylor, Gregory E.; Zack, John W.; Manobianco, John

    1994-01-01

    NASA funded Mesoscale Environmental Simulations and Operations (MESO), Inc. to develop a version of the Mesoscale Atmospheric Simulation System (MASS). The model has been modified specifically for short-range forecasting in the vicinity of KSC/CCAS. To accomplish this, the model domain has been limited to increase the number of horizontal grid points (and therefore grid resolution), and the model's treatment of precipitation, radiation, and surface hydrology physics has been enhanced to predict convection forced by local variations in surface heat, moisture fluxes, and cloud shading. The objectives of this paper are (1) to provide an overview of MASS, including the real-time initialization and the configuration for running the data pre-processor and model, and (2) to summarize the preliminary evaluation of the model's forecasts of temperature, moisture, and wind at selected rawinsonde station locations during February 1994 and July 1994. MASS is a hydrostatic, three-dimensional modeling system which includes schemes to represent planetary boundary layer processes, surface energy and moisture budgets, free-atmospheric long- and short-wave radiation, cloud microphysics, and sub-grid scale moist convection.

  18. Evaluation of streamflow forecast for the National Water Model of U.S. National Weather Service

    NASA Astrophysics Data System (ADS)

    Rafieeinasab, A.; McCreight, J. L.; Dugger, A. L.; Gochis, D.; Karsten, L. R.; Zhang, Y.; Cosgrove, B.; Liu, Y.

    2016-12-01

    The National Water Model (NWM), an implementation of the community WRF-Hydro modeling system, is an operational hydrologic forecasting model for the contiguous United States. The model forecasts distributed hydrologic states and fluxes, including soil moisture, snowpack, ET, and ponded water. In particular, the NWM provides streamflow forecasts at more than 2.7 million river reaches for three forecast ranges: short (15 hr), medium (10 days), and long (30 days). In this study, we verify short and medium range streamflow forecasts in the context of the verification of their respective quantitative precipitation forecasts/forcing (QPF), the High Resolution Rapid Refresh (HRRR) and the Global Forecast System (GFS). The streamflow evaluation is performed for summer of 2016 at more than 6,000 USGS gauges. Both individual forecasts and forecast lead times are examined. Selected case studies of extreme events aim to provide insight into the quality of the NWM streamflow forecasts. A goal of this comparison is to address how much streamflow bias originates from precipitation forcing bias. To this end, precipitation verification is performed over the contributing areas above (and between assimilated) USGS gauge locations. Precipitation verification is based on the aggregated, blended StageIV/StageII data as the "reference truth". We summarize the skill of the streamflow forecasts, their skill relative to the QPF, and make recommendations for improving NWM forecast skill.

  19. Evaluation of CMAQ and CAMx Ensemble Air Quality Forecasts during the 2015 MAPS-Seoul Field Campaign

    NASA Astrophysics Data System (ADS)

    Kim, E.; Kim, S.; Bae, C.; Kim, H. C.; Kim, B. U.

    2015-12-01

    The performance of air quality forecasts during the 2015 MAPS-Seoul field campaign was evaluated. A forecast system has been operated to support the campaign's daily aircraft route decisions for airborne measurements observing long-range transported plumes. We utilized two real-time ensemble systems, based on the Weather Research and Forecasting (WRF)-Sparse Matrix Operator Kernel Emissions (SMOKE)-Comprehensive Air quality Model with extensions (CAMx) modeling framework and the WRF-SMOKE-Community Multiscale Air Quality (CMAQ) framework, over northeastern Asia to simulate PM10 concentrations. The Global Forecast System (GFS) from the National Centers for Environmental Prediction (NCEP) was used to provide meteorological inputs for the forecasts. For an additional set of retrospective simulations, the ERA-Interim reanalysis from the European Centre for Medium-Range Weather Forecasts (ECMWF) was also utilized to assess forecast uncertainties arising from the meteorological data used. The Model Inter-Comparison Study for Asia (MICS-Asia) and National Institute of Environmental Research (NIER) Clean Air Policy Support System (CAPSS) emission inventories are used for foreign and domestic emissions, respectively. In this study, we evaluate the CMAQ and CAMx model performance during the campaign by comparing the results to the airborne and surface measurements. Contributions of foreign and domestic emissions are estimated using a brute-force method. Analyses of model performance and emissions will be utilized to improve air quality forecasts for the upcoming KORUS-AQ field campaign planned for 2016.

  20. Satellite Sounder Data Assimilation for Improving Alaska Region Weather Forecast

    NASA Technical Reports Server (NTRS)

    Zhu, Jiang; Stevens, E.; Zhang, X.; Zavodsky, B. T.; Heinrichs, T.; Broderson, D.

    2014-01-01

    A case study and monthly statistical analysis using sounder data assimilation to improve the Alaska regional weather forecast model are presented. Weather forecasting in Alaska faces challenges as well as opportunities. Alaska has a large land area with multiple types of topography and a long coastline, and weather forecast models must be finely tuned in order to accurately predict weather there. Being in the high latitudes gives Alaska greater coverage by polar-orbiting satellites for integration into forecasting models than the lower 48. Forecasting marine low stratus clouds is critical to the Alaska aviation and oil industries and is the focus of the current case study. NASA AIRS/CrIS sounder profile data are assimilated into the Alaska regional weather forecast model to improve Arctic marine stratus cloud forecasts. The choice of physics options for the WRF model is discussed, and the preprocessing of AIRS/CrIS sounder data for data assimilation is described. Local observation data, satellite data, and global data assimilation data are used to verify and evaluate the forecast results using the Model Evaluation Tools (MET).

  1. Using Meteorological Analogues for Reordering Postprocessed Precipitation Ensembles in Hydrological Forecasting

    NASA Astrophysics Data System (ADS)

    Bellier, Joseph; Bontron, Guillaume; Zin, Isabella

    2017-12-01

    Meteorological ensemble forecasts are nowadays widely used as input to hydrological models for probabilistic streamflow forecasting. These forcings are frequently biased and have to be statistically postprocessed, most of the time using univariate techniques that apply independently to individual locations, lead times and weather variables. Postprocessed ensemble forecasts therefore need to be reordered so as to reconstruct suitable multivariate dependence structures. The Schaake shuffle and ensemble copula coupling are the two most popular methods for this purpose. This paper proposes two adaptations of them that use meteorological analogues to reconstruct the spatiotemporal dependence structures of precipitation forecasts. The performances of the original and adapted techniques are compared through a multistep verification experiment using real forecasts from the European Centre for Medium-Range Weather Forecasts. This experiment evaluates not only multivariate precipitation forecasts but also the corresponding streamflow forecasts derived from hydrological modeling. Results show that the relative performances of the different reordering methods vary depending on the verification step. In particular, the standard Schaake shuffle is found to perform poorly when evaluated on streamflow. This emphasizes the crucial role of the precipitation spatiotemporal dependence structure in hydrological ensemble forecasting.
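
    For reference, the standard Schaake shuffle reorders each post-processed ensemble margin so that its ranks follow those of a historical template (e.g. the same analogue dates observed at several stations). The bare-bones version below is the textbook method, not the analogue-based adaptations proposed in the paper:

        import numpy as np

        def schaake_shuffle(postprocessed, historical_template):
            """Reorder post-processed ensemble members (members x variables) so that,
            for every variable, their ranks follow the rank pattern of a historical
            template of the same shape."""
            post = np.asarray(postprocessed, float)
            hist = np.asarray(historical_template, float)
            out = np.empty_like(post)
            for j in range(post.shape[1]):                    # loop over locations / lead times
                order = np.argsort(np.argsort(hist[:, j]))    # rank of each template member
                out[:, j] = np.sort(post[:, j])[order]        # assign sorted values by those ranks
            return out

        # toy example: 5 ensemble members at 3 locations
        rng = np.random.default_rng(2)
        post = rng.gamma(2.0, 5.0, size=(5, 3))      # calibrated but independently ordered members
        template = rng.gamma(2.0, 5.0, size=(5, 3))  # historical analogue precipitation fields
        print(schaake_shuffle(post, template))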

  2. Structuring Process Evaluation to Forecast Use and Sustainability of an Intervention: Theory and Data From the Efficacy Trial for Lunch Is in the Bag.

    PubMed

    Roberts-Gray, Cindy; Sweitzer, Sara J; Ranjit, Nalini; Potratz, Christa; Rood, Magdalena; Romo-Palafox, Maria Jose; Byrd-Williams, Courtney E; Briley, Margaret E; Hoelscher, Deanna M

    2017-08-01

    A cluster-randomized trial at 30 early care and education centers (Intervention = 15, waitlist Control = 15) showed the Lunch Is in the Bag intervention increased parents' packing of fruits, vegetables, and whole grains in their preschool children's bag lunches (parent-child dyads = 351 Intervention, 282 Control). To examine the utility of structuring the trial's process evaluation to forecast use, sustainability, and readiness of the intervention for wider dissemination and implementation. Pretrial, the research team simulated user experience to forecast use of the intervention. Multiattribute evaluation of user experience measured during the trial assessed use and sustainability of the intervention. Thematic analysis of posttrial interviews with users evaluated sustained use and readiness for wider dissemination. Moderate use was forecast by the research team. Multiattribute evaluation of activity logs, surveys, and observations during the trial indicated use consistent with the forecast except that prevalence of parents reading the newsletters was greater (83% vs. 50%) and hearing their children talk about the classroom was less (4% vs. 50%) than forecast. Early care and education center-level likelihood of sustained use was projected to be near zero. Posttrial interviews indicated use was sustained at zero centers. Structuring the efficacy trial's process evaluation as a progression of assessments of user experience produced generally accurate forecasts of use and sustainability of the intervention at the trial sites. This approach can assist interpretation of trial outcomes, aid decisions about dissemination of the intervention, and contribute to translational science for improving health.

  3. Evaluation of regression and neural network models for solar forecasting over different short-term horizons

    DOE PAGES

    Inanlouganji, Alireza; Reddy, T. Agami; Katipamula, Srinivas

    2018-04-13

    Forecasting solar irradiation has acquired immense importance in view of the exponential increase in the number of solar photovoltaic (PV) system installations. In this article, results of analyses involving statistical and machine-learning techniques to predict solar irradiation for different forecasting horizons are reported. Yearlong typical meteorological year 3 (TMY3) datasets from three cities in the United States with different climatic conditions have been used in this analysis. A simple forecast approach that assumes consecutive days to be identical serves as a baseline model against which to compare forecasting alternatives. To account for seasonal variability and to capture short-term fluctuations, different variants of the lagged moving average (LMX) model with cloud cover as the input variable are evaluated. Finally, the proposed LMX model is evaluated against an artificial neural network (ANN) model. How the one-hour and 24-hour models can be used in conjunction to predict different short-term rolling horizons is discussed, and this joint application is illustrated for a four-hour rolling horizon forecast scheme. Lastly, the effect of using predicted cloud cover values, instead of measured ones, on the accuracy of the models is assessed. Results show that LMX models do not degrade in forecast accuracy if the models are trained with forecast cloud cover data.
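
    The exact LMX specification is not spelled out in the record; one plausible reading is a linear regression of irradiation on its own lag, a short moving average, and cloud cover as the exogenous input. The sketch below encodes that assumption on synthetic hourly data and should not be taken as the authors' model:

        import numpy as np

        # Synthetic hourly data standing in for TMY3 irradiation and cloud cover.
        rng = np.random.default_rng(3)
        n = 24 * 60
        cloud = np.clip(rng.normal(0.5, 0.25, n), 0, 1)
        clear_sky = np.maximum(0, np.sin(np.linspace(0, 2 * np.pi * n / 24, n)))
        ghi = clear_sky * (1 - 0.75 * cloud) + rng.normal(0, 0.03, n)

        # One-hour-ahead "LMX-like" model: lagged irradiation, a 3-hour moving
        # average, and cloud cover at the target hour as the exogenous input
        # (an assumed structure, not the paper's exact specification).
        lag1 = ghi[2:-1]
        ma3 = (ghi[0:-3] + ghi[1:-2] + ghi[2:-1]) / 3.0
        x_cloud = cloud[3:]
        target = ghi[3:]
        X = np.column_stack([np.ones_like(lag1), lag1, ma3, x_cloud])

        split = int(0.8 * len(target))
        coef, *_ = np.linalg.lstsq(X[:split], target[:split], rcond=None)
        pred = X[split:] @ coef
        rmse = np.sqrt(np.mean((pred - target[split:]) ** 2))
        print("coefficients:", np.round(coef, 3), " test RMSE:", round(rmse, 4))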

  4. Evaluation of regression and neural network models for solar forecasting over different short-term horizons

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Inanlouganji, Alireza; Reddy, T. Agami; Katipamula, Srinivas

    Forecasting solar irradiation has acquired immense importance in view of the exponential increase in the number of solar photovoltaic (PV) system installations. In this article, results of analyses involving statistical and machine-learning techniques to predict solar irradiation for different forecasting horizons are reported. Yearlong typical meteorological year 3 (TMY3) datasets from three cities in the United States with different climatic conditions have been used in this analysis. A simple forecast approach that assumes consecutive days to be identical serves as a baseline model against which to compare forecasting alternatives. To account for seasonal variability and to capture short-term fluctuations, different variants of the lagged moving average (LMX) model with cloud cover as the input variable are evaluated. Finally, the proposed LMX model is evaluated against an artificial neural network (ANN) model. How the one-hour and 24-hour models can be used in conjunction to predict different short-term rolling horizons is discussed, and this joint application is illustrated for a four-hour rolling horizon forecast scheme. Lastly, the effect of using predicted cloud cover values, instead of measured ones, on the accuracy of the models is assessed. Results show that LMX models do not degrade in forecast accuracy if the models are trained with forecast cloud cover data.

  5. Affective Forecasting and Self-Rated Symptoms of Depression, Anxiety, and Hypomania: Evidence for a Dysphoric Forecasting Bias

    PubMed Central

    Hoerger, Michael; Quirk, Stuart W.; Chapman, Benjamin P.; Duberstein, Paul R.

    2011-01-01

    Emerging research has examined individual differences in affective forecasting; however, we are aware of no published study to date linking psychopathology symptoms to affective forecasting problems. Pitting cognitive theory against depressive realism theory, we examined whether dysphoria was associated with negatively biased affective forecasts or greater accuracy. Participants (n = 325) supplied predicted and actual emotional reactions for three days surrounding an emotionally-evocative relational event, Valentine’s Day. Predictions were made a month prior to the holiday. Consistent with cognitive theory, we found evidence for a dysphoric forecasting bias – the tendency of individuals in dysphoric states to overpredict negative emotional reactions to future events. The dysphoric forecasting bias was robust across ratings of positive and negative affect, forecasts for pleasant and unpleasant scenarios, continuous and categorical operationalizations of dysphoria, and three time points of observation. Similar biases were not observed in analyses examining the independent effects of anxiety and hypomania. Findings provide empirical evidence for the long assumed influence of depressive symptoms on future expectations. The present investigation has implications for affective forecasting studies examining information processing constructs, decision making, and broader domains of psychopathology. PMID:22397734

  6. Affective forecasting and self-rated symptoms of depression, anxiety, and hypomania: evidence for a dysphoric forecasting bias.

    PubMed

    Hoerger, Michael; Quirk, Stuart W; Chapman, Benjamin P; Duberstein, Paul R

    2012-01-01

    Emerging research has examined individual differences in affective forecasting; however, we are aware of no published study to date linking psychopathology symptoms to affective forecasting problems. Pitting cognitive theory against depressive realism theory, we examined whether dysphoria was associated with negatively biased affective forecasts or greater accuracy. Participants (n=325) supplied predicted and actual emotional reactions for three days surrounding an emotionally evocative relational event, Valentine's Day. Predictions were made a month prior to the holiday. Consistent with cognitive theory, we found evidence for a dysphoric forecasting bias-the tendency of individuals in dysphoric states to overpredict negative emotional reactions to future events. The dysphoric forecasting bias was robust across ratings of positive and negative affect, forecasts for pleasant and unpleasant scenarios, continuous and categorical operationalisations of dysphoria, and three time points of observation. Similar biases were not observed in analyses examining the independent effects of anxiety and hypomania. Findings provide empirical evidence for the long-assumed influence of depressive symptoms on future expectations. The present investigation has implications for affective forecasting studies examining information-processing constructs, decision making, and broader domains of psychopathology.

  7. Short-term ensemble streamflow forecasting using operationally-produced single-valued streamflow forecasts - A Hydrologic Model Output Statistics (HMOS) approach

    NASA Astrophysics Data System (ADS)

    Regonda, Satish Kumar; Seo, Dong-Jun; Lawrence, Bill; Brown, James D.; Demargne, Julie

    2013-08-01

    We present a statistical procedure for generating short-term ensemble streamflow forecasts from single-valued, or deterministic, streamflow forecasts produced operationally by the U.S. National Weather Service (NWS) River Forecast Centers (RFCs). The resulting ensemble streamflow forecast provides an estimate of the predictive uncertainty associated with the single-valued forecast to support risk-based decision making by the forecasters and by the users of the forecast products, such as emergency managers. Forced by single-valued quantitative precipitation and temperature forecasts (QPF, QTF), the single-valued streamflow forecasts are produced at a 6-h time step nominally out to 5 days into the future. The single-valued streamflow forecasts reflect various run-time modifications, or "manual data assimilation", applied by the human forecasters in an attempt to reduce error from various sources in the end-to-end forecast process. The proposed procedure generates ensemble traces of streamflow from a parsimonious approximation of the conditional multivariate probability distribution of future streamflow given the single-valued streamflow forecast, QPF, and the most recent streamflow observation. For parameter estimation and evaluation, we used a multiyear archive of the single-valued river stage forecast produced operationally by the NWS Arkansas-Red River Basin River Forecast Center (ABRFC) in Tulsa, Oklahoma. As a by-product of parameter estimation, the procedure provides a categorical assessment of the effective lead time of the operational hydrologic forecasts for different QPF and forecast flow conditions. To evaluate the procedure, we carried out hindcasting experiments in dependent and cross-validation modes. The results indicate that the short-term streamflow ensemble hindcasts generated from the procedure are generally reliable within the effective lead time of the single-valued forecasts and well capture the skill of the single-valued forecasts. For smaller basins, however, the effective lead time is significantly reduced by short basin memory and reduced skill in the single-valued QPF.
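
    The conditional multivariate distribution used by the procedure is not given in the record; a heavily simplified illustration of the general idea, dressing a deterministic streamflow forecast with an ensemble via a lognormal error model fitted to past forecast errors (all numbers invented), is:

        import numpy as np

        def generate_ensemble(single_valued_fcst, past_fcst, past_obs, n_members=50, seed=0):
            """Dress a deterministic streamflow forecast with an ensemble by
            fitting the historical multiplicative forecast errors in log space
            (a simplification of an HMOS-type conditional model)."""
            rng = np.random.default_rng(seed)
            log_err = np.log(np.asarray(past_obs, float)) - np.log(np.asarray(past_fcst, float))
            mu, sigma = log_err.mean(), log_err.std(ddof=1)
            draws = rng.normal(mu, sigma, size=n_members)
            return single_valued_fcst * np.exp(draws)

        # invented archive of past deterministic forecasts and observed flows (m3/s)
        past_fcst = np.array([120, 80, 200, 150, 95, 60, 300, 180])
        past_obs = np.array([135, 70, 230, 140, 100, 66, 270, 200])
        ens = generate_ensemble(150.0, past_fcst, past_obs)
        print("ensemble 10th/50th/90th percentiles:",
              np.percentile(ens, [10, 50, 90]).round(1))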

  8. Helping Resource Managers Understand Hydroclimatic Variability and Forecasts: A Case Study in Research Equity

    NASA Astrophysics Data System (ADS)

    Hartmann, H. C.; Pagano, T. C.; Sorooshian, S.; Bales, R.

    2002-12-01

    Expectations for hydroclimatic research are evolving as changes in the contract between science and society require researchers to provide "usable science" that can improve resource management policies and practices. However, decision makers have a broad range of abilities to access, interpret, and apply scientific research. "High-end users" have technical capabilities and operational flexibility capable of readily exploiting new information and products. "Low-end users" have fewer resources and are less likely to change their decision making processes without clear demonstration of benefits by influential early adopters (i.e., high-end users). Should research programs aim for efficiency, targeting high-end users? Should they aim for impact, targeting decisions with high economic value or great influence (e.g., state or national agencies)? Or should they focus on equity, whereby outcomes benefit groups across a range of capabilities? In this case study, we focus on hydroclimatic variability and forecasts. Agencies and individuals responsible for resource management decisions have varying perspectives about hydroclimatic variability and opportunities for using forecasts to improve decision outcomes. Improper interpretation of forecasts is widespread and many individuals find it difficult to place forecasts in an appropriate regional historical context. In addressing these issues, we attempted to mitigate traditional inequities in the scope, communication, and accessibility of hydroclimatic research results. High-end users were important in prioritizing information needs, while low-end users were important in determining how information should be communicated. For example, high-end users expressed hesitancy to use seasonal forecasts in the absence of quantitative performance evaluations. Our subsequently developed forecast evaluation framework and research products, however, were guided by the need for a continuum of evaluation measures and interpretive materials to enable low-end users to increase their understanding of probabilistic forecasts, credibility concepts, and implications for decision making. We also developed an interactive forecast assessment tool accessible over the Internet, to support resource decisions by individuals as well as agencies. The tool provides tutorials for guiding forecast interpretation, including quizzes that allow users to test their forecast interpretation skills. Users can monitor recent and historical observations for selected regions, communicated using terminology consistent with available forecast products. The tool also allows users to evaluate forecast performance for the regions, seasons, forecast lead times, and performance criteria relevant to their specific decision making situations. Using consistent product formats, the evaluation component allows individuals to use results at the level they are capable of understanding, while offering opportunity to shift to more sophisticated criteria. Recognizing that many individuals lack Internet access, the forecast assessment webtool design also includes capabilities for customized report generation so extension agents or other trusted information intermediaries can provide material to decision makers at meetings or site visits.

  9. Evaluations of Extended-Range tropical Cyclone Forecasts in the Western North Pacific by using the Ensemble Reforecasts: Preliminary Results

    NASA Astrophysics Data System (ADS)

    Tsai, Hsiao-Chung; Chen, Pang-Cheng; Elsberry, Russell L.

    2017-04-01

    The objective of this study is to evaluate the predictability of extended-range forecasts of tropical cyclones (TCs) in the western North Pacific using reforecasts from the National Centers for Environmental Prediction (NCEP) Global Ensemble Forecast System (GEFS) during 1996-2015, and from the Climate Forecast System (CFS) during 1999-2010. Tsai and Elsberry have demonstrated that an opportunity exists to support hydrological operations by using the extended-range TC formation and track forecasts in the western North Pacific from the ECMWF 32-day ensemble. To demonstrate this potential for decision-making processes regarding water resource management and hydrological operations in Taiwan reservoir watershed areas, special attention is given to the skill of the NCEP GEFS and CFS models in predicting the TCs affecting the Taiwan area. The first objective of this study is to analyze the skill of NCEP GEFS and CFS TC forecasts and quantify the forecast uncertainties via verification of categorical binary forecasts and probabilistic forecasts. The second objective is to investigate the relationships among large-scale environmental factors [e.g., El Niño Southern Oscillation (ENSO), Madden-Julian Oscillation (MJO), etc.] and the model forecast errors by using the reforecasts. Preliminary results indicate that the skill of the TC activity forecasts based on the raw forecasts can be further improved if the model biases are minimized by utilizing these reforecasts.

  10. Ecological forecasts: An emerging imperative

    Treesearch

    James S. Clark; Steven R. Carpenter; Mary Barber; Scott Collins; Andy Dobson; Jonathan A. Foley; David M. Lodge; Mercedes Pascual; Roger Pielke; William Pizer; Cathy Pringle; Walter V. Reid; Kenneth A. Rose; Osvaldo Sala; William H. Schlesinger; Diana H. Wall; David Wear

    2001-01-01

    Planning and decision-making can be improved by access to reliable forecasts of ecosystem state, ecosystem services, and natural capital. Availability of new data sets, together with progress in computation and statistics, will increase our ability to forecast ecosystem change. An agenda that would lead toward a capacity to produce, evaluate, and communicate forecasts...

  11. THE NEW ENGLAND AIR QUALITY FORECASTING PILOT PROGRAM: DEVELOPMENT OF AN EVALUATION PROTOCOL AND PERFORMANCE BENCHMARK

    EPA Science Inventory

    The National Oceanic and Atmospheric Administration recently sponsored the New England Forecasting Pilot Program to serve as a "test bed" for chemical forecasting by providing all of the elements of a National Air Quality Forecasting System, including the development and implemen...

  12. Assessing the Value of Post-processed State-of-the-art Long-term Weather Forecast Ensembles within An Integrated Agronomic Modelling Framework

    NASA Astrophysics Data System (ADS)

    LI, Y.; Castelletti, A.; Giuliani, M.

    2014-12-01

    Over recent years, long-term climate forecasts from global circulation models (GCMs) have been demonstrated to show increasing skill over climatology, thanks to advances in the modelling of coupled ocean-atmosphere dynamics. Improved information from long-term forecasts is supposed to be a valuable support to farmers in optimizing farming operations (e.g. crop choice, cropping time) and in more effectively coping with the adverse impacts of climate variability. Yet, evaluating how valuable this information can be is not straightforward, and farmers' response must be taken into consideration. Indeed, while long-range forecasts are traditionally evaluated in terms of accuracy by comparison of hindcast and observed values, in the context of agricultural systems potentially useful forecast information should alter the stakeholders' expectations, modify their decisions and ultimately have an impact on their annual benefit. Therefore, it is more desirable to assess the value of those long-term forecasts via decision-making models so as to extract direct indications of probable decision outcomes for farmers, i.e. from an end-to-end perspective. In this work, we evaluate the operational value of thirteen state-of-the-art long-range forecast ensembles against climatology forecasts and subjective predictions (i.e. past-year climate and historical average) within an integrated agronomic modeling framework embedding an implicit model of farmers' behavior. The collected ensemble datasets are bias-corrected and downscaled using a stochastic weather generator, in order to address the mismatch in spatio-temporal scale between forecast data from GCMs and the distributed crop simulation model. The agronomic model is first simulated using the forecast information (ex-ante), followed by a second run with the actual climate (ex-post). Multi-year simulations are performed to account for climate variability, and the value of the different climate forecasts is evaluated against the perfect foresight scenario based on the expected crop productivity as well as the land-use decisions. Our results show that not all the products generate beneficial effects for farmers and that forecast errors might be amplified by the farmers' decisions.

  13. Prospectively Evaluating the Collaboratory for the Study of Earthquake Predictability: An Evaluation of the UCERF2 and Updated Five-Year RELM Forecasts

    NASA Astrophysics Data System (ADS)

    Strader, Anne; Schneider, Max; Schorlemmer, Danijel; Liukis, Maria

    2016-04-01

    The Collaboratory for the Study of Earthquake Predictability (CSEP) was developed to rigorously test earthquake forecasts retrospectively and prospectively through reproducible, completely transparent experiments within a controlled environment (Zechar et al., 2010). During 2006-2011, thirteen five-year time-invariant prospective earthquake mainshock forecasts developed by the Regional Earthquake Likelihood Models (RELM) working group were evaluated through the CSEP testing center (Schorlemmer and Gerstenberger, 2007). The number, spatial, and magnitude components of the forecasts were compared to the respective observed seismicity components using a set of consistency tests (Schorlemmer et al., 2007, Zechar et al., 2010). In the initial experiment, all but three forecast models passed every test at the 95% significance level, with all forecasts displaying consistent log-likelihoods (L-test) and magnitude distributions (M-test) with the observed seismicity. In the ten-year RELM experiment update, we reevaluate these earthquake forecasts over an eight-year period from 2008-2016, to determine the consistency of previous likelihood testing results over longer time intervals. Additionally, we test the Uniform California Earthquake Rupture Forecast (UCERF2), developed by the U.S. Geological Survey (USGS), and the earthquake rate model developed by the California Geological Survey (CGS) and the USGS for the National Seismic Hazard Mapping Program (NSHMP) against the RELM forecasts. Both the UCERF2 and NSHMP forecasts pass all consistency tests, though the Helmstetter et al. (2007) and Shen et al. (2007) models exhibit greater information gain per earthquake according to the T- and W- tests (Rhoades et al., 2011). Though all but three RELM forecasts pass the spatial likelihood test (S-test), multiple forecasts fail the M-test due to overprediction of the number of earthquakes during the target period. Though there is no significant difference between the UCERF2 and NSHMP models, residual scores show that the NSHMP model is preferred in locations with earthquake occurrence, due to the lower seismicity rates forecasted by the UCERF2 model.
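
    The over-prediction of event numbers mentioned above is what the CSEP number test (N-test) measures: it compares the observed earthquake count with the Poisson distribution implied by the forecast's total expected rate. A schematic version, not the CSEP testing-center code:

        from math import exp

        def poisson_cdf(k, lam):
            """P(N <= k) for a Poisson(lam) count, summed directly."""
            term, total = exp(-lam), exp(-lam)
            for i in range(1, k + 1):
                term *= lam / i
                total += term
            return total

        def n_test(n_observed, n_forecast):
            """CSEP-style number test: delta1 = P(N >= n_obs) and
            delta2 = P(N <= n_obs) under the forecast rate.  A very small
            delta2 indicates the forecast over-predicts the number of events."""
            delta1 = 1.0 - poisson_cdf(n_observed - 1, n_forecast) if n_observed > 0 else 1.0
            delta2 = poisson_cdf(n_observed, n_forecast)
            return delta1, delta2

        # e.g. a forecast expecting 35 target earthquakes when 22 occurred
        print(n_test(22, 35.0))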

  14. Performance assessment of a Bayesian Forecasting System (BFS) for real-time flood forecasting

    NASA Astrophysics Data System (ADS)

    Biondi, D.; De Luca, D. L.

    2013-02-01

    The paper evaluates, for a number of flood events, the performance of a Bayesian Forecasting System (BFS), with the aim of evaluating total uncertainty in real-time flood forecasting. The predictive uncertainty of future streamflow is estimated through the Bayesian integration of two separate processors. The former evaluates the propagation of input uncertainty on simulated river discharge, the latter computes the hydrological uncertainty of actual river discharge associated with all other possible sources of error. A stochastic model and a distributed rainfall-runoff model were assumed, respectively, for rainfall and hydrological response simulations. A case study was carried out for a small basin in the Calabria region (southern Italy). The performance assessment of the BFS was performed with adequate verification tools suited for probabilistic forecasts of continuous variables such as streamflow. Graphical tools and scalar metrics were used to evaluate several attributes of the forecast quality of the entire time-varying predictive distributions: calibration, sharpness, accuracy, and continuous ranked probability score (CRPS). Besides the overall system, which incorporates both sources of uncertainty, other hypotheses resulting from the BFS properties were examined, corresponding to (i) a perfect hydrological model; (ii) a non-informative rainfall forecast for predicting streamflow; and (iii) a perfect input forecast. The results emphasize the importance of using different diagnostic approaches to perform comprehensive analyses of predictive distributions, to arrive at a multifaceted view of the attributes of the prediction. For the case study, the selected criteria revealed the interaction of the different sources of error, in particular the crucial role of the hydrological uncertainty processor when compensating, at the cost of wider forecast intervals, for the unreliable and biased predictive distribution resulting from the Precipitation Uncertainty Processor.

  15. Stochastic model to forecast ground-level ozone concentration at urban and rural areas.

    PubMed

    Dueñas, C; Fernández, M C; Cañete, S; Carretero, J; Liger, E

    2005-12-01

    Stochastic models that estimate ground-level ozone concentrations at an urban and a rural sampling point in south-eastern Spain have been developed using time-series analysis, spectral analysis of the series, and ARIMA models. The ARIMA(1,0,0) x (1,0,1)24 model satisfactorily predicts hourly ozone concentrations in the urban area, and an ARIMA(2,1,1) x (0,1,1)24 model was developed for the rural area. At both sampling points, predictions of hourly ozone concentrations agree reasonably well with measured values; however, the predictions at the rural point appear to be better than those at the urban point. The performance of the ARIMA models suggests that this kind of modelling can be suitable for forecasting ozone concentrations.
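    A minimal sketch of fitting a seasonal ARIMA of the urban-site form, ARIMA(1,0,0) x (1,0,1)24, with the statsmodels library; the hourly ozone series below is synthetic and purely illustrative:

    ```python
    # Fit a seasonal ARIMA(1,0,0)x(1,0,1) with period 24 to an hourly ozone-like
    # series and produce a next-day forecast. Data are synthetic.
    import numpy as np
    from statsmodels.tsa.statespace.sarimax import SARIMAX

    rng = np.random.default_rng(1)
    hours = np.arange(24 * 60)
    ozone = 40 + 15 * np.sin(2 * np.pi * hours / 24) + rng.normal(0, 5, hours.size)

    model = SARIMAX(ozone, order=(1, 0, 0), seasonal_order=(1, 0, 1, 24))
    fit = model.fit(disp=False)
    forecast = fit.forecast(steps=24)   # next 24 hourly ozone values
    print(forecast[:6])
    ```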

  16. How do I know if I’ve improved my continental scale flood early warning system?

    NASA Astrophysics Data System (ADS)

    Cloke, Hannah L.; Pappenberger, Florian; Smith, Paul J.; Wetterhall, Fredrik

    2017-04-01

    Flood early warning systems mitigate damage and loss of life and are an economically efficient way of enhancing disaster resilience. The use of continental-scale flood early warning systems is rapidly growing. The European Flood Awareness System (EFAS) is a pan-European flood early warning system forced by a multi-model ensemble of numerical weather predictions. Responses to scientific and technical changes can be complex in these computationally expensive continental-scale systems, and improvements need to be tested by evaluating runs of the whole system. It is demonstrated here that forecast skill is not correlated with the value of warnings, so in order to tell whether the system has been improved, an evaluation strategy is required that considers both forecast skill and warning value. The multi-forcing ensemble combinations of EFAS flood forecasts are evaluated with a new skill-value strategy. The full multi-forcing ensemble is recommended for operational forecasting, but there are spatial variations in the optimal forecast combination. Results indicate that optimizing forecasts based on value rather than skill alters the optimal forcing combination and the forecast performance, and that model diversity and ensemble size are both important in achieving the best overall performance. The use of several evaluation measures that consider both skill and value is strongly recommended when considering improvements to early warning systems.
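    Warning value is often summarized with a cost-loss relative-value score alongside conventional skill scores; the sketch below shows one standard formulation with illustrative warning counts and a user-specific cost/loss ratio (this is not the EFAS evaluation code):

    ```python
    # Cost-loss relative value of a warning system: compare the mean expense of
    # acting on the forecast with always/never protecting and with a perfect
    # forecast. Expenses are expressed in units of the avoidable loss L.
    def relative_value(hits, misses, false_alarms, correct_neg, cost_loss_ratio):
        n = hits + misses + false_alarms + correct_neg
        h, m, f = hits / n, misses / n, false_alarms / n
        s = h + m                                   # climatological event frequency
        e_forecast = (h + f) * cost_loss_ratio + m  # expense when following the warnings
        e_climate = min(cost_loss_ratio, s)         # best of "always protect" vs "never protect"
        e_perfect = s * cost_loss_ratio             # protect only when an event occurs
        return (e_climate - e_forecast) / (e_climate - e_perfect)

    print(relative_value(hits=30, misses=10, false_alarms=20, correct_neg=240,
                         cost_loss_ratio=0.2))
    ```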

  17. Forecasting the student-professor matches that result in unusually effective teaching.

    PubMed

    Gross, Jennifer; Lakey, Brian; Lucas, Jessica L; LaCross, Ryan; Plotkowski, Andrea R; Winegard, Bo

    2015-03-01

    Two important influences on students' evaluations of teaching are relationship and professor effects. Relationship effects reflect unique matches between students and professors such that some professors are unusually effective for some students, but not for others. Professor effects reflect inter-rater agreement that some professors are more effective than others, on average across students. We attempted to forecast students' evaluations of live lectures from brief, video-recorded teaching trailers. Participants were 145 college students (74% female) enrolled in introductory psychology courses at a public university in the Great Lakes region of the United States. Students viewed trailers early in the semester and attended live lectures months later. Because subgroups of students viewed the same professors, statistical analyses could isolate professor and relationship effects. Evaluations were influenced strongly by relationship and professor effects, and students' evaluations of live lectures could be forecasted from students' evaluations of teaching trailers. That is, we could forecast the individual students who would respond unusually well to a specific professor (relationship effects). We could also forecast which professors elicited better evaluations in live lectures, on average across students (professor effects). Professors who elicited unusually good evaluations in some students also elicited better memory for lectures in those students. It appears possible to forecast relationship and professor effects on teaching evaluations by presenting brief teaching trailers to students. Thus, it might be possible to develop online recommender systems to help match students and professors so that unusually effective teaching emerges. © 2014 The Authors. British Journal of Educational Psychology published by John Wiley & Sons Ltd on behalf of the British Psychological Society.

  18. How much are you prepared to PAY for a forecast?

    NASA Astrophysics Data System (ADS)

    Arnal, Louise; Coughlan, Erin; Ramos, Maria-Helena; Pappenberger, Florian; Wetterhall, Fredrik; Bachofen, Carina; van Andel, Schalk Jan

    2015-04-01

    Probabilistic hydro-meteorological forecasts are a crucial element of the decision-making chain in the field of flood prevention. The operational use of probabilistic forecasts is increasingly promoted through the development of novel, state-of-the-art forecast methods, and numerical skill is continuously increasing. However, the value of such forecasts for flood early-warning systems is a topic of diverging opinions. Indeed, the word value, when applied to flood forecasting, is multifaceted: it refers not only to the raw cost of acquiring and maintaining a probabilistic forecasting system (in terms of human and financial resources, data volume and computational time), but also, and perhaps most importantly, to the use of such products. This game aims to investigate this point. It is a willingness-to-pay game embedded in a risk-based decision-making experiment. Based on a "Red Cross/Red Crescent Climate Centre" game, it is a contribution to the international Hydrologic Ensemble Prediction Experiment (HEPEX). A limited number of probabilistic forecasts will be auctioned to the participants, with the price of the forecasts being market driven. All participants (whether or not they bought a forecast set) will then be taken through a decision-making process to issue warnings for extreme rainfall. This game will promote discussions around the topic of the value of forecasts for decision-making in the field of flood prevention.

  19. Assimilation and High Resolution Forecasts of Surface and Near Surface Conditions for the 2010 Vancouver Winter Olympic and Paralympic Games

    NASA Astrophysics Data System (ADS)

    Bernier, Natacha B.; Bélair, Stéphane; Bilodeau, Bernard; Tong, Linying

    2014-01-01

    A dynamical model was experimentally implemented to provide high-resolution forecasts at points of interest in the 2010 Vancouver Olympic and Paralympic region. In a first experiment, GEM-Surf, the near-surface and land-surface modeling system, is driven by operational atmospheric forecasts and used to refine the surface forecasts according to local surface conditions such as elevation and vegetation type. In this simple form, temperature and snow depth forecasts are improved mainly as a result of the better representation of real elevation. In a second experiment, screen-level observations and operational atmospheric forecasts are blended to drive a continuous cycle of near-surface and land-surface hindcasts. Hindcasts of the previous day's conditions are then regarded as today's optimized initial conditions. Hence, provided observations are available, the observation-driven hindcasts continuously ensure that daily forecasts are issued from improved initial conditions, which results in improved snow depth forecasts. In a third experiment, assimilation of snow depth data is applied to further optimize GEM-Surf's initial conditions, in addition to the use of blended observations and forecasts for forcing. Results show that snow depth and summer temperature forecasts are further improved by the addition of snow depth data assimilation.

  20. APPLICATION AND EVALUATION OF CMAQ IN THE UNITED STATES: AIR QUALITY FORECASTING AND RETROSPECTIVE MODELING

    EPA Science Inventory

    Presentation slides provide background on model evaluation techniques. Also included in the presentation is an operational evaluation of 2001 Community Multiscale Air Quality (CMAQ) annual simulation, and an evaluation of PM2.5 for the CMAQ air quality forecast (AQF) ...

  1. Improvements in approaches to forecasting and evaluation techniques

    NASA Astrophysics Data System (ADS)

    Weatherhead, Elizabeth

    2014-05-01

    The US is embarking on an experiment to make significant and sustained improvements in weather forecasting. The effort stems from a series of community conversations that recognized the rapid advancements in observations, modeling and computing techniques in the academic, governmental and private sectors. The new directions and initial efforts will be summarized, including information on possibilities for international collaboration. Most new projects are scheduled to start in the last half of 2014. Several advancements include ensemble forecasting with global models, and new sharing of computing resources. Newly developed techniques for evaluating weather forecast models will be presented in detail. The approaches use statistical techniques that incorporate pair-wise comparisons of forecasts with observations and account for daily auto-correlation to assess appropriate uncertainty in forecast changes. Some of the new projects allow for international collaboration, particularly on the research components of the projects.
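    A minimal sketch of the pair-wise comparison idea described above, in which the uncertainty of the mean forecast-minus-observation difference is widened using the lag-1 autocorrelation of the daily differences; the error series is synthetic, not output from the evaluation system itself:

    ```python
    # Autocorrelation-aware uncertainty for a mean forecast error: shrink the
    # sample size to an effective value n_eff = n * (1 - r1) / (1 + r1) before
    # computing the standard error. Synthetic daily errors.
    import numpy as np

    rng = np.random.default_rng(2)
    n = 365
    errors = rng.normal(0.5, 2.0, n)
    errors = np.convolve(errors, np.ones(3) / 3, mode="same")  # induce day-to-day correlation

    r1 = np.corrcoef(errors[:-1], errors[1:])[0, 1]
    n_eff = n * (1 - r1) / (1 + r1)
    mean_err = errors.mean()
    stderr = errors.std(ddof=1) / np.sqrt(n_eff)

    print(f"mean error = {mean_err:.2f} +/- {1.96 * stderr:.2f} (95% CI, r1 = {r1:.2f})")
    ```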

  2. Survey of air cargo forecasting techniques

    NASA Technical Reports Server (NTRS)

    Kuhlthan, A. R.; Vermuri, R. S.

    1978-01-01

    Forecasting techniques currently in use for estimating or predicting the demand for air cargo in various markets are discussed, with emphasis on the fundamentals of the different forecasting approaches. References to specific studies are cited when appropriate. The effectiveness of current methods is evaluated and several prospects for future activities or approaches are suggested. Appendices contain summary-type analyses of about 50 specific publications on forecasting, and selected bibliographies on air cargo forecasting, air passenger demand forecasting, and general demand and modal-split modeling.

  3. Skilful rainfall forecasts from artificial neural networks with long duration series and single-month optimization

    NASA Astrophysics Data System (ADS)

    Abbot, John; Marohasy, Jennifer

    2017-11-01

    General circulation models, which forecast by first modelling actual conditions in the atmosphere and ocean, are used extensively for monthly rainfall forecasting. We show how more skilful monthly and seasonal rainfall forecasts can be achieved through the mining of historical climate data using artificial neural networks (ANNs). The technique is demonstrated for two agricultural regions of Australia: the wheat belt of Western Australia and the sugar-growing region of coastal Queensland. The most skilful monthly rainfall forecasts, measured in terms of the Ideal Point Error (IPE) and a skill score relative to climatology, are consistently achieved by optimizing the ANNs for each month individually and by inputting longer historical series of climate indices, although using the longer series restricts the number of climate indices that can be used.
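    A minimal sketch of a skill score relative to climatology, one of the measures named above (the Ideal Point Error itself is not reproduced here); the rainfall series and "ANN output" are synthetic stand-ins:

    ```python
    # Skill relative to climatology: skill = 1 - MSE_forecast / MSE_climatology,
    # with a per-calendar-month climatology as the reference. Synthetic data.
    import numpy as np

    rng = np.random.default_rng(11)
    obs = rng.gamma(2.0, 30.0, 120)                    # 10 years of monthly rainfall (mm)
    ann_forecast = obs + rng.normal(0, 15, obs.size)   # stand-in for the ANN output

    monthly_clim = obs.reshape(10, 12).mean(axis=0)    # climatology for each calendar month
    clim_forecast = np.tile(monthly_clim, 10)

    mse_fcst = np.mean((ann_forecast - obs) ** 2)
    mse_clim = np.mean((clim_forecast - obs) ** 2)
    print(f"skill vs climatology = {1 - mse_fcst / mse_clim:.2f}")
    ```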

  4. Open Day at SHMI.

    NASA Astrophysics Data System (ADS)

    Jarosova, M.

    2010-09-01

    Around World Meteorological Day, an "Open Day" is organized at the Slovak Hydrometeorological Institute (SHMI). The event has a tradition of more than 10 years and is one of many opportunities to inform the public about meteorology, climatology and hydrology. It is held throughout Slovakia. Visitors can see the laboratories, the forecasting room, and the meteorological and climatological measuring points; visiting the forecasting room is the most popular part. Visitors are interested in, for example, climate change on Slovak territory, how weather forecasts are prepared, and dangerous phenomena. Every year we have more than 500 visitors.

  5. An Evaluation of Real-time Air Quality Forecasts and their Urban Emissions over Eastern Texas During the Summer of 2006 Second Texas Air Quality Study Field Study

    EPA Science Inventory

    Forecasts of ozone (O3) and particulate matter (diameter less than 2.5 µm, PM2.5) from seven air quality forecast models (AQFMs) are statistically evaluated against observations collected during August and September of 2006 (49 days) through the AIRNow netwo...

  6. Risk Based Reservoir Operations Using Ensemble Streamflow Predictions for Lake Mendocino in Mendocino County, California

    NASA Astrophysics Data System (ADS)

    Delaney, C.; Mendoza, J.; Whitin, B.; Hartman, R. K.

    2017-12-01

    Ensemble Forecast Operations (EFO) is a risk-based approach to reservoir flood operations that incorporates ensemble streamflow predictions (ESPs) made by NOAA's California-Nevada River Forecast Center (CNRFC). With the EFO approach, each member of an ESP is individually modeled to forecast system conditions and calculate the risk of reaching critical operational thresholds, and reservoir release decisions are computed that seek to manage the forecasted risk to established risk tolerance levels. A water management model was developed for Lake Mendocino, a 111,000 acre-foot reservoir located near Ukiah, California, to evaluate whether the EFO alternative can improve water supply reliability without increasing downstream flood risk. Lake Mendocino is a dual-use reservoir, owned and operated for flood control by the United States Army Corps of Engineers and operated for water supply by the Sonoma County Water Agency. Due to recent changes in the operations of an upstream hydroelectric facility, this reservoir has suffered from water supply reliability issues since 2007. The EFO alternative was simulated using a 26-year (1985-2010) ESP hindcast generated by the CNRFC, which approximates flow forecasts for 61 ensemble members over a 15-day horizon. Model simulation results of the EFO alternative demonstrate a 36% increase in median end-of-water-year (September 30) storage levels over existing operations. Additionally, model results show no increase in the occurrence of flows above flood stage for points downstream of Lake Mendocino. This investigation demonstrates that the EFO alternative may be a viable approach for managing Lake Mendocino for multiple purposes (water supply, flood mitigation, ecosystems) and warrants further investigation through additional modeling and analysis.
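    The core of an EFO-style scheme is the risk computed across ensemble members; the sketch below uses made-up storage, inflow, and threshold numbers (not Lake Mendocino operating criteria) to illustrate the calculation:

    ```python
    # Forecast risk as the fraction of ensemble members that reach a critical
    # threshold within the forecast horizon, compared to a risk tolerance.
    # All numbers are illustrative placeholders.
    import numpy as np

    rng = np.random.default_rng(3)
    n_members, horizon = 61, 15                       # ensemble members, days of lead time
    storage_now = 95_000                              # acre-feet
    inflows = rng.gamma(2.0, 1_500, (n_members, horizon))   # acre-feet/day per member
    releases = 1_000                                  # constant test release, acre-feet/day

    storage_traj = storage_now + np.cumsum(inflows - releases, axis=1)
    flood_pool = 111_000
    risk = (storage_traj >= flood_pool).any(axis=1).mean()  # fraction of members hitting the pool
    tolerance = 0.10
    print(f"forecast risk = {risk:.2%}; increase releases: {risk > tolerance}")
    ```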

  7. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Jie; Cui, Mingjian; Hodge, Bri-Mathias

    The large variability and uncertainty in wind power generation present a concern to power system operators, especially given the increasing amounts of wind power being integrated into the electric power system. Large ramps, one of the biggest concerns, can significantly influence system economics and reliability. The Wind Forecast Improvement Project (WFIP) was designed to improve the accuracy of forecasts and to evaluate the economic benefits of these improvements to grid operators. This paper evaluates the ramp forecasting accuracy gained by improving the performance of short-term wind power forecasting. The study focuses on the WFIP southern study region, which encompasses most of the Electric Reliability Council of Texas (ERCOT) territory, to compare the experimental WFIP forecasts to the existing short-term wind power forecasts (used at ERCOT) at multiple spatial and temporal scales. The study employs four significant wind power ramping definitions according to the power change magnitude, direction, and duration. The optimized swinging door algorithm is adopted to extract ramp events from actual and forecasted wind power time series. The results show that the experimental WFIP forecasts improve the accuracy of wind power ramp forecasting. This improvement can result in substantial cost savings and power system reliability enhancements.
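    A minimal sketch of ramp-event extraction by power-change magnitude and duration thresholds; this is a simplified sliding-window detector on synthetic data, not the optimized swinging door algorithm used in the study:

    ```python
    # Find ramp events: windows in which normalized power changes by at least
    # `threshold` within `max_duration` time steps. Synthetic 15-min series.
    import numpy as np

    def find_ramps(power, threshold=0.3, max_duration=4):
        """Return (start, end, change) for the first window starting at each
        index whose net change exceeds the threshold within max_duration steps."""
        ramps = []
        for start in range(len(power) - 1):
            last = min(start + max_duration, len(power) - 1)
            for end in range(start + 1, last + 1):
                change = power[end] - power[start]
                if abs(change) >= threshold:
                    ramps.append((start, end, change))
                    break
        return ramps

    rng = np.random.default_rng(4)
    wind_power = np.clip(np.cumsum(rng.normal(0, 0.08, 48)), 0, 1)  # fraction of capacity
    for start, end, change in find_ramps(wind_power):
        print(f"ramp from t={start} to t={end}: {change:+.2f} of capacity")
    ```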

  8. Evaluation of NU-WRF Rainfall Forecasts for IFloodS

    NASA Technical Reports Server (NTRS)

    Wu, Di; Peters-Lidard, Christa; Tao, Wei-Kuo; Petersen, Walter

    2016-01-01

    The Iowa Flood Studies (IFloodS) campaign was conducted in eastern Iowa as a pre-GPM-launch campaign from 1 May to 15 June 2013. During the campaign period, real-time forecasts were conducted using the NASA-Unified Weather Research and Forecasting (NU-WRF) model to support the everyday weather briefing. In this study, two sets of the NU-WRF rainfall forecasts are evaluated against Stage IV and Multi-Radar Multi-Sensor (MRMS) Quantitative Precipitation Estimation (QPE), with the objective of understanding the impact of land surface initialization on the predicted precipitation. NU-WRF is also compared with the 12-km North American Mesoscale Forecast System (NAM) forecast. In general, NU-WRF did a good job of capturing individual precipitation events and reproduced the spatial distribution of rainfall better than NAM. Further sensitivity tests show that the higher resolution has a positive impact on the rainfall forecast. The two sets of NU-WRF simulations produce very similar rainfall characteristics: land surface initialization does not show a significant impact on the short-term rainfall forecasts, largely because of the soil conditions during the field campaign period.

  9. Forecasting volcanic eruptions and other material failure phenomena: An evaluation of the failure forecast method

    NASA Astrophysics Data System (ADS)

    Bell, Andrew F.; Naylor, Mark; Heap, Michael J.; Main, Ian G.

    2011-08-01

    Power-law accelerations in the mean rate of strain, earthquakes and other precursors have been widely reported prior to material failure phenomena, including volcanic eruptions, landslides and laboratory deformation experiments, as predicted by several theoretical models. The Failure Forecast Method (FFM), which linearizes the power-law trend, has been routinely used to forecast the failure time in retrospective analyses; however, its performance has never been formally evaluated. Here we use synthetic and real data, recorded in laboratory brittle creep experiments and at volcanoes, to show that the assumptions of the FFM are inconsistent with the error structure of the data, leading to biased and imprecise forecasts. We show that a Generalized Linear Model method provides higher-quality forecasts that converge more accurately to the eventual failure time, accounting for the appropriate error distributions. This approach should be employed in place of the FFM to provide reliable quantitative forecasts and estimate their associated uncertainties.
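    A minimal sketch of the classical FFM linearization on synthetic precursor data: the inverse rate is fit with a straight line and extrapolated to zero to estimate the failure time. This illustrates the method being critiqued, not the Generalized Linear Model alternative advocated by the record:

    ```python
    # Failure Forecast Method (inverse-rate form): for a hyperbolically
    # accelerating precursor rate, 1/rate decays linearly in time and its
    # extrapolated zero crossing estimates the failure time. Synthetic data.
    import numpy as np

    t_f = 100.0                                   # true failure time (arbitrary units)
    t = np.linspace(0, 90, 60)
    rate = 50.0 / (t_f - t)                       # idealized accelerating precursor rate
    rate *= np.random.default_rng(5).lognormal(0.0, 0.1, t.size)  # multiplicative noise

    slope, intercept = np.polyfit(t, 1.0 / rate, 1)   # ordinary least squares on 1/rate
    t_failure_est = -intercept / slope                # time at which 1/rate reaches zero
    print(f"estimated failure time = {t_failure_est:.1f} (true = {t_f})")
    # The record argues that this OLS fit mis-specifies the error structure;
    # a generalized linear model with an appropriate error family is preferred.
    ```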

  10. The impact of satellite temperature soundings on the forecasts of a small national meteorological service

    NASA Technical Reports Server (NTRS)

    Wolfson, N.; Thomasell, A.; Alperson, Z.; Brodrick, H.; Chang, J. T.; Gruber, A.; Ohring, G.

    1984-01-01

    The impact of introducing satellite temperature sounding data on a numerical weather prediction model of a national weather service is evaluated. A dry, five-level, primitive-equation model covering most of the Northern Hemisphere is used for these experiments. Series of parallel forecast runs out to 48 hours are made with three different sets of initial conditions: (1) NOSAT runs, in which only conventional surface and upper air observations are used; (2) SAT runs, in which satellite soundings are added to the conventional data over oceanic regions and North Africa; and (3) ALLSAT runs, in which the conventional upper air observations are replaced by satellite soundings over the entire model domain. The impact on the forecasts is evaluated by three verification methods: the RMS errors in sea level pressure forecasts, systematic errors in sea level pressure forecasts, and errors in subjective forecasts of significant weather elements for a selected portion of the model domain. For the relatively short range of the present forecasts, the major beneficial impacts on the sea level pressure forecasts are found precisely in those areas where the satellite soundings are inserted and where conventional upper air observations are sparse. The RMS and systematic errors are reduced in these regions. The subjective forecasts of significant weather elements are improved with the use of the satellite data. It is found that the ALLSAT forecasts are of a quality comparable to the SAT forecasts.

  11. Communicating uncertainty in hydrological forecasts: mission impossible?

    NASA Astrophysics Data System (ADS)

    Ramos, Maria-Helena; Mathevet, Thibault; Thielen, Jutta; Pappenberger, Florian

    2010-05-01

    Cascading uncertainty through meteo-hydrological modelling chains for forecasting and integrated flood risk assessment is an essential step to improve the quality of hydrological forecasts. Although the best methodology to quantify the total predictive uncertainty in hydrology is still debated, there is common agreement that one must avoid uncertainty misrepresentation and miscommunication, as well as misinterpretation of information by users. Several recent studies point out that uncertainty, when properly explained and defined, is no longer unwelcome among emergency response organizations, users of flood risk information and the general public. However, efficient communication of uncertain hydro-meteorological forecasts is far from being a resolved issue. This study focuses on the interpretation and communication of uncertain hydrological forecasts, based on (uncertain) meteorological forecasts and (uncertain) rainfall-runoff modelling approaches, to decision-makers such as operational hydrologists and water managers in charge of flood warning and scenario-based reservoir operation. An overview of the typical flow of uncertainties and risk-based decisions in hydrological forecasting systems is presented. The challenges related to the extraction of meaningful information from probabilistic forecasts and the test of its usefulness in assisting operational flood forecasting are illustrated with the help of two case studies: 1) a study on the use and communication of probabilistic flood forecasting within the European Flood Alert System; 2) a case study on the use of probabilistic forecasts by operational forecasters from the hydroelectricity company EDF in France. These examples show that attention must be paid to initiatives that promote or reinforce the active participation of expert forecasters in the forecasting chain. The practice of face-to-face forecast briefings, focusing on sharing how forecasters interpret, describe and perceive the forecast scenarios output by the models, is essential. We believe that the efficient communication of uncertainty in hydro-meteorological forecasts is not a mission impossible. Questions remaining unanswered in probabilistic hydrological forecasting should not neutralize the goal of such a mission, and the suspense kept should instead act as a catalyst for overcoming the remaining challenges.

  12. Introducing an operational method to forecast long-term regional drought based on the application of artificial intelligence capabilities

    NASA Astrophysics Data System (ADS)

    Kousari, Mohammad Reza; Hosseini, Mitra Esmaeilzadeh; Ahani, Hossein; Hakimelahi, Hemila

    2017-01-01

    An effective drought forecast offers considerable advantages for the management of water resources used in agriculture, industry, and households. To introduce such a model with simple data inputs, this study presents a regional drought forecast method for Fars Province of Iran based on artificial intelligence capabilities (artificial neural networks) and the Standardized Precipitation Index (SPI in 3-, 6-, 9-, 12-, 18-, and 24-monthly series). The precipitation data of 41 rain gauge stations were used to compute the SPI values. In addition, weather signals including the Multivariate ENSO Index (MEI), North Atlantic Oscillation (NAO), Southern Oscillation Index (SOI), NINO1+2, anomaly NINO1+2, NINO3, anomaly NINO3, NINO4, anomaly NINO4, NINO3.4, and anomaly NINO3.4 were used as predictor variables to forecast the SPI time series for the next 12 months. Frequent testing and validation steps were carried out to obtain the best artificial neural network (ANN) models. The forecasted values were mapped for the verification period and compared with the observed maps for the same dates. Results showed considerable spatial and temporal relationships, even among the maps of different SPI time series. The forecasted maps for the first 6 months showed an average of 73% agreement with the observed ones. The most important finding, and the strong point of this study, was that although the drought forecast for each station and time series was completely independent, the relationships between spatial and temporal predictions were preserved. This strength mainly derives from the frequent testing and validation steps used to select the best drought forecast models from the many ANN models produced. Finally, wherever precipitation data are available, practical application of the presented method is possible.
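    A minimal sketch of how an SPI series of the kind used as the predictand above can be computed from monthly precipitation; the data are synthetic, and zero-precipitation handling and the ANN itself are omitted for brevity:

    ```python
    # Compute a 3-month SPI series: aggregate precipitation over 3 months, fit a
    # gamma distribution, and map the cumulative probabilities to standard
    # normal quantiles. Synthetic monthly totals.
    import numpy as np
    from scipy.stats import gamma, norm

    rng = np.random.default_rng(6)
    monthly_precip = rng.gamma(2.0, 20.0, 360)             # 30 years of monthly totals (mm)

    window = 3
    accum = np.convolve(monthly_precip, np.ones(window), mode="valid")

    shape, loc, scale = gamma.fit(accum, floc=0)           # gamma fit with location fixed at 0
    cdf = gamma.cdf(accum, shape, loc=loc, scale=scale)
    spi3 = norm.ppf(np.clip(cdf, 1e-6, 1 - 1e-6))          # SPI-3 series

    print(f"SPI-3 range: {spi3.min():.2f} to {spi3.max():.2f}")
    ```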

  13. The Ensemble Space Weather Modeling System (eSWMS): Status, Capabilities and Challenges

    NASA Astrophysics Data System (ADS)

    Fry, C. D.; Eccles, J. V.; Reich, J. P.

    2010-12-01

    Marking a milestone in space weather forecasting, the Space Weather Modeling System (SWMS) successfully completed validation testing in advance of operational testing at Air Force Weather Agency's primary space weather production center. This is the first coupling of stand-alone, physics-based space weather models that are currently in operations at AFWA supporting the warfighter. Significant development effort went into ensuring the component models were portable and scalable while maintaining consistent results across diverse high performance computing platforms. Coupling was accomplished under the Earth System Modeling Framework (ESMF). The coupled space weather models are the Hakamada-Akasofu-Fry version 2 (HAFv2) solar wind model and GAIM1, the ionospheric forecast component of the Global Assimilation of Ionospheric Measurements (GAIM) model. The SWMS was developed by team members from AFWA, Explorations Physics International, Inc. (EXPI) and Space Environment Corporation (SEC). The successful development of the SWMS provides new capabilities beyond enabling extended lead-time, data-driven ionospheric forecasts. These include ingesting diverse data sets at higher resolution, incorporating denser computational grids at finer time steps, and performing probability-based ensemble forecasts. Work of the SWMS development team now focuses on implementing the ensemble-based probability forecast capability by feeding multiple scenarios of 5 days of solar wind forecasts to the GAIM1 model based on the variation of the input fields to the HAFv2 model. The ensemble SWMS (eSWMS) will provide the most-likely space weather scenario with uncertainty estimates for important forecast fields. The eSWMS will allow DoD mission planners to consider the effects of space weather on their systems with more advance warning than is currently possible. The payoff is enhanced, tailored support to the warfighter with improved capabilities, such as point-to-point HF propagation forecasts, single-frequency GPS error corrections, and high cadence, high-resolution Space Situational Awareness (SSA) products. We present the current status of eSWMS, its capabilities, limitations and path of transition to operational use.

  14. Evaluation of statistical models for forecast errors from the HBV model

    NASA Astrophysics Data System (ADS)

    Engeland, Kolbjørn; Renard, Benjamin; Steinsland, Ingelin; Kolberg, Sjur

    2010-04-01

    Three statistical models for the forecast errors for inflow into the Langvatn reservoir in Northern Norway have been constructed and tested according to the agreement between (i) the forecast distribution and the observations and (ii) median values of the forecast distribution and the observations. For the first model, observed and forecasted inflows were transformed by the Box-Cox transformation before a first-order autoregressive model was constructed for the forecast errors; the parameters were conditioned on weather classes. In the second model, the Normal Quantile Transformation (NQT) was applied to observed and forecasted inflows before a similar first-order autoregressive model was constructed for the forecast errors. For the third model, positive and negative errors were modeled separately; the errors were first NQT-transformed before conditioning the mean error values on climate, forecasted inflow and yesterday's error. To test the three models we applied three criteria: we wanted (a) the forecast distribution to be reliable; (b) the forecast intervals to be narrow; (c) the median values of the forecast distribution to be close to the observed values. Models 1 and 2 gave almost identical results. The median values improved the forecast, with the Nash-Sutcliffe efficiency (Reff) increasing from 0.77 for the original forecast to 0.87 for the corrected forecasts. Models 1 and 2 over-estimated the forecast intervals but gave the narrowest intervals; their main drawback was that the distributions are less reliable than for Model 3. For Model 3 the median values did not fit well, since the autocorrelation was not accounted for. Since Model 3 did not benefit from the potential variance reduction that lies in bias estimation and removal, it gave on average wider forecast intervals than the two other models. At the same time, Model 3 on average slightly under-estimated the forecast intervals, probably explained by the use of average measures to evaluate the fit.
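    A minimal sketch of the second error model described above: forecast errors are passed through a Normal Quantile Transform and a first-order autoregressive model is fitted to the transformed series. The errors are synthetic and the weather-class conditioning is omitted:

    ```python
    # NQT + AR(1) error model: rank-based mapping of forecast errors to standard
    # normal, lag-1 autocorrelation as the AR(1) coefficient, and a one-step-ahead
    # predictive interval in transformed space. Synthetic skewed errors.
    import numpy as np
    from scipy.stats import norm, rankdata

    rng = np.random.default_rng(7)
    errors = rng.gamma(2.0, 3.0, 500) - 6.0          # skewed inflow forecast errors

    ranks = rankdata(errors)                         # Normal Quantile Transform via plotting positions
    z = norm.ppf(ranks / (len(errors) + 1))

    phi = np.corrcoef(z[:-1], z[1:])[0, 1]           # AR(1) coefficient
    sigma = np.sqrt(1 - phi**2)                      # innovation std. dev. for a unit-variance AR(1)

    z_pred_mean = phi * z[-1]                        # one-step-ahead mean in z-space
    print(f"phi = {phi:.2f}; next-step 90% interval in z-space: "
          f"[{z_pred_mean - 1.645*sigma:.2f}, {z_pred_mean + 1.645*sigma:.2f}]")
    ```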

  15. Evaluation and Applications of the Prediction of Intensity Model Error (PRIME) Model

    NASA Astrophysics Data System (ADS)

    Bhatia, K. T.; Nolan, D. S.; Demaria, M.; Schumacher, A.

    2015-12-01

    Forecasters and end users of tropical cyclone (TC) intensity forecasts would greatly benefit from a reliable expectation of model error to counteract the lack of consistency in TC intensity forecast performance. As a first step towards producing error predictions to accompany each TC intensity forecast, Bhatia and Nolan (2013) studied the relationship between synoptic parameters, TC attributes, and forecast errors. In this study, we build on previous results of Bhatia and Nolan (2013) by testing the ability of the Prediction of Intensity Model Error (PRIME) model to forecast the absolute error and bias of four leading intensity models available for guidance in the Atlantic basin. PRIME forecasts are independently evaluated at each 12-hour interval from 12 to 120 hours during the 2007-2014 Atlantic hurricane seasons. The absolute error and bias predictions of PRIME are compared to their respective climatologies to determine their skill. In addition to these results, we will present the performance of the operational version of PRIME run during the 2015 hurricane season. PRIME verification results show that it can reliably anticipate situations where particular models excel, and therefore could lead to a more informed protocol for hurricane evacuations and storm preparations. These positive conclusions suggest that PRIME forecasts also have the potential to lower the error in the original intensity forecasts of each model. As a result, two techniques are proposed to develop a post-processing procedure for a multimodel ensemble based on PRIME. The first approach is to inverse-weight models using PRIME absolute error predictions (higher predicted absolute error corresponds to lower weights). The second multimodel ensemble applies PRIME bias predictions to each model's intensity forecast and the mean of the corrected models is evaluated. The forecasts of both of these experimental ensembles are compared to those of the equal-weight ICON ensemble, which currently provides the most reliable forecasts in the Atlantic basin.
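    A minimal sketch of the first post-processing idea described above, an inverse-error-weighted consensus in which each model's weight is the reciprocal of its predicted absolute error; the intensity values and error predictions are illustrative placeholders, not PRIME output:

    ```python
    # Inverse-error weighting of a multimodel intensity consensus: models with
    # smaller predicted absolute error receive larger weights. Illustrative values.
    import numpy as np

    intensity_forecasts = np.array([95.0, 105.0, 88.0, 100.0])   # kt, four guidance models
    predicted_abs_error = np.array([12.0, 8.0, 15.0, 10.0])      # kt, predicted for this case

    weights = 1.0 / predicted_abs_error
    weights /= weights.sum()

    consensus = np.dot(weights, intensity_forecasts)
    equal_weight = intensity_forecasts.mean()
    print(f"inverse-error consensus = {consensus:.1f} kt, equal-weight = {equal_weight:.1f} kt")
    ```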

  16. Application Study of Comprehensive Forecasting Model Based on Entropy Weighting Method on Trend of PM2.5 Concentration in Guangzhou, China

    PubMed Central

    Liu, Dong-jun; Li, Li

    2015-01-01

    PM2.5 is the main factor in haze-fog pollution in China. In this study, the trend of PM2.5 concentration was first analyzed from a qualitative point of view, based on mathematical models and simulation. A comprehensive forecasting model (CFM) was then developed based on combination forecasting ideas. The Autoregressive Integrated Moving Average (ARIMA) model, Artificial Neural Networks (ANNs) and the Exponential Smoothing Method (ESM) were used to predict the time series of PM2.5 concentration, and the results of the comprehensive forecasting model were obtained by combining the results of the three methods using weights from the Entropy Weighting Method. The trend of PM2.5 concentration in Guangzhou, China was quantitatively forecasted with the comprehensive forecasting model, the results were compared with those of the three single models, and PM2.5 concentration values for the next ten days were predicted. The comprehensive forecasting model balanced the deviations of the single prediction methods and had better applicability, offering a new prediction approach for the air quality forecasting field. PMID:26110332
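    A minimal sketch of the entropy-weighting step used to combine the component forecasts; the error matrix and forecast values are synthetic, and the exact formulation in the study may differ from this common variant:

    ```python
    # Entropy Weighting Method for combination forecasting: compute the entropy
    # of each method's normalized historical error shares and assign larger
    # weights to methods with lower entropy. Synthetic errors and forecasts.
    import numpy as np

    # absolute errors of each method over a common evaluation window (rows = days)
    abs_errors = np.abs(np.random.default_rng(8).normal(0, 1, (30, 3))) * [8, 10, 12]

    p = abs_errors / abs_errors.sum(axis=0)                 # column-normalized error shares
    k = 1.0 / np.log(abs_errors.shape[0])
    entropy = -k * np.sum(p * np.log(p + 1e-12), axis=0)    # entropy of each method's errors
    weights = (1 - entropy) / np.sum(1 - entropy)           # weights sum to one

    component_forecasts = np.array([62.0, 58.0, 66.0])      # next-day PM2.5 (ug/m3) from ARIMA, ANN, ESM
    combined = float(np.dot(weights, component_forecasts))
    print(f"weights = {np.round(weights, 3)}, combined forecast = {combined:.1f} ug/m3")
    ```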

  17. Application Study of Comprehensive Forecasting Model Based on Entropy Weighting Method on Trend of PM2.5 Concentration in Guangzhou, China.

    PubMed

    Liu, Dong-jun; Li, Li

    2015-06-23

    PM2.5 is the main factor in haze-fog pollution in China. In this study, the trend of PM2.5 concentration was first analyzed from a qualitative point of view, based on mathematical models and simulation. A comprehensive forecasting model (CFM) was then developed based on combination forecasting ideas. The Autoregressive Integrated Moving Average (ARIMA) model, Artificial Neural Networks (ANNs) and the Exponential Smoothing Method (ESM) were used to predict the time series of PM2.5 concentration, and the results of the comprehensive forecasting model were obtained by combining the results of the three methods using weights from the Entropy Weighting Method. The trend of PM2.5 concentration in Guangzhou, China was quantitatively forecasted with the comprehensive forecasting model, the results were compared with those of the three single models, and PM2.5 concentration values for the next ten days were predicted. The comprehensive forecasting model balanced the deviations of the single prediction methods and had better applicability, offering a new prediction approach for the air quality forecasting field.

  18. An evaluation of the real-time tropical cyclone forecast skill of the Navy Operational Global Atmospheric Prediction System in the western North Pacific

    NASA Technical Reports Server (NTRS)

    Fiorino, Michael; Goerss, James S.; Jensen, Jack J.; Harrison, Edward J., Jr.

    1993-01-01

    The paper evaluates the meteorological quality and operational utility of the Navy Operational Global Atmospheric Prediction System (NOGAPS) in forecasting tropical cyclones. It is shown that the model can provide useful predictions of motion and formation on a real-time basis in the western North Pacific. The meteorological characteristics of the NOGAPS tropical cyclone predictions are evaluated by examining the formation of low-level cyclone systems in the tropics and the vortex structure in the NOGAPS analyses and verifying 72-h forecasts. The adjusted NOGAPS track forecasts showed skill comparable to the baseline aid and the dynamical model. NOGAPS successfully predicted unusual equatorward turns for several straight-running cyclones.

  19. Assessing North American multimodel ensemble (NMME) seasonal forecast skill to assist in the early warning of hydrometeorological extremes over East Africa

    USGS Publications Warehouse

    Shukla, Shraddhanand; Roberts, Jason B.; Hoell, Andrew; Funk, Chris; Robertson, Franklin R.; Kirtmann, Benjamin

    2016-01-01

    The skill of North American multimodel ensemble (NMME) seasonal forecasts in East Africa (EA), which encompasses one of the most food and water insecure areas of the world, is evaluated using deterministic, categorical, and probabilistic evaluation methods. The skill is estimated for all three primary growing seasons: March–May (MAM), July–September (JAS), and October–December (OND). It is found that the precipitation forecast skill in this region is generally limited and statistically significant over only a small part of the domain. In the case of MAM (JAS) [OND] season it exceeds the skill of climatological forecasts in parts of equatorial EA (Northern Ethiopia) [equatorial EA] for up to 2 (5) [5] months lead. Temperature forecast skill is generally much higher than precipitation forecast skill (in terms of deterministic and probabilistic skill scores) and statistically significant over a majority of the region. Over the region as a whole, temperature forecasts also exhibit greater reliability than the precipitation forecasts. The NMME ensemble forecasts are found to be more skillful and reliable than the forecast from any individual model. The results also demonstrate that for some seasons (e.g. JAS), the predictability of precipitation signals varies and is higher during certain climate events (e.g. ENSO). Finally, potential room for improvement in forecast skill is identified in some models by comparing homogeneous predictability in individual NMME models with their respective forecast skill.

  20. The air quality forecast in Beijing with Community Multi-scale Air Quality Modeling (CMAQ) System: model evaluation and improvement

    NASA Astrophysics Data System (ADS)

    Wu, Q.

    2013-12-01

    The MM5-SMOKE-CMAQ model system, developed by the United States Environmental Protection Agency (U.S. EPA) as the Models-3 system, has been used for daily air quality forecasting at the Beijing Municipal Environmental Monitoring Center (Beijing MEMC) as part of the Ensemble Air Quality Forecast System for Beijing (EMS-Beijing) since the Olympic Games year 2008. In this study, we collect the daily forecast results of the CMAQ model for the whole year 2010 for model evaluation. The results show that the model performs well on most days but clearly underestimates concentrations during some air pollution episodes. A typical air pollution episode from 11 to 20 January 2010 was chosen, during which the air pollution index (API) of particulate matter (PM10) observed by Beijing MEMC reached 180 while the predicted PM10-API was about 100. Taking into account all stations in Beijing, including urban and suburban stations, three numerical methods are used to improve the model: first, extending the 4-km inner domain from Beijing alone to an area that includes its surrounding cities; second, updating the Beijing stationary area emission inventory from statistical county level to village-town level, which provides more detailed spatial information for area emissions; and third, adding industrial point emissions in Beijing's surrounding cities (the latter two being emission improvements). As a result, the peak of the PM10-API averaged over the nine national standard stations, simulated by CMAQ as a daily hindcast, reaches 160, much closer to the observations. The new results show better model performance, with a correlation coefficient of 0.93 for the national standard station average and 0.84 for all stations, and a relative error of 15.7% for the national standard station average and 27% for all stations. (Figures in the original showed the PM10-API time series at the nine national standard stations in urban Beijing and a scatter diagram for all stations comparing the original forecast with the new result.)

  1. Impact of single-point GPS integrated water vapor estimates on short-range WRF model forecasts over southern India

    NASA Astrophysics Data System (ADS)

    Kumar, Prashant; Gopalan, Kaushik; Shukla, Bipasha Paul; Shyam, Abhineet

    2017-11-01

    Specifying physically consistent and accurate initial conditions is one of the major challenges of numerical weather prediction (NWP) models. In this study, ground-based global positioning system (GPS) integrated water vapor (IWV) measurements available from the International Global Navigation Satellite Systems (GNSS) Service (IGS) station in Bangalore, India, are used to assess the impact of GPS data on NWP model forecasts over southern India. Two experiments are performed, with and without assimilation of GPS-retrieved IWV observations, during the Indian winter monsoon period (November-December 2012) using a four-dimensional variational (4D-Var) data assimilation method. Assimilation of GPS data improved the model IWV analysis as well as the subsequent forecasts, with a positive impact of ~10% over Bangalore and nearby regions. The 24-h surface temperature forecasts from the Weather Research and Forecasting (WRF) model also improved when compared with observations. Small but significant improvements were found in the rainfall forecasts compared to the control experiments.

  2. Contrasting environments associated with storm prediction center tornado outbreak forecasts using synoptic-scale composite analysis

    NASA Astrophysics Data System (ADS)

    Bates, Alyssa Victoria

    Tornado outbreaks have significant human impact, so it is imperative that forecasts of these phenomena are accurate. As the synoptic setup lays the foundation for a forecast, synoptic-scale aspects of Storm Prediction Center (SPC) outbreak forecasts of varying accuracy were assessed. The percentage of tornado outbreaks falling within the SPC 10% tornado probability polygons was calculated, with false alarm events considered separately, and the outbreaks were separated into quartiles using a point-in-polygon algorithm. Statistical composite fields were created to represent the synoptic conditions of these groups and facilitate comparison. Overall, temperature advection showed the greatest differences between the groups. Additionally, there were significant differences in jet streak strength and the amount of vertical wind shear. The events forecast with low accuracy were associated with the weakest synoptic-scale setups. These results suggest that events with weak synoptic setups may warrant particular attention from tornado outbreak forecasters.

  3. Baseline and target values for regional and point PV power forecasts: Toward improved solar forecasting

    DOE PAGES

    Zhang, Jie; Hodge, Bri -Mathias; Lu, Siyuan; ...

    2015-11-10

    Accurate solar photovoltaic (PV) power forecasting allows utilities to reliably utilize solar resources on their systems. However, to truly measure the improvements that any new solar forecasting methods provide, it is important to develop a methodology for determining baseline and target values for the accuracy of solar forecasting at different spatial and temporal scales. This paper aims at developing a framework to derive baseline and target values for a suite of generally applicable, value-based, and custom-designed solar forecasting metrics. The work was informed by close collaboration with utility and independent system operator partners. The baseline values are established based on state-of-the-art numerical weather prediction models and persistence models in combination with a radiative transfer model. The target values are determined based on the reduction in the amount of reserves that must be held to accommodate the uncertainty of PV power output. The proposed reserve-based methodology is a reasonable and practical approach that can be used to assess the economic benefits gained from improvements in accuracy of solar forecasting. Lastly, the financial baseline and targets can be translated back to forecasting accuracy metrics and requirements, which will guide research on solar forecasting improvements toward the areas that are most beneficial to power systems operations.

  4. Using Heliospheric Imaging for Storm Forecasting - SMEI CME Observations as a Tool for Operational Forecasting at AFWA

    NASA Astrophysics Data System (ADS)

    Webb, D. F.; Johnston, J. C.; Fry, C. D.; Kuchar, T. A.

    2008-12-01

    Observations of coronal mass ejections (CMEs) from heliospheric imagers such as the Solar Mass Ejection Imager (SMEI) can lead to significant improvements in operational space weather forecasting. We are working with the Air Force Weather Agency (AFWA) to ingest SMEI all-sky imagery with appropriate tools to help forecasters improve their operational space weather forecasts. We describe two approaches: 1) near-real-time analysis of propagating CMEs from SMEI images alone, combined with near-Sun observations of CME onsets, and 2) using these speed calculations as a mid-course correction to the HAFv2 solar wind model forecasts. HAFv2 became operational at AFWA in late 2006. The objective is to determine a set of practical procedures that the duty forecaster can use to update or correct a solar wind forecast using heliospheric imager data. SMEI observations can be used inclusively to make storm forecasts, as recently discussed in Webb et al. (Space Weather, in press, 2008). We have developed a point-and-click analysis tool for use with SMEI images and are working with AFWA to ensure that timely SMEI images are available for analyses. When a frontside solar eruption occurs, especially if within about 45 deg. of Sun center, a forecaster checks for an associated CME observed by a coronagraph within an appropriate time window. If one is found, especially if the CME is a halo type, the forecaster checks SMEI observations about a day later, depending on the apparent initial CME speed, for possibly associated CMEs. If one is found, the leading edge is measured over several successive frames and an elongation-time plot is constructed; a minimum of three data points, i.e., over 3-4 orbits or about 6 hours, is necessary for such a plot. Using the solar source location and onset time of the CME from, e.g., SOHO observations, and assuming radial propagation, a distance-time relation is calculated and extrapolated to the 1 AU distance. As shown by Webb et al., the storm onset time is then expected to be about 3 hours after this 1 AU arrival time (AT). The prediction program is updated as more SMEI data become available. Currently, when an appropriate solar event occurs, AFWA routinely runs the HAFv2 model to make a forecast of the shock and ejecta arrival times at Earth. SMEI data can be used to improve this prediction. The HAFv2 model can produce synthetic sky maps of predicted CME brightness for comparison with SMEI images. The forecaster uses SMEI imagery to observe and track the CME, then measures the CME location and speed using the SMEI imagery and the HAFv2 synthetic sky maps. After comparing the SMEI and HAFv2 results, the forecaster can adjust a key input to HAFv2, such as the initial speed of the disturbance at the Sun or the mid-course speed, and iteratively runs HAFv2 until the observed and forecast sky maps match. The final HAFv2 solution becomes the new forecast. When the CME/shock arrives at (or does not reach) Earth, the forecaster verifies the forecast and updates the forecast skill statistics. Eventually, we plan to develop a more automated version of this procedure.

  5. Evaluating probabilistic dengue risk forecasts from a prototype early warning system for Brazil.

    PubMed

    Lowe, Rachel; Coelho, Caio As; Barcellos, Christovam; Carvalho, Marilia Sá; Catão, Rafael De Castro; Coelho, Giovanini E; Ramalho, Walter Massa; Bailey, Trevor C; Stephenson, David B; Rodó, Xavier

    2016-02-24

    Recently, a prototype dengue early warning system was developed to produce probabilistic forecasts of dengue risk three months ahead of the 2014 World Cup in Brazil. Here, we evaluate the categorical dengue forecasts across all microregions in Brazil, using dengue cases reported in June 2014 to validate the model. We also compare the forecast model framework to a null model, based on seasonal averages of previously observed dengue incidence. When considering the ability of the two models to predict high dengue risk across Brazil, the forecast model produced more hits and fewer missed events than the null model, with a hit rate of 57% for the forecast model compared to 33% for the null model. This early warning model framework may be useful to public health services, not only ahead of mass gatherings, but also before the peak dengue season each year, to control potentially explosive dengue epidemics.
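    The hit-rate comparison above comes from a 2x2 contingency table of forecast versus observed high-risk areas; a minimal sketch with synthetic microregion outcomes (not the study's observations) is:

    ```python
    # Categorical verification of high-risk warnings: build a 2x2 contingency
    # table and compute the hit rate and false alarm ratio. Synthetic data.
    import numpy as np

    rng = np.random.default_rng(9)
    n_regions = 500
    observed_high = rng.random(n_regions) < 0.2                      # observed high-incidence regions
    forecast_high = observed_high & (rng.random(n_regions) < 0.6)    # imperfect warnings on events
    forecast_high |= (~observed_high) & (rng.random(n_regions) < 0.1)  # plus some false alarms

    hits = np.sum(forecast_high & observed_high)
    misses = np.sum(~forecast_high & observed_high)
    false_alarms = np.sum(forecast_high & ~observed_high)

    hit_rate = hits / (hits + misses)
    false_alarm_ratio = false_alarms / (hits + false_alarms)
    print(f"hit rate = {hit_rate:.2f}, false alarm ratio = {false_alarm_ratio:.2f}")
    ```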

  6. Assessing Upper-Level Winds on Day-of-Launch

    NASA Technical Reports Server (NTRS)

    Bauman, William H., III; Wheeler, Mark M.

    2012-01-01

    On the day of launch, the 45th Weather Squadron Launch Weather Officers (LWOs) monitor the upper-level winds for their launch customers, including NASA's Launch Services Program (LSP). During launch operations, the payload launch team sometimes asks the LWO whether they expect the upper-level winds to change during the countdown, but the LWOs did not have the capability to quickly retrieve or display the upper-level observations and compare them to the numerical weather prediction model point forecasts. The LWOs requested that the Applied Meteorology Unit (AMU) develop a capability in the form of a graphical user interface (GUI) that would allow them to plot upper-level wind speed and direction observations from the Kennedy Space Center Doppler Radar Wind Profilers and Cape Canaveral Air Force Station rawinsondes and then overlay model point forecast profiles on the observation profiles to assess the performance of these models and graphically display them to the launch team. The AMU developed an Excel-based capability for the LWOs to assess the model forecast upper-level winds and compare them to observations. They did so by creating a GUI in Excel that allows the LWOs first to initialize the models by comparing the 0-hour model forecasts to the observations and then to display model forecasts in 3-hour intervals from the current time through 12 hours.

  7. A seasonal hydrologic ensemble prediction system for water resource management

    NASA Astrophysics Data System (ADS)

    Luo, L.; Wood, E. F.

    2006-12-01

    A seasonal hydrologic ensemble prediction system, developed for the Ohio River basin, has been improved and expanded to several other regions including the eastern U.S., Africa and East Asia. The prediction system adopts the traditional Extended Streamflow Prediction (ESP) approach, utilizing the VIC (Variable Infiltration Capacity) hydrological model as the central tool for producing ensemble predictions of soil moisture, snow and streamflow with lead times up to 6 months. VIC is forced by observed meteorology to estimate the hydrological initial condition prior to the forecast, but during the forecast period the atmospheric forcing comes from statistically downscaled seasonal forecasts from dynamical climate models. The system is currently producing real-time seasonal hydrologic forecasts for these regions on a monthly basis. Using hindcasts from a 19-year period (1981-1999), during which seasonal hindcasts from the NCEP Climate Forecast System (CFS) and the European Union DEMETER project are available, we evaluate the performance of the forecast system over our forecast regions. The evaluation shows that the prediction system using the current forecast approach is able to produce reliable and accurate precipitation, soil moisture and streamflow predictions, with overall skill much higher than that of the traditional ESP. In particular, forecasts based on multiple climate models are more skillful than single-model-based forecasts, which emphasizes the need to produce seasonal climate forecasts with multiple climate models for hydrologic applications. Forecasts from this system are expected to provide valuable information about future hydrologic states and associated risks for end users, including water resource management and financial sectors.

  8. Integration of Behind-the-Meter PV Fleet Forecasts into Utility Grid System Operations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hoff, Thomas Hoff; Kankiewicz, Adam

    Four major research objectives were completed over the course of this study. Three of the objectives were to evaluate three new, state-of-the-art solar irradiance forecasting models; the fourth was to improve the California Independent System Operator's (ISO) load forecasts by integrating behind-the-meter (BTM) PV forecasts. The three new solar irradiance forecasting models were the infrared (IR) satellite-based cloud motion vector (CMV) model; the WRF-SolarCA model and variants; and the Optimized Deep Machine Learning (ODML)-training model. The first two forecasting models targeted known weaknesses in current operational solar forecasts. They were benchmarked against existing operational numerical weather prediction (NWP) forecasts, visible satellite CMV forecasts, and measured PV plant power production. The IR CMV, WRF-SolarCA, and ODML-training forecasting models all improved the forecast to a significant degree, with improvements varying by time of day, cloudiness index, and geographic location. The fourth objective, demonstrating that the California ISO's load forecasts could be improved by integrating BTM PV forecasts, represented the project's most exciting and applicable gains. Operational BTM forecasts consisting of 200,000+ individual rooftop PV forecasts were delivered into the California ISO's real-time automated load forecasting (ALFS) environment and evaluated side-by-side with operational load forecasts with no BTM treatment. Overall, ALFS-BTM day-ahead (DA) forecasts performed better than baseline ALFS forecasts when compared to actual load data. Specifically, ALFS-BTM DA forecasts were observed to have the largest reduction of error during the afternoon on cloudy days. Shorter-term 30-minute-ahead ALFS-BTM forecasts were shown to have less error under all sky conditions, especially during the morning time periods when traditional load forecasts often experience their largest uncertainties. This work culminated in a GO decision by the California ISO to include zonal BTM forecasts in its operational load forecasting system. The California ISO's Manager of Short Term Forecasting, Jim Blatchford, summarized the research performed in this project with the following quote: "The behind-the-meter (BTM) California ISO region forecasting research performed by Clean Power Research and sponsored by the Department of Energy's SUNRISE program was an opportunity to verify value and demonstrate improved load forecast capability. In 2016, the California ISO will be incorporating the BTM forecast into the Hour Ahead and Day Ahead load models to look for improvements in the overall load forecast accuracy as BTM PV capacity continues to grow."

  9. Theoretical basis for operational ensemble forecasting of coronal mass ejections

    NASA Astrophysics Data System (ADS)

    Pizzo, V. J.; de Koning, C.; Cash, M.; Millward, G.; Biesecker, D. A.; Puga, L.; Codrescu, M.; Odstrcil, D.

    2015-10-01

    We lay out the theoretical underpinnings for the application of the Wang-Sheeley-Arge-Enlil modeling system to ensemble forecasting of coronal mass ejections (CMEs) in an operational environment. In such models, there is no magnetic cloud component, so our results pertain only to CME front properties, such as transit time to Earth. Within this framework, we find no evidence that the propagation is chaotic, and therefore, CME forecasting calls for different tactics than employed for terrestrial weather or hurricane forecasting. We explore a broad range of CME cone inputs and ambient states to flesh out differing CME evolutionary behavior in the various dynamical domains (e.g., large, fast CMEs launched into a slow ambient, and the converse; plus numerous permutations in between). CME propagation in both uniform and highly structured ambient flows is considered to assess how much the solar wind background affects the CME front properties at 1 AU. Graphical and analytic tools pertinent to an ensemble approach are developed to enable uncertainties in forecasting CME impact at Earth to be realistically estimated. We discuss how uncertainties in CME pointing relative to the Sun-Earth line affects the reliability of a forecast and how glancing blows become an issue for CME off-points greater than about the half width of the estimated input CME. While the basic results appear consistent with established impressions of CME behavior, the next step is to use existing records of well-observed CMEs at both Sun and Earth to verify that real events appear to follow the systematic tendencies presented in this study.

  10. Assessing Applications of GPM and IMERG Passive Microwave Rain Rates in Modeling and Operational Forecasting

    NASA Astrophysics Data System (ADS)

    Zavodsky, B.; Le Roy, A.; Smith, M. R.; Case, J.

    2016-12-01

    In support of NASA's recently launched GPM 'core' satellite, the NASA-SPoRT project is leveraging experience in research-to-operations transitions and training to provide feedback on the operational utility of GPM products. Thus far, SPoRT has focused on evaluating the Level 2 GPROF passive microwave and IMERG rain rate estimates. Formal evaluations with end-users have occurred, as well as internal evaluations of the datasets. One set of end users for these products comprises National Weather Service Forecast Offices (WFOs) and River Forecast Centers (RFCs), including forecasters and hydrologists. SPoRT has hosted a series of formal assessments to determine uses and utility of these datasets for NWS operations at specific offices. Forecasters primarily have used Level 2 swath rain rates to observe rainfall in otherwise data-void regions and to confirm model QPF for their nowcasting or short-term forecasting. Hydrologists have been evaluating both the Level 2 rain rates and the IMERG rain rates, including rain rate accumulations derived from IMERG; hydrologists have used these data to supplement gauge data for post-event analysis as well as for longer-term forecasting. Results from specific evaluations will be presented. Another evaluation of the GPM passive microwave rain rates has been in using the data within other products that are currently transitioned to end-users, rather than as stand-alone observations. For example, IMERG Early data is being used as a forcing mechanism in the NASA Land Information System (LIS) for a real-time soil moisture product over eastern Africa. IMERG is providing valuable precipitation information to LIS in an otherwise data-void region. Results and caveats will briefly be discussed. A third application of GPM data is using the IMERG Late and Final products for model verification in remote regions where high-quality gridded precipitation fields are not readily available. These datasets can now be used to verify NWP model forecasts over eastern Africa using the SPoRT-MET scripts verification package, a wrapper around the NCAR Model Evaluation Toolkit (MET) verification software.

  11. Near-term probabilistic forecast of significant wildfire events for the Western United States

    Treesearch

    Haiganoush K. Preisler; Karin L. Riley; Crystal S. Stonesifer; Dave E. Calkin; Matt Jolly

    2016-01-01

    Fire danger and potential for large fires in the United States (US) is currently indicated via several forecasted qualitative indices. However, landscape-level quantitative forecasts of the probability of a large fire are currently lacking. In this study, we present a framework for forecasting large fire occurrence - an extreme value event - and evaluating...

  12. Assessing the skill of seasonal precipitation and streamflow forecasts in sixteen French catchments

    NASA Astrophysics Data System (ADS)

    Crochemore, Louise; Ramos, Maria-Helena; Pappenberger, Florian

    2015-04-01

    Meteorological centres make sustained efforts to provide seasonal forecasts that are increasingly skilful. Streamflow forecasting is one of the many applications that can benefit from these efforts. Seasonal flow forecasts generated using seasonal ensemble precipitation forecasts as input to a hydrological model can help to take anticipatory measures for water supply reservoir operation or drought risk management. The objective of the study is to assess the skill of seasonal precipitation and streamflow forecasts in France. First, we evaluated the skill of ECMWF SYS4 seasonal precipitation forecasts for streamflow forecasting in sixteen French catchments. Daily flow forecasts were produced using raw seasonal precipitation forecasts as input to the GR6J hydrological model. Ensemble forecasts are issued every month with 15 or 51 members according to the month of the year and evaluated for up to 90 days ahead. In a second step, we applied eight variants of bias correction approaches to the precipitation forecasts prior to generating the flow forecasts. The approaches were based on the linear scaling and the distribution mapping methods. The skill of the ensemble forecasts was assessed in terms of accuracy (MAE), reliability (PIT diagram), and overall performance (CRPS). The results show that, in most catchments, raw seasonal precipitation and streamflow forecasts are more skilful in terms of accuracy and overall performance than a reference prediction based on historic observed precipitation and watershed initial conditions at the time of forecast. Reliability is the only attribute that is not significantly improved. The skill of the forecasts is, in general, improved when applying bias correction. Two bias correction methods showed the best performance for the studied catchments: the simple linear scaling of monthly values and the empirical distribution mapping of daily values. L. Crochemore is funded by the Interreg IVB DROP Project (Benefit of governance in DROught adaPtation).
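
    The two bias-correction families that performed best here, linear scaling of monthly values and empirical distribution (quantile) mapping of daily values, can be sketched in a few lines. The snippet below is a simplified, generic illustration with made-up data, not the authors' implementation.

      # Generic sketch of the two bias-correction families discussed above
      # (linear scaling and empirical distribution/quantile mapping); toy data only.
      import numpy as np

      def linear_scaling_factor(obs_monthly_mean, fcst_monthly_mean):
          # multiplicative factor applied to every forecast value of that calendar month
          return obs_monthly_mean / fcst_monthly_mean

      def empirical_quantile_map(fcst_value, fcst_climatology, obs_climatology):
          # map the forecast value to the observed value with the same empirical quantile
          q = np.searchsorted(np.sort(fcst_climatology), fcst_value) / len(fcst_climatology)
          return np.quantile(obs_climatology, min(q, 1.0))

      rng = np.random.default_rng(0)
      obs = rng.gamma(2.0, 2.0, 1000)          # stand-in for observed daily precipitation (mm)
      fcst = 1.3 * rng.gamma(2.0, 2.0, 1000)   # positively biased "forecast" climatology
      scaled = linear_scaling_factor(obs.mean(), fcst.mean()) * fcst[:5]
      mapped = empirical_quantile_map(6.0, fcst, obs)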

  13. How Hydroclimate Influences the Effectiveness of Particle Filter Data Assimilation of Streamflow in Initializing Short- to Medium-range Streamflow Forecasts

    NASA Astrophysics Data System (ADS)

    Clark, E.; Wood, A.; Nijssen, B.; Clark, M. P.

    2017-12-01

    Short- to medium-range (1- to 7-day) streamflow forecasts are important for flood control operations and for issuing potentially life-saving flood warnings. In the U.S., the National Weather Service River Forecast Centers (RFCs) issue such forecasts in real time, depending heavily on a manual data assimilation (DA) approach. Forecasters adjust model inputs, states, parameters and outputs based on experience and consideration of a range of supporting real-time information. Achieving high-quality forecasts from new automated, centralized forecast systems will depend critically on the adequacy of automated DA approaches to make analogous corrections to the forecasting system. Such approaches would further enable systematic evaluation of real-time flood forecasting methods and strategies. Toward this goal, we have implemented a real-time Sequential Importance Resampling particle filter (SIR-PF) approach to assimilate observed streamflow into simulated initial hydrologic conditions (states) for initializing ensemble flood forecasts. Assimilating streamflow alone in SIR-PF improves simulated streamflow and soil moisture during the model spin-up period prior to a forecast, with consequent benefits for forecasts. Nevertheless, it only consistently limits error in simulated snow water equivalent during the snowmelt season and in basins where precipitation falls primarily as snow. We examine how the simulated initial conditions with and without SIR-PF propagate into 1- to 7-day ensemble streamflow forecasts. Forecasts are evaluated in terms of reliability and skill over a 10-year period from 2005-2015. The focus of this analysis is on how interactions between hydroclimate and SIR-PF performance impact forecast skill. To this end, we examine forecasts for 5 hydroclimatically diverse basins in the western U.S. Some of these basins receive most of their precipitation as snow, others as rain. Some freeze throughout the mid-winter while others experience significant mid-winter melt events. We describe the methodology and present seasonal and inter-basin variations in DA-enhanced forecast skill.
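
    A single SIR particle-filter update of the kind described can be condensed to a few lines. The following is a conceptual sketch under a simple Gaussian observation-error assumption, with toy states and a toy state-to-flow mapping; it is not the authors' forecast-system code.

      # Conceptual sketch of one Sequential Importance Resampling (SIR) particle-filter
      # update using an observed streamflow value (not the operational implementation).
      import numpy as np

      def sir_update(particle_states, simulated_flows, observed_flow, obs_error_sd, rng):
          # weight each particle by the Gaussian likelihood of the streamflow observation
          weights = np.exp(-0.5 * ((simulated_flows - observed_flow) / obs_error_sd) ** 2)
          weights /= weights.sum()
          # resample the particle states (storages, snow, soil moisture, ...) by weight
          idx = rng.choice(len(particle_states), size=len(particle_states), p=weights)
          return particle_states[idx]

      rng = np.random.default_rng(42)
      states = rng.normal(50.0, 10.0, size=(100, 3))   # 100 particles x 3 toy model states
      flows = states[:, 0] * 0.8                       # toy mapping from first state to streamflow
      updated = sir_update(states, flows, observed_flow=45.0, obs_error_sd=5.0, rng=rng)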

  14. Observation Impacts for Longer Forecast Lead-Times

    NASA Astrophysics Data System (ADS)

    Mahajan, R.; Gelaro, R.; Todling, R.

    2013-12-01

    Observation impact on forecasts evaluated using adjoint-based techniques (e.g. Langland and Baker, 2004) is limited by the validity of the assumptions underlying the forecasting model adjoint. Most applications of this approach have focused on deriving observation impacts on short-range forecasts (e.g. 24-hour) in part to stay well within linearization assumptions. The most widely used measure of observation impact relies on the availability of the analysis for verifying the forecasts. As pointed out by Gelaro et al. (2007), and more recently by Todling (2013), this introduces undesirable correlations in the measure that are likely to affect the resulting assessment of the observing system. Stappers and Barkmeijer (2012) introduced a technique that, in principle, allows extending the validity of tangent linear and corresponding adjoint models to longer lead-times, thereby reducing the correlations in the measures used for observation impact assessments. The methodology provides the means to better represent linearized models by making use of Gaussian quadrature relations to handle various underlying non-linear model trajectories. The formulation is exact for particular bi-linear dynamics; it corresponds to an approximation for general-type nonlinearities and must be tested for large atmospheric models. The present work investigates the approach of Stappers and Barkmeijer (2012) in the context of NASA's Goddard Earth Observing System Version 5 (GEOS-5) atmospheric data assimilation system (ADAS). The goal is to calculate observation impacts in the GEOS-5 ADAS for forecast lead-times of at least 48 hours in order to reduce the potential for undesirable correlations that occur at shorter forecast lead times. References [1] Langland, R. H., and N. L. Baker, 2004: Estimation of observation impact using the NRL atmospheric variational data assimilation adjoint system. Tellus, 56A, 189-201. [2] Gelaro, R., Y. Zhu, and R. M. Errico, 2007: Examination of various-order adjoint-based approximations of observation impact. Meteorologische Zeitschrift, 16, 685-692. [3] Stappers, R. J. J., and J. Barkmeijer, 2012: Optimal linearization trajectories for tangent linear models. Q. J. R. Meteorol. Soc., 138, 170-184. [4] Todling, R., 2013: Comparing two approaches for assessing observation impact. Mon. Wea. Rev., 141, 1484-1505.

  15. The development and evaluation of a hydrological seasonal forecast system prototype for predicting spring flood volumes in Swedish rivers

    NASA Astrophysics Data System (ADS)

    Foster, Kean; Bertacchi Uvo, Cintia; Olsson, Jonas

    2018-05-01

    Hydropower makes up nearly half of Sweden's electrical energy production. However, the distribution of the water resources is not aligned with demand, as most of the inflows to the reservoirs occur during the spring flood period. This means that carefully planned reservoir management is required to help redistribute water resources to ensure optimal production, and accurate forecasts of the spring flood volume (SFV) are essential for this. The current operational SFV forecasts use a historical ensemble approach where the HBV model is forced with historical observations of precipitation and temperature. In this work we develop and test a multi-model prototype, building on previous work, and evaluate its ability to forecast the SFV in 84 sub-basins in northern Sweden. The hypothesis explored in this work is that a multi-model seasonal forecast system incorporating different modelling approaches is generally more skilful at forecasting the SFV in snow-dominated regions than a forecast system that utilises only one approach. The testing is done using cross-validated hindcasts for the period 1981-2015 and the results are evaluated against both climatology and the current system to determine skill. Both multi-model methods considered showed skill over the reference forecasts. The version that combined the historical modelling chain, dynamical modelling chain, and statistical modelling chain performed better than the other and was chosen for the prototype. The prototype was able to outperform the current operational system 57 % of the time on average and reduce the error in the SFV by ~6 % across all sub-basins and forecast dates.

  16. Spatial Ensemble Postprocessing of Precipitation Forecasts Using High Resolution Analyses

    NASA Astrophysics Data System (ADS)

    Lang, Moritz N.; Schicker, Irene; Kann, Alexander; Wang, Yong

    2017-04-01

    Ensemble prediction systems are designed to account for errors or uncertainties in the initial and boundary conditions, imperfect parameterizations, etc. However, due to sampling errors and underestimation of the model errors, these ensemble forecasts tend to be underdispersive, and to lack both reliability and sharpness. To overcome such limitations, statistical postprocessing methods are commonly applied to these forecasts. In this study, a full-distributional spatial post-processing method is applied to short-range precipitation forecasts over Austria using Standardized Anomaly Model Output Statistics (SAMOS). Following Stauffer et al. (2016), observation and forecast fields are transformed into standardized anomalies by subtracting a site-specific climatological mean and dividing by the climatological standard deviation. Because only a single regression model needs to be fitted for the whole domain, the SAMOS framework provides a computationally inexpensive method to create operationally calibrated probabilistic forecasts for any arbitrary location or for all grid points in the domain simultaneously. Taking advantage of the INCA system (Integrated Nowcasting through Comprehensive Analysis), high resolution analyses are used for the computation of the observed climatology and for model training. The INCA system operationally combines station measurements and remote sensing data into real-time objective analysis fields at 1 km horizontal resolution and 1 h temporal resolution. The precipitation forecast used in this study is obtained from a limited area model ensemble prediction system also operated by ZAMG. The so-called ALADIN-LAEF system provides, by applying a multi-physics approach, a 17-member forecast at a horizontal resolution of 10.9 km and a temporal resolution of 1 hour. The SAMOS approach thus statistically combines the in-house developed high-resolution analysis and ensemble prediction systems. The station-based validation of 6-hour precipitation sums shows a mean improvement of more than 40% in CRPS when compared to bilinearly interpolated uncalibrated ensemble forecasts. The validation on randomly selected grid points, representing the true height distribution over Austria, still indicates a mean improvement of 35%. The applied statistical model is currently set up for 6-hourly and daily accumulation periods, but will be extended to a temporal resolution of 1-3 hours within a new probabilistic nowcasting system operated by ZAMG.
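
    The standardized-anomaly step at the heart of SAMOS is simple to sketch: transform forecasts and observations into anomalies with respect to their climatologies, fit one regression for the whole domain, and back-transform. The snippet below is illustrative only (toy climatology values and a plain linear fit); the operational ZAMG system fits a full-distributional regression.

      # Minimal illustration of the standardized-anomaly (SAMOS-style) idea,
      # not the operational implementation.
      import numpy as np

      def to_anomaly(x, clim_mean, clim_sd):
          return (x - clim_mean) / clim_sd

      def from_anomaly(a, clim_mean, clim_sd):
          return a * clim_sd + clim_mean

      rng = np.random.default_rng(1)
      obs_clim_mean, obs_clim_sd = 2.0, 1.5            # toy site climatology of the observations
      fcst_clim_mean, fcst_clim_sd = 2.5, 1.2          # toy climatology of the raw forecasts
      obs = rng.gamma(2.0, 1.0, 500)
      fcst = 1.1 * obs + rng.normal(0, 0.5, 500)       # imperfect ensemble-mean forecast
      y = to_anomaly(obs, obs_clim_mean, obs_clim_sd)
      x = to_anomaly(fcst, fcst_clim_mean, fcst_clim_sd)
      slope, intercept = np.polyfit(x, y, 1)           # one regression for the whole domain
      calibrated = from_anomaly(intercept + slope * x, obs_clim_mean, obs_clim_sd)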

  17. A Model For Rapid Estimation of Economic Loss

    NASA Astrophysics Data System (ADS)

    Holliday, J. R.; Rundle, J. B.

    2012-12-01

    One of the loftier goals in seismic hazard analysis is the creation of an end-to-end earthquake prediction system: a "rupture to rafters" work flow that takes a prediction of fault rupture, propagates it with a ground shaking model, and outputs a damage or loss profile at a given location. So far, the initial prediction of an earthquake rupture (either as a point source or a fault system) has proven to be the most difficult and least solved step in this chain. However, this may soon change. The Collaboratory for the Study of Earthquake Predictability (CSEP) has amassed a suite of earthquake source models for assorted testing regions worldwide. These models are capable of providing rate-based forecasts for earthquake (point) sources over a range of time horizons. Furthermore, these rate forecasts can be easily refined into probabilistic source forecasts. While it's still difficult to fully assess the "goodness" of each of these models, progress is being made: new evaluation procedures are being devised and earthquake statistics continue to accumulate. The scientific community appears to be heading towards a better understanding of rupture predictability. Ground shaking mechanics are better understood, and many different sophisticated models exist. While these models tend to be computationally expensive and often regionally specific, they do a good job of matching empirical data. It is perhaps time to start addressing the third step in the seismic hazard prediction system. We present a model for rapid economic loss estimation using ground motion (PGA or PGV) and socioeconomic measures as its input. We show that the model can be calibrated on a global scale and applied worldwide. We also suggest how the model can be improved and generalized to non-seismic natural disasters such as hurricanes and severe wind storms.

  18. Using Socioeconomic Data to Calibrate Loss Estimates

    NASA Astrophysics Data System (ADS)

    Holliday, J. R.; Rundle, J. B.

    2013-12-01

    One of the loftier goals in seismic hazard analysis is the creation of an end-to-end earthquake prediction system: a "rupture to rafters" work flow that takes a prediction of fault rupture, propagates it with a ground shaking model, and outputs a damage or loss profile at a given location. So far, the initial prediction of an earthquake rupture (either as a point source or a fault system) has proven to be the most difficult and least solved step in this chain. However, this may soon change. The Collaboratory for the Study of Earthquake Predictability (CSEP) has amassed a suite of earthquake source models for assorted testing regions worldwide. These models are capable of providing rate-based forecasts for earthquake (point) sources over a range of time horizons. Furthermore, these rate forecasts can be easily refined into probabilistic source forecasts. While it's still difficult to fully assess the "goodness" of each of these models, progress is being made: new evaluation procedures are being devised and earthquake statistics continue to accumulate. The scientific community appears to be heading towards a better understanding of rupture predictability. Ground shaking mechanics are better understood, and many different sophisticated models exist. While these models tend to be computationally expensive and often regionally specific, they do a good job of matching empirical data. It is perhaps time to start addressing the third step in the seismic hazard prediction system. We present a model for rapid economic loss estimation using ground motion (PGA or PGV) and socioeconomic measures as its input. We show that the model can be calibrated on a global scale and applied worldwide. We also suggest how the model can be improved and generalized to non-seismic natural disasters such as hurricanes and severe wind storms.

  19. Forecasting sustainability: growth to removals ratio dynamics

    Treesearch

    Natasha A. James; Robert C. Abt; Karen L. Abt; Raymond M. Sheffield; Fredrick W. Cubbage

    2012-01-01

    The growth to removals ratio (G/R) is often used as a measure of forest resource sustainability and as a reference point to forecast future resource sustainability. However, little work has been done to determine if any relationship exists between G/R over time. Forest Inventory and Analysis data for 12 southern states were used to determine if any relationship exists...

  20. Short-term load forecasting using neural network for future smart grid application

    NASA Astrophysics Data System (ADS)

    Zennamo, Joseph Anthony, III

    Short-term load forecasting for power systems has long been a classic problem. Not only has it been researched extensively and intensively, but a variety of forecasting methods has also been proposed. This thesis outlines some aspects and functions of smart meters. It also presents the policies, current status, and future projects and objectives of smart grid (SG) development in several countries. The thesis then compares the main features of the latest smart meter products from different companies. Lastly, three types of prediction models are established in MATLAB to emulate the short-term load forecasting function of a smart grid, and their results are compared and analyzed in terms of accuracy. In this thesis, additional variables such as dew point temperature are used in the neural network model to achieve better short-term load forecasting accuracy.
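
    A bare-bones version of such a model, a small neural network that maps recent load and weather inputs (including dew point temperature) to the next load value, might look like the sketch below. The feature set, network size, and synthetic data are assumptions for illustration, not the thesis' MATLAB models.

      # Illustrative short-term load forecasting neural network with weather inputs
      # such as dew point temperature (synthetic data; hypothetical feature set).
      import numpy as np
      from sklearn.neural_network import MLPRegressor
      from sklearn.pipeline import make_pipeline
      from sklearn.preprocessing import StandardScaler

      rng = np.random.default_rng(0)
      n = 2000
      hour = rng.integers(0, 24, n)
      temp = rng.normal(20, 8, n)
      dew_point = temp - rng.uniform(1, 10, n)
      prev_load = rng.normal(1000, 150, n)
      load = (0.6 * prev_load + 8 * temp + 3 * dew_point
              + 20 * np.sin(2 * np.pi * hour / 24) + rng.normal(0, 30, n))

      X = np.column_stack([hour, temp, dew_point, prev_load])
      model = make_pipeline(StandardScaler(),
                            MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=0))
      model.fit(X[:1500], load[:1500])
      mae = np.mean(np.abs(model.predict(X[1500:]) - load[1500:]))   # holdout accuracy check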

  1. Uncertainty in forecasts of long-run economic growth.

    PubMed

    Christensen, P; Gillingham, K; Nordhaus, W

    2018-05-22

    Forecasts of long-run economic growth are critical inputs into policy decisions being made today on the economy and the environment. Despite its importance, there is a sparse literature on long-run forecasts of economic growth and the uncertainty in such forecasts. This study presents comprehensive probabilistic long-run projections of global and regional per-capita economic growth rates, comparing estimates from an expert survey and a low-frequency econometric approach. Our primary results suggest a median 2010-2100 global growth rate in per-capita gross domestic product of 2.1% per year, with a standard deviation (SD) of 1.1 percentage points, indicating substantially higher uncertainty than is implied in existing forecasts. The larger range of growth rates implies a greater likelihood of extreme climate change outcomes than is currently assumed and has important implications for social insurance programs in the United States.
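
    To make the quoted numbers concrete, a toy Monte Carlo can compound per-capita growth drawn around the reported median of 2.1% per year with a 1.1-percentage-point SD over the 2010-2100 horizon. The normal sampling assumption below is purely illustrative and is not the paper's survey or econometric method.

      # Toy Monte Carlo: compound per-capita growth over 90 years using the reported
      # median (2.1 %/yr) and SD (1.1 percentage points). The normal sampling
      # assumption is illustrative only.
      import numpy as np

      rng = np.random.default_rng(0)
      rates = rng.normal(0.021, 0.011, size=100_000)    # one century-average growth rate per draw
      multiplier = (1.0 + rates) ** 90                  # growth factor of per-capita GDP, 2010-2100
      print(np.percentile(multiplier, [5, 50, 95]))     # wide spread in century-end outcomes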

  2. Ensemble Statistical Post-Processing of the National Air Quality Forecast Capability: Enhancing Ozone Forecasts in Baltimore, Maryland

    NASA Technical Reports Server (NTRS)

    Garner, Gregory G.; Thompson, Anne M.

    2013-01-01

    An ensemble statistical post-processor (ESP) is developed for the National Air Quality Forecast Capability (NAQFC) to address the unique challenges of forecasting surface ozone in Baltimore, MD. Air quality and meteorological data were collected from the eight monitors that constitute the Baltimore forecast region. These data were used to build the ESP using a moving-block bootstrap, regression tree models, and extreme-value theory. The ESP was evaluated using a 10-fold cross-validation to avoid evaluation with the same data used in the development process. Results indicate that the ESP is conditionally biased, likely due to slight overfitting while training the regression tree models. When viewed from the perspective of a decision-maker, the ESP provides a wealth of additional information previously not available through the NAQFC alone. The user is provided the freedom to tailor the forecast to the decision at hand by using decision-specific probability thresholds that define a forecast for an ozone exceedance. Taking advantage of the ESP, the user not only receives an increase in value over the NAQFC, but also receives value for...
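
    Of the three ingredients named above, the moving-block bootstrap is the easiest to illustrate: contiguous blocks of the training series are resampled so that short-range autocorrelation is preserved. The snippet below is a generic illustration on synthetic data, not the NAQFC post-processor code.

      # Generic moving-block bootstrap: resample contiguous blocks of a daily series
      # to preserve short-range autocorrelation (illustrative only).
      import numpy as np

      def moving_block_bootstrap(series, block_length, rng):
          n = len(series)
          starts = rng.integers(0, n - block_length + 1, size=int(np.ceil(n / block_length)))
          blocks = [series[s:s + block_length] for s in starts]
          return np.concatenate(blocks)[:n]

      rng = np.random.default_rng(7)
      ozone = 40 + np.cumsum(rng.normal(0, 2, 365))   # toy autocorrelated daily ozone series
      resampled = moving_block_bootstrap(ozone, block_length=10, rng=rng)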

  3. Data Quality Assessment Methods for the Eastern Range 915 MHz Wind Profiler Network

    NASA Technical Reports Server (NTRS)

    Lambert, Winifred C.; Taylor, Gregory E.

    1998-01-01

    The Eastern Range installed a network of five 915 MHz Doppler Radar Wind Profilers with Radio Acoustic Sounding Systems in the Cape Canaveral Air Station/Kennedy Space Center area to provide three-dimensional wind speed and direction and virtual temperature estimates in the boundary layer. The Applied Meteorology Unit, staffed by ENSCO, Inc., was tasked by the 45th Weather Squadron, the Spaceflight Meteorology Group, and the National Weather Service in Melbourne, Florida, to investigate methods that will help forecasters assess profiler network data quality when developing forecasts and warnings for critical ground, launch and landing operations. Four routines were evaluated in this study: a consensus time period check, a precipitation contamination check, a median filter, and the Weber-Wuertz (WW) algorithm. No routine was able to effectively flag suspect data when used by itself. Therefore, the routines were used in different combinations. An evaluation of all possible combinations revealed two that provided the best results. The precipitation contamination and consensus time routines were used in both combinations. The median filter or WW was used as the final routine in the combinations to flag all other suspect data points.
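
    Of the four routines, the median filter is the simplest to sketch: a value is flagged as suspect when it deviates too far from the median of its temporal neighbourhood. The window length and threshold below are illustrative choices, not those tuned for the Eastern Range network.

      # Generic median-filter quality-control check for a wind-profiler time series:
      # flag samples that deviate from the local median by more than a threshold.
      import numpy as np

      def median_filter_flags(values, window=5, threshold=5.0):
          half = window // 2
          padded = np.pad(values, half, mode="edge")
          local_median = np.array([np.median(padded[i:i + window]) for i in range(len(values))])
          return np.abs(values - local_median) > threshold

      winds = np.array([3.0, 3.5, 4.0, 25.0, 4.5, 5.0, 4.8])   # one spurious gate value (m/s)
      suspect = median_filter_flags(winds)                      # only the 25.0 sample is flagged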

  4. Cardiac catheterization laboratory inpatient forecast tool: a prospective evaluation

    PubMed Central

    Flanagan, Eleni; Siddiqui, Sauleh; Appelbaum, Jeff; Kasper, Edward K; Levin, Scott

    2016-01-01

    Objective To develop and prospectively evaluate a web-based tool that forecasts the daily bed need for admissions from the cardiac catheterization laboratory using routinely available clinical data within electronic medical records (EMRs). Methods The forecast model was derived using a 13-month retrospective cohort of 6384 catheterization patients. Predictor variables such as demographics, scheduled procedures, and clinical indicators mined from free-text notes were input to a multivariable logistic regression model that predicted the probability of inpatient admission. The model was embedded into a web-based application connected to the local EMR system and used to support bed management decisions. After implementation, the tool was prospectively evaluated for accuracy on a 13-month test cohort of 7029 catheterization patients. Results The forecast model predicted admission with an area under the receiver operating characteristic curve of 0.722. Daily aggregate forecasts were accurate to within one bed for 70.3% of days and within three beds for 97.5% of days during the prospective evaluation period. The web-based application housing the forecast model was used by cardiology providers in practice to estimate daily admissions from the catheterization laboratory. Discussion The forecast model identified older age, male gender, invasive procedures, coronary artery bypass grafts, and a history of congestive heart failure as qualities indicating a patient was at increased risk for admission. Diagnostic procedures and less acute clinical indicators decreased patients’ risk of admission. Despite the site-specific limitations of the model, these findings were supported by the literature. Conclusion Data-driven predictive analytics may be used to accurately forecast daily demand for inpatient beds for cardiac catheterization patients. Connecting these analytics to EMR data sources has the potential to provide advanced operational decision support. PMID:26342217
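
    The core of the tool, a multivariable logistic regression whose daily bed forecast is the sum of individual admission probabilities, can be sketched as follows. The features and synthetic data are hypothetical stand-ins rather than the study's EMR-derived predictor set.

      # Sketch of the forecast idea: a logistic regression predicts each catheterization
      # patient's admission probability, and the daily bed-need forecast is the sum of
      # those probabilities. Feature names are hypothetical stand-ins.
      import numpy as np
      from sklearn.linear_model import LogisticRegression

      rng = np.random.default_rng(3)
      n = 5000
      age = rng.normal(65, 12, n)
      invasive = rng.integers(0, 2, n)          # 1 = invasive procedure scheduled
      chf_history = rng.integers(0, 2, n)       # 1 = history of congestive heart failure
      logit = -4 + 0.03 * age + 1.2 * invasive + 0.8 * chf_history
      admitted = rng.random(n) < 1 / (1 + np.exp(-logit))

      X = np.column_stack([age, invasive, chf_history])
      model = LogisticRegression().fit(X, admitted)
      todays_patients = X[:40]                                       # toy schedule for one day
      expected_beds = model.predict_proba(todays_patients)[:, 1].sum()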

  5. Multi-RCM ensemble downscaling of global seasonal forecasts (MRED)

    NASA Astrophysics Data System (ADS)

    Arritt, R.

    2009-04-01

    Regional climate models (RCMs) have long been used to downscale global climate simulations. In contrast, the ability of RCMs to downscale seasonal climate forecasts has received little attention. The Multi-RCM Ensemble Downscaling (MRED) project was recently initiated to address the question: "Does dynamical downscaling using RCMs provide additional useful information for seasonal forecasts made by global models?" MRED is using a suite of RCMs to downscale seasonal forecasts produced by the National Centers for Environmental Prediction (NCEP) Climate Forecast System (CFS) seasonal forecast system and the NASA GEOS5 system. The initial focus is on wintertime forecasts in order to evaluate topographic forcing, snowmelt, and the usefulness of higher resolution for near-surface fields influenced by high resolution orography. Each RCM covers the conterminous U.S. at approximately 32 km resolution, comparable to the scale of the North American Regional Reanalysis (NARR) which will be used to evaluate the models. The forecast ensemble for each RCM comprises 15 members over a period of 22+ years (from 1982 to 2003+) for the forecast period 1 December - 30 April. Each RCM will create a 15-member lagged ensemble by starting on different dates in the preceding November. This results in a 120-member ensemble for each projection (8 RCMs by 15 members per RCM). The RCMs will be continually updated at their lateral boundaries using 6-hourly output from CFS or GEOS5. Hydrometeorological output will be produced in a standard netCDF-based format for a common analysis grid, which simplifies both model intercomparison and the generation of ensembles. MRED will compare individual RCM and global forecasts as well as ensemble mean precipitation and temperature forecasts, which are currently being used to drive macroscale land surface models (LSMs). Metrics of ensemble spread will also be evaluated. Extensive process-oriented analysis will be performed to link improvements in downscaled forecast skill to regional forcings and physical mechanisms. Our overarching goal is to determine what additional skill can be provided by a community ensemble of high resolution regional models, which we believe will define a strategy for more skillful and useful regional seasonal climate forecasts.

  6. Evaluating Rapid Models for High-Throughput Exposure Forecasting (SOT)

    EPA Science Inventory

    High throughput exposure screening models can provide quantitative predictions for thousands of chemicals; however these predictions must be systematically evaluated for predictive ability. Without the capability to make quantitative, albeit uncertain, forecasts of exposure, the ...

  7. Understanding Farmers’ Forecast Use from Their Beliefs, Values, Social Norms, and Perceived Obstacles

    NASA Astrophysics Data System (ADS)

    Hu, Qi; Pytlik Zillig, Lisa M.; Lynne, Gary D.; Tomkins, Alan J.; Waltman, William J.; Hayes, Michael J.; Hubbard, Kenneth G.; Artikov, Ikrom; Hoffman, Stacey J.; Wilhite, Donald A.

    2006-09-01

    Although the accuracy of weather and climate forecasts is continuously improving and new information retrieved from climate data is adding to the understanding of climate variation, use of the forecasts and climate information by farmers in farming decisions has changed little. This lack of change may result from knowledge barriers and psychological, social, and economic factors that undermine farmer motivation to use forecasts and climate information. According to the theory of planned behavior (TPB), the motivation to use forecasts may arise from personal attitudes, social norms, and perceived control or ability to use forecasts in specific decisions. These attributes are examined using data from a survey designed around the TPB and conducted among farming communities in the region of eastern Nebraska and the western U.S. Corn Belt. There were three major findings: 1) the utility and value of the forecasts for farming decisions as perceived by farmers are, on average, around 3.0 on a 0-7 scale, indicating much room to improve attitudes toward the forecast value. 2) The use of forecasts by farmers to influence decisions is likely affected by several social groups that can provide “expert viewpoints” on forecast use. 3) A major obstacle, next to forecast accuracy, is the perceived identity and reliability of the forecast makers. Given the rapidly increasing number of forecasts in this growing service business, the ambiguous identity of forecast providers may have left farmers confused and may have prevented them from developing both trust in forecasts and skills to use them. These findings shed light on productive avenues for increasing the influence of forecasts, which may lead to greater farming productivity. In addition, this study establishes a set of reference points that can be used for comparisons with future studies to quantify changes in forecast use and influence.

  8. New Aspects of Probabilistic Forecast Verification Using Information Theory

    NASA Astrophysics Data System (ADS)

    Tödter, Julian; Ahrens, Bodo

    2013-04-01

    This work deals with information-theoretical methods in probabilistic forecast verification, particularly concerning ensemble forecasts. Recent findings concerning the "Ignorance Score" are briefly reviewed, and then a consistent generalization to continuous forecasts is motivated. For ensemble-generated forecasts, the presented measures can be calculated exactly. The Brier Score (BS) and its generalizations to the multi-categorical Ranked Probability Score (RPS) and to the Continuous Ranked Probability Score (CRPS) are prominent verification measures for probabilistic forecasts. Particularly, their decompositions into measures quantifying the reliability, resolution and uncertainty of the forecasts are attractive. Information theory sets up a natural framework for forecast verification. Recently, it has been shown that the BS is a second-order approximation of the information-based Ignorance Score (IGN), which also contains easily interpretable components and can also be generalized to a ranked version (RIGN). Here, the IGN, its generalizations and decompositions are systematically discussed in analogy to the variants of the BS. Additionally, a Continuous Ranked IGN (CRIGN) is introduced in analogy to the CRPS. The useful properties of the conceptually appealing CRIGN are illustrated, together with an algorithm to evaluate its components (reliability, resolution, and uncertainty) for ensemble-generated forecasts. This algorithm can also be used to calculate the decomposition of the more traditional CRPS exactly. The applicability of the "new" measures is demonstrated in a small evaluation study of ensemble-based precipitation forecasts.
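
    For a binary event, the two scores at the centre of this discussion reduce to a few lines each: the Brier Score averages squared probability errors, while the Ignorance Score averages the negative log probability assigned to what actually happened. The snippet below is a generic illustration, not the authors' evaluation code.

      # Brier Score and Ignorance Score for binary probabilistic forecasts
      # (generic textbook definitions of the scores discussed above).
      import numpy as np

      def brier_score(p, o):
          return np.mean((p - o) ** 2)

      def ignorance_score(p, o, eps=1e-12):
          # negative log2 probability assigned to the outcome that occurred
          p_outcome = np.where(o == 1, p, 1.0 - p)
          return -np.mean(np.log2(np.clip(p_outcome, eps, 1.0)))

      p = np.array([0.9, 0.7, 0.2, 0.1, 0.5])   # forecast probabilities of precipitation
      o = np.array([1, 1, 0, 0, 1])             # observed occurrence (1 = event happened)
      bs, ign = brier_score(p, o), ignorance_score(p, o)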

  9. Application of satellite-based rainfall and medium range meteorological forecast in real-time flood forecasting in the Mahanadi River basin

    NASA Astrophysics Data System (ADS)

    Nanda, Trushnamayee; Beria, Harsh; Sahoo, Bhabagrahi; Chatterjee, Chandranath

    2016-04-01

    Increasing frequency of hydrologic extremes in a warming climate calls for the development of reliable flood forecasting systems. The unavailability of meteorological parameters in real time, especially in the developing parts of the world, makes it a challenging task to accurately predict floods, even at short lead times. The satellite-based Tropical Rainfall Measuring Mission (TRMM) provides an alternative to the real-time precipitation data scarcity. Moreover, rainfall forecasts by numerical weather prediction models, such as the medium-range forecasts issued by the European Centre for Medium-Range Weather Forecasts (ECMWF), are promising for multistep-ahead flow forecasts. We systematically evaluate these rainfall products over a large catchment in Eastern India (Mahanadi River basin). We found spatially coherent trends, with both the real-time TRMM rainfall and ECMWF rainfall forecast products overestimating low rainfall events and underestimating high rainfall events. However, no significant bias was found for the medium rainfall events. Another key finding was that these rainfall products captured the phase of the storms pretty well, but suffered from consistent under-prediction. The utility of the real-time TRMM and ECMWF forecast products is evaluated by rainfall-runoff modeling using different artificial neural network (ANN)-based models up to 3 days ahead. Keywords: TRMM; ECMWF; forecast; ANN; rainfall-runoff modeling

  10. Evaluation of precipitation forecasts from 3D-Var and hybrid GSI-based system during Indian summer monsoon 2015

    NASA Astrophysics Data System (ADS)

    Singh, Sanjeev Kumar; Prasad, V. S.

    2018-02-01

    This paper presents a systematic investigation of medium-range rainfall forecasts from two versions of the National Centre for Medium Range Weather Forecasting (NCMRWF) Global Forecast System, based on a three-dimensional variational (3D-Var) and a hybrid analysis system, namely NGFS and HNGFS, respectively, during the Indian summer monsoon (June-September) 2015. The NGFS uses the gridpoint statistical interpolation (GSI) 3D-Var data assimilation system, whereas HNGFS uses a hybrid 3D ensemble-variational scheme. The analysis includes the evaluation of rainfall fields and comparisons of rainfall using statistical scores such as mean precipitation, bias, correlation coefficient, root mean square error, and forecast improvement factor. In addition to these, categorical scores such as the Peirce skill score and bias score are also computed to describe particular aspects of forecast performance. The comparison of mean precipitation reveals that both versions of the model produced similar large-scale features of Indian summer monsoon rainfall for day-1 through day-5 forecasts. The inclusion of fully flow-dependent background error covariance significantly reduced the wet biases in HNGFS over the Indian Ocean. The forecast improvement factor and Peirce skill score for HNGFS were also found to be better than those for NGFS for day-1 through day-5 forecasts.
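
    The categorical scores used in this comparison reduce to simple contingency-table ratios; a minimal helper is sketched below using the generic textbook definitions, not the NCMRWF verification code.

      # Contingency-table categorical scores referred to above (generic definitions).
      def categorical_scores(hits, misses, false_alarms, correct_negatives):
          pod = hits / (hits + misses)                               # probability of detection
          pofd = false_alarms / (false_alarms + correct_negatives)   # probability of false detection
          peirce = pod - pofd                                        # Peirce (Hanssen-Kuipers) skill score
          bias = (hits + false_alarms) / (hits + misses)             # frequency bias score
          return peirce, bias

      # toy counts for a rain/no-rain threshold
      peirce, bias = categorical_scores(hits=120, misses=30, false_alarms=40, correct_negatives=810)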

  11. Spatio-temporal pattern clustering for skill assessment of the Korea Operational Oceanographic System

    NASA Astrophysics Data System (ADS)

    Kim, J.; Park, K.

    2016-12-01

    In order to evaluate the performance of operational forecast models in the Korea Operational Oceanographic System (KOOS), developed by the Korea Institute of Ocean Science and Technology (KIOST), a skill assessment (SA) tool has been developed. It provides multiple skill metrics, including not only correlation and error scores obtained by comparing predictions with observations, but also pattern clustering across numerical models, satellite data, and observations. The KOOS produces 72-hour forecasts of atmospheric and hydrodynamic variables (wind, pressure, current, tide, wave, temperature, and salinity) every 12 hours from numerical models such as WRF, ROMS, MOM5, WW-III, and SWAN, and the SA is conducted to evaluate these forecasts; several models, including WRF, ROMS, MOM5, MOHID, and WW-III, are run operationally. Quantitative assessment of operational ocean forecast models is essential to provide accurate forecast information to the general public and to support ocean-related decision making. In this work, we propose a pattern clustering method that uses machine learning and GIS-based spatial analytics to evaluate the spatial distributions of numerical models against spatial observation data such as satellite and HF radar measurements. For the clustering, we use 10- to 15-year-long reanalysis data computed by the KOOS, ECMWF, and HYCOM to build best-matching clusters classified by physical meaning and temporal variation, and we then compare these clusters with the forecast data. Moreover, for evaluating currents, we develop a method to extract the dominant flow and apply it to hydrodynamic model output and HF radar sea surface current data. The pattern clustering method allows a more accurate and effective assessment of ocean forecast model performance because it compares not only specific observation positions determined by station locations but also the spatio-temporal distribution over the whole model domain. We believe the proposed method will be very useful for examining and evaluating large amounts of numerical modeling data as well as satellite data.

  12. Recent Achievements of the Collaboratory for the Study of Earthquake Predictability

    NASA Astrophysics Data System (ADS)

    Jackson, D. D.; Liukis, M.; Werner, M. J.; Schorlemmer, D.; Yu, J.; Maechling, P. J.; Zechar, J. D.; Jordan, T. H.

    2015-12-01

    Maria Liukis, SCEC, USC; Maximilian Werner, University of Bristol; Danijel Schorlemmer, GFZ Potsdam; John Yu, SCEC, USC; Philip Maechling, SCEC, USC; Jeremy Zechar, Swiss Seismological Service, ETH; Thomas H. Jordan, SCEC, USC, and the CSEP Working Group The Collaboratory for the Study of Earthquake Predictability (CSEP) supports a global program to conduct prospective earthquake forecasting experiments. CSEP testing centers are now operational in California, New Zealand, Japan, China, and Europe with 435 models under evaluation. The California testing center, operated by SCEC, has been operational since Sept 1, 2007, and currently hosts 30-minute, 1-day, 3-month, 1-year and 5-year forecasts, both alarm-based and probabilistic, for California, the Western Pacific, and worldwide. We have reduced testing latency, implemented prototype evaluation of M8 forecasts, and are currently developing formats and procedures to evaluate externally-hosted forecasts and predictions. These efforts are related to CSEP support of the USGS program in operational earthquake forecasting and a DHS project to register and test external forecast procedures from experts outside seismology. A retrospective experiment for the 2010-2012 Canterbury earthquake sequence has been completed, and the results indicate that some physics-based and hybrid models outperform purely statistical (e.g., ETAS) models. The experiment also demonstrates the power of the CSEP cyberinfrastructure for retrospective testing. Our current development includes evaluation strategies that increase computational efficiency for high-resolution global experiments, such as the evaluation of the Global Earthquake Activity Rate (GEAR) model. We describe the open-source CSEP software that is available to researchers as they develop their forecast models (http://northridge.usc.edu/trac/csep/wiki/MiniCSEP). We also discuss applications of CSEP infrastructure to geodetic transient detection and how CSEP procedures are being adapted to ground motion prediction experiments.

  13. Multi-RCM ensemble downscaling of global seasonal forecasts (MRED)

    NASA Astrophysics Data System (ADS)

    Arritt, R. W.

    2008-12-01

    The Multi-RCM Ensemble Downscaling (MRED) project was recently initiated to address the question: "Can regional climate models provide additional useful information from global seasonal forecasts?" MRED will use a suite of regional climate models to downscale seasonal forecasts produced by the new National Centers for Environmental Prediction (NCEP) Climate Forecast System (CFS) seasonal forecast system and the NASA GEOS5 system. The initial focus will be on wintertime forecasts in order to evaluate topographic forcing, snowmelt, and the potential usefulness of higher resolution, especially for near-surface fields influenced by high resolution orography. Each regional model will cover the conterminous US (CONUS) at approximately 32 km resolution, and will perform an ensemble of 15 runs for each year 1982-2003 for the forecast period 1 December - 30 April. MRED will compare individual regional and global forecasts as well as ensemble mean precipitation and temperature forecasts, which are currently being used to drive macroscale land surface models (LSMs), along with wind, humidity, radiation, and turbulent heat fluxes, which are important for more advanced coupled macro-scale hydrologic models. Metrics of ensemble spread will also be evaluated. Extensive analysis will be performed to link improvements in downscaled forecast skill to regional forcings and physical mechanisms. Our overarching goal is to determine what additional skill can be provided by a community ensemble of high resolution regional models, which we believe will eventually define a strategy for more skillful and useful regional seasonal climate forecasts.

  14. Forecasting conditional climate-change using a hybrid approach

    USGS Publications Warehouse

    Esfahani, Akbar Akbari; Friedel, Michael J.

    2014-01-01

    A novel approach is proposed to forecast the likelihood of climate-change across spatial landscape gradients. This hybrid approach involves reconstructing past precipitation and temperature using the self-organizing map technique; determining quantile trends in the climate-change variables by quantile regression modeling; and computing conditional forecasts of climate-change variables based on self-similarity in quantile trends using the fractionally differenced auto-regressive integrated moving average technique. The proposed modeling approach is applied to states (Arizona, California, Colorado, Nevada, New Mexico, and Utah) in the southwestern U.S., where conditional forecasts of climate-change variables are evaluated against recent (2012) observations, evaluated at a future time period (2030), and evaluated as future trends (2009–2059). These results have broad economic, political, and social implications because they quantify uncertainty in climate-change forecasts affecting various sectors of society. Another benefit of the proposed hybrid approach is that it can be extended to any spatiotemporal scale, provided that self-similarity exists.

  15. Experiment evaluates ocean models and data assimiliation in the Gulf Stream

    NASA Astrophysics Data System (ADS)

    Willems, Robert C.; Glenn, S. M.; Crowley, M. F.; Malanotte-Rizzoli, P.; Young, R. E.; Ezer, T.; Mellor, G. L.; Arango, H. G.; Robinson, A. R.; Lai, C.-C. A.

    Using data sets of known quality as the basis for comparison, a recent experiment explored the Gulf Stream Region at 27°-47°N and 80°-50°W to assess the nowcast/forecast capability of specific ocean models and the impact of data assimilation. Scientists from five universities and the Naval Research Laboratory/Stennis Space Center participated in the Data Assimilation and Model Evaluation Experiment (DAMEE-GSR). DAMEE-GSR was based on case studies, each successively more complex, and was divided into three phases using case studies (data) from 1987 and 1988. Phase I evaluated models' forecast capability using common initial conditions and comparing model forecast fields with observational data at forecast time over a 2-week period. Phase II added data assimilation and assessed its impact on forecast capability, using the same case studies as in Phase I, and Phase III added a 2-month case study overlapping some periods in Phases I and II.

  16. GloFAS-Seasonal: Operational Seasonal Ensemble River Flow Forecasts at the Global Scale

    NASA Astrophysics Data System (ADS)

    Emerton, Rebecca; Zsoter, Ervin; Smith, Paul; Salamon, Peter

    2017-04-01

    Seasonal hydrological forecasting has potential benefits for many sectors, including agriculture, water resources management and humanitarian aid. At present, no global scale seasonal hydrological forecasting system exists operationally; although smaller scale systems have begun to emerge around the globe over the past decade, a system providing consistent global scale seasonal forecasts would be of great benefit in regions where no other forecasting system exists, and to organisations operating at the global scale, such as disaster relief organisations. We present here a new operational global ensemble seasonal hydrological forecast, currently under development at ECMWF as part of the Global Flood Awareness System (GloFAS). The proposed system, which builds upon the current version of GloFAS, takes the long-range forecasts from the ECMWF System4 ensemble seasonal forecast system (which incorporates the HTESSEL land surface scheme) and uses the resulting runoff as input to the Lisflood routing model, producing a seasonal river flow forecast out to 4 months lead time, for the global river network. The seasonal forecasts will be evaluated using the global river discharge reanalysis, and observations where available, to determine the potential value of the forecasts across the globe. The seasonal forecasts will be presented as a new layer in the GloFAS interface, which will provide a global map of river catchments, indicating whether the catchment-averaged discharge forecast is showing abnormally high or low flows during the 4-month lead time. Each catchment will display the corresponding forecast as an ensemble hydrograph of the weekly-averaged discharge forecast out to 4 months, with percentile thresholds shown for comparison with the discharge climatology. The forecast visualisation is based on a combination of the current medium-range GloFAS forecasts and the operational EFAS (European Flood Awareness System) seasonal outlook, and aims to effectively communicate the nature of a seasonal outlook while providing useful information to users and partners. We demonstrate the first version of an operational GloFAS seasonal outlook, outlining the model set-up and presenting a first look at the seasonal forecasts that will be displayed in the GloFAS interface, and discuss the initial results of the forecast evaluation.

  17. Climate, weather, space weather: model development in an operational context

    NASA Astrophysics Data System (ADS)

    Folini, Doris

    2018-05-01

    Aspects of operational modeling for climate, weather, and space weather forecasts are contrasted, with a particular focus on the somewhat conflicting demands of "operational stability" versus "dynamic development" of the involved models. Some common key elements are identified, indicating potential for fruitful exchange across communities. Operational model development is compelling, driven by factors that broadly fall into four categories: model skill, basic physics, advances in computer architecture, and new aspects to be covered, from customer needs through physics to observational data. Evaluation of model skill as part of the operational chain goes beyond an automated skill score. Permanent interaction between "pure research" and "operational forecast" people is beneficial to both sides. This includes joint model development projects, although ultimate responsibility for the operational code remains with the forecast provider. The pace of model development reflects operational lead times. The points are illustrated with selected examples, many of which reflect the author's background and personal contacts, notably with the Swiss Weather Service and the Max Planck Institute for Meteorology, Hamburg, Germany. In view of current and future challenges, large collaborations covering a range of expertise are a must - within and across climate, weather, and space weather. To profit from and cope with the rapid progress of computer architectures, supercomputing centers must form part of the team.

  18. Forecasts of non-Gaussian parameter spaces using Box-Cox transformations

    NASA Astrophysics Data System (ADS)

    Joachimi, B.; Taylor, A. N.

    2011-09-01

    Forecasts of statistical constraints on model parameters using the Fisher matrix abound in many fields of astrophysics. The Fisher matrix formalism involves the assumption of Gaussianity in parameter space and hence fails to predict complex features of posterior probability distributions. Combining the standard Fisher matrix with Box-Cox transformations, we propose a novel method that accurately predicts arbitrary posterior shapes. The Box-Cox transformations are applied to parameter space to render it approximately multivariate Gaussian, performing the Fisher matrix calculation on the transformed parameters. We demonstrate that, after the Box-Cox parameters have been determined from an initial likelihood evaluation, the method correctly predicts changes in the posterior when varying various parameters of the experimental setup and the data analysis, with marginally higher computational cost than a standard Fisher matrix calculation. We apply the Box-Cox-Fisher formalism to forecast cosmological parameter constraints by future weak gravitational lensing surveys. The characteristic non-linear degeneracy between the matter density parameter and the normalization of matter density fluctuations is reproduced for several cases, and the capability of breaking this degeneracy with weak-lensing three-point statistics is investigated. Possible applications of Box-Cox transformations of posterior distributions are discussed, including the prospects for performing statistical data analysis steps in the transformed Gaussianized parameter space.
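
    The Gaussianization step that underlies the method can be illustrated in one dimension with scipy's Box-Cox utilities: skewed samples are transformed so that a Gaussian (Fisher-like) description becomes adequate. This is a toy example, not the paper's multivariate lensing analysis.

      # Toy illustration of the Gaussianization step: Box-Cox-transform skewed samples
      # so that a Gaussian description becomes adequate (one-dimensional example only).
      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(0)
      samples = rng.lognormal(mean=0.0, sigma=0.6, size=10_000)   # skewed "posterior" samples
      transformed, lam = stats.boxcox(samples)                    # lam is the fitted Box-Cox parameter
      skew_before, skew_after = stats.skew(samples), stats.skew(transformed)
      mu, sigma = transformed.mean(), transformed.std()           # Gaussian fit in transformed space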

  19. Recent Achievements of the Collaboratory for the Study of Earthquake Predictability

    NASA Astrophysics Data System (ADS)

    Jordan, T. H.; Liukis, M.; Werner, M. J.; Schorlemmer, D.; Yu, J.; Maechling, P. J.; Jackson, D. D.; Rhoades, D. A.; Zechar, J. D.; Marzocchi, W.

    2016-12-01

    The Collaboratory for the Study of Earthquake Predictability (CSEP) supports a global program to conduct prospective earthquake forecasting experiments. CSEP testing centers are now operational in California, New Zealand, Japan, China, and Europe with 442 models under evaluation. The California testing center, started by SCEC on Sept 1, 2007, currently hosts 30-minute, 1-day, 3-month, 1-year and 5-year forecasts, both alarm-based and probabilistic, for California, the Western Pacific, and worldwide. Our tests are now based on the hypocentral locations and magnitudes of cataloged earthquakes, but we plan to test focal mechanisms, seismic hazard models, ground motion forecasts, and finite rupture forecasts as well. We have increased computational efficiency for high-resolution global experiments, such as the evaluation of the Global Earthquake Activity Rate (GEAR) model, introduced Bayesian ensemble models, and implemented support for non-Poissonian simulation-based forecast models. We are currently developing formats and procedures to evaluate externally hosted forecasts and predictions. CSEP supports the USGS program in operational earthquake forecasting and a DHS project to register and test external forecast procedures from experts outside seismology. We found that earthquakes as small as magnitude 2.5 provide important information on subsequent earthquakes larger than magnitude 5. A retrospective experiment for the 2010-2012 Canterbury earthquake sequence showed that some physics-based and hybrid models outperform catalog-based (e.g., ETAS) models. This experiment also demonstrates the ability of the CSEP infrastructure to support retrospective forecast testing. Current CSEP development activities include adoption of the Comprehensive Earthquake Catalog (ComCat) as an authorized data source, retrospective testing of simulation-based forecasts, and support for additive ensemble methods. We describe the open-source CSEP software that is available to researchers as they develop their forecast models. We also discuss how CSEP procedures are being adapted to intensity and ground motion prediction experiments as well as hazard model testing.

  20. Precipitable water vapour forecasting: a tool for optimizing IR observations at Roque de los Muchachos Observatory

    NASA Astrophysics Data System (ADS)

    Pérez-Jordán, G.; Castro-Almazán, J. A.; Muñoz-Tuñón, C.

    2018-07-01

    We validate the Weather Research and Forecasting (WRF) model for precipitable water vapour (PWV) forecasting as a fully operational tool for optimizing astronomical infrared observations at Roque de los Muchachos Observatory (ORM). For the model validation, we used GNSS-based (Global Navigation Satellite System) data from the PWV monitor located at the ORM. We have run WRF every 24 h for nearly two months, with a horizon of 48 h (hourly forecasts), from 2016 January 11 to March 04. These runs represent 1296 hourly forecast points. The validation is carried out using different approaches: performance as a function of the forecast range, time horizon accuracy, performance as a function of the PWV value, and performance of the operational WRF time series with 24- and 48-h horizons. Excellent agreement was found between the model forecasts and observations, with R = 0.951 and 0.904 for the 24- and 48-h forecast time series, respectively. The 48-h forecast was further improved by correcting a time lag of 2 h found in the predictions. The final errors, taking into account all the uncertainties involved, are 1.75 mm for the 24-h forecasts and 1.99 mm for 48 h. We found linear trends in both the correlation and root-mean-square error of the residuals (measurements - forecasts) as a function of the forecast range within the horizons analysed (up to 48 h). In summary, the WRF performance is excellent and accurate, thus allowing it to be implemented as an operational tool at the ORM.

  1. Precipitable water vapour forecasting: a tool for optimizing IR observations at Roque de los Muchachos Observatory.

    NASA Astrophysics Data System (ADS)

    Pérez-Jordán, G.; Castro-Almazán, J. A.; Muñoz-Tuñón, C.

    2018-04-01

    We validate the Weather Research and Forecasting (WRF) model for precipitable water vapour (PWV) forecasting as a fully operational tool for optimizing astronomical infrared (IR) observations at Roque de los Muchachos Observatory (ORM). For the model validation we used GNSS-based (Global Navigation Satellite System) data from the PWV monitor located at the ORM. We have run WRF every 24 h for nearly two months, with a horizon of 48 hours (hourly forecasts), from 2016 January 11 to 2016 March 4. These runs represent 1296 hourly forecast points. The validation is carried out using different approaches: performance as a function of the forecast range, time horizon accuracy, performance as a function of the PWV value, and performance of the operational WRF time series with 24- and 48-hour horizons. Excellent agreement was found between the model forecasts and observations, with R = 0.951 and R = 0.904 for the 24- and 48-h forecast time series, respectively. The 48-h forecast was further improved by correcting a time lag of 2 h found in the predictions. The final errors, taking into account all the uncertainties involved, are 1.75 mm for the 24-h forecasts and 1.99 mm for 48 h. We found linear trends in both the correlation and RMSE of the residuals (measurements - forecasts) as a function of the forecast range within the horizons analysed (up to 48 h). In summary, the WRF performance is excellent and accurate, thus allowing it to be implemented as an operational tool at the ORM.

  2. AN OPERATIONAL EVALUATION OF THE ETA-CMAQ AIR QUALITY FORECAST MODEL

    EPA Science Inventory

    The National Oceanic and Atmospheric Administration (NOAA), in collaboration with the Environmental Protection Agency (EPA), is developing an Air Quality Forecasting Program that will eventually result in an operational Nationwide Air Quality Forecasting System. The initial pha...

  3. Application of Neural Network Optimized by Mind Evolutionary Computation in Building Energy Prediction

    NASA Astrophysics Data System (ADS)

    Song, Chen; Zhong-Cheng, Wu; Hong, Lv

    2018-03-01

    Building energy forecasting plays an important role in energy management and planning. Using a mind evolutionary algorithm to find the optimal network weights and thresholds, and thereby optimize a BP neural network, can overcome the tendency of the BP neural network to fall into a local minimum. The optimized network is used for a time series prediction and for a same-month forecast, yielding two predicted values. These two predicted values are then fed into a neural network to obtain the final forecast value. The effectiveness of the method was verified by experiments with the energy consumption of three buildings in Hefei.

  4. Trends of jet fuel demand and properties

    NASA Technical Reports Server (NTRS)

    Friedman, R.

    1984-01-01

    Petroleum industry forecasts predict an increasing demand for jet fuels, a decrease in the gasoline-to-distillate (heavier fuel) demand ratio, and a greater influx of poorer quality petroleum in the next two to three decades. These projections are important for refinery product analyses. The forecasts have not been accurate, however, in predicting the recent, short-term fluctuations in jet fuel and competing product demand. Changes in petroleum quality can be assessed, in part, by a review of jet fuel property inspections. Surveys covering the last 10 years show that average jet fuel freezing points, aromatic contents, and smoke points are trending toward their specification limits.

  5. Tsunami field survey in French Polynesia of the 2015 Chilean earthquake Mw = 8.2 and what we learned.

    NASA Astrophysics Data System (ADS)

    Jamelot, Anthony; Reymond, Dominique; Savigny, Jonathan; Hyvernaud, Olivier

    2016-04-01

    The tsunami generated by the earthquake of magnitude Mw = 8.2 near the coast of central Chile on 16 September 2015 was observed on 7 tide gauges distributed over the five archipelagoes composing French Polynesia, a territory as large as Europe. We summarize the observations of the tsunami and the field survey conducted in Tahiti (Society Islands) and Hiva-Oa (Marquesas Islands) to evaluate the preliminary tsunami forecast tool (MERIT) and the detailed tsunami forecast tool (COASTER) of the French Polynesian Tsunami Warning Center. The preliminary tool forecasted a maximal tsunami height between 0.5 m and 2.3 m over the Marquesas Islands, but only the island of Hiva-Oa had a tsunami forecast greater than 1 m, especially in Tahauku Bay, well known for its local response due to its resonance properties. In Tahauku Bay, the tide gauge located at the entrance of the bay recorded a maximal tsunami height above mean sea level of about 1.7 m; at the bottom of the bay we measured a run-up of about 2.8 m at 388 m inland from the shoreline in the river bed, and a run-up of 2.5 m located 155 m inland. The multi-grid simulation over Tahiti was done one hour after the origin time of the earthquake and gave a very localized tsunami impact on the north shore. Our forecast indicated an inundation of about 10 m inland, which led civil authorities to evacuate 6 houses. It was the first operational use of this new fine grid covering the north part of Tahiti, which is not protected by a coral reef, so we paid close attention to feedback from the alert, which confirmed the forecast that the maximal height would arrive 1 hour after the first arrival. The tsunami warning system forecasts strong impacts as well as low impacts well, as long as we have an early, robust description of the seismic parameters and fine grids of about 10 m spatial resolution to simulate tsunami impact. As of January 2016, we are able to forecast tsunami heights for 72 points located over 35 islands of French Polynesia.

  6. Neural network versus classical time series forecasting models

    NASA Astrophysics Data System (ADS)

    Nor, Maria Elena; Safuan, Hamizah Mohd; Shab, Noorzehan Fazahiyah Md; Asrul, Mohd; Abdullah, Affendi; Mohamad, Nurul Asmaa Izzati; Lee, Muhammad Hisyam

    2017-05-01

    Artificial neural networks (ANNs) have an advantage in time series forecasting because they have the potential to solve complex forecasting problems: an ANN is a data-driven approach that can be trained to map past values of a time series. In this study, the forecast performance of a neural network and a classical time series forecasting method, namely the seasonal autoregressive integrated moving average (SARIMA) model, was compared using gold price data. Moreover, the effect of different data preprocessing on the forecast performance of the neural network was examined. The forecast accuracy was evaluated using the mean absolute deviation, root mean square error and mean absolute percentage error. It was found that the ANN produced the most accurate forecast when the Box-Cox transformation was used as data preprocessing.
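
    The three accuracy measures named above are simple to compute; a minimal sketch follows. The gold-price data and the fitted ANN/SARIMA models are not reproduced here, and the Box-Cox step noted in the comment is scipy's standard transform, shown only as an illustration of the preprocessing.

      # Sketch of the error metrics used to compare the forecasts.
      import numpy as np

      def forecast_errors(actual, predicted):
          actual = np.asarray(actual, float)
          predicted = np.asarray(predicted, float)
          e = actual - predicted
          mad = np.mean(np.abs(e))                    # mean absolute deviation
          rmse = np.sqrt(np.mean(e ** 2))             # root mean square error
          mape = 100.0 * np.mean(np.abs(e / actual))  # mean absolute percentage error
          return {"MAD": mad, "RMSE": rmse, "MAPE": mape}

      # Preprocessing example: transformed, lam = scipy.stats.boxcox(series)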

  7. Evaluation of NOAA's High Resolution Rapid Refresh (HRRR), 12 km North America Model (NAM12) and 4 km North America Model (NAM4) hub-height wind speed forecasts

    NASA Astrophysics Data System (ADS)

    Pendergrass, W.; Vogel, C. A.

    2013-12-01

    As an outcome of discussions between Duke Energy Generation and NOAA/ARL following the 2009 AMS Summer Community Meeting in Norman, Oklahoma, ARL and Duke Energy Generation (Duke) signed a Cooperative Research and Development Agreement (CRADA) which allows NOAA to conduct atmospheric boundary layer (ABL) research using Duke renewable energy sites as research testbeds. One aspect of this research has been the evaluation of forecast hub-height winds from three NOAA atmospheric models. Forecasts of 10 m (surface) and 80 m (hub-height) wind speeds from (1) NOAA/GSD's High Resolution Rapid Refresh (HRRR) model, (2) NOAA/NCEP's 12 km North America Model (NAM12) and (3) NOAA/NCEP's 4 km high-resolution North America Model (NAM4) were evaluated against 18 months of surface-layer wind observations collected at the joint NOAA/Duke Energy research station located at Duke Energy's West Texas Ocotillo wind farm over the period April 2011 through October 2012. HRRR, NAM12 and NAM4 10 m wind speed forecasts were compared with 10 m level wind speed observations measured on the NOAA/ATDD flux tower. Hub-height (80 m) HRRR, NAM12 and NAM4 forecast wind speeds were evaluated against the 80 m operational PMM27-28 meteorological tower supporting the Ocotillo wind farm. For each HRRR update, eight forecast hours (hours 01, 02, 03, 05, 07, 10, 12, 15) plus the initialization hour (hour 00) were evaluated. For the NAM12 and NAM4 models, forecast hours 00-24 from the 06z initialization were evaluated. Performance measures, or skill scores, based on the 50% cumulative probability (median) of the absolute error were calculated for each forecast hour. HRRR forecast hour 01 provided the best skill score, with an absolute wind speed error within 0.8 m/s of the observed 10 m wind speed and 1.25 m/s for hub-height wind speed at the designated 50% cumulative probability. For both the NAM4 and NAM12 models, skill scores were diurnal, with comparable best scores observed during the day of 0.7 m/s for the 10 m wind speed and 1.1 m/s for hub-height wind speed at the designated 50% cumulative probability level.
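
    The skill measure used above, the absolute error at the 50% cumulative probability level, is simply the median absolute error per forecast hour. A minimal sketch with hypothetical error arrays rather than the Ocotillo data:

      # Median absolute wind-speed error for each forecast hour.
      import numpy as np

      def median_abs_error_by_hour(errors_by_hour):
          # errors_by_hour: dict of forecast hour -> array of (forecast - observed)
          return {hour: float(np.percentile(np.abs(err), 50))
                  for hour, err in errors_by_hour.items()}

      rng = np.random.default_rng(0)
      scores = median_abs_error_by_hour({1: rng.normal(0, 1.0, 500),
                                         15: rng.normal(0, 2.0, 500)})
      print(scores)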

  8. Gas demand forecasting by a new artificial intelligent algorithm

    NASA Astrophysics Data System (ADS)

    Khatibi B., Vahid; Khatibi, Elham

    2012-01-01

    Energy demand forecasting is a key issue for consumers and generators in all energy markets in the world. This paper presents a new forecasting algorithm for daily gas demand prediction. The algorithm combines a wavelet transform with forecasting models such as the multi-layer perceptron (MLP), linear regression or GARCH. The proposed method is applied to real data from the UK gas markets to evaluate its performance. The results show that the forecasting accuracy is improved significantly by using the proposed method.

  9. Seasonal Forecasting of Reservoir Inflow for the Segura River Basin, Spain

    NASA Astrophysics Data System (ADS)

    de Tomas, Alberto; Hunink, Johannes

    2017-04-01

    A major threat to the agricultural sector in Europe is the increasing occurrence of low water availability for irrigation, affecting local and regional food security and economies. Especially in the Mediterranean region, such as in the Segura river basin (Spain), drought episodes are relatively frequent. Part of the irrigation water demand in this basin is met by a water transfer from the Tagus basin (central Spain), but in that basin, too, increasing pressure on the water resources has reduced the water available to be transferred. Currently, Drought Management Plans in these Spanish basins are in place and mitigate the impact of drought periods to some extent. Drought indicators derived from the available water in the storage reservoirs impose a set of drought mitigation measures. Decisions on water transfers depend on a regression-based time series forecast from the reservoir inflows of the preceding months. This user forecast has its limitations and can potentially be improved using more advanced techniques. Seasonal climate forecasts have been shown to have increasing skill for certain areas and applications, but so far such forecasts have not been evaluated in a seasonal hydrologic forecasting system in the Spanish context. The objective of this work is to develop a prototype of a Seasonal Hydrologic Forecasting System and compare it with a reference forecast. The reference forecast in this case is the locally used regression-based forecast. Additionally, hydrological simulations derived from climatological reanalysis (ERA-Interim) are taken as a reference forecast. The Spatial Processes in Hydrology model (SPHY - http://www.sphy.nl/), forced with the ECMWF SFS4 seasonal forecast system (15 ensemble members), is used to predict reservoir inflows of the upper basins of the Segura and Tagus rivers. The system is evaluated for 4 seasons with a forecasting lead time of 3 months. First results show that only for certain initialization months and lead times does the developed system outperform the reference forecast. This research is carried out within the European research project IMPREX (www.imprex.eu), which aims at investigating the value of improving predictions of hydro-meteorological extremes in a number of water sectors, including agriculture. The next step is to integrate improved seasonal forecasts into the system and evaluate these. This should finally lead to a more robust forecasting system that allows water managers and irrigators to better anticipate drought episodes and put into practice more effective water allocation and mitigation measures.

  10. Evaluation of Pollen Apps Forecasts: The Need for Quality Control in an eHealth Service.

    PubMed

    Bastl, Katharina; Berger, Uwe; Kmenta, Maximilian

    2017-05-08

    Pollen forecasts are highly valuable for allergen avoidance and thus for raising the quality of life of persons affected by pollen allergies. They are considered valuable free services for the public. A careful scientific evaluation of pollen forecasts in terms of accurateness and reliability has not been available to date. The aim of this study was to analyze 9 mobile apps, which deliver pollen information and pollen forecasts, with a focus on their accurateness in predicting the pollen load in the 2016 grass pollen season, to assess their usefulness for pollen allergy sufferers. The following number of apps was evaluated for each location: 3 apps for Vienna (Austria), 4 apps for Berlin (Germany), and 1 app each for Basel (Switzerland) and London (United Kingdom). All mobile apps were freely available. The same-day grass pollen forecast was compared throughout the defined grass pollen season at each respective location with measured grass pollen concentrations. Hit rates were calculated for the exact performance and for a tolerance in a range of ±2 and ±4 pollen per cubic meter. In general, for most apps (6 apps), hit rates scored around 50%. One app showed better results, whereas 3 apps performed less well. Hit rates increased for most apps when calculated with tolerances. In contrast, the forecast of the "readiness to flower" for grasses performed at a sufficiently accurate level, although only two apps provided such a forecast. The last of those forecasts coincided with the first moderate grass pollen load on the predicted day or within 3 days after it, and stayed within that 3-day range even when issued about a month in advance. Advertisement was present in 3 of the 9 analyzed apps, whereas an imprint mentioning institutions with experience in pollen forecasting was present in only three other apps. The quality of pollen forecasts is in need of improvement, and quality control for pollen forecasts is recommended to avoid potential harm to pollen allergy sufferers due to inadequate forecasts. The inclusion of information on the reliability of the provided forecasts, handled similarly to probabilistic weather forecasts, should be considered. ©Katharina Bastl, Uwe Berger, Maximilian Kmenta. Originally published in the Journal of Medical Internet Research (http://www.jmir.org), 08.05.2017.
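
    The hit-rate calculation with a tolerance band, as used above, can be sketched as follows; the pollen counts shown are invented for illustration and are not the measured Vienna, Berlin, Basel or London data.

      # Hit rate of daily app forecasts against measured pollen concentrations.
      import numpy as np

      def hit_rate(forecast, observed, tolerance=0):
          forecast = np.asarray(forecast, float)
          observed = np.asarray(observed, float)
          return 100.0 * np.mean(np.abs(forecast - observed) <= tolerance)

      measured = np.array([0, 2, 5, 12, 30, 8])     # pollen per cubic metre (illustrative)
      app_forecast = np.array([0, 4, 5, 10, 20, 7])
      for tol in (0, 2, 4):
          print(f"tolerance +/-{tol}: {hit_rate(app_forecast, measured, tol):.0f}% hits")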

  11. Evaluation of Pollen Apps Forecasts: The Need for Quality Control in an eHealth Service

    PubMed Central

    Berger, Uwe; Kmenta, Maximilian

    2017-01-01

    Background Pollen forecasts are highly valuable for allergen avoidance and thus for raising the quality of life of persons affected by pollen allergies. They are considered valuable free services for the public. A careful scientific evaluation of pollen forecasts in terms of accurateness and reliability has not been available to date. Objective The aim of this study was to analyze 9 mobile apps, which deliver pollen information and pollen forecasts, with a focus on their accurateness in predicting the pollen load in the 2016 grass pollen season, to assess their usefulness for pollen allergy sufferers. Methods The following number of apps was evaluated for each location: 3 apps for Vienna (Austria), 4 apps for Berlin (Germany), and 1 app each for Basel (Switzerland) and London (United Kingdom). All mobile apps were freely available. The same-day grass pollen forecast was compared throughout the defined grass pollen season at each respective location with measured grass pollen concentrations. Hit rates were calculated for the exact performance and for a tolerance in a range of ±2 and ±4 pollen per cubic meter. Results In general, for most apps (6 apps), hit rates scored around 50%. One app showed better results, whereas 3 apps performed less well. Hit rates increased for most apps when calculated with tolerances. In contrast, the forecast of the “readiness to flower” for grasses performed at a sufficiently accurate level, although only two apps provided such a forecast. The last of those forecasts coincided with the first moderate grass pollen load on the predicted day or within 3 days after it, and stayed within that 3-day range even when issued about a month in advance. Advertisement was present in 3 of the 9 analyzed apps, whereas an imprint mentioning institutions with experience in pollen forecasting was present in only three other apps. Conclusions The quality of pollen forecasts is in need of improvement, and quality control for pollen forecasts is recommended to avoid potential harm to pollen allergy sufferers due to inadequate forecasts. The inclusion of information on the reliability of the provided forecasts, handled similarly to probabilistic weather forecasts, should be considered. PMID:28483740

  12. Regional early flood warning system: design and implementation

    NASA Astrophysics Data System (ADS)

    Chang, L. C.; Yang, S. N.; Kuo, C. L.; Wang, Y. F.

    2017-12-01

    This study proposes a prototype of a regional early flood inundation warning system in Tainan City, Taiwan. AI technology is used to forecast multi-step-ahead regional flood inundation maps during storm events. The computing time is only a few seconds, which enables real-time regional flood inundation forecasting. A database is built to organize data and information for building the real-time forecasting models, maintaining the relations of forecasted points, and displaying forecasted results, while real-time data acquisition is another key task, since the model requires immediate access to rain gauge information to provide forecast services. All database-related programs are built on Microsoft SQL Server using Visual C# to extract real-time hydrological data, manage data, store the forecasted data and provide the information to the visual map-based display. The regional early flood inundation warning system uses up-to-date Web technologies, driven by the database and real-time data acquisition, to display the online forecasted flood inundation depths in the study area. The user-friendly interface sequentially shows the inundation area on Google Maps and the maximum inundation depth and its location, and provides a KMZ file download of the results, which can be viewed in Google Earth. The developed system can provide all the relevant information and online forecast results, which helps city authorities make decisions during typhoon events and take actions to mitigate losses.

  13. Mixture EMOS model for calibrating ensemble forecasts of wind speed.

    PubMed

    Baran, S; Lerch, S

    2016-03-01

    Ensemble model output statistics (EMOS) is a statistical tool for post-processing forecast ensembles of weather variables obtained from multiple runs of numerical weather prediction models in order to produce calibrated predictive probability density functions. The EMOS predictive probability density function is given by a parametric distribution with parameters depending on the ensemble forecasts. We propose an EMOS model for calibrating wind speed forecasts based on weighted mixtures of truncated normal (TN) and log-normal (LN) distributions where model parameters and component weights are estimated by optimizing the values of proper scoring rules over a rolling training period. The new model is tested on wind speed forecasts of the 50-member European Centre for Medium-Range Weather Forecasts ensemble, the 11-member Aire Limitée Adaptation dynamique Développement International-Hungary Ensemble Prediction System ensemble of the Hungarian Meteorological Service, and the eight-member University of Washington mesoscale ensemble, and its predictive performance is compared with that of various benchmark EMOS models based on single parametric families and combinations thereof. The results indicate improved calibration of probabilistic forecasts and accuracy of point forecasts in comparison with the raw ensemble and climatological forecasts. The mixture EMOS model significantly outperforms the TN and LN EMOS methods; moreover, it provides better calibrated forecasts than the TN-LN combination model and offers increased flexibility while avoiding covariate selection problems. © 2016 The Authors Environmetrics Published by John Wiley & Sons Ltd.
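
    The mixture construction can be sketched compactly. The code below is a simplified illustration only: the location parameters are linked to the ensemble mean through hypothetical affine relations, the parameters are fitted by minimizing the negative log score (one example of a proper scoring rule) rather than the CRPS, and the training data are synthetic. It is not the published EMOS specification.

      # Simplified TN/LN mixture predictive density for wind speed.
      import numpy as np
      from scipy.stats import truncnorm, lognorm
      from scipy.optimize import minimize
      from scipy.special import expit

      def mixture_pdf(y, ens_mean, params):
          a0, a1, s_tn, b0, b1, s_ln, w_raw = params
          w = expit(w_raw)                               # mixture weight in (0, 1)
          mu_tn = a0 + a1 * ens_mean                     # TN location from ensemble mean
          mu_ln = b0 + b1 * np.log(np.maximum(ens_mean, 0.1))
          sig_tn, sig_ln = abs(s_tn) + 1e-3, abs(s_ln) + 1e-3
          tn = truncnorm.pdf(y, a=-mu_tn / sig_tn, b=np.inf, loc=mu_tn, scale=sig_tn)
          ln = lognorm.pdf(y, s=sig_ln, scale=np.exp(mu_ln))
          return w * tn + (1.0 - w) * ln

      def fit(train_ens_mean, train_obs):
          nll = lambda p: -np.sum(np.log(mixture_pdf(train_obs, train_ens_mean, p) + 1e-12))
          x0 = np.array([0.0, 1.0, 1.0, 0.0, 1.0, 0.5, 0.0])
          return minimize(nll, x0, method="Nelder-Mead").x

      # Synthetic training pairs (ensemble mean, observed wind speed)
      rng = np.random.default_rng(1)
      ens_mean = rng.gamma(4.0, 2.0, 300)
      obs = np.abs(ens_mean + rng.normal(0, 1.5, 300))
      params = fit(ens_mean, obs)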

  14. Why preferring parametric forecasting to nonparametric methods?

    PubMed

    Jabot, Franck

    2015-05-07

    A recent series of papers by Charles T. Perretti and collaborators have shown that nonparametric forecasting methods can outperform parametric methods in noisy nonlinear systems. Such a situation can arise for two main reasons: the instability of parametric inference procedures in chaotic systems, which can lead to biased parameter estimates, and the discrepancy between the real system dynamics and the modeled one, a problem that Perretti and collaborators call "the true model myth". Should ecologists go on using the demanding parametric machinery when trying to forecast the dynamics of complex ecosystems? Or should they rely on the elegant nonparametric approach that appears so promising? It is argued here that ecological forecasting based on parametric models presents two key comparative advantages over nonparametric approaches. First, the likelihood of parametric forecasting failure can be diagnosed using simple Bayesian model checking procedures. Second, when parametric forecasting is diagnosed to be reliable, forecasting uncertainty can be estimated on virtual data generated with the parametric model fitted to the data. In contrast, nonparametric techniques provide forecasts with unknown reliability. This argumentation is illustrated with the simple theta-logistic model that was previously used by Perretti and collaborators to make their point. It should convince ecologists to stick to standard parametric approaches, until methods have been developed to assess the reliability of nonparametric forecasting. Copyright © 2015 Elsevier Ltd. All rights reserved.
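
    For readers unfamiliar with it, the theta-logistic model referred to above can be written as N(t+1) = N(t) * exp(r * (1 - (N(t)/K)^theta) + noise). A minimal simulation sketch, with illustrative parameter values only:

      # Theta-logistic population dynamics with process noise (illustrative values).
      import numpy as np

      def simulate_theta_logistic(n0=50.0, r=1.5, K=100.0, theta=1.0,
                                  sigma=0.1, steps=100, seed=0):
          rng = np.random.default_rng(seed)
          n = np.empty(steps)
          n[0] = n0
          for t in range(steps - 1):
              growth = r * (1.0 - (n[t] / K) ** theta)   # density-dependent growth rate
              n[t + 1] = n[t] * np.exp(growth + rng.normal(0.0, sigma))
          return n

      series = simulate_theta_logistic()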

  15. Observed Impacts. Chapter 19

    NASA Technical Reports Server (NTRS)

    Rosenzweig, Cynthia

    1999-01-01

    Agricultural applications of El Nino forecasts are already underway in some countries and need to be evaluated or re-evaluated. For example, in Peru, El Nino forecasts have been incorporated into national planning for the agricultural sector, and the areas planted with rice and cotton (cotton being the more drought-tolerant crop) are adjusted accordingly. How well are this and other such programs working? Such evaluations will contribute to the governmental and intergovernmental institutions, including the Inter-American Institute for Global Change Research and the US National Oceanic and Atmospheric Administration, that are fostering programs to aid the effective use of forecasts. This research involves expanding, deepening, and applying the understanding of physical climate to the fields of agronomy and social science, and the reciprocal understanding of crop growth and farm economics to climatology. Delivery of a regional climate forecast with no information about how the climate forecast was derived limits its effectiveness. Explanation of a region's major climate driving forces helps to place a seasonal forecast in context. A useful approach is then to show historical responses to previous El Nino events, and projections, with uncertainty intervals, of crop response from dynamic process crop growth models. Regional forecasts should be updated with real-time weather conditions. Since every El Nino event is different, it is important to track, report and advise on each new event as it unfolds.

  16. The SPoRT-WRF: Evaluating the Impact of NASA Datasets on Convective Forecasts

    NASA Technical Reports Server (NTRS)

    Zavodsky, Bradley; Case, Jonathan; Kozlowski, Danielle; Molthan, Andrew

    2012-01-01

    The Short-term Prediction Research and Transition Center (SPoRT) is a collaborative partnership between NASA and operational forecasting entities, including a number of National Weather Service offices. SPoRT transitions real-time NASA products and capabilities to its partners to address specific operational forecast challenges. One challenge that forecasters face is applying convection-allowing numerical models to predict mesoscale convective weather. In order to address this specific forecast challenge, SPoRT produces real-time mesoscale model forecasts using the Weather Research and Forecasting (WRF) model that includes unique NASA products and capabilities. Currently, the SPoRT configuration of the WRF model (SPoRT-WRF) incorporates the 4-km Land Information System (LIS) land surface data, the 1-km SPoRT sea surface temperature analysis and 1-km Moderate Resolution Imaging Spectroradiometer (MODIS) greenness vegetation fraction (GVF) analysis, and retrieved thermodynamic profiles from the Atmospheric Infrared Sounder (AIRS). The LIS, SST, and GVF data are all integrated into the SPoRT-WRF through adjustments to the initial and boundary conditions, and the AIRS data are assimilated into a 9-hour SPoRT-WRF forecast each day at 0900 UTC. This study dissects the overall impact of the NASA datasets and the individual surface and atmospheric component datasets on daily mesoscale forecasts. A case study covering the super tornado outbreak across the Central and Southeastern United States during 25-27 April 2011 is examined. Three different forecasts are analyzed, including the SPoRT-WRF (NASA surface and atmospheric data), the SPoRT-WRF without AIRS (NASA surface data only), and the operational National Severe Storms Laboratory (NSSL) WRF (control with no NASA data). The forecasts are compared qualitatively by examining simulated versus observed radar reflectivity. Differences in the simulated reflectivity are further investigated using convective parameters along with model soundings to determine the impacts of the various NASA datasets. Additionally, quantitative evaluation of select meteorological parameters is performed using the Meteorological Evaluation Tools model verification package to compare forecasts to in situ surface and upper air observations.

  17. Remotely-sensed, nocturnal, dew point correlates with malaria transmission in Southern Province, Zambia: a time-series study

    PubMed Central

    2014-01-01

    Background Plasmodium falciparum transmission has decreased significantly in Zambia in the last decade. Malaria transmission is influenced by environmental variables, and incorporating environmental variables in models of malaria transmission likely improves model fit and predicts probable trends in malaria disease. This work is based on the hypothesis that remotely-sensed environmental factors, including nocturnal dew point, are associated with malaria transmission and sustain foci of transmission during the low transmission season in the Southern Province of Zambia. Methods Thirty-eight rural health centres in Southern Province, Zambia were divided into three zones based on transmission patterns. Correlations between weekly malaria cases and remotely-sensed nocturnal dew point, nocturnal land surface temperature as well as vegetation indices and rainfall were evaluated in time-series analyses from 2012 week 19 to 2013 week 36. Zonal as well as clinic-based, multivariate, autoregressive, integrated, moving average (ARIMAX) models implementing environmental variables were developed to model transmission from 2011 week 19 to 2012 week 18 and forecast transmission from 2013 week 37 to week 41. Results During the dry, low transmission season, significantly higher vegetation indices, nocturnal land surface temperature and nocturnal dew point were associated with the areas of higher transmission. Environmental variables improved the ARIMAX models. Dew point and the normalized difference vegetation index were significant predictors and improved all zonal transmission models; in the high-transmission zone, this was also seen for land surface temperature. Clinic models were improved by adding dew point and land surface temperature as well as the normalized difference vegetation index. The mean average error of prediction for the ARIMAX models ranged from 0.7 to 33.5%. Forecasts of malaria incidence were valid for three out of five rural health centres, however, with poor results at the zonal level. Conclusions In this study, the fit of ARIMAX models improves when environmental variables are included. There is a significant association of remotely-sensed nocturnal dew point with malaria transmission. Interestingly, dew point might be one of the factors sustaining malaria transmission in areas of general aridity during the dry season. PMID:24927747
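
    An ARIMAX-type model with environmental covariates, of the kind described above, can be sketched with statsmodels' SARIMAX and exogenous regressors. The weekly case counts and covariate series below are synthetic, and the (1, 0, 1) order is an arbitrary choice for illustration, not the fitted models from the study.

      # ARIMAX sketch: weekly malaria cases with dew point and NDVI as covariates.
      import numpy as np
      import pandas as pd
      from statsmodels.tsa.statespace.sarimax import SARIMAX

      rng = np.random.default_rng(2)
      weeks = 120
      t = np.arange(weeks)
      dew_point = pd.Series(15 + 5 * np.sin(2 * np.pi * t / 52) + rng.normal(0, 1, weeks))
      ndvi = pd.Series(0.4 + 0.2 * np.sin(2 * np.pi * t / 52 + 1) + rng.normal(0, 0.05, weeks))
      cases = pd.Series(20 + 1.5 * dew_point + 30 * ndvi + rng.normal(0, 5, weeks))

      exog = pd.concat([dew_point.rename("dew_point"), ndvi.rename("ndvi")], axis=1)
      fit = SARIMAX(cases[:100], exog=exog[:100], order=(1, 0, 1)).fit(disp=False)
      forecast = fit.forecast(steps=20, exog=exog[100:])  # needs future covariate values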

  18. Remotely-sensed, nocturnal, dew point correlates with malaria transmission in Southern Province, Zambia: a time-series study.

    PubMed

    Nygren, David; Stoyanov, Cristina; Lewold, Clemens; Månsson, Fredrik; Miller, John; Kamanga, Aniset; Shiff, Clive J

    2014-06-13

    Plasmodium falciparum transmission has decreased significantly in Zambia in the last decade. Malaria transmission is influenced by environmental variables, and incorporating environmental variables in models of malaria transmission likely improves model fit and predicts probable trends in malaria disease. This work is based on the hypothesis that remotely-sensed environmental factors, including nocturnal dew point, are associated with malaria transmission and sustain foci of transmission during the low transmission season in the Southern Province of Zambia. Thirty-eight rural health centres in Southern Province, Zambia were divided into three zones based on transmission patterns. Correlations between weekly malaria cases and remotely-sensed nocturnal dew point, nocturnal land surface temperature as well as vegetation indices and rainfall were evaluated in time-series analyses from 2012 week 19 to 2013 week 36. Zonal as well as clinic-based, multivariate, autoregressive, integrated, moving average (ARIMAX) models implementing environmental variables were developed to model transmission from 2011 week 19 to 2012 week 18 and forecast transmission from 2013 week 37 to week 41. During the dry, low transmission season, significantly higher vegetation indices, nocturnal land surface temperature and nocturnal dew point were associated with the areas of higher transmission. Environmental variables improved the ARIMAX models. Dew point and the normalized difference vegetation index were significant predictors and improved all zonal transmission models; in the high-transmission zone, this was also seen for land surface temperature. Clinic models were improved by adding dew point and land surface temperature as well as the normalized difference vegetation index. The mean average error of prediction for the ARIMAX models ranged from 0.7 to 33.5%. Forecasts of malaria incidence were valid for three out of five rural health centres, however, with poor results at the zonal level. In this study, the fit of ARIMAX models improves when environmental variables are included. There is a significant association of remotely-sensed nocturnal dew point with malaria transmission. Interestingly, dew point might be one of the factors sustaining malaria transmission in areas of general aridity during the dry season.

  19. Value of biologic therapy: a forecasting model in three disease areas.

    PubMed

    Paramore, L Clark; Hunter, Craig A; Luce, Bryan R; Nordyke, Robert J; Halbert, R J

    2010-01-01

    Forecast the return on investment (ROI) for advances in biologic therapies in years 2015 and 2030, based upon impact on disease prevalence, morbidity, and mortality for asthma, diabetes, and colorectal cancer. A deterministic, spreadsheet-based, forecasting model was developed based on trends in demographics and disease epidemiology. 'Return' was defined as reductions in disease burden (prevalence, morbidity, mortality) translated into monetary terms; 'investment' was defined as the incremental costs of biologic therapy advances. Data on disease prevalence, morbidity, mortality, and associated costs were obtained from government survey statistics or published literature. Expected impact of advances in biologic therapies was based on expert opinion. Gains in quality-adjusted life years (QALYs) were valued at $100,000 per QALY. The base case analysis, in which reductions in disease prevalence and mortality predicted by the expert panel are not considered, shows the resulting ROIs remain positive for asthma and diabetes but fall below $1 for colorectal cancer. Analysis involving expert panel predictions indicated positive ROI results for all three diseases at both time points, ranging from $207 for each incremental dollar spent on biologic therapies to treat asthma in 2030, to $4 for each incremental dollar spent on biologic therapies to treat colorectal cancer in 2015. If QALYs are not considered, the resulting ROIs remain positive for all three diseases at both time points. Society may expect substantial returns from investments in innovative biologic therapies. These benefits are most likely to be realized in an environment of appropriate use of new molecules. The potential variance between forecasted (from expert opinion) and actual future health outcomes could be significant. Similarly, the forecasted growth in use of biologic therapies relied upon unvalidated market forecasts.
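
    The ROI logic described above amounts to dividing the monetized health gains (QALYs valued at $100,000 each, plus any cost offsets) by the incremental spending on biologic therapies. The numbers in the toy calculation below are invented for illustration and are not taken from the study.

      # Toy return-on-investment calculation (all inputs hypothetical).
      QALY_VALUE = 100_000

      def roi_per_dollar(qaly_gain, cost_offset, incremental_biologic_cost):
          monetized_return = qaly_gain * QALY_VALUE + cost_offset
          return monetized_return / incremental_biologic_cost

      # 10,000 QALYs gained, $50M in avoided care costs, $200M extra biologic spending
      print(roi_per_dollar(10_000, 50e6, 200e6))   # -> 5.25 dollars per dollar invested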

  20. The RAPIDD ebola forecasting challenge: Synthesis and lessons learnt.

    PubMed

    Viboud, Cécile; Sun, Kaiyuan; Gaffey, Robert; Ajelli, Marco; Fumanelli, Laura; Merler, Stefano; Zhang, Qian; Chowell, Gerardo; Simonsen, Lone; Vespignani, Alessandro

    2018-03-01

    Infectious disease forecasting is gaining traction in the public health community; however, limited systematic comparisons of model performance exist. Here we present the results of a synthetic forecasting challenge inspired by the West African Ebola crisis in 2014-2015 and involving 16 international academic teams and US government agencies, and compare the predictive performance of 8 independent modeling approaches. Challenge participants were invited to predict 140 epidemiological targets across 5 different time points of 4 synthetic Ebola outbreaks, each involving different levels of interventions and "fog of war" in outbreak data made available for predictions. Prediction targets included 1-4 week-ahead case incidences, outbreak size, peak timing, and several natural history parameters. With respect to weekly case incidence targets, ensemble predictions based on a Bayesian average of the 8 participating models outperformed any individual model and did substantially better than a null auto-regressive model. There was no relationship between model complexity and prediction accuracy; however, the top performing models for short-term weekly incidence were reactive models with few parameters, fitted to a short and recent part of the outbreak. Individual model outputs and ensemble predictions improved with data accuracy and availability; by the second time point, just before the peak of the epidemic, estimates of final size were within 20% of the target. The 4th challenge scenario - mirroring an uncontrolled Ebola outbreak with substantial data reporting noise - was poorly predicted by all modeling teams. Overall, this synthetic forecasting challenge provided a deep understanding of model performance under controlled data and epidemiological conditions. We recommend such "peace time" forecasting challenges as key elements to improve coordination and inspire collaboration between modeling groups ahead of the next pandemic threat, and to assess model forecasting accuracy for a variety of known and hypothetical pathogens. Published by Elsevier B.V.
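
    One simple way to build a Bayesian-average ensemble of competing incidence forecasts is to weight each model by its likelihood on recently observed weeks. The sketch below assumes Gaussian errors and toy numbers; it is a schematic of the general idea only, not the challenge's actual ensemble procedure.

      # Likelihood-weighted (Bayesian-average style) ensemble of incidence forecasts.
      import numpy as np
      from scipy.stats import norm

      def bayesian_average(model_forecasts, model_hindcasts, observed_recent, sigma=10.0):
          # model_forecasts: (n_models, horizon); model_hindcasts: (n_models, n_recent)
          loglik = norm.logpdf(observed_recent, loc=model_hindcasts, scale=sigma).sum(axis=1)
          w = np.exp(loglik - loglik.max())
          w /= w.sum()                                  # model weights
          return w @ model_forecasts, w

      obs = np.array([120, 150, 180, 210.0])
      hind = np.array([[118, 149, 182, 212], [100, 130, 160, 190], [140, 170, 200, 230.0]])
      fcst = np.array([[240, 270], [210, 240], [260, 290.0]])
      ensemble_forecast, weights = bayesian_average(fcst, hind, obs)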

  1. Some economic benefits of a synchronous earth observatory satellite

    NASA Technical Reports Server (NTRS)

    Battacharyya, R. K.; Greenberg, J. S.; Lowe, D. S.; Sattinger, I. J.

    1974-01-01

    An analysis was made of the economic benefits which might be derived from reduced forecasting errors made possible by data obtained from a synchronous satellite system which can collect earth observation and meteorological data continuously and on demand. User costs directly associated with achieving benefits are included. In the analysis, benefits were evaluated which might be obtained as a result of improved thunderstorm forecasting, frost warning, and grain harvest forecasting capabilities. The anticipated system capabilities were used to arrive at realistic estimates of system performance on which to base the benefit analysis. Emphasis was placed on the benefits which result from system forecasting accuracies. Benefits from improved thunderstorm forecasts are indicated for the construction, air transportation, and agricultural industries. The effects of improved frost warning capability on the citrus crop are determined. The benefits from improved grain forecasting capability are evaluated in terms of both U.S. benefits resulting from domestic grain distribution and U.S. benefits from international grain distribution.

  2. The SPoRT-WRF: Evaluating the Impact of NASA Datasets on Convective Forecasts

    NASA Technical Reports Server (NTRS)

    Zavodsky, Bradley; Kozlowski, Danielle; Case, Jonathan; Molthan, Andrew

    2012-01-01

    Short-term Prediction Research and Transition (SPoRT) seeks to improve short-term, regional weather forecasts using unique NASA products and capabilities. SPoRT has developed a unique, real-time configuration of the NASA Unified Weather Research and Forecasting (WRF) model (ARW) that integrates all SPoRT modeling research data: (1) the 2-km SPoRT Sea Surface Temperature (SST) composite, (2) 3-km LIS with 1-km Greenness Vegetation Fractions (GVFs), and (3) 45-km AIRS retrieved profiles. This real-time forecast was transitioned to NOAA's Hazardous Weather Testbed (HWT) as a deterministic model in the Experimental Forecast Program (EFP). Feedback from forecasters/participants and internal evaluation of the SPoRT-WRF show a cool, dry bias that appears to suppress convection, likely related to the methodology for assimilation of AIRS profiles. Version 2 of the SPoRT-WRF will premiere at the 2012 EFP and include NASA physics, a cycling data assimilation methodology, better coverage of precipitation forcing, and new GVFs.

  3. Evaluating probabilistic dengue risk forecasts from a prototype early warning system for Brazil

    PubMed Central

    Lowe, Rachel; Coelho, Caio AS; Barcellos, Christovam; Carvalho, Marilia Sá; Catão, Rafael De Castro; Coelho, Giovanini E; Ramalho, Walter Massa; Bailey, Trevor C; Stephenson, David B; Rodó, Xavier

    2016-01-01

    Recently, a prototype dengue early warning system was developed to produce probabilistic forecasts of dengue risk three months ahead of the 2014 World Cup in Brazil. Here, we evaluate the categorical dengue forecasts across all microregions in Brazil, using dengue cases reported in June 2014 to validate the model. We also compare the forecast model framework to a null model, based on seasonal averages of previously observed dengue incidence. When considering the ability of the two models to predict high dengue risk across Brazil, the forecast model produced more hits and fewer missed events than the null model, with a hit rate of 57% for the forecast model compared to 33% for the null model. This early warning model framework may be useful to public health services, not only ahead of mass gatherings, but also before the peak dengue season each year, to control potentially explosive dengue epidemics. DOI: http://dx.doi.org/10.7554/eLife.11285.001 PMID:26910315

  4. Is the economic value of hydrological forecasts related to their quality? Case study of the hydropower sector.

    NASA Astrophysics Data System (ADS)

    Cassagnole, Manon; Ramos, Maria-Helena; Thirel, Guillaume; Gailhard, Joël; Garçon, Rémy

    2017-04-01

    The improvement of a forecasting system and the evaluation of the quality of its forecasts are recurrent steps in operational practice. However, the evaluation of forecast value or forecast usefulness for better decision-making is, to our knowledge, less frequent, even though it may be essential in many sectors such as hydropower and flood warning. In the hydropower sector, forecast value can be quantified by the economic gain obtained with the optimization of operations or reservoir management rules. Several hydropower operational systems use medium-range forecasts (up to 7-10 days ahead) and energy price predictions to optimize hydropower production. Hence, the operation of hydropower systems, including the management of water in reservoirs, is impacted by weather, climate and hydrologic variability as well as extreme events. In order to assess how the quality of hydrometeorological forecasts impacts operations, it is essential to first understand if and how operations and management rules are sensitive to input predictions of different quality. This study investigates how 7-day-ahead deterministic and ensemble streamflow forecasts of different quality might impact the economic gains of energy production. It is based on a research model developed by Irstea and EDF to investigate issues relevant to the links between quality and value of forecasts in the optimisation of energy production at the short range. Based on streamflow forecasts and pre-defined management constraints, the model defines the best hours (i.e., the hours with high energy prices) to produce electricity. To highlight the link between forecast quality and economic value, we built several synthetic ensemble forecasts based on observed streamflow time series. These inputs are generated in a controlled environment in order to obtain forecasts of different quality in terms of accuracy and reliability. These forecasts are used to assess the sensitivity of the decision model to forecast quality. Relationships between forecast quality and economic value are discussed. This work is part of the IMPREX project, a research project supported by the European Commission under the Horizon 2020 Framework programme, with grant No. 641811 (http://www.imprex.eu).

  5. Real-time demonstration and evaluation of over-the-loop short to medium-range ensemble streamflow forecasting

    NASA Astrophysics Data System (ADS)

    Wood, A. W.; Clark, E.; Newman, A. J.; Nijssen, B.; Clark, M. P.; Gangopadhyay, S.; Arnold, J. R.

    2015-12-01

    The US National Weather Service River Forecast Centers are beginning to operationalize short-range to medium-range ensemble predictions that have been in development for several years. This practice contrasts with the traditional single-value forecast practice at these lead times not only because the ensemble forecasts offer a basis for quantifying forecast uncertainty, but also because the use of ensembles requires a greater degree of automation in the forecast workflow than is currently used. For instance, individual ensemble member forcings cannot (practically) be manually adjusted, a step not uncommon with the current single-value paradigm, and thus the forecaster is required to adopt a more 'over-the-loop' role than before. The relative lack of experience among operational forecasters and forecast users (e.g., water managers) in the US with over-the-loop approaches motivates the creation of a real-time demonstration and evaluation platform for exploring the potential of over-the-loop workflows to produce usable ensemble short-to-medium range forecasts, as well as long range predictions. We describe the development of such an effort by a collaboration between NCAR and two water agencies, the US Army Corps of Engineers and the US Bureau of Reclamation. Focusing on small to medium-sized headwater basins around the US, and using multi-decade series of ensemble streamflow hindcasts, we also describe early results, assessing the skill of daily-updating, over-the-loop forecasts driven by a set of ensemble atmospheric outputs from the NCEP GEFS for lead times of 1-15 days.

  6. How seasonal forecast could help a decision maker: an example of climate service for water resource management

    NASA Astrophysics Data System (ADS)

    Viel, Christian; Beaulant, Anne-Lise; Soubeyroux, Jean-Michel; Céron, Jean-Pierre

    2016-04-01

    The FP7 project EUPORIAS was a great opportunity for the climate community to co-design with stakeholders some original and innovative climate services at seasonal time scales. In this framework, Météo-France proposed a prototype that aimed to provide water resource managers with tailored information to better anticipate the coming season. It is based on a forecasting system, built on a refined hydrological suite forced by a coupled seasonal forecast model, which delivers probabilistic river flow predictions for river basins all over the French territory. This paper presents the work we have done with "EPTB Seine Grands Lacs" (EPTB SGL), an institutional stakeholder in charge of the management of 4 great reservoirs on the upper Seine basin. First, we present the co-design phase, that is, the translation of classical climate outputs into several indices relevant to the stakeholder's decision-making process (DMP). Second, we detail the evaluation of the impact of the forecast on the DMP. This evaluation is based on an experiment carried out in collaboration with the stakeholder. Concretely, EPTB SGL replayed some past decisions in three different contexts: without any forecast, with a forecast A and with a forecast B. One of forecasts A and B contained real seasonal forecasts; the other contained only random forecasts taken from past climate. This placebo experiment, realised as a blind test, allowed us to calculate promising skill scores for the DMP based on seasonal forecasts in comparison with a classical approach based on climatology and with EPTB SGL's current practice.

  7. Forecasting methodologies for Ganoderma spore concentration using combined statistical approaches and model evaluations

    NASA Astrophysics Data System (ADS)

    Sadyś, Magdalena; Skjøth, Carsten Ambelas; Kennedy, Roy

    2016-04-01

    High concentration levels of Ganoderma spp. spores were observed in Worcester, UK, during 2006-2010. These basidiospores are known to cause sensitization due to their allergen content and their small dimensions, which enable them to penetrate the lower part of the respiratory tract in humans. Establishing a link between symptoms of sensitization to Ganoderma spp. and other basidiospores is challenging due to the lack of information regarding spore concentrations in the air. Hence, aerobiological monitoring should be conducted and, if possible, extended with the construction of forecast models. The daily mean concentration of allergenic Ganoderma spp. spores in the atmosphere of Worcester was measured using a 7-day volumetric spore sampler over five consecutive years. The relationships between the presence of spores in the air and weather parameters were examined. Forecast models were constructed for Ganoderma spp. spores using advanced statistical techniques, i.e. multivariate regression trees and artificial neural networks. Dew point temperature, along with maximum temperature, was the most important factor influencing the presence of spores in the air of Worcester. Based on these two major factors and several others of lesser importance, thresholds for certain levels of fungal spore concentration, i.e. low (0-49 s m-3), moderate (50-99 s m-3), high (100-149 s m-3) and very high (≥150 s m-3), could be designated. Despite some deviation in the results obtained by artificial neural networks, the authors achieved a forecasting model that was accurate (the correlation between observed and predicted values varied from r_s = 0.57 to r_s = 0.68).
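
    Assigning the concentration categories defined above to daily values is a one-line binning operation; a minimal sketch follows (category boundaries taken from the text, input values invented).

      # Bin daily spore concentrations (s m-3) into the four categories.
      import numpy as np

      THRESHOLDS = [50, 100, 150]
      LABELS = np.array(["low", "moderate", "high", "very high"])

      def categorize(concentrations):
          return LABELS[np.digitize(concentrations, THRESHOLDS)]

      print(categorize([10, 75, 120, 300]))   # -> ['low' 'moderate' 'high' 'very high']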

  8. Forecasting methodologies for Ganoderma spore concentration using combined statistical approaches and model evaluations.

    PubMed

    Sadyś, Magdalena; Skjøth, Carsten Ambelas; Kennedy, Roy

    2016-04-01

    High concentration levels of Ganoderma spp. spores were observed in Worcester, UK, during 2006-2010. These basidiospores are known to cause sensitization due to their allergen content and their small dimensions, which enable them to penetrate the lower part of the respiratory tract in humans. Establishing a link between symptoms of sensitization to Ganoderma spp. and other basidiospores is challenging due to the lack of information regarding spore concentrations in the air. Hence, aerobiological monitoring should be conducted and, if possible, extended with the construction of forecast models. The daily mean concentration of allergenic Ganoderma spp. spores in the atmosphere of Worcester was measured using a 7-day volumetric spore sampler over five consecutive years. The relationships between the presence of spores in the air and weather parameters were examined. Forecast models were constructed for Ganoderma spp. spores using advanced statistical techniques, i.e. multivariate regression trees and artificial neural networks. Dew point temperature, along with maximum temperature, was the most important factor influencing the presence of spores in the air of Worcester. Based on these two major factors and several others of lesser importance, thresholds for certain levels of fungal spore concentration, i.e. low (0-49 s m(-3)), moderate (50-99 s m(-3)), high (100-149 s m(-3)) and very high (≥150 s m(-3)), could be designated. Despite some deviation in the results obtained by artificial neural networks, the authors achieved a forecasting model that was accurate (the correlation between observed and predicted values varied from r_s = 0.57 to r_s = 0.68).

  9. Forecasting of Information Security Related Incidents: Amount of Spam Messages as a Case Study

    NASA Astrophysics Data System (ADS)

    Romanov, Anton; Okamoto, Eiji

    With the increasing demand for services provided by communication networks, the quality and reliability of such services, as well as the confidentiality of data transfer, are becoming some of the highest concerns. At the same time, because of growing hacker activity, the quality of provided content and the reliability of its continuous delivery strongly depend on the integrity of data transmission and the availability of the communication infrastructure, and thus on the information security of a given IT landscape. However, the amount of resources allocated to provide information security (such as security staff, technical countermeasures, etc.) must be reasonable from the economic point of view. This fact, in turn, leads to the need to employ a forecasting technique for IT budget planning and for short-term planning of potential bottlenecks. In this paper we present an approach to such forecasting for a wide class of information security related incidents (ISRI) — unambiguously detectable ISRI. This approach is based on different autoregression models, which are widely used in financial time series analysis but cannot be directly applied to ISRI time series due to specifics related to information security. We investigate and address these specifics by proposing rules (special conditions) for the collection and storage of ISRI time series, adherence to which improves forecasting in this subject field. We present an application of our approach to one type of unambiguously detectable ISRI — the amount of spam messages, which, if not mitigated properly, could create additional load on the communication infrastructure and consume significant amounts of network capacity. Finally, we evaluate our approach by simulation and actual measurement.
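
    The kind of autoregressive forecasting discussed above can be sketched on a synthetic daily series of spam-message counts. The AR(7) order below is simply a way to capture a weekly cycle and is not the paper's fitted model, and the proposed collection and storage rules for ISRI series are not reproduced here.

      # Autoregressive forecast of daily spam-message counts (synthetic data).
      import numpy as np
      import pandas as pd
      from statsmodels.tsa.arima.model import ARIMA

      rng = np.random.default_rng(3)
      days = 200
      weekly_cycle = 500 + 150 * np.sin(2 * np.pi * np.arange(days) / 7)
      spam_counts = pd.Series(weekly_cycle + rng.normal(0, 40, days))

      fit = ARIMA(spam_counts[:180], order=(7, 0, 0)).fit()   # AR(7) for the weekly pattern
      forecast = fit.forecast(steps=20)                       # short-term planning horizon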

  10. Improving inflow forecasting into hydropower reservoirs through a complementary modelling framework

    NASA Astrophysics Data System (ADS)

    Gragne, A. S.; Sharma, A.; Mehrotra, R.; Alfredsen, K.

    2014-10-01

    The accuracy of reservoir inflow forecasts is instrumental for maximizing the value of water resources and the benefits gained through hydropower generation. Improving hourly reservoir inflow forecasts over a 24 h lead time is considered within the day-ahead (Elspot) market of the Nordic exchange market. We present here a new approach for issuing hourly reservoir inflow forecasts that aims to improve on existing forecasting models that are in place operationally, without needing to modify the pre-existing approach, by instead formulating an additive or complementary model that is independent and captures the structure the existing model may be missing. Besides improving the forecast skill of operational models, the approach estimates the uncertainty in the complementary model structure and produces probabilistic inflow forecasts that contain suitable information for reducing uncertainty in decision-making processes in hydropower systems operation. The procedure presented comprises an error model added on top of an unalterable, constant-parameter conceptual model, the models being demonstrated with reference to the 207 km2 Krinsvatn catchment in central Norway. The structure of the error model is established based on attributes of the residual time series from the conceptual model. Deterministic and probabilistic evaluations revealed an overall significant improvement in forecast accuracy for lead times up to 17 h. Season-based evaluations indicated that the improvement in inflow forecasts varies across seasons, and inflow forecasts in autumn and spring are less successful, with the 95% prediction interval bracketing less than 95% of the observations for lead times beyond 17 h.
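
    The complementary idea described above can be illustrated with a deliberately simple error model: leave the existing conceptual model untouched, fit an AR(1) to its residuals, and add the predicted error back onto the base forecast. This is an illustration of the additive principle only; the paper derives its error-model structure from the attributes of the residual series.

      # Additive AR(1) error correction on top of an unmodified base forecast.
      import numpy as np

      def fit_ar1(residuals):
          r0, r1 = residuals[:-1], residuals[1:]
          return np.sum(r0 * r1) / np.sum(r0 * r0)      # least-squares AR(1) coefficient

      def complementary_forecast(base_forecast, last_residual, phi, lead_times):
          correction = phi ** np.asarray(lead_times) * last_residual  # decays with lead time
          return base_forecast + correction

      rng = np.random.default_rng(4)
      resid = 0.05 * rng.normal(0, 2, 500).cumsum()     # autocorrelated residuals (toy)
      phi = fit_ar1(resid)
      improved = complementary_forecast(np.full(24, 10.0), resid[-1], phi, np.arange(1, 25))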

  11. From Tornadoes to Earthquakes: Forecast Verification for Binary Events Applied to the 1999 Chi-Chi, Taiwan, Earthquake

    NASA Astrophysics Data System (ADS)

    Chen, C.; Rundle, J. B.; Holliday, J. R.; Nanjo, K.; Turcotte, D. L.; Li, S.; Tiampo, K. F.

    2005-12-01

    Forecast verification procedures for statistical events with binary outcomes typically rely on the use of contingency tables and Relative Operating Characteristic (ROC) diagrams. Originally developed for the statistical evaluation of tornado forecasts on a county-by-county basis, these methods can be adapted to the evaluation of competing earthquake forecasts. Here we apply these methods retrospectively to two forecasts for the m = 7.3 1999 Chi-Chi, Taiwan, earthquake. These forecasts are based on a method, Pattern Informatics (PI), that locates likely sites for future large earthquakes based on large changes in the activity of the smallest earthquakes. A competing null hypothesis, Relative Intensity (RI), is based on the idea that future large earthquake locations are correlated with sites having the greatest frequency of small earthquakes. We show that for Taiwan, the PI forecast method is superior to the RI forecast null hypothesis. Inspection of the two maps indicates that their forecast locations are indeed quite different. Our results confirm an earlier result suggesting that the earthquake preparation process for events such as the Chi-Chi earthquake involves anomalous changes in activation or quiescence, and that signatures of these processes can be detected in precursory seismicity data. Furthermore, we find that our methods can accurately forecast the locations of aftershocks from precursory seismicity changes alone, implying that the main shock together with its aftershocks represents a single manifestation of the formation of a high-stress region nucleating prior to the main shock.
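
    The contingency-table bookkeeping behind such an ROC comparison is straightforward; the sketch below uses toy score and event arrays (not the PI/RI maps) and sweeps an alarm threshold to trace out hit rate versus false alarm rate.

      # Contingency table and ROC points for alarm-based binary forecasts.
      import numpy as np

      def contingency(alarm, occurred):
          alarm, occurred = np.asarray(alarm, bool), np.asarray(occurred, bool)
          hits = np.sum(alarm & occurred)
          misses = np.sum(~alarm & occurred)
          false_alarms = np.sum(alarm & ~occurred)
          correct_negatives = np.sum(~alarm & ~occurred)
          hit_rate = hits / (hits + misses)                                     # ROC y-axis
          false_alarm_rate = false_alarms / (false_alarms + correct_negatives)  # ROC x-axis
          return hit_rate, false_alarm_rate

      score_map = np.random.default_rng(5).random(1000)        # toy forecast score per cell
      events = np.random.default_rng(6).random(1000) < 0.05    # toy earthquake occurrences
      roc = [contingency(score_map >= t, events) for t in np.linspace(0, 1, 21)]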

  12. A scoping review of malaria forecasting: past work and future directions

    PubMed Central

    Zinszer, Kate; Verma, Aman D; Charland, Katia; Brewer, Timothy F; Brownstein, John S; Sun, Zhuoyu; Buckeridge, David L

    2012-01-01

    Objectives: There is a growing body of literature on malaria forecasting methods and the objective of our review is to identify and assess methods, including predictors, used to forecast malaria. Design: Scoping review. Two independent reviewers searched information sources, assessed studies for inclusion and extracted data from each study. Information sources: Search strategies were developed and the following databases were searched: CAB Abstracts, EMBASE, Global Health, MEDLINE, ProQuest Dissertations & Theses and Web of Science. Key journals and websites were also manually searched. Eligibility criteria for included studies: We included studies that forecasted incidence, prevalence or epidemics of malaria over time. A description of the forecasting model and an assessment of the forecast accuracy of the model were requirements for inclusion. Studies were restricted to human populations and to autochthonous transmission settings. Results: We identified 29 different studies that met our inclusion criteria for this review. The forecasting approaches included statistical modelling, mathematical modelling and machine learning methods. Climate-related predictors were used consistently in forecasting models, with the most common predictors being rainfall, relative humidity, temperature and the normalised difference vegetation index. Model evaluation was typically based on a reserved portion of data and accuracy was measured in a variety of ways including mean-squared error and correlation coefficients. We could not compare the forecast accuracy of models from the different studies as the evaluation measures differed across the studies. Conclusions: Applying different forecasting methods to the same data, exploring the predictive ability of non-environmental variables, including transmission reducing interventions and using common forecast accuracy measures will allow malaria researchers to compare and improve models and methods, which should improve the quality of malaria forecasting. PMID:23180505
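
    For illustration only, a small sketch of the holdout-style evaluation described above (mean-squared error and correlation on a reserved portion of a record), using entirely synthetic monthly counts:

    ```python
    import numpy as np

    def holdout_evaluation(dates, observed, predicted, split_date):
        """Evaluate forecasts on a reserved (post-split) portion of the record."""
        test = dates >= split_date
        err = predicted[test] - observed[test]
        mse = np.mean(err ** 2)
        corr = np.corrcoef(predicted[test], observed[test])[0, 1]
        return mse, corr

    # Hypothetical monthly malaria case counts and model predictions
    dates = np.arange("2005-01", "2010-01", dtype="datetime64[M]")
    rng = np.random.default_rng(0)
    observed = 100 + 30 * np.sin(2 * np.pi * np.arange(dates.size) / 12) + rng.normal(0, 10, dates.size)
    predicted = observed + rng.normal(0, 15, dates.size)   # stand-in forecasts
    print(holdout_evaluation(dates, observed, predicted, np.datetime64("2009-01")))
    ```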

  13. Evaluation of quantitative precipitation forecasts by TIGGE ensembles for south China during the presummer rainy season

    NASA Astrophysics Data System (ADS)

    Huang, Ling; Luo, Yali

    2017-08-01

    Based on The Observing System Research and Predictability Experiment Interactive Grand Global Ensemble (TIGGE) data set, this study evaluates the ability of global ensemble prediction systems (EPSs) from the European Centre for Medium-Range Weather Forecasts (ECMWF), U.S. National Centers for Environmental Prediction, Japan Meteorological Agency (JMA), Korean Meteorological Administration, and China Meteorological Administration (CMA) to predict presummer rainy season (April-June) precipitation in south China. Evaluation of 5 day forecasts in three seasons (2013-2015) demonstrates the higher skill of probability matching forecasts compared to simple ensemble mean forecasts and shows that the deterministic forecast is a close second. The EPSs overestimate light-to-heavy rainfall (0.1 to 30 mm/12 h) and underestimate heavier rainfall (>30 mm/12 h), with JMA being the worst. By analyzing the synoptic situations predicted by the identified more skillful (ECMWF) and less skillful (JMA and CMA) EPSs and the ensemble sensitivity for four representative cases of torrential rainfall, the transport of warm-moist air into south China by the low-level southwesterly flow, upstream of the torrential rainfall regions, is found to be a key synoptic factor that controls the quantitative precipitation forecast. The results also suggest that prediction of locally produced torrential rainfall is more challenging than prediction of more extensively distributed torrential rainfall. A slight improvement in the performance is obtained by shortening the forecast lead time from 30-36 h to 18-24 h to 6-12 h for the cases with large-scale forcing, but not for the locally produced cases.
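
    A brief sketch of the probability-matching idea referred to above, under the usual construction (spatial pattern of the ensemble mean, amplitudes drawn from the pooled member distribution); the ensemble here is synthetic:

    ```python
    import numpy as np

    def probability_matched_mean(ensemble):
        """Probability-matched mean (PM): keep the spatial pattern of the ensemble mean,
        but replace its ranked amplitudes with values drawn from the pooled member distribution."""
        n_members, n_points = ensemble.shape
        ens_mean = ensemble.mean(axis=0)
        # pooled values from all members, thinned to the number of grid points
        pooled = np.sort(ensemble.ravel())[::-1][::n_members][::-1]
        pm = np.empty_like(ens_mean)
        order = np.argsort(ens_mean)      # ranks of the ensemble-mean field
        pm[order] = np.sort(pooled)       # assign pooled amplitudes by rank
        return pm

    # Hypothetical 12-hour precipitation ensemble on a flattened grid (members x grid points)
    rng = np.random.default_rng(0)
    ens = rng.gamma(shape=0.8, scale=8.0, size=(20, 1000))
    pm_forecast = probability_matched_mean(ens)
    ```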

  14. Evaluation of weather forecast systems for storm surge modeling in the Chesapeake Bay

    NASA Astrophysics Data System (ADS)

    Garzon, Juan L.; Ferreira, Celso M.; Padilla-Hernandez, Roberto

    2018-01-01

    Accurate forecast of sea-level heights in coastal areas depends, among other factors, upon a reliable coupling of a meteorological forecast system to a hydrodynamic and wave system. This study evaluates the predictive skills of the coupled circulation and wind-wave model system (ADCIRC+SWAN) for simulating storm tides in the Chesapeake Bay, forced by six different products: (1) Global Forecast System (GFS), (2) Climate Forecast System (CFS) version 2, (3) North American Mesoscale Forecast System (NAM), (4) Rapid Refresh (RAP), (5) European Center for Medium-Range Weather Forecasts (ECMWF), and (6) the Atlantic hurricane database (HURDAT2). This evaluation is based on the hindcasting of four events: Irene (2011), Sandy (2012), Joaquin (2015), and Jonas (2016). By comparing the simulated water levels to observations at 13 monitoring stations, we found that the ADCIRC+SWAN system performed as follows: (1) the HURDAT2-based forcing exhibited the weakest statistical skill, owing to a noteworthy overprediction of the simulated wind speed; (2) the ECMWF, RAP, and NAM products captured the timing of the peak and, moderately well, its magnitude during all storms, with correlation coefficients ranging between 0.98 and 0.77; (3) the CFS system exhibited the worst averaged root-mean-square difference (excepting HURDAT2); (4) the GFS system (the lowest-horizontal-resolution product tested) resulted in a clear underprediction of the maximum water elevation. Overall, the simulations forced by the NAM and ECMWF systems produced the most accurate results for supporting water level forecasting in the Chesapeake Bay during both tropical and extra-tropical storms.

  15. Blending forest fire smoke forecasts with observed data can improve their utility for public health applications

    NASA Astrophysics Data System (ADS)

    Yuchi, Weiran; Yao, Jiayun; McLean, Kathleen E.; Stull, Roland; Pavlovic, Radenko; Davignon, Didier; Moran, Michael D.; Henderson, Sarah B.

    2016-11-01

    Fine particulate matter (PM2.5) generated by forest fires has been associated with a wide range of adverse health outcomes, including exacerbation of respiratory diseases and increased risk of mortality. Due to the unpredictable nature of forest fires, it is challenging for public health authorities to reliably evaluate the magnitude and duration of potential exposures before they occur. Smoke forecasting tools are a promising development from the public health perspective, but their widespread adoption is limited by their inherent uncertainties. Observed measurements from air quality monitoring networks and remote sensing platforms are more reliable, but they are inherently retrospective. It would be ideal to reduce the uncertainty in smoke forecasts by integrating any available observations. This study takes spatially resolved PM2.5 estimates from an empirical model that integrates air quality measurements with satellite data, and averages them with PM2.5 predictions from two smoke forecasting systems. Two different indicators of population respiratory health are then used to evaluate whether the blending improved the utility of the smoke forecasts. Among a total of six models, including two single forecasts and four blended forecasts, the blended estimates always performed better than the forecast values alone. Integrating measured observations into smoke forecasts could improve public health preparedness for smoke events, which are becoming more frequent and intense as the climate changes.
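
    A toy sketch of the blending step described above, here implemented as a simple weighted average of forecast PM2.5 with an observation-based estimate where one exists (the arrays and weight are placeholders, not the study's configuration):

    ```python
    import numpy as np

    def blend(forecast_pm25, observed_pm25, w_obs=0.5):
        """Blend forecast PM2.5 with an observation-based estimate via a weighted average,
        falling back to the forecast where no observation-based value is available."""
        return np.where(np.isnan(observed_pm25),
                        forecast_pm25,
                        w_obs * observed_pm25 + (1.0 - w_obs) * forecast_pm25)

    # Hypothetical gridded values for one day (ug/m3); NaN marks cells without an empirical estimate
    forecast = np.array([12.0, 35.0, 80.0, 20.0])
    empirical = np.array([18.0, np.nan, 55.0, 22.0])
    print(blend(forecast, empirical))   # -> [15.  35.  67.5 21. ]
    ```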

  16. Forecasting Responses of a Northern Peatland Carbon Cycle to Elevated CO2 and a Gradient of Experimental Warming

    NASA Astrophysics Data System (ADS)

    Jiang, Jiang; Huang, Yuanyuan; Ma, Shuang; Stacy, Mark; Shi, Zheng; Ricciuto, Daniel M.; Hanson, Paul J.; Luo, Yiqi

    2018-03-01

    The ability to forecast ecological carbon cycling is imperative to land management in a world where past carbon fluxes are no longer a clear guide in the Anthropocene. However, carbon-flux forecasting has not been practiced routinely like numerical weather prediction. This study explored (1) the relative contributions of model forcing data and parameters to uncertainty in forecasting flux- versus pool-based carbon cycle variables and (2) the time points when temperature and CO2 treatments may cause statistically detectable differences in those variables. We developed an online forecasting workflow (Ecological Platform for Assimilation of Data (EcoPAD)), which facilitates iterative data-model integration. EcoPAD automates data transfer from sensor networks, data assimilation, and ecological forecasting. We used the Spruce and Peatland Responses Under Changing Environments experiment data collected from 2011 to 2014 to constrain the parameters in the Terrestrial Ecosystem Model, forecast carbon cycle responses to elevated CO2 and a gradient of warming from 2015 to 2024, and specify uncertainties in the model output. Our results showed that data assimilation substantially reduces forecasting uncertainties. Interestingly, we found that the stochasticity of future external forcing contributed more to the uncertainty of forecasting future dynamics of C flux-related variables than did model parameters. However, parameter uncertainty primarily contributes to the uncertainty in forecasting C pool-related response variables. Given the uncertainties in forecasting carbon fluxes and pools, our analysis showed that statistically different responses of fast-turnover pools to various CO2 and warming treatments were observed sooner than those of slow-turnover pools. Our study has identified the sources of uncertainty in model prediction and thus helps to improve ecological carbon cycling forecasts in the future.
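
    A toy Monte Carlo illustration of the uncertainty attribution described above (forcing stochasticity versus parameter uncertainty for flux-like versus pool-like quantities); the two-parameter model below is a stand-in, not the Terrestrial Ecosystem Model or EcoPAD:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def carbon_model(gpp_forcing, turnover):
        """Toy flux/pool model (illustrative only): the flux responds directly to forcing,
        while the pool integrates the flux over a 10-year horizon."""
        flux = gpp_forcing - turnover * 100.0
        pool = 100.0 + flux.sum()
        return flux[-1], pool

    def spread(vary_forcing, vary_params, n=2000):
        """Forecast spread when only forcing, only parameters, or both are uncertain."""
        flux_out, pool_out = [], []
        for _ in range(n):
            gpp = 60.0 + (rng.normal(0, 5, 10) if vary_forcing else np.zeros(10))
            turnover = 0.55 + (rng.normal(0, 0.03) if vary_params else 0.0)
            f, p = carbon_model(gpp, turnover)
            flux_out.append(f)
            pool_out.append(p)
        return np.std(flux_out), np.std(pool_out)

    print("forcing only :", spread(True, False))   # flux spread dominated by forcing
    print("params only  :", spread(False, True))   # pool spread dominated by parameters
    ```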

  17. Operational Earthquake Forecasting of Aftershocks for New England

    NASA Astrophysics Data System (ADS)

    Ebel, J.; Fadugba, O. I.

    2015-12-01

    Although the forecasting of mainshocks is not possible, recent research demonstrates that probabilistic forecasts of expected aftershock activity following moderate and strong earthquakes are possible. Previous work has shown that aftershock sequences in intraplate regions behave similarly to those in California, and thus the operational aftershock forecasting methods that are currently employed in California can be adopted for use in areas of the eastern U.S. such as New England. In our application, immediately after a felt earthquake in New England, a forecast of expected aftershock activity for the next 7 days will be generated based on a generic aftershock activity model. Approximately 24 hours after the mainshock, the parameters of the aftershock model will be updated using the aftershock activity observed to that point in time, and a new forecast of expected aftershock activity for the next 7 days will be issued. The forecast will estimate the average number of weak, felt aftershocks and the average expected number of aftershocks based on the aftershock statistics of past New England earthquakes. The forecast also will estimate the probability that an earthquake that is stronger than the mainshock will take place during the next 7 days. The aftershock forecast will specify the expected aftershock locations as well as the areas over which aftershocks of different magnitudes could be felt. The system will use web pages, email and text messages to distribute the aftershock forecasts. For protracted aftershock sequences, new forecasts will be issued on a regular basis, such as weekly. Initially, the distribution system of the aftershock forecasts will be limited, but later it will be expanded as experience with and confidence in the system grows.
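
    A sketch of a generic aftershock-rate calculation of the kind such a system might start from, using a Reasenberg-Jones-type model with illustrative (California-style generic) parameter values rather than the New England statistics mentioned above:

    ```python
    import numpy as np

    def expected_aftershocks(m_main, m_min, t1, t2, a=-1.67, b=0.91, c=0.05, p=1.08):
        """Expected number of aftershocks with magnitude >= m_min in the window [t1, t2] days
        after a mainshock of magnitude m_main, using a generic Reasenberg-Jones-type model.
        The default parameters are illustrative generic values, not regional estimates."""
        rate_scale = 10.0 ** (a + b * (m_main - m_min))
        # integral of (t + c)^(-p) dt from t1 to t2 (for p != 1)
        time_term = ((t2 + c) ** (1.0 - p) - (t1 + c) ** (1.0 - p)) / (1.0 - p)
        return rate_scale * time_term

    def prob_at_least_one(expected):
        """Poisson probability of one or more such aftershocks."""
        return 1.0 - np.exp(-expected)

    n = expected_aftershocks(m_main=4.5, m_min=3.0, t1=1.0, t2=8.0)   # days 1-8 after the mainshock
    print(n, prob_at_least_one(n))
    ```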

  18. The Economic Value of Air Quality Forecasting

    NASA Astrophysics Data System (ADS)

    Anderson-Sumo, Tasha

    Both long-term and daily air quality forecasts provide essential information for protecting human health and limiting the associated costs. According to the American Lung Association, the estimated current annual cost of air pollution related illness in the United States, adjusted for inflation (3% per year), is approximately $152 billion. Many of the risks, such as hospital visits and mortality, are associated with poor air quality days (days where the Air Quality Index is greater than 100). Sensitive groups are particularly susceptible to the resulting conditions, and more accurate forecasts would help them take appropriate precautions. This research focuses on evaluating the utility of air quality forecasting in terms of its potential impacts by combining air quality forecasts with economic metrics. Our analysis includes data collected during the summertime ozone seasons between 2010 and 2012 from air quality models for the Washington, DC/Baltimore, MD region. The metrics that are relevant to our analysis include: (1) the number of times that a high ozone or particulate matter (PM) episode is correctly forecast, (2) the number of times that a high ozone or PM episode is forecast but does not occur, and (3) the number of times that the forecast predicts cleaner air when the air was observed to have high ozone or PM. Our collection of data included available air quality model forecasts of ozone and particulate matter from the U.S. Environmental Protection Agency (EPA)'s AIRNOW as well as observational data of ozone and particulate matter from Clean Air Partners. We evaluated the air quality forecasts against the observational data and found that the forecast models perform well for the Baltimore/Washington region over the time interval observed. We estimate that the potential savings for the Baltimore/Washington region amount to up to 5,905 lives and 5.9 billion dollars per year. This total assumes perfect compliance with poor air quality warnings and perfect air quality forecasts. Evaluating the economic utility of the forecasts is complicated by the fact that not everyone will comply; even with a low compliance rate of 5% and an average probability of detection of poor air quality days of 72% by the air quality models, we estimate that the forecasting program saves 412 lives, or 412 million dollars, per year for the region. These totals are as great as or greater than those of other typical yearly meteorological hazard programs, such as tornado or hurricane forecasting, and it is clear that the economic value of air quality forecasting in the Baltimore/Washington region is substantial.
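
    As a rough, purely illustrative sketch of the kind of arithmetic behind such estimates (the study's own inputs and scaling are not reproduced; all numbers below are placeholders):

    ```python
    def forecast_program_savings(potential_lives_saved, prob_of_detection, compliance,
                                 value_per_life_musd=1.0):
        """Back-of-envelope estimate: realized savings scale the potential savings by how often
        poor-air days are detected and by how many people act on the warnings (placeholder inputs)."""
        lives = potential_lives_saved * prob_of_detection * compliance
        return lives, lives * value_per_life_musd   # (lives per year, million USD per year)

    # Illustrative only: 72% detection probability and 5% compliance, as orders of magnitude
    print(forecast_program_savings(potential_lives_saved=5905, prob_of_detection=0.72, compliance=0.05))
    ```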

  19. Applied Meteorology Unit (AMU) Quarterly Report. First Quarter FY-05

    NASA Technical Reports Server (NTRS)

    Bauman, William; Wheeler, Mark; Lambert, Winifred; Case, Jonathan; Short, David

    2005-01-01

    This report summarizes the Applied Meteorology Unit (AMU) activities for the first quarter of Fiscal Year 2005 (October - December 2004). Tasks reviewed include: (1) Objective Lightning Probability Forecast: Phase I, (2) Severe Weather Forecast Decision Aid, (3) Hail Index, (4) Stable Low Cloud Evaluation, (5) Shuttle Ascent Camera Cloud Obstruction Forecast, (6) Range Standardization and Automation (RSA) and Legacy Wind Sensor Evaluation, (7) Advanced Regional Prediction System (ARPS) Optimization and Training Extension, and (8) User Control Interface for ARPS Data Analysis System (ADAS) Data Ingest.

  20. Evaluation of a Wildfire Smoke Forecasting System as a Tool for Public Health Protection

    PubMed Central

    Brauer, Michael; Henderson, Sarah B.

    2013-01-01

    Background: Exposure to wildfire smoke has been associated with cardiopulmonary health impacts. Climate change will increase the severity and frequency of smoke events, suggesting a need for enhanced public health protection. Forecasts of smoke exposure can facilitate public health responses. Objectives: We evaluated the utility of a wildfire smoke forecasting system (BlueSky) for public health protection by comparing its forecasts with observations and assessing their associations with population-level indicators of respiratory health in British Columbia, Canada. Methods: We compared BlueSky PM2.5 forecasts with PM2.5 measurements from air quality monitors, and BlueSky smoke plume forecasts with plume tracings from National Oceanic and Atmospheric Administration Hazard Mapping System remote sensing data. Daily counts of the asthma drug salbutamol sulfate dispensations and asthma-related physician visits were aggregated for each geographic local health area (LHA). Daily continuous measures of PM2.5 and binary measures of smoke plume presence, either forecasted or observed, were assigned to each LHA. Poisson regression was used to estimate the association between exposure measures and health indicators. Results: We found modest agreement between forecasts and observations, which was improved during intense fire periods. A 30-μg/m3 increase in BlueSky PM2.5 was associated with an 8% increase in salbutamol dispensations and a 5% increase in asthma-related physician visits. BlueSky plume coverage was associated with 5% and 6% increases in the two health indicators, respectively. The effects were similar for observed smoke, and generally stronger in very smoky areas. Conclusions: BlueSky forecasts showed modest agreement with retrospective measures of smoke and were predictive of respiratory health indicators, suggesting they can provide useful information for public health protection. Citation: Yao J, Brauer M, Henderson SB. 2013. Evaluation of a wildfire smoke forecasting system as a tool for public health protection. Environ Health Perspect 121:1142–1147; http://dx.doi.org/10.1289/ehp.1306768 PMID:23906969
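
    A minimal sketch of the Poisson-regression step described above, on synthetic daily data whose effect size is chosen to be roughly consistent with the reported ~8% increase (statsmodels is assumed to be available):

    ```python
    import numpy as np
    import statsmodels.api as sm

    # Hypothetical daily data for one local health area: forecast PM2.5 and dispensation counts
    rng = np.random.default_rng(0)
    pm25 = rng.gamma(2.0, 10.0, 365)                      # ug/m3
    counts = rng.poisson(np.exp(1.5 + 0.0026 * pm25))     # daily salbutamol dispensations

    X = sm.add_constant(pm25)
    fit = sm.GLM(counts, X, family=sm.families.Poisson()).fit()

    # Rate ratio for a 30 ug/m3 increase in forecast PM2.5
    beta = fit.params[1]
    print("percent increase per 30 ug/m3:", 100 * (np.exp(30 * beta) - 1))
    ```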

  1. Constraints on Rational Model Weighting, Blending and Selecting when Constructing Probability Forecasts given Multiple Models

    NASA Astrophysics Data System (ADS)

    Higgins, S. M. W.; Du, H. L.; Smith, L. A.

    2012-04-01

    Ensemble forecasting on a lead time of seconds over several years generates a large forecast-outcome archive, which can be used to evaluate and weight "models". Challenges which arise as the archive becomes smaller are investigated: in weather forecasting one typically has only thousands of forecasts; however, those launched 6 hours apart are not independent of each other, nor is it justified to mix seasons with different dynamics. Seasonal forecasts, as from ENSEMBLES and DEMETER, typically have fewer than 64 unique launch dates; decadal forecasts fewer than eight, and long-range climate forecasts arguably none. It is argued that one does not weight "models" so much as entire ensemble prediction systems (EPSs), and that the marginal value of an EPS will depend on the other members in the mix. The impact of using different skill scores is examined in the limits of both very large forecast-outcome archives (thereby evaluating the efficiency of the skill score) and in very small forecast-outcome archives (illustrating fundamental limitations due to sampling fluctuations and memory in the physical system being forecast). It is shown that blending with climatology (J. Bröcker and L.A. Smith, Tellus A, 60(4), 663-678, (2008)) tends to increase the robustness of the results; also a new kernel dressing methodology (simply ensuring that the expected probability mass tends to lie outside the range of the ensemble) is illustrated. Fair comparisons using seasonal forecasts from the ENSEMBLES project are used to illustrate the importance of these results with fairly small archives. The robustness of these results across the range of small, moderate and huge archives is demonstrated using imperfect models of perfectly known nonlinear (chaotic) dynamical systems. The implications these results hold for distinguishing the skill of a forecast from its value to a user of the forecast are discussed.
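
    A small sketch of blending a probabilistic forecast with climatology and choosing the blending weight from a (small) forecast-outcome archive; the probabilities, outcomes, and ignorance score used here are illustrative choices, not the paper's exact setup:

    ```python
    import numpy as np

    def ignorance_score(probs, outcomes):
        """Mean ignorance (negative log2 likelihood) of probability forecasts for binary outcomes."""
        p = np.where(outcomes == 1, probs, 1.0 - probs)
        return -np.mean(np.log2(np.clip(p, 1e-12, 1.0)))

    def blend_with_climatology(model_probs, clim_prob, alpha):
        """Shrink model probabilities toward climatology; alpha = 1 keeps the model, alpha = 0 keeps climatology."""
        return alpha * model_probs + (1.0 - alpha) * clim_prob

    # Hypothetical small archive of event probabilities and outcomes
    rng = np.random.default_rng(0)
    outcomes = rng.integers(0, 2, 40)
    model_probs = np.clip(0.5 * outcomes + rng.normal(0.25, 0.2, 40), 0.01, 0.99)
    clim_prob = outcomes.mean()

    # Choose the blending weight that minimizes the score on the archive
    alphas = np.linspace(0, 1, 21)
    scores = [ignorance_score(blend_with_climatology(model_probs, clim_prob, a), outcomes) for a in alphas]
    print("best alpha:", alphas[int(np.argmin(scores))])
    ```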

  2. Evaluating NMME Seasonal Forecast Skill for use in NASA SERVIR Hub Regions

    NASA Technical Reports Server (NTRS)

    Roberts, J. Brent; Roberts, Franklin R.

    2013-01-01

    The U.S. National Multi-Model Ensemble seasonal forecasting system is providing hindcast and real-time data streams to be used in assessing and improving seasonal predictive capacity. The coupled forecasts have numerous potential applications, both national and international in scope. The NASA / USAID SERVIR project, which leverages satellite and modeling-based resources for environmental decision making in developing nations, is focusing on the evaluation of NMME forecasts specifically for use in driving application models in hub regions including East Africa, the Hindu Kush-Himalayan (HKH) region and Mesoamerica. A prerequisite for seasonal forecast use in application modeling (e.g. hydrology, agriculture) is bias correction and skill assessment. Efforts to address systematic biases and multi-model combination in support of NASA SERVIR impact modeling requirements will be highlighted. Specifically, quantile-quantile mapping for bias correction has been implemented for all archived NMME hindcasts. Both deterministic and probabilistic skill estimates for raw, bias-corrected, and multi-model ensemble forecasts as a function of forecast lead will be presented for temperature and precipitation. Complementing this statistical assessment will be case studies of significant events, for example, the ability of the NMME forecast suite to anticipate the 2010/2011 drought in the Horn of Africa and its relationship to evolving SST patterns.
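
    A minimal sketch of empirical quantile-quantile mapping of the kind described above, with synthetic hindcast and observation samples standing in for the NMME archive:

    ```python
    import numpy as np

    def quantile_map(forecast, hindcast, observations):
        """Empirical quantile-quantile mapping: replace each forecast value with the observed value
        at the same quantile of the hindcast distribution."""
        quantiles = np.interp(forecast, np.sort(hindcast),
                              np.linspace(0.0, 1.0, len(hindcast)))
        return np.quantile(observations, quantiles)

    # Hypothetical hindcast precipitation with a wet bias relative to observations
    rng = np.random.default_rng(0)
    hindcast = rng.gamma(2.0, 60.0, 300)       # model hindcast monthly totals (mm)
    observations = rng.gamma(2.0, 45.0, 300)   # corresponding observed totals (mm)
    new_forecast = np.array([150.0, 300.0, 40.0])
    print(quantile_map(new_forecast, hindcast, observations))
    ```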

  3. A Performance Evaluation of the National Air Quality Forecast Capability for the Summer of 2007

    EPA Science Inventory

    This paper provides a performance evaluation of the real-time, CONUS-scale National Air Quality Forecast Capability (NAQFC), developed collaboratively by the National Oceanic and Atmospheric Administration (NOAA) and Environmental Protection Agency (EPA), that supported, in part,...

  4. Quantification of Forecasting and Change-Point Detection Methods for Predictive Maintenance

    DTIC Science & Technology

    2015-08-19

    ...industries to manage the service life of equipment, and also to detect precursors to the failure of components found in nuclear power plants, wind turbines... detection methods for predictive maintenance... sensitive to changes related to abnormality. (Contract FA2386-14-1-4096; Grant 14IOA015, AOARD-144096. Subject terms: predictive maintenance, forecasting.)

  5. ECMWF Extreme Forecast Index for water vapor transport: A forecast tool for atmospheric rivers and extreme precipitation

    NASA Astrophysics Data System (ADS)

    Lavers, David A.; Pappenberger, Florian; Richardson, David S.; Zsoter, Ervin

    2016-11-01

    In winter, heavy precipitation and floods along the west coasts of midlatitude continents are largely caused by intense water vapor transport (integrated vapor transport (IVT)) within the atmospheric river of extratropical cyclones. This study builds on previous findings that showed that forecasts of IVT have higher predictability than precipitation, by applying and evaluating the European Centre for Medium-Range Weather Forecasts Extreme Forecast Index (EFI) for IVT in ensemble forecasts during three winters across Europe. We show that the IVT EFI is more able (than the precipitation EFI) to capture extreme precipitation in forecast week 2 during forecasts initialized in a positive North Atlantic Oscillation (NAO) phase; conversely, the precipitation EFI is better during the negative NAO phase and at shorter leads. An IVT EFI example for storm Desmond in December 2015 highlights its potential to identify upcoming hydrometeorological extremes, which may prove useful to the user and forecasting communities.
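
    For illustration, a sketch of one common formulation of the Extreme Forecast Index computed from an ensemble and a model-climate sample; the formula and the synthetic IVT values below are assumptions, not taken from the paper:

    ```python
    import numpy as np

    def extreme_forecast_index(ensemble, climate_sample, n_p=99):
        """One common formulation of the Extreme Forecast Index (assumed here):
        EFI = (2/pi) * integral_0^1 (p - F_f(p)) / sqrt(p (1 - p)) dp,
        where F_f(p) is the fraction of ensemble members below the model-climate p-quantile.
        Values near +1 indicate a forecast shifted far into the upper climatological tail."""
        p = np.arange(1, n_p + 1) / (n_p + 1.0)             # interior quantile levels
        clim_q = np.quantile(climate_sample, p)
        F_f = np.array([(ensemble < q).mean() for q in clim_q])
        integrand = (p - F_f) / np.sqrt(p * (1.0 - p))
        return (2.0 / np.pi) * integrand.sum() / (n_p + 1.0)   # simple Riemann approximation

    # Hypothetical IVT values (kg m-1 s-1): model climate versus an ensemble shifted to high transport
    rng = np.random.default_rng(0)
    climate = rng.gamma(4.0, 60.0, 5000)
    ensemble = rng.gamma(4.0, 60.0, 51) + 400.0
    print(extreme_forecast_index(ensemble, climate))        # strongly positive for an extreme IVT case
    ```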

  6. Evaluating space weather forecasts of geomagnetic activity from a user perspective

    NASA Astrophysics Data System (ADS)

    Thomson, A. W. P.

    2000-12-01

    Decision Theory can be used as a tool for discussing the relative costs of complacency and false alarms with users of space weather forecasts. We describe a new metric for the value of space weather forecasts, derived from Decision Theory. In particular we give equations for the level of accuracy that a forecast must exceed in order to be useful to a specific customer. The technique is illustrated by simplified example forecasts for global geomagnetic activity and for geophysical exploration and power grid management in the British Isles.
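
    A small sketch of the static cost/loss reasoning that underlies such decision-theoretic metrics: a user with protection cost C and potential loss L acts when the forecast probability exceeds C/L, and the forecast has value only if this beats the cheaper default. The forecasts and outcomes below are synthetic:

    ```python
    import numpy as np

    def expense_with_forecast(prob_forecasts, outcomes, cost, loss):
        """Mean expense for a user who protects whenever the forecast probability exceeds C/L."""
        act = prob_forecasts > cost / loss
        return np.where(act, cost, outcomes * loss).mean()

    # Hypothetical archive of geomagnetic-storm probability forecasts and outcomes
    rng = np.random.default_rng(0)
    outcomes = rng.integers(0, 2, 200)
    probs = np.clip(0.6 * outcomes + rng.normal(0.2, 0.15, 200), 0.0, 1.0)

    cost, loss = 1.0, 5.0                       # protective cost C and potential loss L (C/L = 0.2)
    always = cost                               # always protect
    never = (outcomes * loss).mean()            # never protect
    with_fc = expense_with_forecast(probs, outcomes, cost, loss)
    print(always, never, with_fc)
    # The forecast is useful to this user only if its expense beats the cheaper of the two defaults.
    ```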

  7. Forecasting infectious disease emergence subject to seasonal forcing.

    PubMed

    Miller, Paige B; O'Dea, Eamon B; Rohani, Pejman; Drake, John M

    2017-09-06

    Despite high vaccination coverage, many childhood infections pose a growing threat to human populations. Accurate disease forecasting would be of tremendous value to public health. Forecasting disease emergence using early warning signals (EWS) is possible in non-seasonal models of infectious diseases. Here, we assessed whether EWS also anticipate disease emergence in seasonal models. We simulated the dynamics of an immunizing infectious pathogen approaching the tipping point to disease endemicity. To explore the effect of seasonality on the reliability of early warning statistics, we varied the amplitude of fluctuations around the average transmission. We proposed and analyzed two new early warning signals based on the wavelet spectrum. We measured the reliability of the early warning signals depending on the strength of their trend preceding the tipping point and then calculated the Area Under the Curve (AUC) statistic. Early warning signals were reliable when disease transmission was subject to seasonal forcing. Wavelet-based early warning signals were as reliable as other conventional early warning signals. We found that removing seasonal trends, prior to analysis, did not improve early warning statistics uniformly. Early warning signals anticipate the onset of critical transitions for infectious diseases which are subject to seasonal forcing. Wavelet-based early warning statistics can also be used to forecast infectious disease.
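
    A toy sketch of a conventional early warning statistic (trend in rolling lag-1 autocorrelation) and its AUC, using simulated AR(1) series as stand-ins for emerging and non-emerging disease dynamics; this is not the wavelet-based statistic proposed in the study:

    ```python
    import numpy as np
    from scipy.stats import kendalltau

    def simulate_ar1(phis, rng):
        """AR(1) series whose coefficient follows `phis`; a rising phi mimics critical slowing down."""
        x, out = 0.0, []
        for phi in phis:
            x = phi * x + rng.normal()
            out.append(x)
        return np.array(out)

    def lag1_autocorr(x):
        return np.corrcoef(x[:-1], x[1:])[0, 1]

    def ews_trend(series, window=52):
        """Early warning statistic: Kendall trend of the rolling lag-1 autocorrelation."""
        stat = [lag1_autocorr(series[i - window:i]) for i in range(window, len(series))]
        tau, _ = kendalltau(np.arange(len(stat)), stat)
        return tau

    def auc(emerging_scores, null_scores):
        """AUC: probability that an emerging simulation scores above a null simulation."""
        e = np.asarray(emerging_scores)[:, None]
        n = np.asarray(null_scores)[None, :]
        return (e > n).mean() + 0.5 * (e == n).mean()

    rng = np.random.default_rng(0)
    emerging = [ews_trend(simulate_ar1(np.linspace(0.2, 0.95, 300), rng)) for _ in range(20)]
    null = [ews_trend(simulate_ar1(np.full(300, 0.2), rng)) for _ in range(20)]
    print("AUC of the autocorrelation-trend EWS:", auc(emerging, null))
    ```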

  8. Evaluating Downscaling Methods for Seasonal Climate Forecasts over East Africa

    NASA Technical Reports Server (NTRS)

    Roberts, J. Brent; Robertson, Franklin R.; Bosilovich, Michael; Lyon, Bradfield; Funk, Chris

    2013-01-01

    The U.S. National Multi-Model Ensemble seasonal forecasting system is providing hindcast and real-time data streams to be used in assessing and improving seasonal predictive capacity. The NASA / USAID SERVIR project, which leverages satellite and modeling-based resources for environmental decision making in developing nations, is focusing on the evaluation of NMME forecasts specifically for use in impact modeling within hub regions including East Africa, the Hindu Kush-Himalayan (HKH) region and Mesoamerica. One of the participating models in NMME is the NASA Goddard Earth Observing System (GEOS5). This work will present an intercomparison of downscaling methods using the GEOS5 seasonal forecasts of temperature and precipitation over East Africa. The current seasonal forecasting system provides monthly averaged forecast anomalies. These anomalies must be spatially downscaled and temporally disaggregated for use in application modeling (e.g. hydrology, agriculture). There are several available downscaling methodologies that can be implemented to accomplish this goal. Selected methods include both a non-homogeneous hidden Markov model and an analogue-based approach. A particular emphasis will be placed on quantifying the ability of different methods to capture the intermittency of precipitation within both the short and long rain seasons. Further, the ability to capture spatial covariances will be assessed. Both probabilistic and deterministic skill measures will be evaluated over the hindcast period.

  9. Evaluating Downscaling Methods for Seasonal Climate Forecasts over East Africa

    NASA Technical Reports Server (NTRS)

    Robertson, Franklin R.; Roberts, J. Brent; Bosilovich, Michael; Lyon, Bradfield

    2013-01-01

    The U.S. National Multi-Model Ensemble seasonal forecasting system is providing hindcast and real-time data streams to be used in assessing and improving seasonal predictive capacity. The NASA / USAID SERVIR project, which leverages satellite and modeling-based resources for environmental decision making in developing nations, is focusing on the evaluation of NMME forecasts specifically for use in impact modeling within hub regions including East Africa, the Hindu Kush-Himalayan (HKH) region and Mesoamerica. One of the participating models in NMME is the NASA Goddard Earth Observing System (GEOS5). This work will present an intercomparison of downscaling methods using the GEOS5 seasonal forecasts of temperature and precipitation over East Africa. The current seasonal forecasting system provides monthly averaged forecast anomalies. These anomalies must be spatially downscaled and temporally disaggregated for use in application modeling (e.g. hydrology, agriculture). There are several available downscaling methodologies that can be implemented to accomplish this goal. Selected methods include both a non-homogeneous hidden Markov model and an analogue-based approach. A particular emphasis will be placed on quantifying the ability of different methods to capture the intermittency of precipitation within both the short and long rain seasons. Further, the ability to capture spatial covariances will be assessed. Both probabilistic and deterministic skill measures will be evaluated over the hindcast period.
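
    A minimal sketch of an analogue-based downscaling step of the kind mentioned above: the forecast coarse-scale anomaly is matched to similar historical months and their observed fine-scale daily fields are reused (all arrays are hypothetical placeholders):

    ```python
    import numpy as np

    def analogue_downscale(forecast_anomaly, archive_coarse, archive_fine_daily, k=5):
        """Find the k historical months whose coarse-scale anomaly is closest to the forecast anomaly
        and return their observed fine-scale daily fields as candidate disaggregations."""
        dist = np.abs(archive_coarse - forecast_anomaly)
        idx = np.argsort(dist)[:k]
        return archive_fine_daily[idx]     # k candidate daily sequences for the target month

    # Hypothetical archive: 200 historical months, each with a coarse anomaly and 30 daily rainfall values
    rng = np.random.default_rng(0)
    archive_coarse = rng.normal(0.0, 1.0, 200)
    archive_fine_daily = rng.gamma(0.5, 6.0, size=(200, 30))
    candidates = analogue_downscale(forecast_anomaly=1.3, archive_coarse=archive_coarse,
                                    archive_fine_daily=archive_fine_daily)
    print(candidates.shape)   # (5, 30): five analogue months of daily rainfall
    ```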

  10. Challenges for operational forecasting and early warning of rainfall induced landslides

    NASA Astrophysics Data System (ADS)

    Guzzetti, Fausto

    2017-04-01

    In many areas of the world, landslides occur every year, claiming lives and producing severe economic and environmental damage. Many of the landslides with human or economic consequences are the result of intense or prolonged rainfall. For this reason, in many areas the timely forecast of rainfall-induced landslides is of both scientific interest and social relevance. In recent years, there has been a mounting interest and an increasing demand for operational landslide forecasting, and for associated landslide early warning systems. Despite the relevance of the problem, and the increasing interest and demand, only a few systems have been designed, and are currently operated. Inspection of the - limited - literature on operational landslide forecasting, and on the associated early warning systems, reveals that common criteria and standards for the design, the implementation, the operation, and the evaluation of the performances of the systems, are lacking. This limits the possibility to compare and to evaluate the systems critically, to identify their inherent strengths and weaknesses, and to improve the performance of the systems. Lack of common criteria and of established standards can also limit the credibility of the systems, and consequently their usefulness and potential practical impact. Landslides are very diversified phenomena, and the information and the modelling tools used to attempt landslide forecasting vary largely, depending on the type and size of the landslides, the extent of the geographical area considered, the timeframe of the forecasts, and the scope of the predictions. Consequently, systems for landslide forecasting and early warning can be designed and implemented at several different geographical scales, from the local (site or slope specific) to the regional, or even national scale. The talk focuses on regional to national scale landslide forecasting systems, and specifically on operational systems based on empirical rainfall threshold models. Building on the experience gained in designing, implementing, and operating national and regional landslide forecasting systems in Italy, and on a preliminary review of the existing literature on regional landslide early warning systems, the talk discusses concepts, limitations and challenges inherent to the design of reliable forecasting and early warning systems for rainfall-triggered landslides, the evaluation of the performances of the systems, and problems related to the use of the forecasts and the issuing of landslide warnings. Several of the typical elements of an operational landslide forecasting system are considered, including: (i) the rainfall and landslide information used to establish the threshold models, (ii) the methods and tools used to define the empirical rainfall thresholds, and their associated uncertainty, (iii) the quality (e.g., the temporal and spatial resolution) of the rainfall information used for operational forecasting, including rain gauge and radar measurements, satellite estimates, and quantitative weather forecasts, (iv) the ancillary information used to prepare the forecasts, including e.g., the terrain subdivisions and the landslide susceptibility zonations, (v) the criteria used to transform the forecasts into landslide warnings and the methods used to communicate the warnings, and (vi) the criteria and strategies adopted to evaluate the performances of the systems, and to define minimum or optimal performance levels.

  11. Evaluation of probabilistic forecasts with the scoringRules package

    NASA Astrophysics Data System (ADS)

    Jordan, Alexander; Krüger, Fabian; Lerch, Sebastian

    2017-04-01

    Over the last decades probabilistic forecasts in the form of predictive distributions have become popular in many scientific disciplines. With the proliferation of probabilistic models arises the need for decision-theoretically principled tools to evaluate the appropriateness of models and forecasts in a generalized way in order to better understand sources of prediction errors and to improve the models. Proper scoring rules are functions S(F,y) which evaluate the accuracy of a forecast distribution F, given that an outcome y was observed. In coherence with decision-theoretical principles they allow the comparison of alternative models, a crucial ability given the variety of theories, data sources and statistical specifications that is available in many situations. This contribution presents the software package scoringRules for the statistical programming language R, which provides functions to compute popular scoring rules such as the continuous ranked probability score for a variety of distributions F that come up in applied work. For univariate variables, two main classes are parametric distributions like normal, t, or gamma distributions, and distributions that are not known analytically, but are indirectly described through a sample of simulation draws. For example, ensemble weather forecasts take this form. The scoringRules package aims to be a convenient dictionary-like reference for computing scoring rules. We offer state-of-the-art implementations of several known (but not routinely applied) formulas, and implement closed-form expressions that were previously unavailable. Whenever more than one implementation variant exists, we offer statistically principled default choices. Recent developments include the addition of scoring rules to evaluate multivariate forecast distributions. The use of the scoringRules package is illustrated in an example on post-processing ensemble forecasts of temperature.
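
    For illustration, a sample-based CRPS computed directly from its kernel representation CRPS(F, y) = E|X - y| - 0.5 E|X - X'|; this mirrors the standard formula rather than the scoringRules implementation itself:

    ```python
    import numpy as np

    def crps_sample(y, ensemble):
        """CRPS for a forecast given as a sample (e.g. ensemble members), using
        CRPS(F, y) = E|X - y| - 0.5 E|X - X'| with empirical expectations."""
        x = np.asarray(ensemble, float)
        term1 = np.mean(np.abs(x - y))
        term2 = 0.5 * np.mean(np.abs(x[:, None] - x[None, :]))
        return term1 - term2

    # Hypothetical post-processed temperature ensemble and verifying observation
    rng = np.random.default_rng(0)
    ensemble = rng.normal(21.0, 1.5, 50)
    print(crps_sample(20.2, ensemble))
    ```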

  12. Results of the Regional Earthquake Likelihood Models (RELM) test of earthquake forecasts in California.

    PubMed

    Lee, Ya-Ting; Turcotte, Donald L; Holliday, James R; Sachs, Michael K; Rundle, John B; Chen, Chien-Chih; Tiampo, Kristy F

    2011-10-04

    The Regional Earthquake Likelihood Models (RELM) test of earthquake forecasts in California was the first competitive evaluation of forecasts of future earthquake occurrence. Participants submitted expected probabilities of occurrence of M ≥ 4.95 earthquakes in 0.1° × 0.1° cells for the period January 1, 2006, to December 31, 2010. Probabilities were submitted for 7,682 cells in California and adjacent regions. During this period, 31 M ≥ 4.95 earthquakes occurred in the test region. These earthquakes occurred in 22 test cells. This seismic activity was dominated by earthquakes associated with the M = 7.2, April 4, 2010, El Mayor-Cucapah earthquake in northern Mexico. This earthquake occurred in the test region, and 16 of the other 30 earthquakes in the test region could be associated with it. Nine complete forecasts were submitted by six participants. In this paper, we present the forecasts in a way that allows the reader to evaluate which forecast is the most "successful" in terms of the locations of future earthquakes. We conclude that the RELM test was a success and suggest ways in which the results can be used to improve future forecasts.

  13. Results of the Regional Earthquake Likelihood Models (RELM) test of earthquake forecasts in California

    PubMed Central

    Lee, Ya-Ting; Turcotte, Donald L.; Holliday, James R.; Sachs, Michael K.; Rundle, John B.; Chen, Chien-Chih; Tiampo, Kristy F.

    2011-01-01

    The Regional Earthquake Likelihood Models (RELM) test of earthquake forecasts in California was the first competitive evaluation of forecasts of future earthquake occurrence. Participants submitted expected probabilities of occurrence of M≥4.95 earthquakes in 0.1° × 0.1° cells for the period January 1, 2006, to December 31, 2010. Probabilities were submitted for 7,682 cells in California and adjacent regions. During this period, 31 M≥4.95 earthquakes occurred in the test region. These earthquakes occurred in 22 test cells. This seismic activity was dominated by earthquakes associated with the M = 7.2, April 4, 2010, El Mayor–Cucapah earthquake in northern Mexico. This earthquake occurred in the test region, and 16 of the other 30 earthquakes in the test region could be associated with it. Nine complete forecasts were submitted by six participants. In this paper, we present the forecasts in a way that allows the reader to evaluate which forecast is the most “successful” in terms of the locations of future earthquakes. We conclude that the RELM test was a success and suggest ways in which the results can be used to improve future forecasts. PMID:21949355
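
    A minimal sketch of the kind of cell-wise Poisson likelihood scoring used to compare such gridded forecasts, with hypothetical rates and counts (the actual RELM tests involve additional conditioning and test statistics):

    ```python
    import numpy as np
    from scipy.special import gammaln

    def poisson_log_likelihood(expected_counts, observed_counts):
        """Joint Poisson log-likelihood of observed earthquake counts per cell given forecast rates."""
        lam = np.clip(np.asarray(expected_counts, float), 1e-12, None)
        n = np.asarray(observed_counts, float)
        return np.sum(-lam + n * np.log(lam) - gammaln(n + 1.0))

    # Hypothetical 5-year forecasts over a handful of cells, compared on the same observed counts
    observed = np.array([0, 2, 0, 1, 0, 0, 3, 0])
    forecast_A = np.array([0.1, 1.5, 0.2, 0.8, 0.1, 0.1, 2.5, 0.2])
    forecast_B = np.full(8, observed.sum() / 8.0)          # spatially uniform reference
    print(poisson_log_likelihood(forecast_A, observed) - poisson_log_likelihood(forecast_B, observed))
    # A positive difference favors forecast A over the uniform reference.
    ```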

  14. Predicting Near-surface Winds with WindNinja for Wind Energy Applications

    NASA Astrophysics Data System (ADS)

    Wagenbrenner, N. S.; Forthofer, J.; Shannon, K.; Butler, B.

    2016-12-01

    WindNinja is a high-resolution diagnostic wind model widely used by operational wildland fire managers to predict how near-surface winds may influence fire behavior. Many of the features which have made WindNinja successful for wildland fire are also important for wind energy applications. Some of these features include flexible runtime options which allow the user to initialize the model with coarser scale weather model forecasts, sparse weather station observations, or a simple domain-average wind for what-if scenarios; built-in data fetchers for required model inputs, including gridded terrain and vegetation data and operational weather model forecasts; relatively fast runtimes on simple hardware; an extremely user-friendly interface; and a number of output format options, including KMZ files for viewing in Google Earth and GeoPDFs which can be viewed in a GIS. The recent addition of a conservation of mass and momentum solver based on OpenFOAM libraries further increases the utility of WindNinja to modelers in the wind energy sector interested not just in mean wind predictions, but also in turbulence metrics. Here we provide an evaluation of WindNinja forecasts based on (1) operational weather model forecasts and (2) weather station observations provided by the MesoWest API. We also compare the high-resolution WindNinja forecasts to the coarser operational weather model forecasts. For this work we will use the High Resolution Rapid Refresh (HRRR) model and the North American Mesoscale (NAM) model. Forecasts will be evaluated with data collected in the Birch Creek valley of eastern Idaho, USA between June-October 2013. Near-surface wind, turbulence data, and vertical wind and temperature profiles were collected at very high spatial resolution during this field campaign specifically for use in evaluating high-resolution wind models like WindNinja. This work demonstrates the ability of WindNinja to generate very high-resolution wind forecasts for wind energy applications and evaluates the forecasts produced by two different initialization methods with data collected in a broad valley surrounded by complex terrain.
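
    For illustration, a small sketch of station-based verification statistics for wind forecasts (speed bias and RMSE, plus a wrap-around-aware direction error); the observation and forecast values are placeholders:

    ```python
    import numpy as np

    def wind_errors(fcst_speed, obs_speed, fcst_dir, obs_dir):
        """Bias and RMSE for wind speed plus a circular mean absolute error for wind direction
        (directions in degrees; the 360/0 wrap-around is handled explicitly)."""
        bias = np.mean(fcst_speed - obs_speed)
        rmse = np.sqrt(np.mean((fcst_speed - obs_speed) ** 2))
        ddir = np.abs(fcst_dir - obs_dir) % 360.0
        ddir = np.where(ddir > 180.0, 360.0 - ddir, ddir)
        return bias, rmse, np.mean(ddir)

    # Hypothetical station observations vs. downscaled forecasts
    obs_speed = np.array([3.2, 5.1, 7.8, 2.4])
    fcst_speed = np.array([2.9, 6.0, 7.1, 3.0])
    obs_dir = np.array([350.0, 10.0, 180.0, 90.0])
    fcst_dir = np.array([5.0, 355.0, 170.0, 120.0])
    print(wind_errors(fcst_speed, obs_speed, fcst_dir, obs_dir))
    ```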

  15. Assimilation of lightning data by nudging tropospheric water vapor and applications to numerical forecasts of convective events

    NASA Astrophysics Data System (ADS)

    Dixon, Kenneth

    A lightning data assimilation technique is developed for use with observations from the World Wide Lightning Location Network (WWLLN). The technique nudges the water vapor mixing ratio toward saturation within 10 km of a lightning observation. This technique is applied to deterministic forecasts of convective events on 29 June 2012, 17 November 2013, and 19 April 2011 as well as an ensemble forecast of the 29 June 2012 event using the Weather Research and Forecasting (WRF) model. Lightning data are assimilated over the first 3 hours of the forecasts, and the subsequent impact on forecast quality is evaluated. The nudged deterministic simulations for all events produce composite reflectivity fields that are closer to observations. For the ensemble forecasts of the 29 June 2012 event, the improvement in forecast quality from lightning assimilation is more subtle than for the deterministic forecasts, suggesting that the lightning assimilation may improve ensemble convective forecasts where conventional observations (e.g., aircraft, surface, radiosonde, satellite) are less dense or unavailable.
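
    A schematic sketch of the nudging idea described above: within 10 km of a lightning observation, the water vapor mixing ratio is moved part of the way toward saturation. The weighting and time-stepping details of the actual WRF implementation are not reproduced here:

    ```python
    import numpy as np

    def nudge_toward_saturation(qv, qv_sat, dist_to_flash_km, radius_km=10.0, weight=0.5):
        """Within `radius_km` of a lightning observation, move the water vapor mixing ratio
        part of the way toward its saturation value (simplified, single-step illustration)."""
        near = dist_to_flash_km <= radius_km
        return np.where(near & (qv < qv_sat), qv + weight * (qv_sat - qv), qv)

    # Hypothetical grid points (mixing ratios in g/kg) and distances to the nearest WWLLN flash
    qv = np.array([9.0, 12.0, 14.0, 8.0])
    qv_sat = np.array([15.0, 15.5, 16.0, 14.0])
    dist = np.array([3.0, 8.0, 25.0, 5.0])
    print(nudge_toward_saturation(qv, qv_sat, dist))
    ```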

  16. Evaluation of Wind Power Forecasts from the Vermont Weather Analytics Center and Identification of Improvements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Optis, Michael; Scott, George N.; Draxl, Caroline

    The goal of this analysis was to assess the wind power forecast accuracy of the Vermont Weather Analytics Center (VTWAC) forecast system and to identify potential improvements to the forecasts. Based on the analysis at Georgia Mountain, the following recommendations for improving forecast performance were made: 1. Resolve the significant negative forecast bias in February-March 2017 (50% underprediction on average) 2. Improve the ability of the forecast model to capture the strong diurnal cycle of wind power 3. Add ability for forecast model to assess internal wake loss, particularly at sites where strong diurnal shifts in wind direction are present. Data availability and quality limited the robustness of this forecast assessment. A more thorough analysis would be possible given a longer period of record for the data (at least one full year), detailed supervisory control and data acquisition data for each wind plant, and more detailed information on the forecast system input data and methodologies.

  17. Verification of space weather forecasts at the UK Met Office

    NASA Astrophysics Data System (ADS)

    Bingham, S.; Sharpe, M.; Jackson, D.; Murray, S.

    2017-12-01

    The UK Met Office Space Weather Operations Centre (MOSWOC) has produced space weather guidance twice a day since its official opening in 2014. Guidance includes 4-day probabilistic forecasts of X-ray flares, geomagnetic storms, high-energy electron events and high-energy proton events. Evaluation of such forecasts is important to forecasters, stakeholders, model developers and users to understand the performance of these forecasts and also strengths and weaknesses to enable further development. Met Office terrestrial near real-time verification systems have been adapted to provide verification of X-ray flare and geomagnetic storm forecasts. Verification is updated daily to produce Relative Operating Characteristic (ROC) curves and Reliability diagrams, and rolling Ranked Probability Skill Scores (RPSSs) thus providing understanding of forecast performance and skill. Results suggest that the MOSWOC issued X-ray flare forecasts are usually not statistically significantly better than a benchmark climatological forecast (where the climatology is based on observations from the previous few months). By contrast, the issued geomagnetic storm activity forecast typically performs better against this climatological benchmark.
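
    A minimal sketch of a rolling Ranked Probability Skill Score of the kind mentioned above, for ordered-category probabilistic forecasts verified against a climatological benchmark; the categories, probabilities, and climatology are hypothetical:

    ```python
    import numpy as np

    def rps(forecast_probs, observed_category):
        """Ranked probability score for one forecast over ordered categories (e.g. activity bands)."""
        cum_f = np.cumsum(forecast_probs)
        obs = np.zeros_like(forecast_probs)
        obs[observed_category] = 1.0
        cum_o = np.cumsum(obs)
        return np.sum((cum_f - cum_o) ** 2)

    def rpss(forecasts, observations, climatology):
        """Skill relative to a climatological forecast; values above 0 mean the issued forecasts add skill."""
        rps_f = np.mean([rps(f, o) for f, o in zip(forecasts, observations)])
        rps_c = np.mean([rps(climatology, o) for o in observations])
        return 1.0 - rps_f / rps_c

    # Hypothetical 4-category geomagnetic-storm forecasts for a few days
    forecasts = np.array([[0.6, 0.3, 0.08, 0.02],
                          [0.2, 0.5, 0.2, 0.1],
                          [0.7, 0.2, 0.08, 0.02]])
    observations = [0, 2, 0]
    climatology = np.array([0.55, 0.3, 0.1, 0.05])   # based on the previous few months
    print(rpss(forecasts, observations, climatology))
    ```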

  18. General-aviation's view of progress in the aviation weather system

    NASA Technical Reports Server (NTRS)

    Lundgren, Douglas J.

    1988-01-01

    For all its activity statistics, general aviation is the most vulnerable to hazardous weather. Of concern to the general aviation industry are: (1) the slow pace of getting units of the Automated Weather Observation System (AWOS) to the field; (2) the efforts of the National Weather Service to withdraw from both the observation and dissemination roles of the aviation weather system; (3) the need for more observation points to improve the accuracy of terminal and area forecasts; (4) the need for improvements in all area forecasts, terminal forecasts, and winds aloft forecasts; (5) slow progress in cockpit weather displays; (6) the erosion of transcribed weather broadcasts (TWEB) and other deficiencies in weather information dissemination; (7) the need to push to make the Direct User Access Terminal (DUAT) a reality; and (8) the need to improve severe weather (thunderstorm) warning systems.

  19. Data Analysis, Modeling, and Ensemble Forecasting to Support NOWCAST and Forecast Activities at the Fallon Naval Station

    DTIC Science & Technology

    2010-09-30

    ...and climate forecasting and use of satellite data assimilation for model evaluation. He is a task leader on another NSF EPSCoR project for the... Approved for public release; distribution is unlimited. ...observations including remotely sensed data. Objectives: the main objectives of the study are (1) to further develop, test, and continue twice-daily...

  20. Data Analysis, Modeling, and Ensemble Forecasting to Support NOWCAST and Forecast Activities at the Fallon Naval Station

    DTIC Science & Technology

    2011-09-30

    ...forecasting and use of satellite data assimilation for model evaluation (Jiang et al., 2011a). He is a task leader on another NSF EPSCoR project... K. Horvath, R. Belu, 2011a: Application of variational data assimilation to dynamical downscaling of regional wind energy resources in the western... Approved for public release; distribution is unlimited.

  1. WASTE TREATMENT PLANT (WTP) LIQUID EFFLUENT TREATABILITY EVALUATION

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    LUECK, K.J.

    2004-10-18

    A forecast of the radioactive, dangerous liquid effluents expected to be produced by the Waste Treatment Plant (WTP) was provided by Bechtel National, Inc. (BNI 2004). The forecast represents the liquid effluents generated from the processing of Tank Farm waste through the end-of-mission for the WTP. The WTP forecast is provided in the Appendices. The WTP liquid effluents will be stored, treated, and disposed of in the Liquid Effluent Retention Facility (LERF) and the Effluent Treatment Facility (ETF). Both facilities are located in the 200 East Area and are operated by Fluor Hanford, Inc. (FH) for the U.S. Department of Energy (DOE). The treatability of the WTP liquid effluents in the LERF/ETF was evaluated. The evaluation was conducted by comparing the forecast to the LERF/ETF treatability envelope (Aromi 1997), which provides information on the items which determine if a liquid effluent is acceptable for receipt and treatment at the LERF/ETF. The format of the evaluation corresponds directly to the outline of the treatability envelope document. Except where noted, the maximum annual average concentrations over the range of the 27-year forecast were evaluated against the treatability envelope. This is an acceptable approach because the volume capacity in the LERF Basin will equalize the minimum and maximum peaks. Background information on the LERF/ETF design basis is provided in the treatability envelope document.

  2. Nonlinear time series modeling and forecasting the seismic data of the Hindu Kush region

    NASA Astrophysics Data System (ADS)

    Khan, Muhammad Yousaf; Mittnik, Stefan

    2018-01-01

    In this study, we extended the application of linear and nonlinear time series models in the field of earthquake seismology and examined the out-of-sample forecast accuracy of linear Autoregressive (AR), Autoregressive Conditional Duration (ACD), Self-Exciting Threshold Autoregressive (SETAR), Threshold Autoregressive (TAR), Logistic Smooth Transition Autoregressive (LSTAR), Additive Autoregressive (AAR), and Artificial Neural Network (ANN) models for seismic data of the Hindu Kush region. We also extended the previous studies by using Vector Autoregressive (VAR) and Threshold Vector Autoregressive (TVAR) models and compared their forecasting accuracy with the linear AR model. Unlike previous studies that typically consider the threshold model specifications by using an internal threshold variable, we specified these models with external transition variables and compared their out-of-sample forecasting performance with the linear benchmark AR model. The modeling results show that the time series models used in the present study are capable of capturing the dynamic structure present in the seismic data. The point forecast results indicate that the AR model generally outperforms the nonlinear models. However, in some cases, threshold models with an external threshold variable specification produce more accurate forecasts, indicating that the specification of threshold time series models is of crucial importance. For raw seismic data, the ACD model does not show an improved out-of-sample forecasting performance over the linear AR model. The results indicate that the AR model is the best forecasting device to model and forecast the raw seismic data of the Hindu Kush region.
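
    A small sketch of the benchmark step described above: fitting a linear AR model by least squares and measuring one-step-ahead out-of-sample accuracy on a synthetic series standing in for the seismic data:

    ```python
    import numpy as np

    def fit_ar(x, order=2):
        """Ordinary-least-squares fit of an AR(p) model to a (stationary) series."""
        X = np.column_stack([x[order - k - 1: len(x) - k - 1] for k in range(order)])
        X = np.column_stack([np.ones(len(X)), X])
        y = x[order:]
        coef, *_ = np.linalg.lstsq(X, y, rcond=None)
        return coef

    def one_step_forecasts(x, coef, order, start):
        """Rolling one-step-ahead out-of-sample forecasts from observation index `start` onward."""
        preds = []
        for t in range(start, len(x)):
            lags = x[t - order:t][::-1]
            preds.append(coef[0] + np.dot(coef[1:], lags))
        return np.array(preds)

    # Hypothetical autocorrelated series (a stand-in for the Hindu Kush seismic data)
    rng = np.random.default_rng(0)
    x = 5.0 + np.convolve(rng.normal(0, 1, 520), [0.5, 0.3, 0.2])[:500]
    split = 400
    coef = fit_ar(x[:split], order=2)
    preds = one_step_forecasts(x, coef, order=2, start=split)
    print("out-of-sample MSE:", np.mean((preds - x[split:]) ** 2))
    ```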

  3. Prospective Tests of Southern California Earthquake Forecasts

    NASA Astrophysics Data System (ADS)

    Jackson, D. D.; Schorlemmer, D.; Gerstenberger, M.; Kagan, Y. Y.; Helmstetter, A.; Wiemer, S.; Field, N.

    2004-12-01

    We are testing earthquake forecast models prospectively using likelihood ratios. Several investigators have developed such models as part of the Southern California Earthquake Center's project called Regional Earthquake Likelihood Models (RELM). Various models are based on fault geometry and slip rates, seismicity, geodetic strain, and stress interactions. Here we describe the testing procedure and present preliminary results. Forecasts are expressed as the yearly rate of earthquakes within pre-specified bins of longitude, latitude, magnitude, and focal mechanism parameters. We test models against each other in pairs, which requires that both forecasts in a pair be defined over the same set of bins. For this reason we specify a standard "menu" of bins and ground rules to guide forecasters in using common descriptions. One menu category includes five-year forecasts of magnitude 5.0 and larger. Contributors will be requested to submit forecasts in the form of a vector of yearly earthquake rates on a 0.1 degree grid at the beginning of the test. Focal mechanism forecasts, when available, are also archived and used in the tests. Interim progress will be evaluated yearly, but final conclusions would be made on the basis of cumulative five-year performance. The second category includes forecasts of earthquakes above magnitude 4.0 on a 0.1 degree grid, evaluated and renewed daily. Final evaluation would be based on cumulative performance over five years. Other types of forecasts with different magnitude, space, and time sampling are welcome and will be tested against other models with shared characteristics. Tests are based on the log likelihood scores derived from the probability that future earthquakes would occur where they do if a given forecast were true [Kagan and Jackson, J. Geophys. Res.,100, 3,943-3,959, 1995]. For each pair of forecasts, we compute alpha, the probability that the first would be wrongly rejected in favor of the second, and beta, the probability that the second would be wrongly rejected in favor of the first. Computing alpha and beta requires knowing the theoretical distribution of likelihood scores under each hypothesis, which we estimate by simulations. In this scheme, each forecast is given equal status; there is no "null hypothesis" which would be accepted by default. Forecasts and test results will be archived and posted on the RELM web site. Major problems under discussion include how to treat aftershocks, which clearly violate the variable-rate Poissonian hypotheses that we employ, and how to deal with the temporal variations in catalog completeness that follow large earthquakes.
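
    A toy sketch of the simulation-based estimation of alpha and beta described above, using simple Poisson likelihoods over a small hypothetical grid (the actual RELM procedure involves further conditioning and bin structure):

    ```python
    import numpy as np
    from scipy.special import gammaln

    rng = np.random.default_rng(0)

    def log_likelihood(rates, counts):
        """Poisson log-likelihood of a simulated or observed catalog given gridded forecast rates."""
        lam = np.clip(rates, 1e-12, None)
        return np.sum(-lam + counts * np.log(lam) - gammaln(counts + 1.0))

    def alpha_beta(rates_1, rates_2, n_sim=5000):
        """alpha: probability that forecast 1 is wrongly rejected in favor of forecast 2 when 1 is true;
        beta: the reverse. Both estimated by simulating catalogs under each forecast."""
        def llr(true_rates):
            counts = rng.poisson(true_rates, size=(n_sim, len(true_rates)))
            return np.array([log_likelihood(rates_1, c) - log_likelihood(rates_2, c) for c in counts])
        alpha = np.mean(llr(rates_1) < 0)   # under forecast 1, the ratio favors forecast 2
        beta = np.mean(llr(rates_2) > 0)    # under forecast 2, the ratio favors forecast 1
        return alpha, beta

    # Two hypothetical yearly-rate forecasts on a small grid
    rates_1 = np.array([0.2, 0.5, 1.0, 0.1, 0.3])
    rates_2 = np.array([0.4, 0.4, 0.4, 0.4, 0.4])
    print(alpha_beta(rates_1, rates_2))
    ```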

  4. An improved Multimodel Approach for Global Sea Surface Temperature Forecasts

    NASA Astrophysics Data System (ADS)

    Khan, M. Z. K.; Mehrotra, R.; Sharma, A.

    2014-12-01

    The concept of ensemble combinations for formulating improved climate forecasts has gained popularity in recent years. However, many climate models share similar physics or modeling processes, which may lead to similar (or strongly correlated) forecasts. Recent approaches for combining forecasts that take into consideration differences in model accuracy over space and time have either ignored the similarity of forecasts among the models or followed a pairwise dynamic combination approach. Here we present a basis for combining model predictions, illustrating the improvements that can be achieved if procedures for factoring in inter-model dependence are utilised. The utility of the approach is demonstrated by combining sea surface temperature (SST) forecasts from five climate models over a period of 1960-2005. The variable of interest, the monthly global sea surface temperature anomalies (SSTA) on a 5° × 5° latitude-longitude grid, is predicted three months in advance to demonstrate the utility of the proposed algorithm. Results indicate that the proposed approach offers consistent and significant improvements for the majority of grid points compared to the case where the dependence among the models is ignored. Therefore, the proposed approach of combining multiple models by taking into account the existing interdependence provides an attractive alternative to obtain improved climate forecasts. In addition, an approach to combine seasonal forecasts from multiple climate models with varying periods of availability is also demonstrated.
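
    A minimal sketch of a dependence-aware combination: weights proportional to the row sums of the inverse error covariance, compared with weights based only on individual error variances. This is a classical construction used here for illustration, not necessarily the paper's algorithm:

    ```python
    import numpy as np

    def combination_weights(forecast_errors):
        """Optimal linear combination weights accounting for inter-model error dependence:
        w = S^{-1} 1 / (1' S^{-1} 1), where S is the error covariance across models.
        Ignoring dependence amounts to using only the diagonal of S."""
        S = np.cov(forecast_errors, rowvar=False)
        ones = np.ones(S.shape[0])
        w = np.linalg.solve(S, ones)
        return w / w.sum()

    # Hypothetical SSTA forecast errors (time x models); models 1 and 2 share similar physics
    rng = np.random.default_rng(0)
    common = rng.normal(0, 0.3, 200)
    errors = np.column_stack([common + rng.normal(0, 0.1, 200),
                              common + rng.normal(0, 0.1, 200),
                              rng.normal(0, 0.3, 200)])
    w_dep = combination_weights(errors)
    w_indep = 1.0 / np.var(errors, axis=0)
    w_indep = w_indep / w_indep.sum()
    print("dependence-aware:", w_dep, "independence-based:", w_indep)
    # The dependence-aware weights down-weight the two correlated models relative to the third.
    ```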

  5. Weather forecasting based on hybrid neural model

    NASA Astrophysics Data System (ADS)

    Saba, Tanzila; Rehman, Amjad; AlGhamdi, Jarallah S.

    2017-11-01

    Making deductions and predictions about the weather has been a challenge throughout human history, and accurate meteorological guidance helps to foresee and handle problems in good time. Different strategies using various machine learning techniques have been investigated in reported forecasting systems, and current research treats weather forecasting as a major challenge for machine data mining and inference. Accordingly, this paper presents a hybrid neural model (MLP and RBF) to enhance the accuracy of weather forecasting. The proposed hybrid model is intended to provide precise forecasts within weather-anticipation frameworks. The study concentrates on data representing weather forecasting for Saudi Arabia. The main input features employed to train the individual and hybrid neural networks include average dew point, minimum temperature, maximum temperature, mean temperature, average relative humidity, precipitation, average wind speed, maximum wind speed and average cloudiness. The output layer is composed of two neurons representing rainy and dry weather. Moreover, a trial-and-error approach is adopted to select an appropriate number of inputs to the hybrid neural network. The correlation coefficient, RMSE and scatter index are the standard yardsticks adopted for measuring forecast accuracy. Individually, the MLP forecasting results are better than those of the RBF; however, the proposed simplified hybrid neural model achieves better forecasting accuracy than either individual network. Additionally, the results are better than those reported in the state of the art, using a simple neural structure that reduces training time and complexity.
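
    For illustration only, a sketch of combining an MLP with an RBF-kernel model (used here as a stand-in for an RBF network) by averaging their predictions; the data are synthetic and the architecture is not the paper's two-output rainy/dry design:

    ```python
    import numpy as np
    from sklearn.neural_network import MLPRegressor
    from sklearn.kernel_ridge import KernelRidge   # RBF-kernel model as a stand-in for an RBF network
    from sklearn.model_selection import train_test_split
    from sklearn.preprocessing import StandardScaler

    # Hypothetical daily features (dew point, temperatures, humidity, wind, cloudiness) and a target
    rng = np.random.default_rng(0)
    X = rng.normal(size=(1000, 9))
    y = 2.0 * X[:, 3] - 1.5 * X[:, 5] + 0.5 * X[:, 0] * X[:, 5] + rng.normal(0, 0.5, 1000)

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
    scaler = StandardScaler().fit(X_tr)
    X_tr, X_te = scaler.transform(X_tr), scaler.transform(X_te)

    mlp = MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000, random_state=0).fit(X_tr, y_tr)
    rbf = KernelRidge(kernel="rbf", alpha=1.0, gamma=0.1).fit(X_tr, y_tr)

    # Simple hybrid: average the two predictions (one of many ways to combine the networks)
    hybrid_pred = 0.5 * (mlp.predict(X_te) + rbf.predict(X_te))
    for name, pred in [("MLP", mlp.predict(X_te)), ("RBF", rbf.predict(X_te)), ("hybrid", hybrid_pred)]:
        rmse = np.sqrt(np.mean((pred - y_te) ** 2))
        corr = np.corrcoef(pred, y_te)[0, 1]
        print(f"{name}: RMSE={rmse:.3f}, r={corr:.3f}")
    ```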

  6. Real-time forecasting of an epidemic using a discrete time stochastic model: a case study of pandemic influenza (H1N1-2009).

    PubMed

    Nishiura, Hiroshi

    2011-02-16

    Real-time forecasting of epidemics, especially forecasting based on a likelihood approach, is understudied. This study aimed to develop a simple method that can be used for real-time epidemic forecasting. A discrete time stochastic model, accounting for demographic stochasticity and conditional measurement, was developed and applied as a case study to the weekly incidence of pandemic influenza (H1N1-2009) in Japan. By imposing a branching process approximation and by assuming the linear growth of cases within each reporting interval, the epidemic curve is predicted using only two parameters. The uncertainty bounds of the forecasts are computed using chains of conditional offspring distributions. The quality of the forecasts made before the epidemic peak appears largely to depend on obtaining valid parameter estimates. The forecasts of both weekly incidence and final epidemic size greatly improved at and after the epidemic peak, with all the observed data points falling within the uncertainty bounds. Real-time forecasting using the discrete time stochastic model with its simple computation of the uncertainty bounds was successful. Because of the simple model structure, the proposed model has the potential to additionally account for various types of heterogeneity, time-dependent transmission dynamics and epidemiological details. The impact of such complexities on forecasting should be explored when the data become available as part of disease surveillance.
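
    A heavily simplified sketch of the idea is given below: one Poisson offspring generation per reporting week, a crude ratio estimate of the reproduction number, and simulation-based uncertainty bounds. The paper's actual two-parameter formulation and conditional offspring chains are more involved; the case counts here are invented.

    ```python
    import numpy as np

    def forecast_incidence(weekly_cases, R, horizon=8, n_sims=5000, seed=0):
        """Project weekly incidence with a simple branching-process approximation.

        Each week's cases are treated as the Poisson offspring of the previous
        week's cases with mean reproduction number R; repeated simulation of
        the chains yields approximate uncertainty bounds.
        """
        rng = np.random.default_rng(seed)
        paths = np.empty((n_sims, horizon), dtype=int)
        for s in range(n_sims):
            current = weekly_cases[-1]
            for h in range(horizon):
                current = rng.poisson(R * current)
                paths[s, h] = current
        median = np.median(paths, axis=0)
        lower, upper = np.percentile(paths, [2.5, 97.5], axis=0)
        return median, lower, upper

    observed = [12, 30, 64, 140, 310]          # illustrative weekly counts
    R_hat = np.mean(np.array(observed[1:]) / np.array(observed[:-1]))
    print(forecast_incidence(observed, R_hat, horizon=4))
    ```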

  7. Improving real-time inflow forecasting into hydropower reservoirs through a complementary modelling framework

    NASA Astrophysics Data System (ADS)

    Gragne, A. S.; Sharma, A.; Mehrotra, R.; Alfredsen, K.

    2015-08-01

    The accuracy of reservoir inflow forecasts is instrumental for maximizing the value of water resources and the benefits gained through hydropower generation. Improving hourly reservoir inflow forecasts over a 24 h lead time is considered within the day-ahead (Elspot) market of the Nordic power exchange. A complementary modelling framework offers an approach for improving real-time forecasting without needing to modify the pre-existing forecasting model, by instead formulating an independent additive or complementary model that captures the structure the existing operational model may be missing. We present here the application of this principle for issuing improved hourly inflow forecasts into hydropower reservoirs over extended lead times, with the parameter estimation procedure reformulated to deal with bias, persistence and heteroscedasticity. The procedure comprises an error model added on top of an unalterable constant-parameter conceptual model. It is applied to the 207 km² Krinsvatn catchment in central Norway. The structure of the error model is established based on attributes of the residual time series from the conceptual model. Besides improving the forecast skill of operational models, the approach estimates the uncertainty in the complementary model structure and produces probabilistic inflow forecasts that carry suitable information for reducing uncertainty in decision-making in hydropower systems operation. Deterministic and probabilistic evaluations revealed an overall significant improvement in forecast accuracy for lead times up to 17 h. Evaluation of the percentage of observations bracketed by the forecast 95 % confidence interval indicated that the degree of success in containing 95 % of the observations varies across seasons and hydrologic years.
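
    As a minimal sketch of the complementary idea, the snippet below adds a simple AR(1) error model on top of an unaltered base-model forecast. The operational error model described here additionally handles bias and heteroscedasticity, so this is only an illustration with invented numbers.

    ```python
    import numpy as np

    def complementary_forecast(base_past, obs_past, base_future, rho=None):
        """Add a simple AR(1) error model on top of an unaltered base-model forecast.

        base_past, obs_past : simulated and observed inflow up to 'now'
        base_future         : base-model forecast for the next L hours
        The residual at forecast time is propagated forward using the lag-1
        autocorrelation of the historical residuals.
        """
        resid = np.asarray(obs_past) - np.asarray(base_past)
        if rho is None:
            rho = np.corrcoef(resid[:-1], resid[1:])[0, 1]   # lag-1 autocorrelation
        leads = np.arange(1, len(base_future) + 1)
        correction = (rho ** leads) * resid[-1]              # decaying persistence of last error
        return np.asarray(base_future) + correction

    # illustrative use with synthetic inflow values (m3/s)
    obs_past = np.array([10.0, 11.0, 12.5, 14.0, 15.0])
    base_past = np.array([9.0, 10.5, 11.5, 13.0, 13.8])
    base_future = np.array([14.0, 13.5, 13.0])
    print(complementary_forecast(base_past, obs_past, base_future))
    ```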

  8. Performance of the Prognocean Plus system during the El Niño 2015/2016: predictions of sea level anomalies as tools for forecasting El Niño

    NASA Astrophysics Data System (ADS)

    Świerczyńska-Chlaściak, Małgorzata; Niedzielski, Tomasz; Miziński, Bartłomiej

    2017-04-01

    The aim of this paper is to present the performance of the Prognocean Plus system, which produces long-term predictions of sea level anomalies, during the El Niño 2015/2016. The main objective of this work is to identify ocean areas in which long-term forecasts of sea level anomalies during El Niño 2015/2016 reveal considerable accuracy. At present, the system produces prognoses using four data-based models and their combinations: a polynomial-harmonic model, an autoregressive model, a threshold autoregressive model and a multivariate autoregressive model. The system offers weekly forecasts, with lead times up to 12 weeks. Several statistics that describe the efficiency of the available prediction models in the four seasons used for estimating the Oceanic Niño Index (ONI) are calculated. The accuracies/skills of the prediction models were computed at specific locations in the equatorial Pacific, namely the geometrically determined central points of all Niño regions. For these locations, we focused on the forecasts which targeted the local maximum of sea level driven by the El Niño 2015/2016. As a result, a series of "spaghetti" graphs (for each point, season and model) as well as plots presenting the prognostic performance of every model - for all lead times, seasons and locations - were created. It is found that the Prognocean Plus system has the potential to become a new solution which may enhance diagnostic discussions on El Niño development. The forecasts produced by the threshold autoregressive model, for lead times of 5-6 weeks and 9 weeks, within the Niño1+2 region for the November-to-January (NDJ) season anticipated the culmination of the El Niño 2015/2016. The longest forecasts (8-12 weeks) were found to be the most accurate in the phase of transition from El Niño to normal conditions (the multivariate autoregressive model, central point of the Niño1+2 region, the December-to-February season). The study was conducted to verify the ability and usefulness of sea level anomaly forecasts in predicting phenomena that are controlled by ocean-atmosphere processes, such as the El Niño Southern Oscillation or the North Atlantic Oscillation. The results may support further investigations into long-term forecasting of the quantitative indices of these oscillations, based solely on prognoses of sea level change. In particular, comparing the accuracies of prognoses of the North Atlantic Oscillation index remains one of the tasks of the research project no. 2016/21/N/ST10/03231, financed by the National Science Center of Poland.

  9. The Super Tuesday Outbreak: Forecast Sensitivities to Single-Moment Microphysics Schemes

    NASA Technical Reports Server (NTRS)

    Molthan, Andrew L.; Case, Jonathan L.; Dembek, Scott R.; Jedlovec, Gary J.; Lapenta, William M.

    2008-01-01

    Forecast precipitation and radar characteristics are used by operational centers to guide the issuance of advisory products. As operational numerical weather prediction is performed at increasingly finer spatial resolution, convective precipitation traditionally represented by sub-grid scale parameterization schemes is now being determined explicitly through single- or multi-moment bulk water microphysics routines. Gains in forecasting skill are expected through improved simulation of clouds and their microphysical processes. High resolution model grids and advanced parameterizations are now available through steady increases in computer resources. As with any parameterization, their reliability must be measured through performance metrics, with errors noted and targeted for improvement. Furthermore, the use of these schemes within an operational framework requires an understanding of limitations and an estimate of biases so that forecasters and model development teams can be aware of potential errors. The National Severe Storms Laboratory (NSSL) Spring Experiments have produced daily, high resolution forecasts used to evaluate forecast skill among an ensemble with varied physical parameterizations and data assimilation techniques. In this research, high resolution forecasts of the 5-6 February 2008 Super Tuesday Outbreak are replicated using the NSSL configuration in order to evaluate two aspects of simulated convection on a large domain: the sensitivity of quantitative precipitation forecasts to assumptions within a single-moment bulk water microphysics scheme, and whether these schemes accurately depict the reflectivity characteristics of well-simulated, organized, cold-frontal convection. As radar returns are sensitive to the amount of hydrometeor mass and the distribution of mass among variably sized targets, radar comparisons may guide potential improvements to a single-moment scheme. In addition, object-based verification metrics are evaluated for their utility in gauging model performance and QPF variability.

  10. Studies regarding the quality of numerical weather forecasts of the WRF model integrated at high-resolutions for the Romanian territory

    DOE PAGES

    Iriza, Amalia; Dumitrache, Rodica C.; Lupascu, Aurelia; ...

    2016-01-01

    Our paper aims to evaluate the quality of high-resolution weather forecasts from the Weather Research and Forecasting (WRF) numerical weather prediction model. The lateral and boundary conditions were obtained from the numerical output of the Consortium for Small-scale Modeling (COSMO) model at 7 km horizontal resolution. Furthermore, the WRF model was run for January and July 2013 at two horizontal resolutions (3 and 1 km). The numerical forecasts of the WRF model were evaluated using different statistical scores for 2 m temperature and 10 m wind speed. Our results showed a tendency of the WRF model to overestimate the values of the analyzed parameters in comparison to observations.
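
    The abstract does not list the statistical scores used; a typical minimal pair for this kind of verification is the mean error (bias) and RMSE, sketched below with made-up 2 m temperature values.

    ```python
    import numpy as np

    def bias_and_rmse(forecast, observed):
        """Mean error (bias) and root-mean-square error for, e.g., 2 m temperature."""
        forecast, observed = np.asarray(forecast), np.asarray(observed)
        err = forecast - observed
        return err.mean(), np.sqrt((err ** 2).mean())

    t2m_wrf = [271.3, 272.1, 270.8, 273.5]    # K, model values (illustrative)
    t2m_obs = [270.9, 271.2, 270.1, 272.6]    # K, station values (illustrative)
    print(bias_and_rmse(t2m_wrf, t2m_obs))     # positive bias = overestimation
    ```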

  12. Adapting National Water Model Forecast Data to Local Hyper-Resolution H&H Models During Hurricane Irma

    NASA Astrophysics Data System (ADS)

    Singhofen, P.

    2017-12-01

    The National Water Model (NWM) is a remarkable undertaking. The foundation of the NWM is a 1 square kilometer grid which is used for near real-time modeling and flood forecasting of most rivers and streams in the contiguous United States. However, the NWM falls short in highly urbanized areas with complex drainage infrastructure. To overcome these shortcomings, the presenter proposes to leverage existing local hyper-resolution H&H models and adapt the NWM forcing data to them. Gridded near real-time rainfall, short range forecasts (18-hour) and medium range forecasts (10-day) during Hurricane Irma are applied to numerous detailed H&H models in highly urbanized areas of the State of Florida. Coastal and inland models are evaluated. Comparisons of near real-time rainfall data are made with observed gaged data and the ability to predict flooding in advance based on forecast data is evaluated. Preliminary findings indicate that the near real-time rainfall data is consistently and significantly lower than observed data. The forecast data is more promising. For example, the medium range forecast data provides 2 - 3 days advanced notice of peak flood conditions to a reasonable level of accuracy in most cases relative to both timing and magnitude. Short range forecast data provides about 12 - 14 hours advanced notice. Since these are hyper-resolution models, flood forecasts can be made at the street level, providing emergency response teams with valuable information for coordinating and dispatching limited resources.

  13. Real-time eruption forecasting using the material Failure Forecast Method with a Bayesian approach

    NASA Astrophysics Data System (ADS)

    Boué, A.; Lesage, P.; Cortés, G.; Valette, B.; Reyes-Dávila, G.

    2015-04-01

    Many attempts at deterministic forecasting of eruptions and landslides have been made using the material Failure Forecast Method (FFM). This method consists of fitting an empirical power law to precursory patterns of seismicity or deformation. Until now, most studies have presented hindsight forecasts based on complete time series of precursors and have not evaluated the ability of the method to carry out real-time forecasting with partial precursory sequences. In this study, we present a rigorous formulation of the FFM designed for real-time application to volcano-seismic precursors. We use a Bayesian approach based on FFM theory and an automatic classification of seismic events. The probability distributions of the data, deduced from the performance of this classification, are used as input. As output, the method provides the probability distribution of the forecast time at each observation time before the eruption. The spread of the a posteriori probability density function of the prediction time and its stability with respect to the observation time are used as criteria to evaluate the reliability of the forecast. We test the method on precursory accelerations of long-period seismicity prior to vulcanian explosions at Volcán de Colima (Mexico). For explosions preceded by a single phase of seismic acceleration, we obtain accurate and reliable forecasts using approximately 80% of the whole precursory sequence. It is, however, more difficult to apply the method to multiple acceleration patterns.
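
    For orientation, the classical deterministic FFM (the special case of the power law with exponent 2) forecasts the event time by linearly extrapolating the inverse precursor rate to zero. The sketch below illustrates only this textbook case with synthetic rates, not the Bayesian treatment developed in the study.

    ```python
    import numpy as np

    def ffm_failure_time(times, rates):
        """Classical FFM forecast for the special case alpha = 2: the inverse
        precursor rate decays linearly, and its zero-crossing gives the
        forecast failure/eruption time."""
        inv_rate = 1.0 / np.asarray(rates, dtype=float)
        slope, intercept = np.polyfit(times, inv_rate, 1)
        return -intercept / slope          # time at which 1/rate extrapolates to zero

    # hourly counts of long-period events (synthetic accelerating rate, failure at t = 12)
    t = np.arange(10.0)
    rate = 2.0 / (12.0 - t)
    print(ffm_failure_time(t, rate))       # ~12
    ```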

  14. Evaluation of radar and automatic weather station data assimilation for a heavy rainfall event in southern China

    NASA Astrophysics Data System (ADS)

    Hou, Tuanjie; Kong, Fanyou; Chen, Xunlai; Lei, Hengchi; Hu, Zhaoxia

    2015-07-01

    To improve the accuracy of short-term (0-12 h) forecasts of severe weather in southern China, a real-time storm-scale forecasting system, the Hourly Assimilation and Prediction System (HAPS), has been implemented in Shenzhen, China. The forecasting system is characterized by combining the Advanced Research Weather Research and Forecasting (WRF-ARW) model and the Advanced Regional Prediction System (ARPS) three-dimensional variational data assimilation (3DVAR) package. It is capable of assimilating radar reflectivity and radial velocity data from multiple Doppler radars as well as surface automatic weather station (AWS) data. Experiments are designed to evaluate the impacts of data assimilation on quantitative precipitation forecasting (QPF) by studying a heavy rainfall event in southern China. The forecasts from these experiments are verified against radar, surface, and precipitation observations. Comparison of echo structure and accumulated precipitation suggests that radar data assimilation is useful in improving the short-term forecast by capturing the location and orientation of the band of accumulated rainfall. The assimilation of radar data improves the short-term precipitation forecast skill by up to 9 hours by producing more convection. The slight but generally positive impact that surface AWS data has on the forecast of near-surface variables can last up to 6-9 hours. The assimilation of AWS observations alone has some benefit for improving the Fractions Skill Score (FSS) and bias scores; when radar data are assimilated, the additional AWS data may increase the degree of rainfall overprediction.
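
    The Fractions Skill Score mentioned above has a standard neighbourhood-based definition; a compact sketch (synthetic rainfall fields, arbitrary threshold and window) is given below.

    ```python
    import numpy as np
    from scipy.ndimage import uniform_filter

    def fractions_skill_score(forecast, observed, threshold, window):
        """Fractions Skill Score (FSS) of a precipitation field over a square
        neighbourhood of 'window' grid points."""
        f_frac = uniform_filter((forecast >= threshold).astype(float), size=window)
        o_frac = uniform_filter((observed >= threshold).astype(float), size=window)
        mse = np.mean((f_frac - o_frac) ** 2)
        mse_ref = np.mean(f_frac ** 2) + np.mean(o_frac ** 2)
        return 1.0 - mse / mse_ref if mse_ref > 0 else np.nan

    rng = np.random.default_rng(2)
    obs = rng.gamma(shape=0.4, scale=5.0, size=(100, 100))     # mm of rain (synthetic)
    fcst = np.roll(obs, shift=3, axis=1)                        # spatially displaced forecast
    print(fractions_skill_score(fcst, obs, threshold=5.0, window=9))
    ```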

  15. Statistical and dynamical forecast of regional precipitation after mature phase of ENSO

    NASA Astrophysics Data System (ADS)

    Sohn, S.; Min, Y.; Lee, J.; Tam, C.; Ahn, J.

    2010-12-01

    While the seasonal predictability of general circulation models (GCMs) has been improved, the current model atmosphere in the mid-latitude does not respond correctly to external forcing such as tropical sea surface temperature (SST), particularly over the East Asia and western North Pacific summer monsoon regions. In addition, the time-scale of prediction scope is considerably limited and the model forecast skill still is very poor beyond two weeks. Although recent studies indicate that coupled model based multi-model ensemble (MME) forecasts show the better performance, the long-lead forecasts exceeding 9 months still show a dramatic decrease of the seasonal predictability. This study aims at diagnosing the dynamical MME forecasts comprised of the state of art 1-tier models as well as comparing them with the statistical model forecasts, focusing on the East Asian summer precipitation predictions after mature phase of ENSO. The lagged impact of El Nino as major climate contributor on the summer monsoon in model environments is also evaluated, in the sense of the conditional probabilities. To evaluate the probability forecast skills, the reliability (attributes) diagram and the relative operating characteristics following the recommendations of the World Meteorological Organization (WMO) Standardized Verification System for Long-Range Forecasts are used in this study. The results should shed light on the prediction skill for dynamical model and also for the statistical model, in forecasting the East Asian summer monsoon rainfall with a long-lead time.

  16. EVALUATING HYDROLOGICAL RESPONSE TO FORECASTED LAND-USE CHANGE: SCENARIO TESTING IN TWO WESTERN U.S. WATERSHEDS

    EPA Science Inventory

    Envisioning and evaluating future scenarios has emerged as a critical component of both science and social decision-making. The ability to assess, report, map, and forecast the life support functions of ecosystems is absolutely critical to our capacity to make informed decisions ...

  17. 77 FR 1761 - Self-Regulatory Organizations; NYSE Arca, Inc.; Order Granting Approval of Proposed Rule Change...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-01-11

    ... quantitative research and evaluation process that forecasts economic excess sector returns (over/under the... proprietary SectorSAM quantitative research and evaluation process. \\8\\ The following convictions constitute... Allocation Methodology'' (``SectorSAM''), which is a proprietary quantitative analysis, to forecast each...

  18. Determining relevant parameters for a statistical tropical cyclone genesis tool based upon global model output

    NASA Astrophysics Data System (ADS)

    Halperin, D.; Hart, R. E.; Fuelberg, H. E.; Cossuth, J.

    2013-12-01

    Predicting tropical cyclone (TC) genesis has been a vexing problem for forecasters. While the literature describes environmental conditions which are necessary for TC genesis, predicting if and when a specific disturbance will organize and become a TC remains a challenge. As recently as 5-10 years ago, global models possessed little if any skill in forecasting TC genesis. However, due to increased resolution and more advanced model parameterizations, we have reached the point where global models can provide useful TC genesis guidance to operational forecasters. A recent study evaluated five global models' ability to predict TC genesis out to four days over the North Atlantic basin (Halperin et al. 2013). The results indicate that the models are indeed able to capture the genesis time and location correctly a fair percentage of the time. The study also uncovered model biases. For example, the probability of detection and false alarm rate vary spatially within the basin. Also, as expected, the models' performance decreases with increasing lead time. In order to explain these and other biases, it is useful to analyze the model-indicated genesis events further to determine whether or not there are systematic differences between successful forecasts (hits), false alarms, and missed events. This study will examine composites of a number of physically relevant environmental parameters (e.g., magnitude of vertical wind shear, areally averaged mid-level relative humidity) and disturbance-based parameters (e.g., 925 hPa maximum wind speed, vertical alignment of relative vorticity) among each TC genesis event classification (i.e., hit, false alarm, miss). We will use standard statistical tests (e.g., Student's t test, Mann-Whitney U test) to determine whether any differences are statistically significant. We also plan to discuss how these composite results apply to a few illustrative case studies. The results may help determine which aspects of the forecast are (in)correct and whether the incorrect aspects can be bias-corrected. This, in turn, may allow us to further enhance probabilistic forecasts of TC genesis.
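
    The planned significance testing can be sketched directly with standard library routines; the composite shear values below are invented purely to show the mechanics.

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(3)
    # illustrative deep-layer shear (m/s) composited over hit and false-alarm cases
    shear_hits = rng.normal(loc=8.0, scale=3.0, size=60)
    shear_false_alarms = rng.normal(loc=11.0, scale=3.5, size=45)

    t_stat, t_p = stats.ttest_ind(shear_hits, shear_false_alarms, equal_var=False)
    u_stat, u_p = stats.mannwhitneyu(shear_hits, shear_false_alarms, alternative="two-sided")
    print(f"Welch t-test p = {t_p:.3f}, Mann-Whitney U p = {u_p:.3f}")
    ```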

  19. A simple approach to measure transmissibility and forecast incidence.

    PubMed

    Nouvellet, Pierre; Cori, Anne; Garske, Tini; Blake, Isobel M; Dorigatti, Ilaria; Hinsley, Wes; Jombart, Thibaut; Mills, Harriet L; Nedjati-Gilani, Gemma; Van Kerkhove, Maria D; Fraser, Christophe; Donnelly, Christl A; Ferguson, Neil M; Riley, Steven

    2018-03-01

    Outbreaks of novel pathogens such as SARS, pandemic influenza and Ebola require substantial investments in reactive interventions, with consequent implementation plans sometimes revised on a weekly basis. Therefore, short-term forecasts of incidence are often of high priority. In light of the recent Ebola epidemic in West Africa, a forecasting exercise was convened by a network of infectious disease modellers. The challenge was to forecast unseen "future" simulated data for four different scenarios at five different time points. In a similar method to that used during the recent Ebola epidemic, we estimated current levels of transmissibility, over variable time-windows chosen in an ad hoc way. Current estimated transmissibility was then used to forecast near-future incidence. We performed well within the challenge and often produced accurate forecasts. A retrospective analysis showed that our subjective method for deciding on the window of time with which to estimate transmissibility often resulted in the optimal choice. However, when near-future trends deviated substantially from exponential patterns, the accuracy of our forecasts was reduced. This exercise highlights the urgent need for infectious disease modellers to develop more robust descriptions of processes - other than the widespread depletion of susceptible individuals - that produce non-exponential patterns of incidence. Copyright © 2017 The Authors. Published by Elsevier B.V. All rights reserved.
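
    A minimal sketch of the window-based approach is given below: a renewal-equation estimate of transmissibility over a recent window, followed by a deterministic short-term projection. The serial-interval weights and case counts are illustrative, and the exercise itself used more careful, probabilistic versions of these steps.

    ```python
    import numpy as np

    def estimate_r(incidence, serial_interval_pmf, window=3):
        """Estimate the reproduction number over a recent window as the ratio of
        observed cases to the total infection pressure (renewal equation)."""
        inc = np.asarray(incidence, dtype=float)
        w = np.asarray(serial_interval_pmf, dtype=float)
        lam = np.array([np.sum(inc[max(0, t - len(w)):t][::-1] * w[:min(t, len(w))])
                        for t in range(len(inc))])
        return inc[-window:].sum() / lam[-window:].sum()

    def project(incidence, serial_interval_pmf, r, horizon=4):
        """Deterministic short-term projection using the estimated R."""
        inc = list(map(float, incidence))
        w = list(serial_interval_pmf)
        for _ in range(horizon):
            lam = sum(inc[-(i + 1)] * w[i] for i in range(min(len(w), len(inc))))
            inc.append(r * lam)
        return inc[-horizon:]

    si = [0.2, 0.5, 0.2, 0.1]                  # illustrative serial-interval pmf (weeks)
    cases = [5, 9, 16, 26, 44, 70]
    r_hat = estimate_r(cases, si)
    print(r_hat, project(cases, si, r_hat))
    ```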

  20. Skillful Spring Forecasts of September Arctic Sea Ice Extent Using Passive Microwave Data

    NASA Technical Reports Server (NTRS)

    Petty, A. A.; Schroder, D.; Stroeve, J. C.; Markus, Thorsten; Miller, Jeffrey A.; Kurtz, Nathan Timothy; Feltham, D. L.; Flocco, D.

    2017-01-01

    In this study, we demonstrate skillful spring forecasts of detrended September Arctic sea ice extent using passive microwave observations of sea ice concentration (SIC) and melt onset (MO). We compare these to forecasts produced using data from a sophisticated melt pond model, and find similar or higher skill values, where the forecast skill is calculated relative to linear trend persistence. The MO forecasts show the highest skill in March-May, while the SIC forecasts produce the highest skill in June-August, especially when the forecasts are evaluated over recent years (since 2008). The high MO forecast skill in early spring appears to be driven primarily by the presence and timing of open water anomalies, while the high SIC forecast skill appears to be driven by both open water and surface melt processes. Spatial maps of detrended anomalies highlight the drivers of the different forecasts, and enable us to understand regions of predictive importance. Correctly capturing sea ice state anomalies, along with changes in open water coverage, appears to be key to skillfully forecasting summer Arctic sea ice.

  1. On Manpower Forecasting. Methods for Manpower Analysis, No.2.

    ERIC Educational Resources Information Center

    Morton, J.E.

    Some of the problems and techniques involved in manpower forecasting are discussed. This non-technical introduction to the field aims at reducing fears of data manipulation methods and at increasing respect for conceptual, logical, and analytical issues. The major approaches to manpower forecasting are explicated and evaluated under the headings:…

  2. Forecasting Enrollments with Fuzzy Time Series.

    ERIC Educational Resources Information Center

    Song, Qiang; Chissom, Brad S.

    The concept of fuzzy time series is introduced and used to forecast the enrollment of a university. Fuzzy time series, an aspect of fuzzy set theory, forecasts enrollment using a first-order time-invariant model. To evaluate the model, the conventional linear regression technique is applied and the predicted values obtained are compared to the…

  3. ASSESSMENT OF AN ENSEMBLE OF SEVEN REAL-TIME OZONE FORECASTS OVER EASTERN NORTH AMERICA DURING THE SUMMER OF 2004

    EPA Science Inventory

    The real-time forecasts of ozone (O3) from seven air quality forecast models (AQFMs) are statistically evaluated against observations collected during July and August of 2004 (53 days) through the Aerometric Information Retrieval Now (AIRNow) network at roughly 340 mon...

  4. Marked point process for modelling seismic activity (case study in Sumatra and Java)

    NASA Astrophysics Data System (ADS)

    Pratiwi, Hasih; Sulistya Rini, Lia; Wayan Mangku, I.

    2018-05-01

    Earthquakes are natural phenomena that are random and irregular in space and time. The occurrence of an earthquake at a given location is still difficult to forecast, so the development of earthquake forecasting methodology continues from both the seismological and the stochastic perspectives. To describe such random phenomena in space and time, a point process approach can be used. There are two types of point processes: temporal point processes and spatial point processes. A temporal point process relates to events observed over time as a sequence of times, whereas a spatial point process describes the locations of objects in two- or three-dimensional space. The points of a point process can be labelled with additional information called marks. A marked point process can be considered as a pair (x, m), where x is the location of a point and m is the mark attached to that point. This study aims to model a marked point process indexed by time using earthquake data from Sumatra Island and Java Island. The model can be used to analyse seismic activity through its intensity function, which conditions on the history of the process up to, but not including, time t. Based on data obtained from the U.S. Geological Survey from 1973 to 2017 with a magnitude threshold of 5, we obtained maximum likelihood estimates for the parameters of the intensity function. The estimated model parameters show that the seismic activity in Sumatra Island is greater than that in Java Island.
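
    For reference, a marked temporal point process is usually specified through its conditional intensity, and the maximum likelihood estimates mentioned above maximise the corresponding log-likelihood. The generic expressions below are standard textbook forms, not necessarily the exact intensity model fitted in the study.

    ```latex
    % Conditional intensity of a marked temporal point process with history H_t,
    % and the log-likelihood maximised over [0, T] to estimate its parameters theta.
    \lambda(t, m \mid \mathcal{H}_t) = \lambda_g(t \mid \mathcal{H}_t)\, f(m \mid t),
    \qquad
    \log L(\theta) = \sum_{i:\, t_i \le T} \log \lambda(t_i, m_i \mid \mathcal{H}_{t_i})
                   - \int_0^T \!\!\int_{M} \lambda(t, m \mid \mathcal{H}_t)\, \mathrm{d}m\, \mathrm{d}t .
    ```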

  5. Forecasting daily patient volumes in the emergency department.

    PubMed

    Jones, Spencer S; Thomas, Alun; Evans, R Scott; Welch, Shari J; Haug, Peter J; Snow, Gregory L

    2008-02-01

    Shifts in the supply of and demand for emergency department (ED) resources make the efficient allocation of ED resources increasingly important. Forecasting is a vital activity that guides decision-making in many areas of economic, industrial, and scientific planning, but has gained little traction in the health care industry. There are few studies that explore the use of forecasting methods to predict patient volumes in the ED. The goals of this study are to explore and evaluate the use of several statistical forecasting methods to predict daily ED patient volumes at three diverse hospital EDs and to compare the accuracy of these methods to the accuracy of a previously proposed forecasting method. Daily patient arrivals at three hospital EDs were collected for the period January 1, 2005, through March 31, 2007. The authors evaluated the use of seasonal autoregressive integrated moving average, time series regression, exponential smoothing, and artificial neural network models to forecast daily patient volumes at each facility. Forecasts were made for horizons ranging from 1 to 30 days in advance. The forecast accuracy achieved by the various forecasting methods was compared to the forecast accuracy achieved when using a benchmark forecasting method already available in the emergency medicine literature. All time series methods considered in this analysis provided improved in-sample model goodness of fit. However, post-sample analysis revealed that time series regression models that augment linear regression models by accounting for serial autocorrelation offered only small improvements in terms of post-sample forecast accuracy, relative to multiple linear regression models, while seasonal autoregressive integrated moving average, exponential smoothing, and artificial neural network forecasting models did not provide consistently accurate forecasts of daily ED volumes. This study confirms the widely held belief that daily demand for ED services is characterized by seasonal and weekly patterns. The authors compared several time series forecasting methods to a benchmark multiple linear regression model. The results suggest that the existing methodology proposed in the literature, multiple linear regression based on calendar variables, is a reasonable approach to forecasting daily patient volumes in the ED. However, the authors conclude that regression-based models that incorporate calendar variables, account for site-specific special-day effects, and allow for residual autocorrelation provide a more appropriate, informative, and consistently accurate approach to forecasting daily ED patient volumes.
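
    A minimal sketch of the calendar-variable regression benchmark (synthetic daily volumes, day-of-week and month indicators) is given below; the study's actual models also include site-specific special-day effects and residual autocorrelation.

    ```python
    import numpy as np
    import pandas as pd
    from sklearn.linear_model import LinearRegression

    # illustrative daily arrival counts with weekly and annual seasonality
    dates = pd.date_range("2005-01-01", "2007-03-31", freq="D")
    rng = np.random.default_rng(4)
    volumes = (120 + 15 * (dates.dayofweek == 0) - 10 * (dates.dayofweek >= 5)
               + 5 * np.sin(2 * np.pi * dates.dayofyear / 365.25)
               + rng.normal(0, 8, len(dates)))

    # calendar design matrix: day-of-week and month indicators
    X = pd.get_dummies(pd.DataFrame({"dow": dates.dayofweek.astype(str),
                                     "month": dates.month.astype(str)}), drop_first=True)
    model = LinearRegression().fit(X[:-30], volumes[:-30])     # hold out the last 30 days

    forecast = model.predict(X[-30:])
    mae = np.mean(np.abs(forecast - volumes[-30:]))
    print(f"30-day-ahead mean absolute error: {mae:.1f} patients/day")
    ```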

  6. Development of a System to Generate Near Real Time Tropospheric Delay and Precipitable Water Vapor in situ at Geodetic GPS Stations, to Improve Forecasting of Severe Weather Events

    NASA Astrophysics Data System (ADS)

    Moore, A. W.; Bock, Y.; Geng, J.; Gutman, S. I.; Laber, J. L.; Morris, T.; Offield, D. G.; Small, I.; Squibb, M. B.

    2012-12-01

    We describe a system under development for generating ultra-low latency tropospheric delay and precipitable water vapor (PWV) estimates in situ at a prototype network of geodetic GPS sites in southern California, and demonstrating their utility in forecasting severe storms commonly associated with flooding and debris flow events along the west coast of North America through infusion of this meteorological data at NOAA National Weather Service (NWS) Forecast Offices and the NOAA Earth System Research Laboratory (ESRL). The first continuous geodetic GPS network was established in southern California in the early 1990s and much of it was converted to real-time (latency <1s) high-rate (1Hz) mode over the following decades. GPS stations are multi-purpose and can also provide estimates of tropospheric zenith delays, which can be converted into mm-accuracy PWV using collocated pressure and temperature measurements, the basis for GPS meteorology (Bevis et al. 1992, 1994; Duan et al. 1996) as implemented by NOAA with a nationwide distribution of about 300 GPS-Met stations providing PW estimates at subhourly resolution currently used in operational weather forecasting in the U.S. We improve upon the current paradigm of transmitting large quantities of raw data back to a central facility for processing into higher-order products. By operating semi-autonomously, each station will provide low-latency, high-fidelity and compact data products within the constraints of the narrow communications bandwidth that often occurs in the aftermath of natural disasters. The onsite ambiguity-resolved precise point positioning solutions are enabled by a power-efficient, low-cost, plug-in Geodetic Module for fusion of data from in situ sensors including GPS and a low-cost MEMS meteorological sensor package. The decreased latency (~5 minutes) PW estimates will provide the detailed knowledge of the distribution and magnitude of PW that NWS forecasters require to monitor and predict severe winter storms, landfalling atmospheric rivers, and summer thunderstorms associated with the North American monsoon. On the national level, the ESRL will evaluate the utility of ultra-low resolution GNSS observations to improve NOAA's warning and forecast capabilities. The overall objective is to better forecast, assess, and mitigate natural hazards through the flow of information from multiple geodetic stations to scientists, mission planners, decision makers, and first responders.

  7. Dispersion Modeling Using Ensemble Forecasts Compared to ETEX Measurements.

    NASA Astrophysics Data System (ADS)

    Straume, Anne Grete; N'dri Koffi, Ernest; Nodop, Katrin

    1998-11-01

    Numerous numerical models have been developed to predict the long-range transport of hazardous air pollution in connection with accidental releases. When evaluating and improving such a model, it is important to detect uncertainties connected to the meteorological input data. A Lagrangian dispersion model, the Severe Nuclear Accident Program, is used here to investigate the effect of errors in the meteorological input data due to analysis error. An ensemble forecast, produced at the European Centre for Medium-Range Weather Forecasts, is then used as model input. The ensemble forecast members are generated by perturbing the initial meteorological fields of the weather forecast. The perturbations are calculated from singular vectors meant to represent possible forecast developments generated by instabilities in the atmospheric flow during the early part of the forecast. The instabilities are generated by errors in the analyzed fields. Puff predictions from the dispersion model, using ensemble forecast input, are compared, and a large spread in the predicted puff evolutions is found. This shows that the quality of the meteorological input data is important for the success of the dispersion model. In order to evaluate the dispersion model, the calculations are compared with measurements from the European Tracer Experiment. The model predicts the shape and arrival time of the measured puff fairly well, up to 60 h after the start of the release. The modeled puff is still too narrow in the advection direction.

  8. Application and evaluation of forecasting methods for municipal solid waste generation in an Eastern-European city.

    PubMed

    Rimaityte, Ingrida; Ruzgas, Tomas; Denafas, Gintaras; Racys, Viktoras; Martuzevicius, Dainius

    2012-01-01

    Forecasting the generation of municipal solid waste (MSW) in developing countries is often challenging due to the lack of data and the selection of a suitable forecasting method. This article aimed to select and evaluate several methods for MSW forecasting in a medium-sized Eastern European city (Kaunas, Lithuania) with a rapidly developing economy, with respect to affluence-related and seasonal impacts. The MSW generation was forecast with respect to the economic activity of the city (regression modelling) and using time series analysis. The modelling based on socio-economic indicators (regression implemented in the LCA-IWM model) showed particular sensitivity (deviation from actual data in the range of 2.2 to 20.6%) to external factors, such as the synergetic effects of affluence parameters or changes in the MSW collection system. For the time series analysis, the combination of autoregressive integrated moving average (ARIMA) and seasonal exponential smoothing (SES) techniques was found to be the most accurate (mean absolute percentage error of 6.5). The time series analysis method was particularly valuable for forecasting the weekly variation of waste generation data (r² > 0.87), but the forecast yearly increase should be verified against the data obtained by regression modelling. The methods and findings of this study may assist the experts, decision-makers and scientists performing forecasts of MSW generation, especially in developing countries.
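
    For clarity, the accuracy measure quoted above is the mean absolute percentage error; a small sketch with invented weekly tonnages follows.

    ```python
    import numpy as np

    def mape(observed, forecast):
        """Mean absolute percentage error, the accuracy measure quoted in the study."""
        observed, forecast = np.asarray(observed, float), np.asarray(forecast, float)
        return 100.0 * np.mean(np.abs((observed - forecast) / observed))

    weekly_waste_obs = [410, 395, 430, 450, 420]      # tonnes, illustrative
    weekly_waste_fit = [400, 405, 415, 460, 425]      # combined ARIMA + SES fit, illustrative
    print(f"MAPE = {mape(weekly_waste_obs, weekly_waste_fit):.1f} %")
    ```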

  9. Assessing the value of post-processed state-of-the-art long-term weather forecast ensembles for agricultural water management mediated by farmers' behaviours

    NASA Astrophysics Data System (ADS)

    Li, Yu; Giuliani, Matteo; Castelletti, Andrea

    2016-04-01

    Recent advances in the modelling of coupled ocean-atmosphere dynamics have significantly improved the skill of long-term climate forecasts from global circulation models (GCMs). These more accurate weather predictions are supposed to be a valuable support to farmers in optimizing farming operations (e.g. crop choice, cropping and watering time) and in more effectively coping with the adverse impacts of climate variability. Yet, assessing how valuable this information actually is to a farmer is not straightforward, and farmers' responses must be taken into consideration. Indeed, in the context of agricultural systems, potentially useful forecast information should alter stakeholders' expectations, modify their decisions, and ultimately produce an impact on their performance. Nevertheless, long-term forecasts are mostly evaluated in terms of accuracy (i.e., forecast quality) by comparing hindcast and observed values, and only a few studies have investigated the operational value of forecasts by looking at the gain in utility within the decision-making context, e.g. by considering derivatives of the forecast information, such as simulated crop yields or simulated soil moisture, which are essential to farmers' decision-making process. In this study, we go a step further in the assessment of the operational value of long-term weather forecast products by embedding the latter into farmers' behavioral models. This allows a more critical assessment of the forecast value mediated by the end-users' perspective, including farmers' risk attitudes and behavioral patterns. Specifically, we evaluate the operational value of thirteen state-of-the-art long-range forecast products against a climatology forecast and empirical predictions (i.e. past year climate and historical average) within an integrated agronomic modeling framework embedding an implicit model of the farmers' decision-making process. Raw ensemble datasets are bias-corrected and downscaled using a stochastic weather generator, in order to address the mismatch in spatio-temporal scale between forecast data from GCMs and our model. For each product, the experiment is composed of two cascading simulations: 1) an ex-ante simulation using forecast data, and 2) an ex-post simulation with observations. Multi-year simulations are performed to account for climate variability, and the operational value of the different forecast products is evaluated against the perfect foresight on the basis of expected crop productivity as well as the final decisions under different decision-making criteria. Our results show that not all products generate beneficial effects on farmers' performance, and that forecast errors might be amplified by farmers' decision-making processes and risk attitudes, yielding little or even worse performance compared with the empirical approaches.

  10. Personalized glucose forecasting for type 2 diabetes using data assimilation

    PubMed Central

    Albers, David J.; Gluckman, Bruce; Ginsberg, Henry; Hripcsak, George; Mamykina, Lena

    2017-01-01

    Type 2 diabetes leads to premature death and reduced quality of life for 8% of Americans. Nutrition management is critical to maintaining glycemic control, yet it is difficult to achieve due to the high individual differences in glycemic response to nutrition. Anticipating glycemic impact of different meals can be challenging not only for individuals with diabetes, but also for expert diabetes educators. Personalized computational models that can accurately forecast an impact of a given meal on an individual’s blood glucose levels can serve as the engine for a new generation of decision support tools for individuals with diabetes. However, to be useful in practice, these computational engines need to generate accurate forecasts based on limited datasets consistent with typical self-monitoring practices of individuals with type 2 diabetes. This paper uses three forecasting machines: (i) data assimilation, a technique borrowed from atmospheric physics and engineering that uses Bayesian modeling to infuse data with human knowledge represented in a mechanistic model, to generate real-time, personalized, adaptable glucose forecasts; (ii) model averaging of data assimilation output; and (iii) dynamical Gaussian process model regression. The proposed data assimilation machine, the primary focus of the paper, uses a modified dual unscented Kalman filter to estimate states and parameters, personalizing the mechanistic models. Model selection is used to make a personalized model selection for the individual and their measurement characteristics. The data assimilation forecasts are empirically evaluated against actual postprandial glucose measurements captured by individuals with type 2 diabetes, and against predictions generated by experienced diabetes educators after reviewing a set of historical nutritional records and glucose measurements for the same individual. The evaluation suggests that the data assimilation forecasts compare well with specific glucose measurements and match or exceed in accuracy expert forecasts. We conclude by examining ways to present predictions as forecast-derived range quantities and evaluate the comparative advantages of these ranges. PMID:28448498

  11. Operational value of ensemble streamflow forecasts for hydropower production: A Canadian case study

    NASA Astrophysics Data System (ADS)

    Boucher, Marie-Amélie; Tremblay, Denis; Luc, Perreault; François, Anctil

    2010-05-01

    Ensemble and probabilistic forecasts have many advantages over deterministic ones, both in meteorology and hydrology (e.g. Krzysztofowicz, 2001). Mainly, they inform the user on the uncertainty linked to the forecast. It has been brought to attention that such additional information could lead to improved decision making (e.g. Wilks and Hamill, 1995; Mylne, 2002; Roulin, 2007), but very few studies concentrate on operational situations involving the use of such forecasts. In addition, many authors have demonstrated that ensemble forecasts outperform deterministic forecasts in terms of performance (e.g. Jaun et al., 2005; Velazquez et al., 2009; Laio and Tamea, 2007). However, such performance is mostly assessed on the basis of numerical scoring rules, which compare the forecasts to the observations, and seldom in terms of management gains. The proposed case study adopts an operational point of view, on the basis that a novel forecasting system has value only if it leads to increase monetary and societal gains (e.g. Murphy, 1994; Laio and Tamea, 2007). More specifically, Environment Canada operational ensemble precipitation forecasts are used to drive the HYDROTEL distributed hydrological model (Fortin et al., 1995), calibrated on the Gatineau watershed located in Québec, Canada. The resulting hydrological ensemble forecasts are then incorporated into Hydro-Québec SOHO stochastic management optimization tool that automatically search for optimal operation decisions for the all reservoirs and hydropower plants located on the basin. The timeline of the study is the fall season of year 2003. This period is especially relevant because of high precipitations that nearly caused a major spill, and forced the preventive evacuation of a portion of the population located near one of the dams. We show that the use of the ensemble forecasts would have reduced the occurrence of spills and flooding, which is of particular importance for dams located in populous area, and increased hydropower production. The ensemble precipitation forecasts extend from March 1st of 2002 to December 31st of 2003. They were obtained using two atmospheric models, SEF (8 members plus the control deterministic forecast) and GEM (8 members). The corresponding deterministic precipitation forecast issued by SEF model is also used within HYDROTEL in order to compare ensemble streamflow forecasts with their deterministic counterparts. Although this study does not incorporate all the sources of uncertainty, precipitation is certainly the most important input for hydrological modeling and conveys a great portion of the total uncertainty. References: Fortin, J.P., Moussa, R., Bocquillon, C. and Villeneuve, J.P. 1995: HYDROTEL, un modèle hydrologique distribué pouvant bénéficier des données fournies par la télédétection et les systèmes d'information géographique, Revue des Sciences de l'Eau, 8(1), 94-124. Jaun, S., Ahrens, B., Walser, A., Ewen, T. and Schaer, C. 2008: A probabilistic view on the August 2005 floods in the upper Rhine catchment, Natural Hazards and Earth System Sciences, 8 (2), 281-291. Krzysztofowicz, R. 2001: The case for probabilistic forecasting in hydrology, Journal of Hydrology, 249, 2-9. Murphy, A.H. 1994: Assessing the economic value of weather forecasts: An overview of methods, results and issues, Meteorological Applications, 1, 69-73. Mylne, K.R. 2002: Decision-Making from probability forecasts based on forecast value, Meteorological Applications, 9, 307-315. Laio, F. and Tamea, S. 
2007: Verification tools for probabilistic forecasts of continuous hydrological variables, Hydrology and Earth System Sciences, 11, 1267-1277. Roulin, E. 2007: Skill and relative economic value of medium-range hydrological ensemble predictions, Hydrology and Earth System Sciences, 11, 725-737. Velazquez, J.-A., Petit, T., Lavoie, A., Boucher, M.-A., Turcotte, R., Fortin, V. and Anctil, F. 2009: An evaluation of the Canadian global meteorological ensemble prediction system for short-term hydrological forecasting, Hydrology and Earth System Sciences, 13(11), 2221-2231. Wilks, D.S. and Hamill, T.M. 1995: Potential economic value of ensemble-based surface weather forecasts, Monthly Weather Review, 123(12), 3565-3575.

  12. A real-time evaluation and demonstration of strategies for 'Over-The-Loop' ensemble streamflow forecasting in US watersheds

    NASA Astrophysics Data System (ADS)

    Wood, Andy; Clark, Elizabeth; Mendoza, Pablo; Nijssen, Bart; Newman, Andy; Clark, Martyn; Nowak, Kenneth; Arnold, Jeffrey

    2017-04-01

    Many if not most national operational streamflow prediction systems rely on a forecaster-in-the-loop approach that requires the hands-on effort of an experienced human forecaster. This approach evolved from the need to correct for long-standing deficiencies in the models and datasets used in forecasting, and the practice often leads to skillful flow predictions despite the use of relatively simple, conceptual models. Yet the 'in-the-loop' forecast process is not reproducible, which limits opportunities to assess and incorporate new techniques systematically, and the effort required to make forecasts in this way is an obstacle to expanding forecast services - e.g., through adding new forecast locations or more frequent forecast updates, running more complex models, or producing forecasts and hindcasts that can support verification. In the last decade, the hydrologic forecasting community has begun to develop more centralized, 'over-the-loop' systems. The quality of these new forecast products will depend on their ability to leverage research in areas including earth system modeling, parameter estimation, data assimilation, statistical post-processing, weather and climate prediction, verification, and uncertainty estimation through the use of ensembles. Currently, many national operational streamflow forecasting and water management communities have little experience with the strengths and weaknesses of over-the-loop approaches, even as such systems are beginning to be deployed operationally in centers such as ECMWF. There is thus a need both to evaluate these forecasting advances and to demonstrate their potential in a public arena, raising awareness in forecast user communities and development programs alike. To address this need, the US National Center for Atmospheric Research is collaborating with the University of Washington, the Bureau of Reclamation and the US Army Corps of Engineers, using the NCAR 'System for Hydromet Analysis Research and Prediction Applications' (SHARP) to implement, assess and demonstrate real-time over-the-loop ensemble flow forecasts in a range of US watersheds. The system relies on fully ensemble-based techniques, including: a 100-member ensemble of meteorological model forcings and an ensemble particle filter data assimilation for initializing watershed states; analog/regression-based downscaling of ensemble weather forecasts from GEFS; and statistical post-processing of ensemble forecast outputs, all of which run in real time within a workflow managed by ECMWF's ecFlow libraries over large US regional domains. We describe SHARP and present early hindcast and verification results for short- to seasonal-range streamflow forecasts in a number of US case study watersheds.

  13. A GLM Post-processor to Adjust Ensemble Forecast Traces

    NASA Astrophysics Data System (ADS)

    Thiemann, M.; Day, G. N.; Schaake, J. C.; Draijer, S.; Wang, L.

    2011-12-01

    The skill of hydrologic ensemble forecasts has improved in recent years through a better understanding of climate variability, better climate forecasts and new data assimilation techniques. Having been extensively utilized for probabilistic water supply forecasting, these forecasts are now attracting interest for use in operational decision making. Hydrologic ensemble forecast members typically have inherent biases in flow timing and volume caused by (1) structural errors in the models used, (2) systematic errors in the data used to calibrate those models, (3) uncertain initial hydrologic conditions, and (4) uncertainties in the forcing datasets. Furthermore, hydrologic models have often not been developed for operational decision points and ensemble forecasts are thus not always available where needed. A statistical post-processor can be used to address these issues. The post-processor should (1) correct for systematic biases in flow timing and volume, (2) preserve the skill of the available raw forecasts, (3) preserve spatial and temporal correlation as well as the uncertainty in the forecasted flow data, (4) produce adjusted forecast ensembles that represent the variability of the observed hydrograph to be predicted, and (5) preserve individual forecast traces as equally likely. The post-processor should also allow for the translation of available ensemble forecasts to hydrologically similar locations where forecasts are not available. This paper introduces an ensemble post-processor (EPP) developed in support of New York City water supply operations. The EPP employs a general linear model (GLM) to (1) adjust available ensemble forecast traces and (2) create new ensembles for (nearby) locations where only historic flow observations are available. The EPP is calibrated by developing daily and aggregated statistical relationships from historical flow observations and model simulations. These are then used in operation to obtain the conditional probability density function (PDF) of the observations to be predicted, thus jointly adjusting individual ensemble members. These steps are executed in a normalized transformed space ('z'-space) to account for the strong non-linearity in the flow observations involved. A data window centered on each calibration date is used to minimize impacts from sampling errors and data noise. Testing on datasets from California and New York suggests that the EPP can successfully minimize biases in ensemble forecasts, while preserving the raw forecast skill in a 'days to weeks' forecast horizon and reproducing the variability of climatology for 'weeks to years' forecast horizons.
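
    A much-reduced sketch of the post-processing idea is shown below with synthetic flows: an empirical normal-score transform, a single-predictor regression standing in for the GLM, and back-transformation of each adjusted member. The operational EPP uses richer daily and aggregated relationships and moving calibration windows, so this is only an illustration.

    ```python
    import numpy as np
    from scipy.stats import norm

    def to_normal_space(x, climatology):
        """Map flows to standard-normal space via the empirical CDF of a climatology."""
        clim = np.sort(np.asarray(climatology, float))
        p = np.searchsorted(clim, x, side="right") / (len(clim) + 1.0)
        return norm.ppf(np.clip(p, 1e-3, 1 - 1e-3))

    def from_normal_space(z, climatology):
        clim = np.sort(np.asarray(climatology, float))
        p = np.clip(norm.cdf(z), 1e-3, 1 - 1e-3)
        return np.quantile(clim, p)

    def adjust_ensemble(raw_members, sim_hist, obs_hist):
        """Regress observations on simulations in normal space and apply the fitted
        relation to each raw member, keeping the members as equally likely traces."""
        z_sim = to_normal_space(sim_hist, sim_hist)
        z_obs = to_normal_space(obs_hist, obs_hist)
        b, a = np.polyfit(z_sim, z_obs, 1)                 # z_obs ~ a + b * z_sim
        z_raw = to_normal_space(raw_members, sim_hist)
        return from_normal_space(a + b * z_raw, obs_hist)

    rng = np.random.default_rng(5)
    obs_hist = rng.gamma(2.0, 50.0, 2000)                             # historical observed flows
    sim_hist = np.clip(0.8 * obs_hist + rng.normal(0, 10, 2000), 0.1, None)  # biased simulations
    raw = np.array([60.0, 85.0, 120.0, 150.0])                        # raw members for one day
    print(adjust_ensemble(raw, sim_hist, obs_hist))
    ```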

  14. Use of forecasting signatures to help distinguish periodicity, randomness, and chaos in ripples and other spatial patterns

    USGS Publications Warehouse

    Rubin, D.M.

    1992-01-01

    Forecasting of one-dimensional time series previously has been used to help distinguish periodicity, chaos, and noise. This paper presents two-dimensional generalizations for making such distinctions for spatial patterns. The techniques are evaluated using synthetic spatial patterns and then are applied to a natural example: ripples formed in sand by blowing wind. Tests with the synthetic patterns demonstrate that the forecasting techniques can be applied to two-dimensional spatial patterns, with the same utility and limitations as when applied to one-dimensional time series. One limitation is that some combinations of periodicity and randomness exhibit forecasting signatures that mimic those of chaos. For example, sine waves distorted with correlated phase noise have forecasting errors that increase with forecasting distance, errors that are minimized using nonlinear models at moderate embedding dimensions, and forecasting properties that differ significantly between the original and surrogates. Ripples formed in sand by flowing air or water typically vary in geometry from one to another, even when formed in a flow that is uniform on a large scale; each ripple modifies the local flow or sand-transport field, thereby influencing the geometry of the next ripple downcurrent. Spatial forecasting was used to evaluate the hypothesis that such a deterministic process - rather than randomness or quasiperiodicity - is responsible for the variation between successive ripples. This hypothesis is supported by a forecasting error that increases with forecasting distance, a greater accuracy of nonlinear relative to linear models, and significant differences between forecasts made with the original ripples and those made with surrogate patterns. Forecasting signatures cannot be used to distinguish ripple geometry from sine waves with correlated phase noise, but this kind of structure can be ruled out by two geometric properties of the ripples: Successive ripples are highly correlated in wavelength, and ripple crests display dislocations such as branchings and mergers. © 1992 American Institute of Physics.

  15. Uncertainty analysis of neural network based flood forecasting models: An ensemble based approach for constructing prediction interval

    NASA Astrophysics Data System (ADS)

    Kasiviswanathan, K.; Sudheer, K.

    2013-05-01

    Artificial neural network (ANN) based hydrologic models have gained a lot of attention among water resources engineers and scientists, owing to their potential for accurate prediction of flood flows compared to conceptual or physics-based hydrologic models. The ANN approximates the non-linear functional relationship between the complex hydrologic variables in arriving at the river flow forecast values. Despite a large number of applications, there is still some criticism that ANN point predictions lack reliability since the uncertainty of the predictions is not quantified, which limits their use in practical applications. A major concern in applying traditional uncertainty analysis techniques to the neural network framework is its parallel computing architecture with large degrees of freedom, which makes the uncertainty assessment a challenging task. Very limited studies have considered assessment of the predictive uncertainty of ANN based hydrologic models. In this study, a novel method is proposed that helps construct the prediction interval of an ANN flood forecasting model during calibration itself. The method is designed to have two stages of optimization during calibration: at stage 1, the ANN model is trained with a genetic algorithm (GA) to obtain the optimal set of weights and biases, and during stage 2, the optimal variability of the ANN parameters (obtained in stage 1) is identified so as to create an ensemble of predictions. During the second stage, the optimization is performed with multiple objectives: (i) minimum residual variance for the ensemble mean, (ii) the maximum number of measured data points falling within the estimated prediction interval and (iii) minimum width of the prediction interval. The method is illustrated using a real-world case study of an Indian basin. The method was able to produce an ensemble that has an average prediction interval width of 23.03 m³/s, with 97.17% of the total validation data points (measured) lying within the interval. The derived prediction interval for a selected hydrograph in the validation data set is presented in Fig. 1. It is noted that most of the observed flows lie within the constructed prediction interval, which therefore provides information about the uncertainty of the prediction. One specific advantage of the method is that when the ensemble mean value is taken as the forecast, peak flows are predicted with improved accuracy compared to traditional single-point-forecast ANNs. (Fig. 1: Prediction interval for the selected hydrograph.)
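
    The two interval criteria used in the second optimization stage (coverage of measured points and interval width) can be computed as sketched below for any ensemble of predictions; the numbers are synthetic and unrelated to the Indian basin case study.

    ```python
    import numpy as np

    def interval_metrics(ensemble, observed, coverage=95):
        """Prediction-interval coverage (share of observations inside the interval)
        and average interval width for an ensemble of forecasts.

        ensemble : array of shape (n_members, n_times)
        observed : array of shape (n_times,)
        """
        half = (100 - coverage) / 2
        lo, hi = np.percentile(ensemble, [half, 100 - half], axis=0)
        inside = (observed >= lo) & (observed <= hi)
        return 100.0 * inside.mean(), np.mean(hi - lo)

    rng = np.random.default_rng(6)
    obs = 50 + 30 * np.sin(np.linspace(0, 6, 120))            # m3/s, illustrative hydrograph
    ens = obs + rng.normal(0, 8, size=(40, obs.size))         # 40-member ensemble
    picp, width = interval_metrics(ens, obs)
    print(f"coverage = {picp:.1f} %, mean width = {width:.1f} m3/s")
    ```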

  16. Influence of warning information changes on emergency response

    NASA Astrophysics Data System (ADS)

    Heisterkamp, Tobias; Ulbrich, Uwe; Glade, Thomas; Tetzlaff, Gerd

    2014-05-01

    Mitigation and risk reduction for natural hazards are closely related to the possibility of predicting the actual event. Some hazards can already be forecast several days in advance. For these hazards, early warning systems have been developed, installed, and improved over the years. The formation of winter storms, for example, can be recognized up to one week before they pass over Central Europe. This relatively long early warning time allows forecasters to make the warnings more concrete over time. Warnings can thus be adapted to changing conditions in the process itself, in its observation, or in its modelling. Emergency managers are one group of warning recipients in the civil protection sector. They have to prepare or initiate prevention or response measures at a specific point in time, depending on the lead time required for the respective actions. Already at this point, the forecast and its corresponding warning have to be treated as a stage of reality, because the decision-makers have to come to a conclusion. These decisions are based on spatial and temporal knowledge of the forecast event and the resulting risk situation. With incoming warning updates, the detailed information status is continually changing. Consequently, decisions can be influenced by the development of the warning situation, and its inherent tendency, before a certain point in time. They can also be adapted to later updates, according to the changing 'decision reality'. The influence of these dynamic hazard situations on operational planning and response by emergency managers is investigated in case studies of winter storms for Berlin, Germany. To this end, the warnings issued by the weather service and operational data from the Berlin Fire Brigade are analysed and compared. This presentation shows and discusses first results.

  17. National Weather Service

    Science.gov Websites

    Forecast and Warning Services of the National Weather Service Introduction Quantitative precipitation future which is an active area of research currently. 2) Evaluate HPN performance for forecast periods

  18. A Preliminary Evaluation of the GFS Physics in the Navy Global Environmental Model

    NASA Astrophysics Data System (ADS)

    Liu, M.; Langland, R.; Martini, M.; Viner, K.

    2017-12-01

    Extended long-range global weather forecasting is a near-term goal at the Navy's Fleet Numerical Meteorology and Oceanography Center (FNMOC). In an effort to improve the performance of the Navy Global Environmental Model (NAVGEM) operated at FNMOC, and to gain more understanding of the impact of atmospheric physics on long-range forecasts, the physics package of the Global Forecast System (GFS) of the National Centers for Environmental Prediction is being evaluated within the NAVGEM framework: the GFS physics is transported by NAVGEM's semi-Lagrangian semi-implicit advection and update-cycled by the 4D-variational data assimilation, along with assimilated land surface data from NASA's Land Information System. The output of free 10-day forecasts with GFS physics over a summer and a winter season is evaluated through comparison with the corresponding NAVGEM-physics forecasts, and through validation against observations and the European Centre's analyses. It is found that the GFS physics is able to effectively reduce some of NAVGEM's modeling biases, especially tropospheric wind speed and land surface temperature, which is an important surface boundary condition. The bias reductions increase with forecast lead time, reaching a maximum at 240 hours. To further understand the relative roles of physics and dynamics in extended long-range forecasts, the tendencies of the physics components and of advection are also calculated and analyzed to compare the magnitudes of their forcing in the integration of winds, temperature, and moisture. The comparisons reveal the strengths and limitations of the GFS physics in the overall improvement of the NAVGEM prediction system.

  19. Effect of initial conditions of a catchment on seasonal streamflow prediction using ensemble streamflow prediction (ESP) technique for the Rangitata and Waitaki River basins on the South Island of New Zealand

    NASA Astrophysics Data System (ADS)

    Singh, Shailesh Kumar; Zammit, Christian; Hreinsson, Einar; Woods, Ross; Clark, Martyn; Hamlet, Alan

    2013-04-01

    Increased access to water is a key pillar of the New Zealand government's plan for economic growth. Variable climatic conditions, coupled with market drivers and increased demand on water resources, result in critical decisions being made by water managers on the basis of climate and streamflow forecasts. Because many of these decisions have serious economic implications, accurate forecasts of climate and streamflow are of paramount importance (e.g., for irrigated agriculture and electricity generation). New Zealand currently does not have a centralized, comprehensive, and state-of-the-art system in place for providing operational seasonal to interannual streamflow forecasts to guide water resources management decisions. As a pilot effort, we implement and evaluate an experimental ensemble streamflow forecasting system for the Waitaki and Rangitata River basins on New Zealand's South Island using a hydrologic simulation model (TopNet) and the familiar ensemble streamflow prediction (ESP) paradigm for estimating forecast uncertainty. To provide a comprehensive database for evaluation of the forecasting system, a set of retrospective model states simulated by the hydrologic model on the first day of each month was first archived for 1972-2009. Then, using the hydrologic simulation model, each of these historical model states was paired with the retrospective temperature and precipitation time series from each historical water year to create a database of retrospective hindcasts. Using the resulting database, the relative importance of initial state variables (such as soil moisture and snowpack) as fundamental drivers of uncertainty in the forecasts was evaluated for different seasons and lead times. The analysis indicates that the sensitivity of flow forecasts to initial-condition uncertainty depends on the hydrological regime and the season of the forecast. However, initial conditions do not have a large impact on seasonal flow uncertainties in snow-dominated catchments. Further analysis indicates that this result remains valid when the hindcast database is conditioned on ENSO classification. As a result, hydrological forecasts based on the ESP technique, in which present initial conditions are combined with historical forcing data, may be plausible for New Zealand catchments.
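
    The ESP paradigm referred to above is simple to express in code. The sketch below is a toy illustration rather than the TopNet configuration used in the study: toy_runoff_model is a hypothetical one-bucket stand-in for the hydrologic model, and historical_forcings is assumed to be a list of (precipitation, temperature) traces, one per historical year.

```python
import numpy as np

def toy_runoff_model(initial_storage, precip, temp):
    """Hypothetical one-bucket model standing in for the real hydrologic model."""
    storage, flows = initial_storage, []
    for p, t in zip(precip, temp):
        storage += p * (1.0 if t > 0 else 0.2)  # crude rain/snow partitioning
        q = 0.1 * storage                       # linear-reservoir outflow
        storage -= q
        flows.append(q)
    return np.array(flows)

def esp_forecast(current_storage, historical_forcings):
    """ESP: pair today's model state with each historical year's forcing trace."""
    return np.array([toy_runoff_model(current_storage, p, t)
                     for p, t in historical_forcings])  # one ensemble member per year
```

    Repeating this for model states archived on the first day of each month yields a hindcast database of the kind described in the abstract.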

  20. Evaluation of annual, global seismicity forecasts, including ensemble models

    NASA Astrophysics Data System (ADS)

    Taroni, Matteo; Zechar, Jeremy; Marzocchi, Warner

    2013-04-01

    In 2009, the Collaboratory for the Study of Earthquake Predictability (CSEP) initiated a prototype global earthquake forecast experiment. Three models participated in this experiment for 2009, 2010, and 2011; each model forecast the number of earthquakes above magnitude 6 in 1x1 degree cells that span the globe. Here we use likelihood-based metrics to evaluate the consistency of the forecasts with the observed seismicity. We compare model performance with statistical tests and a new method based on the peer-to-peer gambling score. The results of the comparisons are used to build ensemble models that are a weighted combination of the individual models. Notably, in these experiments the ensemble model always performs significantly better than the single best-performing model. Our results indicate the following: i) time-varying forecasts, if not updated after each major shock, may not provide significant advantages with respect to time-invariant models in 1-year forecast experiments; ii) the spatial distribution seems to be the most important feature for characterizing the different forecasting performances of the models; iii) the interpretation of consistency tests may be misleading because some good models may be rejected while trivial models pass; iv) proper ensemble modeling seems to be a valuable procedure for obtaining the best-performing model for practical purposes.
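
    The basic ingredients of such an evaluation, a cell-wise Poisson log-likelihood and a likelihood-weighted combination of rate grids, can be sketched compactly. This is a minimal illustration under simplifying assumptions, not CSEP's exact testing or ensemble-weighting scheme.

```python
import numpy as np
from scipy.stats import poisson

def poisson_log_likelihood(rates, counts):
    """Joint Poisson log-likelihood of observed earthquake counts per 1x1 degree cell."""
    return poisson.logpmf(counts, rates).sum()

def likelihood_weighted_ensemble(model_rates, training_counts):
    """Weight each model's rate grid by its exponentiated, normalized log-likelihood
    on a training catalogue and average the grids (a simple ensemble sketch)."""
    scores = np.array([poisson_log_likelihood(r, training_counts) for r in model_rates])
    w = np.exp(scores - scores.max())
    w /= w.sum()
    return np.sum(w[:, None] * np.asarray(model_rates), axis=0), w
```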

  1. Evaluation of the product ratio coherent model in forecasting mortality rates and life expectancy at births by States

    NASA Astrophysics Data System (ADS)

    Shair, Syazreen Niza; Yusof, Aida Yuzi; Asmuni, Nurin Haniah

    2017-05-01

    Coherent mortality forecasting models have recently received increasing attention, particularly in their application to sub-populations. The advantage of coherent models over independent models is the ability to forecast non-divergent mortality for two or more sub-populations. One such coherent model, known as the product-ratio model, was recently developed by [1]. It is an extended version of the functional independent model of [2]. The product-ratio model has been applied in a developed country, Australia [1], and has been extended to a developing nation, Malaysia [3]. While [3] accounted for coherency of mortality rates between genders and ethnic groups, coherency between states in Malaysia has never been explored. This paper forecasts the mortality rates of Malaysian sub-populations by state using the product-ratio coherent model and its independent counterpart, the functional independent model. The forecast accuracies of the two models are evaluated using out-of-sample error measures: the mean absolute forecast error (MAFE) for age-specific death rates and the mean forecast error (MFE) for life expectancy at birth. We employ Malaysian mortality time series data from 1991 to 2014, segregated by age, gender, and state.
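
    The core idea of the product-ratio model can be sketched in a few lines: sub-population rates are split into a common "product" term (the geometric mean across sub-populations) and sub-population-specific "ratio" terms, each of which is then forecast separately. The snippet below shows only this decomposition; the functional time series forecasting of each component is omitted, and the dictionary-of-arrays layout is an assumption.

```python
import numpy as np

def product_ratio_decomposition(rates_by_state):
    """rates_by_state: dict mapping state -> array (n_ages, n_years) of death rates."""
    stacked = np.stack(list(rates_by_state.values()))   # (n_states, n_ages, n_years)
    product = np.exp(np.log(stacked).mean(axis=0))      # geometric mean across states
    ratios = {state: r / product for state, r in rates_by_state.items()}
    return product, ratios
```

    Because the ratio terms are modelled with stationary time series, they cannot diverge in the long run, which is what makes the resulting sub-population forecasts coherent.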

  2. Comparison of the performance and reliability of 18 lumped hydrological models driven by ECMWF rainfall ensemble forecasts: a case study on 29 French catchments

    NASA Astrophysics Data System (ADS)

    Velázquez, Juan Alberto; Anctil, François; Ramos, Maria-Helena; Perrin, Charles

    2010-05-01

    An ensemble forecasting system seeks to assess and to communicate the uncertainty of hydrological predictions by proposing, at each time step, an ensemble of forecasts from which one can estimate the probability distribution of the predictand (the probabilistic forecast), in contrast with a single estimate of the flow, for which no distribution is obtainable (the deterministic forecast). In recent years, efforts towards the development of probabilistic hydrological prediction systems have been made through the adoption of ensembles of numerical weather predictions (NWPs). The additional information provided by the different available Ensemble Prediction Systems (EPS) was evaluated in a hydrological context in various case studies (see the review by Cloke and Pappenberger, 2009). For example, the European ECMWF-EPS was explored in case studies by Roulin et al. (2005), Bartholmes et al. (2005), Jaun et al. (2008), and Renner et al. (2009). The Canadian EC-EPS was also evaluated by Velázquez et al. (2009). Most of these case studies investigate the ensemble predictions of a given hydrological model, set up over a limited number of catchments. Uncertainty from weather predictions is assessed through the use of meteorological ensembles. However, uncertainty from the tested hydrological model and the statistical robustness of the forecasting system when coping with different hydro-meteorological conditions are less frequently evaluated. The aim of this study is to evaluate and compare the performance and the reliability of 18 lumped hydrological models applied to a large number of catchments in an operational ensemble forecasting context. Some of these models were evaluated in a previous study (Perrin et al. 2001) for their ability to simulate streamflow. Results demonstrated that very simple models can achieve a level of performance almost as high as (and sometimes higher than) that of models with more parameters. In the present study, we focus on the ability of the hydrological models to provide reliable probabilistic forecasts of streamflow, based on ensemble weather predictions. The models were therefore adapted to run in a forecasting mode, i.e., to update initial conditions according to the last observed discharge at the time of the forecast, and to cope with ensemble weather scenarios. All models are lumped, i.e., the hydrological behavior is integrated over the spatial scale of the catchment, and run at daily time steps. The complexity of the tested models varies between 3 and 13 parameters. The models are tested on 29 French catchments. Daily streamflow time series extend over 17 months, from March 2005 to July 2006. Catchment areas range between 1470 km2 and 9390 km2, and represent a variety of hydrological and meteorological conditions. The 12 UTC 10-day ECMWF rainfall ensemble (51 members) was used, which led to daily streamflow forecasts for a 9-day lead time. In order to assess the performance and reliability of the hydrological ensemble predictions, we computed the Continuous Ranked Probability Score (CRPS) (Matheson and Winkler, 1976), as well as the reliability diagram (e.g. Wilks, 1995) and the rank histogram (Talagrand et al., 1999). Since the ECMWF deterministic forecasts are also available, the performance of the hydrological forecasting systems was also evaluated by comparing the deterministic score (MAE) with the probabilistic score (CRPS).
The results obtained for the 18 hydrological models and the 29 studied catchments are discussed from the perspective of improving the operational use of ensemble forecasting in hydrology. References: Bartholmes, J. and Todini, E.: Coupling meteorological and hydrological models for flood forecasting, Hydrol. Earth Syst. Sci., 9, 333-346, 2005. Cloke, H. and Pappenberger, F.: Ensemble flood forecasting: a review, J. Hydrol., 375 (3-4), 613-626, 2009. Jaun, S., Ahrens, B., Walser, A., Ewen, T., and Schär, C.: A probabilistic view on the August 2005 floods in the upper Rhine catchment, Nat. Hazards Earth Syst. Sci., 8, 281-291, 2008. Matheson, J. E. and Winkler, R. L.: Scoring rules for continuous probability distributions, Manage. Sci., 22, 1087-1096, 1976. Perrin, C., Michel, C., and Andréassian, V.: Does a large number of parameters enhance model performance? Comparative assessment of common catchment model structures on 429 catchments, J. Hydrol., 242, 275-301, 2001. Renner, M., Werner, M. G. F., Rademacher, S., and Sprokkereef, E.: Verification of ensemble flow forecasts for the River Rhine, J. Hydrol., 376, 463-475, 2009. Roulin, E. and Vannitsem, S.: Skill of medium-range hydrological ensemble predictions, J. Hydrometeorol., 6, 729-744, 2005. Talagrand, O., Vautard, R., and Strauss, B.: Evaluation of the probabilistic prediction systems, in: Proceedings, ECMWF Workshop on Predictability, Shinfield Park, Reading, Berkshire, ECMWF, 1-25, 1999. Velázquez, J. A., Petit, T., Lavoie, A., Boucher, M.-A., Turcotte, R., Fortin, V., and Anctil, F.: An evaluation of the Canadian global meteorological ensemble prediction system for short-term hydrological forecasting, Hydrol. Earth Syst. Sci., 13, 2221-2231, 2009. Wilks, D. S.: Statistical Methods in the Atmospheric Sciences, Academic Press, San Diego, CA, 465 pp., 1995.
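
    The CRPS used for verification above (and in several other records in this collection) can be estimated directly from the ensemble members. A minimal sketch of the standard sample estimator, assuming a single forecast-observation pair:

```python
import numpy as np

def crps_ensemble(members, obs):
    """Sample-based CRPS: E|X - y| - 0.5 * E|X - X'| over ensemble members X."""
    x = np.asarray(members, dtype=float)
    return np.mean(np.abs(x - obs)) - 0.5 * np.mean(np.abs(x[:, None] - x[None, :]))
```

    For a deterministic forecast the second term vanishes and the CRPS reduces to the absolute error, which is why comparing the ensemble CRPS with the deterministic MAE, as done above, is a like-for-like comparison.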

  3. EVALUATING HYDROLOGICAL RESPONSE TO FORECASTED LAND-USE CHANGE: SCENARIO TESTING WITH THE AUTOMATED GEOSPATIAL WATERSHED ASSESSMENT (AGWA) TOOL

    EPA Science Inventory

    Envisioning and evaluating future scenarios has emerged as a critical component of both science and social decision-making. The ability to assess, report, map, and forecast the life support functions of ecosystems is absolutely critical to our capacity to make informed decisions...

  4. Scenario Analysis: Evaluating Biodiversity Response to Forecasted Land-Use Change in the San Pedro River Basin (U.S.-Mexico)

    EPA Science Inventory

    Envisioning and evaluating future scenarios has emerged as a critical component of both science and social decision-making. The ability to assess, report, map, and forecast the life support functions of ecosystems is absolutely critical to our capacity to make informed decisions...

  5. 76 FR 72474 - Self-Regulatory Organizations; NYSE Arca, Inc.; Notice of Filing of Proposed Rule Change To List...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-11-23

    ... has developed a proprietary SectorSAM \\TM\\ quantitative research and evaluation process that forecasts... and short portfolios as dictated by its proprietary SectorSAM quantitative research and evaluation... a proprietary quantitative analysis, to forecast each sector's excess return within a specific time...

  6. Comparative Validation of Realtime Solar Wind Forecasting Using the UCSD Heliospheric Tomography Model

    NASA Technical Reports Server (NTRS)

    MacNeice, Peter; Taktakishvili, Alexandra; Jackson, Bernard; Clover, John; Bisi, Mario; Odstrcil, Dusan

    2011-01-01

    The University of California, San Diego 3D Heliospheric Tomography Model reconstructs the evolution of heliospheric structures, and can make forecasts of solar wind density and velocity up to 72 hours in the future. The latest model version, installed and running in realtime at the Community Coordinated Modeling Center (CCMC), analyzes scintillations of meter wavelength radio point sources recorded by the Solar-Terrestrial Environment Laboratory (STELab) together with realtime measurements of solar wind speed and density recorded by the Advanced Composition Explorer (ACE) Solar Wind Electron Proton Alpha Monitor (SWEPAM). The solution is reconstructed using tomographic techniques and a simple kinematic wind model. Since installation, the CCMC has been recording the model forecasts and comparing them with ACE measurements, and with forecasts made using other heliospheric models hosted by the CCMC. We report the preliminary results of this validation work and comparison with alternative models.

  7. Volcanic Eruption Forecasts From Accelerating Rates of Drumbeat Long-Period Earthquakes

    NASA Astrophysics Data System (ADS)

    Bell, Andrew F.; Naylor, Mark; Hernandez, Stephen; Main, Ian G.; Gaunt, H. Elizabeth; Mothes, Patricia; Ruiz, Mario

    2018-02-01

    Accelerating rates of quasiperiodic "drumbeat" long-period earthquakes (LPs) are commonly reported before eruptions at andesite and dacite volcanoes, and promise insights into the nature of fundamental preeruptive processes and improved eruption forecasts. Here we apply a new Bayesian Markov chain Monte Carlo gamma point process methodology to investigate an exceptionally well-developed sequence of drumbeat LPs preceding a recent large vulcanian explosion at Tungurahua volcano, Ecuador. For more than 24 hr, LP rates increased according to the inverse power law trend predicted by material failure theory, and with a retrospectively forecast failure time that agrees with the eruption onset within error. LPs resulted from repeated activation of a single characteristic source driven by accelerating loading, rather than a distributed failure process, showing that similar precursory trends can emerge from quite different underlying physics. Nevertheless, such sequences have clear potential for improving forecasts of eruptions at Tungurahua and analogous volcanoes.
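
    The inverse power law trend mentioned above underlies the classical "failure forecast method": when the exponent is close to 1, the inverse of the event rate decreases roughly linearly in time and the failure (eruption) time can be estimated by extrapolating that line to zero. The sketch below illustrates only this simple deterministic version; it is not the Bayesian Markov chain Monte Carlo gamma point process used in the study, and the binning window is an assumption.

```python
import numpy as np

def failure_time_from_inverse_rate(event_times, window=3600.0):
    """Bin LP event times, compute the inverse rate per bin, fit a straight line,
    and return the time at which the fitted inverse rate reaches zero."""
    t = np.sort(np.asarray(event_times, dtype=float))
    edges = np.arange(t[0], t[-1] + window, window)
    counts, _ = np.histogram(t, bins=edges)
    mids = 0.5 * (edges[:-1] + edges[1:])
    keep = counts > 0
    slope, intercept = np.polyfit(mids[keep], window / counts[keep], 1)
    return -intercept / slope  # meaningful only if slope < 0 (accelerating rate)
```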

  8. Applications of Machine Learning to Downscaling and Verification

    NASA Astrophysics Data System (ADS)

    Prudden, R.

    2017-12-01

    Downscaling, sometimes known as super-resolution, means converting model data into a more detailed local forecast. It is a problem which could be highly amenable to machine learning approaches, provided that sufficient historical forecast data and observations are available. It is also closely linked to the subject of verification, since improving a forecast requires a way to measure that improvement. This talk will describe some early work towards downscaling Met Office ensemble forecasts, and discuss how the output may be usefully evaluated.

  9. Centralized Storm Information System (CSIS)

    NASA Technical Reports Server (NTRS)

    Norton, C. C.

    1985-01-01

    A final progress report is presented on the Centralized Storm Information System (CSIS). The primary purpose of the CSIS is to demonstrate and evaluate real time interactive computerized data collection, interpretation and display techniques as applied to severe weather forecasting. CSIS objectives pertaining to improved severe storm forecasting and warning systems are outlined. The positive impact that CSIS has had on the National Severe Storms Forecast Center (NSSFC) is discussed. The benefits of interactive processing systems on the forecasting ability of the NSSFC are described.

  10. Evaluation of ensemble precipitation forecasts generated through post-processing in a Canadian catchment

    NASA Astrophysics Data System (ADS)

    Jha, Sanjeev K.; Shrestha, Durga L.; Stadnyk, Tricia A.; Coulibaly, Paulin

    2018-03-01

    Flooding in Canada is often caused by heavy rainfall during the snowmelt period. Hydrologic forecast centers rely on precipitation forecasts obtained from numerical weather prediction (NWP) models to force hydrological models for streamflow forecasting. The uncertainties in raw quantitative precipitation forecasts (QPFs) are enhanced by physiographic and orographic effects over a diverse landscape, particularly in the western catchments of Canada. A Bayesian post-processing approach called rainfall post-processing (RPP), developed in Australia (Robertson et al., 2013; Shrestha et al., 2015), has been applied to assess its forecast performance in a Canadian catchment. Raw QPFs obtained from two sources, the Global Ensemble Forecast System (GEFS) Reforecast 2 project from the National Centers for Environmental Prediction and the Global Deterministic Prediction System (GDPS) from Environment and Climate Change Canada, are used in this study. The study period from January 2013 to December 2015 covered a major flood event in Calgary, Alberta, Canada. Post-processed results show that the RPP is able to remove the bias and reduce the errors of both GEFS and GDPS forecasts. Ensembles generated from the RPP reliably quantify the forecast uncertainty.

  11. The Schaake shuffle: A method for reconstructing space-time variability in forecasted precipitation and temperature fields

    USGS Publications Warehouse

    Clark, M.R.; Gangopadhyay, S.; Hay, L.; Rajagopalan, B.; Wilby, R.

    2004-01-01

    A number of statistical methods that are used to provide local-scale ensemble forecasts of precipitation and temperature do not contain realistic spatial covariability between neighboring stations or realistic temporal persistence for subsequent forecast lead times. To demonstrate this point, output from a global-scale numerical weather prediction model is used in a stepwise multiple linear regression approach to downscale precipitation and temperature to individual stations located in and around four study basins in the United States. Output from the forecast model is downscaled for lead times up to 14 days. Residuals in the regression equation are modeled stochastically to provide 100 ensemble forecasts. The precipitation and temperature ensembles from this approach have a poor representation of the spatial variability and temporal persistence. The spatial correlations for downscaled output are considerably lower than observed spatial correlations at short forecast lead times (e.g., less than 5 days) when there is high accuracy in the forecasts. At longer forecast lead times, the downscaled spatial correlations are close to zero. Similarly, the observed temporal persistence is only partly present at short forecast lead times. A method is presented for reordering the ensemble output in order to recover the space-time variability in precipitation and temperature fields. In this approach, the ensemble members for a given forecast day are ranked and matched with the rank of precipitation and temperature data from days randomly selected from similar dates in the historical record. The ensembles are then reordered to correspond to the original order of the selection of historical data. Using this approach, the observed intersite correlations, intervariable correlations, and the observed temporal persistence are almost entirely recovered. This reordering methodology also has applications for recovering the space-time variability in modeled streamflow. © 2004 American Meteorological Society.
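
    The reordering step described above is compact enough to sketch directly. In the snippet below (a minimal illustration; the array layout is an assumption), each column is a station or variable, each row of `historical` holds the values from one of the randomly selected historical dates, and the ensemble members are re-ranked column by column to match the historical rank structure.

```python
import numpy as np

def schaake_shuffle(ensemble, historical):
    """Reorder ensemble members so their space-time rank structure
    matches that of the selected historical dates.

    ensemble, historical : arrays of shape (n_members, n_stations)
    """
    shuffled = np.empty_like(ensemble)
    for j in range(ensemble.shape[1]):
        sorted_members = np.sort(ensemble[:, j])
        hist_ranks = np.argsort(np.argsort(historical[:, j]))  # rank of each historical day
        shuffled[:, j] = sorted_members[hist_ranks]
    return shuffled
```

    Each station keeps exactly the same set of ensemble values, so the marginal forecast distributions are unchanged; only their assignment to members is altered, which is what restores the intersite and intervariable correlations.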

  12. Using Analog Ensemble to generate spatially downscaled probabilistic wind power forecasts

    NASA Astrophysics Data System (ADS)

    Delle Monache, L.; Shahriari, M.; Cervone, G.

    2017-12-01

    We use the Analog Ensemble (AnEn) method to generate probabilistic 80-m wind power forecasts. We use data from the NCEP GFS (~28 km resolution) and NCEP NAM (12 km resolution). We use forecast data from NAM and GFS, and analysis data from NAM, which enables us to: 1) use a lower-resolution model to create higher-resolution forecasts, and 2) use a higher-resolution model to create higher-resolution forecasts. The former essentially increases computing speed and the latter increases forecast accuracy. An aggregated model of the former can be compared against the latter to measure the accuracy of the AnEn spatial downscaling. The AnEn works by taking a deterministic future forecast and comparing it with past forecasts. The method searches for the best matching estimates within the past forecasts and selects the predictand values corresponding to these past forecasts as the ensemble prediction for the future forecast. Our study is based on predicting wind speed and air density at more than 13,000 grid points in the continental US. We run the AnEn model twice: 1) estimating 80-m wind speed using predictor variables such as temperature, pressure, geopotential height, and the U- and V-components of wind, and 2) estimating air density using predictors such as temperature, pressure, and relative humidity. We use the air density values to correct the standard wind power curves for different values of air density. The standard deviation of the ensemble members (i.e., the ensemble spread) is used as a measure of the difficulty of predicting wind power at different locations. The correlation coefficient between the ensemble spread and the forecast error determines the appropriateness of this measure. This measure is valuable for wind farm developers, as building wind farms in regions with higher predictability will reduce the real-time risks of operating in the electricity markets.
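
    A minimal version of the analog search described above is sketched below. It is a simplification of the AnEn: the operational method matches forecasts over a short time window around each lead time and normalizes each predictor by its standard deviation, whereas the plain weighted Euclidean metric here is an assumption.

```python
import numpy as np

def analog_ensemble(target, past_predictors, past_obs, n_analogs=20, weights=None):
    """Return the n_analogs past observations whose forecasts best match the target forecast.

    target          : array (n_predictors,) - current deterministic forecast
    past_predictors : array (n_past, n_predictors) - archived forecasts
    past_obs        : array (n_past,) - verifying observations (e.g. 80-m wind speed)
    """
    if weights is None:
        weights = np.ones(past_predictors.shape[1])
    dist = np.sqrt((((past_predictors - target) ** 2) * weights).sum(axis=1))
    best = np.argsort(dist)[:n_analogs]
    return past_obs[best]   # the ensemble members; their standard deviation is the "spread"
```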

  13. Error models for official mortality forecasts.

    PubMed

    Alho, J M; Spencer, B D

    1990-09-01

    "The Office of the Actuary, U.S. Social Security Administration, produces alternative forecasts of mortality to reflect uncertainty about the future.... In this article we identify the components and assumptions of the official forecasts and approximate them by stochastic parametric models. We estimate parameters of the models from past data, derive statistical intervals for the forecasts, and compare them with the official high-low intervals. We use the models to evaluate the forecasts rather than to develop different predictions of the future. Analysis of data from 1972 to 1985 shows that the official intervals for mortality forecasts for males or females aged 45-70 have approximately a 95% chance of including the true mortality rate in any year. For other ages the chances are much less than 95%." excerpt

  14. Towards an improved ensemble precipitation forecast: A probabilistic post-processing approach

    NASA Astrophysics Data System (ADS)

    Khajehei, Sepideh; Moradkhani, Hamid

    2017-03-01

    Recently, ensemble post-processing (EPP) has become a commonly used approach for reducing the uncertainty in forcing data and hence in hydrologic simulation. The procedure was introduced to build ensemble precipitation forecasts based on the statistical relationship between observations and forecasts. More specifically, the approach relies on a transfer function that is developed based on a bivariate joint distribution between the observations and the simulations in the historical period. The transfer function is used to post-process the forecast. In this study, we propose a Bayesian EPP approach based on copula functions (COP-EPP) to improve the reliability of the precipitation ensemble forecast. Evaluation of the copula-based method is carried out by comparing the performance of the generated ensemble precipitation with the outputs from an existing procedure, i.e., the mixed-type meta-Gaussian distribution. Monthly precipitation from the Climate Forecast System Reanalysis (CFS) and gridded observations from the Parameter-Elevation Relationships on Independent Slopes Model (PRISM) have been employed to generate the post-processed ensemble precipitation. Deterministic and probabilistic verification frameworks are utilized in order to evaluate the outputs from the proposed technique. The distribution of seasonal precipitation for the ensemble generated by the copula-based technique is compared to the observations and the raw forecasts for three sub-basins located in the Western United States. Results show that both techniques are successful in producing reliable and unbiased ensemble forecasts; however, the COP-EPP demonstrates considerable improvement in both deterministic and probabilistic verification, in particular in characterizing the extreme events in wet seasons.
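
    To make the idea of a copula-based transfer function concrete, the sketch below fits a simple bivariate Gaussian copula to historical forecast-observation pairs and samples an "observation-like" ensemble conditional on a new forecast. This is a generic illustration, not the authors' COP-EPP formulation; the empirical-CDF transforms and the back-transform via historical quantiles are assumptions.

```python
import numpy as np
from scipy import stats

def copula_postprocess(hist_fcst, hist_obs, new_fcst, n_members=50, seed=0):
    """Sample an ensemble for new_fcst from a Gaussian copula fitted on history."""
    rng = np.random.default_rng(seed)

    def normal_scores(x, ref):
        # empirical CDF of ref evaluated at x, mapped to standard normal scores
        u = np.searchsorted(np.sort(ref), x, side="right") / (len(ref) + 1.0)
        return stats.norm.ppf(np.clip(u, 1e-6, 1 - 1e-6))

    zf, zo = normal_scores(hist_fcst, hist_fcst), normal_scores(hist_obs, hist_obs)
    rho = np.corrcoef(zf, zo)[0, 1]
    z_new = normal_scores(np.atleast_1d(new_fcst), hist_fcst)[0]
    z_samples = rng.normal(rho * z_new, np.sqrt(1.0 - rho ** 2), n_members)
    return np.quantile(hist_obs, stats.norm.cdf(z_samples))  # back to precipitation units
```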

  15. Short-term streamflow forecasting with global climate change implications A comparative study between genetic programming and neural network models

    NASA Astrophysics Data System (ADS)

    Makkeasorn, A.; Chang, N. B.; Zhou, X.

    2008-05-01

    Sustainable water resources management is a critically important priority across the globe. While water scarcity limits the uses of water in many ways, floods may also result in property damage and loss of life. To use the limited amount of water more efficiently in a changing world, and to provide adequate lead time for flood warning, these issues have led us to seek advanced techniques for improving streamflow forecasting on a short-term basis. This study emphasizes the inclusion of sea surface temperature (SST) in addition to the spatio-temporal rainfall distribution via the Next Generation Radar (NEXRAD), meteorological data via local weather stations, and historical stream data via USGS gage stations to collectively forecast discharges in a semi-arid watershed in south Texas. Two types of artificial intelligence models, genetic programming (GP) and neural network (NN) models, were employed comparatively. Four numerical evaluators were used to evaluate the validity of a suite of forecasting models. Research findings indicate that GP-derived streamflow forecasting models were generally favored in the assessment, and that both SST and meteorological data significantly improve forecasting accuracy. Among several scenarios, NEXRAD rainfall data proved most effective for a 3-day forecast, and the SST Gulf-to-Atlantic index shows a larger impact on the streamflow forecasts than the SST Gulf-to-Pacific index. The most forward-looking GP-derived models can even perform a 30-day streamflow forecast ahead of time with an r-squared of 0.84 and an RMS error of 5.4 in our study.

  16. Satellite freeze forecast system

    NASA Technical Reports Server (NTRS)

    Martsolf, J. D. (Principal Investigator)

    1983-01-01

    Provisions for back-up operations for the satellite freeze forecast system are discussed, including software and hardware maintenance and DS/1000-1V linkage; troubleshooting; and digitized radar usage. The documentation developed; dissemination of data products via television and the IFAS computer network; data base management; predictive models; the installation of and progress towards the operational status of key stations; and digital data acquisition are also considered. The addition of dew point temperature into the P-model is outlined.

  17. Comparative verification between GEM model and official aviation terminal forecasts

    NASA Technical Reports Server (NTRS)

    Miller, Robert G.

    1988-01-01

    The Generalized Exponential Markov (GEM) model uses the local standard airways observation (SAO) to predict hour-by-hour the following elements: temperature, pressure, dew point depression, first and second cloud-layer height and amount, ceiling, total cloud amount, visibility, wind, and present weather conditions. GEM is superior to persistence at all projections for all elements in a large independent sample. A minute-by-minute GEM forecasting system utilizing the Automated Weather Observation System (AWOS) is under development.

  18. Economic Models for Projecting Industrial Capacity for Defense Production: A Review

    DTIC Science & Technology

    1983-02-01

    macroeconomic forecast to establish the level of civilian final demand; all use the DoD Bridge Table to allocate budget category outlays to industries. Civilian...output table.' 3. Macroeconomic Assumptions and the Prediction of Final Demand All input-output models require as a starting point a prediction of final... macroeconomic forecast of GNP and its components and (2) a methodology to transform these forecast values of consumption, investment, exports, etc. into

  19. Medium-range reference evapotranspiration forecasts for the contiguous United States based on multi-model numerical weather predictions

    NASA Astrophysics Data System (ADS)

    Medina, Hanoi; Tian, Di; Srivastava, Puneet; Pelosi, Anna; Chirico, Giovanni B.

    2018-07-01

    Reference evapotranspiration (ET0) plays a fundamental role in agronomic, forestry, and water resources management. Estimating and forecasting ET0 have long been recognized as a major challenge for researchers and practitioners in these communities. This work explored the potential of multiple leading numerical weather predictions (NWPs) for estimating and forecasting summer ET0 at 101 U.S. Regional Climate Reference Network stations over nine climate regions across the contiguous United States (CONUS). Three leading global NWP model forecasts from the THORPEX Interactive Grand Global Ensemble (TIGGE) dataset were used in this study, including the single-model ensemble forecasts from the European Centre for Medium-Range Weather Forecasts (EC), the National Centers for Environmental Prediction Global Forecast System (NCEP), and the United Kingdom Meteorological Office forecasts (MO), as well as multi-model ensemble forecasts from combinations of these NWP models. A regression calibration was employed to bias-correct the ET0 forecasts. The impact of individual forecast variables on ET0 forecasts was also evaluated. The results showed that the EC forecasts provided the least error and the highest skill and reliability, followed by the MO and NCEP forecasts. The multi-model ensembles constructed from the combination of EC and MO forecasts provided slightly better performance than the single-model EC forecasts. The regression process greatly improved ET0 forecast performance, particularly for regions with stations near the coast or with complex orography. The performance of the EC forecasts was only slightly influenced by the number of ensemble members, particularly at short lead times. Even with fewer ensemble members, EC still performed better than the other two NWPs. Errors in the radiation forecasts, followed by those in the wind, had the most detrimental effects on the ET0 forecast performance.
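
    The regression calibration mentioned above can be illustrated with a very small sketch: fit a linear relation between the ensemble-mean ET0 forecast and the observed station ET0 over a training period, then apply it to each member of a new ensemble. This is a generic bias-correction sketch, not necessarily the exact regression configuration used in the paper.

```python
import numpy as np

def regression_calibrate(train_ens_mean, train_obs, new_ensemble):
    """Fit obs = a + b * ensemble_mean on training data and correct a new ensemble."""
    slope, intercept = np.polyfit(train_ens_mean, train_obs, 1)
    return intercept + slope * np.asarray(new_ensemble)
```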

  20. A short-term ensemble wind speed forecasting system for wind power applications

    NASA Astrophysics Data System (ADS)

    Baidya Roy, S.; Traiteur, J. J.; Callicutt, D.; Smith, M.

    2011-12-01

    This study develops an adaptive, blended forecasting system to provide accurate wind speed forecasts 1 hour ahead of time for wind power applications. The system consists of an ensemble of 21 forecasts with different configurations of the Weather Research and Forecasting Single Column Model (WRFSCM) and a persistence model. The ensemble is calibrated against observations for a 2 month period (June-July, 2008) at a potential wind farm site in Illinois using the Bayesian Model Averaging (BMA) technique. The forecasting system is evaluated against observations for August 2008 at the same site. The calibrated ensemble forecasts significantly outperform the forecasts from the uncalibrated ensemble while significantly reducing forecast uncertainty under all environmental stability conditions. The system also generates significantly better forecasts than persistence, autoregressive (AR) and autoregressive moving average (ARMA) models during the morning transition and the diurnal convective regimes. This forecasting system is computationally more efficient than traditional numerical weather prediction models and can generate a calibrated forecast, including model runs and calibration, in approximately 1 minute. Currently, hour-ahead wind speed forecasts are almost exclusively produced using statistical models. However, numerical models have several distinct advantages over statistical models including the potential to provide turbulence forecasts. Hence, there is an urgent need to explore the role of numerical models in short-term wind speed forecasting. This work is a step in that direction and is likely to trigger a debate within the wind speed forecasting community.
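
    The BMA calibration step referred to above combines the 21 members into a weighted predictive distribution. Below is a minimal sketch of Gaussian BMA fitted by expectation-maximization; it omits the member-specific bias correction and uses a single shared spread, both simplifying assumptions relative to the standard BMA formulation.

```python
import numpy as np
from scipy.stats import norm

def bma_gaussian(forecasts, obs, n_iter=200):
    """EM for BMA weights and a shared Gaussian spread.

    forecasts : array (n_times, n_members); obs : array (n_times,)
    """
    n, k = forecasts.shape
    w = np.full(k, 1.0 / k)
    sigma = np.std(obs - forecasts.mean(axis=1))
    for _ in range(n_iter):
        dens = w * norm.pdf(obs[:, None], loc=forecasts, scale=sigma)   # E-step
        z = dens / dens.sum(axis=1, keepdims=True)
        w = z.mean(axis=0)                                              # M-step: weights
        sigma = np.sqrt((z * (obs[:, None] - forecasts) ** 2).sum() / n)  # shared spread
    return w, sigma
```

    For a new set of member forecasts f_1..f_K, the calibrated predictive density is then the mixture sum_k w_k * N(f_k, sigma^2).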

  1. A retrospective evaluation of traffic forecasting techniques.

    DOT National Transportation Integrated Search

    2016-08-01

    Traffic forecasting techniques, such as extrapolation of previous years' traffic volumes, regional travel demand models, or local trip generation rates, help planners determine needed transportation improvements. Thus, knowing the accuracy of t...

  2. Air Quality Forecasting through Different Statistical and Artificial Intelligence Techniques

    NASA Astrophysics Data System (ADS)

    Mishra, D.; Goyal, P.

    2014-12-01

    Urban air pollution forecasting has emerged as an acute problem in recent years because of severe environmental degradation due to the increase in harmful air pollutants in the ambient atmosphere. In this study, different statistical and artificial intelligence techniques are used for the forecasting and analysis of air pollution over the Delhi urban area. These techniques are principal component analysis (PCA), multiple linear regression (MLR), and artificial neural networks (ANN), and the forecasts are in good agreement with the concentrations observed by the Central Pollution Control Board (CPCB) at different locations in Delhi. However, such methods suffer from disadvantages: they provide limited accuracy because they are unable to predict the extreme points, i.e., the pollution maxima and minima cannot be determined using such an approach, and they are an inefficient route to better forecasts. With advances in technology and research, an alternative to these traditional methods has been proposed: the coupling of statistical techniques with artificial intelligence (AI) for forecasting purposes. The coupling of PCA, ANN, and fuzzy logic is used for forecasting air pollutants over the Delhi urban area. The statistical measures of the proposed model, e.g., the correlation coefficient (R), normalized mean square error (NMSE), fractional bias (FB), and index of agreement (IOA), compare favourably with those of all the other models. Hence, the coupling of statistical and artificial intelligence techniques can be used for forecasting air pollutants over urban areas.
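
    The four evaluation statistics named above are standard and easy to compute; the sketch below uses one widely used set of definitions (sign and normalization conventions for FB and NMSE vary somewhat across the literature).

```python
import numpy as np

def evaluation_measures(pred, obs):
    """Correlation (R), normalized mean square error (NMSE),
    fractional bias (FB), and index of agreement (IOA)."""
    pred, obs = np.asarray(pred, float), np.asarray(obs, float)
    r = np.corrcoef(pred, obs)[0, 1]
    nmse = np.mean((pred - obs) ** 2) / (np.mean(pred) * np.mean(obs))
    fb = 2.0 * (np.mean(obs) - np.mean(pred)) / (np.mean(obs) + np.mean(pred))
    ioa = 1.0 - np.sum((pred - obs) ** 2) / np.sum(
        (np.abs(pred - obs.mean()) + np.abs(obs - obs.mean())) ** 2)
    return {"R": r, "NMSE": nmse, "FB": fb, "IOA": ioa}
```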

  3. Real-time forecasting of an epidemic using a discrete time stochastic model: a case study of pandemic influenza (H1N1-2009)

    PubMed Central

    2011-01-01

    Background Real-time forecasting of epidemics, especially those based on a likelihood-based approach, is understudied. This study aimed to develop a simple method that can be used for the real-time epidemic forecasting. Methods A discrete time stochastic model, accounting for demographic stochasticity and conditional measurement, was developed and applied as a case study to the weekly incidence of pandemic influenza (H1N1-2009) in Japan. By imposing a branching process approximation and by assuming the linear growth of cases within each reporting interval, the epidemic curve is predicted using only two parameters. The uncertainty bounds of the forecasts are computed using chains of conditional offspring distributions. Results The quality of the forecasts made before the epidemic peak appears largely to depend on obtaining valid parameter estimates. The forecasts of both weekly incidence and final epidemic size greatly improved at and after the epidemic peak with all the observed data points falling within the uncertainty bounds. Conclusions Real-time forecasting using the discrete time stochastic model with its simple computation of the uncertainty bounds was successful. Because of the simplistic model structure, the proposed model has the potential to additionally account for various types of heterogeneity, time-dependent transmission dynamics and epidemiological details. The impact of such complexities on forecasting should be explored when the data become available as part of the disease surveillance. PMID:21324153
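
    The flavour of such a forecast can be conveyed with a toy branching-process simulation: next week's case count is drawn from a Poisson distribution whose mean is the current count times a reproduction number, and the spread of many simulated trajectories provides the uncertainty bounds. This is an illustration of the general idea only, not the paper's discrete time stochastic model with conditional measurement; the parameter values are assumptions.

```python
import numpy as np

def forecast_weekly_incidence(current_cases, r_weekly=1.4, weeks=8, n_sims=2000, seed=1):
    """Simulate trajectories of weekly case counts and summarise them."""
    rng = np.random.default_rng(seed)
    sims = np.zeros((n_sims, weeks), dtype=int)
    for s in range(n_sims):
        c = current_cases
        for w in range(weeks):
            c = rng.poisson(r_weekly * c)   # offspring of this week's cases
            sims[s, w] = c
    median = np.median(sims, axis=0)
    lower, upper = np.percentile(sims, [2.5, 97.5], axis=0)
    return median, lower, upper
```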

  4. On some methods for assessing earthquake predictions

    NASA Astrophysics Data System (ADS)

    Molchan, G.; Romashkova, L.; Peresan, A.

    2017-09-01

    A regional approach to the problem of assessing earthquake predictions inevitably faces a deficit of data. We point out some basic limits of assessment methods reported in the literature, considering the practical case of the performance of the CN pattern recognition method in the prediction of large Italian earthquakes. Along with classical hypothesis testing, a new game approach, the so-called parimutuel gambling (PG) method, is examined. The PG, originally proposed for the evaluation of probabilistic earthquake forecasts, has recently been adapted for the case of 'alarm-based' CN prediction. The PG approach is a non-standard method; therefore it deserves careful examination and theoretical analysis. We show that the alarm-based PG version leads to an almost complete loss of information about predicted earthquakes (even for a large sample). As a result, any conclusions based on the alarm-based PG approach are not to be trusted. We also show that the original probabilistic PG approach does not necessarily identify the genuine forecast correctly among competing seismicity rate models, even when applied to extensive data.

  5. Potential Improvements in Space Weather Forecasting using New Products Developed for the Upcoming DSCOVR Solar Wind Mission

    NASA Astrophysics Data System (ADS)

    Cash, M. D.; Biesecker, D. A.; Reinard, A. A.

    2013-05-01

    The Deep Space Climate Observatory (DSCOVR) mission, which is scheduled for launch in late 2014, will provide real-time solar wind thermal plasma and magnetic measurements to ensure continuous monitoring for space weather forecasting. DSCOVR will be located at the L1 Lagrangian point and will include a Faraday cup to measure the proton and alpha components of the solar wind and a triaxial fluxgate magnetometer to measure the magnetic field in three dimensions. The real-time data provided by DSCOVR will be used to generate space weather applications and products that have been demonstrated to be highly accurate and provide actionable information for customers. We present several future space weather products currently under evaluation for development. New potential space weather products for use with DSCOVR real-time data include: automated shock detection, more accurate L1 to Earth delay time, automatic solar wind regime identification, and prediction of rotations in solar wind Bz within magnetic clouds. Additional ideas from the community on future space weather products are encouraged.

  6. A 30-day forecast experiment with the GISS model and updated sea surface temperatures

    NASA Technical Reports Server (NTRS)

    Spar, J.; Atlas, R.; Kuo, E.

    1975-01-01

    The GISS model was used to compute two parallel global 30-day forecasts for the month January 1974. In one forecast, climatological January sea surface temperatures were used, while in the other observed sea temperatures were inserted and updated daily. A comparison of the two forecasts indicated no clear-cut beneficial effect of daily updating of sea surface temperatures. Despite the rapid decay of daily predictability, the model produced a 30-day mean forecast for January 1974 that was generally superior to persistence and climatology when evaluated over either the globe or the Northern Hemisphere, but not over smaller regions.

  7. Use of the Box and Jenkins time series technique in traffic forecasting

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nihan, N.L.; Holmesland, K.O.

    The use of recently developed time series techniques for short-term traffic volume forecasting is examined. A data set containing monthly volumes on a freeway segment for 1968-76 is used to fit a time series model. The resultant model is used to forecast volumes for 1977. The forecast volumes are then compared with actual volumes in 1977. Time series techniques can be used to develop highly accurate and inexpensive short-term forecasts. The feasibility of using these models to evaluate the effects of policy changes or other outside impacts is considered. (1 diagram, 1 map, 14 references, 2 tables)
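
    A Box-Jenkins exercise of this kind is straightforward to reproduce with modern tools. The sketch below fits a seasonal ARIMA to synthetic monthly volumes standing in for the 1968-76 freeway data (both the data and the particular (1,0,0)x(0,1,1,12) order are assumptions, not the model identified in the paper) and forecasts the following twelve months.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.statespace.sarimax import SARIMAX

# Hypothetical monthly freeway volumes for 1968-1976 (stand-in for the paper's data)
idx = pd.date_range("1968-01-01", periods=108, freq="MS")
rng = np.random.default_rng(0)
volumes = pd.Series(50_000 + 200 * np.arange(108)
                    + 5_000 * np.sin(2 * np.pi * idx.month.to_numpy() / 12)
                    + rng.normal(0, 1_500, 108), index=idx)

# One plausible Box-Jenkins identification: ARIMA(1,0,0)(0,1,1)[12]
fit = SARIMAX(volumes, order=(1, 0, 0), seasonal_order=(0, 1, 1, 12)).fit(disp=False)
forecast_1977 = fit.forecast(steps=12)      # monthly volume forecasts for 1977
print(forecast_1977.round(0))
```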

  8. Application of Second-Moment Source Analysis to Three Problems in Earthquake Forecasting

    NASA Astrophysics Data System (ADS)

    Donovan, J.; Jordan, T. H.

    2011-12-01

    Though earthquake forecasting models have often represented seismic sources as space-time points (usually hypocenters), a more complete hazard analysis requires the consideration of finite-source effects, such as rupture extent, orientation, directivity, and stress drop. The most compact source representation that includes these effects is the finite moment tensor (FMT), which approximates the degree-two polynomial moments of the stress glut by its projection onto the seismic (degree-zero) moment tensor. This projection yields a scalar space-time source function whose degree-one moments define the centroid moment tensor (CMT) and whose degree-two moments define the FMT. We apply this finite-source parameterization to three forecasting problems. The first is the question of hypocenter bias: can we reject the null hypothesis that the conditional probability of hypocenter location is uniformly distributed over the rupture area? This hypothesis is currently used to specify rupture sets in the "extended" earthquake forecasts that drive simulation-based hazard models, such as CyberShake. Following McGuire et al. (2002), we test the hypothesis using the distribution of FMT directivity ratios calculated from a global data set of source slip inversions. The second is the question of source identification: given an observed FMT (and its errors), can we identify it with an FMT in the complete rupture set that represents an extended fault-based rupture forecast? Solving this problem will facilitate operational earthquake forecasting, which requires the rapid updating of earthquake triggering and clustering models. Our proposed method uses the second-order uncertainties as a norm on the FMT parameter space to identify the closest member of the hypothetical rupture set and to test whether this closest member is an adequate representation of the observed event. Finally, we address the aftershock excitation problem: given a mainshock, what is the spatial distribution of aftershock probabilities? The FMT representation allows us to generalize the models typically used for this purpose (e.g., marked point process models, such as ETAS), which will again be necessary in operational earthquake forecasting. To quantify aftershock probabilities, we compare mainshock FMTs with the first and second spatial moments of weighted aftershock hypocenters. We will describe applications of these results to the Uniform California Earthquake Rupture Forecast, version 3, which is now under development by the Working Group on California Earthquake Probabilities.

  9. An Objective Verification of the North American Mesoscale Model for Kennedy Space Center and Cape Canaveral Air Force Station

    NASA Technical Reports Server (NTRS)

    Bauman, William H., III

    2010-01-01

    The 45th Weather Squadron (45 WS) Launch Weather Officers use the 12-km resolution North American Mesoscale (NAM) model (MesoNAM) text and graphical product forecasts extensively to support launch weather operations. However, the actual performance of the model at Kennedy Space Center (KSC) and Cape Canaveral Air Force Station (CCAFS) has not been measured objectively. In order to have tangible evidence of model performance, the 45 WS tasked the Applied Meteorology Unit to conduct a detailed statistical analysis of model output compared to observed values. The model products are provided to the 45 WS by ACTA, Inc. and include hourly forecasts from 0 to 84 hours based on model initialization times of 00, 06, 12 and 18 UTC. The objective analysis compared the MesoNAM forecast winds, temperature and dew point, as well as the changes in these parameters over time, to the observed values from the sensors in the KSC/CCAFS wind tower network. The objective statistics will give the forecasters knowledge of the model's strengths and weaknesses, which will result in improved forecasts for operations.

  10. National Water Model assessment for water management needs over the Western United States.

    NASA Astrophysics Data System (ADS)

    Viterbo, F.; Thorstensen, A.; Cifelli, R.; Hughes, M.; Johnson, L.; Gochis, D.; Wood, A.; Nowak, K.; Dahm, K.

    2017-12-01

    The NOAA National Water Model (NWM) became operational in August 2016, providing the first ever real-time, distributed, high-resolution forecasts for the continental United States. Since the model predictions occur at the CONUS scale, there is a need to evaluate the NWM in different regions to assess the wide variety and heterogeneity of hydrological processes that are included (e.g., snowmelt, ice freezing, flash flood events). In particular, to address water management needs in the western U.S., a collaborative project between the Bureau of Reclamation, NOAA, and NCAR is ongoing to assess the NWM performance for reservoir inflow forecasting needs and water management operations. In this work, the NWM is evaluated using different forecast ranges (short to medium) and retrospective historical runs forced by North American Land Data Assimilation System (NLDAS) analysis to assess NWM skill over key headwater watersheds in the western U.S. that are of interest to the Bureau of Reclamation. The streamflow results are analyzed and compared with the available observations at the gauge sites, evaluating different NWM operational versions together with the existing local River Forecast Center forecasts. The NWM uncertainty is also considered by evaluating the propagation of precipitation forcing uncertainties into the resulting hydrograph. In addition, the possible advantages of high-resolution distributed output variables (such as soil moisture and evapotranspiration fluxes) are investigated to determine the utility of such information for water managers, in terms of watershed characteristics, in areas that traditionally have not had any forecast information. The results highlight the NWM's ability to provide high-resolution forecast information in space and time. As anticipated, the performance is best in regions that are dominated by natural flows and where the model has benefited from parameter calibration efforts. In highly regulated basins, water management operations result in NWM overestimation of peak flows and recession curves that are too fast. As a future project goal, some reforecasts will be run at target locations, ingesting water management information into the NWM and comparing the new results with the current operational forecasts.
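
    For gauge-by-gauge streamflow comparisons of this kind, a commonly used skill measure (not named in the abstract, so its use here is an assumption) is the Nash-Sutcliffe efficiency; a minimal sketch:

```python
import numpy as np

def nash_sutcliffe(simulated, observed):
    """NSE = 1 means a perfect match; 0 means no better than the observed mean."""
    sim, obs = np.asarray(simulated, float), np.asarray(observed, float)
    return 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)
```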

  11. Utility of flood warning systems for emergency management

    NASA Astrophysics Data System (ADS)

    Molinari, Daniela; Ballio, Francesco; Menoni, Scira

    2010-05-01

    The presentation is focused on a simple and crucial question for warning systems: are flood and hydrological modelling and forecasting helpful for managing flood events? Indeed, it is well known that a warning process can be invalidated by inadequate forecasts, so the accuracy and robustness of the forecasting model is a key issue for any flood warning procedure. However, one question arises from this perspective: when can forecasts be considered adequate? According to Murphy (1993, Wea. Forecasting 8, 281-293), forecasts hold no intrinsic value but acquire it through their ability to influence the decisions made by their users. Moreover, the value of a forecast depends on the particular problem at stake and is therefore multifaceted. As a result, forecast verification should not be seen as a universal process; instead, it should be tailored to the particular context in which forecasts are used. This presentation focuses on warning problems in mountain regions, where the short lead times distinctive of flood events make the provision of adequate forecasts particularly significant. In this context, the quality of a forecast is linked to its capability to reduce the impact of a flood by improving the correctness of the decision about whether to issue a warning, as well as of the implementation of a proper set of actions aimed at lowering potential flood damages. The present study evaluates the performance of a real flood forecasting system from this perspective. In detail, a back-analysis of past flood events was carried out and available verification tools were applied. The final objective was to evaluate the system's ability to support appropriate decisions with respect not only to the flood characteristics but also to the peculiarities of the area at risk and to the uncertainty of forecasts. This meant also considering flood damages and forecasting uncertainty among the decision variables. Last but not least, the presentation explains how the procedure implemented in the case study could support the definition of a proper warning rule.

  12. A systematic review of studies on forecasting the dynamics of influenza outbreaks

    PubMed Central

    Nsoesie, Elaine O; Brownstein, John S; Ramakrishnan, Naren; Marathe, Madhav V

    2014-01-01

    Forecasting the dynamics of influenza outbreaks could be useful for decision-making regarding the allocation of public health resources. Reliable forecasts could also aid in the selection and implementation of interventions to reduce morbidity and mortality due to influenza illness. This paper reviews methods for influenza forecasting proposed during previous influenza outbreaks and those evaluated in hindsight. We discuss the various approaches, in addition to the variability in measures of accuracy and precision of predicted measures. PubMed and Google Scholar searches for articles on influenza forecasting retrieved sixteen studies that matched the study criteria. We focused on studies that aimed at forecasting influenza outbreaks at the local, regional, national, or global level. The selected studies spanned a wide range of regions including USA, Sweden, Hong Kong, Japan, Singapore, United Kingdom, Canada, France, and Cuba. The methods were also applied to forecast a single measure or multiple measures. Typical measures predicted included peak timing, peak height, daily/weekly case counts, and outbreak magnitude. Due to differences in measures used to assess accuracy, a single estimate of predictive error for each of the measures was difficult to obtain. However, collectively, the results suggest that these diverse approaches to influenza forecasting are capable of capturing specific outbreak measures with some degree of accuracy given reliable data and correct disease assumptions. Nonetheless, several of these approaches need to be evaluated and their performance quantified in real-time predictions. PMID:24373466

  13. A systematic review of studies on forecasting the dynamics of influenza outbreaks.

    PubMed

    Nsoesie, Elaine O; Brownstein, John S; Ramakrishnan, Naren; Marathe, Madhav V

    2014-05-01

    Forecasting the dynamics of influenza outbreaks could be useful for decision-making regarding the allocation of public health resources. Reliable forecasts could also aid in the selection and implementation of interventions to reduce morbidity and mortality due to influenza illness. This paper reviews methods for influenza forecasting proposed during previous influenza outbreaks and those evaluated in hindsight. We discuss the various approaches, in addition to the variability in measures of accuracy and precision of predicted measures. PubMed and Google Scholar searches for articles on influenza forecasting retrieved sixteen studies that matched the study criteria. We focused on studies that aimed at forecasting influenza outbreaks at the local, regional, national, or global level. The selected studies spanned a wide range of regions including USA, Sweden, Hong Kong, Japan, Singapore, United Kingdom, Canada, France, and Cuba. The methods were also applied to forecast a single measure or multiple measures. Typical measures predicted included peak timing, peak height, daily/weekly case counts, and outbreak magnitude. Due to differences in measures used to assess accuracy, a single estimate of predictive error for each of the measures was difficult to obtain. However, collectively, the results suggest that these diverse approaches to influenza forecasting are capable of capturing specific outbreak measures with some degree of accuracy given reliable data and correct disease assumptions. Nonetheless, several of these approaches need to be evaluated and their performance quantified in real-time predictions. © 2013 The Authors. Influenza and Other Respiratory Viruses Published by John Wiley & Sons Ltd.

  14. Drought mitigation in Australia: reducing the losses but not removing the hazard

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Heathcote, R.L.

    This paper presents a brief history of drought in Australia, pointing up some parallels and contrasts with the North American experience. It then outlines the various strategies (technological and nontechnological) that have been adopted to try to mitigate drought. It reviews current thinking on the effects of increasing atmospheric carbon dioxide on the Australian climate and their relevance to agricultural and pastoral activities through possible modification of the incidence and intensity of drought. Finally, it evaluates the history of technological adjustments to drought stresses and tries to forecast the success or failure of such adjustments to foreseeable climate change.

  15. Monitoring Users' Satisfactions of the NOAA NWS Climate Products and Services

    NASA Astrophysics Data System (ADS)

    Horsfall, F. M.; Timofeyeva, M. M.; Dixon, S.; Meyers, J. C.

    2011-12-01

    NOAA's National Weather Service (NWS) Climate Services Division (CSD) ensures the relevance of NWS climate products and services. There are several ongoing efforts to identify the level of user satisfaction. One of these efforts includes periodic surveys conducted by the Claes Fornell International (CFI) Group using the American Customer Satisfaction Index (ACSI), which is "the only uniform, national, cross-industry measure of satisfaction with the quality of goods and services available in the United States" (http://www.cfigroup.com/acsi/overview.asp). The CFI Group conducted NWS Climate Products and Services surveys in 2004 and 2009. In 2010, a routine was established for periodic assessment of customer satisfaction. From 2010 onward, yearly surveys will cover the major climate products and services, and an expanded suite of climate products will be surveyed every other year. Each survey evaluated customer satisfaction with a range of NWS climate services, data, and products, including Climate Prediction Center (CPC) outlooks, drought monitoring, and ENSO monitoring and forecasts, as well as NWS local climate data and forecast products and services. The survey results provide insight into the NWS climate customer base and their requirements for climate services. They also evaluate whether we are meeting the needs of customers and how easily they understand routine climate services, forecasts, and outlooks. In addition, specific topics, such as NWS forecast product category names, the probabilistic nature of climate products, and interpretation issues, were addressed to assess how our users interpret prediction terminology. This paper provides an analysis of the following products: hazards, extended-range, long-lead and drought outlooks, El Nino Southern Oscillation monitoring and predictions, as well as local climate data products. Two key issues make comparing the different surveys challenging: the inconsistent suite of characteristics measured and the different numbers of respondents collected for each survey. Despite these two factors contributing to uncertainty in the results, CSD observed a general improvement in customer satisfaction. Although all NWS climate products have competitive scores, the leading ACSIs are for NWS drought products and climate surface observation products. Overall, the survey results identify requirements for improving existing NWS climate services and introducing new ones. To date, the 2011 survey results have not been evaluated, but they will be included in the conference presentation. A key point from the initial 2011 survey results was that the climate section captured the greatest interest (as measured by number of respondents) among the customers of NWS products and services.

  16. Post LANDSAT D Advanced Concept Evaluation (PLACE). [with emphasis on mission planning, technological forecasting, and user requirements

    NASA Technical Reports Server (NTRS)

    1977-01-01

    An outline is given of the mission objectives and requirements, system elements, system concepts, technology requirements and forecasting, and priority analysis for LANDSAT D. User requirements and mission analysis and technological forecasting are emphasized. Mission areas considered include agriculture, range management, forestry, geology, land use, water resources, environmental quality, and disaster assessment.

  17. The net benefits of human-ignited wildfire forecasting: the case of Tribal land units in the United States

    PubMed Central

    Prestemon, Jeffrey P.; Butry, David T.; Thomas, Douglas S.

    2017-01-01

    Research shows that some categories of human-ignited wildfires might be forecastable, due to their temporal clustering, with the possibility that resources could be pre-deployed to help reduce the incidence of such wildfires. We estimated several kinds of incendiary and other human-ignited wildfire forecast models at the weekly time step for tribal land units in the United States, evaluating their forecast skill out of sample. Analyses show that an Autoregressive Conditional Poisson (ACP) model of both incendiary and non-incendiary human-ignited wildfires is more accurate out of sample compared to alternatives, and the simplest of the ACP models performed the best. Additionally, an ensemble of these and simpler, less analytically intensive approaches performed even better. Wildfire hotspot forecast models using all model types were evaluated in a simulation mode to assess the net benefits of forecasts in the context of law enforcement resource reallocations. Our analyses show that such hotspot tools could yield large positive net benefits for the tribes in terms of suppression expenditures averted for incendiary wildfires but that the hotspot tools were less likely to be beneficial for addressing outbreaks of non-incendiary human-ignited wildfires. PMID:28769549

  18. The net benefits of human-ignited wildfire forecasting: the case of Tribal land units in the United States.

    PubMed

    Prestemon, Jeffrey P; Butry, David T; Thomas, Douglas S

    2016-01-01

    Research shows that some categories of human-ignited wildfires might be forecastable, due to their temporal clustering, with the possibility that resources could be pre-deployed to help reduce the incidence of such wildfires. We estimated several kinds of incendiary and other human-ignited wildfire forecast models at the weekly time step for tribal land units in the United States, evaluating their forecast skill out of sample. Analyses show that an Autoregressive Conditional Poisson (ACP) model of both incendiary and non-incendiary human-ignited wildfires is more accurate out of sample compared to alternatives, and the simplest of the ACP models performed the best. Additionally, an ensemble of these and simpler, less analytically intensive approaches performed even better. Wildfire hotspot forecast models using all model types were evaluated in a simulation mode to assess the net benefits of forecasts in the context of law enforcement resource reallocations. Our analyses show that such hotspot tools could yield large positive net benefits for the tribes in terms of suppression expenditures averted for incendiary wildfires but that the hotspot tools were less likely to be beneficial for addressing outbreaks of non-incendiary human-ignited wildfires.
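
    The core of the ACP approach above is a Poisson count model whose conditional mean depends on both the previous week's count and the previous conditional mean. The Python sketch below illustrates an ACP(1,1) recursion fitted by maximum likelihood on simulated weekly ignition counts; the parameter values, starting guesses and the absence of covariates are illustrative assumptions, not the authors' specification.

      # Minimal sketch of an ACP(1,1) count model of weekly wildfire ignitions,
      # fitted by maximum likelihood. Data and parameter values are illustrative.
      import numpy as np
      from scipy.optimize import minimize
      from scipy.stats import poisson

      rng = np.random.default_rng(0)

      def simulate_acp(omega, alpha, beta, n):
          """Simulate y_t ~ Poisson(lam_t) with lam_t = omega + alpha*y_{t-1} + beta*lam_{t-1}."""
          y, lam = np.zeros(n, dtype=int), np.zeros(n)
          lam[0] = omega / (1.0 - alpha - beta)   # unconditional mean as a start value
          y[0] = rng.poisson(lam[0])
          for t in range(1, n):
              lam[t] = omega + alpha * y[t - 1] + beta * lam[t - 1]
              y[t] = rng.poisson(lam[t])
          return y

      def neg_loglik(params, y):
          omega, alpha, beta = params
          if omega <= 0 or alpha < 0 or beta < 0 or alpha + beta >= 1:
              return np.inf                        # keep the recursion stationary
          lam = np.empty(len(y))
          lam[0] = omega / (1.0 - alpha - beta)
          for t in range(1, len(y)):
              lam[t] = omega + alpha * y[t - 1] + beta * lam[t - 1]
          return -poisson.logpmf(y, lam).sum()

      y = simulate_acp(omega=0.5, alpha=0.3, beta=0.4, n=400)   # weekly ignition counts
      fit = minimize(neg_loglik, x0=[0.3, 0.2, 0.2], args=(y,), method="Nelder-Mead")
      omega, alpha, beta = fit.x

      # One-week-ahead forecast of the conditional mean, usable to flag "hotspot" weeks.
      lam = np.empty(len(y))
      lam[0] = omega / (1.0 - alpha - beta)
      for t in range(1, len(y)):
          lam[t] = omega + alpha * y[t - 1] + beta * lam[t - 1]
      lam_next = omega + alpha * y[-1] + beta * lam[-1]
      print(f"fitted (omega, alpha, beta) = {fit.x.round(3)}, next-week mean = {lam_next:.2f}")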

  19. A comparative study of artificial neural network, adaptive neuro fuzzy inference system and support vector machine for forecasting river flow in the semiarid mountain region

    NASA Astrophysics Data System (ADS)

    He, Zhibin; Wen, Xiaohu; Liu, Hu; Du, Jun

    2014-02-01

    Data-driven models are very useful for river flow forecasting when the underlying physical relationships are not fully understood, but it is not clear whether these data-driven models still perform well in the small river basins of semiarid mountain regions with complicated topography. In this study, the potential of three different data-driven methods, artificial neural network (ANN), adaptive neuro-fuzzy inference system (ANFIS) and support vector machine (SVM), was assessed for forecasting river flow in the semiarid mountain region of northwestern China. The models analyzed different combinations of antecedent river flow values, and the appropriate input vector was selected based on the analysis of residuals. The performance of the ANN, ANFIS and SVM models in the training and validation sets was compared with the observed data. The model that uses three antecedent values of flow was selected as the best-fit model for river flow forecasting. To obtain a more rigorous evaluation of the results of the ANN, ANFIS and SVM models, four standard quantitative statistical performance measures, the coefficient of correlation (R), root mean squared error (RMSE), Nash-Sutcliffe efficiency coefficient (NS) and mean absolute relative error (MARE), were employed to evaluate the performance of the various models developed. The results indicate that the performance obtained by ANN, ANFIS and SVM in terms of the different evaluation criteria during the training and validation periods does not vary substantially; the performance of the ANN, ANFIS and SVM models in river flow forecasting was satisfactory. A detailed comparison of the overall performance indicated that the SVM model performed better than ANN and ANFIS in river flow forecasting for the validation data sets. The results also suggest that the ANN, ANFIS and SVM methods can be successfully applied to establish river flow forecasting models in semiarid mountain regions with complicated topography.
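
    For readers wanting to reproduce the comparison workflow, the short Python sketch below implements the four evaluation measures named in the abstract (R, RMSE, NS and MARE) on placeholder observed and forecast flow series; the numbers are invented, and only the formulas are intended to be faithful.

      # The four evaluation measures used above, applied to illustrative flow series.
      import numpy as np

      def r_coefficient(obs, sim):
          """Pearson correlation coefficient R."""
          return np.corrcoef(obs, sim)[0, 1]

      def rmse(obs, sim):
          """Root mean squared error."""
          return np.sqrt(np.mean((obs - sim) ** 2))

      def nash_sutcliffe(obs, sim):
          """Nash-Sutcliffe efficiency: 1 is perfect, 0 is no better than the mean of observations."""
          return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

      def mare(obs, sim):
          """Mean absolute relative error (observations must be non-zero)."""
          return np.mean(np.abs((obs - sim) / obs))

      obs = np.array([12.1, 15.3, 9.8, 20.4, 18.7, 14.2])   # observed flows (illustrative)
      sim = np.array([11.5, 16.0, 10.4, 19.1, 17.9, 15.0])  # model forecasts (illustrative)
      for name, f in [("R", r_coefficient), ("RMSE", rmse), ("NS", nash_sutcliffe), ("MARE", mare)]:
          print(f"{name}: {f(obs, sim):.3f}")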

  20. Forecasts of 21st Century Snowpack and Implications for Snowmobile and Snowcoach Use in Yellowstone National Park

    PubMed Central

    Tercek, Michael; Rodman, Ann

    2016-01-01

    Climate models project a general decline in western US snowpack throughout the 21st century, but long-term, spatially fine-grained, management-relevant projections of snowpack are not available for Yellowstone National Park. We focus on the implications that future snow declines may have for oversnow vehicle (snowmobile and snowcoach) use because oversnow tourism is critical to the local economy and has been a contentious issue in the park for more than 30 years. Using temperature-indexed snow melt and accumulation equations with temperature and precipitation data from downscaled global climate models, we forecast the number of days that will be suitable for oversnow travel on each Yellowstone road segment during the mid- and late-21st century. The west entrance road was forecast to be the least suitable for oversnow use in the future while the south entrance road was forecast to remain at near historical levels of driveability. The greatest snow losses were forecast for the west entrance road where as little as 29% of the December–March oversnow season was forecast to be driveable by late century. The climatic conditions that allow oversnow vehicle use in Yellowstone are forecast by our methods to deteriorate significantly in the future. At some point it may be prudent to consider plowing the roads that experience the greatest snow losses. PMID:27467778
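
    A minimal illustration of a temperature-index snow scheme of the general kind mentioned above is sketched below in Python: precipitation accumulates as snow below a temperature threshold and melts at a fixed rate per degree-day above it. The threshold, melt factor and the snow-water-equivalent cutoff for "drivable" days are illustrative assumptions, not the values calibrated for Yellowstone.

      # Toy temperature-index (degree-day) snowpack sketch; thresholds are assumptions.
      import numpy as np

      def simulate_swe(temp_c, precip_mm, t_snow=0.0, t_melt=0.0, melt_factor=3.0):
          """Daily snow water equivalent (mm): accumulate when T <= t_snow,
          melt at melt_factor * (T - t_melt) mm per degree-day when T > t_melt."""
          swe = np.zeros(len(temp_c) + 1)
          for i, (t, p) in enumerate(zip(temp_c, precip_mm)):
              accumulation = p if t <= t_snow else 0.0
              melt = max(melt_factor * (t - t_melt), 0.0)
              swe[i + 1] = max(swe[i] + accumulation - melt, 0.0)
          return swe[1:]

      # Count "oversnow-drivable" days as days with SWE above an assumed threshold.
      temp = np.array([-6.0, -3.0, -1.0, 2.0, 4.0, -2.0, -5.0])   # daily mean temperature, C
      prcp = np.array([10.0, 5.0, 0.0, 3.0, 0.0, 8.0, 12.0])      # daily precipitation, mm
      swe = simulate_swe(temp, prcp)
      drivable = (swe >= 20.0).sum()   # 20 mm SWE cutoff is purely illustrative
      print(swe.round(1), "drivable days:", drivable)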

  1. Legal Challenges to Teacher Evaluation: Pitfalls and Possibilities in the States

    ERIC Educational Resources Information Center

    Hazi, Helen M.

    2014-01-01

    This article forecasts potential legal problems emerging from the use of new teacher evaluation systems in the states. This research was a policy analysis that combined three types of data to forecast the states and the legal challenges they might encounter: state policy data, selected case law, and problems from the literature of teacher…

  2. Evaluation of Clear-Air Turbulence Diagnostics: GTG in Korea

    NASA Astrophysics Data System (ADS)

    Kim, J.-H.; Chun, H.-Y.; Jang, W.; Sharman, R. D.

    2009-04-01

    The turbulence forecasting algorithm of the Graphical Turbulence Guidance (GTG) system, developed at NCAR (Sharman et al., 2006), is evaluated with available turbulence observations (e.g., pilot reports; PIREPs) reported in South Korea during the recent 4 years (2003-2007). Clear-air turbulence (CAT) reports are extracted from the PIREPs by using cloud-to-ground lightning flash data from the Korean Meteorological Administration (KMA). The GTG system includes several steps. First, 45 turbulence indices are calculated in the East Asian region near the Korean peninsula using the Regional Data Assimilation and Prediction System (RDAPS) analysis data with 30 km horizontal grid spacing provided by KMA. Second, the 10 CAT indices with the best forecasting scores are selected. The scoring method is based on the probability of detection, which is calculated using PIREPs of moderate-or-greater intensity exclusively. Various statistical examinations and sensitivity tests of the GTG system are performed with yearly and seasonally classified PIREPs in South Korea. The performance of GTG is more consistent and stable than that of any individual diagnostic in each year and season. In addition, current-year forecasting based on yearly PIREPs is better than adjacent-year and year-after-year forecasting. Seasonal forecasting is generally better than yearly forecasting, because the CAT indices selected for each season represent the meteorological conditions much more properly than applying the same selected indices to all seasons. Wintertime forecasting is the best among the four seasons. This is likely because the GTG system consists of many CAT indices related to the jet stream, and jet-related turbulence is most active in wintertime when the jet is strong. On the other hand, summertime forecasting skill is much lower than in wintertime. To achieve better performance for summertime forecasting, more turbulence indices related to, for example, convection would likely need to be developed. A sensitivity test on the number of combined indices shows that yearly and seasonal GTG performance is best when about 7 CAT indices are combined.

  3. 7 CFR 1710.205 - Minimum approval requirements for all load forecasts.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... electronically to RUS computer software applications. RUS will evaluate borrower load forecasts for readability...'s engineering planning documents, such as the construction work plan, incorporate consumer and usage...

  4. 7 CFR 1710.205 - Minimum approval requirements for all load forecasts.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... electronically to RUS computer software applications. RUS will evaluate borrower load forecasts for readability...'s engineering planning documents, such as the construction work plan, incorporate consumer and usage...

  5. 7 CFR 1710.205 - Minimum approval requirements for all load forecasts.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... computer software applications. RUS will evaluate borrower load forecasts for readability, understanding..., distribution costs, other systems costs, average revenue per kWh, and inflation. Also, a borrower's engineering...

  6. 7 CFR 1710.205 - Minimum approval requirements for all load forecasts.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... electronically to RUS computer software applications. RUS will evaluate borrower load forecasts for readability...'s engineering planning documents, such as the construction work plan, incorporate consumer and usage...

  7. Flash floods in June and July 2009 in the Czech Republic

    NASA Astrophysics Data System (ADS)

    Sercl, Petr; Danhelka, Jan; Tyl, Radovan

    2010-05-01

    Several flash floods occurred in the territory of the Czech Republic during the last ten days of June and the beginning of July 2009. These events caused vast economic damage and, unfortunately, there were also 15 fatalities. A complete evaluation of the flash floods from the point of view of their meteorological causes, hydrological development and impacts was carried out under the responsibility of the Ministry of Environment of the Czech Republic, with the Czech Hydrometeorological Institute (CHMI) coordinating the project. The results of the project contain several concrete proposals to reduce the threat of flash floods in the Czech Republic. The proposals were focused on possible future improvements of CHMI forecasting service activities, including all other parts of the flood prevention and protection system in the Czech Republic. The synoptic cause of the floods was the extraordinarily long (12 days, the longest in more than 60 years of records) presence of an eastern cyclonic situation over Central Europe, bringing warm, moist and unstable air masses from the Mediterranean and Black Sea areas. Very intensive thunderstorms accompanied by torrential rain occurred almost daily. Storm cells were organized in a train effect and crossed the same places repeatedly within several hours. The extremity of the flood events was also influenced by soil saturation due to the daily occurrence of rainstorms. The peak flows significantly exceeded the 100-year recurrence interval at many sites. Both gauged and, mainly, ungauged catchments were affected. Detailed fields of rainfall amounts were obtained from adjusted meteorological radar observations; all available rainfall measurements at climatological and rain gauge stations were used for the adjustment. Hydraulic and rainfall-runoff models were used to evaluate the hydrological response. It was proved again that the outputs from currently used meteorological forecasting models are not sufficient for a reliable local forecast of strong convective storms and their possible consequences, i.e., flash floods. Within the frame of the research project SP/1c4/16/07 "Implementation of new techniques for stream flow forecasting tools" (project period 2007-2011, funded by the Ministry of Environment), a forecasting system for the estimation of runoff response to torrential rainfall has been developed. An automatic update of the CN value based on antecedent precipitation is used to estimate possible runoff from a storm. Ten-minute radar rainfall estimates and COTREC-based nowcasting serve as meteorological input. Results of a hindcast of the 2009 events are presented. The hindcast confirmed the underestimation of rainfall by raw radar data and thus the need for real-time adjustment of radar estimates based on rain gauge data. The main output of the presented forecasting system is an estimate of flash flood risk. Risk estimation is based on exceeding three thresholds defined as ratios between the estimated peak flow and the theoretical 100-year flood in the particular basin. The procedures mentioned above were developed during 2008-2009. Intensive testing by CHMI forecasting offices is expected during 2010-2011.
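
    The risk classification described at the end of the abstract can be written down in a few lines. The Python sketch below compares an estimated peak flow against the theoretical 100-year flood and counts how many thresholds are exceeded; the three threshold ratios and the discharge values are illustrative assumptions, since the operational values are not given here.

      # Sketch of a threshold-based flash flood risk level; thresholds are assumed.
      def flash_flood_risk(q_peak_estimate, q100, thresholds=(0.3, 0.7, 1.0)):
          """Return a risk level 0-3 from the ratio of estimated peak flow to Q100."""
          ratio = q_peak_estimate / q100
          level = sum(ratio >= t for t in thresholds)
          return level, ratio

      level, ratio = flash_flood_risk(q_peak_estimate=85.0, q100=110.0)  # m3/s, illustrative
      print(f"peak/Q100 = {ratio:.2f} -> risk level {level} of 3")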

  8. Integrating Wind Profiling Radars and Radiosonde Observations with Model Point Data to Develop a Decision Support Tool to Assess Upper-level Winds For Space Launch

    NASA Technical Reports Server (NTRS)

    Bauman, William H., III; Flinn, Clay

    2012-01-01

    Launch directors need accurate upper-level wind forecasts. We developed an Excel-based GUI to display upper-level winds from (1) rawinsondes at CCAFS, (2) wind profilers at KSC, and (3) model point data at CCAFS.

  9. Seasonal scale water deficit forecasting in Africa and the Middle East using NASA's Land Information System (LIS)

    NASA Astrophysics Data System (ADS)

    Shukla, Shraddhanand; Arsenault, Kristi R.; Getirana, Augusto; Kumar, Sujay V.; Roningen, Jeanne; Zaitchik, Ben; McNally, Amy; Koster, Randal D.; Peters-Lidard, Christa

    2017-04-01

    Drought and water scarcity are among the important issues facing several regions within Africa and the Middle East. A seamless and effective monitoring and early warning system is needed by regional and national stakeholders. Such a system should support a proactive drought management approach and mitigate socio-economic losses to the extent possible. In this presentation, we report on the ongoing development and validation of a seasonal scale water deficit forecasting system based on NASA's Land Information System (LIS) and seasonal climate forecasts. First, our presentation will focus on the implementation and validation of the LIS models used for drought and water availability monitoring in the region. The second part will focus on evaluating drought and water availability forecasts. Finally, details will be provided of our ongoing collaboration with end-user partners in the region (e.g., USAID's Famine Early Warning Systems Network, FEWS NET) on formulating meaningful early warning indicators, effective communication and seamless dissemination of the monitoring and forecasting products through NASA's web services. The water deficit forecasting system thus far incorporates NOAA's Noah land surface model (LSM), version 3.3, the Variable Infiltration Capacity (VIC) model, version 4.12, NASA GMAO's Catchment LSM, and the Noah Multi-Physics (MP) LSM (the latter two incorporate prognostic water table schemes). In addition, the LSMs' surface and subsurface runoff are routed through the Hydrological Modeling and Analysis Platform (HyMAP) to simulate surface water dynamics. The LSMs are driven by NASA/GMAO's Modern-Era Retrospective analysis for Research and Applications, Version 2 (MERRA-2), and the USGS and UCSB Climate Hazards Group InfraRed Precipitation with Station (CHIRPS) daily rainfall dataset. The LIS software framework integrates these forcing datasets and drives the four LSMs and HyMAP. The Land Verification Toolkit (LVT) is used for the evaluation of the LSMs, as it provides model ensemble metrics and the ability to compare against a variety of remotely sensed measurements, such as different evapotranspiration (ET) and soil moisture products, and other reanalysis datasets available for this region. Comparisons of the models' energy and hydrological budgets will be shown for this region (and at the sub-basin level, e.g., the Blue Nile River) and time period (1981-2015), along with evaluations of ET, streamflow, groundwater storage and soil moisture using metrics such as anomaly correlation and RMSE. The system uses seasonal climate forecasts from NASA's GMAO (the Goddard Earth Observing System Model, version 5) and NCEP's Climate Forecast System, version 2, and it produces forecasts of soil moisture, ET and streamflow out to 6 months in the future. Forecasts of those variables are formulated in terms of indicators to provide forecasts of drought and water availability in the region.

  10. Serving Real-Time Point Observation Data in netCDF using Climate and Forecasting Discrete Sampling Geometry Conventions

    NASA Astrophysics Data System (ADS)

    Ward-Garrison, C.; May, R.; Davis, E.; Arms, S. C.

    2016-12-01

    NetCDF is a set of software libraries and self-describing, machine-independent data formats that support the creation, access, and sharing of array-oriented scientific data. The Climate and Forecasting (CF) metadata conventions for netCDF foster the ability to work with netCDF files in general and useful ways. These conventions include metadata attributes for physical units, standard names, and spatial coordinate systems. While these conventions have been successful in easing the use of working with netCDF-formatted output from climate and forecast models, their use for point-based observation data has been less so. Unidata has prototyped using the discrete sampling geometry (DSG) CF conventions to serve, using the THREDDS Data Server, the real-time point observation data flowing across the Internet Data Distribution (IDD). These data originate in text format reports for individual stations (e.g. METAR surface data or TEMP upper air data) and are converted and stored in netCDF files in real-time. This work discusses the experiences and challenges of using the current CF DSG conventions for storing such real-time data. We also test how parts of netCDF's extended data model can address these challenges, in order to inform decisions for a future version of CF (CF 2.0) that would take advantage of features of the netCDF enhanced data model.
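
    As a rough illustration of the CF discrete sampling geometry conventions discussed above, the Python sketch below writes a small "timeSeries" feature-type file with the netCDF4 library, using an indexed ragged array to attach observations to stations. The variable names, station identifiers and values are invented for the example and do not reproduce Unidata's actual file layout.

      # Minimal sketch of a CF-DSG "timeSeries" point-observation file (netCDF4-python).
      from netCDF4 import Dataset

      with Dataset("metar_demo.nc", "w", format="NETCDF4") as nc:
          nc.Conventions = "CF-1.7"
          nc.featureType = "timeSeries"

          nc.createDimension("station", 2)
          nc.createDimension("obs", None)          # unlimited, so records can be appended

          sid = nc.createVariable("station_id", str, ("station",))
          sid.cf_role = "timeseries_id"
          lat = nc.createVariable("lat", "f4", ("station",))
          lat.units = "degrees_north"
          lon = nc.createVariable("lon", "f4", ("station",))
          lon.units = "degrees_east"

          time = nc.createVariable("time", "f8", ("obs",))
          time.units = "hours since 2016-01-01 00:00:00"
          index = nc.createVariable("station_index", "i4", ("obs",))
          index.instance_dimension = "station"     # ragged-array lookup back to the station

          temp = nc.createVariable("air_temperature", "f4", ("obs",))
          temp.units = "K"
          temp.coordinates = "time lat lon"

          sid[0], sid[1] = "KBOU", "KDEN"          # hypothetical station identifiers
          lat[:], lon[:] = [40.0, 39.8], [-105.2, -104.9]
          time[:] = [0.0, 0.0, 1.0]
          index[:] = [0, 1, 0]
          temp[:] = [271.3, 270.1, 271.8]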

  12. Research on Nonlinear Time Series Forecasting of Time-Delay NN Embedded with Bayesian Regularization

    NASA Astrophysics Data System (ADS)

    Jiang, Weijin; Xu, Yusheng; Xu, Yuhui; Wang, Jianmin

    Based on the idea of nonlinear prediction via phase space reconstruction, this paper presents a time-delay BP neural network model whose generalization capability is improved by Bayesian regularization. The model is then applied to forecast import and export trade in one industry. The results showed that the improved model has excellent generalization capability: it not only learned the historical curve but also efficiently predicted the trend of the business. Comparing with common forecast evaluations, we conclude that nonlinear forecasting should not only focus on data combination and precision improvement; it can also vividly reflect the nonlinear characteristics of the forecasting system. While analyzing the forecasting precision of the model, we judge the model by calculating the nonlinear characteristic values of the combined series and the original series, and show that the forecasting model can reasonably capture the dynamic characteristics of the nonlinear system that produced the original series.

  13. Forecasting Dust Storms Using the CARMA-Dust Model and MM5 Weather Data

    NASA Astrophysics Data System (ADS)

    Barnum, B. H.; Winstead, N. S.; Wesely, J.; Hakola, A.; Colarco, P.; Toon, O. B.; Ginoux, P.; Brooks, G.; Hasselbarth, L. M.; Toth, B.; Sterner, R.

    2002-12-01

    An operational model for the forecast of dust storms in Northern Africa, the Middle East and Southwest Asia has been developed for the United States Air Force Weather Agency (AFWA). The dust forecast model uses the 5th generation Penn State Mesoscale Meteorology Model (MM5) and a modified version of the Colorado Aerosol and Radiation Model for Atmospheres (CARMA). AFWA conducted a 60-day evaluation of the dust model to look at the model's ability to forecast dust storms for short, medium and long range (72 hour) forecast periods. The study used satellite and ground observations of dust storms to verify the model's effectiveness. Each of the main mesoscale forecast theaters was broken down into smaller sub-regions for detailed analysis. The study found the forecast model was able to forecast dust storms in Saharan Africa and the Sahel region with an average Probability of Detection (POD) exceeding 68%, with a 16% False Alarm Rate (FAR). The Southwest Asian theater had average PODs of 61% with FARs averaging 10%.
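
    The two scores quoted above come from a simple yes/no contingency table of forecast versus observed dust-storm occurrences. The Python sketch below computes the probability of detection together with both common "false alarm" definitions (the ratio relative to forecasts and the rate relative to non-events), since the abstract does not state which convention was used; the counts are made up.

      # Contingency-table verification scores on illustrative counts.
      def contingency_scores(hits, misses, false_alarms, correct_negatives):
          pod = hits / (hits + misses)                               # probability of detection
          far_ratio = false_alarms / (hits + false_alarms)           # false alarm ratio
          pofd = false_alarms / (false_alarms + correct_negatives)   # false alarm rate (POFD)
          return pod, far_ratio, pofd

      pod, far_ratio, pofd = contingency_scores(hits=68, misses=32, false_alarms=13,
                                                correct_negatives=200)
      print(f"POD = {pod:.2f}, false alarm ratio = {far_ratio:.2f}, false alarm rate = {pofd:.3f}")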

  14. Performance evaluation of ionospheric time delay forecasting models using GPS observations at a low-latitude station

    NASA Astrophysics Data System (ADS)

    Sivavaraprasad, G.; Venkata Ratnam, D.

    2017-07-01

    Ionospheric delay is one of the major atmospheric effects on the performance of satellite-based radio navigation systems. It limits the accuracy and availability of Global Positioning System (GPS) measurements related to critical societal and safety applications. The temporal and spatial gradients of ionospheric total electron content (TEC) are driven by several a priori unknown geophysical conditions and solar-terrestrial phenomena. The prediction of ionospheric delay is therefore challenging, especially over the Indian subcontinent, and an appropriate short/long-term ionospheric delay forecasting model is necessary. Hence, the intent of this paper is to forecast ionospheric delays by considering day-to-day, monthly and seasonal ionospheric TEC variations. GPS-TEC data (January 2013-December 2013) are extracted from a multi-frequency GPS receiver established at K L University, Vaddeswaram, Guntur station (geographic: 16.37°N, 80.37°E; geomagnetic: 7.44°N, 153.75°E), India. An evaluation, in terms of forecasting capability, of three ionospheric time delay models, an Auto Regressive Moving Average (ARMA) model, an Auto Regressive Integrated Moving Average (ARIMA) model and a Holt-Winters model, is presented. The performance of these models is evaluated through error measurement analysis during both geomagnetically quiet and disturbed days. It is found that the ARMA model forecasts the ionospheric delay effectively with an accuracy of 82-94%, about 10% better than the ARIMA and Holt-Winters models. Moreover, the modeled VTEC derived from the International Reference Ionosphere (IRI-2012) model and the new global TEC model, the Neustrelitz TEC Model (NTCM-GL), are compared with the forecasted VTEC values of the ARMA, ARIMA and Holt-Winters models during geomagnetically quiet days. The forecast results indicate that the ARMA model would be useful for setting up an early warning system for ionospheric disturbances in low-latitude regions.
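
    To make the model comparison concrete, the Python sketch below fits an ARMA model to a synthetic hourly VTEC series with statsmodels and issues a 24-hour-ahead forecast scored by MAPE. The ARMA(2,1) order and the synthetic diurnal series are assumptions for illustration only and are not the orders or data used in the study.

      # ARMA forecast of a synthetic hourly VTEC series (illustrative order and data).
      import numpy as np
      from statsmodels.tsa.arima.model import ARIMA

      rng = np.random.default_rng(1)
      hours = np.arange(24 * 30)                                   # 30 days of hourly samples
      diurnal = 20 + 10 * np.sin(2 * np.pi * (hours % 24) / 24.0)  # crude diurnal TEC cycle
      vtec = diurnal + rng.normal(0, 2, hours.size)                # TECU, synthetic

      train, test = vtec[:-24], vtec[-24:]
      res = ARIMA(train, order=(2, 0, 1)).fit()                    # ARMA(2,1) == ARIMA(2,0,1)
      forecast = res.forecast(steps=24)

      mape = np.mean(np.abs((test - forecast) / test)) * 100
      print(f"24-h ahead forecast, MAPE = {mape:.1f}%  (accuracy ~ {100 - mape:.1f}%)")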

  15. Evaluating the Performance of Single and Double Moment Microphysics Schemes During a Synoptic-Scale Snowfall Event

    NASA Technical Reports Server (NTRS)

    Molthan, Andrew L.

    2011-01-01

    Increases in computing resources have allowed for the utilization of high-resolution weather forecast models capable of resolving cloud microphysical and precipitation processes among varying numbers of hydrometeor categories. Several microphysics schemes are currently available within the Weather Research and Forecasting (WRF) model, ranging from single-moment predictions of precipitation content to double-moment predictions that include a prediction of particle number concentrations. Each scheme incorporates several assumptions related to the size distribution, shape, and fall speed relationships of ice crystals in order to simulate cold-cloud processes and the resulting precipitation. Field campaign data offer a means of evaluating the assumptions present within each scheme. The Canadian CloudSat/CALIPSO Validation Project (C3VP) represented a collaboration among the CloudSat, CALIPSO, and NASA Global Precipitation Measurement mission communities to observe cold season precipitation processes relevant to forecast model evaluation and the eventual development of satellite retrievals of cloud properties and precipitation rates. During the C3VP campaign, widespread snowfall occurred on 22 January 2007, sampled by aircraft and surface instrumentation that provided particle size distributions, ice water content, and fall speed estimates along with traditional surface measurements of temperature and precipitation. In this study, four single-moment and two double-moment microphysics schemes were utilized to generate hypothetical WRF forecasts of the event, with C3VP data used to evaluate their varying assumptions. Schemes that incorporate flexibility in size distribution parameters and density assumptions are shown to be preferable to those with fixed constants, and a double-moment representation of the snow category may be beneficial when representing the effects of aggregation. These results may guide forecast centers in optimal configurations of their forecast models for winter weather and identify best practices present within these various schemes.

  16. Interactive Vegetation Phenology, Soil Moisture, and Monthly Temperature Forecasts

    NASA Technical Reports Server (NTRS)

    Koster, R. D.; Walker, G. K.

    2015-01-01

    The time scales that characterize the variations of vegetation phenology are generally much longer than those that characterize atmospheric processes. The explicit modeling of phenological processes in an atmospheric forecast system thus has the potential to provide skill to subseasonal or seasonal forecasts. We examine this possibility here using a forecast system fitted with a dynamic vegetation phenology model. We perform three experiments, each consisting of 128 independent warm-season monthly forecasts: 1) an experiment in which both soil moisture states and carbon states (e.g., those determining leaf area index) are initialized realistically, 2) an experiment in which the carbon states are prescribed to climatology throughout the forecasts, and 3) an experiment in which both the carbon and soil moisture states are prescribed to climatology throughout the forecasts. Evaluating the monthly forecasts of air temperature in each ensemble against observations, as well as quantifying the inherent predictability of temperature within each ensemble, shows that dynamic phenology can indeed contribute positively to subseasonal forecasts, though only to a small extent, with an impact dwarfed by that of soil moisture.

  17. Evaluation of medium-range ensemble flood forecasting based on calibration strategies and ensemble methods in Lanjiang Basin, Southeast China

    NASA Astrophysics Data System (ADS)

    Liu, Li; Gao, Chao; Xuan, Weidong; Xu, Yue-Ping

    2017-11-01

    Ensemble flood forecasts by hydrological models using numerical weather prediction products as forcing data are becoming more commonly used in operational flood forecasting applications. In this study, a hydrological ensemble flood forecasting system comprising an automatically calibrated Variable Infiltration Capacity model and quantitative precipitation forecasts from the TIGGE dataset is constructed for the Lanjiang Basin, Southeast China. The impacts of calibration strategies and ensemble methods on the performance of the system are then evaluated. The hydrological model is optimized by the parallel-programmed ε-NSGA II multi-objective algorithm. According to the solutions from ε-NSGA II, two differently parameterized models are determined to simulate daily flows and peak flows at each of the three hydrological stations. A simple yet effective modular approach is then proposed to combine the daily and peak flows at the same station into one composite series. Five ensemble methods and various evaluation metrics are adopted. The results show that ε-NSGA II can provide an objective determination of the parameter estimates, and the parallel program permits a more efficient simulation. It is also demonstrated that the forecasts from ECMWF have more favorable skill scores than the other Ensemble Prediction Systems. The multimodel ensembles have advantages over all the single-model ensembles, and the multimodel methods weighted on members and skill scores outperform the other methods. Furthermore, the overall performance at the three stations can be satisfactory up to ten days; however, the hydrological errors can degrade the skill score by approximately 2 days, and the influence persists until a lead time of 10 days with a weakening trend. With respect to peak flows selected by the Peaks Over Threshold approach, the ensemble means from single models or multimodels are generally underestimated, indicating that while the ensemble mean can bring overall improvement in the forecasting of flows, taking the flood forecasts from each individual member into account is more appropriate for peak values.

  18. Development and Implementation of Dynamic Scripts to Support Local Model Verification at National Weather Service Weather Forecast Offices

    NASA Technical Reports Server (NTRS)

    Zavordsky, Bradley; Case, Jonathan L.; Gotway, John H.; White, Kristopher; Medlin, Jeffrey; Wood, Lance; Radell, Dave

    2014-01-01

    Local modeling with a customized configuration is conducted at National Weather Service (NWS) Weather Forecast Offices (WFOs) to produce high-resolution numerical forecasts that can better simulate local weather phenomena and complement larger scale global and regional models. The advent of the Environmental Modeling System (EMS), which provides a pre-compiled version of the Weather Research and Forecasting (WRF) model and wrapper Perl scripts, has enabled forecasters to easily configure and execute the WRF model on local workstations. NWS WFOs often use EMS output to help in forecasting highly localized, mesoscale features such as convective initiation, the timing and inland extent of lake effect snow bands, lake and sea breezes, and topographically-modified winds. However, quantitatively evaluating model performance to determine errors and biases still proves to be one of the challenges in running a local model. Developed at the National Center for Atmospheric Research (NCAR), the Model Evaluation Tools (MET) verification software makes performing these types of quantitative analyses easier, but operational forecasters do not generally have time to familiarize themselves with navigating the sometimes complex configurations associated with the MET tools. To assist forecasters in running a subset of MET programs and capabilities, the Short-term Prediction Research and Transition (SPoRT) Center has developed and transitioned a set of dynamic, easily configurable Perl scripts to collaborating NWS WFOs. The objective of these scripts is to provide SPoRT collaborating partners in the NWS with the ability to evaluate the skill of their local EMS model runs in near real time with little prior knowledge of the MET package. The ultimate goal is to make these verification scripts available to the broader NWS community in a future version of the EMS software. This paper provides an overview of the SPoRT MET scripts, instructions for how the scripts are run, and example use cases.

  19. Wind Information Uplink to Aircraft Performing Interval Management Operations

    NASA Technical Reports Server (NTRS)

    Ahmad, Nashat; Barmore, Bryan; Swieringa, Kurt

    2015-01-01

    The accuracy of the wind information used to generate trajectories for aircraft performing Interval Management (IM) operations is critical to the success of an IM operation. There are two main forms of uncertainty in the wind information used by the Flight Deck Interval Management (FIM) equipment. The first is the accuracy of the forecast modeling done by the weather provider. The second is that only a small subset of the forecast data can be uplinked to the aircraft for use by the FIM equipment, resulting in a loss of additional information. This study focuses on what subset of forecast data, such as the number and location of the points where the wind is sampled, should be made available to uplink to the aircraft.

  20. Forecasting Occurrences of Activities.

    PubMed

    Minor, Bryan; Cook, Diane J

    2017-07-01

    While activity recognition has been shown to be valuable for pervasive computing applications, less work has focused on techniques for forecasting the future occurrence of activities. We present an activity forecasting method to predict the time that will elapse until a target activity occurs. This method generates an activity forecast using a regression tree classifier and offers an advantage over sequence prediction methods in that it can predict expected time until an activity occurs. We evaluate this algorithm on real-world smart home datasets and provide evidence that our proposed approach is most effective at predicting activity timings.
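
    A toy version of the idea, predicting the minutes remaining until a target activity from a few sensor-derived features with a regression tree, is sketched below in Python using scikit-learn. The features, the synthetic labels and the tree depth are invented for illustration and do not reflect the authors' smart-home feature set.

      # Regression tree predicting time-until-activity from synthetic sensor features.
      import numpy as np
      from sklearn.tree import DecisionTreeRegressor
      from sklearn.model_selection import train_test_split
      from sklearn.metrics import mean_absolute_error

      rng = np.random.default_rng(3)
      n = 500
      hour = rng.uniform(0, 24, n)                       # time of day
      since_last = rng.uniform(0, 300, n)                # minutes since last occurrence
      motion_rate = rng.uniform(0, 1, n)                 # recent motion-sensor activity
      X = np.column_stack([hour, since_last, motion_rate])
      # Synthetic ground truth: the target activity tends to happen near hour 18.
      y = np.clip((18 - hour) % 24 * 60 + rng.normal(0, 30, n), 0, None)

      X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
      tree = DecisionTreeRegressor(max_depth=5, random_state=0).fit(X_tr, y_tr)
      pred = tree.predict(X_te)
      print(f"MAE = {mean_absolute_error(y_te, pred):.1f} minutes until activity")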

  1. Working papers: applicability of Box Jenkins techniques to gasoline consumption forecasting

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    Reliable consumption forecasts are needed; however, traditional linear time-series techniques do not adequately account for an environment so subject to change. This report evaluates the use of Box-Jenkins techniques for gasoline consumption forecasting. Box-Jenkins methods were applied to data obtained from the Colorado Petroleum Association and the Colorado Highway Users Fund to "predict" 1978 and 1979 consumption. The results show the Box-Jenkins techniques to be quite effective. Forecasts for 1980-81 are included, along with suggestions for continuous use of the technique to monitor consumption.

  2. Forecasting Jakarta composite index (IHSG) based on chen fuzzy time series and firefly clustering algorithm

    NASA Astrophysics Data System (ADS)

    Ningrum, R. W.; Surarso, B.; Farikhin; Safarudin, Y. M.

    2018-03-01

    This paper proposes a combination of the Firefly Algorithm (FA) and Chen fuzzy time series forecasting. Most existing fuzzy forecasting methods based on fuzzy time series use a static interval length. We therefore apply an artificial intelligence technique, the Firefly Algorithm (FA), to set a non-stationary interval length for each cluster in Chen's method. The method is evaluated by applying it to the Jakarta Composite Index (IHSG) and comparing it with classical Chen fuzzy time series forecasting. Its performance is verified through simulation using Matlab.

  3. Advances in Landslide Hazard Forecasting: Evaluation of Global and Regional Modeling Approach

    NASA Technical Reports Server (NTRS)

    Kirschbaum, Dalia B.; Adler, Robert; Hone, Yang; Kumar, Sujay; Peters-Lidard, Christa; Lerner-Lam, Arthur

    2010-01-01

    A prototype global satellite-based landslide hazard algorithm has been developed to identify areas that exhibit a high potential for landslide activity by combining a calculation of landslide susceptibility with satellite-derived rainfall estimates. A recent evaluation of this algorithm framework found that while this tool represents an important first step in larger-scale landslide forecasting efforts, it requires several modifications before it can be fully realized as an operational tool. The evaluation finds that landslide forecasting may be more feasible at a regional scale. This study draws upon a prior work's recommendations to develop a new approach for considering landslide susceptibility and forecasting at the regional scale. This case study uses a database of landslides triggered by Hurricane Mitch in 1998 over four countries in Central America: Guatemala, Honduras, El Salvador and Nicaragua. A regional susceptibility map is calculated from satellite and surface datasets using a statistical methodology. The susceptibility map is tested with a regional rainfall intensity-duration triggering relationship and the results are compared to the global algorithm framework for the Hurricane Mitch event. The statistical results suggest that this regional investigation provides one plausible way to approach some of the data and resolution issues identified in the global assessment, providing more realistic landslide forecasts for this case study. Evaluation of landslide hazards for this extreme event helps to identify several potential improvements of the algorithm framework, but also highlights several remaining challenges for the algorithm assessment, transferability and performance accuracy. Evaluation challenges include representation errors from comparing susceptibility maps of different spatial resolutions, biases in event-based landslide inventory data, and limited nonlandslide event data for more comprehensive evaluation. Additional factors that may improve algorithm performance accuracy include incorporating additional triggering factors such as tectonic activity, anthropogenic impacts and soil moisture into the algorithm calculation. Despite these limitations, the methodology presented in this regional evaluation is both straightforward to calculate and easy to interpret, making results transferable between regions and allowing findings to be placed within an inter-comparison framework. The regional algorithm scenario represents an important step in advancing regional and global-scale landslide hazard assessment and forecasting.

  4. Analyses and forecasts of a tornadic supercell outbreak using a 3DVAR system ensemble

    NASA Astrophysics Data System (ADS)

    Zhuang, Zhaorong; Yussouf, Nusrat; Gao, Jidong

    2016-05-01

    As part of NOAA's "Warn-On-Forecast" initiative, a convective-scale data assimilation and prediction system was developed using the WRF-ARW model and ARPS 3DVAR data assimilation technique. The system was then evaluated using retrospective short-range ensemble analyses and probabilistic forecasts of the tornadic supercell outbreak event that occurred on 24 May 2011 in Oklahoma, USA. A 36-member multi-physics ensemble system provided the initial and boundary conditions for a 3-km convective-scale ensemble system. Radial velocity and reflectivity observations from four WSR-88Ds were assimilated into the ensemble using the ARPS 3DVAR technique. Five data assimilation and forecast experiments were conducted to evaluate the sensitivity of the system to data assimilation frequencies, in-cloud temperature adjustment schemes, and fixed- and mixed-microphysics ensembles. The results indicated that the experiment with 5-min assimilation frequency quickly built up the storm and produced a more accurate analysis compared with the 10-min assimilation frequency experiment. The predicted vertical vorticity from the moist-adiabatic in-cloud temperature adjustment scheme was larger in magnitude than that from the latent heat scheme. Cycled data assimilation yielded good forecasts, where the ensemble probability of high vertical vorticity matched reasonably well with the observed tornado damage path. Overall, the results of the study suggest that the 3DVAR analysis and forecast system can provide reasonable forecasts of tornadic supercell storms.

  5. Meta-heuristic CRPS minimization for the calibration of short-range probabilistic forecasts

    NASA Astrophysics Data System (ADS)

    Mohammadi, Seyedeh Atefeh; Rahmani, Morteza; Azadi, Majid

    2016-08-01

    This paper deals with probabilistic short-range temperature forecasts over synoptic meteorological stations across Iran using non-homogeneous Gaussian regression (NGR). NGR creates a Gaussian forecast probability density function (PDF) from the ensemble output. The mean of the normal predictive PDF is a bias-corrected weighted average of the ensemble members, and its variance is a linear function of the raw ensemble variance. The coefficients for the mean and variance are estimated by minimizing the continuous ranked probability score (CRPS) during a training period. The CRPS is a scoring rule for distributional forecasts. In Gneiting et al. (Mon Weather Rev 133:1098-1118, 2005), the Broyden-Fletcher-Goldfarb-Shanno (BFGS) method is used to minimize the CRPS. Since BFGS is a conventional optimization method with its own limitations, we suggest using particle swarm optimization (PSO), a robust meta-heuristic method, to minimize the CRPS. The ensemble prediction system used in this study consists of nine different configurations of the Weather Research and Forecasting model for 48-h forecasts of temperature during autumn and winter 2011 and 2012. The probabilistic forecasts were evaluated using several common verification scores, including the Brier score, attribute diagram and rank histogram. Results show that both BFGS and PSO find the optimal solution and show the same evaluation scores, but PSO can do this with a feasible random first guess and much less computational complexity.
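
    The objective being minimized has a closed form for a Gaussian predictive distribution, which makes the NGR calibration easy to sketch. The Python example below estimates the four NGR coefficients by minimizing the mean CRPS of synthetic ensemble temperature forecasts with scipy's BFGS; the data, the positivity trick for the variance and the starting values are illustrative assumptions, and the authors' PSO variant is not reproduced.

      # NGR calibration by minimizing the closed-form Gaussian CRPS (illustrative data).
      import numpy as np
      from scipy.optimize import minimize
      from scipy.stats import norm

      def crps_normal(mu, sigma, y):
          """Closed-form CRPS of N(mu, sigma^2) against observation y."""
          z = (y - mu) / sigma
          return sigma * (z * (2 * norm.cdf(z) - 1) + 2 * norm.pdf(z) - 1 / np.sqrt(np.pi))

      def mean_crps(params, ens_mean, ens_var, obs):
          a, b, c, d = params
          mu = a + b * ens_mean
          sigma = np.sqrt(np.abs(c) + np.abs(d) * ens_var)   # keep the variance positive
          return crps_normal(mu, sigma, obs).mean()

      # Synthetic training set: a biased, under-dispersive 9-member temperature ensemble.
      rng = np.random.default_rng(7)
      truth = rng.normal(15, 5, 300)
      ens = truth[:, None] + 1.5 + rng.normal(0, 1.0, (300, 9))   # +1.5 K bias
      ens_mean, ens_var = ens.mean(axis=1), ens.var(axis=1)

      fit = minimize(mean_crps, x0=[0.0, 1.0, 1.0, 1.0],
                     args=(ens_mean, ens_var, truth), method="BFGS")
      print("NGR coefficients (a, b, c, d):", fit.x.round(2), " mean CRPS:", round(fit.fun, 3))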

  6. A comparative analysis of errors in long-term econometric forecasts

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tepel, R.

    1986-04-01

    The growing body of literature that documents forecast accuracy falls generally into two parts. The first is prescriptive and is carried out by modelers who use simulation analysis as a tool for model improvement. These studies are ex post, that is, they make use of known values for exogenous variables and generate an error measure wholly attributable to the model. The second type of analysis is descriptive and seeks to measure errors, identify patterns among errors and variables, and compare forecasts from different sources. Most descriptive studies use an ex ante approach, that is, they evaluate model outputs based on estimated (or forecasted) exogenous variables. In this case, it is the forecasting process, rather than the model, that is under scrutiny. This paper uses an ex ante approach to measure errors in forecast series prepared by Data Resources Incorporated (DRI), Wharton Econometric Forecasting Associates (Wharton), and Chase Econometrics (Chase) and to determine if systematic patterns of errors can be discerned between services, types of variables (by degree of aggregation), length of forecast and time at which the forecast is made. Errors are measured as the percent difference between actual and forecasted values for the historical period of 1971 to 1983.

  7. Forecasting Container Throughput at the Doraleh Port in Djibouti through Time Series Analysis

    NASA Astrophysics Data System (ADS)

    Mohamed Ismael, Hawa; Vandyck, George Kobina

    The Doraleh Container Terminal (DCT) located in Djibouti has been noted as the most technologically advanced container terminal on the African continent. DCT's strategic location at the crossroads of the main shipping lanes connecting Asia, Africa and Europe puts it in a unique position to provide important shipping services to vessels plying that route. This paper aims to forecast container throughput through the Doraleh Container Port in Djibouti by time series analysis. A selection of univariate forecasting models was used, namely the Triple Exponential Smoothing Model, the Grey Model and the Linear Regression Model. By utilizing these three models and their combination, forecasts of container throughput through the Doraleh port were produced. A comparison of the forecasting results of the three models and of the combination forecast was then undertaken, based on the commonly used evaluation criteria Mean Absolute Deviation (MAD) and Mean Absolute Percentage Error (MAPE). The study found that the Linear Regression Model was the best prediction method for forecasting container throughput, since its forecast error was the smallest. Based on the regression model, a ten (10) year forecast for container throughput at DCT has been made.
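
    The model-comparison workflow, fitting candidate models on a training span and ranking them by MAD and MAPE on a hold-out span, is sketched below in Python with a synthetic monthly throughput series; the series, the additive Holt-Winters configuration and the hold-out length are assumptions for illustration, not the DCT data or the paper's settings.

      # Compare triple exponential smoothing and a linear trend regression by MAD/MAPE.
      import numpy as np
      from statsmodels.tsa.holtwinters import ExponentialSmoothing

      rng = np.random.default_rng(5)
      months = np.arange(60)
      teu = 200 + 2.5 * months + 15 * np.sin(2 * np.pi * months / 12) + rng.normal(0, 5, 60)
      train, test = teu[:-12], teu[-12:]          # hold out the last year ('000 TEU, synthetic)

      # Triple exponential smoothing: additive trend and additive annual seasonality.
      hw = ExponentialSmoothing(train, trend="add", seasonal="add", seasonal_periods=12).fit()
      hw_fc = hw.forecast(12)

      # Linear regression on the time index.
      t = np.arange(len(train))
      slope, intercept = np.polyfit(t, train, 1)
      lr_fc = intercept + slope * np.arange(len(train), len(train) + 12)

      def mad(obs, fc):  return np.mean(np.abs(obs - fc))
      def mape(obs, fc): return np.mean(np.abs((obs - fc) / obs)) * 100

      for name, fc in [("Holt-Winters", hw_fc), ("Linear regression", lr_fc)]:
          print(f"{name:18s} MAD = {mad(test, fc):6.1f}  MAPE = {mape(test, fc):5.2f}%")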

  8. Evaluation of a wildfire smoke forecasting system as a tool for public health protection.

    PubMed

    Yao, Jiayun; Brauer, Michael; Henderson, Sarah B

    2013-10-01

    Exposure to wildfire smoke has been associated with cardiopulmonary health impacts. Climate change will increase the severity and frequency of smoke events, suggesting a need for enhanced public health protection. Forecasts of smoke exposure can facilitate public health responses. We evaluated the utility of a wildfire smoke forecasting system (BlueSky) for public health protection by comparing its forecasts with observations and assessing their associations with population-level indicators of respiratory health in British Columbia, Canada. We compared BlueSky PM2.5 forecasts with PM2.5 measurements from air quality monitors, and BlueSky smoke plume forecasts with plume tracings from National Oceanic and Atmospheric Administration Hazard Mapping System remote sensing data. Daily counts of the asthma drug salbutamol sulfate dispensations and asthma-related physician visits were aggregated for each geographic local health area (LHA). Daily continuous measures of PM2.5 and binary measures of smoke plume presence, either forecasted or observed, were assigned to each LHA. Poisson regression was used to estimate the association between exposure measures and health indicators. We found modest agreement between forecasts and observations, which was improved during intense fire periods. A 30-μg/m3 increase in BlueSky PM2.5 was associated with an 8% increase in salbutamol dispensations and a 5% increase in asthma-related physician visits. BlueSky plume coverage was associated with 5% and 6% increases in the two health indicators, respectively. The effects were similar for observed smoke, and generally stronger in very smoky areas. BlueSky forecasts showed modest agreement with retrospective measures of smoke and were predictive of respiratory health indicators, suggesting they can provide useful information for public health protection.
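
    The exposure-response estimate reported above (a percentage increase in dispensations per increment of forecast PM2.5) corresponds to a Poisson regression on daily counts. The sketch below fits such a model with statsmodels on simulated data and converts the coefficient to a percent increase per 30 μg/m3; the simulated counts and effect size are placeholders, not the British Columbia records.

      # Poisson regression of daily dispensation counts on forecast PM2.5 (simulated data).
      import numpy as np
      import pandas as pd
      import statsmodels.api as sm
      import statsmodels.formula.api as smf

      rng = np.random.default_rng(11)
      n = 365
      pm25 = rng.gamma(2.0, 6.0, n)                              # forecast PM2.5, ug/m3
      lam = np.exp(np.log(40) + 0.0026 * pm25)                   # built-in effect of ~8% per 30 ug/m3
      df = pd.DataFrame({"dispensations": rng.poisson(lam), "pm25": pm25})

      model = smf.glm("dispensations ~ pm25", data=df, family=sm.families.Poisson()).fit()
      beta = model.params["pm25"]
      print(f"estimated increase per 30 ug/m3: {(np.exp(30 * beta) - 1) * 100:.1f}%")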

  9. Nonmechanistic forecasts of seasonal influenza with iterative one-week-ahead distributions.

    PubMed

    Brooks, Logan C; Farrow, David C; Hyun, Sangwon; Tibshirani, Ryan J; Rosenfeld, Roni

    2018-06-15

    Accurate and reliable forecasts of seasonal epidemics of infectious disease can assist in the design of countermeasures and increase public awareness and preparedness. This article describes two main contributions we made recently toward this goal: a novel approach to probabilistic modeling of surveillance time series based on "delta densities", and an optimization scheme for combining output from multiple forecasting methods into an adaptively weighted ensemble. Delta densities describe the probability distribution of the change between one observation and the next, conditioned on available data; chaining together nonparametric estimates of these distributions yields a model for an entire trajectory. Corresponding distributional forecasts cover more observed events than alternatives that treat the whole season as a unit, and improve upon multiple evaluation metrics when extracting key targets of interest to public health officials. Adaptively weighted ensembles integrate the results of multiple forecasting methods, such as delta density, using weights that can change from situation to situation. We treat selection of optimal weightings across forecasting methods as a separate estimation task, and describe an estimation procedure based on optimizing cross-validation performance. We consider some details of the data generation process, including data revisions and holiday effects, both in the construction of these forecasting methods and when performing retrospective evaluation. The delta density method and an adaptively weighted ensemble of other forecasting methods each improve significantly on the next best ensemble component when applied separately, and achieve even better cross-validated performance when used in conjunction. We submitted real-time forecasts based on these contributions as part of CDC's 2015/2016 FluSight Collaborative Comparison. Among the fourteen submissions that season, this system was ranked by CDC as the most accurate.

  10. Long-range seasonal streamflow forecasting over the Iberian Peninsula using large-scale atmospheric and oceanic information

    NASA Astrophysics Data System (ADS)

    Hidalgo-Muñoz, J. M.; Gámiz-Fortis, S. R.; Castro-Díez, Y.; Argüeso, D.; Esteban-Parra, M. J.

    2015-05-01

    Identifying the relationship between large-scale climate signals and seasonal streamflow may provide a valuable tool for long-range seasonal forecasting in regions under water stress, such as the Iberian Peninsula (IP). The skill of the main teleconnection indices as predictors of seasonal streamflow in the IP was evaluated. The streamflow database used was composed of 382 stations, covering the period 1975-2008. Predictions were made using a leave-one-out cross-validation approach based on multiple linear regression, combining the Variance Inflation Factor and Stepwise Backward selection to avoid multicollinearity and select the best subset of predictors. Predictions were made for four forecasting scenarios, from one to four seasons in advance. The correlation coefficient (RHO), Root Mean Square Error Skill Score (RMSESS), and the Gerrity Skill Score (GSS) were used to evaluate the forecasting skill. For autumn streamflow, good forecasting skill (RHO>0.5, RMSESS>20%, GSS>0.4) was found for a third of the stations located in the Mediterranean Andalusian Basin, the North Atlantic Oscillation of the previous winter being the main predictor. Also, fair forecasting skill (RHO>0.44, RMSESS>10%, GSS>0.2) was found for stations in the northwestern IP (16 of these located in the Douro and Tagus Basins) two seasons in advance. For winter streamflow, fair forecasting skill was found one season in advance at 168 stations, with the Snow Advance Index as the main predictor. Finally, forecasting was poorer for spring streamflow than for autumn and winter, since only 16 stations showed fair forecasting skill one season in advance, particularly in the northwest of the IP.
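
    The predictor-screening chain described above (variance-inflation-factor filtering, stepwise backward elimination, and leave-one-out cross-validation of a multiple linear regression) is sketched below in Python. The teleconnection predictor names, the VIF cutoff of 5 and the 0.05 p-value threshold are assumptions for illustration; the study's actual predictor pool and criteria are not reproduced.

      # VIF screening + backward elimination + LOOCV regression on synthetic predictors.
      import numpy as np
      import pandas as pd
      import statsmodels.api as sm
      from statsmodels.stats.outliers_influence import variance_inflation_factor
      from sklearn.model_selection import LeaveOneOut

      rng = np.random.default_rng(2)
      n = 34                                                   # e.g. 34 years of autumn flow
      X = pd.DataFrame({"NAO_prev_winter": rng.normal(size=n),
                        "EA": rng.normal(size=n),
                        "SCAND": rng.normal(size=n)})
      X["EA_copy"] = X["EA"] + rng.normal(0, 0.05, n)          # deliberately collinear predictor
      y = 1.5 * X["NAO_prev_winter"] - 0.8 * X["EA"] + rng.normal(0, 1, n)

      # 1) Remove collinear predictors (VIF above an assumed threshold of 5).
      while True:
          vif = [variance_inflation_factor(X.values, i) for i in range(X.shape[1])]
          if max(vif) < 5:
              break
          X = X.drop(columns=X.columns[int(np.argmax(vif))])

      # 2) Stepwise backward elimination on p-values (0.05 is an assumed cutoff).
      while True:
          pvals = sm.OLS(y, sm.add_constant(X)).fit().pvalues.drop("const")
          if pvals.max() < 0.05:
              break
          X = X.drop(columns=pvals.idxmax())

      # 3) Leave-one-out cross-validated correlation between forecasts and observations.
      preds = np.empty(n)
      for train, test in LeaveOneOut().split(X):
          fit = sm.OLS(y[train], sm.add_constant(X.iloc[train])).fit()
          preds[test] = fit.predict(sm.add_constant(X.iloc[test], has_constant="add"))
      print("selected predictors:", list(X.columns),
            " RHO =", round(np.corrcoef(y, preds)[0, 1], 2))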

  11. Seasonal drought predictability in Portugal using statistical-dynamical techniques

    NASA Astrophysics Data System (ADS)

    Ribeiro, A. F. S.; Pires, C. A. L.

    2016-08-01

    Atmospheric forecasting and predictability are important to promote adaptation and mitigation measures in order to minimize drought impacts. This study estimates hybrid (statistical-dynamical) long-range forecasts of the regional drought index SPI (3-months) over homogeneous regions from mainland Portugal, based on forecasts from the UKMO operational forecasting system, with lead-times up to 6 months. ERA-Interim reanalysis data is used for the purpose of building a set of SPI predictors integrating recent past information prior to the forecast launching. Then, the advantage of combining predictors with both dynamical and statistical background in the prediction of drought conditions at different lags is evaluated. A two-step hybridization procedure is performed, in which both forecasted and observed 500 hPa geopotential height fields are subjected to a PCA in order to use forecasted PCs and persistent PCs as predictors. A second hybridization step consists of a statistical/hybrid downscaling to the regional SPI, based on regression techniques, after the pre-selection of the statistically significant predictors. The SPI forecasts and the added value of combining dynamical and statistical methods are evaluated in cross-validation mode, using the R2 and binary event scores. Results are obtained for the four seasons and it was found that winter is the most predictable season, and that most of the predictive power lies in the large-scale fields from past observations. The hybridization improves the downscaling based on the forecasted PCs, since they provide complementary information (though modest) beyond that of persistent PCs. These findings provide clues about the predictability of the SPI, particularly in Portugal, and may contribute to the predictability of crop yields and provide some guidance for the decision-making of users such as farmers.

  12. The quality and value of seasonal precipitation forecasts for an early warning of large-scale droughts and floods in West Africa

    NASA Astrophysics Data System (ADS)

    Bliefernicht, Jan; Seidel, Jochen; Salack, Seyni; Waongo, Moussa; Laux, Patrick; Kunstmann, Harald

    2017-04-01

    Seasonal precipitation forecasts are a crucial source of information for an early warning of hydro-meteorological extremes in West Africa. However, the current seasonal forecasting system used by the West African weather services in the framework of the West African Climate Outlook forum (PRESAO) is limited to probabilistic precipitation forecasts of 1-month lead time. To improve this provision, we use an ensemble-based quantile-quantile transformation for bias correction of precipitation forecasts provided by a global seasonal ensemble prediction system, the Climate Forecast System Version 2 (CFS2). The statistical technique eliminates systematic differences between global forecasts and observations with the potential to preserve the signal from the model. The technique also has the advantage that it can be easily implemented at national weather services with limited capacities. The statistical technique is used to generate probabilistic forecasts of monthly and seasonal precipitation amount and other precipitation indices useful for an early warning of large-scale drought and floods in West Africa. The evaluation of the statistical technique is done using CFS hindcasts (1982 to 2009) in a cross-validation mode to determine the performance of the precipitation forecasts for several lead times, focusing on drought and flood events depicted over the Volta and Niger basins. In addition, operational forecasts provided by PRESAO are analyzed from 1998 to 2015. The precipitation forecasts are compared to low-skill reference forecasts generated from gridded observations (i.e. GPCC, CHIRPS) and a novel in-situ gauge database from national observation networks (see Poster EGU2017-10271). The forecasts are evaluated using state-of-the-art verification techniques to determine specific quality attributes of probabilistic forecasts such as reliability, accuracy and skill. In addition, cost-loss approaches are used to determine the value of probabilistic forecasts for multiple users in warning situations. The outcomes of the hindcast experiment for the Volta basin illustrate that the statistical technique can clearly improve the CFS precipitation forecasts, with the potential to provide skillful and valuable early precipitation warnings for large-scale drought and flood situations several months ahead. In this presentation we give a detailed overview of the ensemble-based quantile-quantile transformation, its validation and verification, and the possibilities of this technique to complement PRESAO. We also highlight the performance of this technique for extremes such as the Sahel droughts of the 1980s, in comparison to the various reference data sets (e.g. CFS2, PRESAO, observational data sets) used in this study.
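
    The quantile-quantile transformation mentioned above can be illustrated with a generic empirical quantile-mapping sketch: each forecast value is assigned its quantile in the model hindcast climatology and replaced by the observed value at the same quantile. The data and the simple pooling used below are assumptions; the operational ensemble-based variant may differ in detail.

```python
# Minimal empirical quantile-quantile (quantile mapping) bias correction.
import numpy as np

def quantile_map(forecast, model_clim, obs_clim):
    """Replace forecast values with observed values at the same empirical quantile."""
    # Empirical non-exceedance probability of each forecast value in the model climatology.
    probs = np.searchsorted(np.sort(model_clim), forecast, side="right") / len(model_clim)
    probs = np.clip(probs, 0.01, 0.99)            # avoid the extreme tails
    # Look up the same quantiles in the observed climatology.
    return np.quantile(obs_clim, probs)

rng = np.random.default_rng(2)
obs_clim = rng.gamma(shape=2.0, scale=50.0, size=28 * 30)                  # observed seasonal rainfall (mm)
model_clim = 0.7 * rng.gamma(shape=2.0, scale=50.0, size=28 * 30) + 20.0   # systematically biased hindcasts

raw_members = 0.7 * rng.gamma(shape=2.0, scale=50.0, size=24) + 20.0       # one ensemble forecast
corrected = quantile_map(raw_members, model_clim, obs_clim)
print(raw_members.mean(), corrected.mean())   # corrected mean moves toward the observed scale
```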

  13. Drought Monitoring and Forecasting Using the Princeton/U Washington National Hydrologic Forecasting System

    NASA Astrophysics Data System (ADS)

    Wood, E. F.; Yuan, X.; Roundy, J. K.; Lettenmaier, D. P.; Mo, K. C.; Xia, Y.; Ek, M. B.

    2011-12-01

    Extreme hydrologic events in the form of droughts or floods are a significant source of social and economic damage in many parts of the world. Having sufficient warning of extreme events allows managers to prepare for and reduce the severity of their impacts. A hydrologic forecast system can give seasonal predictions that can be used by managers to make better decisions; however, there is still much uncertainty associated with such a system. Therefore it is important to understand the forecast skill of the system before transitioning to operational usage. Seasonal reforecasts (1982-2010) from the NCEP Climate Forecast System (both version 1 (CFS) and version 2 (CFSv2)), Climate Prediction Center (CPC) outlooks, and the European Seasonal Interannual Prediction (EUROSIP) system are assessed for forecasting skill in drought prediction across the U.S., both individually and as a multi-model system. The Princeton/U Washington national hydrologic monitoring and forecast system is being implemented at NCEP/EMC via their Climate Test Bed as the experimental hydrological forecast system to support U.S. operational drought prediction. Using our system, the seasonal forecasts are bias-corrected, downscaled and used to drive the Variable Infiltration Capacity (VIC) land surface model to give seasonal forecasts of hydrologic variables with lead times of up to six months. Results are presented for a number of events, with particular focus on the Apalachicola-Chattahoochee-Flint (ACF) River Basin in the southeastern United States, which has experienced a number of severe droughts in recent years and is a pilot study basin for the National Integrated Drought Information System (NIDIS). The performance of the VIC land surface model is evaluated using observational forcing when compared to observed streamflow. The effectiveness of the forecast system to predict streamflow and soil moisture is evaluated when compared with observed streamflow and modeled soil moisture driven by observed atmospheric forcing. The forecast skills from the dynamical seasonal models (CFSv1, CFSv2, EUROSIP) and CPC are also compared with forecasts based on the Ensemble Streamflow Prediction (ESP) method, which uses initial conditions and historical forcings to generate seasonal forecasts. The skill of the system to predict drought, drought recovery and related hydrological conditions such as low-flows is assessed, along with quantified uncertainty.

  14. Modeled Forecasts of Dengue Fever in San Juan, Puerto Rico Using NASA Satellite Enhanced Weather Forecasts

    NASA Astrophysics Data System (ADS)

    Morin, C.; Quattrochi, D. A.; Zavodsky, B.; Case, J.

    2015-12-01

    Dengue fever (DF) is an important mosquito-transmitted disease that is strongly influenced by meteorological and environmental conditions. Recent research has focused on forecasting DF case numbers based on meteorological data. However, these forecasting tools have generally relied on empirical models that require long DF time series to train. Additionally, their accuracy has been tested retrospectively, using past meteorological data. Consequently, the operational utility of the forecasts is still in question because the errors associated with weather and climate forecasts are not reflected in the results. Using up-to-date weekly dengue case numbers for model parameterization and weather forecast data as meteorological input, we produced weekly forecasts of DF cases in San Juan, Puerto Rico. Each week, the past weeks' case counts were used to re-parameterize a process-based DF model driven with updated weather forecast data to generate forecasts of DF case numbers. Real-time weather forecast data was produced using the Weather Research and Forecasting (WRF) numerical weather prediction (NWP) system enhanced using additional high-resolution NASA satellite data. This methodology was conducted in a weekly iterative process, with each DF forecast being evaluated using county-level DF cases reported by the Puerto Rico Department of Health. The one-week DF forecasts were accurate, especially considering the two sources of model error. First, weather forecasts were sometimes inaccurate and generally produced lower than observed temperatures. Second, the DF model was often overly influenced by the previous week's DF case numbers, though this phenomenon could be lessened by increasing the number of simulations included in the forecast. Although these results are promising, we would like to develop a methodology to produce longer-range forecasts so that public health workers can better prepare for dengue epidemics.

  15. The potential of radar-based ensemble forecasts for flash-flood early warning in the southern Swiss Alps

    NASA Astrophysics Data System (ADS)

    Liechti, K.; Panziera, L.; Germann, U.; Zappa, M.

    2013-10-01

    This study explores the limits of radar-based forecasting for hydrological runoff prediction. Two novel radar-based ensemble forecasting chains for flash-flood early warning are investigated in three catchments in the southern Swiss Alps and set in relation to deterministic discharge forecasts for the same catchments. The first radar-based ensemble forecasting chain is driven by NORA (Nowcasting of Orographic Rainfall by means of Analogues), an analogue-based heuristic nowcasting system to predict orographic rainfall for the following eight hours. The second ensemble forecasting system evaluated is REAL-C2, where the numerical weather prediction model COSMO-2 is initialised with 25 different initial conditions derived from a four-day nowcast with the radar ensemble REAL. Additionally, three deterministic forecasting chains were analysed. The performance of these five flash-flood forecasting systems was analysed for 1389 h between June 2007 and December 2010 for which NORA forecasts were issued, due to the presence of orographic forcing. A clear preference was found for the ensemble approach. Discharge forecasts perform better when forced by NORA and REAL-C2 rather than by deterministic weather radar data. Moreover, it was observed that using an ensemble of initial conditions at the forecast initialisation, as in REAL-C2, significantly improved the forecast skill. These forecasts also perform better than forecasts forced by ensemble rainfall forecasts (NORA) initialised from a single initial condition of the hydrological model. Thus the best results were obtained with the REAL-C2 forecasting chain. However, for regions where REAL cannot be produced, NORA might be an option for forecasting events triggered by orographic precipitation.

  16. Impact of Seasonal Forecasts on Agriculture

    NASA Astrophysics Data System (ADS)

    Aldor-Noiman, S. C.

    2014-12-01

    More extreme and volatile weather conditions are a threat to U.S. agricultural productivity today, as multiple environmental conditions during the growing season impact crop yields. As a result, farmers' agronomic management decisions are strongly shaped by near-term, medium-range and seasonal climate forecasts. The Climate Corporation aims to help farmers around the world protect and improve their farming operations by providing agronomic decision support tools that leverage forecasts on multiple timescales to provide valuable insights directly to farmers. In this talk, we will discuss the impact of accurate seasonal forecasts on major decisions growers face each season. We will also discuss assessment and evaluation of seasonal forecasts in the context of agricultural applications.

  17. Verification of National Weather Service spot forecasts using surface observations

    NASA Astrophysics Data System (ADS)

    Lammers, Matthew Robert

    Software has been developed to evaluate National Weather Service spot forecasts issued to support prescribed burns and early-stage wildfires. Fire management officials request spot forecasts from National Weather Service Weather Forecast Offices to provide detailed guidance as to atmospheric conditions in the vicinity of planned prescribed burns as well as wildfires that do not have incident meteorologists on site. This open source software with online display capabilities is used to examine an extensive set of spot forecasts of maximum temperature, minimum relative humidity, and maximum wind speed from April 2009 through November 2013 nationwide. The forecast values are compared to the closest available surface observations at stations installed primarily for fire weather and aviation applications. The accuracy of the spot forecasts is compared to those available from the National Digital Forecast Database (NDFD). Spot forecasts for selected prescribed burns and wildfires are used to illustrate issues associated with the verification procedures. Cumulative statistics for National Weather Service County Warning Areas and for the nation are presented. Basic error and accuracy metrics for all available spot forecasts and the entire nation indicate that the skill of the spot forecasts is higher than that available from the NDFD, with the greatest improvement for maximum temperature and the least improvement for maximum wind speed.

  18. Application of a hybrid method combining grey model and back propagation artificial neural networks to forecast hepatitis B in china.

    PubMed

    Gan, Ruijing; Chen, Xiaojun; Yan, Yu; Huang, Daizheng

    2015-01-01

    Accurate incidence forecasting of infectious disease provides potentially valuable insights in its own right. It is critical for early prevention and may contribute to health services management and syndrome surveillance. This study investigates the use of a hybrid algorithm combining grey model (GM) and back propagation artificial neural networks (BP-ANN) to forecast hepatitis B in China based on the yearly numbers of hepatitis B cases and to evaluate the method's feasibility. The results showed that the proposed method has advantages over GM(1,1) and GM(2,1) in all the evaluation indexes.

  19. A study on the predictability of the transition day from the dry to the rainy season over South Korea

    NASA Astrophysics Data System (ADS)

    Lee, Sang-Min; Nam, Ji-Eun; Choi, Hee-Wook; Ha, Jong-Chul; Lee, Yong Hee; Kim, Yeon-Hee; Kang, Hyun-Suk; Cho, ChunHo

    2016-08-01

    This study evaluated the prediction accuracies of The Observing System Research and Predictability Experiment (THORPEX) Interactive Grand Global Ensemble (TIGGE) data at six operational forecast centers using the root-mean-square difference (RMSD) and Brier score (BS) from April to July 2012. It also tested the precipitation predictability of the ensemble prediction systems (EPSs) for the onset of the summer rainy season and the withdrawal day of the spring drought over South Korea on 29 June 2012, using the ensemble mean precipitation, ensemble probability precipitation, 10-day lag ensemble forecasts (ensemble mean and probability precipitation), and the effective drought index (EDI). The RMSD analysis of atmospheric variables (geopotential height at 500 hPa, temperature at 850 hPa, sea-level pressure and specific humidity at 850 hPa) showed that the prediction accuracies of the EPSs at the Meteorological Service of Canada (CMC) and the China Meteorological Administration (CMA) were poor, and those at the European Centre for Medium-Range Weather Forecasts (ECMWF) and the Korea Meteorological Administration (KMA) were good. ECMWF and KMA also showed better results than the other EPSs for predicting precipitation in the BS distributions. The onset of the summer rainy season could be predicted using ensemble-mean precipitation from a 4-day lead time at all forecast centers. In addition, the spatial distributions of precipitation predicted by the EPSs at KMA and the Met Office of the United Kingdom (UKMO) were similar to those of observed precipitation, indicating good predictability. The precipitation probability forecasts of the EPSs at CMA, the National Centers for Environmental Prediction (NCEP), and UKMO (ECMWF and KMA) at 1-day lead time produced over-forecasting (under-forecasting) in the reliability diagram, and all EPSs at 2-4-day lead times showed under-forecasting. The precipitation on the onset day of the summer rainy season could also be predicted from a 4-day lead time up to the initial time by using the 10-day lag ensemble mean and probability forecasts. Additionally, the predictability of the withdrawal day of the spring drought, ended by the precipitation on the onset day of the summer rainy season, was evaluated using the Effective Drought Index (EDI) calculated from ensemble mean precipitation forecasts and spreads at five EPSs.

  20. A forecast experiment of earthquake activity in Japan under Collaboratory for the Study of Earthquake Predictability (CSEP)

    NASA Astrophysics Data System (ADS)

    Hirata, N.; Yokoi, S.; Nanjo, K. Z.; Tsuruoka, H.

    2012-04-01

    One major focus of the current Japanese earthquake prediction research program (2009-2013), which is now integrated with the research program for prediction of volcanic eruptions, is to move toward creating testable earthquake forecast models. For this purpose we started an experiment of forecasting earthquake activity in Japan under the framework of the Collaboratory for the Study of Earthquake Predictability (CSEP) through an international collaboration. We established the CSEP Testing Centre, an infrastructure to encourage researchers to develop testable models for Japan, and to conduct verifiable prospective tests of their model performance. We started the first earthquake forecast testing experiment in Japan within the CSEP framework. We use the earthquake catalogue maintained and provided by the Japan Meteorological Agency (JMA). The experiment consists of 12 categories, with 4 testing classes with different time spans (1 day, 3 months, 1 year, and 3 years) and 3 testing regions called "All Japan," "Mainland," and "Kanto." A total of 105 models were submitted, and are currently under the CSEP official suite of tests for evaluating the performance of forecasts. The experiments were completed for 92 rounds for the 1-day class, 6 rounds for the 3-month class, and 3 rounds for the 1-year class. For the 1-day testing class, all models passed all of the CSEP evaluation tests in more than 90% of rounds. The results of the 3-month testing class also gave us new knowledge concerning statistical forecasting models. All models showed a good performance for magnitude forecasting. On the other hand, the observed spatial distribution is rarely consistent with most models when many earthquakes occur at a single spot. We are now preparing a 3-D forecasting experiment with a depth range of 0 to 100 km in the Kanto region. The testing center is improving the evaluation system for the 1-day class experiment so that forecasting and testing can be completed within one day. The first part of a special issue titled "Earthquake Forecast Testing Experiment in Japan" was published in Earth, Planets and Space, Vol. 63, No. 3, in March 2011. The second part of this issue, which is now online, will be published soon. An outline of the experiment and activities of the Japanese Testing Center are published on our web site: http://wwweic.eri.u-tokyo.ac.jp/ZISINyosoku/wiki.en/wiki.cgi

  1. A comparative verification of high resolution precipitation forecasts using model output statistics

    NASA Astrophysics Data System (ADS)

    van der Plas, Emiel; Schmeits, Maurice; Hooijman, Nicolien; Kok, Kees

    2017-04-01

    Verification of localized events such as precipitation has become even more challenging with the advent of high-resolution meso-scale numerical weather prediction (NWP). The realism of a forecast suggests that it should compare well against precipitation radar imagery with similar resolution, both spatially and temporally. Spatial verification methods solve some of the representativity issues that point verification gives rise to. In this study a verification strategy based on model output statistics is applied that aims to address both double penalty and resolution effects that are inherent to comparisons of NWP models with different resolutions. Using predictors based on spatial precipitation patterns around a set of stations, an extended logistic regression (ELR) equation is deduced, leading to a probability forecast distribution of precipitation for each NWP model, analysis and lead time. The ELR equations are derived for predictands based on areal calibrated radar precipitation and SYNOP observations. The aim is to extract maximum information from a series of precipitation forecasts, like a trained forecaster would. The method is applied to the non-hydrostatic model Harmonie (2.5 km resolution), Hirlam (11 km resolution) and the ECMWF model (16 km resolution), overall yielding similar Brier skill scores for the 3 post-processed models, but larger differences for individual lead times. Besides, the Fractions Skill Score is computed using the 3 deterministic forecasts, showing somewhat better skill for the Harmonie model. In other words, despite the realism of Harmonie precipitation forecasts, they only perform similarly or somewhat better than precipitation forecasts from the 2 lower resolution models, at least in the Netherlands.
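
    A minimal sketch of an extended logistic regression of the kind used above is given below: the precipitation threshold enters the regression as an additional predictor, so a single fitted equation returns exceedance probabilities for any threshold. The predictors, thresholds and synthetic data are placeholders, not the spatial radar-based predictands derived in the study.

```python
# Sketch of extended logistic regression (ELR): the exceedance threshold is
# itself a predictor, yielding a full probability distribution of precipitation.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
n = 2000
ens_mean = rng.gamma(2.0, 2.0, size=n)                        # e.g. mean forecast precipitation near a station
obs = rng.gamma(2.0, 1.0, size=n) * (0.5 + 0.25 * ens_mean)   # synthetic observed precipitation
thresholds = np.array([0.3, 1.0, 5.0, 10.0])                  # mm, the predictand categories

# One training row per (case, threshold): predictors plus sqrt(threshold), label = exceedance.
X, y = [], []
for q in thresholds:
    X.append(np.column_stack([np.sqrt(ens_mean), np.full(n, np.sqrt(q))]))
    y.append((obs > q).astype(int))
X, y = np.vstack(X), np.concatenate(y)

elr = LogisticRegression(max_iter=1000).fit(X, y)

# Probability of exceeding each threshold for a new case with ensemble mean 4 mm.
new = np.column_stack([np.full(len(thresholds), np.sqrt(4.0)), np.sqrt(thresholds)])
print(dict(zip(thresholds, elr.predict_proba(new)[:, 1].round(2))))
```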

  2. Calls Forecast for the Moscow Ambulance Service. The Impact of Weather Forecast

    NASA Astrophysics Data System (ADS)

    Gordin, Vladimir; Bykov, Philipp

    2015-04-01

    We use the known statistics of the calls for the current and previous days to predict them for tomorrow and for the following days. We assume that this algorithm will work operationally, cyclically updating the available information and moving the forecast horizon forward. Naturally, the accuracy of such forecasts depends on their lead time and on the choice of diagnosis group. For comparison we used the error of the inertial forecast (tomorrow there will be the same number of calls as today). Our technology demonstrated accuracy approximately twice as good as the inertial forecast. We obtained the following result: the number of calls depends on the actual weather in the city as well as on its rate of change. We were interested in the accuracy of the forecast of the 12-hour sum of calls in real situations. We evaluate the impact of the meteorological errors [1] on the forecast errors of the number of Ambulance calls. The weather and the number of Ambulance calls both have seasonal tendencies. Therefore, if we have medical information from one city only, we should separate the impacts of such predictors as "annual variations in the number of calls" and "weather". We need to consider the seasonal tendencies (associated, e.g. with the seasonal migration of the population) and the impact of the air temperature simultaneously, rather than sequentially. We separately forecasted the number of calls with cardiovascular diagnoses, for which the forecasting method demonstrated a clear advantage when the maximum daily air temperature was used as a predictor. This gives us a chance to statistically evaluate the influence of meteorological factors on the dynamics of medical problems. In some cases this may be useful for understanding the physiology of a disease and possible treatment options. We can assimilate personal archives of medical parameters for individuals with specific diseases together with the corresponding meteorological archive. As a result we hope to evaluate how weather can influence the intensity of the disease. Thus, knowledge of the weather forecast several days ahead will help to predict a person's state of health, and the person will be able to take proactive actions to avoid the anticipated worsening of their health. Literature: 1. Bagrov, A. N., Bykov, F. L., Gordin, V. A. Complex Forecast of Surface Meteorological Parameters. Meteorology and Hydrology, 2014, No. 5, 5-16 (Russian), 283-291 (English). 2. Bykov, Ph. L., Gordin, V. A. Objective Analysis of the Structure of Three-Dimensional Atmospheric Fronts. Izvestia of the Russian Academy of Sciences, Ser. The Physics of Atmosphere and Ocean, 48 (2) (2012), 172-188 (Russian), 152-168 (English), http://dx.doi.org/10.1134/S0001433812020053. 3. Gordin, V. A. Mathematical Problems and Methods in Hydrodynamical Weather Forecasting. Amsterdam etc.: Gordon & Breach Publ. House, 2000. 4. Gordin, V. A. Mathematics, Computer, Weather Forecasting, and Other Mathematical Physics' Scenarios. Moscow: Fizmatlit, 2010, 2012 (Russian).

  3. Operational Earthquake Forecasting: Proposed Guidelines for Implementation (Invited)

    NASA Astrophysics Data System (ADS)

    Jordan, T. H.

    2010-12-01

    The goal of operational earthquake forecasting (OEF) is to provide the public with authoritative information about how seismic hazards are changing with time. During periods of high seismic activity, short-term earthquake forecasts based on empirical statistical models can attain nominal probability gains in excess of 100 relative to the long-term forecasts used in probabilistic seismic hazard analysis (PSHA). Prospective experiments are underway by the Collaboratory for the Study of Earthquake Predictability (CSEP) to evaluate the reliability and skill of these seismicity-based forecasts in a variety of tectonic environments. How such information should be used for civil protection is by no means clear, because even with hundredfold increases, the probabilities of large earthquakes typically remain small, rarely exceeding a few percent over forecasting intervals of days or weeks. Civil protection agencies have been understandably cautious in implementing formal procedures for OEF in this sort of “low-probability environment.” Nevertheless, the need to move more quickly towards OEF has been underscored by recent experiences, such as the 2009 L’Aquila earthquake sequence and other seismic crises in which an anxious public has been confused by informal, inconsistent earthquake forecasts. Whether scientists like it or not, rising public expectations for real-time information, accelerated by the use of social media, will require civil protection agencies to develop sources of authoritative information about the short-term earthquake probabilities. In this presentation, I will discuss guidelines for the implementation of OEF informed by my experience on the California Earthquake Prediction Evaluation Council, convened by CalEMA, and the International Commission on Earthquake Forecasting, convened by the Italian government following the L’Aquila disaster. (a) Public sources of information on short-term probabilities should be authoritative, scientific, open, and timely, and they need to convey the epistemic uncertainties in the operational forecasts. (b) Earthquake probabilities should be based on operationally qualified, regularly updated forecasting systems. All operational procedures should be rigorously reviewed by experts in the creation, delivery, and utility of earthquake forecasts. (c) The quality of all operational models should be evaluated for reliability and skill by retrospective testing, and the models should be under continuous prospective testing in a CSEP-type environment against established long-term forecasts and a wide variety of alternative, time-dependent models. (d) Short-term models used in operational forecasting should be consistent with the long-term forecasts used in PSHA. (e) Alert procedures should be standardized to facilitate decisions at different levels of government and among the public, based in part on objective analysis of costs and benefits. (f) In establishing alert procedures, consideration should also be made of the less tangible aspects of value-of-information, such as gains in psychological preparedness and resilience. Authoritative statements of increased risk, even when the absolute probability is low, can provide a psychological benefit to the public by filling information vacuums that can lead to informal predictions and misinformation.

  4. Forecasts of health care utilization related to pandemic A(H1N1)2009 influenza in the Nord-Pas-de-Calais region, France.

    PubMed

    Giovannelli, J; Loury, P; Lainé, M; Spaccaferri, G; Hubert, B; Chaud, P

    2015-05-01

    To describe and evaluate the forecasts of the load that pandemic A(H1N1)2009 influenza would have on the general practitioners (GP) and hospital care systems, especially during its peak, in the Nord-Pas-de-Calais (NPDC) region, France. Modelling study. The epidemic curve was modelled using an assumption of normal distribution of cases. The values for the forecast parameters were estimated from a literature review of observed data from the Southern hemisphere and French Overseas Territories, where the pandemic had already occurred. Two scenarios were considered, one realistic, the other pessimistic, enabling the authors to evaluate the 'reasonable worst case'. Forecasts were then assessed by comparing them with observed data in the NPDC region, which has a population of 4 million. The realistic scenario's forecasts estimated 300,000 cases, 1500 hospitalizations, and 225 intensive care unit (ICU) admissions for the pandemic wave; 115 hospital beds and 45 ICU beds would be required per day during the peak. The pessimistic scenario's forecasts were 2-3 times higher than the realistic scenario's forecasts. Observed data were: 235,000 cases, 1585 hospitalizations, 58 ICU admissions; and a maximum of 11.6 ICU beds per day. The realistic scenario correctly estimated the temporal distribution of GP and hospitalized cases but overestimated the number of cases admitted to ICU. Obtaining more robust data for parameter estimation--particularly the rate of ICU admission in the population, which the authors recommend using--may provide better forecasts.
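
    A minimal sketch of the epidemic-curve assumption described above: daily case numbers follow a normal distribution in time, scaled to an assumed total number of cases, from which care loads are derived using fixed rates. All parameter values below are illustrative placeholders, not the study's estimates.

```python
# Epidemic curve as a scaled Gaussian in time (illustrative parameters only).
import numpy as np
from scipy.stats import norm

population = 4_000_000
attack_rate = 0.075                        # assumed fraction of the population infected
total_cases = attack_rate * population

days = np.arange(0, 91)                    # pandemic wave of roughly 13 weeks
peak_day, sd_days = 45, 12                 # wave centred on day 45, spread of ~12 days

daily_cases = total_cases * norm.pdf(days, loc=peak_day, scale=sd_days)

hosp_rate, icu_rate = 0.005, 0.00075       # hospitalisation / ICU admission per case (illustrative)
daily_hospitalisations = daily_cases * hosp_rate
daily_icu_admissions = daily_cases * icu_rate

print(f"cases at peak: {daily_cases.max():.0f}/day, "
      f"hospitalisations at peak: {daily_hospitalisations.max():.1f}/day, "
      f"ICU admissions at peak: {daily_icu_admissions.max():.2f}/day")
```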

  5. Predictability and possible earlier awareness of extreme precipitation across Europe

    NASA Astrophysics Data System (ADS)

    Lavers, David; Pappenberger, Florian; Richardson, David; Zsoter, Ervin

    2017-04-01

    Extreme hydrological events can cause large socioeconomic damages in Europe. In winter, a large proportion of these flood episodes are associated with atmospheric rivers, a region of intense water vapour transport within the warm sector of extratropical cyclones. When preparing for such extreme events, forecasts of precipitation from numerical weather prediction models or river discharge forecasts from hydrological models are generally used. Given the strong link between water vapour transport (integrated vapour transport IVT) and heavy precipitation, it is possible that IVT could be used to warn of extreme events. Furthermore, as IVT is located in extratropical cyclones, it is hypothesized to be a more predictable variable due to its link with synoptic-scale atmospheric dynamics. In this research, we firstly provide an overview of the predictability of IVT and precipitation forecasts, and secondly introduce and evaluate the ECMWF Extreme Forecast Index (EFI) for IVT. The EFI is a tool that has been developed to evaluate how ensemble forecasts differ from the model climate, thus revealing the extremeness of the forecast. The ability of the IVT EFI to capture extreme precipitation across Europe during winter 2013/14, 2014/15, and 2015/16 is presented. The results show that the IVT EFI is more capable than the precipitation EFI of identifying extreme precipitation in forecast week 2 during forecasts initialized in a positive North Atlantic Oscillation (NAO) phase. However, the precipitation EFI is superior during the negative NAO phase and at shorter lead times. An IVT EFI example is shown for storm Desmond in December 2015 highlighting its potential to identify upcoming hydrometeorological extremes.

  6. Impact of Assimilation on Heavy Rainfall Simulations Using WRF Model: Sensitivity of Assimilation Results to Background Error Statistics

    NASA Astrophysics Data System (ADS)

    Rakesh, V.; Kantharao, B.

    2017-03-01

    Data assimilation is considered one of the effective tools for improving the forecast skill of mesoscale models. However, for optimum utilization and effective assimilation of observations, many factors need to be taken into account while designing a data assimilation methodology. One of the critical components that determines the amount and propagation of observation information into the analysis is the model background error statistics (BES). The objective of this study is to quantify how the BES used in data assimilation impact the simulation of heavy rainfall events over Karnataka, a southern state of India. Simulations of 40 heavy rainfall events were carried out using the Weather Research and Forecasting Model with and without data assimilation. The assimilation experiments were conducted using global and regional BES, while the experiment with no assimilation was used as the baseline for assessing the impact of data assimilation. The simulated rainfall is verified against high-resolution rain-gauge observations over Karnataka. Statistical evaluation using several accuracy and skill measures shows that data assimilation has improved the heavy rainfall simulation. Our results showed that the experiment using regional BES outperformed the one using global BES. Critical thermodynamic variables conducive to heavy rainfall, such as convective available potential energy, are more realistic when simulated using the regional BES than the global BES. These results have important practical implications for the design of forecast platforms and for decision-making during extreme weather events.

  7. Comment on "Can assimilation of crowdsourced data in hydrological modelling improve flood prediction?" by Mazzoleni et al. (2017)

    NASA Astrophysics Data System (ADS)

    Viero, Daniele P.

    2018-01-01

    Citizen science and crowdsourcing are gaining increasing attention among hydrologists. In a recent contribution, Mazzoleni et al. (2017) investigated the integration of crowdsourced data (CSD) into hydrological models to improve the accuracy of real-time flood forecasts. The authors used synthetic CSD (i.e. not actually measured), because real CSD were not available at the time of the study. In their work, which is a proof-of-concept study, Mazzoleni et al. (2017) showed that assimilation of CSD improves the overall model performance; the impact of irregular frequency of available CSD, and that of data uncertainty, were also deeply assessed. However, the use of synthetic CSD in conjunction with (semi-)distributed hydrological models deserves further discussion. As a result of equifinality, poor model identifiability, and deficiencies in model structure, internal states of (semi-)distributed models can hardly mimic the actual states of complex systems away from calibration points. Accordingly, the use of synthetic CSD that are drawn from model internal states under best-fit conditions can lead to overestimation of the effectiveness of CSD assimilation in improving flood prediction. Operational flood forecasting, which results in decisions of high societal value, requires robust knowledge of the model behaviour and an in-depth assessment of both model structure and forcing data. Additional guidelines are given that are useful for the a priori evaluation of CSD for real-time flood forecasting and, hopefully, for planning apt design strategies for both model calibration and collection of CSD.

  8. Evaluation of the Plant-Craig stochastic convection scheme in an ensemble forecasting system

    NASA Astrophysics Data System (ADS)

    Keane, R. J.; Plant, R. S.; Tennant, W. J.

    2015-12-01

    The Plant-Craig stochastic convection parameterization (version 2.0) is implemented in the Met Office Regional Ensemble Prediction System (MOGREPS-R) and is assessed in comparison with the standard convection scheme with a simple stochastic element only, from random parameter variation. A set of 34 ensemble forecasts, each with 24 members, is considered, over the month of July 2009. Deterministic and probabilistic measures of the precipitation forecasts are assessed. The Plant-Craig parameterization is found to improve probabilistic forecast measures, particularly the results for lower precipitation thresholds. The impact on deterministic forecasts at the grid scale is neutral, although the Plant-Craig scheme does deliver improvements when forecasts are made over larger areas. The improvements found are greater in conditions of relatively weak synoptic forcing, for which convective precipitation is likely to be less predictable.

  9. The impact of Sun-weather research on forecasting

    NASA Technical Reports Server (NTRS)

    Larsen, M. F.

    1979-01-01

    The possible impact of Sun-weather research on forecasting is examined. The type of knowledge of the effect is evaluated to determine if it is in a form that can be used for forecasting purposes. It is concluded that the present understanding of the effect does not lend itself readily to applications for forecast purposes. The limits of present predictive skill are examined and it is found that skill is most lacking for prediction of the smallest scales of atmospheric motion. However, it is not expected that Sun-weather research will have any significant impact on forecasting the smaller scales since predictability at these scales is limited by the finite grid size resolution and the time scales of turbulent diffusion. The predictability limits for the largest scales are on the order of several weeks although presently only a one week forecast is achievable.

  10. The probability forecast evaluation of hazard and storm wind over the territories of Russia and Europe

    NASA Astrophysics Data System (ADS)

    Perekhodtseva, E. V.

    2012-04-01

    The results of probabilistic forecast methods for summer storm and hazardous winds over the territories of Russia and Europe are presented in this paper. These methods use a hydrodynamic-statistical model of these phenomena. The statistical model was developed for the recognition of situations involving these phenomena. For this purpose, samples of the values of atmospheric parameters (n=40) were accumulated for the presence and for the absence of storm and hazardous winds. The compression of the predictor space without loss of information was achieved by a special algorithm (k=7); the forecast distinguishes wind speeds above 19 m/s (probability values of 65%), above 24 m/s (values of 75%), and above 29 m/s or the areas of tornadoes and strong squalls. The evaluation of this probability forecast was carried out using the Brier score. The estimation was successful, with a Brier score of B=0.37 for the European part of Russia. The application of the probability forecast of storm and hazardous winds makes it possible to mitigate economic losses when the errors of the first and second kind of the categorical storm wind forecast are not small. Many examples of the storm wind probability forecast are presented in this report.
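
    For reference, the Brier score used in the evaluation above is simply the mean squared difference between forecast probabilities and binary outcomes, as in the short sketch below; the forecasts and outcomes shown are hypothetical.

```python
# Brier score for a small set of hypothetical probabilistic storm-wind forecasts.
import numpy as np

def brier_score(prob_forecasts, outcomes):
    """Mean squared difference between forecast probabilities and binary outcomes."""
    prob_forecasts = np.asarray(prob_forecasts, dtype=float)
    outcomes = np.asarray(outcomes, dtype=float)
    return np.mean((prob_forecasts - outcomes) ** 2)

probs = [0.8, 0.6, 0.1, 0.3, 0.9, 0.2]   # forecast probability of a storm-wind event
obs = [1, 1, 0, 1, 1, 0]                 # 1 = event occurred, 0 = it did not
print(f"B = {brier_score(probs, obs):.2f}")
```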

  11. Evaluation of precipitation nowcasting techniques for the Alpine region

    NASA Astrophysics Data System (ADS)

    Panziera, L.; Mandapaka, P.; Atencia, A.; Hering, A.; Germann, U.; Gabella, M.; Buzzi, M.

    2010-09-01

    This study presents a large-sample evaluation of different nowcasting systems over the Southern Swiss Alps. Radar observations are taken as a reference against which to assess the performance of the following short-term quantitative precipitation forecasting methods:
    - Eulerian persistence: the current radar image is taken as the forecast.
    - Lagrangian persistence: precipitation patterns are advected following the field of storm motion (the MAPLE algorithm is used).
    - NORA: a novel nowcasting system which exploits the presence of orographic forcing; by comparing meteorological predictors estimated in real time with those from a large historical data set, the events with the highest resemblance are picked to produce the forecast.
    - COSMO2: the limited-area numerical model operationally used at MeteoSwiss.
    - Blending of the precipitation forecasts from the aforementioned nowcasting tools.
    The investigation aims to set up a probabilistic radar rainfall-runoff model experiment for steep Alpine catchments as part of the European research project IMPRINTS.

  12. Streamflow forecasts from WRF precipitation for flood early warning in mountain tropical areas

    NASA Astrophysics Data System (ADS)

    Rogelis, María Carolina; Werner, Micha

    2018-02-01

    Numerical weather prediction (NWP) models are fundamental for extending forecast lead times beyond the concentration time of a watershed. Particularly for flash flood forecasting in tropical mountainous watersheds, forecast precipitation is required to provide timely warnings. This paper aims to assess the potential of NWP for flood early warning purposes, and the possible improvement that bias correction can provide, in a tropical mountainous area. The paper focuses on the comparison of streamflows obtained from the post-processed precipitation forecasts, particularly the comparison of ensemble forecasts and their potential in providing skilful flood forecasts. The Weather Research and Forecasting (WRF) model is used to produce precipitation forecasts that are post-processed and used to drive a hydrologic model. Discharge forecasts obtained from the hydrological model are used to assess the skill of the WRF model. The results show that post-processed WRF precipitation adds value to the flood early warning system when compared to zero-precipitation forecasts, although the precipitation forecast used in this analysis showed little added value when compared to climatology. However, the reduction of biases obtained from the post-processed ensembles shows the potential of this method and model to provide usable precipitation forecasts in tropical mountainous watersheds. The need for more detailed evaluation of the WRF model in the study area is highlighted, particularly the identification of the most suitable parameterisation, due to the inability of the model to adequately represent the convective precipitation found in the study area.

  13. Forecasting approaches to the Mekong River

    NASA Astrophysics Data System (ADS)

    Plate, E. J.

    2009-04-01

    Hydrologists distinguish between flood forecasts, which are concerned with events of the immediate future, and flood predictions, which are concerned with events that are possible, but whose date of occurrence is not determined. Although in principle both involve the determination of runoff from rainfall, the analytical approaches differ because of different objectives. The differences between the two approaches will be discussed, starting with an analysis of the forecasting process. The Mekong River in south-east Asia is used as an example. Prediction is defined as a forecast for a hypothetical event, such as the 100-year flood, which is usually sufficiently specified by its magnitude and its probability of occurrence. It forms the basis for designing flood protection structures and risk management activities. The method for determining these quantities is hydrological modeling combined with extreme value statistics, today usually applied both to rainfall events and to observed river discharges. A rainfall-runoff model converts extreme rainfall events into extreme discharges, which at certain gauge points along a river are calibrated against observed discharges. The quality of the model output is assessed against the mean value by means of the Nash-Sutcliffe quality criterion. The result of this procedure is a design hydrograph (or a family of design hydrographs) which are used as inputs into a hydraulic model, which converts the hydrograph into design water levels according to the hydraulic situation of the location. The accuracy of making a prediction in this sense is not particularly high: hydrologists know that the 100-year flood is a statistical quantity which can be estimated only within comparatively wide error bounds, and the hydraulics of a river site, in particular under conditions of heavy sediment loads, has many uncertainties. Safety margins, such as additional freeboards, are provided to compensate for the uncertainty of the prediction. Forecasts, on the other hand, have the objective of obtaining an accurate hydrograph of the near future. The method by means of which this is done is not as important as the accuracy of the forecast. A mathematical rainfall-runoff model is not necessarily a good forecast model. It has to be very carefully designed, and in many cases statistical models are found to give better results than mathematical models. Forecasters have the advantage of knowing the course of the hydrographs up to the point in time where forecasts have to be made. Therefore, models can be calibrated online against the hydrograph of the immediate past. To assess the quality of a forecast, the quality criterion should not be based on the mean value, as the Nash-Sutcliffe criterion is, but on the best forecast given the information up to the forecast time. Without any additional information, the best forecast when only the present-day value is known is to assume a no-change scenario, i.e. to assume that the present value does not change in the immediate future. For the Mekong there exists a forecasting system which is based on a rainfall-runoff model operated by the Mekong River Commission. This model is found not to be adequate for forecasting for periods longer than one or two days ahead. Improvements are sought through two approaches: a strictly deterministic rainfall-runoff model, and a strictly statistical model based on regression with upstream stations. The two approaches are compared, and suggestions are made on how best to combine the advantages of both approaches. This requires that due consideration is given to critical hydraulic conditions of the river at and in between the gauging stations. Critical situations occur in two ways: when the river overtops, in which case the rainfall-runoff model is incomplete unless overflow losses are considered, and at the confluence with tributaries. Of particular importance is the role of the large Tonle Sap Lake, which dampens the hydrograph downstream of Phnom Penh. The effect of these components of river hydraulics on forecasting accuracy will be assessed.
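
    The contrast drawn above between the Nash-Sutcliffe criterion and a persistence-referenced criterion can be made concrete with a short sketch; the synthetic hydrograph below is smooth enough that a forecast scoring well against the observed mean can still fail to beat the no-change forecast.

```python
# Nash-Sutcliffe efficiency (reference: observed mean) versus a
# persistence-referenced efficiency (reference: no-change forecast).
import numpy as np

def nash_sutcliffe(sim, obs):
    """1 - SSE(model) / SSE(mean of observations)."""
    sim, obs = np.asarray(sim, float), np.asarray(obs, float)
    return 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

def persistence_efficiency(sim, obs, lead=1):
    """1 - SSE(model) / SSE(no-change forecast issued `lead` steps earlier)."""
    sim, obs = np.asarray(sim, float), np.asarray(obs, float)
    persistence = obs[:-lead]                 # today's value used as tomorrow's forecast
    return 1.0 - np.sum((sim[lead:] - obs[lead:]) ** 2) / np.sum((persistence - obs[lead:]) ** 2)

rng = np.random.default_rng(4)
obs = 1000 + 500 * np.sin(np.linspace(0, 2 * np.pi, 120)) + rng.normal(0, 30, 120)  # smooth hydrograph
sim = obs + rng.normal(0, 80, 120)            # a forecast with moderate error

print(f"NSE = {nash_sutcliffe(sim, obs):.2f}, "
      f"persistence-referenced efficiency = {persistence_efficiency(sim, obs):.2f}")
```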

  14. Software selection based on analysis and forecasting methods, practised in 1C

    NASA Astrophysics Data System (ADS)

    Vazhdaev, A. N.; Chernysheva, T. Y.; Lisacheva, E. I.

    2015-09-01

    The research focuses on the built-in mechanisms of the “1C: Enterprise 8” platform for data analysis and forecasting. It is important to evaluate and select proper software in order to develop effective strategies for customer relationship management in terms of sales, as well as for the implementation and further maintenance of the software. The research data allow new forecast models to be created for scheduling further software distribution.

  15. PERFORMANCE AND DIAGNOSTIC EVALUATION OF OZONE PREDICTIONS BY THE ETA-COMMUNITY MULTISCALE AIR QUALITY FORECAST SYSTEM DURING THE 2002 NEW ENGLAND AIR QUALITY STUDY

    EPA Science Inventory

    A real-time air quality forecasting system (Eta-CMAQ model suite) has been developed by linking the NCEP Eta model to the U.S. EPA CMAQ model. This work presents results from the application of the Eta-CMAQ modeling system for forecasting O3 over the northeastern U.S d...

  16. Cb-LIKE - Thunderstorm forecasts up to six hours with fuzzy logic

    NASA Astrophysics Data System (ADS)

    Köhler, Martin; Tafferner, Arnold

    2016-04-01

    Thunderstorms with their accompanying effects like heavy rain, hail, or downdrafts cause delays and flight cancellations and therefore high additional cost for airlines and airport operators. A reliable thunderstorm forecast up to several hours ahead could provide more time for decision makers in air traffic for an appropriate reaction to possible storm cells and the initiation of adequate countermeasures. To provide the required forecasts, Cb-LIKE (Cumulonimbus-LIKElihood) has been developed at the DLR (Deutsches Zentrum für Luft- und Raumfahrt) Institute of Atmospheric Physics. The new algorithm is an automated system which designates areas with possible thunderstorm development by using model data of the COSMO-DE weather model, which is operated by the German Meteorological Service (DWD). A newly developed "Best-Member-Selection" method allows the automatic selection of that particular model run of a time-lagged COSMO-DE model ensemble which best matches the current thunderstorm situation. This ensures that the best available data basis is used for the calculation of the thunderstorm forecasts by Cb-LIKE. Altogether there are four different modes for the selection of the best member. Four atmospheric parameters (CAPE, vertical wind velocity, radar reflectivity and cloud top temperature) of the model output are used within the algorithm. A newly developed fuzzy logic system enables the subsequent combination of the model parameters and the calculation of a thunderstorm indicator within a value range of 12 to 88 for each grid point of the model domain for the following six hours in one-hour intervals. The higher the indicator value, the more strongly the model parameters imply the development of thunderstorms. The quality of the Cb-LIKE thunderstorm forecasts was evaluated by a substantial verification using a neighborhood verification approach and multi-event contingency tables. The verification was performed for the whole summer period of 2012. On the basis of a deterministic object comparison with heavy precipitation cells observed by the radar-based thunderstorm tracking algorithm Rad-TRAM, several verification scores like BIAS, POD, FAR and CSI were calculated to identify possible advantages of the new algorithm. The presentation illustrates in detail the concept of the Cb-LIKE algorithm with regard to the fuzzy logic system and the Best-Member-Selection. Additionally some case studies and the most important results of the verification will be shown. The implementation of the forecasts into the DLR WxFUSION system, a user-oriented forecasting system for air traffic, will also be included.
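
    The following toy sketch illustrates the general fuzzy-logic idea of combining the four model parameters into an indicator on the 12-88 scale. The membership functions, breakpoints and simple averaging are invented for illustration and do not reproduce the actual Cb-LIKE rule base or its Best-Member-Selection.

```python
# Toy fuzzy-logic combination: map each parameter to a [0, 1] membership
# ("thunderstorm-like"), combine, and rescale to the 12-88 indicator range.
import numpy as np

def ramp(x, lo, hi):
    """Piecewise-linear membership: 0 below lo, 1 above hi."""
    return np.clip((x - lo) / (hi - lo), 0.0, 1.0)

def thunderstorm_indicator(cape, w_up, reflectivity, cloud_top_temp):
    m_cape = ramp(cape, 300.0, 2000.0)            # J/kg
    m_w = ramp(w_up, 0.5, 5.0)                    # m/s updraft
    m_refl = ramp(reflectivity, 20.0, 45.0)       # dBZ
    m_ctt = ramp(-cloud_top_temp, 20.0, 55.0)     # colder cloud tops -> higher membership
    combined = np.mean([m_cape, m_w, m_refl, m_ctt], axis=0)
    return 12.0 + 76.0 * combined                 # rescale to the 12-88 indicator range

print(thunderstorm_indicator(cape=1500.0, w_up=3.0, reflectivity=40.0, cloud_top_temp=-50.0))
```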

  17. A prospective earthquake forecast experiment in the western Pacific

    NASA Astrophysics Data System (ADS)

    Eberhard, David A. J.; Zechar, J. Douglas; Wiemer, Stefan

    2012-09-01

    Since the beginning of 2009, the Collaboratory for the Study of Earthquake Predictability (CSEP) has been conducting an earthquake forecast experiment in the western Pacific. This experiment is an extension of the Kagan-Jackson experiments begun 15 years earlier and is a prototype for future global earthquake predictability experiments. At the beginning of each year, seismicity models make a spatially gridded forecast of the number of Mw≥ 5.8 earthquakes expected in the next year. For the three participating statistical models, we analyse the first two years of this experiment. We use likelihood-based metrics to evaluate the consistency of the forecasts with the observed target earthquakes and we apply measures based on Student's t-test and the Wilcoxon signed-rank test to compare the forecasts. Overall, a simple smoothed seismicity model (TripleS) performs the best, but there are some exceptions that indicate continued experiments are vital to fully understand the stability of these models, the robustness of model selection and, more generally, earthquake predictability in this region. We also estimate uncertainties in our results that are caused by uncertainties in earthquake location and seismic moment. Our uncertainty estimates are relatively small and suggest that the evaluation metrics are relatively robust. Finally, we consider the implications of our results for a global earthquake forecast experiment.

  18. Bayesian flood forecasting methods: A review

    NASA Astrophysics Data System (ADS)

    Han, Shasha; Coulibaly, Paulin

    2017-08-01

    Over the past few decades, floods have been seen as one of the most common and widely distributed natural disasters in the world. If floods could be accurately forecasted in advance, then their negative impacts could be greatly minimized. It is widely recognized that quantification and reduction of uncertainty associated with the hydrologic forecast is of great importance for flood estimation and rational decision making. The Bayesian forecasting system (BFS) offers an ideal theoretical framework for uncertainty quantification that can be developed for probabilistic flood forecasting via any deterministic hydrologic model. It provides a suitable theoretical structure, empirically validated models and a reasonable analytic-numerical computation method, and can be developed into various Bayesian forecasting approaches. This paper presents a comprehensive review of Bayesian forecasting approaches applied to flood forecasting from 1999 to the present. The review starts with an overview of the fundamentals of BFS and recent advances in BFS, followed by BFS applications in river stage forecasting and real-time flood forecasting, then moves to a critical analysis evaluating the advantages and limitations of Bayesian forecasting methods and other predictive uncertainty assessment approaches in flood forecasting, and finally discusses future research directions in Bayesian flood forecasting. Results show that the Bayesian flood forecasting approach is an effective and advanced way of flood estimation: it considers all sources of uncertainty and produces a predictive distribution of the river stage, river discharge or runoff, thus giving more accurate and reliable flood forecasts. Some emerging Bayesian forecasting methods (e.g. ensemble Bayesian forecasting system, Bayesian multi-model combination) were shown to overcome the limitations of a single model or fixed model weights and effectively reduce predictive uncertainty. In recent years, various Bayesian flood forecasting approaches have been developed and widely applied, but there is still room for improvement. Future research in the context of Bayesian flood forecasting should focus on the assimilation of various sources of newly available information and the improvement of predictive performance assessment methods.

  19. A Gaussian Processes Technique for Short-term Load Forecasting with Considerations of Uncertainty

    NASA Astrophysics Data System (ADS)

    Ohmi, Masataro; Mori, Hiroyuki

    In this paper, an efficient method is proposed for short-term load forecasting with Gaussian processes. Short-term load forecasting plays a key role in smooth power system operation, such as economic load dispatching, unit commitment, etc. Recently, the deregulated and competitive power market has increased the degree of uncertainty. As a result, it is more important to obtain better prediction results to save costs. One of the most important aspects is that power system operators need the upper and lower bounds of the predicted load to deal with the uncertainty, while they also require more accurate predicted values. The proposed method is based on a Bayesian model in which the output is expressed as a distribution rather than a point. To realize the model efficiently, this paper proposes Gaussian processes, which combine a Bayesian linear model with a kernel machine, to obtain the distribution of the predicted value. The proposed method is successfully applied to real data for daily maximum load forecasting.
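
    A minimal Gaussian process regression sketch in the spirit of the method above is given below: the predictive distribution yields both a point forecast and upper and lower bounds. The kernel, noise level and toy load series are assumptions, not the paper's model.

```python
# Gaussian process regression with an RBF kernel: predictive mean and 95% bounds.
import numpy as np

def rbf_kernel(a, b, length=2.0, variance=1.0):
    d2 = (a[:, None] - b[None, :]) ** 2
    return variance * np.exp(-0.5 * d2 / length ** 2)

# Toy daily-maximum-load history indexed by day number (standardised units).
x_train = np.arange(0, 30, dtype=float)
y_train = np.sin(2 * np.pi * x_train / 7.0) + 0.1 * np.random.default_rng(5).normal(size=30)

x_test = np.arange(30, 37, dtype=float)   # forecast the next week
noise = 0.1 ** 2

K = rbf_kernel(x_train, x_train) + noise * np.eye(len(x_train))
K_s = rbf_kernel(x_train, x_test)
K_ss = rbf_kernel(x_test, x_test)

L = np.linalg.cholesky(K)
alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_train))
mean = K_s.T @ alpha                                         # predictive mean
v = np.linalg.solve(L, K_s)
var = np.diag(K_ss) - np.sum(v ** 2, axis=0) + noise         # predictive variance
lower, upper = mean - 1.96 * np.sqrt(var), mean + 1.96 * np.sqrt(var)

for d, m, lo, hi in zip(x_test, mean, lower, upper):
    print(f"day {d:.0f}: forecast {m:+.2f}, 95% bounds [{lo:+.2f}, {hi:+.2f}]")
```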

  20. A genetic-algorithm-based remnant grey prediction model for energy demand forecasting.

    PubMed

    Hu, Yi-Chung

    2017-01-01

    Energy demand is an important economic index, and demand forecasting has played a significant role in drawing up energy development plans for cities or countries. As the use of large datasets and statistical assumptions is often impractical to forecast energy demand, the GM(1,1) model is commonly used because of its simplicity and ability to characterize an unknown system by using a limited number of data points to construct a time series model. This paper proposes a genetic-algorithm-based remnant GM(1,1) (GARGM(1,1)) with sign estimation to further improve the forecasting accuracy of the original GM(1,1) model. The distinctive feature of GARGM(1,1) is that it simultaneously optimizes the parameter specifications of the original and its residual models by using the GA. The results of experiments pertaining to a real case of energy demand in China showed that the proposed GARGM(1,1) outperforms other remnant GM(1,1) variants.
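
    For readers unfamiliar with the GM(1,1) construction that the paper builds on, a minimal sketch of the basic grey model is given below; it omits the paper's GA-optimized remnant model, and the demand figures and horizon are made up.

```python
# Minimal sketch of the basic GM(1,1) grey model (without the paper's
# GA-optimized remnant model). Demand figures below are hypothetical.
import numpy as np

def gm11_forecast(x0, horizon=2):
    n = len(x0)
    x1 = np.cumsum(x0)                                   # accumulated generating operation (AGO)
    z1 = 0.5 * (x1[1:] + x1[:-1])                        # background values
    B = np.column_stack([-z1, np.ones(n - 1)])
    Y = x0[1:]
    a, b = np.linalg.lstsq(B, Y, rcond=None)[0]          # developing coefficient a, grey input b
    k = np.arange(n + horizon)
    x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a    # time response of the accumulated series
    x0_hat = np.diff(x1_hat, prepend=0.0)                # inverse AGO back to the original scale
    return x0_hat[n:]                                    # out-of-sample forecasts

demand = np.array([892.0, 934.0, 1012.0, 1078.0, 1135.0, 1203.0])
print(gm11_forecast(demand, horizon=2))
```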

  1. A genetic-algorithm-based remnant grey prediction model for energy demand forecasting

    PubMed Central

    2017-01-01

    Energy demand is an important economic index, and demand forecasting has played a significant role in drawing up energy development plans for cities or countries. As the use of large datasets and statistical assumptions is often impractical to forecast energy demand, the GM(1,1) model is commonly used because of its simplicity and ability to characterize an unknown system by using a limited number of data points to construct a time series model. This paper proposes a genetic-algorithm-based remnant GM(1,1) (GARGM(1,1)) with sign estimation to further improve the forecasting accuracy of the original GM(1,1) model. The distinctive feature of GARGM(1,1) is that it simultaneously optimizes the parameter specifications of the original and its residual models by using the GA. The results of experiments pertaining to a real case of energy demand in China showed that the proposed GARGM(1,1) outperforms other remnant GM(1,1) variants. PMID:28981548

  2. Application of a medium-range global hydrologic probabilistic forecast scheme to the Ohio River Basin

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Voisin, Nathalie; Pappenberger, Florian; Lettenmaier, D. P.

    2011-08-15

    A 10-day globally applicable flood prediction scheme was evaluated using the Ohio River basin as a test site for the period 2003-2007. The Variable Infiltration Capacity (VIC) hydrology model was initialized with the European Centre for Medium Range Weather Forecasts (ECMWF) analysis temperatures and wind, and Tropical Rainfall Measuring Mission Multi-satellite Precipitation Analysis (TMPA) precipitation up to the day of forecast. In forecast mode, the VIC model was then forced with a calibrated and statistically downscaled ECMWF ensemble prediction system (EPS) 10-day ensemble forecast. A parallel setup was used in which ECMWF EPS forecasts were interpolated to the spatial scale of the hydrology model. Each set of forecasts was extended by 5 days using monthly mean climatological variables and zero precipitation in order to account for the effect of initial conditions. The 15-day spatially distributed ensemble runoff forecasts were then routed to four locations in the basin, each with different drainage areas. Surrogates for observed daily runoff and flow were provided by the reference run, specifically a VIC simulation forced with ECMWF analysis fields and TMPA precipitation fields. The flood prediction scheme using the calibrated and downscaled ECMWF EPS forecasts was shown to be more accurate and reliable than the one using interpolated forecasts, for both daily distributed runoff forecasts and daily flow forecasts. Initial and antecedent conditions dominated the flow forecasts for lead times shorter than the time of concentration, depending on the flow forecast amounts and the drainage area sizes. The flood prediction scheme had useful skill for the 10 following days at all sites.

  3. Optimization of the Costs and the Safety of Maritime Transport by Routing: The use of Currents Forecast in the Routing of Racing Sail Boats as a Prototype of Route Optimization for Trading Ships

    NASA Astrophysics Data System (ADS)

    Theunynck, Denis; Peze, Thierry; Toumazou, Vincent; Zunquin, Gauthier; Cohen, Olivier; Monges, Arnaud

    2005-03-01

    It is interesting to see whether the routing model designed for races and major Navy operations could be transferred to commercial navigation and, if so, within which framework. Sail boat routing earned its letters of nobility during great races like the « Route du Rhum » or the transatlantic race « Jacques Vabre ». It is the ultimate stage of the approach begun by the Navy at the time of major operations, such as D-day (Overlord), June 6, 1944, in Normandy. Routing has, from the beginning, mainly been based on statistical knowledge and weather forecasts, but with the recent availability of reliable currents forecasts, sail boat routers and/or skippers now have to learn how to use both winds and currents to obtain the best performance, that is, to travel between two points in the shortest possible time under acceptable security conditions. Are currents forecasts only useful to racing sail boats? Of course not: they are a great help to fishermen, for whom knowledge of the currents is also knowledge of the sea temperature, which indicates the probability of fish presence. They are also used in offshore work to predict the roughness of the sea during operations. A less developed field of application is the route optimization of trading ships. The idea is to optimize the use of currents to increase the relative speed of ships with no increase in fuel expense. This new field will require that currents forecasters learn about the specific needs of another type of client. There is also a need for training, because future customers will have to learn how to use the information they will get. At this point, the introduction of currents forecasts into racing sail boat routing is only a first step. It is of great interest because it can rely on a high level of knowledge in routing. The main difference is of course that the wind direction and its force are of greater importance to a sail boat than they are to a trading ship, for which the points of interest are fuel consumption and keeping to the ETA. Despite that, sail boat routing could be used as a prototype to determine the needs, both in terms of information and training, of ship routers and skippers.

  4. Model-free aftershock forecasts constructed from similar sequences in the past

    NASA Astrophysics Data System (ADS)

    van der Elst, N.; Page, M. T.

    2017-12-01

    The basic premise behind aftershock forecasting is that sequences in the future will be similar to those in the past. Forecast models typically use empirically tuned parametric distributions to approximate past sequences, and project those distributions into the future to make a forecast. While parametric models do a good job of describing average outcomes, they are not explicitly designed to capture the full range of variability between sequences, and can suffer from over-tuning of the parameters. In particular, parametric forecasts may produce a high rate of "surprises" - sequences that land outside the forecast range. Here we present a non-parametric forecast method that cuts out the parametric "middleman" between training data and forecast. The method is based on finding past sequences that are similar to the target sequence, and evaluating their outcomes. We quantify similarity as the Poisson probability that the observed event count in a past sequence reflects the same underlying intensity as the observed event count in the target sequence. Event counts are defined in terms of differential magnitude relative to the mainshock. The forecast is then constructed from the distribution of past sequence outcomes, weighted by their similarity. We compare the similarity forecast with the Reasenberg and Jones (RJ95) method, for a set of 2807 global aftershock sequences of M≥6 mainshocks. We implement a sequence-specific RJ95 forecast using a global average prior and Bayesian updating, but do not propagate epistemic uncertainty. The RJ95 forecast is somewhat more precise than the similarity forecast: 90% of observed sequences fall within a factor of two of the median RJ95 forecast value, whereas the fraction is 85% for the similarity forecast. However, the surprise rate is much higher for the RJ95 forecast; 10% of observed sequences fall in the upper 2.5% of the (Poissonian) forecast range. The surprise rate is less than 3% for the similarity forecast. The similarity forecast may be useful to emergency managers and non-specialists when confidence or expertise in parametric forecasting is lacking. The method makes over-tuning impossible, and minimizes the rate of surprises. At the least, this forecast constitutes a useful benchmark for more precisely tuned parametric forecasts.
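
    The similarity-weighting idea can be illustrated with a short sketch; note that the paper's similarity measure integrates over the unknown Poisson intensity, whereas the stand-in below simply evaluates a Poisson probability at the target count, and all counts shown are hypothetical.

```python
# Hedged sketch of a similarity-weighted, non-parametric forecast. The paper's
# measure integrates over the unknown intensity; this stand-in simply evaluates
# a Poisson probability at the target count. All data are hypothetical.
import numpy as np
from scipy.stats import poisson

def similarity_forecast(target_count, past_counts, past_outcomes, quantiles=(0.05, 0.5, 0.95)):
    """past_counts: early aftershock counts of training sequences (same window as the target).
    past_outcomes: what those sequences later produced, i.e. the quantity to forecast."""
    weights = poisson.pmf(past_counts, mu=max(target_count, 1e-9))
    weights = weights / weights.sum()
    order = np.argsort(past_outcomes)
    cdf = np.cumsum(weights[order])
    return [past_outcomes[order][min(np.searchsorted(cdf, q), len(cdf) - 1)] for q in quantiles]

past_counts = np.array([3, 8, 12, 5, 20, 7, 9])         # early counts of past sequences
past_outcomes = np.array([10, 30, 55, 18, 90, 25, 33])  # their subsequent counts
print(similarity_forecast(target_count=6, past_counts=past_counts, past_outcomes=past_outcomes))
```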

  5. An Ensemble-Based Forecasting Framework to Optimize Reservoir Releases

    NASA Astrophysics Data System (ADS)

    Ramaswamy, V.; Saleh, F.

    2017-12-01

    The increasing frequency of extreme precipitation events is stressing the need to manage water resources on shorter timescales. Short-term management of water resources becomes proactive when inflow forecasts are available and this information can be effectively used in the control strategy. This work investigates the utility of short-term hydrological ensemble forecasts for operational decision making during extreme weather events. An advanced automated hydrologic prediction framework integrating a regional-scale hydrologic model, GIS datasets and the meteorological ensemble predictions from the European Centre for Medium-Range Weather Forecasts (ECMWF) was coupled to an implicit multi-objective dynamic programming model to optimize releases from a water supply reservoir. The proposed methodology was evaluated by retrospectively forecasting the inflows to the Oradell reservoir in the Hackensack River basin in New Jersey during the extreme hydrologic event, Hurricane Irene. Additionally, the flexibility of the forecasting framework was investigated by forecasting the inflows from a moderate rainfall event to provide important perspectives on using the framework to assist reservoir operations during moderate events. The proposed forecasting framework seeks to provide a flexible, assistive tool to alleviate the complexity of operational decision-making.

  6. A Hybrid Neural Network Model for Sales Forecasting Based on ARIMA and Search Popularity of Article Titles.

    PubMed

    Omar, Hani; Hoang, Van Hai; Liu, Duen-Ren

    2016-01-01

    Enhancing sales and operations planning through forecasting analysis and business intelligence is demanded in many industries and enterprises. Publishing industries usually pick attractive titles and headlines for their stories to increase sales, since popular article titles and headlines can attract readers to buy magazines. In this paper, information retrieval techniques are adopted to extract words from article titles. The popularity measures of article titles are then analyzed by using the search indexes obtained from Google search engine. Backpropagation Neural Networks (BPNNs) have successfully been used to develop prediction models for sales forecasting. In this study, we propose a novel hybrid neural network model for sales forecasting based on the prediction result of time series forecasting and the popularity of article titles. The proposed model uses the historical sales data, popularity of article titles, and the prediction result of a time series, Autoregressive Integrated Moving Average (ARIMA) forecasting method to learn a BPNN-based forecasting model. Our proposed forecasting model is experimentally evaluated by comparing with conventional sales prediction techniques. The experimental result shows that our proposed forecasting method outperforms conventional techniques which do not consider the popularity of title words.
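
    A rough sketch of the hybrid workflow (an ARIMA prediction plus a popularity index feeding a small neural network) is given below; the statsmodels/scikit-learn calls, network size and synthetic data are illustrative assumptions rather than the authors' configuration.

```python
# Illustrative sketch of the hybrid idea, not the authors' code: an ARIMA point
# forecast and a title-popularity index feed a small neural network that learns
# the final sales forecast. Data, orders and network size are made up.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)
sales = 200 + np.cumsum(rng.normal(0, 5, 60))        # synthetic monthly sales
popularity = rng.uniform(0, 100, 60)                 # synthetic search-popularity index

arima_pred = ARIMA(sales, order=(1, 1, 1)).fit().predict(start=1, end=len(sales) - 1)

X = np.column_stack([arima_pred, popularity[1:]])    # features: ARIMA prediction + popularity
y = sales[1:]                                        # target: actual sales
nn = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0).fit(X[:-12], y[:-12])
print("held-out RMSE:", np.sqrt(np.mean((nn.predict(X[-12:]) - y[-12:]) ** 2)))
```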

  7. A Hybrid Neural Network Model for Sales Forecasting Based on ARIMA and Search Popularity of Article Titles

    PubMed Central

    Omar, Hani; Hoang, Van Hai; Liu, Duen-Ren

    2016-01-01

    Enhancing sales and operations planning through forecasting analysis and business intelligence is demanded in many industries and enterprises. Publishing industries usually pick attractive titles and headlines for their stories to increase sales, since popular article titles and headlines can attract readers to buy magazines. In this paper, information retrieval techniques are adopted to extract words from article titles. The popularity measures of article titles are then analyzed by using the search indexes obtained from Google search engine. Backpropagation Neural Networks (BPNNs) have successfully been used to develop prediction models for sales forecasting. In this study, we propose a novel hybrid neural network model for sales forecasting based on the prediction result of time series forecasting and the popularity of article titles. The proposed model uses the historical sales data, popularity of article titles, and the prediction result of a time series, Autoregressive Integrated Moving Average (ARIMA) forecasting method to learn a BPNN-based forecasting model. Our proposed forecasting model is experimentally evaluated by comparing with conventional sales prediction techniques. The experimental result shows that our proposed forecasting method outperforms conventional techniques which do not consider the popularity of title words. PMID:27313605

  8. Statistical evaluation of forecasts

    NASA Astrophysics Data System (ADS)

    Mader, Malenka; Mader, Wolfgang; Gluckman, Bruce J.; Timmer, Jens; Schelter, Björn

    2014-08-01

    Reliable forecasts of extreme but rare events, such as earthquakes, financial crashes, and epileptic seizures, would render interventions and precautions possible. Therefore, forecasting methods have been developed which intend to raise an alarm if an extreme event is about to occur. In order to statistically validate the performance of a prediction system, it must be compared to the performance of a random predictor, which raises alarms independent of the events. Such a random predictor can be obtained by bootstrapping or analytically. We propose an analytic statistical framework which, in contrast to conventional methods, allows for validating independently the sensitivity and specificity of a forecasting method. Moreover, our method accounts for the periods during which an event has to remain absent or occur after a respective forecast.
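
    One simple way to realize the comparison with a random predictor, here by Monte Carlo permutation rather than the analytic framework of the paper, is sketched below with toy alarm and event series.

```python
# Hedged sketch of comparing a forecaster against a random predictor by Monte
# Carlo permutation (the paper derives this analytically and also accounts for
# alarm durations, which this toy example ignores).
import numpy as np

rng = np.random.default_rng(2)
events = rng.random(5000) < 0.01                     # rare extreme events
alarms = events & (rng.random(5000) < 0.6)           # toy forecaster catching ~60% of events
alarms |= rng.random(5000) < 0.02                    # plus unrelated false alarms

def sensitivity(alarms, events):
    return (alarms & events).sum() / events.sum()

observed = sensitivity(alarms, events)
# null distribution: alarms shuffled so they are independent of the events
null = np.array([sensitivity(rng.permutation(alarms), events) for _ in range(2000)])
print(f"sensitivity={observed:.2f}, p-value vs random predictor={(null >= observed).mean():.4f}")
```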

  9. An Objective Verification of the North American Mesoscale Model for Kennedy Space Center and Cape Canaveral Air Force Station

    NASA Technical Reports Server (NTRS)

    Bauman, William H., III

    2010-01-01

    The 45th Weather Squadron (45 WS) Launch Weather Officers (LWOs) use the 12-km resolution North American Mesoscale (NAM) model (MesoNAM) text and graphical product forecasts extensively to support launch weather operations. However, the actual performance of the model at Kennedy Space Center (KSC) and Cape Canaveral Air Force Station (CCAFS) has not been measured objectively. In order to have tangible evidence of model performance, the 45 WS tasked the Applied Meteorology Unit (AMU; Bauman et al., 2004) to conduct a detailed statistical analysis of model output compared to observed values. The model products are provided to the 45 WS by ACTA, Inc. and include hourly forecasts from 0 to 84 hours based on model initialization times of 00, 06, 12 and 18 UTC. The objective analysis compared the MesoNAM forecast winds, temperature (T) and dew point (Td), as well as the changes in these parameters over time, to the observed values from the sensors in the KSC/CCAFS wind tower network shown in Table 1. These objective statistics give the forecasters knowledge of the model's strengths and weaknesses, which will result in improved forecasts for operations.

  10. Forecasting Epidemics Through Nonparametric Estimation of Time-Dependent Transmission Rates Using the SEIR Model.

    PubMed

    Smirnova, Alexandra; deCamp, Linda; Chowell, Gerardo

    2017-05-02

    Deterministic and stochastic methods relying on early case incidence data for forecasting epidemic outbreaks have received increasing attention during the last few years. In mathematical terms, epidemic forecasting is an ill-posed problem due to instability of parameter identification and limited available data. While previous studies have largely estimated the time-dependent transmission rate by assuming specific functional forms (e.g., exponential decay) that depend on a few parameters, here we introduce a novel approach for the reconstruction of nonparametric time-dependent transmission rates by projecting onto a finite subspace spanned by Legendre polynomials. This approach enables us to effectively forecast future incidence cases, a clear advantage over recovering the transmission rate at finitely many grid points within the interval where the data are currently available. In our approach, we compare three regularization algorithms: variational (Tikhonov's) regularization, truncated singular value decomposition (TSVD), and modified TSVD in order to determine the stabilizing strategy that is most effective in terms of reliability of forecasting from limited data. We illustrate our methodology using simulated data as well as case incidence data for various epidemics including the 1918 influenza pandemic in San Francisco and the 2014-2015 Ebola epidemic in West Africa.
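
    The stabilization idea, representing an unknown time-dependent rate in a small Legendre basis and solving with a truncated SVD, can be illustrated outside the full SEIR inversion; the sketch below fits a noisy synthetic rate curve and is only meant to show the TSVD mechanics, with all function shapes and truncation levels being assumptions.

```python
# Illustration of the stabilization idea only (not the full SEIR inversion): a
# noisy time-dependent rate is expressed in a small Legendre basis and the
# coefficients are recovered with a truncated-SVD least-squares solve.
import numpy as np
from numpy.polynomial import legendre

t = np.linspace(-1, 1, 200)                          # rescaled observation interval
true_beta = 0.6 * np.exp(-1.5 * (t + 1)) + 0.2       # assumed "true" transmission rate
data = true_beta + np.random.default_rng(3).normal(0, 0.02, t.size)

A = legendre.legvander(t, deg=6)                     # design matrix of Legendre polynomials

def tsvd_solve(A, b, k):
    """Least squares keeping only the k largest singular values (the regularizer)."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return Vt[:k].T @ ((U[:, :k].T @ b) / s[:k])

beta_hat = legendre.legval(t, tsvd_solve(A, data, k=4))
print("max abs error:", np.abs(beta_hat - true_beta).max())
```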

  11. Measuring and forecasting great tsunamis by GNSS-based vertical positioning of multiple ships

    NASA Astrophysics Data System (ADS)

    Inazu, D.; Waseda, T.; Hibiya, T.; Ohta, Y.

    2016-12-01

    Vertical ship positioning by the Global Navigation Satellite System (GNSS) was investigated for measuring and forecasting great tsunamis. We first examined existing GNSS vertical position data of a navigating vessel. The result indicated that by using the kinematic Precise Point Positioning (PPP) method, tsunamis greater than 10^-1 m can be detected from the vertical position of the ship. Based on Automatic Identification System (AIS) data, tens of cargo ships and tankers are regularly identified navigating over the Nankai Trough, southwest of Japan. We then assumed that a future Nankai Trough great earthquake tsunami will be observed by ships at locations based on AIS data. The tsunami forecast capability by these virtual offshore tsunami measurements was examined. A conventional Green's function based inversion was used to determine the initial tsunami height distribution. Tsunami forecast tests over the Nankai Trough were carried out using simulated tsunami data of the vertical positions of multiple cargo ships/tankers on a certain day, and of the currently operating observations by deep-sea pressure gauges and Global Positioning System (GPS) buoys. The forecast capability of ship-based tsunami height measurements alone was shown to be comparable to or better than that using the existing offshore observations.

  12. Snowmelt runoff modeling in simulation and forecasting modes with the Martinec-Rango model

    NASA Technical Reports Server (NTRS)

    Shafer, B.; Jones, E. B.; Frick, D. M. (Principal Investigator)

    1982-01-01

    The Martinec-Rango snowmelt runoff model was applied to two watersheds in the Rio Grande basin, Colorado: the South Fork Rio Grande, a drainage encompassing 216 sq mi without reservoirs or diversions, and the Rio Grande above Del Norte, a drainage encompassing 1,320 sq mi without major reservoirs. The model was successfully applied to both watersheds when run in a simulation mode for the period 1973-79. This period included both high and low runoff seasons. Central to the adaptation of the model to run in a forecast mode was the need to develop a technique to forecast the shape of the snow cover depletion curves between satellite data points. Four separate approaches were investigated: simple linear estimation, multiple regression, parabolic exponential, and type curve. Only the parabolic exponential and type curve methods were run on the South Fork and Rio Grande watersheds for the 1980 runoff season using satellite snow cover updates when available. Although reasonable forecasts were obtained in certain situations, neither method seemed ready for truly operational forecasts, possibly due to a large amount of estimated climatic data for one or two primary base stations during the 1980 season.

  13. Evaluation of the fast orthogonal search method for forecasting chloride levels in the Deltona groundwater supply (Florida, USA)

    NASA Astrophysics Data System (ADS)

    El-Jaat, Majda; Hulley, Michael; Tétreault, Michel

    2018-02-01

    Despite the broad impact and importance of saltwater intrusion in coastal aquifers, little research has been directed towards forecasting saltwater intrusion in areas where the source of saltwater is uncertain. Saline contamination in inland groundwater supplies is a concern for numerous communities in the southern US including the city of Deltona, Florida. Furthermore, conventional numerical tools for forecasting saltwater contamination are heavily dependent on reliable characterization of the physical characteristics of underlying aquifers, information that is often absent or challenging to obtain. To overcome these limitations, a reliable alternative data-driven model for forecasting salinity in a groundwater supply was developed for Deltona using the fast orthogonal search (FOS) method. FOS was applied on monthly water-demand data and corresponding chloride concentrations at water supply wells. Groundwater salinity measurements from Deltona water supply wells were applied to evaluate the forecasting capability and accuracy of the FOS model. Accurate and reliable groundwater salinity forecasting is necessary to support effective and sustainable coastal-water resource planning and management. The available (27) water supply wells for Deltona were randomly split into three test groups for the purposes of FOS model development and performance assessment. Based on four performance indices (RMSE, RSR, NSEC, and R), the FOS model proved to be a reliable and robust forecaster of groundwater salinity. FOS is relatively inexpensive to apply, is not based on rigorous physical characterization of the water supply aquifer, and yields reliable estimates of groundwater salinity in active water supply wells.
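
    For reference, the four performance indices cited in this record can be computed as in the sketch below; the chloride values shown are placeholders, not the Deltona data.

```python
# Sketch of the four performance indices named in this record (RMSE, RSR,
# NSE(C) and R); the chloride values are placeholders, not the Deltona data.
import numpy as np

def indices(obs, sim):
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    rmse = np.sqrt(np.mean((sim - obs) ** 2))
    rsr = rmse / np.std(obs)                         # RMSE standardized by the obs. std. dev.
    nse = 1 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)
    r = np.corrcoef(obs, sim)[0, 1]                  # Pearson correlation
    return {"RMSE": rmse, "RSR": rsr, "NSE": nse, "R": r}

chloride_obs = [42.0, 45.5, 47.1, 50.3, 52.0, 55.8]  # hypothetical chloride levels (mg/L)
chloride_fos = [41.2, 46.0, 48.0, 49.5, 53.1, 54.9]  # hypothetical FOS model output
print(indices(chloride_obs, chloride_fos))
```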

  14. Ensemble forecasting of short-term system scale irrigation demands using real-time flow data and numerical weather predictions

    NASA Astrophysics Data System (ADS)

    Perera, Kushan C.; Western, Andrew W.; Robertson, David E.; George, Biju; Nawarathna, Bandara

    2016-06-01

    Irrigation demands fluctuate in response to weather variations and a range of irrigation management decisions, which creates challenges for water supply system operators. This paper develops a method for real-time ensemble forecasting of irrigation demand and applies it to irrigation command areas of various sizes for lead times of 1 to 5 days. The ensemble forecasts are based on a deterministic time series model coupled with ensemble representations of the various inputs to that model. Forecast inputs include past flow, precipitation, and potential evapotranspiration. These inputs are variously derived from flow observations from a modernized irrigation delivery system; short-term weather forecasts derived from numerical weather prediction models and observed weather data available from automatic weather stations. The predictive performance for the ensemble spread of irrigation demand was quantified using rank histograms, the mean continuous rank probability score (CRPS), the mean CRPS reliability and the temporal mean of the ensemble root mean squared error (MRMSE). The mean forecast was evaluated using root mean squared error (RMSE), Nash-Sutcliffe model efficiency (NSE) and bias. The NSE values for evaluation periods ranged between 0.96 (1 day lead time, whole study area) and 0.42 (5 days lead time, smallest command area). Rank histograms and comparison of MRMSE, mean CRPS, mean CRPS reliability and RMSE indicated that the ensemble spread is generally a reliable representation of the forecast uncertainty for short lead times but underestimates the uncertainty for long lead times.
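
    The empirical CRPS used in this kind of ensemble verification can be computed directly from the ensemble members, as in the sketch below (made-up demand values; the rank-histogram and reliability diagnostics are not reproduced).

```python
# Sketch of the empirical CRPS for one ensemble forecast, one of the scores
# listed above; the ensemble members and observation are made up.
import numpy as np

def ensemble_crps(ensemble, obs):
    """CRPS = E|X - y| - 0.5 E|X - X'| over ensemble members X and observation y."""
    x = np.asarray(ensemble, float)
    return np.mean(np.abs(x - obs)) - 0.5 * np.mean(np.abs(x[:, None] - x[None, :]))

demand_ensemble = [118.0, 124.0, 121.5, 130.2, 126.8]   # hypothetical demand members
print("CRPS:", ensemble_crps(demand_ensemble, obs=123.0))
```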

  15. Chemical weather forecasting for the Yangtze River Delta

    NASA Astrophysics Data System (ADS)

    Xie, Y.; Xu, J.; Zhou, G.; Chang, L.; Chen, B.

    2016-12-01

    Shanghai is one of the largest megacities in the world. With the rapid economic growth of the city and its surrounding areas in recent years, air pollution has had adverse effects on public health and ecosystems. In winter, heavy pollution episodes are often associated with PM exceedances under stagnant conditions or transport events, whereas in summer the region frequently experiences elevated O3 levels. Chemical weather prediction systems based on the WRF-Chem and CMAQ models are being developed to support air quality and haze forecasting for Shanghai and the Yangtze River Delta region. We will present the main components of the modeling system, forecasting products, as well as evaluation results. Evaluation of the WRF-Chem forecasts shows that the model has generally good ability to capture the temporal variations of O3 and PM2.5. Substantial regional differences exist, with the best performance in Shanghai. Meanwhile, the forecasts tend to degrade during highly polluted episodes and transitional time periods, which highlights the need to improve the model representation of key processes (e.g. meteorological fields and the formation of secondary pollutants). Recent work includes using the ECMWF global model forecasts as chemical boundary conditions for our regional model. We investigate the impact of chemical downscaling, and also compare the results from the different models participating in the PANDA (PArtnership with chiNa on space Data) project. Results from ongoing efforts (e.g. chemical weather forecasting driven by the SMS regional high-resolution NWP) will also be presented.

  16. Can we use Earth Observations to improve monthly water level forecasts?

    NASA Astrophysics Data System (ADS)

    Slater, L. J.; Villarini, G.

    2017-12-01

    Dynamical-statistical hydrologic forecasting approaches benefit from different strengths in comparison with traditional hydrologic forecasting systems: they are computationally efficient, can integrate and `learn' from a broad selection of input data (e.g., General Circulation Model (GCM) forecasts, Earth Observation time series, teleconnection patterns), and can take advantage of recent progress in machine learning (e.g. multi-model blending, post-processing and ensembling techniques). Recent efforts to develop a dynamical-statistical ensemble approach for forecasting seasonal streamflow using both GCM forecasts and changing land cover have shown promising results over the U.S. Midwest. Here, we use climate forecasts from several GCMs of the North American Multi Model Ensemble (NMME) alongside 15-minute stage time series from the National River Flow Archive (NRFA) and land cover classes extracted from the European Space Agency's Climate Change Initiative 300 m annual Global Land Cover time series. With these data, we conduct systematic long-range probabilistic forecasting of monthly water levels in UK catchments over timescales ranging from one to twelve months ahead. We evaluate the improvement in model fit and model forecasting skill that comes from using land cover classes as predictors in the models. This work opens up new possibilities for combining Earth Observation time series with GCM forecasts to predict a variety of hazards from space using data science techniques.

  17. An improved car-following model from the perspective of driver’s forecast behavior

    NASA Astrophysics Data System (ADS)

    Liu, Da-Wei; Shi, Zhong-Ke; Ai, Wen-Huan

    In this paper, a new car-following model considering the effect of the driver's forecast behavior is proposed based on the full velocity difference model (FVDM). Using the new model, we investigate the starting process of vehicle motion at a traffic signal and find that the delay time of vehicle motion is reduced. The stability condition of the new model is then derived, and the modified Korteweg-de Vries (mKdV) equation is constructed to describe traffic behavior near the critical point. Numerical simulations are consistent with the theoretical analysis in terms of features such as density waves and hysteresis loops, which shows that the new model is reasonable. The results show that considering the effect of the driver's forecast behavior helps to enhance the stability of traffic flow.
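
    For context, the baseline FVDM that the new model extends can be integrated numerically as sketched below; the tanh optimal-velocity function, parameter values and ring-road setup are common illustrative choices rather than the paper's settings, and the driver's-forecast term is not included.

```python
# Sketch of the baseline full velocity difference model (FVDM) that the new
# model extends; the tanh optimal-velocity function, parameter values and
# ring-road setup are common illustrative choices, not the paper's settings,
# and the driver's-forecast term is omitted.
import numpy as np

def optimal_velocity(headway, v_max=2.0, hc=4.0):
    return v_max / 2.0 * (np.tanh(headway - hc) + np.tanh(hc))

def fvdm_step(x, v, dt=0.1, kappa=1.0, lam=0.5, road_length=400.0):
    """One explicit Euler step for all cars on a ring road."""
    headway = np.roll(x, -1) - x
    headway[-1] += road_length                       # the leader of the last car is car 0
    dv = np.roll(v, -1) - v                          # velocity difference to the leader
    acc = kappa * (optimal_velocity(headway) - v) + lam * dv
    return x + v * dt, v + acc * dt

n = 50
x = np.linspace(0.0, 400.0, n, endpoint=False)
v = optimal_velocity(np.full(n, 400.0 / n)) + np.random.default_rng(4).normal(0.0, 0.05, n)
for _ in range(2000):
    x, v = fvdm_step(x, v)
print("velocity spread after 200 s:", float(v.max() - v.min()))
```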

  18. Road icing forecasting and detecting system

    NASA Astrophysics Data System (ADS)

    Xu, Hongke; Zheng, Jinnan; Li, Peiqi; Wang, Qiucai

    2017-05-01

    Considering that manual observation of road icing conditions has low accuracy and poor real-time performance, and that icing is difficult to forecast, this paper presents an innovative system for icing forecasting on dangerous highway sections based on the main factors influencing road icing and on the electrical characteristics of the pavement ice layer. The system uses road-surface water salinity measurements and pavement temperature measurements to calculate the freezing point of the water and the temperature trend, and then predicts the time at which road icing will occur; capacitance measurements are used to determine whether the road surface is already frozen. The paper describes a control system built around a single-chip microcomputer and the operational workflow of the system.
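
    The forecasting step can be illustrated with a short sketch that extrapolates the pavement-temperature trend to a salinity-derived freezing point; the linear-trend assumption and all numbers below are illustrative, and the capacitance-based ice detection is not modeled.

```python
# Hedged sketch of the forecasting step: extrapolate the pavement-temperature
# trend to the salinity-derived freezing point. The freezing point is taken as
# an input, and the capacitance-based ice detection is not modeled.
import numpy as np

def predict_icing_time(minutes, pavement_temp, freezing_point):
    """Fit a linear trend to recent pavement temperatures and return the number of
    minutes from now until the trend crosses the freezing point (None if warming)."""
    slope, intercept = np.polyfit(minutes, pavement_temp, 1)
    if slope >= 0:
        return None
    t_cross = (freezing_point - intercept) / slope
    return max(t_cross - minutes[-1], 0.0)

minutes = np.arange(0, 60, 10)                              # last hour of readings
pavement_temp = np.array([1.8, 1.4, 1.1, 0.7, 0.4, 0.1])    # degC, made-up cooling trend
print("minutes until icing:", predict_icing_time(minutes, pavement_temp, freezing_point=-0.6))
```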

  19. Results from the second year of a collaborative effort to forecast influenza seasons in the United States.

    PubMed

    Biggerstaff, Matthew; Johansson, Michael; Alper, David; Brooks, Logan C; Chakraborty, Prithwish; Farrow, David C; Hyun, Sangwon; Kandula, Sasikiran; McGowan, Craig; Ramakrishnan, Naren; Rosenfeld, Roni; Shaman, Jeffrey; Tibshirani, Rob; Tibshirani, Ryan J; Vespignani, Alessandro; Yang, Wan; Zhang, Qian; Reed, Carrie

    2018-02-24

    Accurate forecasts could enable more informed public health decisions. Since 2013, CDC has worked with external researchers to improve influenza forecasts by coordinating seasonal challenges for the United States and the 10 Health and Human Services (HHS) regions. Forecasted targets for the 2014-15 challenge were the onset week, peak week, and peak intensity of the season and the weekly percent of outpatient visits due to influenza-like illness (ILI) 1-4 weeks in advance. We used a logarithmic scoring rule to score the weekly forecasts, averaged the scores over an evaluation period, and then exponentiated the resulting logarithmic score. Poor forecasts had a score near 0, and perfect forecasts a score of 1. Five teams submitted forecasts from seven different models. At the national level, the team scores for onset week ranged from <0.01 to 0.41, peak week ranged from 0.08 to 0.49, and peak intensity ranged from <0.01 to 0.17. The scores for predictions of ILI 1-4 weeks in advance ranged from 0.02 to 0.38 and were highest 1 week ahead. Forecast skill varied by HHS region. Forecasts can predict epidemic characteristics that inform public health actions. CDC, state and local health officials, and researchers are working together to improve forecasts. Published by Elsevier B.V.
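
    The scoring rule described above reduces to exponentiating the mean log probability assigned to the observed outcomes, as in the sketch below with hypothetical weekly probabilities.

```python
# Sketch of the score described above: log of the probability assigned to the
# observed outcome each week, averaged over the evaluation period, then
# exponentiated (near 1 = good, near 0 = poor). Probabilities are hypothetical.
import numpy as np

def forecast_score(assigned_probs, floor=1e-10):
    """assigned_probs: probability each weekly forecast gave to the observed bin."""
    return float(np.exp(np.log(np.clip(assigned_probs, floor, 1.0)).mean()))

weekly_probs = [0.35, 0.20, 0.50, 0.10, 0.42]
print("season score:", round(forecast_score(weekly_probs), 3))
```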

  20. A first large-scale flood inundation forecasting model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schumann, Guy J-P; Neal, Jeffrey C.; Voisin, Nathalie

    2013-11-04

    At present, continental- to global-scale flood forecasting focuses on predicting discharge at a point, with little attention to the detail and accuracy of local-scale inundation predictions. Yet inundation is actually the variable of interest, and all flood impacts are inherently local in nature. This paper proposes a first large-scale flood inundation ensemble forecasting model that uses the best available data and modeling approaches in data-scarce areas and at continental scales. The model was built for the Lower Zambezi River in southeast Africa to demonstrate current flood inundation forecasting capabilities in large data-scarce regions. The inundation model domain has a surface area of approximately 170,000 km2. ECMWF meteorological data were used to force the VIC (Variable Infiltration Capacity) macro-scale hydrological model, which simulated and routed daily flows to the input boundary locations of the 2-D hydrodynamic model. Efficient hydrodynamic modeling over large areas still requires model grid resolutions that are typically larger than the width of many river channels that play a key role in flood wave propagation. We therefore employed a novel sub-grid channel scheme to describe the river network in detail whilst at the same time representing the floodplain at an appropriate and efficient scale. The modeling system was first calibrated using water levels on the main channel from the ICESat (Ice, Cloud, and land Elevation Satellite) laser altimeter and then applied to predict the February 2007 Mozambique floods. Model evaluation showed that simulated flood edge cells were within a distance of about 1 km (one model resolution) of the observed flood edge of the event. Our study highlights that physically plausible parameter values and satisfactory performance can be achieved at spatial scales ranging from tens to several hundreds of thousands of km2 and at model grid resolutions up to several km2. However, initial model test runs in forecast mode revealed that it is crucial to account for basin-wide hydrological response time when assessing lead-time performance, notwithstanding structural limitations in the hydrological model and possibly large inaccuracies in precipitation data.

  1. CSEP-Japan: The Japanese node of the collaboratory for the study of earthquake predictability

    NASA Astrophysics Data System (ADS)

    Yokoi, S.; Tsuruoka, H.; Nanjo, K.; Hirata, N.

    2011-12-01

    The Collaboratory for the Study of Earthquake Predictability (CSEP) is a global project on earthquake predictability research. The final goal of this project is to investigate the intrinsic predictability of the earthquake rupture process through forecast testing experiments. The Earthquake Research Institute of the University of Tokyo joined the CSEP and started the Japanese testing center, called CSEP-Japan. This testing center provides open access for researchers contributing earthquake forecast models applied to Japan. A total of 91 earthquake forecast models were submitted to the prospective experiment that started on 1 November 2009. The models are separated into 4 testing classes (1 day, 3 months, 1 year and 3 years) and 3 testing regions covering an area of Japan including the sea area, the Japanese mainland and the Kanto district. We evaluate the performance of the models using the official suite of tests defined by the CSEP. The experiments of the 1-day, 3-month, 1-year and 3-year forecasting classes have been run for 92 rounds, 4 rounds, 1 round and 0 rounds (now in progress), respectively. The results of the 3-month class gave us new knowledge concerning statistical forecasting models. All models showed good performance in magnitude forecasting. On the other hand, the observations are hardly consistent in spatial distribution with most models in some cases where many earthquakes occurred at the same spot. Throughout the experiment, it has become clear that some of the CSEP evaluation tests, such as the L-test, show a strong correlation with the N-test. We are now developing our own (cyber-)infrastructure to support the forecast experiment as follows. (1) Japanese seismicity has changed since the 2011 Tohoku earthquake. The 3rd call for forecasting models was announced in order to promote model improvement for forecasting earthquakes after this event, so we provide the Japanese seismicity catalog maintained by JMA for modelers to study how seismicity has changed in Japan. (2) We are preparing a 3-D forecasting experiment with a depth range of 0 to 100 km in the Kanto region. (3) The testing center improved the evaluation system for the 1-day class experiment, because this testing class requires fast calculation in order to finish forecasting and testing within one day; this development will make a real-time forecasting system possible. (4) The first part of the special issue titled Earthquake Forecast Testing Experiment in Japan was published in Earth, Planets and Space, Vol. 63, No. 3, in March 2011. This issue includes papers on the algorithms of the statistical models participating in our experiment and an outline of the experiment in Japan. The second part of this issue, which is now online, will be published soon. In this presentation, we give an overview of CSEP-Japan and the results of the experiments, and discuss the direction of our activity. An outline of the experiment and the activities of the Japanese Testing Center are published on our web site.

  2. Flash-flood early warning using weather radar data: from nowcasting to forecasting

    NASA Astrophysics Data System (ADS)

    Liechti, Katharina; Panziera, Luca; Germann, Urs; Zappa, Massimiliano

    2013-04-01

    In our study we explore the limits of radar-based forecasting for hydrological runoff prediction. Two novel probabilistic radar-based forecasting chains for flash-flood early warning are investigated in three catchments in the Southern Swiss Alps and set in relation to deterministic discharge forecast for the same catchments. The first probabilistic radar-based forecasting chain is driven by NORA (Nowcasting of Orographic Rainfall by means of Analogues), an analogue-based heuristic nowcasting system to predict orographic rainfall for the following eight hours. The second probabilistic forecasting system evaluated is REAL-C2, where the numerical weather prediction COSMO-2 is initialized with 25 different initial conditions derived from a four-day nowcast with the radar ensemble REAL. Additionally, three deterministic forecasting chains were analysed. The performance of these five flash-flood forecasting systems was analysed for 1389 hours between June 2007 and December 2010 for which NORA forecasts were issued, due to the presence of orographic forcing. We found a clear preference for the probabilistic approach. Discharge forecasts perform better when forced by NORA rather than by a persistent radar QPE for lead times up to eight hours and for all discharge thresholds analysed. The best results were, however, obtained with the REAL-C2 forecasting chain, which was also remarkably skilful even with the highest thresholds. However, for regions where REAL cannot be produced, NORA might be an option for forecasting events triggered by orographic forcing.

  3. Flash-flood early warning using weather radar data: from nowcasting to forecasting

    NASA Astrophysics Data System (ADS)

    Liechti, K.; Panziera, L.; Germann, U.; Zappa, M.

    2013-01-01

    This study explores the limits of radar-based forecasting for hydrological runoff prediction. Two novel probabilistic radar-based forecasting chains for flash-flood early warning are investigated in three catchments in the Southern Swiss Alps and set in relation to deterministic discharge forecast for the same catchments. The first probabilistic radar-based forecasting chain is driven by NORA (Nowcasting of Orographic Rainfall by means of Analogues), an analogue-based heuristic nowcasting system to predict orographic rainfall for the following eight hours. The second probabilistic forecasting system evaluated is REAL-C2, where the numerical weather prediction COSMO-2 is initialized with 25 different initial conditions derived from a four-day nowcast with the radar ensemble REAL. Additionally, three deterministic forecasting chains were analysed. The performance of these five flash-flood forecasting systems was analysed for 1389 h between June 2007 and December 2010 for which NORA forecasts were issued, due to the presence of orographic forcing. We found a clear preference for the probabilistic approach. Discharge forecasts perform better when forced by NORA rather than by a persistent radar QPE for lead times up to eight hours and for all discharge thresholds analysed. The best results were, however, obtained with the REAL-C2 forecasting chain, which was also remarkably skilful even with the highest thresholds. However, for regions where REAL cannot be produced, NORA might be an option for forecasting events triggered by orographic precipitation.

  4. Assessment of seasonal soil moisture forecasts over Southern South America with emphasis on dry and wet events

    NASA Astrophysics Data System (ADS)

    Spennemann, Pablo; Rivera, Juan Antonio; Osman, Marisol; Saulo, Celeste; Penalba, Olga

    2017-04-01

    The importance of forecasting extreme wet and dry conditions from weeks to months in advance relies on the need to prevent considerable socio-economic losses, mainly in regions with large populations and where agriculture is a key value for the economy, like Southern South America (SSA). Therefore, to improve the understanding of the performance and uncertainties of seasonal soil moisture and precipitation forecasts over SSA, this study aims to: 1) perform a general assessment of the Climate Forecast System version-2 (CFSv2) soil moisture and precipitation forecasts; and 2) evaluate the CFSv2 ability to represent an extreme drought event by merging observations with the forecasted Standardized Precipitation Index (SPI) and the Standardized Soil Moisture Anomalies (SSMA) based on GLDAS-2.0 simulations. Results show that both SPI and SSMA forecast skills are regionally and seasonally dependent. In general, a fast degradation of forecast skill is observed as the lead time increases, with no significant metrics for forecast lead times longer than 2 months. Based on the assessment of the 2008-2009 extreme drought event, it is evident that the CFSv2 forecasts have limitations regarding the identification of drought onset, duration, severity and demise, considering both meteorological (SPI) and agricultural (SSMA) drought conditions. These results have some implications for the use of seasonal forecasts to assist agricultural practices in SSA, given that forecast skill is still too low to be useful for lead times longer than 2 months.

  5. Implementing drought early warning systems: policy lessons and future needs

    NASA Astrophysics Data System (ADS)

    Iglesias, Ana; Werner, Micha; Maia, Rodrigo; Garrote, Luis; Nyabeze, Washington

    2014-05-01

    Drought forecasting and warning provide the potential to reduce the impacts of drought events on society. The implementation of effective drought forecasting and warning, however, requires not only science to support reliable forecasting, but also adequate policy and societal response. Here we propose a protocol to develop drought forecasting and early warning based on the international cooperation of African and European institutions in the DEWFORA project (EC, 7th Framework Programme). The protocol includes four major phases that address both the scientific knowledge and the social capacity to use that knowledge: (a) What is the available science? Evaluating how signs of impending drought can be detected and predicted, defining risk levels, and analysing the signs of drought in an integrated vulnerability approach. (b) What are the societal capacities? Here the institutional framework that enables policy development is evaluated. The protocol gathers information on vulnerability and the pending hazard in advance so that early warnings can be declared with sufficient lead time and drought mitigation planning can be implemented at an early stage. (c) How can science be translated into policy? Linking science indicators to the actions/interventions that society needs to implement, and evaluating how policy is implemented. Key limitations to planning for drought are the social capacities to implement early warning systems. Vulnerability assessment contributes to identifying these limitations and therefore provides crucial information for policy development. Based on the assessment of vulnerability, we suggest thresholds for management actions in response to drought forecasts and link predictive indicators to relevant potential mitigation strategies. Vulnerability assessment is crucial to identify relief, coping and management responses that contribute to a more resilient society. (d) How can society benefit from the forecast? Evaluating how information is provided to potentially affected groups, and how mitigation strategies can be taken in response. This paper presents an outline of the protocol that was developed in the DEWFORA project, outlining the complementary roles of science, policy and societal uptake in effective drought forecasting and warning. A consensus on the need to emphasise the social component of early warning was reached when testing the DEWFORA early warning system protocol among experts from 18 countries.

  6. Evaluation of Air Force and Navy Demand Forecasting Systems

    DTIC Science & Technology

    1994-01-01

    forecasting approach, the Air Force Material Command is questioning the adoption of the Navy's Statistical Demand Forecasting System (Gitman, 1994). The...Recoverable Item Process in the Requirements Data Bank System is to manage reparable spare parts (Gitman, 1994). Although RDB will have the capability of...D062) (Gitman, 1994). Since a comparison is made to address Air Force concerns, this research only limits its analysis to the range of Air Force

  7. Operational planning using Climatological Observations for Maritime Prediction and Analysis Support Service (COMPASS)

    NASA Astrophysics Data System (ADS)

    O'Connor, Alison; Kirtman, Benjamin; Harrison, Scott; Gorman, Joe

    2016-05-01

    The US Navy faces several limitations when planning operations in regard to forecasting environmental conditions. Currently, mission analysis and planning tools rely heavily on short-term (less than a week) forecasts or long-term statistical climate products. However, newly available data in the form of weather forecast ensembles provides dynamical and statistical extended-range predictions that can produce more accurate predictions if ensemble members can be combined correctly. Charles River Analytics is designing the Climatological Observations for Maritime Prediction and Analysis Support Service (COMPASS), which performs data fusion over extended-range multi-model ensembles, such as the North American Multi-Model Ensemble (NMME), to produce a unified forecast for several weeks to several seasons in the future. We evaluated thirty years of forecasts using machine learning to select predictions for an all-encompassing and superior forecast that can be used to inform the Navy's decision planning process.

  8. A novel hybrid ensemble learning paradigm for tourism forecasting

    NASA Astrophysics Data System (ADS)

    Shabri, Ani

    2015-02-01

    In this paper, a hybrid forecasting model based on Empirical Mode Decomposition (EMD) and the Group Method of Data Handling (GMDH) is proposed to forecast tourism demand. The methodology first decomposes the original visitor arrival series into several Intrinsic Mode Function (IMF) components and one residual component using the EMD technique. Then, the IMF components and the residual component are forecasted separately using GMDH models whose input variables are selected using the Partial Autocorrelation Function (PACF). The final forecast for the tourism series is produced by aggregating all the component forecasts. To evaluate the performance of the proposed EMD-GMDH methodology, monthly data on tourist arrivals from Singapore to Malaysia are used as an illustrative example. Empirical results show that the proposed EMD-GMDH model outperforms the EMD-ARIMA as well as the GMDH and ARIMA (Autoregressive Integrated Moving Average) models without time series decomposition.
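
    The decompose-forecast-aggregate structure of such hybrids can be sketched as below; note the deliberate substitutions, a moving-average split in place of EMD and an autoregressive model in place of GMDH, since only the workflow, not the paper's components, is being illustrated, and the arrival series is synthetic.

```python
# Structural sketch of the decompose-forecast-aggregate workflow. Deliberate
# substitutions: a moving-average trend/remainder split stands in for EMD and
# an autoregressive model stands in for GMDH; data are synthetic.
import numpy as np
from statsmodels.tsa.ar_model import AutoReg

rng = np.random.default_rng(5)
months = np.arange(96)
arrivals = 1000 + 5 * months + 80 * np.sin(2 * np.pi * months / 12) + rng.normal(0, 20, 96)

# 1) decompose the series into a smooth trend and an oscillatory remainder
trend = np.convolve(arrivals, np.ones(12) / 12, mode="same")
remainder = arrivals - trend

# 2) forecast each component separately (one model per component)
h = 6
component_forecasts = []
for comp in (trend, remainder):
    fit = AutoReg(comp, lags=12).fit()
    component_forecasts.append(fit.predict(start=len(comp), end=len(comp) + h - 1))

# 3) aggregate the component forecasts into the final forecast
print(np.round(sum(component_forecasts), 1))
```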

  9. Monthly mean forecast experiments with the GISS model

    NASA Technical Reports Server (NTRS)

    Spar, J.; Atlas, R. M.; Kuo, E.

    1976-01-01

    The GISS general circulation model was used to compute global monthly mean forecasts for January 1973, 1974, and 1975 from initial conditions on the first day of each month and constant sea surface temperatures. Forecasts were evaluated in terms of global and hemispheric energetics, zonally averaged meridional and vertical profiles, forecast error statistics, and monthly mean synoptic fields. Although it generated a realistic mean meridional structure, the model did not adequately reproduce the observed interannual variations in the large scale monthly mean energetics and zonally averaged circulation. The monthly mean sea level pressure field was not predicted satisfactorily, but annual changes in the Icelandic low were simulated. The impact of temporal sea surface temperature variations on the forecasts was investigated by comparing two parallel forecasts for January 1974, one using climatological ocean temperatures and the other observed daily ocean temperatures. The use of daily updated sea surface temperatures produced no discernible beneficial effect.

  10. Evaluation of the Plant-Craig stochastic convection scheme (v2.0) in the ensemble forecasting system MOGREPS-R (24 km) based on the Unified Model (v7.3)

    NASA Astrophysics Data System (ADS)

    Keane, Richard J.; Plant, Robert S.; Tennant, Warren J.

    2016-05-01

    The Plant-Craig stochastic convection parameterization (version 2.0) is implemented in the Met Office Regional Ensemble Prediction System (MOGREPS-R) and is assessed in comparison with the standard convection scheme, whose only stochastic element comes from random parameter variation. A set of 34 ensemble forecasts, each with 24 members, is considered, over the month of July 2009. Deterministic and probabilistic measures of the precipitation forecasts are assessed. The Plant-Craig parameterization is found to improve probabilistic forecast measures, particularly the results for lower precipitation thresholds. The impact on deterministic forecasts at the grid scale is neutral, although the Plant-Craig scheme does deliver improvements when forecasts are made over larger areas. The improvements found are greater in conditions of relatively weak synoptic forcing, for which convective precipitation is likely to be less predictable.

  11. Forecasting electricity usage using univariate time series models

    NASA Astrophysics Data System (ADS)

    Hock-Eam, Lim; Chee-Yin, Yip

    2014-12-01

    Electricity is an important energy source, and a sufficient supply of electricity is vital to support a country's development and growth. Due to changing socio-economic characteristics, increasing competition and the deregulation of the electricity supply industry, electricity demand forecasting is even more important than before. It is imperative to evaluate and compare the predictive performance of various forecasting methods, as this provides further insight into the weaknesses and strengths of each method. In the literature, there is mixed evidence on the best forecasting methods for electricity demand. This paper compares the predictive performance of univariate time series models for forecasting electricity demand using monthly data on maximum electricity load in Malaysia from January 2003 to December 2013. Results reveal that the Box-Jenkins method produces the best out-of-sample predictive performance. On the other hand, the Holt-Winters exponential smoothing method is a good forecasting method in terms of in-sample predictive performance.
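
    A comparison of the two families of methods mentioned above can be set up in a few lines with statsmodels; the sketch below uses a synthetic monthly series and arbitrary model orders, not the Malaysian load data.

```python
# Sketch of the comparison described above: a Box-Jenkins (seasonal ARIMA)
# model versus Holt-Winters exponential smoothing, scored on a held-out year.
# The series is synthetic and the model orders are arbitrary choices.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA
from statsmodels.tsa.holtwinters import ExponentialSmoothing

rng = np.random.default_rng(6)
months = np.arange(132)
load = 12000 + 30 * months + 800 * np.sin(2 * np.pi * months / 12) + rng.normal(0, 150, 132)
train, test = load[:120], load[120:]

arima_fc = ARIMA(train, order=(1, 1, 1), seasonal_order=(1, 1, 0, 12)).fit().forecast(12)
hw_fc = ExponentialSmoothing(train, trend="add", seasonal="add", seasonal_periods=12).fit().forecast(12)

rmse = lambda fc: np.sqrt(np.mean((fc - test) ** 2))
print(f"ARIMA RMSE: {rmse(arima_fc):.0f}   Holt-Winters RMSE: {rmse(hw_fc):.0f}")
```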

  12. AN OPERATIONAL EVALUATION OF THE ETA - CMAQ AIR QUALITY FORECAST MODEL

    EPA Science Inventory

    The National Oceanic and Atmospheric Administration (NOAA), in partnership with the United States Environmental Protection Agency (EPA), is developing an operational, nationwide Air Quality Forecasting (AQF) system. An experimental phase of this program, which couples NOAA's Et...

  13. Technical Note: Initial assessment of a multi-method approach to spring-flood forecasting in Sweden

    NASA Astrophysics Data System (ADS)

    Olsson, J.; Uvo, C. B.; Foster, K.; Yang, W.

    2016-02-01

    Hydropower is a major energy source in Sweden, and proper reservoir management prior to the spring-flood onset is crucial for optimal production. This requires accurate forecasts of the accumulated discharge in the spring-flood period (i.e. the spring-flood volume, SFV). Today's SFV forecasts are generated using a model-based climatological ensemble approach, where time series of precipitation and temperature from historical years are used to force a calibrated and initialized set-up of the HBV model. In this study, a number of new approaches to spring-flood forecasting that reflect the latest developments with respect to analysis and modelling on seasonal timescales are presented and evaluated. Three main approaches, represented by specific methods, are evaluated in SFV hindcasts for the Swedish river Vindelälven over a 10-year period with lead times between 0 and 4 months. In the first approach, historically analogous years with respect to the climate in the period preceding the spring flood are identified and used to compose a reduced ensemble. In the second, seasonal meteorological ensemble forecasts are used to drive the HBV model over the spring-flood period. In the third approach, statistical relationships between the SFV and the large-scale atmospheric circulation are used to build forecast models. None of the new approaches consistently outperforms the climatological ensemble approach, but for early forecasts improvements of up to 25 % are found. This potential is reasonably well realized in a multi-method system, which over all forecast dates reduced the error in SFV by ~4 %. This improvement is limited but potentially significant for e.g. energy trading.

  14. Impact of Land Surface Initialization Approach on Subseasonal Forecast Skill: a Regional Analysis in the Southern Hemisphere

    NASA Technical Reports Server (NTRS)

    Hirsch, Annette L.; Kala, Jatin; Pitman, Andy J.; Carouge, Claire; Evans, Jason P.; Haverd, Vanessa; Mocko, David

    2014-01-01

    The authors use a sophisticated coupled land-atmosphere modeling system for a Southern Hemisphere subdomain centered over southeastern Australia to evaluate differences in simulation skill from two different land surface initialization approaches. The first approach uses equilibrated land surface states obtained from offline simulations of the land surface model, and the second uses land surface states obtained from reanalyses. The authors find that land surface initialization using prior offline simulations contribute to relative gains in subseasonal forecast skill. In particular, relative gains in forecast skill for temperature of 10%-20% within the first 30 days of the forecast can be attributed to the land surface initialization method using offline states. For precipitation there is no distinct preference for the land surface initialization method, with limited gains in forecast skill irrespective of the lead time. The authors evaluated the asymmetry between maximum and minimum temperatures and found that maximum temperatures had the largest gains in relative forecast skill, exceeding 20% in some regions. These results were statistically significant at the 98% confidence level at up to 60 days into the forecast period. For minimum temperature, using reanalyses to initialize the land surface contributed to relative gains in forecast skill, reaching 40% in parts of the domain that were statistically significant at the 98% confidence level. The contrasting impact of the land surface initialization method between maximum and minimum temperature was associated with different soil moisture coupling mechanisms. Therefore, land surface initialization from prior offline simulations does improve predictability for temperature, particularly maximum temperature, but with less obvious improvements for precipitation and minimum temperature over southeastern Australia.

  15. Electric-Field Instrument With Ac-Biased Corona Point

    NASA Technical Reports Server (NTRS)

    Markson, R.; Anderson, B.; Govaert, J.

    1993-01-01

    Measurements indicative of incipient lightning yield additional information. New instrument gives reliable readings. High-voltage ac bias applied to needle point through high-resistance capacitance network provides corona discharge at all times, enabling more-slowly-varying component of electrostatic potential of needle to come to equilibrium with surrounding air. High resistance of high-voltage coupling makes instrument insensitive to wind. Improved corona-point instrument expected to yield additional information assisting in safety-oriented forecasting of lightning.

  16. Increasing the temporal resolution of direct normal solar irradiance forecasted series

    NASA Astrophysics Data System (ADS)

    Fernández-Peruchena, Carlos M.; Gastón, Martin; Schroedter-Homscheidt, Marion; Marco, Isabel Martínez; Casado-Rubio, José L.; García-Moya, José Antonio

    2017-06-01

    A detailed knowledge of the solar resource is a critical point in the design and control of Concentrating Solar Power (CSP) plants. In particular, accurate forecasting of solar irradiance is essential for the efficient operation of solar thermal power plants, the management of energy markets, and the widespread implementation of this technology. Numerical weather prediction (NWP) models are commonly used for solar radiation forecasting. In the ECMWF deterministic forecasting system, all forecast parameters are commercially available worldwide at 3-hourly intervals. Unfortunately, as Direct Normal solar Irradiance (DNI) exhibits great variability due to the dynamic effects of passing clouds, a 3-h time resolution is insufficient for accurate simulations of CSP plants, whose nonlinear response to DNI is governed by various thermal inertias and complex response characteristics. DNI series at hourly or sub-hourly resolution are normally used for accurate modeling and analysis of transient processes in CSP technologies. In this context, the objective of this study is to propose a methodology for generating synthetic DNI time series at 1-h (or higher) temporal resolution from 3-h DNI series. The methodology is based upon patterns defined with the help of the clear-sky envelope approach together with a forecast of the maximum DNI value, and it has been validated with high-quality measured DNI data.
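
    The general principle behind this kind of downscaling, scaling coarse DNI forecasts against a clear-sky envelope, can be illustrated with a simplified sketch. The version below interpolates a clear-sky index between the 3-hourly forecast values and re-applies an hourly clear-sky curve; it illustrates the envelope idea only, not the authors' pattern-based method, and all numbers are synthetic.

```python
# Sketch of clear-sky-index downscaling: convert 3-hourly DNI forecasts to an
# hourly series by interpolating the index kb = DNI / DNI_clear and re-applying
# a 1-hourly clear-sky DNI curve. Illustrative only; not the paper's method.
import numpy as np

def downscale_dni(hours_3h, dni_3h, hours_1h, dni_clear_1h):
    """Return an hourly DNI estimate from 3-hourly forecast values."""
    dni_clear_3h = np.interp(hours_3h, hours_1h, dni_clear_1h)
    with np.errstate(divide="ignore", invalid="ignore"):
        kb_3h = np.where(dni_clear_3h > 0, dni_3h / dni_clear_3h, 0.0)
    kb_1h = np.interp(hours_1h, hours_3h, kb_3h)          # interpolate the index
    return np.clip(kb_1h, 0.0, 1.0) * dni_clear_1h        # re-apply the envelope

# Synthetic example: a bell-shaped clear-sky day and a cloudy 3-hourly forecast.
hours_1h = np.arange(5, 20)
dni_clear_1h = 900 * np.exp(-0.5 * ((hours_1h - 12.5) / 3.5) ** 2)
hours_3h = np.array([6, 9, 12, 15, 18])
dni_3h = np.array([100.0, 500.0, 700.0, 400.0, 50.0])
print(np.round(downscale_dni(hours_3h, dni_3h, hours_1h, dni_clear_1h)))
```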

  17. An experiment in hurricane track prediction using parallel computing methods

    NASA Technical Reports Server (NTRS)

    Song, Chang G.; Jwo, Jung-Sing; Lakshmivarahan, S.; Dhall, S. K.; Lewis, John M.; Velden, Christopher S.

    1994-01-01

    The barotropic model is used to explore the advantages of parallel processing in deterministic forecasting. We apply this model to the track forecasting of hurricane Elena (1985). In this particular application, solutions to systems of elliptic equations are the essence of the computational mechanics. One set of equations is associated with the decomposition of the wind into irrotational and nondivergent components - this determines the initial nondivergent state. Another set is associated with recovery of the streamfunction from the forecasted vorticity. We demonstrate that direct parallel methods based on accelerated block cyclic reduction (BCR) significantly reduce the computational time required to solve the elliptic equations germane to this decomposition and forecast problem. A 72-h track prediction was made using incremental time steps of 16 min on a network of 3000 grid points nominally separated by 100 km. The prediction took 30 sec on the 8-processor Alliant FX/8 computer. This was a speed-up of 3.7 when compared to the one-processor version. The 72-h prediction of Elena's track was made as the storm moved toward Florida's west coast. Approximately 200 km west of Tampa Bay, Elena executed a dramatic recurvature that ultimately changed its course toward the northwest. Although the barotropic track forecast was unable to capture the hurricane's tight cycloidal looping maneuver, the subsequent northwesterly movement was accurately forecasted as was the location and timing of landfall near Mobile Bay.
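
    The elliptic solve at the heart of this forecast step is the recovery of the streamfunction psi from the forecast vorticity zeta via the Poisson equation laplacian(psi) = zeta. The paper solves such systems with accelerated block cyclic reduction on a bounded domain; the sketch below instead uses a simple FFT solver on a doubly periodic grid purely to illustrate the psi-zeta relationship, with an analytic vortex as a consistency check.

```python
# Sketch: recover the streamfunction psi from vorticity zeta by solving
# laplacian(psi) = zeta on a doubly periodic grid with an FFT. The paper used
# accelerated block cyclic reduction on a bounded domain; this spectral
# version only illustrates the underlying elliptic problem.
import numpy as np

def streamfunction_from_vorticity(zeta, dx, dy):
    ny, nx = zeta.shape
    kx = 2 * np.pi * np.fft.fftfreq(nx, d=dx)
    ky = 2 * np.pi * np.fft.fftfreq(ny, d=dy)
    kxx, kyy = np.meshgrid(kx, ky)
    k2 = kxx**2 + kyy**2
    k2[0, 0] = 1.0                      # avoid division by zero for the mean mode
    psi_hat = -np.fft.fft2(zeta) / k2
    psi_hat[0, 0] = 0.0                 # streamfunction is defined up to a constant
    return np.real(np.fft.ifft2(psi_hat))

# Consistency check with an analytic vortex: psi = sin(x)cos(y) gives zeta = -2 psi.
n, L = 128, 2 * np.pi
x = np.linspace(0, L, n, endpoint=False)
X, Y = np.meshgrid(x, x)
psi_true = np.sin(X) * np.cos(Y)
psi = streamfunction_from_vorticity(-2 * psi_true, L / n, L / n)
print(np.max(np.abs(psi - psi_true)))   # should be near machine precision
```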

  18. A quality assessment of the MARS crop yield forecasting system for the European Union

    NASA Astrophysics Data System (ADS)

    van der Velde, Marijn; Bareuth, Bettina

    2015-04-01

    Timely information on crop production forecasts can become increasingly important as commodity markets become more and more interconnected. Impacts across large crop production areas due to, e.g., extreme weather and pest outbreaks can create ripple effects that may affect food prices and availability elsewhere. The MARS Unit (Monitoring Agricultural ResourceS), DG Joint Research Centre, European Commission, has been providing forecasts of European crop production levels since 1993. The operational crop production forecasting is carried out with the MARS Crop Yield Forecasting System (M-CYFS). The M-CYFS is used to monitor crop growth development, evaluate short-term effects of anomalous meteorological events, and provide monthly forecasts of crop yield at the national and European Union level. The crop production forecasts are published in the so-called MARS bulletins. Forecasting crop yield over large areas in an operational context requires quality benchmarks. Here we present an analysis of the accuracy and skill of past crop yield forecasts of the main crops (e.g. soft wheat, grain maize), throughout the growing season and specifically for the final forecast before harvest. Two simple benchmarks to assess the skill of the forecasts were defined: comparing the forecasts to 1) a forecast equal to the average yield and 2) a forecast using a linear trend established through the crop yield time series. These reveal variability in performance as a function of crop and Member State. In terms of production, yield forecasts covering 67% of EU-28 soft wheat production and 80% of EU-28 maize production were superior to both benchmarks during the 1993-2013 period. In a changing and increasingly variable climate, crop yield forecasts can become increasingly valuable, provided they are used wisely. We end our presentation by discussing research activities that could contribute to this goal.
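
    The two benchmarks used to judge the operational forecasts, an average-yield forecast and a linear-trend forecast, are simple to reproduce. The sketch below compares a hypothetical forecast against both for a synthetic yield series; variable names and numbers are illustrative, not M-CYFS data.

```python
# Sketch of the two benchmarks used to judge an operational yield forecast:
# (1) the average of past yields and (2) an extrapolated linear trend.
# All names and numbers are illustrative.
import numpy as np

def benchmark_errors(years, yields, forecast_year, forecast_value, actual_value):
    """Absolute error of the forecast vs. the two naive benchmarks."""
    years, yields = np.asarray(years, float), np.asarray(yields, float)
    bench_mean = yields.mean()
    slope, intercept = np.polyfit(years, yields, deg=1)
    bench_trend = slope * forecast_year + intercept
    return {
        "forecast": abs(forecast_value - actual_value),
        "mean_benchmark": abs(bench_mean - actual_value),
        "trend_benchmark": abs(bench_trend - actual_value),
    }

# Example: soft wheat yields (t/ha) for 1993-2012, a 2013 forecast of 5.9,
# and an eventual observed yield of 6.0 (all numbers synthetic).
years = np.arange(1993, 2013)
yields = 4.5 + 0.05 * (years - 1993) + np.random.default_rng(2).normal(0, 0.2, 20)
print(benchmark_errors(years, yields, 2013, 5.9, 6.0))
```

    A forecast is judged superior when its error is smaller than both benchmark errors.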

  19. Impact of Ozone Radiative Feedbacks on Global Weather Forecasting

    NASA Astrophysics Data System (ADS)

    Ivanova, I.; de Grandpré, J.; Rochon, Y. J.; Sitwell, M.

    2017-12-01

    A coupled Chemical Data Assimilation system for ozone is being developed at Environment and Climate Change Canada (ECCC) with the goals of improving the forecasting of UV index and the forecasting of air quality with the Global Environmental Multi-scale (GEM) Model for Air quality and Chemistry (MACH). Furthermore, this system provides an opportunity to evaluate the benefit of ozone assimilation for improving weather forecasting with the ECCC Global Deterministic Prediction System (GDPS) for Numerical Weather Prediction (NWP). The present UV index forecasting system uses a statistical approach for evaluating the impact of ozone in clear-sky and cloudy conditions, and the use of real-time ozone analyses and ozone forecasts is highly desirable. Improving air quality forecasting with GEM-MACH further necessitates the development of an integrated dynamical-chemical assimilation system. Upon its completion, real-time ozone analyses and ozone forecasts will also be available for piloting the regional air quality system and for the computation of ozone heating rates, replacing the monthly mean ozone distribution currently used in the GDPS. Experiments with ozone radiative feedbacks were run with the GDPS at 25 km resolution and 84 levels with a lid at 0.1 hPa and were initialized with an ozone analysis that assimilated total column ozone from the OMI, OMPS, and GOME satellite instruments. The results show that the use of prognostic ozone for the computation of the heating/cooling rates has a significant impact on the temperature distribution throughout the stratosphere and upper troposphere. The impact of ozone assimilation is especially significant in the tropopause region, where ozone heating at infrared wavelengths is important and the ozone lifetime is relatively long. The implementation of the ozone radiative feedback in the GDPS requires addressing various issues related to model biases (temperature and humidity) and biases in the equilibrium state (ozone mixing ratio, air temperature, and overhead column ozone) used for the calculation of the linearized photochemical production and loss of ozone. Furthermore, the radiative budget in the tropopause region is strongly affected by water vapor cooling, an impact that requires further evaluation before use in chemically coupled operational NWP systems.

  20. Improving groundwater predictions utilizing seasonal precipitation forecasts from general circulation models forced with sea surface temperature forecasts

    USGS Publications Warehouse

    Almanaseer, Naser; Sankarasubramanian, A.; Bales, Jerad

    2014-01-01

    Recent studies have found a significant association between climatic variability and basin hydroclimatology, particularly groundwater levels, over the southeast United States. The research reported in this paper evaluates the potential for developing 6-month-ahead groundwater-level forecasts based on precipitation forecasts from the ECHAM 4.5 general circulation model forced with sea surface temperature forecasts. Ten groundwater wells and nine streamgauges from the USGS Groundwater Climate Response Network and Hydro-Climatic Data Network were selected to represent groundwater and surface water flows, respectively, having minimal anthropogenic influences within the Flint River Basin in Georgia, United States. The writers employ two low-dimensional models [principal component regression (PCR) and canonical correlation analysis (CCA)] for predicting groundwater and streamflow at both seasonal and monthly timescales. Three modeling schemes are considered at the beginning of January to predict winter (January, February, and March) and spring (April, May, and June) streamflow and groundwater for the selected sites within the Flint River Basin. The first scheme (model 1) is a null model and is developed using PCR for every streamflow and groundwater site using the previous 3-month observations (October, November, and December) available at that particular site as predictors. Modeling schemes 2 and 3 are developed using PCR and CCA, respectively, to evaluate the role of precipitation forecasts in improving monthly and seasonal groundwater predictions. Modeling scheme 3, which employs a CCA approach, is developed for each site by considering observed groundwater levels from nearby sites as predictands. The performance of these three schemes is evaluated using two metrics (correlation coefficient and relative RMS error) by developing groundwater-level forecasts based on leave-five-out cross-validation. Results from the research reported in this paper show that using precipitation forecasts from climate models improves the ability to predict the interannual variability of winter and spring streamflow and groundwater levels over the basin. However, significant conditional bias exists in all three modeling schemes, which indicates the need to consider improved modeling schemes as well as the availability of longer time series of observed hydroclimatic information over the basin.
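
    The PCR-based schemes with leave-five-out cross-validation, evaluated by correlation and relative RMS error, can be sketched in a few lines with scikit-learn. The predictors, record length, and number of retained components below are assumptions for illustration only, not the study's configuration.

```python
# Sketch of a principal-component-regression (PCR) forecast scheme evaluated
# with leave-five-out cross-validation, loosely mirroring modeling schemes 1-2.
# Synthetic data; assumes scikit-learn is available.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import KFold
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(3)
n_years = 30
X = rng.normal(size=(n_years, 9))                          # e.g. gridded precipitation forecasts
y = X[:, :3].sum(axis=1) + rng.normal(0, 0.5, n_years)     # e.g. groundwater-level anomaly

pcr = make_pipeline(PCA(n_components=3), LinearRegression())

# Leave five years out at a time: 6 folds of 5 consecutive years each.
pred = np.full(n_years, np.nan)
for train, test in KFold(n_splits=n_years // 5, shuffle=False).split(X):
    pcr.fit(X[train], y[train])
    pred[test] = pcr.predict(X[test])

corr = np.corrcoef(pred, y)[0, 1]
rel_rmse = np.sqrt(np.mean((pred - y) ** 2)) / np.std(y)
print(f"correlation = {corr:.2f}, relative RMSE = {rel_rmse:.2f}")
```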
