Sample records for forecast verification methods

  1. Verification of operational solar flare forecast: Case of Regional Warning Center Japan

    NASA Astrophysics Data System (ADS)

    Kubo, Yûki; Den, Mitsue; Ishii, Mamoru

    2017-08-01

    In this article, we discuss a verification study of an operational solar flare forecast in the Regional Warning Center (RWC) Japan. The RWC Japan has been issuing four-categorical deterministic solar flare forecasts for a long time. In this forecast verification study, we used solar flare forecast data accumulated over 16 years (from 2000 to 2015). We compiled the forecast data together with solar flare data obtained with the Geostationary Operational Environmental Satellites (GOES). Using the compiled data sets, we estimated some conventional scalar verification measures with 95% confidence intervals. We also estimated a multi-categorical scalar verification measure. These scalar verification measures were compared with those obtained by the persistence method and the recurrence method. As solar activity varied during the 16 years, we also applied verification analyses to four subsets of forecast-observation pair data with different solar activity levels. We cannot definitively conclude that there are significant performance differences between the forecasts of RWC Japan and the persistence method, although a slightly significant difference is found for some event definitions. We propose to use a scalar verification measure to assess the judgment skill of the operational solar flare forecast. Finally, we propose a verification strategy for deterministic operational solar flare forecasting. For dichotomous forecasts, the proposed set of verification measures comprises frequency bias for bias; proportion correct and critical success index for accuracy; probability of detection for discrimination; false alarm ratio for reliability; the Peirce skill score for forecast skill; and the symmetric extremal dependence index for association. For multi-categorical forecasts, we propose the marginal distributions of forecasts and observations for bias; proportion correct for accuracy; the correlation coefficient and joint probability distribution for association; the likelihood distribution for discrimination; the calibration distribution for reliability and resolution; and the Gandin-Murphy-Gerrity score and judgment skill score for skill.
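The dichotomous measure set listed above can all be computed from the four cells of a 2x2 contingency table. The sketch below is illustrative only; the counts, function name, and dictionary keys are my own, not the paper's code.

```python
import math

def dichotomous_scores(a, b, c, d):
    """a = hits, b = false alarms, c = misses, d = correct negatives."""
    n = a + b + c + d
    pod = a / (a + c)    # probability of detection (discrimination)
    pofd = b / (b + d)   # probability of false detection
    # symmetric extremal dependence index (association)
    num = math.log(pofd) - math.log(pod) - math.log(1 - pofd) + math.log(1 - pod)
    den = math.log(pofd) + math.log(pod) + math.log(1 - pofd) + math.log(1 - pod)
    return {
        "bias": (a + b) / (a + c),  # frequency bias
        "PC": (a + d) / n,          # proportion correct (accuracy)
        "CSI": a / (a + b + c),     # critical success index (accuracy)
        "POD": pod,
        "FAR": b / (a + b),         # false alarm ratio (reliability)
        "PSS": pod - pofd,          # Peirce skill score (forecast skill)
        "SEDI": num / den,
    }

# Hypothetical flare/no-flare counts over a verification period
scores = dichotomous_scores(a=30, b=10, c=20, d=940)
```

Note that SEDI is undefined when any count is zero, so rare-event samples may need pooling before it can be evaluated.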

  2. Verification of Weather Running Estimate-Nowcast (WRE-N) Forecasts Using a Spatial-Categorical Method

    DTIC Science & Technology

    2017-07-01

    forecasts and observations on a common grid, which enables the application of a number of different spatial verification methods that reveal various...forecasts of continuous meteorological variables using categorical and object-based methods. White Sands Missile Range (NM): Army Research Laboratory (US... Research version of the Weather Research and Forecasting Model adapted for generating short-range nowcasts and gridded observations produced by the

  3. Verification and intercomparison of mesoscale ensemble prediction systems in the Beijing 2008 Olympics Research and Development Project

    NASA Astrophysics Data System (ADS)

    Kunii, Masaru; Saito, Kazuo; Seko, Hiromu; Hara, Masahiro; Hara, Tabito; Yamaguchi, Munehiko; Gong, Jiandong; Charron, Martin; Du, Jun; Wang, Yong; Chen, Dehui

    2011-05-01

    During the period around the Beijing 2008 Olympic Games, the Beijing 2008 Olympics Research and Development Project (B08RDP) was conducted as part of the World Weather Research Program short-range weather forecasting research project. Mesoscale ensemble prediction (MEP) experiments were carried out by six organizations in near-real time, in order to share their experiences in the development of MEP systems. The purpose of this study is to objectively verify these experiments and to clarify the problems associated with the current MEP systems through these shared experiments. Verification was performed using the MEP outputs interpolated onto a common verification domain with a horizontal resolution of 15 km. For all systems, the ensemble spreads grew as the forecast time increased, and the ensemble mean reduced the forecast errors compared with the individual control forecasts in the verification against the analysis fields. However, each system exhibited individual characteristics according to the MEP method. Some participants used physical perturbation methods, and the significance of these methods was confirmed by the verification. However, the mean error (ME) of the ensemble forecast in some systems was worse than that of the individual control forecast. This result suggests that it is necessary to pay careful attention to physical perturbations.
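The headline result above (the ensemble mean reducing forecast errors relative to a single control forecast) can be reproduced on synthetic data. This is a minimal sketch with invented data and member count, not B08RDP output:

```python
import random

random.seed(0)
truth = [random.gauss(0, 1) for _ in range(500)]

# Ten ensemble members, each = truth + independent unit-variance error;
# take the first member as the "control" forecast.
members = [[t + random.gauss(0, 1) for t in truth] for _ in range(10)]
control = members[0]
ens_mean = [sum(m[i] for m in members) / len(members) for i in range(len(truth))]

def rmse(fc, ob):
    """Root mean square error of a forecast against observations."""
    return (sum((f - o) ** 2 for f, o in zip(fc, ob)) / len(ob)) ** 0.5

rmse_control = rmse(control, truth)
rmse_mean = rmse(ens_mean, truth)   # averaging cancels independent errors
```

With independent member errors, the ensemble-mean RMSE shrinks roughly as 1/sqrt(number of members), which is why the mean beats the control here.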

  4. QPF verification using different radar-based analyses: a case study

    NASA Astrophysics Data System (ADS)

    Moré, J.; Sairouni, A.; Rigo, T.; Bravo, M.; Mercader, J.

    2009-09-01

    Verification of QPF in NWP models has always been challenging, not only for knowing which scores best quantify a particular skill of a model, but also for choosing the most appropriate methodology when comparing forecasts with observations. On the one hand, an objective verification technique can provide conclusions that are not in agreement with those obtained by the "eyeball" method. Consequently, QPF can provide valuable information to forecasters in spite of having poor scores. On the other hand, there are difficulties in knowing the "truth", so different results can be achieved depending on the procedures used to obtain the precipitation analysis. The aim of this study is to show the importance of combining different precipitation analyses and verification methodologies to obtain a better knowledge of the skills of a forecasting system. In particular, a short-range precipitation forecasting system based on MM5 at 12 km coupled with LAPS is studied in a local convective precipitation event that took place in the NE Iberian Peninsula on 3 October 2008. For this purpose, a variety of verification methods (dichotomous, recalibration and object-oriented methods) are used to verify this case study. At the same time, different precipitation analyses, obtained by interpolating radar data using different techniques, are used in the verification process.

  5. A comparative verification of high resolution precipitation forecasts using model output statistics

    NASA Astrophysics Data System (ADS)

    van der Plas, Emiel; Schmeits, Maurice; Hooijman, Nicolien; Kok, Kees

    2017-04-01

    Verification of localized events such as precipitation has become even more challenging with the advent of high-resolution mesoscale numerical weather prediction (NWP). The realism of a forecast suggests that it should compare well against precipitation radar imagery with similar resolution, both spatially and temporally. Spatial verification methods solve some of the representativity issues that point verification gives rise to. In this study a verification strategy based on model output statistics is applied that aims to address both the double-penalty and resolution effects that are inherent to comparisons of NWP models with different resolutions. Using predictors based on spatial precipitation patterns around a set of stations, an extended logistic regression (ELR) equation is deduced, leading to a probability forecast distribution of precipitation for each NWP model, analysis and lead time. The ELR equations are derived for predictands based on areal calibrated radar precipitation and SYNOP observations. The aim is to extract maximum information from a series of precipitation forecasts, as a trained forecaster would. The method is applied to the non-hydrostatic model Harmonie (2.5 km resolution), Hirlam (11 km resolution) and the ECMWF model (16 km resolution), overall yielding similar Brier skill scores for the three post-processed models, but larger differences for individual lead times. In addition, the Fractions Skill Score is computed using the three deterministic forecasts, showing somewhat better skill for the Harmonie model. In other words, despite the realism of Harmonie precipitation forecasts, they perform only similarly to or somewhat better than precipitation forecasts from the two lower-resolution models, at least in the Netherlands.
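The Brier skill score comparison used above can be sketched as follows. The probabilities, event outcomes, and climatological reference below are synthetic stand-ins, not the study's data:

```python
def brier_score(probs, events):
    """Mean squared difference between forecast probability and outcome (0/1)."""
    return sum((p - e) ** 2 for p, e in zip(probs, events)) / len(events)

# Hypothetical rain/no-rain outcomes and post-processed probabilities
events = [1, 0, 0, 1, 1, 0, 0, 0]
p_model = [0.9, 0.2, 0.1, 0.7, 0.8, 0.3, 0.2, 0.1]

clim = sum(events) / len(events)             # climatological base rate
bs_model = brier_score(p_model, events)
bs_clim = brier_score([clim] * len(events), events)
bss = 1.0 - bs_model / bs_clim               # Brier skill score vs climatology
```

A BSS above zero means the post-processed model beats the climatological reference; comparing BSS across models at the same stations gives the kind of ranking reported in the abstract.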

  6. Generalization of information-based concepts in forecast verification

    NASA Astrophysics Data System (ADS)

    Tödter, J.; Ahrens, B.

    2012-04-01

    This work deals with information-theoretical methods in probabilistic forecast verification. Recent findings concerning the Ignorance Score are briefly reviewed, and the generalization to continuous forecasts is then shown. For ensemble forecasts, the presented measures can be calculated exactly. The Brier Score (BS) and its generalizations to the multi-categorical Ranked Probability Score (RPS) and to the Continuous Ranked Probability Score (CRPS) are the most prominent verification measures for probabilistic forecasts. In particular, their decompositions into measures quantifying the reliability, resolution and uncertainty of the forecasts are attractive. Information theory sets up the natural framework for forecast verification. Recently, it has been shown that the BS is a second-order approximation of the information-based Ignorance Score (IGN), which also contains easily interpretable components and can likewise be generalized to a ranked version (RIGN). Here, the IGN, its generalizations and decompositions are systematically discussed in analogy to the variants of the BS. Additionally, a Continuous Ranked IGN (CRIGN) is introduced in analogy to the CRPS. The applicability and usefulness of the conceptually appealing CRIGN is illustrated, together with an algorithm to evaluate its components (reliability, resolution, and uncertainty) for ensemble-generated forecasts. This is also directly applicable to the more traditional CRPS.
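A minimal sketch of the two scores being compared above, for binary-event probability forecasts; the numbers are invented for illustration, and lower is better for both scores:

```python
import math

def ignorance(probs, events):
    """Mean negative log2 probability assigned to the observed outcome."""
    return -sum(math.log2(p if e else 1 - p)
                for p, e in zip(probs, events)) / len(events)

def brier(probs, events):
    """Mean squared difference between probability and outcome (0/1)."""
    return sum((p - e) ** 2 for p, e in zip(probs, events)) / len(events)

events = [1, 0, 1, 0, 0]
sharp = [0.9, 0.1, 0.8, 0.2, 0.1]    # confident, well-calibrated forecasts
hedged = [0.6, 0.4, 0.6, 0.4, 0.4]   # hedged forecasts, same direction

ign_sharp, ign_hedged = ignorance(sharp, events), ignorance(hedged, events)
bs_sharp, bs_hedged = brier(sharp, events), brier(hedged, events)
```

Both scores reward the sharper calibrated forecast here; they diverge most for confident forecasts that turn out wrong, where IGN penalizes much more heavily than BS.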

  7. Verification of different forecasts of Hungarian Meteorological Service

    NASA Astrophysics Data System (ADS)

    Feher, B.

    2009-09-01

    In this paper I present the results of forecasts made by the Hungarian Meteorological Service. I focus on the general short- and medium-range forecasts, which contain cloudiness, precipitation, wind speed and temperature for six regions of Hungary. I also show the results of some special forecasts, such as the precipitation predictions made for the catchment areas of the Danube and Tisza rivers, and the daily mean temperature predictions used by Hungarian energy companies. The product received by the user is made by the general forecaster, but these predictions are based on ALADIN and ECMWF outputs; therefore, both the forecaster's product and the models were verified. A method like this can show us which weather elements are more difficult to forecast or which regions have higher errors. During the verification procedure the basic errors (mean error, mean absolute error) are calculated. Precipitation amount is classified into five categories, and scores such as POD, TS and PC are computed from the contingency table determined by these categories. The procedure runs fully automatically; all the forecasters have to do is print the daily result each morning. Besides the daily results, verification is also performed for longer periods such as a week, month or year. Analyzing the results over longer periods, we can say that the best predictions are made for the first few days, that precipitation forecasts are less good for mountainous areas, and that the forecasters' scores are sometimes better than those of the models. Since forecasters receive the results the next day, this can help them reduce mistakes and learn the weaknesses of the models. This paper contains the verification scores, their trends, the method by which these scores are calculated, and some case studies of poor forecasts.

  8. Flare forecasting at the Met Office Space Weather Operations Centre

    NASA Astrophysics Data System (ADS)

    Murray, S. A.; Bingham, S.; Sharpe, M.; Jackson, D. R.

    2017-04-01

    The Met Office Space Weather Operations Centre produces 24/7/365 space weather guidance, alerts, and forecasts for a wide range of government and commercial end-users across the United Kingdom. Solar flare forecasts are one of its products; these are issued multiple times a day in two forms: forecasts for each active region on the solar disk over the next 24 h, and full-disk forecasts for the next 4 days. Here the forecasting process is described in detail, along with a first verification of archived forecasts using methods commonly used in operational weather prediction. Real-time verification available for operational flare forecasting use is also described. The influence of human forecasters is highlighted, with human-edited forecasts outperforming original model results and forecasting skill decreasing over longer forecast lead times.

  9. National Centers for Environmental Prediction

    Science.gov Websites

  10. National Centers for Environmental Prediction

    Science.gov Websites

  11. New Aspects of Probabilistic Forecast Verification Using Information Theory

    NASA Astrophysics Data System (ADS)

    Tödter, Julian; Ahrens, Bodo

    2013-04-01

    This work deals with information-theoretical methods in probabilistic forecast verification, particularly concerning ensemble forecasts. Recent findings concerning the "Ignorance Score" are briefly reviewed, and a consistent generalization to continuous forecasts is then motivated. For ensemble-generated forecasts, the presented measures can be calculated exactly. The Brier Score (BS) and its generalizations to the multi-categorical Ranked Probability Score (RPS) and to the Continuous Ranked Probability Score (CRPS) are prominent verification measures for probabilistic forecasts. In particular, their decompositions into measures quantifying the reliability, resolution and uncertainty of the forecasts are attractive. Information theory sets up a natural framework for forecast verification. Recently, it has been shown that the BS is a second-order approximation of the information-based Ignorance Score (IGN), which also contains easily interpretable components and can likewise be generalized to a ranked version (RIGN). Here, the IGN, its generalizations and decompositions are systematically discussed in analogy to the variants of the BS. Additionally, a Continuous Ranked IGN (CRIGN) is introduced in analogy to the CRPS. The useful properties of the conceptually appealing CRIGN are illustrated, together with an algorithm to evaluate its components (reliability, resolution, and uncertainty) for ensemble-generated forecasts. This algorithm can also be used to calculate the decomposition of the more traditional CRPS exactly. The applicability of the "new" measures is demonstrated in a small evaluation study of ensemble-based precipitation forecasts.
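The exact ensemble calculation mentioned above can be illustrated for the CRPS via its kernel form, CRPS = E|X - y| - (1/2) E|X - X'|, where X, X' are independent draws from the forecast distribution and y the observation. The ensemble values here are synthetic:

```python
def crps_ensemble(members, obs):
    """Exact CRPS of a finite ensemble via the kernel representation."""
    m = len(members)
    # mean absolute error of members against the observation
    term1 = sum(abs(x - obs) for x in members) / m
    # half the mean absolute difference between all member pairs
    term2 = sum(abs(xi - xj) for xi in members for xj in members) / (2 * m * m)
    return term1 - term2

ens = [1.2, 0.8, 1.5, 1.0, 0.9]   # hypothetical ensemble for one point
crps = crps_ensemble(ens, obs=1.1)
```

The second term rewards ensemble spread, so an over-confident (too narrow) ensemble with the same mean error scores worse than a well-dispersed one.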

  12. Assessing the impact of different satellite retrieval methods on forecast available potential energy

    NASA Technical Reports Server (NTRS)

    Whittaker, Linda M.; Horn, Lyle H.

    1990-01-01

    The effects of the inclusion of satellite temperature retrieval data, and of different satellite retrieval methods, on forecasts made with the NASA Goddard Laboratory for Atmospheres (GLA) fourth-order model were investigated using, as the parameter, the available potential energy (APE) in its isentropic form. Calculations of the APE were used to study the differences in the forecast sets, both globally and in the Northern Hemisphere, during the 72-h forecast period. The analysis data sets used for the forecasts included one containing the NESDIS TIROS-N retrievals, one containing the GLA retrievals using the physical inversion method, and a third, which did not contain satellite data, used as a control; two data sets, with and without satellite data, were used for verification. For all three data sets, the Northern Hemisphere values of the total APE showed an increase throughout the forecast period, mostly due to an increase in the zonal component, in contrast to the verification sets, which showed a steady level of total APE.

  13. Verification of Space Weather Forecasts using Terrestrial Weather Approaches

    NASA Astrophysics Data System (ADS)

    Henley, E.; Murray, S.; Pope, E.; Stephenson, D.; Sharpe, M.; Bingham, S.; Jackson, D.

    2015-12-01

    The Met Office Space Weather Operations Centre (MOSWOC) provides a range of 24/7 operational space weather forecasts, alerts, and warnings, which provide valuable information on space weather that can degrade electricity grids, radio communications, and satellite electronics. Forecasts issued include arrival times of coronal mass ejections (CMEs), and probabilistic forecasts for flares, geomagnetic storm indices, and energetic particle fluxes and fluences. These forecasts are produced twice daily using a combination of output from models such as Enlil, near-real-time observations, and forecaster experience. Verification of forecasts is crucial for users, researchers, and forecasters to understand the strengths and limitations of the forecasts, and to assess the added value provided by forecasters. To this end, the Met Office (in collaboration with Exeter University) has been adapting verification techniques from terrestrial weather, and has been working closely with the International Space Environment Service (ISES) to standardise verification procedures. We will present the results of part of this work, analysing forecast and observed CME arrival times, and assessing skill using 2x2 contingency tables. These MOSWOC forecasts can be objectively compared to those produced by the NASA Community Coordinated Modeling Center, a useful benchmark. This approach cannot be taken for the other forecasts, as they are probabilistic and categorical (e.g., geomagnetic storm forecasts give probabilities of exceeding levels from minor to extreme). We will present appropriate verification techniques being developed to address these forecasts, such as the rank probability skill score, and comparisons of forecasts against climatology and persistence benchmarks. As part of this, we will outline the use of discrete-time Markov chains to assess and improve the performance of our geomagnetic storm forecasts.
    We will also discuss work to adapt a terrestrial verification visualisation system to space weather, to help MOSWOC forecasters view verification results in near real-time; outline plans to objectively assess flare forecasts under the EU Horizon 2020 FLARECAST project; and summarise ISES efforts to achieve consensus on verification.
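The rank(ed) probability skill score mentioned above can be sketched for a categorical storm-level forecast against a climatology benchmark. The categories and probabilities below are invented examples, not MOSWOC values:

```python
def rps(probs, obs_cat):
    """Ranked Probability Score: squared error of cumulative probabilities.

    probs: per-category forecast probabilities (ordered categories);
    obs_cat: index of the observed category.
    """
    rps_val, cum_p, cum_o = 0.0, 0.0, 0.0
    for k, p in enumerate(probs):
        cum_p += p
        cum_o += 1.0 if k == obs_cat else 0.0
        rps_val += (cum_p - cum_o) ** 2
    return rps_val / (len(probs) - 1)   # normalised so 0 = perfect, 1 = worst

forecast = [0.6, 0.3, 0.1]         # quiet / minor storm / major storm
climatology = [0.8, 0.15, 0.05]    # hypothetical climatological frequencies
obs = 1                            # a minor storm occurred

rps_fc = rps(forecast, obs)
rps_clim = rps(climatology, obs)
rpss = 1.0 - rps_fc / rps_clim     # > 0 means skill over climatology
```

Because RPS works on cumulative probabilities, it penalizes a forecast more for missing by two categories than by one, which suits ordered storm scales.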

  14. A New Integrated Threshold Selection Methodology for Spatial Forecast Verification of Extreme Events

    NASA Astrophysics Data System (ADS)

    Kholodovsky, V.

    2017-12-01

    Extreme weather and climate events such as heavy precipitation, heat waves and strong winds can cause extensive damage to society in terms of human lives and financial losses. As the climate changes, it is important to understand how extreme weather events may change as a result. Climate and statistical models are often independently used to model these phenomena. To better assess the performance of climate models, a variety of spatial forecast verification methods have been developed. However, the spatial verification metrics that are widely used for comparing mean states do not, in most cases, have an adequate theoretical justification for benchmarking extreme weather events. We propose a new integrated threshold selection methodology for spatial forecast verification of extreme events that couples existing pattern recognition indices with high threshold choices. This integrated approach has three main steps: 1) dimension reduction; 2) geometric domain mapping; and 3) threshold clustering. We apply this approach to an observed precipitation dataset over the CONUS. The results are evaluated by displaying the threshold distribution seasonally, monthly and annually. The method offers users the flexibility of selecting a high threshold that is linked to desired geometrical properties. The proposed high-threshold methodology could either complement existing spatial verification methods, where threshold selection is arbitrary, or be directly applicable in extreme value theory.

  15. Assessing Hourly Precipitation Forecast Skill with the Fractions Skill Score

    NASA Astrophysics Data System (ADS)

    Zhao, Bin; Zhang, Bo

    2018-02-01

    Statistical methods for categorical (yes/no) forecasts, such as the Threat Score, are typically used in the verification of precipitation forecasts. However, these standard methods are affected by the so-called "double-penalty" problem caused by slight displacements in either space or time with respect to the observations. Spatial techniques have recently been developed to help solve this problem. The fractions skill score (FSS), a neighborhood spatial verification method, directly compares the fractional coverage of events in windows surrounding the observations and forecasts. We applied the FSS to hourly precipitation verification, taking hourly forecast products from the GRAPES (Global/Regional Assimilation and Prediction System) regional model and quantitative precipitation estimation products from the National Meteorological Information Center of China for July and August 2016, and investigated the difference between these results and those obtained with the traditional categorical score. We found that the model spin-up period affected the assessment of stability. Systematic errors played an insignificant role in the fractions Brier score and could be ignored. The dispersion of observations followed a diurnal cycle, and the standard deviation of the forecast had a similar pattern to the reference maximum of the fractions Brier score. The correlation coefficient between the forecasts and the observations behaved similarly to the FSS; that is, the FSS may be a useful index of correlation. Compared with the traditional skill score, the FSS has obvious advantages in distinguishing differences in precipitation time series, especially in the assessment of heavy rainfall.
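A minimal sketch of the FSS idea described above, shown in one dimension for brevity (operational use is on 2-D grids); the data, threshold, and window sizes are synthetic:

```python
def fractions(binary, window):
    """Fraction of exceedance points in a window centred on each point."""
    half = window // 2
    out = []
    for i in range(len(binary)):
        lo, hi = max(0, i - half), min(len(binary), i + half + 1)
        out.append(sum(binary[lo:hi]) / (hi - lo))
    return out

def fss(fc, ob, threshold, window):
    """Fractions skill score: 1 = perfect, 0 = no skill."""
    bf = [1 if v >= threshold else 0 for v in fc]
    bo = [1 if v >= threshold else 0 for v in ob]
    pf, po = fractions(bf, window), fractions(bo, window)
    num = sum((a - b) ** 2 for a, b in zip(pf, po))
    den = sum(a * a for a in pf) + sum(b * b for b in po)
    return 1.0 - num / den if den else 1.0

obs = [0, 0, 5, 9, 2, 0, 0, 0]
fct = [0, 0, 0, 5, 9, 2, 0, 0]   # same rain feature, displaced by one cell
fss_1 = fss(fct, obs, threshold=1, window=1)   # pointwise: double penalty
fss_3 = fss(fct, obs, threshold=1, window=3)   # neighbourhood: displacement forgiven
```

Widening the window credits the displaced feature instead of penalising it twice (once as a miss, once as a false alarm), which is exactly the double-penalty relief the abstract describes.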

  16. International Cooperative for Aerosol Prediction Workshop on Aerosol Forecast Verification

    NASA Technical Reports Server (NTRS)

    Benedetti, Angela; Reid, Jeffrey S.; Colarco, Peter R.

    2011-01-01

    The purpose of this workshop was to reinforce the working partnership between centers who are actively involved in global aerosol forecasting, and to discuss issues related to forecast verification. Participants included representatives from operational centers with global aerosol forecasting requirements, a panel of experts on Numerical Weather Prediction and Air Quality forecast verification, data providers, and several observers from the research community. The presentations centered on a review of current NWP and AQ practices with subsequent discussion focused on the challenges in defining appropriate verification measures for the next generation of aerosol forecast systems.

  18. Using Meteorological Analogues for Reordering Postprocessed Precipitation Ensembles in Hydrological Forecasting

    NASA Astrophysics Data System (ADS)

    Bellier, Joseph; Bontron, Guillaume; Zin, Isabella

    2017-12-01

    Meteorological ensemble forecasts are nowadays widely used as input to hydrological models for probabilistic streamflow forecasting. These forcings are frequently biased and have to be statistically postprocessed, usually with univariate techniques that are applied independently to individual locations, lead times and weather variables. Postprocessed ensemble forecasts therefore need to be reordered so as to reconstruct suitable multivariate dependence structures. The Schaake shuffle and ensemble copula coupling are the two most popular methods for this purpose. This paper proposes two adaptations of them that make use of meteorological analogues for reconstructing the spatiotemporal dependence structures of precipitation forecasts. The performances of the original and adapted techniques are compared through a multistep verification experiment using real forecasts from the European Centre for Medium-Range Weather Forecasts. This experiment evaluates not only the multivariate precipitation forecasts but also the corresponding streamflow forecasts derived from hydrological modeling. Results show that the relative performances of the different reordering methods vary depending on the verification step. In particular, the standard Schaake shuffle is found to perform poorly when evaluated on streamflow. This emphasizes the crucial role of the precipitation spatiotemporal dependence structure in hydrological ensemble forecasting.
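The Schaake shuffle reordering described above can be sketched as follows for a single margin (one location and lead time); real use repeats this across all margins so that rank patterns are consistent in space and time. The historical and postprocessed values are invented:

```python
def schaake_shuffle(postprocessed, historical):
    """Reorder `postprocessed` values so their ranks match `historical`."""
    sorted_pp = sorted(postprocessed)
    # indices of historical values in ascending order (rank 0 = smallest)
    order = sorted(range(len(historical)), key=lambda i: historical[i])
    shuffled = [0.0] * len(postprocessed)
    for rank, i in enumerate(order):
        # the i-th output slot gets the postprocessed value of matching rank
        shuffled[i] = sorted_pp[rank]
    return shuffled

hist = [3.1, 0.4, 2.2, 1.5]        # historical (analogue) trajectory values
post = [10.0, 40.0, 20.0, 30.0]    # postprocessed ensemble, arbitrary order
out = schaake_shuffle(post, hist)  # ranks of `out` now mirror those of `hist`
```

Applying the same historical dates at every margin transfers the analogues' spatiotemporal rank dependence onto the postprocessed ensemble, which is the property the streamflow verification above is sensitive to.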

  19. A quantitative comparison of precipitation forecasts between the storm-scale numerical weather prediction model and auto-nowcast system in Jiangsu, China

    NASA Astrophysics Data System (ADS)

    Wang, Gaili; Yang, Ji; Wang, Dan; Liu, Liping

    2016-11-01

    Extrapolation techniques and storm-scale Numerical Weather Prediction (NWP) models are two primary approaches for short-term precipitation forecasting. The primary objective of this study is to verify precipitation forecasts and compare the performance of two nowcasting schemes: the Beijing Auto-Nowcast system (BJ-ANC), based on extrapolation techniques, and a storm-scale NWP model called the Advanced Regional Prediction System (ARPS). The verification and comparison take into account six heavy precipitation events that occurred in the summers of 2014 and 2015 in Jiangsu, China. The forecast performance of the two schemes was evaluated for the next 6 h at 1-h intervals using the gridpoint-based measures of critical success index, bias, index of agreement and root mean square error, and using an object-based verification method called the Structure-Amplitude-Location (SAL) score. Regarding the gridpoint-based measures, BJ-ANC outperforms ARPS at first, but its forecast accuracy then decreases rapidly with lead time, performing worse than ARPS beyond 4-5 h from the initial forecast. Regarding the object-based verification method, most forecasts produced by BJ-ANC fall near the center of the diagram at the 1-h lead time, indicating high-quality forecasts. As the lead time increases, BJ-ANC overestimates the precipitation amount and produces overly widespread precipitation, especially at the 6-h lead time. The ARPS model overestimates precipitation at all lead times, particularly early ones.

  20. National Centers for Environmental Prediction

    Science.gov Websites

  1. From Tornadoes to Earthquakes: Forecast Verification for Binary Events Applied to the 1999 Chi-Chi, Taiwan, Earthquake

    NASA Astrophysics Data System (ADS)

    Chen, C.; Rundle, J. B.; Holliday, J. R.; Nanjo, K.; Turcotte, D. L.; Li, S.; Tiampo, K. F.

    2005-12-01

    Forecast verification procedures for statistical events with binary outcomes typically rely on the use of contingency tables and Relative Operating Characteristic (ROC) diagrams. Originally developed for the statistical evaluation of tornado forecasts on a county-by-county basis, these methods can be adapted to the evaluation of competing earthquake forecasts. Here we apply these methods retrospectively to two forecasts for the m = 7.3 1999 Chi-Chi, Taiwan, earthquake. These forecasts are based on a method, Pattern Informatics (PI), that locates likely sites for future large earthquakes on the basis of large changes in the activity of the smallest earthquakes. A competing null hypothesis, Relative Intensity (RI), is based on the idea that future large earthquake locations are correlated with sites having the greatest frequency of small earthquakes. We show that for Taiwan, the PI forecast method is superior to the RI forecast null hypothesis. Inspection of the two maps indicates that their forecast locations are indeed quite different. Our results confirm an earlier result suggesting that the earthquake preparation process for events such as the Chi-Chi earthquake involves anomalous changes in activation or quiescence, and that signatures of these processes can be detected in precursory seismicity data. Furthermore, we find that our methods can accurately forecast the locations of aftershocks from precursory seismicity changes alone, implying that the main shock together with its aftershocks represents a single manifestation of the formation of a high-stress region nucleating prior to the main shock.
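The ROC-diagram construction used above can be sketched by sweeping a decision threshold over forecast scores and collecting (POFD, POD) pairs, one per threshold. The scores and event flags below are synthetic, not the PI/RI forecast values:

```python
def roc_points(scores, events, thresholds):
    """For each threshold, build a 2x2 contingency table and return (POFD, POD)."""
    pts = []
    for thr in thresholds:
        hits = sum(1 for s, e in zip(scores, events) if s >= thr and e)
        misses = sum(1 for s, e in zip(scores, events) if s < thr and e)
        fas = sum(1 for s, e in zip(scores, events) if s >= thr and not e)
        cns = sum(1 for s, e in zip(scores, events) if s < thr and not e)
        pod = hits / (hits + misses) if hits + misses else 0.0
        pofd = fas / (fas + cns) if fas + cns else 0.0
        pts.append((pofd, pod))
    return pts

# Hypothetical per-cell forecast scores and observed event occurrence
scores = [0.9, 0.8, 0.7, 0.4, 0.3, 0.2, 0.1, 0.05]
events = [1, 1, 0, 1, 0, 0, 0, 0]
pts = roc_points(scores, events, thresholds=[0.05, 0.25, 0.5, 0.75, 1.1])
```

A forecast with skill traces a curve above the POD = POFD diagonal; comparing the PI and RI curves this way is what ranks the two hypotheses.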

  2. Verification of space weather forecasts at the UK Met Office

    NASA Astrophysics Data System (ADS)

    Bingham, S.; Sharpe, M.; Jackson, D.; Murray, S.

    2017-12-01

    The UK Met Office Space Weather Operations Centre (MOSWOC) has produced space weather guidance twice a day since its official opening in 2014. Guidance includes 4-day probabilistic forecasts of X-ray flares, geomagnetic storms, high-energy electron events and high-energy proton events. Evaluation of such forecasts is important for forecasters, stakeholders, model developers and users to understand the performance of these forecasts, as well as their strengths and weaknesses, and to enable further development. Met Office terrestrial near-real-time verification systems have been adapted to provide verification of X-ray flare and geomagnetic storm forecasts. Verification is updated daily to produce Relative Operating Characteristic (ROC) curves, reliability diagrams and rolling Ranked Probability Skill Scores (RPSSs), thus providing an understanding of forecast performance and skill. Results suggest that the MOSWOC-issued X-ray flare forecasts are usually not statistically significantly better than a benchmark climatological forecast (where the climatology is based on observations from the previous few months). By contrast, the issued geomagnetic storm activity forecast typically performs better against this climatological benchmark.

  3. Spatial Evaluation and Verification of Earthquake Simulators

    NASA Astrophysics Data System (ADS)

    Wilson, John Max; Yoder, Mark R.; Rundle, John B.; Turcotte, Donald L.; Schultz, Kasey W.

    2017-06-01

    In this paper, we address the problem of verifying earthquake simulators with observed data. Earthquake simulators are a class of computational simulations which attempt to mirror the topological complexity of the fault systems on which earthquakes occur. In addition, the physics of friction and elastic interactions between fault elements are included in these simulations. Simulation parameters are adjusted so that natural earthquake sequences are matched in their scaling properties. Physically based earthquake simulators can generate many thousands of years of simulated seismicity, allowing for a robust capture of the statistical properties of large, damaging earthquakes that have long recurrence time scales. Verification of simulations against currently observed earthquake seismicity is necessary, and following past simulator and forecast model verification methods, we address the challenges of applying spatial forecast verification to simulators: namely, that simulator outputs are confined to the modeled faults, while observed earthquake epicenters often occur off of known faults. We present two methods for addressing this discrepancy: a simplistic approach whereby observed earthquakes are shifted to the nearest fault element, and a smoothing method based on the power laws of the epidemic-type aftershock sequence (ETAS) model, which distributes the seismicity of each simulated earthquake over the entire test region at a rate that decays with epicentral distance. To test these methods, a receiver operating characteristic plot was produced by comparing the rate maps to observed m>6.0 earthquakes in California since 1980. We found that the nearest-neighbor mapping produced poor forecasts, while the ETAS power-law method produced rate maps that agreed reasonably well with observations.
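The power-law smoothing can be sketched as a kernel that spreads each simulated event's rate over the whole test region with distance decay. The kernel form, exponent `q`, and scale `d0` below are illustrative stand-ins, not the authors' exact ETAS parameters.

```python
import math

def smoothed_rate_map(simulated_events, grid_cells, q=1.5, d0=1.0):
    """Spread each simulated earthquake's unit rate over the test region
    with a power-law decay in epicentral distance, in the spirit of the
    ETAS spatial kernel (exponent q and scale d0 are illustrative)."""
    rates = [0.0] * len(grid_cells)
    for (ex, ey) in simulated_events:
        # Unnormalized kernel value at each cell centre.
        weights = [(1.0 + math.hypot(cx - ex, cy - ey) / d0) ** (-q)
                   for (cx, cy) in grid_cells]
        total = sum(weights)
        for i, w in enumerate(weights):
            rates[i] += w / total   # each event contributes one unit of rate
    return rates

# Two hypothetical on-fault events smoothed onto a 3-cell test region.
cells = [(0.0, 0.0), (1.0, 0.0), (2.0, 0.0)]
print(smoothed_rate_map([(0.0, 0.0), (2.0, 0.0)], cells))
```

The resulting rate map is nonzero off the modeled faults, which is what allows comparison with off-fault observed epicenters.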

  4. Convective Weather Forecast Accuracy Analysis at Center and Sector Levels

    NASA Technical Reports Server (NTRS)

    Wang, Yao; Sridhar, Banavar

    2010-01-01

    This paper presents a detailed convective forecast accuracy analysis at center and sector levels. The study aims to provide more meaningful forecast verification measures to the aviation community, as well as to obtain useful information leading to improvements in the weather translation capacity models. In general, the vast majority of forecast verification efforts over past decades have involved calculating traditional standard verification measure scores from forecast and observation data analyzed onto grids. These verification measures, based on binary classification, have been applied in quality assurance of weather forecast products at the national level for many years. Our research focuses on forecasts at the center and sector levels. We first calculate the standard forecast verification measure scores for en-route air traffic centers and sectors, and then conduct the forecast validation analysis and related verification measures for weather intensities and locations at the center and sector levels. An approach to improve the prediction of sector weather coverage using multiple sector forecasts is then developed. The severe weather intensity assessment was carried out using the correlations between forecast and actual weather observation airspace coverage. The weather forecast accuracy on horizontal location was assessed by examining the forecast errors. The improvement in prediction of weather coverage was determined by the correlation between actual sector weather coverage and prediction. The analysis uses observed and forecasted Convective Weather Avoidance Model (CWAM) data collected from June to September 2007. CWAM zero-minute forecast data with aircraft avoidance probabilities of 60% and 80% are used as the actual weather observation. All forecast measurements are based on 30-minute, 60-minute, 90-minute, and 120-minute forecasts with the same avoidance probabilities.
    The forecast accuracy analysis for lead times under one hour showed that the errors in intensity and location for center forecasts are relatively low. For example, the 1-hour forecast intensity and horizontal location errors for the ZDC center were about 0.12 and 0.13. However, the correlation between the sector 1-hour forecast and actual weather coverage was weak: for sector ZDC32, about 32% of the total variation of observed weather intensity was unexplained by the forecast; the sector horizontal location error was about 0.10. The paper also introduces an approach to estimate the sector three-dimensional actual weather coverage using multiple sector forecasts, which turned out to produce better predictions. Using a Multiple Linear Regression (MLR) model for this approach, the correlations between actual observation and the multiple-sector-forecast model prediction improved by several percent, at the 95% confidence level, in comparison with the single sector forecast.

  5. Forecast Verification: Identification of small changes in weather forecasting skill

    NASA Astrophysics Data System (ADS)

    Weatherhead, E. C.; Jensen, T. L.

    2017-12-01

    Global and regional weather forecasts have improved over the past seven decades, most often because of small, incremental improvements. The identification and verification of forecast improvement due to proposed small changes in forecasting can be expensive and, if not carried out efficiently, can slow progress in forecasting development. This presentation will look at the skill of commonly used verification techniques and show how the ability to detect improvements can depend on the magnitude of the improvement, the number of runs used to test the improvement, the location on the Earth, and the statistical techniques used. For continuous variables, such as temperature, wind and humidity, the skill of a forecast can be directly compared using a pair-wise statistical test that accommodates the natural autocorrelation and magnitude of variability. For discrete variables, such as tornado outbreaks or icing events, the challenge is to reduce the false alarm rate while improving the rate of correctly identifying the discrete event. For both continuous and discrete verification results, proper statistical approaches can reduce the number of runs needed to identify a small improvement in forecasting skill. Verification within the Next Generation Global Prediction System is an important component of the many small decisions needed to make state-of-the-art improvements to weather forecasting capabilities. The comparison of multiple skill scores with often conflicting results requires not only appropriate testing but also scientific judgment to assure that the choices are appropriate not only for improvements in today's forecasting capabilities, but also allow for improvements that will come in the future.
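The pair-wise test for continuous variables can be sketched with an effective-sample-size correction for lag-1 autocorrelation, one common choice for serially correlated daily verification series (not necessarily the authors' exact procedure). The daily error values below are hypothetical.

```python
import math

def paired_skill_test(errors_a, errors_b):
    """Paired comparison of two forecasts' error series, deflating the
    sample size for lag-1 autocorrelation: n_eff = n * (1 - r) / (1 + r)."""
    d = [a - b for a, b in zip(errors_a, errors_b)]
    n = len(d)
    mean = sum(d) / n
    var = sum((x - mean) ** 2 for x in d) / (n - 1)
    # Lag-1 autocorrelation of the difference series.
    r = sum((d[i] - mean) * (d[i + 1] - mean) for i in range(n - 1)) / ((n - 1) * var)
    n_eff = n * (1 - r) / (1 + r)
    t = mean / math.sqrt(var / n_eff)   # t statistic with the adjusted sample size
    return mean, n_eff, t

mean_diff, n_eff, t = paired_skill_test(
    [1.2, 1.1, 1.3, 1.0, 1.2, 1.4, 1.1, 1.3],   # daily MAE of run A (hypothetical)
    [1.0, 0.9, 1.1, 0.9, 1.0, 1.2, 1.0, 1.1])   # daily MAE of run B (hypothetical)
print(mean_diff, n_eff, t)
```

A large |t| with the reduced sample size indicates a skill difference that is unlikely to be an artifact of serial correlation.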

  6. Influenza forecasting with Google Flu Trends.

    PubMed

    Dugas, Andrea Freyer; Jalalpour, Mehdi; Gel, Yulia; Levin, Scott; Torcaso, Fred; Igusa, Takeru; Rothman, Richard E

    2013-01-01

    We developed a practical influenza forecast model based on real-time, geographically focused, and easy-to-access data, designed to provide individual medical centers with advance warning of the expected number of influenza cases, thus allowing sufficient time to implement interventions. Secondly, we evaluated the effects of incorporating a real-time influenza surveillance system, Google Flu Trends, and meteorological and temporal information on forecast accuracy. Forecast models designed to predict one week in advance were developed from weekly counts of confirmed influenza cases over seven seasons (2004-2011) divided into seven training and out-of-sample verification sets. Forecasting procedures using classical Box-Jenkins, generalized linear model (GLM), and generalized linear autoregressive moving average (GARMA) methods were employed to develop the final model and assess the relative contribution of external variables such as Google Flu Trends, meteorological data, and temporal information. A GARMA(3,0) forecast model with negative binomial distribution integrating Google Flu Trends information provided the most accurate influenza case predictions. The model, on average, predicts weekly influenza cases during 7 out-of-sample outbreaks to within 7 cases for 83% of estimates. Google Flu Trends data was the only source of external information to provide statistically significant forecast improvements over the base model, in four of the seven out-of-sample verification sets. Overall, the p-value of adding this external information to the model is 0.0005. The other exogenous variables did not yield a statistically significant improvement in any of the verification sets. Integer-valued autoregression of influenza cases provides a strong base forecast model, which is enhanced by the addition of Google Flu Trends, confirming the predictive capabilities of search-query-based syndromic surveillance.
This accessible and flexible forecast model can be used by individual medical centers to provide advanced warning of future influenza cases.

  7. Verification of Meteorological and Oceanographic Ensemble Forecasts in the U.S. Navy

    NASA Astrophysics Data System (ADS)

    Klotz, S.; Hansen, J.; Pauley, P.; Sestak, M.; Wittmann, P.; Skupniewicz, C.; Nelson, G.

    2013-12-01

    The Navy Ensemble Forecast Verification System (NEFVS) has been promoted recently to operational status at the U.S. Navy's Fleet Numerical Meteorology and Oceanography Center (FNMOC). NEFVS processes FNMOC and National Centers for Environmental Prediction (NCEP) meteorological and ocean wave ensemble forecasts, gridded forecast analyses, and innovation (observational) data output by FNMOC's data assimilation system. The NEFVS framework consists of statistical analysis routines, a variety of pre- and post-processing scripts to manage data and plot verification metrics, and a master script to control application workflow. NEFVS computes metrics that include forecast bias, mean-squared error, conditional error, conditional rank probability score, and Brier score. The system also generates reliability and Receiver Operating Characteristic diagrams. In this presentation we describe the operational framework of NEFVS and show examples of verification products computed from ensemble forecasts, meteorological observations, and forecast analyses. The construction and deployment of NEFVS addresses important operational and scientific requirements within Navy Meteorology and Oceanography. These include computational capabilities for assessing the reliability and accuracy of meteorological and ocean wave forecasts in an operational environment, for quantifying effects of changes and potential improvements to the Navy's forecast models, and for comparing the skill of forecasts from different forecast systems. NEFVS also supports the Navy's collaboration with the U.S. Air Force, NCEP, and Environment Canada in the North American Ensemble Forecast System (NAEFS) project and with the Air Force and the National Oceanic and Atmospheric Administration (NOAA) in the National Unified Operational Prediction Capability (NUOPC) program. 
This program is tasked with eliminating unnecessary duplication within the three agencies, accelerating the transition of new technology, such as multi-model ensemble forecasting, to U.S. Department of Defense use, and creating a superior U.S. global meteorological and oceanographic prediction capability. Forecast verification is an important component of NAEFS and NUOPC.
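Two of the metrics NEFVS computes, the Brier score and the continuous ranked probability score for an ensemble, have compact empirical forms. This sketch uses the standard textbook definitions, not any Navy-specific code; the numbers are hypothetical.

```python
def brier_score(probs, outcomes):
    """Mean squared error of probability forecasts for a binary event (0/1)."""
    return sum((p - o) ** 2 for p, o in zip(probs, outcomes)) / len(probs)

def crps_ensemble(members, obs):
    """Empirical CRPS for one ensemble forecast:
    E|X - y| - 0.5 * E|X - X'| over ensemble members X, X'."""
    m = len(members)
    term1 = sum(abs(x - obs) for x in members) / m
    term2 = sum(abs(x - y) for x in members for y in members) / (m * m)
    return term1 - 0.5 * term2

# Hypothetical event probabilities vs. outcomes, and a 2-member wave-height ensemble.
print(brier_score([0.9, 0.2, 0.7], [1, 0, 1]))
print(crps_ensemble([1.0, 2.0], obs=1.5))
```

Both scores are negatively oriented (smaller is better), and CRPS reduces to the mean absolute error for a one-member "ensemble".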

  8. Verification of Meteorological and Oceanographic Ensemble Forecasts in the U.S. Navy

    NASA Astrophysics Data System (ADS)

    Klotz, S. P.; Hansen, J.; Pauley, P.; Sestak, M.; Wittmann, P.; Skupniewicz, C.; Nelson, G.

    2012-12-01

    The Navy Ensemble Forecast Verification System (NEFVS) has been promoted recently to operational status at the U.S. Navy's Fleet Numerical Meteorology and Oceanography Center (FNMOC). NEFVS processes FNMOC and National Centers for Environmental Prediction (NCEP) meteorological and ocean wave ensemble forecasts, gridded forecast analyses, and innovation (observational) data output by FNMOC's data assimilation system. The NEFVS framework consists of statistical analysis routines, a variety of pre- and post-processing scripts to manage data and plot verification metrics, and a master script to control application workflow. NEFVS computes metrics that include forecast bias, mean-squared error, conditional error, conditional rank probability score, and Brier score. The system also generates reliability and Receiver Operating Characteristic diagrams. In this presentation we describe the operational framework of NEFVS and show examples of verification products computed from ensemble forecasts, meteorological observations, and forecast analyses. The construction and deployment of NEFVS addresses important operational and scientific requirements within Navy Meteorology and Oceanography (METOC). These include computational capabilities for assessing the reliability and accuracy of meteorological and ocean wave forecasts in an operational environment, for quantifying effects of changes and potential improvements to the Navy's forecast models, and for comparing the skill of forecasts from different forecast systems. NEFVS also supports the Navy's collaboration with the U.S. Air Force, NCEP, and Environment Canada in the North American Ensemble Forecast System (NAEFS) project and with the Air Force and the National Oceanic and Atmospheric Administration (NOAA) in the National Unified Operational Prediction Capability (NUOPC) program. 
This program is tasked with eliminating unnecessary duplication within the three agencies, accelerating the transition of new technology, such as multi-model ensemble forecasting, to U.S. Department of Defense use, and creating a superior U.S. global meteorological and oceanographic prediction capability. Forecast verification is an important component of NAEFS and NUOPC.

  9. Verification of forecast ensembles in complex terrain including observation uncertainty

    NASA Astrophysics Data System (ADS)

    Dorninger, Manfred; Kloiber, Simon

    2017-04-01

    Traditionally, verification means comparing a forecast (ensemble) against the truth represented by observations. The observation errors are quite often neglected, with the argument that they are small compared to the forecast error. In this study, part of the MesoVICT (Mesoscale Verification Inter-comparison over Complex Terrain) project, it will be shown that observation errors have to be taken into account for verification purposes. The observation uncertainty is estimated from the VERA (Vienna Enhanced Resolution Analysis) and represented via two analysis ensembles, which are compared to the forecast ensemble. Throughout the study, results from COSMO-LEPS provided by Arpae-SIMC Emilia-Romagna are used as the forecast ensemble. The time period covers the MesoVICT core case from 20-22 June 2007. In a first step, all ensembles are investigated with respect to their distribution. Several tests were applied (Kolmogorov-Smirnov, Finkelstein-Schafer, chi-square, etc.), none of which identified an exact mathematical distribution, so the main focus is on non-parametric statistics (e.g., kernel density estimation, boxplots) and on the deviation between data forced to a normal distribution and the kernel density estimates. In a next step, the observational deviations represented by the analysis ensembles are analysed. In a first approach, scores are calculated multiple times, with each ensemble member from the analysis ensemble in turn regarded as the "true" observation. The results are presented as boxplots for the different scores and parameters. Additionally, the bootstrapping method is applied to the ensembles. These possible approaches to incorporating observational uncertainty into the computation of statistics will be discussed in the talk.

  10. EMC: Air Quality Forecast Home page

    Science.gov Websites

    The EMC air quality forecast pages link to verification products including NAM meteorology error time series and spatial maps, Real Time Mesoscale Analysis precipitation verification, NAQFC verification of CMAQ ozone and PM error time series, AOD error time series, HYSPLIT smoke forecasts compared against GASP satellite observations, and dust and smoke error time series.

  11. Statistical Post-Processing of Wind Speed Forecasts to Estimate Relative Economic Value

    NASA Astrophysics Data System (ADS)

    Courtney, Jennifer; Lynch, Peter; Sweeney, Conor

    2013-04-01

    The objective of this research is to obtain the best possible wind speed forecasts for the wind energy industry by using an optimal combination of well-established forecasting and post-processing methods. We start with the ECMWF 51-member ensemble prediction system (EPS), which is underdispersive and hence uncalibrated. We aim to produce wind speed forecasts that are more accurate and calibrated than the EPS. The 51 members of the EPS are clustered to 8 weighted representative members (RMs), chosen to minimize the within-cluster spread while maximizing the inter-cluster spread. The forecasts are then downscaled using two limited area models, WRF and COSMO, at two resolutions, 14 km and 3 km. This process creates four distinguishable ensembles which are used as input to statistical post-processing methods that require multi-model forecasts. Two such methods are presented here. The first, Bayesian Model Averaging, has been shown to provide more calibrated and accurate wind speed forecasts than the ECMWF EPS using this multi-model input data. The second, heteroscedastic censored regression, is also showing positive results. We compare the two post-processing methods, applied to a year of hindcast wind speed data around Ireland, using an array of deterministic and probabilistic verification techniques, such as MAE, CRPS, probability integral transforms and verification rank histograms, to show which method provides the most accurate and calibrated forecasts. However, the value of a forecast to an end-user cannot be fully quantified by just the accuracy and calibration measurements mentioned, as the relationship between skill and value is complex. Capturing the full potential of the forecast benefits also requires detailed knowledge of the end-users' weather-sensitive decision-making processes and, most importantly, the economic impact the forecast will have on their income.
Finally, we present the continuous relative economic value of both post-processing methods to identify which is more beneficial to the wind energy industry of Ireland.
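A verification rank histogram of the kind used above simply tallies where each observation falls within its sorted ensemble; a calibrated ensemble yields a flat histogram, while an underdispersive one piles counts at the extreme ranks. The 4-member wind-speed ensembles below are invented for illustration.

```python
from collections import Counter

def rank_histogram(ensembles, observations):
    """Count, for each forecast case, the rank of the observation within
    the sorted ensemble; returns counts for ranks 0..m inclusive."""
    counts = Counter()
    for members, obs in zip(ensembles, observations):
        rank = sum(1 for x in members if x < obs)  # ties fall to the lower rank
        counts[rank] += 1
    m = len(ensembles[0])
    return [counts.get(r, 0) for r in range(m + 1)]

# Three hypothetical 4-member wind-speed ensembles (m/s) and observations.
ens = [[3.0, 4.5, 5.0, 6.1], [2.0, 2.2, 3.1, 3.5], [7.0, 8.0, 9.5, 10.0]]
obs = [5.5, 1.8, 9.9]
print(rank_histogram(ens, obs))  # -> [1, 0, 0, 2, 0]
```

With many cases, a U-shaped histogram signals underdispersion of the kind described for the raw EPS.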

  12. Applications of Bayesian Procrustes shape analysis to ensemble radar reflectivity nowcast verification

    NASA Astrophysics Data System (ADS)

    Fox, Neil I.; Micheas, Athanasios C.; Peng, Yuqiang

    2016-07-01

    This paper introduces the use of Bayesian full Procrustes shape analysis in object-oriented meteorological applications. In particular, the Procrustes methodology is used to generate mean forecast precipitation fields from a set of ensemble forecasts. This approach has advantages over other ensemble averaging techniques in that it can produce a forecast that retains the morphological features of the precipitation structures and present the range of forecast outcomes represented by the ensemble. The production of the ensemble mean avoids the problems of smoothing that result from simple pixel or cell averaging, while producing credible sets that retain information on ensemble spread. Also in this paper, the full Bayesian Procrustes scheme is used as an object verification tool for precipitation forecasts. This is an extension of a previously presented Procrustes shape analysis based verification approach into a full Bayesian format designed to handle the verification of precipitation forecasts that match objects from an ensemble of forecast fields to a single truth image. The methodology is tested on radar reflectivity nowcasts produced in the Warning Decision Support System - Integrated Information (WDSS-II) by varying parameters in the K-means cluster tracking scheme.

  13. Spectral Analysis of Forecast Error Investigated with an Observing System Simulation Experiment

    NASA Technical Reports Server (NTRS)

    Prive, N. C.; Errico, Ronald M.

    2015-01-01

    The spectra of analysis and forecast error are examined using the observing system simulation experiment (OSSE) framework developed at the National Aeronautics and Space Administration Global Modeling and Assimilation Office (NASA GMAO). A global numerical weather prediction model, the Goddard Earth Observing System version 5 (GEOS-5) with Gridpoint Statistical Interpolation (GSI) data assimilation, is cycled for two months with once-daily forecasts to 336 hours to generate a control case. Verification of forecast errors using the Nature Run as truth is compared with verification of forecast errors using self-analysis; significant underestimation of forecast errors is seen using self-analysis verification for up to 48 hours. Likewise, self-analysis verification significantly overestimates the error growth rates of the early forecast, as well as mischaracterizing the spatial scales at which the strongest growth occurs. The Nature Run-verified error variances exhibit a complicated progression of growth, particularly for low-wavenumber errors. In a second experiment, cycling of the model and data assimilation over the same period is repeated, but using synthetic observations with different explicitly added observation errors having the same error variances as the control experiment, thus creating a different realization of the control. The forecast errors of the two experiments become more correlated during the early forecast period, with correlations increasing for up to 72 hours before beginning to decrease.

  14. Adaptive correction of ensemble forecasts

    NASA Astrophysics Data System (ADS)

    Pelosi, Anna; Battista Chirico, Giovanni; Van den Bergh, Joris; Vannitsem, Stephane

    2017-04-01

    Forecasts from numerical weather prediction (NWP) models often suffer from both systematic and non-systematic errors. These are present in both deterministic and ensemble forecasts, and originate from various sources such as model error and subgrid variability. Statistical post-processing techniques can partly remove such errors, which is particularly important when NWP outputs concerning surface weather variables are employed for site-specific applications. Many different post-processing techniques have been developed. For deterministic forecasts, adaptive methods such as the Kalman filter are often used, which sequentially post-process the forecasts by continuously updating the correction parameters as new ground observations become available. These methods are especially valuable when long training data sets do not exist. For ensemble forecasts, well-known techniques are ensemble model output statistics (EMOS) and so-called "member-by-member" approaches (MBM). Here, we introduce a new adaptive post-processing technique for ensemble predictions. The proposed method is a sequential Kalman filtering technique that fully exploits the information content of the ensemble. One correction equation is derived and applied to all members; however, the parameters of the regression equation are estimated by exploiting the second-order statistics of the forecast ensemble. We compare our new method with two other techniques: a simple method that makes use of a running bias correction of the ensemble mean, and an MBM post-processing approach that rescales the ensemble mean and spread, based on minimization of the Continuous Ranked Probability Score (CRPS). We perform a verification study for the region of Campania in southern Italy.
We use two years (2014-2015) of daily meteorological observations of 2-meter temperature and 10-meter wind speed from 18 ground-based automatic weather stations distributed across the region, comparing them with the corresponding COSMO-LEPS ensemble forecasts. Deterministic verification scores (e.g., mean absolute error, bias) and probabilistic scores (e.g., CRPS) are used to evaluate the post-processing techniques. We conclude that the new adaptive method outperforms the simpler running bias-correction. The proposed adaptive method often outperforms the MBM method in removing bias. The MBM method has the advantage of correcting the ensemble spread, although it needs more training data.
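The "running bias correction of the ensemble mean" benchmark can be sketched as a fixed-gain scalar filter that updates its bias estimate from each new innovation; the gain value and the 2 m temperature numbers below are illustrative, not the study's configuration.

```python
def adaptive_bias_correction(forecasts, observations, gain=0.2):
    """Sequential running bias correction: correct each forecast with the
    current bias estimate, then update the estimate from the newly observed
    forecast-minus-observation innovation (fixed-gain, Kalman-filter-like)."""
    bias = 0.0
    corrected = []
    for f, o in zip(forecasts, observations):
        corrected.append(f - bias)        # apply the current bias estimate
        bias += gain * ((f - o) - bias)   # then update it from the new innovation
    return corrected

# Hypothetical 2 m temperature forecasts with a persistent +1.0 degree warm bias.
fc = [21.0, 22.0, 20.5, 23.0, 21.5]
ob = [20.0, 21.0, 19.5, 22.0, 20.5]
print(adaptive_bias_correction(fc, ob))
```

With a constant bias, the estimate converges geometrically toward the true offset, so later corrected forecasts approach the observations without any long training set.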

  15. Evaluation and economic value of winter weather forecasts

    NASA Astrophysics Data System (ADS)

    Snyder, Derrick W.

    State and local highway agencies spend millions of dollars each year to deploy winter operation teams to plow snow and de-ice roadways. Accurate and timely weather forecast information is critical for effective decision making. Students from Purdue University partnered with the Indiana Department of Transportation to create an experimental winter weather forecast service for the 2012-2013 winter season in Indiana to assist in achieving these goals. One forecast product, an hourly timeline of winter weather hazards produced daily, was evaluated for quality and economic value. Verification of the forecasts was performed with data from the Rapid Refresh numerical weather model. Two objective verification criteria were developed to evaluate the performance of the timeline forecasts. Using both criteria, the timeline forecasts had issues with reliability and discrimination, systematically over-forecasting the amount of winter weather that was observed while also missing significant winter weather events. Despite these quality issues, the forecasts still showed significant, but varied, economic value compared to climatology. Economic value of the forecasts was estimated to be 29.5 million or 4.1 million, depending on the verification criteria used. Limitations of this valuation system are discussed and a framework is developed for more thorough studies in the future.

  16. Large Scale Skill in Regional Climate Modeling and the Lateral Boundary Condition Scheme

    NASA Astrophysics Data System (ADS)

    Veljović, K.; Rajković, B.; Mesinger, F.

    2009-04-01

    Several points are made concerning the somewhat controversial issue of regional climate modeling: should a regional climate model (RCM) be expected to maintain the large-scale skill of the driver global model that is supplying its lateral boundary condition (LBC)? Given that this is normally desired, is it able to do so without help via the fairly popular large-scale nudging? Specifically, without such nudging, will the RCM kinetic energy necessarily decrease with time compared to that of the driver model or analysis data, as suggested by a study using the Regional Atmospheric Modeling System (RAMS)? Finally, can the lateral boundary condition scheme make a difference: is the almost universally used but somewhat costly relaxation scheme necessary for a desirable RCM performance? Experiments are made to explore these questions by running the Eta model in two versions differing in the lateral boundary scheme used. One of these schemes is the traditional relaxation scheme; the other is the Eta model scheme, in which information is used at the outermost boundary only, and not all variables are prescribed at the outflow boundary. Forecast lateral boundary conditions are used, and results are verified against the analyses. Thus, the skill of the two RCM forecasts can be and is compared not only against each other but also against that of the driver global forecast. A novel verification method is used in the manner of customary precipitation verification, in that the forecast spatial wind speed distribution is verified against analyses by calculating bias-adjusted equitable threat scores and bias scores for wind speeds greater than chosen thresholds. In this way, focusing on a high wind speed value in the upper troposphere, we suggest that verification of large-scale features can be done in a manner that may be more physically meaningful than verification via spectral decomposition, which is a standard RCM verification method.
The results we have at this point are somewhat limited, in view of the integrations having been done only for 10-day forecasts. Even so, one should note that they are among very few done using forecast, as opposed to reanalysis or analysis, global driving data. Our results suggest that (1) when running the Eta as an RCM, no significant loss of large-scale kinetic energy with time seems to take place; (2) no disadvantage from using the Eta LBC scheme compared to the relaxation scheme is seen, while the Eta scheme enjoys the advantage of being significantly less demanding than relaxation, given that it needs driver model fields at the outermost domain boundary only; and (3) the Eta RCM skill in forecasting large scales, with no large-scale nudging, seems to be just about the same as that of the driver model; or, in the terminology of Castro et al., the Eta RCM does not lose the "value of the large scale" which exists in the larger global analyses used for the initial condition and for verification.
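The scores underlying this verification, the equitable threat score (ETS) and frequency bias for a thresholded field, are compact to state. This is a minimal sketch with the standard (unadjusted) ETS and invented upper-tropospheric wind speeds; the paper's bias adjustment is a further refinement not shown here.

```python
def threshold_scores(forecast, analysis, threshold):
    """Equitable threat score and frequency bias for the event
    'value >= threshold', applied gridpoint by gridpoint as in
    customary precipitation verification."""
    hits = sum(1 for f, a in zip(forecast, analysis) if f >= threshold and a >= threshold)
    fa   = sum(1 for f, a in zip(forecast, analysis) if f >= threshold and a < threshold)
    miss = sum(1 for f, a in zip(forecast, analysis) if f < threshold and a >= threshold)
    n = len(forecast)
    hits_random = (hits + fa) * (hits + miss) / n   # hits expected by chance
    ets = (hits - hits_random) / (hits + fa + miss - hits_random)
    bias = (hits + fa) / (hits + miss)
    return ets, bias

# Hypothetical wind speeds (m/s) at six gridpoints, threshold 30 m/s.
ets, bias = threshold_scores([35, 28, 31, 40, 25, 29], [33, 31, 29, 41, 24, 28], 30)
print(ets, bias)
```

Raising the threshold isolates the strongest jet-level winds, which is how the method focuses on large-scale features.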

  17. Evaluating Snow Data Assimilation Framework for Streamflow Forecasting Applications Using Hindcast Verification

    NASA Astrophysics Data System (ADS)

    Barik, M. G.; Hogue, T. S.; Franz, K. J.; He, M.

    2012-12-01

    Snow water equivalent (SWE) estimation is a key factor in producing reliable streamflow simulations and forecasts in snow-dominated areas. However, measuring or predicting SWE involves significant uncertainty. Sequential data assimilation, which updates states using both observed and modeled data based on error estimation, has been shown to reduce streamflow simulation errors but has had limited testing for forecasting applications. In the current study, a snow data assimilation framework integrated with the National Weather Service River Forecast System (NWSRFS) is evaluated for use in ensemble streamflow prediction (ESP). Seasonal water supply ESP hindcasts are generated for the North Fork of the American River Basin (NFARB) in northern California. Parameter sets from the California Nevada River Forecast Center (CNRFC), the Differential Evolution Adaptive Metropolis (DREAM) algorithm, and the Multistep Automated Calibration Scheme (MACS) are tested both with and without sequential data assimilation. The traditional ESP method considers uncertainty in future climate conditions by using historical temperature and precipitation time series to generate future streamflow scenarios conditioned on the current basin state. We include data uncertainty analysis in the forecasting framework through the DREAM-based parameter set, which is part of a recently developed Integrated Uncertainty and Ensemble-based data Assimilation framework (ICEA). Extensive verification of all tested approaches is undertaken using traditional forecast verification measures, including root mean square error (RMSE), the Nash-Sutcliffe efficiency coefficient (NSE), volumetric bias, joint distribution, rank probability score (RPS), and discrimination and reliability plots. In comparison to the RFC parameters, the DREAM and MACS sets show significant improvement in volumetric bias in flow.
Use of assimilation improves hindcasts of higher flows but does not significantly improve performance in the mid flow and low flow categories.
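Two of the verification measures listed, RMSE and the Nash-Sutcliffe efficiency, are short enough to state directly; the seasonal flow hindcast values below are hypothetical.

```python
import math

def rmse(sim, obs):
    """Root mean square error between simulated and observed values."""
    return math.sqrt(sum((s - o) ** 2 for s, o in zip(sim, obs)) / len(obs))

def nse(sim, obs):
    """Nash-Sutcliffe efficiency: 1 is perfect, 0 means no better than
    predicting the observed mean, and negative values are worse than the mean."""
    mean_obs = sum(obs) / len(obs)
    num = sum((s - o) ** 2 for s, o in zip(sim, obs))
    den = sum((o - mean_obs) ** 2 for o in obs)
    return 1.0 - num / den

# Hypothetical seasonal flow hindcasts vs. observations (same units).
sim = [120.0, 95.0, 150.0, 80.0]
obs = [110.0, 100.0, 155.0, 85.0]
print(rmse(sim, obs), nse(sim, obs))
```

NSE is scale-free, which makes it convenient for comparing parameter sets across flow regimes, while RMSE keeps the units of flow.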

  18. Hydrology for urban land planning - A guidebook on the hydrologic effects of urban land use

    USGS Publications Warehouse

    Leopold, Luna Bergere

    1968-01-01

    The application of current knowledge of the hydrologic effects of urbanization to the Brandywine should be viewed as a forecast of conditions which may be expected as urbanization proceeds. By making such forecasts in advance of actual urban development, the methods can be tested, data can be extended, and procedures improved as verification becomes possible.

  19. Verification of a Non-Hydrostatic Dynamical Core Using Horizontally Spectral Element Vertically Finite Difference Method: 2D Aspects

    DTIC Science & Technology

    2014-04-01

    The dynamical core solves the Euler equations in flux form using a hybrid sigma-hydrostatic-pressure vertical coordinate, the same as that used in the Weather Research and Forecasting (WRF) model.

  20. Fuzzy logic-based analogue forecasting and hybrid modelling of horizontal visibility

    NASA Astrophysics Data System (ADS)

    Tuba, Zoltán; Bottyán, Zsolt

    2018-04-01

    Forecasting visibility is one of the greatest challenges in aviation meteorology; at the same time, highly accurate visibility forecasts can significantly reduce, or even eliminate, weather-related risk in aviation. To improve visibility forecasting, this research links fuzzy logic-based analogue forecasting with post-processed numerical weather prediction model output in a hybrid forecast. The performance of the analogue forecasting model was improved by applying the Analytic Hierarchy Process. A linear combination of the two outputs was then used to create an ultra-short-term hybrid visibility prediction that gradually shifts the emphasis from the statistical to the numerical product over the forecast period, exploiting the advantages of each. This brings the numerical visibility forecast closer to the observations even when it is initially wrong. A complete verification of the categorical forecasts was carried out, with results for persistence and terminal aerodrome forecasts (TAF) included for comparison. Averaged over the examined airports, the Heidke Skill Score (HSS) of the analogue and hybrid forecasts is very similar even at the end of the forecast period, where the weight of the analogue prediction in the final hybrid output is only 0.1-0.2. In the poor-visibility category (1000-2500 m), the hybrid (0.65) and analogue (0.64) forecasts have similar average HSS over the first 6 h of the forecast period, and both outperform persistence (0.60) and TAF (0.56). An important achievement is that the hybrid model accounts for the physics and dynamics of the atmosphere through the increasing share of the numerical weather prediction, yet its performance matches the most effective visibility forecasting methods and does not inherit the poor verification results of purely numerical output.

  1. Evaluation of the 29-km Eta Model for Weather Support to the United States Space Program

    NASA Technical Reports Server (NTRS)

    Manobianco, John; Nutter, Paul

    1997-01-01

    The Applied Meteorology Unit (AMU) conducted a year-long evaluation of NCEP's 29-km mesoscale Eta (meso-eta) weather prediction model in order to identify added value to forecast operations in support of the United States space program. The evaluation was stratified over warm and cool seasons and considered both objective and subjective verification methodologies. Objective verification results generally indicate that meso-eta model point forecasts at selected stations exhibit minimal error growth in terms of RMS errors and are reasonably unbiased. Conversely, results from the subjective verification demonstrate that model forecasts of developing weather events, such as thunderstorms, sea breezes, and cold fronts, are not always as accurate as implied by the seasonal error statistics. Sea-breeze case studies reveal that the model generates a dynamically consistent, thermally direct circulation over the Florida peninsula, although at a larger scale than observed. Thunderstorm verification reveals that the meso-eta model is capable of predicting areas of organized convection, particularly during the late afternoon hours, but is not capable of forecasting individual thunderstorms. Verification of cold fronts during the cool season reveals that the model is capable of forecasting the majority of cold frontal passages through east central Florida to within ±1 h of observed frontal passage.

  2. Applications of Machine Learning to Downscaling and Verification

    NASA Astrophysics Data System (ADS)

    Prudden, R.

    2017-12-01

    Downscaling, sometimes known as super-resolution, means converting model data into a more detailed local forecast. It is a problem which could be highly amenable to machine learning approaches, provided that sufficient historical forecast data and observations are available. It is also closely linked to the subject of verification, since improving a forecast requires a way to measure that improvement. This talk will describe some early work towards downscaling Met Office ensemble forecasts, and discuss how the output may be usefully evaluated.

  3. Evaluation of streamflow forecast for the National Water Model of U.S. National Weather Service

    NASA Astrophysics Data System (ADS)

    Rafieeinasab, A.; McCreight, J. L.; Dugger, A. L.; Gochis, D.; Karsten, L. R.; Zhang, Y.; Cosgrove, B.; Liu, Y.

    2016-12-01

    The National Water Model (NWM), an implementation of the community WRF-Hydro modeling system, is an operational hydrologic forecasting model for the contiguous United States. The model forecasts distributed hydrologic states and fluxes, including soil moisture, snowpack, ET, and ponded water. In particular, the NWM provides streamflow forecasts at more than 2.7 million river reaches for three forecast ranges: short (15 hr), medium (10 days), and long (30 days). In this study, we verify short- and medium-range streamflow forecasts in the context of the verification of their respective quantitative precipitation forecasts/forcing (QPF), the High Resolution Rapid Refresh (HRRR) and the Global Forecast System (GFS). The streamflow evaluation is performed for the summer of 2016 at more than 6,000 USGS gauges. Both individual forecasts and forecast lead times are examined. Selected case studies of extreme events aim to provide insight into the quality of the NWM streamflow forecasts. A goal of this comparison is to address how much streamflow bias originates from precipitation forcing bias. To this end, precipitation verification is performed over the contributing areas above (and between) assimilated USGS gauge locations. Precipitation verification is based on the aggregated, blended StageIV/StageII data as the "reference truth". We summarize the skill of the streamflow forecasts and their skill relative to the QPF, and make recommendations for improving NWM forecast skill.

  4. Peak Wind Tool for General Forecasting

    NASA Technical Reports Server (NTRS)

    Barrett, Joe H., III

    2010-01-01

    The expected peak wind speed of the day is an important forecast element in the 45th Weather Squadron's (45 WS) daily 24-Hour and Weekly Planning Forecasts. The forecasts are used for ground and space launch operations at the Kennedy Space Center (KSC) and Cape Canaveral Air Force Station (CCAFS). The 45 WS also issues wind advisories for KSC/CCAFS when they expect wind gusts to meet or exceed 25 kt, 35 kt and 50 kt thresholds at any level from the surface to 300 ft. The 45 WS forecasters have indicated peak wind speeds are challenging to forecast, particularly in the cool season months of October-April. In Phase I of this task, the Applied Meteorology Unit (AMU) developed a tool to help the 45 WS forecast non-convective winds at KSC/CCAFS for the 24-hour period of 0800 to 0800 local time. The tool was delivered as a Microsoft Excel graphical user interface (GUI). The GUI displayed the forecast of peak wind speed, the 5-minute average wind speed at the time of the peak wind, the timing of the peak wind, and the probability the peak speed would meet or exceed 25 kt, 35 kt and 50 kt. For the current task (Phase II), the 45 WS requested additional observations be used for the creation of the forecast equations by expanding the period of record (POR). Additional parameters were evaluated as predictors, including wind speeds between 500 ft and 3000 ft, static stability classification, Bulk Richardson Number, mixing depth, vertical wind shear, temperature inversion strength and depth, and wind direction. Using a verification data set, the AMU compared the performance of the Phase I and II prediction methods. Just as in Phase I, the tool was delivered as a Microsoft Excel GUI. The 45 WS requested the tool also be available in the Meteorological Interactive Data Display System (MIDDS). The AMU first expanded the POR by two years by adding tower observations, surface observations and CCAFS (XMR) soundings for the cool season months of March 2007 to April 2009. 
The POR was expanded again by six years, from October 1996 to April 2002, by interpolating 1000-ft sounding data to 100-ft increments. The Phase II developmental data set included observations for the cool season months of October 1996 to February 2007. The AMU calculated 68 candidate predictors from the XMR soundings, including 19 stability parameters, 48 wind speed parameters and one wind shear parameter. Each day in the data set was stratified by synoptic weather pattern, low-level wind direction, precipitation and Richardson Number, for a total of 60 stratification methods. Linear regression equations, using the 68 predictors and 60 stratification methods, were created for the tool's three forecast parameters: the highest peak wind speed of the day (PWSD), the 5-minute average speed at the same time (AWSD), and the timing of the PWSD. For PWSD and AWSD, 30 Phase II methods were selected for evaluation in the verification data set. For timing of the PWSD, 12 Phase II methods were selected for evaluation. The verification data set contained observations for the cool season months of March 2007 to April 2009. The data set was used to compare the Phase I and II forecast methods to climatology, model forecast winds and wind advisories issued by the 45 WS. The model forecast winds were derived from the 0000 and 1200 UTC runs of the 12-km North American Mesoscale (MesoNAM) model. The forecast methods that performed best in the verification data set were selected for the Phase II version of the tool. For PWSD and AWSD, linear regression equations based on MesoNAM forecasts performed significantly better than the Phase I and II methods. For timing of the PWSD, none of the methods performed significantly better than climatology. The AMU then developed the Microsoft Excel and MIDDS GUIs. The GUIs display the forecasts for PWSD, AWSD and the probability the PWSD will meet or exceed 25 kt, 35 kt and 50 kt. 
Since none of the prediction methods for timing of the PWSD performed significantly better than climatology, the tool no longer displays this predictand. The Excel and MIDDS GUIs display forecasts for Day-1 to Day-3 and Day-1 to Day-5, respectively. The Excel GUI uses MesoNAM forecasts as input, while the MIDDS GUI uses input from the MesoNAM and the Global Forecast System model. Based on feedback from the 45 WS, the AMU added the daily average wind speed from 30 ft to 60 ft to the tool, which is one of the parameters in the 24-Hour and Weekly Planning Forecasts issued by the 45 WS. In addition, the AMU expanded the MIDDS GUI to include forecasts out to Day-7.
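The regression-plus-baseline comparison described above can be sketched as follows; the predictors, coefficients, and sample data are hypothetical stand-ins, not the AMU's actual equations:

```python
import numpy as np

# Hypothetical predictor matrix: columns might be, e.g., MesoNAM forecast
# wind at several levels plus a stability index; rows are forecast days.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
true_coef = np.array([8.0, 3.0, -2.0])
peak_wind = 25.0 + X @ true_coef + rng.normal(scale=2.0, size=200)  # kt

# Fit a linear regression equation with an intercept, as in the tool.
A = np.column_stack([np.ones(len(X)), X])
coef, *_ = np.linalg.lstsq(A, peak_wind, rcond=None)
pred = A @ coef

# Compare against a climatology baseline (the training-period mean);
# in-sample, the fitted regression cannot do worse than climatology.
rmse_reg = np.sqrt(np.mean((pred - peak_wind) ** 2))
rmse_clim = np.sqrt(np.mean((peak_wind.mean() - peak_wind) ** 2))
```

The same structure extends to stratified equations: one fit per synoptic or wind-direction stratum, each verified against climatology on an independent period.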

  5. A comparison of ensemble post-processing approaches that preserve correlation structures

    NASA Astrophysics Data System (ADS)

    Schefzik, Roman; Van Schaeybroeck, Bert; Vannitsem, Stéphane

    2016-04-01

    Although ensemble forecasts address the major sources of uncertainty, they exhibit biases and dispersion errors and are therefore known to benefit from calibration, or statistical post-processing. For instance, the ensemble model output statistics (EMOS) method, also known as the non-homogeneous regression approach (Gneiting et al., 2005), is known to strongly improve forecast skill. EMOS is based on fitting and adjusting a parametric probability density function (PDF). However, EMOS and other common post-processing approaches apply to a single weather quantity at a single location for a single look-ahead time; they are therefore unable to take into account spatial, inter-variable, and temporal dependence structures. Recently, much research effort has been invested both in designing post-processing methods that resolve this drawback and in verification methods that enable the detection of dependence structures. New verification methods are applied to two classes of post-processing methods, both generating physically coherent ensembles. The first class uses ensemble copula coupling (ECC), which starts from EMOS but adjusts the rank structure (Schefzik et al., 2013). The second class is a member-by-member post-processing (MBM) approach that maps each raw ensemble member to a corrected one (Van Schaeybroeck and Vannitsem, 2015). We compare variants of the EMOS-ECC and MBM classes and highlight a specific theoretical connection between them. All post-processing variants are applied in the context of the ensemble system of the European Centre for Medium-Range Weather Forecasts (ECMWF) and compared using multivariate verification tools, including the energy score, the variogram score (Scheuerer and Hamill, 2015), and the band depth rank histogram (Thorarinsdottir et al., 2015). 
References: Gneiting, Raftery, Westveld, and Goldman, 2005: Calibrated probabilistic forecasting using ensemble model output statistics and minimum CRPS estimation. Mon. Wea. Rev., 133, 1098-1118. Scheuerer and Hamill, 2015: Variogram-based proper scoring rules for probabilistic forecasts of multivariate quantities. Mon. Wea. Rev., 143, 1321-1334. Schefzik, Thorarinsdottir, and Gneiting, 2013: Uncertainty quantification in complex simulation models using ensemble copula coupling. Statistical Science, 28, 616-640. Thorarinsdottir, Scheuerer, and Heinz, 2015: Assessing the calibration of high-dimensional ensemble forecasts using rank histograms. arXiv:1310.0236. Van Schaeybroeck and Vannitsem, 2015: Ensemble post-processing using member-by-member approaches: theoretical aspects. Q. J. R. Meteorol. Soc., 141, 807-818.
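For reference, the energy score used in such multivariate comparisons generalizes the CRPS to vector-valued forecasts; a minimal sketch of the standard finite-ensemble estimator (not the authors' implementation):

```python
import numpy as np

def energy_score(ensemble, obs):
    """Energy score for one multivariate forecast case.

    ensemble: (m, d) array of m members; obs: (d,) observation.
    ES = (1/m) sum_i ||x_i - y|| - 1/(2 m^2) sum_{i,j} ||x_i - x_j||
    Lower is better; 0 means every member equals the observation.
    """
    ens = np.asarray(ensemble, float)
    y = np.asarray(obs, float)
    term1 = np.mean(np.linalg.norm(ens - y, axis=1))
    diffs = ens[:, None, :] - ens[None, :, :]       # pairwise member differences
    term2 = np.mean(np.linalg.norm(diffs, axis=-1)) / 2.0
    return float(term1 - term2)
```

Because the second term rewards ensemble spread, the score penalizes both displacement from the observation and overconfidence.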

  6. Verification of Advances in a Coupled Snow-runoff Modeling Framework for Operational Streamflow Forecasts

    NASA Astrophysics Data System (ADS)

    Barik, M. G.; Hogue, T. S.; Franz, K. J.; He, M.

    2011-12-01

    The National Oceanic and Atmospheric Administration's (NOAA's) River Forecast Centers (RFCs) issue hydrologic forecasts related to flood events, reservoir operations for water supply, streamflow regulation, and recreation on the nation's streams and rivers. The RFCs use the National Weather Service River Forecast System (NWSRFS) for streamflow forecasting, which relies on a coupled snow model (SNOW17) and rainfall-runoff model (SAC-SMA) in snow-dominated regions of the US. Errors arise in various steps of the forecasting system from input data, model structure, model parameters, and initial states. The goal of the current study is to undertake verification of potential improvements in the SNOW17-SAC-SMA modeling framework developed for operational streamflow forecasts. We undertake verification for a range of parameter sets (i.e. RFC, DREAM (Differential Evolution Adaptive Metropolis)) as well as a data assimilation (DA) framework developed for the coupled models. Verification is also undertaken for various initial conditions to observe the influence of variability in initial conditions on the forecast. The study basin is the North Fork American River Basin (NFARB), located on the western side of the Sierra Nevada in northern California. Hindcasts are verified using both deterministic (Nash-Sutcliffe efficiency, root mean square error, and joint distribution) and probabilistic (reliability diagram, discrimination diagram, containing ratio, and quantile plots) measures. Our presentation includes a comparison of the performance of the different optimized parameter sets and the DA framework, as well as an assessment of the impact of the initial conditions used for streamflow forecasts for the NFARB.

  7. A Practitioners Perspective on Verification

    NASA Astrophysics Data System (ADS)

    Steenburgh, R. A.

    2017-12-01

    NOAAs Space Weather Prediction Center offers a wide range of products and services to meet the needs of an equally wide range of customers. A robust verification program is essential to the informed use of model guidance and other tools by both forecasters and end users alike. In this talk, we present current SWPC practices and results, and examine emerging requirements and potential approaches to satisfy them. We explore the varying verification needs of forecasters and end users, as well as the role of subjective and objective verification. Finally, we describe a vehicle used in the meteorological community to unify approaches to model verification and facilitate intercomparison.

  8. Evaluation of the 29-km Eta Model. Part 1: Objective Verification at Three Selected Stations

    NASA Technical Reports Server (NTRS)

    Nutter, Paul A.; Manobianco, John; Merceret, Francis J. (Technical Monitor)

    1998-01-01

    This paper describes an objective verification of the National Centers for Environmental Prediction (NCEP) 29-km eta model from May 1996 through January 1998. The evaluation was designed to assess the model's surface and upper-air point forecast accuracy at three selected locations during separate warm (May - August) and cool (October - January) season periods. In order to enhance sample sizes available for statistical calculations, the objective verification includes two consecutive warm and cool season periods. Systematic model deficiencies comprise the larger portion of the total error in most of the surface forecast variables that were evaluated. The error characteristics for both surface and upper-air forecasts vary widely by parameter, season, and station location. At upper levels, a few characteristic biases are identified. Overall however, the upper-level errors are more nonsystematic in nature and could be explained partly by observational measurement uncertainty. With a few exceptions, the upper-air results also indicate that 24-h model error growth is not statistically significant. In February and August 1997, NCEP implemented upgrades to the eta model's physical parameterizations that were designed to change some of the model's error characteristics near the surface. The results shown in this paper indicate that these upgrades led to identifiable and statistically significant changes in forecast accuracy for selected surface parameters. While some of the changes were expected, others were not consistent with the intent of the model updates and further emphasize the need for ongoing sensitivity studies and localized statistical verification efforts. Objective verification of point forecasts is a stringent measure of model performance, but when used alone, is not enough to quantify the overall value that model guidance may add to the forecast process. 
Therefore, results from a subjective verification of the meso-eta model over the Florida peninsula are discussed in the companion paper by Manobianco and Nutter. Overall verification results presented here and in part two should establish a reasonable benchmark from which model users and developers may pursue the ongoing eta model verification strategies in the future.
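The split between systematic and nonsystematic error discussed above can be illustrated with the standard decomposition MSE = bias² + error variance; a minimal sketch, not the paper's actual procedure:

```python
import numpy as np

def error_decomposition(forecast, obs):
    """Split mean square error into systematic and random parts:
    MSE = bias**2 + var(err), so bias**2 / MSE is the systematic share."""
    err = np.asarray(forecast, float) - np.asarray(obs, float)
    bias = err.mean()
    mse = np.mean(err ** 2)
    return {"bias": float(bias),
            "rmse": float(np.sqrt(mse)),
            "systematic_fraction": float(bias ** 2 / mse)}
```

A systematic fraction near 1 indicates the error is dominated by a correctable bias, as reported here for most surface variables; a fraction near 0 indicates nonsystematic error that statistical correction cannot remove.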

  9. Verification of temperature, precipitation, and streamflow forecasts from the NOAA/NWS Hydrologic Ensemble Forecast Service (HEFS): 1. Experimental design and forcing verification

    NASA Astrophysics Data System (ADS)

    Brown, James D.; Wu, Limin; He, Minxue; Regonda, Satish; Lee, Haksu; Seo, Dong-Jun

    2014-11-01

    Retrospective forecasts of precipitation, temperature, and streamflow were generated with the Hydrologic Ensemble Forecast Service (HEFS) of the U.S. National Weather Service (NWS) for a 20-year period between 1979 and 1999. The hindcasts were produced for two basins in each of four River Forecast Centers (RFCs), namely the Arkansas-Red Basin RFC, the Colorado Basin RFC, the California-Nevada RFC, and the Middle Atlantic RFC. Precipitation and temperature forecasts were produced with the HEFS Meteorological Ensemble Forecast Processor (MEFP). Inputs to the MEFP comprised "raw" precipitation and temperature forecasts from the frozen (circa 1997) version of the NWS Global Forecast System (GFS) and a climatological ensemble, which involved resampling historical observations in a moving window around the forecast valid date ("resampled climatology"). In both cases, the forecast horizon was 1-14 days. This paper outlines the hindcasting and verification strategy, and then focuses on the quality of the temperature and precipitation forecasts from the MEFP. A companion paper focuses on the quality of the streamflow forecasts from the HEFS. In general, the precipitation forecasts are more skillful than resampled climatology during the first week, but comprise little or no skill during the second week. In contrast, the temperature forecasts improve upon resampled climatology at all forecast lead times. However, there are notable differences among RFCs and for different seasons, aggregation periods and magnitudes of the observed and forecast variables, both for precipitation and temperature. For example, the MEFP-GFS precipitation forecasts show the highest correlations and greatest skill in the California-Nevada RFC, particularly during the wet season (November-April). While generally reliable, the MEFP forecasts typically underestimate the largest observed precipitation amounts (a Type-II conditional bias). 
As a statistical technique, the MEFP cannot detect, and thus appropriately correct for, conditions that are undetected by the GFS. The calibration of the MEFP to provide reliable and skillful forecasts of a range of precipitation amounts (not only large amounts) is a secondary factor responsible for these Type-II conditional biases. Interpretation of the verification results leads to guidance on the expected performance and limitations of the MEFP, together with recommendations on future enhancements.
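A Type-II conditional bias of the kind described above can be measured by conditioning the forecast error on the observed amount; a minimal sketch, with the threshold an arbitrary illustration:

```python
import numpy as np

def conditional_bias_given_obs(fcst_mean, obs, threshold):
    """Type-II conditional bias: mean forecast error on the cases where
    the observation meets or exceeds a threshold. Negative values mean
    the largest observed amounts are underestimated on average."""
    fcst_mean = np.asarray(fcst_mean, float)
    obs = np.asarray(obs, float)
    mask = obs >= threshold
    return float(np.mean(fcst_mean[mask] - obs[mask]))
```

Stratifying this statistic by season or RFC would reproduce the kind of conditional verification summarized above.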

  10. National Centers for Environmental Prediction

    Science.gov Websites

    NCEP operational model forecast graphics; parallel/experimental model graphics; developmental air quality forecasts and verification; verification (grid vs. obs) web page (NCEP experimental page, internal use only): an interactive web-page tool.

  11. MMAB Sea Ice Forecast Page

    Science.gov Websites

    Verification statistics. Grumbine, R. W., 1998: Virtual Floe Ice Drift Forecast Model Intercomparison. Weather and Forecasting, 13, 886-890. MMAB Note: Virtual Floe Ice Drift Forecast Model Intercomparison (1996, PDF).

  12. Latent fluctuation periods and long-term forecasting of the level of Markakol lake

    NASA Astrophysics Data System (ADS)

    Madibekov, A. S.; Babkin, A. V.; Musakulkyzy, A.; Cherednichenko, A. V.

    2018-01-01

    Analysis of the time series of the level of Markakol Lake by the "Periodicities" method reveals harmonics with periods of 12 and 14 years in its variations. Verification forecasts of the lake level were computed from the trend tendency alone and from its combination with these sinusoids, with lead times of 5 and 10 years. Evaluation of the forecast results against new independent data leads to the conclusion that forecasts combining the sinusoids with the trend tendency are better than those from the trend tendency alone, and no worse than predicting the mean value.
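The trend-plus-sinusoid forecasting scheme described above can be sketched with an ordinary least-squares fit; the series below is synthetic, standing in for the Markakol level record, and only the 12-year harmonic is fitted:

```python
import numpy as np

# Synthetic lake-level series: linear trend plus a 12-year harmonic,
# mimicking the periodicity reported for the Markakol record.
t = np.arange(60, dtype=float)  # years
level = 0.02 * t + 0.5 * np.sin(2 * np.pi * t / 12.0) + 100.0

# Least-squares fit of intercept + trend + sinusoid with a fixed 12-year period.
A = np.column_stack([np.ones_like(t), t,
                     np.sin(2 * np.pi * t / 12.0),
                     np.cos(2 * np.pi * t / 12.0)])
coef, *_ = np.linalg.lstsq(A, level, rcond=None)

# Extrapolate 5 years ahead (one of the lead times used in the study).
t_fut = np.arange(60, 65, dtype=float)
A_fut = np.column_stack([np.ones_like(t_fut), t_fut,
                         np.sin(2 * np.pi * t_fut / 12.0),
                         np.cos(2 * np.pi * t_fut / 12.0)])
forecast = A_fut @ coef
```

Verification of such a forecast would then compare it against the trend-only extrapolation and the mean-value prediction on held-out years, as done in the study.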

  13. Cb-LIKE - Thunderstorm forecasts up to six hours with fuzzy logic

    NASA Astrophysics Data System (ADS)

    Köhler, Martin; Tafferner, Arnold

    2016-04-01

    Thunderstorms, with their accompanying effects such as heavy rain, hail, and downdrafts, cause delays and flight cancellations and therefore high additional costs for airlines and airport operators. A reliable thunderstorm forecast up to several hours ahead would give decision makers in air traffic more time to react appropriately to possible storm cells and to initiate adequate countermeasures. To provide the required forecasts, Cb-LIKE (Cumulonimbus-LIKElihood) has been developed at the DLR (Deutsches Zentrum für Luft- und Raumfahrt) Institute of Atmospheric Physics. The new algorithm is an automated system that designates areas of possible thunderstorm development using data from the COSMO-DE weather model operated by the German Meteorological Service (DWD). A newly developed "Best-Member-Selection" method automatically selects the particular run of a time-lagged COSMO-DE model ensemble that best matches the current thunderstorm situation, ensuring that the best available data basis is used for the Cb-LIKE thunderstorm forecasts. Altogether there are four different modes for selecting the best member. Four atmospheric parameters from the model output (CAPE, vertical wind velocity, radar reflectivity, and cloud-top temperature) are used within the algorithm. A newly developed fuzzy logic system combines these model parameters and calculates a thunderstorm indicator, within a value range of 12 to 88, for each grid point of the model domain for the following six hours at one-hour intervals. The higher the indicator value, the more strongly the model parameters imply thunderstorm development. The quality of the Cb-LIKE thunderstorm forecasts was evaluated in a substantial verification using a neighborhood verification approach and multi-event contingency tables, performed for the whole summer period of 2012. 
On the basis of a deterministic object comparison with heavy precipitation cells observed by the radar-based thunderstorm tracking algorithm Rad-TRAM, several verification scores such as BIAS, POD, FAR and CSI were calculated to identify possible advantages of the new algorithm. The presentation illustrates in detail the concept of the Cb-LIKE algorithm with regard to the fuzzy logic system and the Best-Member-Selection. Some case studies and the most important verification results will also be shown, along with the implementation of the forecasts into the DLR WxFUSION system, a user-oriented forecasting system for air traffic.
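The dichotomous scores named above (BIAS, POD, FAR, CSI) follow directly from a 2x2 contingency table of hits, misses and false alarms; a minimal sketch using the standard definitions:

```python
def contingency_scores(hits, misses, false_alarms):
    """Dichotomous verification scores from a 2x2 contingency table.
    Correct negatives are not needed for these four scores."""
    pod = hits / (hits + misses)                    # probability of detection
    far = false_alarms / (hits + false_alarms)      # false alarm ratio
    csi = hits / (hits + misses + false_alarms)     # critical success index
    bias = (hits + false_alarms) / (hits + misses)  # frequency bias
    return {"POD": pod, "FAR": far, "CSI": csi, "BIAS": bias}
```

A frequency bias above 1 indicates overforecasting of the event; CSI jointly penalizes both misses and false alarms.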

  14. Verification of Space Weather Forecasts Issued by the Met Office Space Weather Operations Centre

    NASA Astrophysics Data System (ADS)

    Sharpe, M. A.; Murray, S. A.

    2017-10-01

    The Met Office Space Weather Operations Centre was founded in 2014, and part of its remit is a daily Space Weather Technical Forecast to help the UK build resilience to space weather impacts; guidance includes 4-day geomagnetic storm forecasts (GMSF) and X-ray flare forecasts (XRFF). It is crucial for forecasters, users, modelers, and stakeholders to understand the strengths and weaknesses of these forecasts; it is therefore important to verify against the most reliable truth data source available. The present study contains verification results for the XRFFs using Geostationary Operational Environmental Satellite (GOES)-15 data and for the GMSFs using planetary K-index (Kp) values from the GFZ Helmholtz Centre. To assess the value of the verification results, it is helpful to compare them against a reference forecast; the frequency of occurrence during a rolling prediction period is used for this purpose. An analysis of the rolling 12-month performance over a 19-month period suggests that both the XRFF and GMSF struggle to provide a better prediction than the reference. However, a relative operating characteristic and reliability analysis of the full 19-month period reveals that although the GMSF and XRFF possess discriminatory skill, events tend to be overforecast.
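Comparison against an event-frequency reference, as described above, can be illustrated with a Brier skill score; a minimal sketch assuming probabilistic forecasts and binary outcomes (the study's own metrics are as stated above):

```python
import numpy as np

def brier_score(prob, outcome):
    """Mean squared error of probability forecasts against 0/1 outcomes."""
    prob, outcome = np.asarray(prob, float), np.asarray(outcome, float)
    return float(np.mean((prob - outcome) ** 2))

def brier_skill_vs_frequency(prob, outcome):
    """Skill relative to a reference that always issues the observed event
    frequency (akin to a rolling frequency-of-occurrence reference).
    Positive skill means the forecast beats the reference."""
    outcome = np.asarray(outcome, float)
    ref = np.full_like(outcome, outcome.mean())
    return 1.0 - brier_score(prob, outcome) / brier_score(ref, outcome)
```

A forecast that only ever issues the climatological frequency scores exactly 0 by construction, which is what makes the frequency reference a demanding baseline for rare events.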

  15. A New Integrated Weighted Model in SNOW-V10: Verification of Categorical Variables

    NASA Astrophysics Data System (ADS)

    Huang, Laura X.; Isaac, George A.; Sheng, Grant

    2014-01-01

    This paper presents the verification results for nowcasts of seven categorical variables from an integrated weighted model (INTW) and the underlying numerical weather prediction (NWP) models. Nowcasting, or short-range forecasting (0-6 h), over complex terrain with sufficient accuracy is highly desirable but a very challenging task. A weighting, evaluation, bias correction and integration system (WEBIS) for generating nowcasts by integrating NWP forecasts and high-frequency observations was used during the Vancouver 2010 Olympic and Paralympic Winter Games as part of the Science of Nowcasting Olympic Weather for Vancouver 2010 (SNOW-V10) project. Forecast data from the Canadian high-resolution deterministic NWP system with three nested grids (at 15-, 2.5- and 1-km horizontal grid spacing) were selected as the background gridded data for generating the integrated nowcasts. The seven forecast variables of temperature, relative humidity, wind speed, wind gust, visibility, ceiling and precipitation rate are treated as categorical variables for verifying the integrated weighted forecasts. Analysis of the verification of forecasts from INTW and the NWP models at 15 sites shows that the integrated weighted model produces more accurate forecasts for the seven selected variables, regardless of location. This is based on multi-categorical Heidke skill scores for the test period 12 February to 21 March 2010.
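The multi-categorical Heidke skill score used above can be computed from a K x K contingency table; a minimal sketch using the standard formula:

```python
import numpy as np

def heidke_skill_score(table):
    """Multi-categorical Heidke skill score from a K x K contingency table
    (rows: forecast category, columns: observed category).
    HSS = (Po - Pe) / (1 - Pe), where Po is the proportion correct and
    Pe is the proportion expected correct by chance."""
    table = np.asarray(table, float)
    n = table.sum()
    po = np.trace(table) / n                                    # proportion correct
    pe = np.sum(table.sum(axis=1) * table.sum(axis=0)) / n**2   # chance agreement
    return float((po - pe) / (1.0 - pe))
```

HSS is 1 for a perfect forecast, 0 for forecasts no better than chance, and negative when forecasts are worse than chance, which makes it suitable for comparing categorical nowcasts across sites with different category frequencies.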

  16. Towards an improved ensemble precipitation forecast: A probabilistic post-processing approach

    NASA Astrophysics Data System (ADS)

    Khajehei, Sepideh; Moradkhani, Hamid

    2017-03-01

    Recently, ensemble post-processing (EPP) has become a commonly used approach for reducing the uncertainty in forcing data and hence in hydrologic simulation. The procedure was introduced to build ensemble precipitation forecasts based on the statistical relationship between observations and forecasts. More specifically, the approach relies on a transfer function developed from a bivariate joint distribution between the observations and the simulations over a historical period; the transfer function is then used to post-process the forecast. In this study, we propose a Bayesian EPP approach based on copula functions (COP-EPP) to improve the reliability of ensemble precipitation forecasts. The copula-based method is evaluated by comparing the performance of the generated ensemble precipitation with the outputs from an existing procedure, i.e. the mixed-type meta-Gaussian distribution. Monthly precipitation from the Climate Forecast System Reanalysis (CFS) and gridded observations from the Parameter-Elevation Relationships on Independent Slopes Model (PRISM) are employed to generate the post-processed ensemble precipitation. Deterministic and probabilistic verification frameworks are utilized to evaluate the outputs from the proposed technique. The distribution of seasonal precipitation for the ensemble generated by the copula-based technique is compared to the observations and raw forecasts for three sub-basins located in the western United States. Results show that both techniques are successful in producing reliable and unbiased ensemble forecasts; however, COP-EPP demonstrates considerable improvement in both deterministic and probabilistic verification, in particular in characterizing extreme events in wet seasons.
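A copula-based post-processor in the spirit of COP-EPP can be sketched with a Gaussian copula over empirical margins; the synthetic data, the margins, and the single dependence parameter below are illustrative assumptions, not the authors' Bayesian formulation:

```python
import numpy as np
from statistics import NormalDist

nd = NormalDist()
rng = np.random.default_rng(1)

# Hypothetical training pairs of monthly forecast and observed precipitation.
fcst = rng.gamma(2.0, 30.0, size=2000)
obs = 0.7 * fcst + rng.gamma(2.0, 10.0, size=2000)  # correlated observations

def normal_scores(x):
    """Map a sample to standard-normal scores via its empirical CDF."""
    ranks = np.argsort(np.argsort(x)) + 1.0          # ranks 1..n
    return np.array([nd.inv_cdf(r / (len(x) + 1.0)) for r in ranks])

z_f, z_o = normal_scores(fcst), normal_scores(obs)
rho = np.corrcoef(z_f, z_o)[0, 1]   # Gaussian-copula dependence parameter

# Post-process one new raw forecast: conditional normal in score space,
# then back-transform samples through the observed empirical quantiles.
raw = 120.0
p_raw = (np.sum(fcst < raw) + 1.0) / (len(fcst) + 2.0)
z_new = nd.inv_cdf(p_raw)
z_samples = rng.normal(rho * z_new, np.sqrt(1.0 - rho ** 2), size=50)
ensemble = np.quantile(obs, [nd.cdf(z) for z in z_samples])
```

The resulting 50-member ensemble expresses the conditional distribution of the observation given the raw forecast; the width of the conditional spread shrinks as the copula correlation strengthens.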

  17. Comparing Two Approaches for Assessing Observation Impact

    NASA Technical Reports Server (NTRS)

    Todling, Ricardo

    2013-01-01

    Langland and Baker introduced an approach to assess the impact of observations on forecasts. In that approach, a state-space aspect of the forecast is defined and a procedure is derived that ultimately relates changes in the aspect to changes in the observing system. Some features of the state-space approach are to be noted: the typical choice of forecast aspect is rather subjective and leads to an incomplete assessment of the observing system; it requires the availability of a verification state that is in practice correlated with the forecast; and it involves the adjoint operator of the entire data assimilation system and is thus constrained by the validity of this operator. This article revisits the topic of observation impacts from the perspective of estimation theory. An observation-space metric is used to allow inferring observation impact on the forecasts without the limitations just mentioned. Using differences of observation-minus-forecast residuals obtained from consecutive forecasts leads to the following advantages: (i) it suggests a rather natural choice of forecast aspect that links directly to the data assimilation procedure, (ii) it avoids introducing undesirable correlations in the forecast aspect since verification is done against the observations, and (iii) it does not involve linearization or the use of adjoints. The observation-space approach has the additional advantage of being nearly cost free and very simple to implement. In its simplest form it reduces to evaluating the statistics of observation-minus-background and observation-minus-analysis residuals with traditional methods. Illustrations comparing the approaches are given using the NASA Goddard Earth Observing System.
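The observation-space measure described above reduces to comparing statistics of observation-minus-forecast residuals from consecutive forecasts valid at the same time; a minimal sketch with synthetic residuals (the forecast lengths and error scales are illustrative assumptions):

```python
import numpy as np

def residual_stats(obs, forecast):
    """Mean and mean-square of observation-minus-forecast residuals."""
    r = np.asarray(obs, float) - np.asarray(forecast, float)
    return {"mean": float(r.mean()), "ms": float(np.mean(r ** 2))}

# Hypothetical pair: a 24-h forecast and the 30-h forecast from the previous
# cycle, both valid at the same time and verified against the same
# observations. A drop in mean-square residual from the longer to the
# shorter forecast is attributed to the intervening assimilation of data.
rng = np.random.default_rng(2)
obs = rng.normal(size=500)
f_short = obs + rng.normal(scale=0.5, size=500)  # newer, shorter-range forecast
f_long = obs + rng.normal(scale=0.8, size=500)   # older, longer-range forecast

impact = residual_stats(obs, f_long)["ms"] - residual_stats(obs, f_short)["ms"]
```

A positive `impact` indicates the assimilation between the two forecast launches reduced the obs-space forecast error; no adjoint or verification analysis is required.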

  18. Verification of Ensemble Forecasts for the New York City Operations Support Tool

    NASA Astrophysics Data System (ADS)

    Day, G.; Schaake, J. C.; Thiemann, M.; Draijer, S.; Wang, L.

    2012-12-01

    The New York City water supply system operated by the Department of Environmental Protection (DEP) serves nine million people. It covers 2,000 square miles of portions of the Catskill, Delaware, and Croton watersheds, and it includes nineteen reservoirs and three controlled lakes. DEP is developing an Operations Support Tool (OST) to support its water supply operations and planning activities. OST includes historical and real-time data, a model of the water supply system complete with operating rules, and lake water quality models developed to evaluate alternatives for managing turbidity in the New York City Catskill reservoirs. OST will enable DEP to manage turbidity in its unfiltered system while satisfying its primary objective of meeting the City's water supply needs, in addition to considering secondary objectives of maintaining ecological flows, supporting fishery and recreation releases, and mitigating downstream flood peaks. The current version of OST relies on statistical forecasts of flows in the system based on recent observed flows. To improve short-term decision making, plans are being made to transition to National Weather Service (NWS) ensemble forecasts based on hydrologic models that account for short-term weather forecast skill, longer-term climate information, as well as the hydrologic state of the watersheds and recent observed flows. To ensure that the ensemble forecasts are unbiased and that the ensemble spread reflects the actual uncertainty of the forecasts, a statistical model has been developed to post-process the NWS ensemble forecasts to account for hydrologic model error as well as any inherent bias and uncertainty in initial model states, meteorological data and forecasts. The post-processor is designed to produce adjusted ensemble forecasts that are consistent with the DEP historical flow sequences that were used to develop the system operating rules. 
A set of historical hindcasts that is representative of the real-time ensemble forecasts is needed to verify that the post-processed forecasts are unbiased, statistically reliable, and preserve the skill inherent in the "raw" NWS ensemble forecasts. A verification procedure and set of metrics will be presented that provide an objective assessment of ensemble forecasts. The procedure will be applied to both raw ensemble hindcasts and to post-processed ensemble hindcasts. The verification metrics will be used to validate proper functioning of the post-processor and to provide a benchmark for comparison of different types of forecasts. For example, current NWS ensemble forecasts are based on climatology, using each historical year to generate a forecast trace. The NWS Hydrologic Ensemble Forecast System (HEFS) under development will utilize output from both the National Oceanic Atmospheric Administration (NOAA) Global Ensemble Forecast System (GEFS) and the Climate Forecast System (CFS). Incorporating short-term meteorological forecasts and longer-term climate forecast information should provide sharper, more accurate forecasts. Hindcasts from HEFS will enable New York City to generate verification results to validate the new forecasts and further fine-tune system operating rules. Project verification results will be presented for different watersheds across a range of seasons, lead times, and flow levels to assess the quality of the current ensemble forecasts.
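One standard check that post-processed ensembles are statistically reliable, of the kind a verification procedure like the one described would include, is the rank histogram: for a reliable ensemble the observation is equally likely to fall in any rank position among the sorted members. A minimal sketch on synthetic data:

```python
import numpy as np

def rank_histogram(ensembles, observations):
    """Count the rank of each observation within its sorted ensemble.
    A flat histogram suggests a statistically reliable ensemble."""
    n_members = ensembles.shape[1]
    counts = np.zeros(n_members + 1, dtype=int)
    for ens, obs in zip(ensembles, observations):
        counts[int(np.sum(ens < obs))] += 1
    return counts

rng = np.random.default_rng(2)
n_cases, n_members = 2000, 9
truth = rng.normal(0.0, 1.0, n_cases)
# Reliable ensemble: members drawn from the same distribution as the obs errors.
ens = truth[:, None] + rng.normal(0.0, 0.5, (n_cases, n_members))
obs = truth + rng.normal(0.0, 0.5, n_cases)
counts = rank_histogram(ens, obs)
print(counts)
```

A U-shaped histogram would indicate an under-dispersive (overconfident) ensemble, and a domed one an over-dispersive ensemble; either would point to a mis-specified post-processor.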

  19. Forecasting forecast skill

    NASA Technical Reports Server (NTRS)

    Kalnay, Eugenia; Dalcher, Amnon

    1987-01-01

    It is shown that it is possible to predict the skill of numerical weather forecasts - a quantity which is variable from day to day and region to region. This has been accomplished using as predictor the dispersion (measured by the average correlation) between members of an ensemble of forecasts started from five different analyses. The analyses had been previously derived for satellite-data-impact studies and included, in the Northern Hemisphere, moderate perturbations associated with the use of different observing systems. When the Northern Hemisphere was used as a verification region, the prediction of skill was rather poor. This is due to the fact that such a large area usually contains regions with excellent forecasts as well as regions with poor forecasts, and does not allow for discrimination between them. However, when regional verifications were used, the ensemble forecast dispersion provided a very good prediction of the quality of the individual forecasts.
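The dispersion-skill idea can be demonstrated on synthetic data. The sketch below uses the ensemble standard deviation as a simple dispersion proxy (the study used the average correlation between members) and checks that it correlates with the error of the ensemble mean when predictability varies from day to day; all numbers are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)
n_days, n_members = 500, 5
# Predictability varies from day to day: "spread" scales the error distribution.
spread = rng.uniform(0.2, 2.0, n_days)
truth = rng.normal(0.0, 1.0, n_days)
ens = truth[:, None] + spread[:, None] * rng.normal(0.0, 1.0, (n_days, n_members))

# Dispersion among members (here: std) predicts the skill of the ensemble mean.
dispersion = ens.std(axis=1, ddof=1)
error = np.abs(ens.mean(axis=1) - truth)
r = float(np.corrcoef(dispersion, error)[0, 1])
print(round(r, 2))
```

As in the study, the relationship is much weaker if the verification aggregates regimes with very different predictability, which is why regional verification sharpened the skill prediction.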

  20. Space Weather Models and Their Validation and Verification at the CCMC

    NASA Technical Reports Server (NTRS)

    Hesse, Michael

    2010-01-01

    The Community Coordinated Modeling Center (CCMC) is a US multi-agency activity with a dual mission. With equal emphasis, CCMC strives to provide science support to the international space research community through the execution of advanced space plasma simulations, and it endeavors to support the space weather needs of the US and its partners. Space weather support involves a broad spectrum of activities, from designing robust forecasting systems and transitioning them to forecasters, to providing space weather updates and forecasts to NASA's robotic mission operators. All of these activities have to rely on validation and verification of models and their products, so that users and forecasters have the means to assign confidence levels to the space weather information. In this presentation, we provide an overview of the space weather models resident at CCMC, as well as of the validation and verification activities undertaken at CCMC or through the use of CCMC services.

  1. Maximizing Statistical Power When Verifying Probabilistic Forecasts of Hydrometeorological Events

    NASA Astrophysics Data System (ADS)

    DeChant, C. M.; Moradkhani, H.

    2014-12-01

    Hydrometeorological events (i.e. floods, droughts, precipitation) are increasingly being forecasted probabilistically, owing to the uncertainties in the underlying causes of these phenomena. In these forecasts, the probability of the event over some lead time is estimated based on model simulations or predictive indicators. By issuing probabilistic forecasts, agencies may communicate the uncertainty in the event occurring. Assuming that the assigned probability of the event is correct, which is referred to as a reliable forecast, the end user may perform risk management based on the potential damages resulting from the event. Alternatively, an unreliable forecast may give false impressions of the actual risk, leading to improper decision making when protecting resources from extreme events. Given this requirement for reliable forecasts to perform effective risk management, this study takes a renewed look at reliability assessment in event forecasts. Illustrative experiments will be presented, showing deficiencies in the commonly available approaches (Brier score, reliability diagram). Overall, it is shown that the conventional reliability assessment techniques do not maximize the ability to distinguish between a reliable and an unreliable forecast. In this regard, a theoretical formulation of the probabilistic event forecast verification framework will be presented. From this analysis, hypothesis testing with the Poisson-binomial distribution is the most exact model available for the verification framework, and it therefore maximizes one's ability to distinguish between a reliable and an unreliable forecast. Application of this verification system was also examined within a real forecasting case study, highlighting the additional statistical power provided by the use of the Poisson-binomial distribution.
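The Poisson-binomial model arises because each forecast assigns its own event probability, so the number of observed events is a sum of independent, non-identical Bernoulli trials. A minimal sketch of the exact PMF and a two-sided reliability test (illustrative, not the study's exact procedure):

```python
def poisson_binomial_pmf(probs):
    """Exact PMF of the number of events among independent Bernoulli trials
    with (possibly different) success probabilities, via dynamic programming."""
    pmf = [1.0]
    for p in probs:
        nxt = [0.0] * (len(pmf) + 1)
        for k, v in enumerate(pmf):
            nxt[k] += v * (1.0 - p)
            nxt[k + 1] += v * p
        pmf = nxt
    return pmf

def reliability_p_value(probs, n_observed):
    """Two-sided p-value: total probability of event counts at least as
    unlikely as the observed count, under the forecast probabilities."""
    pmf = poisson_binomial_pmf(probs)
    return sum(v for v in pmf if v <= pmf[n_observed] + 1e-12)

# 100 forecasts, each assigning 0.3 to the event: 30 observed events is
# consistent with reliability, while 55 events would reject it.
probs = [0.3] * 100
print(round(reliability_p_value(probs, 30), 3),
      round(reliability_p_value(probs, 55), 6))
```

A small p-value indicates the observed event count is implausible under the issued probabilities, i.e. the forecasts are unreliable; with unequal probabilities the dynamic program above remains exact where a binomial approximation would not.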

  2. A probabilistic verification score for contours demonstrated with idealized ice-edge forecasts

    NASA Astrophysics Data System (ADS)

    Goessling, Helge; Jung, Thomas

    2017-04-01

    We introduce a probabilistic verification score for ensemble-based forecasts of contours: the Spatial Probability Score (SPS). Defined as the spatial integral of local (Half) Brier Scores, the SPS can be considered the spatial analog of the Continuous Ranked Probability Score (CRPS). Applying the SPS to idealized seasonal ensemble forecasts of the Arctic sea-ice edge in a global coupled climate model, we demonstrate that the SPS responds properly to ensemble size, bias, and spread. When applied to individual forecasts or ensemble means (or quantiles), the SPS is reduced to the 'volume' of mismatch, in case of the ice edge corresponding to the Integrated Ice Edge Error (IIEE).
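The SPS definition can be written down directly: integrate the local (half) Brier score of the ensemble-derived probability field over the domain. A sketch on a synthetic binary field, with the grid-cell area taken as 1 for simplicity (all data here are illustrative, not sea-ice output):

```python
import numpy as np

def sps(ensemble_fields, obs_field, cell_area=1.0):
    """Spatial Probability Score: spatial integral of local (half) Brier
    scores of the ensemble-derived probability field against a binary
    observed field (e.g. ice / no ice)."""
    prob = ensemble_fields.mean(axis=0)          # per-cell event probability
    return float(np.sum((prob - obs_field) ** 2) * cell_area)

rng = np.random.default_rng(4)
ny, nx, n_members = 20, 20, 10
truth = (rng.random((ny, nx)) < 0.5).astype(float)
# A sharp, accurate ensemble vs. a purely noisy one.
good = np.array([np.where(rng.random((ny, nx)) < 0.9, truth, 1 - truth)
                 for _ in range(n_members)])
poor = rng.random((n_members, ny, nx)).round()
sps_good = sps(good, truth)
sps_poor = sps(poor, truth)
print(round(sps_good, 1), round(sps_poor, 1))
```

For a single deterministic field the per-cell probabilities collapse to 0/1 and the score reduces to the area of mismatch, consistent with the IIEE correspondence noted in the abstract.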

  3. Assessing the viability of `over-the-loop' real-time short-to-medium range ensemble streamflow forecasts

    NASA Astrophysics Data System (ADS)

    Wood, A. W.; Clark, E.; Mendoza, P. A.; Nijssen, B.; Newman, A. J.; Clark, M. P.; Arnold, J.; Nowak, K. C.

    2016-12-01

    Many if not most national operational short-to-medium range streamflow prediction systems rely on a forecaster-in-the-loop approach in which some parts of the forecast workflow are automated, but others require the hands-on effort of an experienced human forecaster. This approach evolved out of the need to correct for deficiencies in the models and datasets that were available for forecasting, and often leads to skillful predictions despite the use of relatively simple, conceptual models. On the other hand, the process is not reproducible, which limits opportunities to assess and incorporate process variations, and the effort required to make forecasts in this way is an obstacle to expanding forecast services - e.g., through adding new forecast locations or more frequent forecast updates, running more complex models, or producing forecast ensembles and hindcasts that can support verification. In the last decade, the hydrologic forecasting community has begun to develop more centralized, `over-the-loop' systems. The quality of these new forecast products will depend on their ability to leverage research in areas including earth system modeling, parameter estimation, data assimilation, statistical post-processing, weather and climate prediction, verification, and uncertainty estimation through the use of ensembles. Currently, the operational streamflow forecasting and water management communities have little experience with the strengths and weaknesses of over-the-loop approaches, even as the systems are being rolled out in major operational forecasting centers. There is thus a need both to evaluate these forecasting advances and to demonstrate their potential in a public arena, raising awareness in forecast user communities and development programs alike. 
To address this need, the National Center for Atmospheric Research is collaborating with the University of Washington, the Bureau of Reclamation and the US Army Corps of Engineers, using the NCAR 'System for Hydromet Analysis, Research, and Prediction' (SHARP) to implement, assess and demonstrate real-time over-the-loop forecasts. We present early hindcast and verification results from SHARP for short to medium range streamflow forecasts in a number of US case study watersheds.

  4. EMC: Verification

    Science.gov Websites

    Real-time verification of NCEP operational models against observations, including the NAM, GFS, RAP, HRRR, HIRESW, SREF mean, international global models, and the HPC analysis. Precipitation skill scores from 1995 to the present for the NAM, GFS, NAM CONUS nest, and international models, together with EMC forecast verification statistics for the NAM.

  5. Tool for Forecasting Cool-Season Peak Winds Across Kennedy Space Center and Cape Canaveral Air Force Station (CCAFS)

    NASA Technical Reports Server (NTRS)

    Barrett, Joe H., III; Roeder, William P.

    2010-01-01

    Peak wind speed is an important element in the 24-Hour and Weekly Planning Forecasts issued by the 45th Weather Squadron (45 WS). These forecasts are issued for planning operations at KSC/CCAFS. 45 WS wind advisories are issued for wind gusts greater than or equal to 25 kt, 35 kt, and 50 kt from the surface to 300 ft. The AMU developed a cool-season (Oct - Apr) tool to help the 45 WS forecast the daily peak wind speed, the 5-minute average speed at the time of the peak wind, and the probability the peak speed is greater than or equal to 25 kt, 35 kt, and 50 kt. The AMU tool also forecasts the daily average wind speed from 30 ft to 60 ft. The Phase I and II tools were delivered as a Microsoft Excel graphical user interface (GUI). The Phase II tool was also delivered as a Meteorological Interactive Data Display System (MIDDS) GUI. The Phase I and II forecast methods were compared to climatology, 45 WS wind advisories, and North American Mesoscale model (MesoNAM) forecasts in a verification data set.

  6. Verification of National Weather Service spot forecasts using surface observations

    NASA Astrophysics Data System (ADS)

    Lammers, Matthew Robert

    Software has been developed to evaluate National Weather Service spot forecasts issued to support prescribed burns and early-stage wildfires. Fire management officials request spot forecasts from National Weather Service Weather Forecast Offices to provide detailed guidance as to atmospheric conditions in the vicinity of planned prescribed burns as well as wildfires that do not have incident meteorologists on site. This open source software with online display capabilities is used to examine an extensive set of spot forecasts of maximum temperature, minimum relative humidity, and maximum wind speed from April 2009 through November 2013 nationwide. The forecast values are compared to the closest available surface observations at stations installed primarily for fire weather and aviation applications. The accuracy of the spot forecasts is compared to those available from the National Digital Forecast Database (NDFD). Spot forecasts for selected prescribed burns and wildfires are used to illustrate issues associated with the verification procedures. Cumulative statistics for National Weather Service County Warning Areas and for the nation are presented. Basic error and accuracy metrics for all available spot forecasts and the entire nation indicate that the skill of the spot forecasts is higher than that available from the NDFD, with the greatest improvement for maximum temperature and the least improvement for maximum wind speed.
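The kind of comparison described, spot forecasts versus NDFD guidance against station observations, rests on simple error and accuracy metrics. A sketch with synthetic, purely illustrative forecast and observation series (the error magnitudes are assumptions, not the study's results):

```python
import numpy as np

def verify(forecasts, observations):
    """Basic error metrics for a deterministic forecast series."""
    err = forecasts - observations
    return {"bias": float(err.mean()),
            "mae": float(np.abs(err).mean()),
            "rmse": float(np.sqrt((err ** 2).mean()))}

rng = np.random.default_rng(5)
obs = rng.normal(30.0, 5.0, 1000)            # e.g. max temperature observations
spot = obs + rng.normal(0.5, 1.5, 1000)      # site-specific spot forecast
ndfd = obs + rng.normal(1.0, 2.5, 1000)      # gridded reference forecast

s_m, n_m = verify(spot, obs), verify(ndfd, obs)
improvement = 100.0 * (n_m["mae"] - s_m["mae"]) / n_m["mae"]
print(round(s_m["mae"], 2), round(n_m["mae"], 2), round(improvement, 1))
```

Reporting the percentage improvement in MAE relative to a reference product, stratified by element and region, is the same form of skill comparison the abstract describes.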

  7. 46 CFR 45.191 - Pre-departure requirements.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ..., verification of mooring/docking space availability, and weather forecast checks were performed, and record the... voyage, the towing vessel master must conduct the following: (a) Weather forecast. Determine the marine weather forecast along the planned route, and contact the dock operator at the destination port to get an...

  8. 46 CFR 45.191 - Pre-departure requirements.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ..., verification of mooring/docking space availability, and weather forecast checks were performed, and record the... voyage, the towing vessel master must conduct the following: (a) Weather forecast. Determine the marine weather forecast along the planned route, and contact the dock operator at the destination port to get an...

  9. 46 CFR 45.191 - Pre-departure requirements.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ..., verification of mooring/docking space availability, and weather forecast checks were performed, and record the... voyage, the towing vessel master must conduct the following: (a) Weather forecast. Determine the marine weather forecast along the planned route, and contact the dock operator at the destination port to get an...

  10. 46 CFR 45.191 - Pre-departure requirements.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ..., verification of mooring/docking space availability, and weather forecast checks were performed, and record the... voyage, the towing vessel master must conduct the following: (a) Weather forecast. Determine the marine weather forecast along the planned route, and contact the dock operator at the destination port to get an...

  11. Verification of Cloud Forecasts over the Eastern Pacific Using Passive Satellite Retrievals

    DTIC Science & Technology

    2009-10-01

    with increasing sample area. Ebert (2008) reviews a number of these methods; some examples include upscaling (Zepeda-Arce et al. 2000), wavelet...evaluation of mesoscale simulations of the Algiers 2001 flash flood by the model-to-satellite approach. Adv. Geosci., 7, 247–250. Zepeda-Arce, J., E

  12. Applications systems verification and transfer project. Volume 4: Operational applications of satellite snow cover observations. Colorado Field Test Center

    NASA Technical Reports Server (NTRS)

    Shafer, B. A.; Leaf, C. F.; Danielson, J. A.; Moravec, G. F.

    1981-01-01

    The study was conducted on six watersheds ranging in size from 277 km² to 3460 km² in the Rio Grande and Arkansas River basins of southwestern Colorado. Six years of satellite data in the period 1973-78 were analyzed and snowcover maps prepared for all available image dates. Seven snowmapping techniques were explored; the photointerpretative method was selected as the most accurate. Three schemes to forecast snowmelt runoff employing satellite snowcover observations were investigated: a conceptual hydrologic model, a statistical model, and a graphical method. A reduction of 10% in the current average forecast error is estimated when snowcover data are incorporated, and the use of satellite snowcover data in snowmelt runoff forecasting is shown to be extremely promising. Inability to obtain repetitive coverage due to the 18-day cycle of LANDSAT, the occurrence of cloud cover, and slow image delivery are obstacles to the immediate implementation of satellite-derived snowcover in operational streamflow forecasting programs.

  13. Meta-heuristic CRPS minimization for the calibration of short-range probabilistic forecasts

    NASA Astrophysics Data System (ADS)

    Mohammadi, Seyedeh Atefeh; Rahmani, Morteza; Azadi, Majid

    2016-08-01

    This paper deals with probabilistic short-range temperature forecasts over synoptic meteorological stations across Iran using non-homogeneous Gaussian regression (NGR). NGR creates a Gaussian forecast probability density function (PDF) from the ensemble output. The mean of the normal predictive PDF is a bias-corrected weighted average of the ensemble members, and its variance is a linear function of the raw ensemble variance. The coefficients for the mean and variance are estimated by minimizing the continuous ranked probability score (CRPS) during a training period. The CRPS is a scoring rule for distributional forecasts. In Gneiting et al. (Mon Weather Rev 133:1098-1118, 2005), the Broyden-Fletcher-Goldfarb-Shanno (BFGS) method is used to minimize the CRPS. Since BFGS is a conventional optimization method with its own limitations, we suggest using particle swarm optimization (PSO), a robust meta-heuristic method, to minimize the CRPS. The ensemble prediction system used in this study consists of nine different configurations of the Weather Research and Forecasting model for 48-h forecasts of temperature during autumn and winter 2011 and 2012. The probabilistic forecasts were evaluated using several common verification scores, including the Brier score, attribute diagram and rank histogram. Results show that both BFGS and PSO find the optimal solution and give the same evaluation scores, but PSO can do so with a feasible random first guess and much less computational complexity.
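The NGR setup and its CRPS minimization can be sketched compactly: the Gaussian CRPS has a closed form, and a small particle swarm can minimize its training-period mean over the four NGR coefficients. The training data, swarm settings, and coefficient ranges below are illustrative assumptions, not those of the study:

```python
import math
import random

def crps_gaussian(mu, sigma, y):
    """Closed-form CRPS for a Gaussian predictive distribution N(mu, sigma^2)."""
    z = (y - mu) / sigma
    pdf = math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)
    cdf = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
    return sigma * (z * (2.0 * cdf - 1.0) + 2.0 * pdf - 1.0 / math.sqrt(math.pi))

def mean_crps(params, ens_means, ens_vars, obs):
    a, b, c, d = params
    total = 0.0
    for m, v, y in zip(ens_means, ens_vars, obs):
        mu = a + b * m                            # bias-corrected ensemble mean
        sigma = math.sqrt(max(c + d * v, 1e-6))   # variance linear in raw spread
        total += crps_gaussian(mu, sigma, y)
    return total / len(obs)

def pso(objective, n_particles=25, n_iters=60, seed=0):
    """A minimal global-best particle swarm optimizer in 4 dimensions."""
    rng, dim = random.Random(seed), 4
    pos = [[rng.uniform(-2, 2) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    best_p = [p[:] for p in pos]
    best_f = [objective(p) for p in pos]
    g = best_p[best_f.index(min(best_f))][:]
    for _ in range(n_iters):
        for i in range(n_particles):
            for k in range(dim):
                vel[i][k] = (0.7 * vel[i][k]
                             + 1.5 * rng.random() * (best_p[i][k] - pos[i][k])
                             + 1.5 * rng.random() * (g[k] - pos[i][k]))
                pos[i][k] += vel[i][k]
            f = objective(pos[i])
            if f < best_f[i]:
                best_f[i], best_p[i] = f, pos[i][:]
                if f < objective(g):
                    g = pos[i][:]
    return g, objective(g)

# Synthetic training set: biased ensemble mean, under-dispersive raw spread.
rng = random.Random(1)
truth = [rng.gauss(0.0, 1.0) for _ in range(200)]
means = [t + 0.5 + rng.gauss(0.0, 0.8) for t in truth]
var_raw = [0.2] * 200
params, score = pso(lambda p: mean_crps(p, means, var_raw, truth))
print(round(score, 3))
```

Because CRPS is a proper score, the minimization simultaneously rewards calibration and sharpness; BFGS applied to the same objective should find essentially the same coefficients, which is the comparison the paper makes.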

  14. Tool for Forecasting Cool-Season Peak Winds Across Kennedy Space Center and Cape Canaveral Air Force Station

    NASA Technical Reports Server (NTRS)

    Barrett, Joe H., III; Roeder, William P.

    2010-01-01

    The expected peak wind speed for the day is an important element in the daily morning forecast for ground and space launch operations at Kennedy Space Center (KSC) and Cape Canaveral Air Force Station (CCAFS). The 45th Weather Squadron (45 WS) must issue forecast advisories for KSC/CCAFS when they expect peak gusts for >= 25, >= 35, and >= 50 kt thresholds at any level from the surface to 300 ft. In Phase I of this task, the 45 WS tasked the Applied Meteorology Unit (AMU) to develop a cool-season (October - April) tool to help forecast the non-convective peak wind from the surface to 300 ft at KSC/CCAFS. During the warm season, these wind speeds are rarely exceeded except during convective winds or under the influence of tropical cyclones, for which other techniques are already in use. The tool used single and multiple linear regression equations to predict the peak wind from the morning sounding. The forecaster manually entered several observed sounding parameters into a Microsoft Excel graphical user interface (GUI), and then the tool displayed the forecast peak wind speed, average wind speed at the time of the peak wind, the timing of the peak wind and the probability the peak wind will meet or exceed 35, 50 and 60 kt. The 45 WS customers later dropped the requirement for >= 60 kt wind warnings. During Phase II of this task, the AMU expanded the period of record (POR) by six years to increase the number of observations used to create the forecast equations. A large number of possible predictors were evaluated from archived soundings, including inversion depth and strength, low-level wind shear, mixing height, temperature lapse rate and winds from the surface to 3000 ft. Each day in the POR was stratified in a number of ways, such as by low-level wind direction, synoptic weather pattern, precipitation and Bulk Richardson number. The most accurate Phase II equations were then selected for an independent verification. 
The Phase I and II forecast methods were compared using an independent verification data set. The two methods were compared to climatology, wind warnings and advisories issued by the 45 WS, and North American Mesoscale (NAM) model (MesoNAM) forecast winds. The performance of the Phase I and II methods was similar with respect to mean absolute error. Since the Phase I data were not stratified by precipitation, this method's peak wind forecasts had a large negative bias on days with precipitation and a small positive bias on days with no precipitation. Overall, the climatology methods performed the worst while the MesoNAM performed the best. Since the MesoNAM winds were the most accurate in the comparison, the final version of the tool was based on the MesoNAM winds. The probability that the peak wind will meet or exceed the warning thresholds was based on the one-standard-deviation error bars from the linear regression. For example, the linear regression might forecast the most likely peak speed to be 35 kt, with the error bars used to calculate that the probability of >= 25 kt is 76%, the probability of >= 35 kt is 50%, and the probability of >= 50 kt is 19%. The authors have not seen this application of linear regression error bars in any other meteorological application. Although probability forecast tools should usually be developed with logistic regression, this technique can be easily generalized to any linear regression forecast tool to estimate the probability of exceeding any desired threshold. This could be useful for previously developed linear regression forecast tools or new forecast applications where statistical analysis software to perform logistic regression is not available. The tool was delivered in two formats - a Microsoft Excel GUI and a Tool Command Language/Tool Kit (Tcl/Tk) GUI in the Meteorological Interactive Data Display System (MIDDS). 
The Microsoft Excel GUI reads a MesoNAM text file containing hourly forecasts from 0 to 84 hours from one model run (00 or 12 UTC). The GUI then displays the peak wind speed, average wind speed, and the probability the peak wind will meet or exceed the 25-, 35- and 50-kt thresholds. The user can display the Day-1 through Day-3 peak wind forecasts, and separate forecasts are made for precipitation and non-precipitation days. The MIDDS GUI uses data from the NAM and Global Forecast System (GFS) instead of the MesoNAM. It can display Day-1 and Day-2 forecasts using NAM data, and Day-1 through Day-5 forecasts using GFS data. The timing of the peak wind is not displayed, since the independent verification showed that none of the forecast methods performed significantly better than climatology. The forecaster should use the climatological timing of the peak wind (2248 UTC) as a first guess and then adjust it based on the movement of weather features.
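The error-bar probability technique described in this entry amounts to evaluating a Gaussian tail probability around the regression's point forecast. A sketch with illustrative numbers: a 35-kt point forecast and an assumed 15-kt error standard deviation, not the tool's fitted values.

```python
import math

def exceedance_prob(point_forecast, sigma, threshold):
    """P(peak wind >= threshold), assuming Gaussian regression errors with
    standard deviation sigma around the point forecast."""
    z = (threshold - point_forecast) / sigma
    return 1.0 - 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# Illustrative: probabilities of exceeding each advisory threshold.
for thr in (25.0, 35.0, 50.0):
    print(thr, round(exceedance_prob(35.0, 15.0, thr), 2))
```

By construction the probability at the point forecast itself is 50%, mirroring the worked example in the abstract; logistic regression would instead model the exceedance probability directly, which is why the authors note it as the usual choice.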

  15. Evaluation of ensemble forecast uncertainty using a new proper score: application to medium-range and seasonal forecasts

    NASA Astrophysics Data System (ADS)

    Christensen, Hannah; Moroz, Irene; Palmer, Tim

    2015-04-01

    Forecast verification is important across scientific disciplines as it provides a framework for evaluating the performance of a forecasting system. In the atmospheric sciences, probabilistic skill scores are often used for verification as they provide a way of unambiguously ranking the performance of different probabilistic forecasts. In order to be useful, a skill score must be proper -- it must encourage honesty in the forecaster, and reward forecasts which are reliable and which have good resolution. A new score, the Error-spread Score (ES), is proposed which is particularly suitable for evaluation of ensemble forecasts. It is formulated with respect to the moments of the forecast. The ES is confirmed to be a proper score, and is therefore sensitive to both resolution and reliability. The ES is tested on forecasts made using the Lorenz '96 system, and found to be useful for summarising the skill of the forecasts. The European Centre for Medium-Range Weather Forecasts (ECMWF) ensemble prediction system (EPS) is evaluated using the ES. Its performance is compared to a perfect statistical probabilistic forecast -- the ECMWF high resolution deterministic forecast dressed with the observed error distribution. This generates a forecast that is perfectly reliable if considered over all time, but which does not vary from day to day with the predictability of the atmospheric flow. The ES distinguishes between the dynamically reliable EPS forecasts and the statically reliable dressed deterministic forecasts. Other skill scores are tested and found to be comparatively insensitive to this desirable forecast quality. The ES is used to evaluate seasonal range ensemble forecasts made with the ECMWF System 4. The ensemble forecasts are found to be skilful when compared with climatological or persistence forecasts, though this skill is dependent on region and time of year.
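A sketch of a moment-based score of the kind described, assuming the published form ES = mean[(s² − e² − e·s·g)²], with s the ensemble standard deviation, e the ensemble-mean error, and g the ensemble skewness; this and the synthetic data are illustrative assumptions, not the paper's experiments. The comparison below shows the score penalizing an over-dispersive ensemble relative to a calibrated one:

```python
import numpy as np

def error_spread_score(ensembles, obs):
    """Error-spread Score sketch, assuming ES = mean[(s^2 - e^2 - e*s*g)^2],
    with s the ensemble std, e = ensemble mean - observation, g the skewness."""
    m = ensembles.mean(axis=1)
    s = ensembles.std(axis=1, ddof=1)
    g = ((ensembles - m[:, None]) ** 3).mean(axis=1) / s ** 3
    e = m - obs
    return float(np.mean((s ** 2 - e ** 2 - e * s * g) ** 2))

rng = np.random.default_rng(6)
n, members = 4000, 20
sig = rng.uniform(0.5, 2.0, n)                 # day-to-day predictability
truth = rng.normal(0.0, sig)
calibrated = rng.normal(0.0, sig[:, None], (n, members))    # spread = error std
overdispersed = rng.normal(0.0, 3.0 * sig[:, None], (n, members))
es_cal = error_spread_score(calibrated, truth)
es_over = error_spread_score(overdispersed, truth)
print(round(es_cal, 1), round(es_over, 1))
```

Because the score compares squared spread against squared error case by case, it also separates ensembles whose spread tracks the flow-dependent predictability from statically dressed forecasts with the same average spread, which is the paper's central result.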

  16. Evaluation of Mesoscale Model Phenomenological Verification Techniques

    NASA Technical Reports Server (NTRS)

    Lambert, Winifred

    2006-01-01

    Forecasters at the Spaceflight Meteorology Group, 45th Weather Squadron, and National Weather Service in Melbourne, FL, use mesoscale numerical weather prediction model output in creating their operational forecasts. These models aid in forecasting weather phenomena that could compromise the safety of launch, landing, and daily ground operations, and they must produce reasonable weather forecasts in order for their output to be useful in operations. Considering the importance of model forecasts to operations, their accuracy in forecasting critical weather phenomena must be verified to determine their usefulness. The currently used traditional verification techniques involve an objective point-by-point comparison of model output and observations valid at the same time and location. The resulting statistics can unfairly penalize high-resolution models that make realistic forecasts of certain phenomena but are offset from the observations by small time and/or space increments. Manual subjective verification can provide a more valid representation of model performance, but it is time-consuming and prone to personal biases. An objective technique that verifies specific meteorological phenomena, much in the way a human would in a subjective evaluation, would likely produce a more realistic assessment of model performance. Such techniques are being developed in the research community. The Applied Meteorology Unit (AMU) was tasked to conduct a literature search to identify phenomenological verification techniques being developed, determine if any are ready to use operationally, and outline the steps needed to implement any operationally-ready techniques into the Advanced Weather Information Processing System (AWIPS). The AMU conducted a search of all literature on the topic of phenomenological-based mesoscale model verification techniques and found 10 different techniques in various stages of development. 
Six of the techniques were developed to verify precipitation forecasts, one to verify sea breeze forecasts, and three were capable of verifying several phenomena. The AMU also determined the feasibility of transitioning each technique into operations and rated the operational capability of each technique on a subjective 1-10 scale: (1) a rating of 1 indicates that the technique is only in the initial stages of development, (2) 2-5 indicates that the technique is still undergoing modifications and is not ready for operations, (3) 6-8 indicates a higher probability of integrating the technique into AWIPS with code modifications, and (4) 9-10 indicates that the technique was created for AWIPS and is ready for implementation. Eight of the techniques were assigned a rating of 5 or below. The other two received ratings of 6 and 7, and none of the techniques received a rating of 9-10. At the current time, there are no phenomenological model verification techniques ready for operational use. However, several of the techniques described in this report may become viable in the future and should be monitored for updates in the literature. The desire to use a phenomenological verification technique is widespread in the modeling community, and it is likely that other techniques besides those described herein are being developed but have not yet been published. Therefore, the AMU recommends that the literature continue to be monitored for updates to the techniques described in this report and for new techniques being developed whose results have not yet been published.

  17. Verification of Spatial Forecasts of Continuous Meteorological Variables Using Categorical and Object-Based Methods

    DTIC Science & Technology

    2016-08-01

    Using Categorical and Object-Based Methods, by John W Raby and Huaqing Cai, Computational and Information Sciences Directorate, ARL. Approved for public release; distribution...

  18. A real-time evaluation and demonstration of strategies for 'Over-The-Loop' ensemble streamflow forecasting in US watersheds

    NASA Astrophysics Data System (ADS)

    Wood, Andy; Clark, Elizabeth; Mendoza, Pablo; Nijssen, Bart; Newman, Andy; Clark, Martyn; Nowak, Kenneth; Arnold, Jeffrey

    2017-04-01

    Many if not most national operational streamflow prediction systems rely on a forecaster-in-the-loop approach that requires the hands-on effort of an experienced human forecaster. This approach evolved from the need to correct for long-standing deficiencies in the models and datasets used in forecasting, and the practice often leads to skillful flow predictions despite the use of relatively simple, conceptual models. Yet the 'in-the-loop' forecast process is not reproducible, which limits opportunities to assess and incorporate new techniques systematically, and the effort required to make forecasts in this way is an obstacle to expanding forecast services - e.g., through adding new forecast locations or more frequent forecast updates, running more complex models, or producing forecasts and hindcasts that can support verification. In the last decade, the hydrologic forecasting community has begun to develop more centralized, 'over-the-loop' systems. The quality of these new forecast products will depend on their ability to leverage research in areas including earth system modeling, parameter estimation, data assimilation, statistical post-processing, weather and climate prediction, verification, and uncertainty estimation through the use of ensembles. Currently, many national operational streamflow forecasting and water management communities have little experience with the strengths and weaknesses of over-the-loop approaches, even as such systems are beginning to be deployed operationally in centers such as ECMWF. There is thus a need both to evaluate these forecasting advances and to demonstrate their potential in a public arena, raising awareness in forecast user communities and development programs alike. 
To address this need, the US National Center for Atmospheric Research is collaborating with the University of Washington, the Bureau of Reclamation and the US Army Corps of Engineers, using the NCAR 'System for Hydromet Analysis Research and Prediction Applications' (SHARP) to implement, assess and demonstrate real-time over-the-loop ensemble flow forecasts in a range of US watersheds. The system relies on fully ensemble-based techniques, including: a 100-member ensemble of meteorological model forcings and ensemble particle filter data assimilation for initializing watershed states; analog/regression-based downscaling of ensemble weather forecasts from GEFS; and statistical post-processing of ensemble forecast outputs, all of which run in real time within a workflow managed by ECMWF's ecFlow libraries over large US regional domains. We describe SHARP and present early hindcast and verification results for short- to seasonal-range streamflow forecasts in a number of US case study watersheds.

  19. Ensemble forecasting for renewable energy applications - status and current challenges for their generation and verification

    NASA Astrophysics Data System (ADS)

    Pinson, Pierre

    2016-04-01

The operational management of renewable energy generation in power systems and electricity markets requires forecasts in various forms, e.g., deterministic or probabilistic, continuous or categorical, depending upon the decision process at hand. Such forecasts may also be necessary at various spatial and temporal scales, from high temporal resolutions (on the order of minutes) and very localized for an offshore wind farm, to coarser temporal resolutions (hours) covering a whole country for day-ahead power scheduling problems. As of today, weather predictions are a common input to forecasting methodologies for renewable energy generation. Since, for most decision processes, optimal decisions can only be made by accounting for forecast uncertainties, ensemble predictions and density forecasts are increasingly seen as the product of choice. After discussing some of the basic approaches to obtaining ensemble forecasts of renewable power generation, it will be argued that space-time trajectories of renewable power production may or may not necessitate post-processing of ensemble forecasts for relevant weather variables. Example approaches and test case applications will be covered, e.g., looking at the Horns Rev offshore wind farm in Denmark, or gridded forecasts for the whole of continental Europe. Eventually, we will illustrate some of the limitations of current forecast verification frameworks, which make it difficult to fully assess the quality of post-processing approaches to obtaining renewable energy predictions.

  20. Quantitative precipitation forecasts in the Alps - an assessment from the Forecast Demonstration Project MAP D-PHASE

    NASA Astrophysics Data System (ADS)

    Ament, F.; Weusthoff, T.; Arpagaus, M.; Rotach, M.

    2009-04-01

The main aim of the WWRP Forecast Demonstration Project MAP D-PHASE is to demonstrate the performance of today's models in forecasting heavy precipitation and flood events in the Alpine region. To this end, an end-to-end, real-time forecasting system was installed and operated during the D-PHASE Operations Period from June to November 2007. The system includes 30 numerical weather prediction models (deterministic as well as ensemble systems) operated by weather services and research institutes, which issue alerts if predicted precipitation accumulations exceed critical thresholds. In addition to the real-time alerts, all relevant model fields of these simulations are stored in a central data archive. This comprehensive data set allows a detailed assessment of today's quantitative precipitation forecast (QPF) performance in the Alpine region. We will present results of QPF verification against Swiss radar and rain gauge data both from a qualitative point of view, in terms of alerts, and from a quantitative perspective, in terms of precipitation rate. Various influencing factors like lead time, accumulation time, selection of warning thresholds, or bias corrections will be discussed. In addition to traditional verification of area-average precipitation amounts, the ability of the models to predict the correct precipitation statistics without requiring a point-to-point match will be assessed using modern fuzzy verification techniques. Both analyses reveal significant advantages of deep-convection-resolving models compared to coarser models with parameterized convection. An intercomparison of the model forecasts themselves reveals remarkably high variability between different models, which makes it worthwhile to evaluate the potential of a multi-model ensemble. Various multi-model ensemble strategies will be tested by combining D-PHASE models into virtual ensemble systems.
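The fuzzy (neighborhood) verification idea mentioned above, crediting a model for getting precipitation statistics right in an area without requiring a point-to-point match, can be sketched with the fractions skill score; this is a generic illustration on synthetic fields, not the D-PHASE verification code:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def fractions_skill_score(fcst, obs, threshold, window):
    """Fractions skill score (Roberts & Lean 2008): compare neighbourhood
    exceedance fractions rather than requiring point-to-point matches."""
    pf = uniform_filter((fcst > threshold).astype(float), size=window, mode="constant")
    po = uniform_filter((obs > threshold).astype(float), size=window, mode="constant")
    mse = np.mean((pf - po) ** 2)
    mse_ref = np.mean(pf**2) + np.mean(po**2)   # worst-case reference MSE
    return 1.0 - mse / mse_ref if mse_ref > 0 else np.nan

rng = np.random.default_rng(4)
obs = rng.gamma(1.0, 4.0, size=(100, 100))      # synthetic rain field
fcst = np.roll(obs, shift=3, axis=1)            # forecast displaced by 3 pixels
print(fractions_skill_score(fcst, obs, threshold=8.0, window=1))   # strict point match
print(fractions_skill_score(fcst, obs, threshold=8.0, window=21))  # 21x21 neighbourhood
```

A small spatial displacement devastates the point-match score but barely affects the neighbourhood score, which is precisely why fuzzy techniques are fairer to convection-resolving models.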

  1. Development and verification of a real-time stochastic precipitation nowcasting system for urban hydrology in Belgium

    NASA Astrophysics Data System (ADS)

    Foresti, L.; Reyniers, M.; Seed, A.; Delobbe, L.

    2016-01-01

    The Short-Term Ensemble Prediction System (STEPS) is implemented in real-time at the Royal Meteorological Institute (RMI) of Belgium. The main idea behind STEPS is to quantify the forecast uncertainty by adding stochastic perturbations to the deterministic Lagrangian extrapolation of radar images. The stochastic perturbations are designed to account for the unpredictable precipitation growth and decay processes and to reproduce the dynamic scaling of precipitation fields, i.e., the observation that large-scale rainfall structures are more persistent and predictable than small-scale convective cells. This paper presents the development, adaptation and verification of the STEPS system for Belgium (STEPS-BE). STEPS-BE provides in real-time 20-member ensemble precipitation nowcasts at 1 km and 5 min resolutions up to 2 h lead time using a 4 C-band radar composite as input. In the context of the PLURISK project, STEPS forecasts were generated to be used as input in sewer system hydraulic models for nowcasting urban inundations in the cities of Ghent and Leuven. Comprehensive forecast verification was performed in order to detect systematic biases over the given urban areas and to analyze the reliability of probabilistic forecasts for a set of case studies in 2013 and 2014. The forecast biases over the cities of Leuven and Ghent were found to be small, which is encouraging for future integration of STEPS nowcasts into the hydraulic models. Probabilistic forecasts of exceeding 0.5 mm h-1 are reliable up to 60-90 min lead time, while the ones of exceeding 5.0 mm h-1 are only reliable up to 30 min. The STEPS ensembles are slightly under-dispersive and represent only 75-90 % of the forecast errors.
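The reliability analysis of threshold-exceedance forecasts described above can be sketched as follows, using synthetic data and a 20-member ensemble for illustration (the binning and data here are hypothetical, not the STEPS-BE verification setup):

```python
import numpy as np

def reliability_table(ens, obs, threshold, n_bins=10):
    """Bin forecast exceedance probabilities and compare each bin's
    mean forecast probability with the observed exceedance frequency."""
    prob = (ens > threshold).mean(axis=0)       # fraction of members above threshold
    event = (obs > threshold).astype(float)     # observed 0/1 outcomes
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    rows = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        upper = prob <= hi if hi >= 1.0 else prob < hi
        mask = (prob >= lo) & upper
        if mask.any():
            rows.append((prob[mask].mean(), event[mask].mean(), int(mask.sum())))
    return rows  # (mean forecast prob, observed frequency, count) per bin

rng = np.random.default_rng(0)
obs = rng.gamma(2.0, 1.0, size=5000)            # synthetic "observed" rain rates
ens = obs + rng.normal(0, 1.0, size=(20, 5000)) # 20-member ensemble around truth
for f, o, n in reliability_table(ens, obs, threshold=2.0):
    print(f"forecast p={f:.2f}  observed freq={o:.2f}  n={n}")
```

A forecast is reliable when the observed frequency in each bin tracks the mean forecast probability; under-dispersive ensembles show up as observed frequencies that are too extreme relative to the forecast probabilities.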

  2. Development and verification of a real-time stochastic precipitation nowcasting system for urban hydrology in Belgium

    NASA Astrophysics Data System (ADS)

    Foresti, L.; Reyniers, M.; Seed, A.; Delobbe, L.

    2015-07-01

The Short-Term Ensemble Prediction System (STEPS) is implemented in real-time at the Royal Meteorological Institute (RMI) of Belgium. The main idea behind STEPS is to quantify the forecast uncertainty by adding stochastic perturbations to the deterministic Lagrangian extrapolation of radar images. The stochastic perturbations are designed to account for the unpredictable precipitation growth and decay processes and to reproduce the dynamic scaling of precipitation fields, i.e., the observation that large-scale rainfall structures are more persistent and predictable than small-scale convective cells. This paper presents the development, adaptation and verification of the STEPS system for Belgium (STEPS-BE). STEPS-BE provides in real-time 20-member ensemble precipitation nowcasts at 1 km and 5 min resolution up to 2 h lead time using a 4 C-band radar composite as input. In the context of the PLURISK project, STEPS forecasts were generated to be used as input in sewer system hydraulic models for nowcasting urban inundations in the cities of Ghent and Leuven. Comprehensive forecast verification was performed in order to detect systematic biases over the given urban areas and to analyze the reliability of probabilistic forecasts for a set of case studies in 2013 and 2014. The forecast biases over the cities of Leuven and Ghent were found to be small, which is encouraging for future integration of STEPS nowcasts into the hydraulic models. Probabilistic forecasts of exceeding 0.5 mm h-1 are reliable up to 60-90 min lead time, while the ones of exceeding 5.0 mm h-1 are only reliable up to 30 min. The STEPS ensembles are slightly under-dispersive and represent only 80-90 % of the forecast errors.

  3. AIR QUALITY FORECAST VERIFICATION USING SATELLITE DATA

    EPA Science Inventory

    NOAA 's operational geostationary satellite retrievals of aerosol optical depths (AODs) were used to verify National Weather Service (NWS) experimental (research mode) particulate matter (PM2.5) forecast guidance issued during the summer 2004 International Consortium for Atmosp...

  4. A New Approach in Generating Meteorological Forecasts for Ensemble Streamflow Forecasting using Multivariate Functions

    NASA Astrophysics Data System (ADS)

    Khajehei, S.; Madadgar, S.; Moradkhani, H.

    2014-12-01

The reliability and accuracy of hydrological predictions are subject to various sources of uncertainty, including meteorological forcing, initial conditions, model parameters and model structure. To reduce the total uncertainty in hydrological applications, one approach is to reduce the uncertainty in meteorological forcing by using statistical methods based on conditional probability density functions (pdfs). However, one of the requirements of current methods is to assume a Gaussian distribution for the marginal distributions of the observed and modeled meteorology. Here we propose a Bayesian approach based on copula functions to develop the conditional distribution of the precipitation forecast needed to drive a hydrologic model for a sub-basin in the Columbia River Basin. Copula functions are introduced as an alternative approach for capturing the uncertainties related to meteorological forcing. Copulas are multivariate joint distributions of univariate marginal distributions, capable of modeling the joint behavior of variables with any level of correlation and dependency. The method is applied to the monthly CPC forecast at 0.25x0.25 degree resolution to reproduce the PRISM dataset over 1970-2000. Results are compared with the Ensemble Pre-Processor approach, a common procedure used by National Weather Service River Forecast Centers in reproducing observed climatology, during a ten-year verification period (2000-2010).
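A minimal sketch of the copula idea, assuming a Gaussian copula with empirical margins on synthetic forecast-observation pairs; the paper's actual copula family, marginals, and Bayesian fitting procedure may differ:

```python
import numpy as np
from scipy import stats

# Fit a bivariate Gaussian copula linking observed (x) and forecast (y)
# precipitation, then sample the conditional distribution p(x | y = y0).
rng = np.random.default_rng(1)
y = rng.gamma(2.0, 2.0, 2000)               # synthetic single-value forecasts
x = 0.8 * y + rng.gamma(1.5, 1.0, 2000)     # synthetic paired observations

def to_normal_scores(v):
    """Empirical CDF transform to uniform margins, then to standard normal."""
    ranks = stats.rankdata(v) / (len(v) + 1.0)
    return stats.norm.ppf(ranks)

zx, zy = to_normal_scores(x), to_normal_scores(y)
rho = np.corrcoef(zx, zy)[0, 1]             # copula correlation parameter

# In normal-score space the conditional is zx | zy ~ N(rho*zy, 1 - rho^2)
y0 = 6.0                                    # a new single-value forecast
zy0 = stats.norm.ppf(stats.percentileofscore(y, y0) / 100.0)
z_samples = rng.normal(rho * zy0, np.sqrt(1 - rho**2), 50)

# Map samples back through the empirical quantile function of x
u = stats.norm.cdf(z_samples)
ensemble = np.quantile(x, u)                # 50-member conditional ensemble
print(ensemble.mean(), ensemble.std())
```

The key point is that the marginals never have to be Gaussian; only the dependence structure is modeled in normal-score space, which is what lets copulas relax the Gaussian-marginal assumption of the earlier methods.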

  5. 75 FR 69036 - Notice of Data Availability Regarding Potential Changes to Required Ozone Monitoring Seasons for...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-11-10

    ... is specifically considering how these more recent data could impact changes to the current and... reporting to the public, ozone forecasting programs, and the verification of real-time air quality forecast...

  6. A New Multivariate Approach in Generating Ensemble Meteorological Forcings for Hydrological Forecasting

    NASA Astrophysics Data System (ADS)

    Khajehei, Sepideh; Moradkhani, Hamid

    2015-04-01

Producing reliable and accurate hydrologic ensemble forecasts is subject to various sources of uncertainty, including meteorological forcing, initial conditions, model structure, and model parameters. Producing reliable and skillful precipitation ensemble forecasts is one approach to reducing the total uncertainty in hydrological applications. Currently, Numerical Weather Prediction (NWP) models produce ensemble forecasts over various temporal ranges, but raw NWP products are known to be biased in both mean and spread. There is therefore a need for methods that can generate reliable ensemble forecasts for hydrological applications. One common technique is to apply statistical procedures to generate an ensemble forecast from NWP-generated single-value forecasts. The procedure is based on the bivariate probability distribution between the observation and the single-value precipitation forecast. However, one of the assumptions of the current method is fitting a Gaussian distribution to the marginal distributions of the observed and modeled climate variable. Here, we describe and evaluate a Bayesian approach based on copula functions to develop an ensemble precipitation forecast from the conditional distribution of single-value precipitation forecasts. Copula functions are the multivariate joint distributions of univariate marginal distributions, presented here as an alternative procedure for capturing the uncertainties related to meteorological forcing. Copulas are capable of modeling the joint distribution of two variables with any level of correlation and dependency. This study is conducted over a sub-basin in the Columbia River Basin in the USA, using monthly precipitation forecasts from the Climate Forecast System (CFS) at 0.5x0.5 degree spatial resolution to reproduce the observations. 
The verification is conducted on a separate period, and the procedure is compared with the Ensemble Pre-Processor approach currently used by National Weather Service River Forecast Centers in the USA.

  7. Convective Weather Forecast Quality Metrics for Air Traffic Management Decision-Making

    NASA Technical Reports Server (NTRS)

    Chatterji, Gano B.; Gyarfas, Brett; Chan, William N.; Meyn, Larry A.

    2006-01-01

Since numerical weather prediction models are unable to accurately forecast the severity and the location of storm cells several hours into the future when compared with observation data, there has been growing interest in probabilistic descriptions of convective weather. The classical approach for generating uncertainty bounds consists of integrating the state equations and covariance propagation equations forward in time. This step is readily recognized as the process update step of the Kalman Filter algorithm. The second well-known method, the Monte Carlo method, consists of generating output samples by driving the forecast algorithm with input samples selected from distributions. The statistical properties of the distributions of the output samples are then used for defining the uncertainty bounds of the output variables. This method is computationally expensive for a complex model compared to the covariance propagation method. The main advantage of the Monte Carlo method is that a complex non-linear model can be easily handled. Recently, a few different methods for probabilistic forecasting have appeared in the literature. A method for computing the probability of convection in a region using forecast data is described in Ref. 5. Probability at a grid location is computed as the fraction of grid points, within a box of specified dimensions around the grid location, with forecast convective precipitation exceeding a specified threshold. The main limitation of this method is that the results are dependent on the chosen dimensions of the box. The examples presented in Ref. 5 show that this process is equivalent to low-pass filtering of the forecast data with a finite-support spatial filter. References 6 and 7 describe a technique for computing percentage coverage within a 92 x 92 square-kilometer box and assigning the value to the center 4 x 4 square-kilometer box. This technique is the same as that described in Ref. 5. 
Characterizing the forecast, following the process described in Refs. 5 through 7, in terms of percentage coverage or confidence level is notionally sound compared to characterizing it in terms of probabilities, because the probability of the forecast being correct can only be determined using actual observations; Refs. 5 through 7 use only the forecast data and not the observations. The method for computing the probability of detection, false alarm ratio and several forecast quality metrics (skill scores) using both the forecast and observation data is given in Ref. 2. This paper extends the statistical verification method in Ref. 2 to determine co-occurrence probabilities. The method consists of computing the probability that a severe weather cell (grid location) is detected in the observation data in the neighborhood of the severe weather cell in the forecast data. Probabilities of occurrence at the grid location and in its neighborhood with higher severity, and with lower severity, in the observation data compared to that in the forecast data are examined. The method proposed in Refs. 5 through 7 is used for computing the probability that a certain number of cells in the neighborhood of severe weather cells in the forecast data are seen as severe weather cells in the observation data. Finally, the probability of existence of gaps in the observation data in the neighborhood of severe weather cells in the forecast data is computed. Gaps are defined as openings between severe weather cells through which an aircraft can safely fly to its intended destination. The rest of the paper is organized as follows. Section II summarizes the statistical verification method described in Ref. 2. The extension of this method for computing the co-occurrence probabilities is discussed in Section III. Numerical examples using NCWF forecast data and NCWD observation data are presented in Section III to elucidate the characteristics of the co-occurrence probabilities. 
This section also discusses the procedure for computing the probabilities that the severity of convection in the observation data will be higher or lower in the neighborhood of grid locations compared to that indicated at the grid locations in the forecast data. The probability of coverage of neighborhood grid cells is also described via examples in this section. Section IV discusses the gap detection algorithm and presents a numerical example to illustrate the method. The locations of the detected gaps in the observation data are used along with the locations of convective weather cells in the forecast data to determine the probability of existence of gaps in the neighborhood of these cells. Finally, the paper is concluded in Section V.
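The neighborhood-fraction computation attributed to Ref. 5 above (probability at a grid location as the fraction of surrounding points exceeding a threshold, equivalent to box filtering the binary exceedance field) can be sketched as:

```python
import numpy as np

def neighborhood_probability(field, threshold, half_width):
    """Fraction of grid points exceeding `threshold` in a
    (2*half_width+1)^2 box around each point; equivalent to
    low-pass filtering the binary exceedance field with a box filter."""
    binary = (field > threshold).astype(float)
    k = 2 * half_width + 1
    # Pad with edge values so every point has a full box, then use a
    # summed-area table (double cumulative sum) for fast window sums.
    padded = np.pad(binary, half_width, mode="edge")
    cs = padded.cumsum(axis=0).cumsum(axis=1)
    cs = np.pad(cs, ((1, 0), (1, 0)))           # prepend zeros for window sums
    total = cs[k:, k:] - cs[:-k, k:] - cs[k:, :-k] + cs[:-k, :-k]
    return total / (k * k)

rng = np.random.default_rng(2)
forecast = rng.gamma(1.0, 5.0, size=(50, 50))   # synthetic precipitation field
prob = neighborhood_probability(forecast, threshold=10.0, half_width=2)
assert prob.shape == forecast.shape
```

As the abstract notes, the result depends on the chosen box size: a larger `half_width` smooths the probability field further, which is exactly the low-pass-filter interpretation.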

  8. Near-real-time Estimation and Forecast of Total Precipitable Water in Europe

    NASA Astrophysics Data System (ADS)

    Bartholy, J.; Kern, A.; Barcza, Z.; Pongracz, R.; Ihasz, I.; Kovacs, R.; Ferencz, C.

    2013-12-01

Information about the amount and spatial distribution of atmospheric water vapor (or total precipitable water, TPW) is essential for understanding weather and the environment, including the greenhouse effect, the climate system with its feedbacks, and the hydrological cycle. Numerical weather prediction (NWP) models need accurate estimates of water vapor content to provide realistic forecasts, including the representation of clouds and precipitation. In the present study we introduce our research activity on the estimation and forecasting of atmospheric water vapor in Central Europe using both observations and models. The Eötvös Loránd University (Hungary) has operated a polar-orbiting satellite receiving station in Budapest since 2002. This station receives Earth observation data from polar-orbiting satellites, including the MODerate resolution Imaging Spectroradiometer (MODIS) Direct Broadcast (DB) data stream from the Terra and Aqua satellites. The received DB MODIS data are automatically processed using freely distributed software packages. Using the IMAPP Level 2 software, TPW is calculated operationally using two different methods. The quality of the TPW estimates is crucial for further application of the results, so a validation of the remotely sensed TPW fields against radiosonde data is presented. In a current research project in Hungary we aim to compare different estimates of atmospheric water vapor content. Within the frame of the project we use an NWP model (DBCRAS, the Direct Broadcast CIMSS Regional Assimilation System numerical weather prediction software developed by the University of Wisconsin-Madison) to forecast TPW. DBCRAS uses near-real-time Level 2 products from the MODIS data processing chain. 
From the wide range of derived Level 2 products, the MODIS TPW parameter from the so-called mod07 results (Atmospheric Profiles Product), together with the cloud top pressure and cloud effective emissivity parameters from the so-called mod06 results (Cloud Product), are assimilated twice a day (at 00 and 12 UTC) by DBCRAS. DBCRAS creates 72-hour weather forecasts at 48 km horizontal resolution. DBCRAS has been operational at the University since 2009, which means that sufficient data are now available for verification of the model. In the present study, verification results for the DBCRAS total precipitable water forecasts are presented based on analysis data from the European Centre for Medium-Range Weather Forecasts (ECMWF). Numerical indices are calculated to quantify the performance of DBCRAS. During a limited time period DBCRAS was also run without assimilating MODIS products, making it possible to quantify the effect of assimilating MODIS physical products on the quality of the forecasts. For this limited time period, verification indices are compared to decide whether the MODIS data improve forecast quality.

  9. Verification and implementation of microburst day potential index (MDPI) and wind INDEX (WINDEX) forecasting tools at Cape Canaveral Air Station

    NASA Technical Reports Server (NTRS)

    Wheeler, Mark

    1996-01-01

This report details the research, development, verification and transition work on wet microburst forecasting and detection that the Applied Meteorology Unit (AMU) performed in support of ground and launch operations at Kennedy Space Center (KSC) and Cape Canaveral Air Station (CCAS). The unforecasted wind event of 16 August 1994, with 33.5 m s-1 (65 knot) winds at the Shuttle Landing Facility, raised the issue of wet microburst detection and forecasting. The AMU researched and analyzed the downburst wind event and determined it was a wet microburst event. A program was developed for operational use on the Meteorological Interactive Data Display System (MIDDS) weather system to analyze, compute and display Theta(epsilon) profiles, the microburst day potential index (MDPI), and the wind index (WINDEX) maximum wind gust value. Key microburst nowcasting signatures using the WSR-88D data were highlighted. Verification of the data sets indicated that the MDPI has good potential for alerting the duty forecaster to the possibility of wet microbursts, and that the WINDEX values computed from the hourly surface data show a useful trend for the maximum gust potential. WINDEX should help fill the temporal gap between the MDPI from the last Cape Canaveral rawinsonde and the nowcasting radar data tools.
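As a rough illustration, the WINDEX gust estimate is commonly cited from McCann (1994) in the form below; the coefficients here are quoted from memory of that formulation and should be checked against the original reference (and the AMU's MIDDS implementation) before any operational use:

```python
import math

def windex(hm_km, lapse_c_per_km, ql_gkg, qm_gkg):
    """WINDEX maximum microburst gust estimate in knots, following the
    commonly cited McCann (1994) form. Inputs: melting-level height (km AGL),
    surface-to-melting-level lapse rate (C/km), mean mixing ratio in the
    lowest 1 km (g/kg), and mixing ratio at the melting level (g/kg)."""
    rq = min(ql_gkg / 12.0, 1.0)   # moisture availability factor, capped at 1
    core = lapse_c_per_km**2 - 30.0 + ql_gkg - 2.0 * qm_gkg
    if core <= 0:
        return 0.0                  # no downburst potential indicated
    return 5.0 * math.sqrt(hm_km * rq * core)

# Example: 4 km melting level, 8.5 C/km lapse rate, moist low levels
print(round(windex(4.0, 8.5, 14.0, 4.0), 1))
```

Steep low-level lapse rates and a moist boundary layer beneath a dry mid-level layer drive the index upward, matching the wet-microburst conceptual model the abstract describes.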

  10. Implementation and Verification of the Chen Prediction Technique for Forecasting Large Nonrecurrent Storms*

    NASA Astrophysics Data System (ADS)

    Arge, C. N.; Chen, J.; Slinker, S.; Pizzo, V. J.

    2000-05-01

The method of Chen et al. [1997, JGR, 101, 27499] is designed to accurately identify and predict the occurrence, duration, and strength of large geomagnetic storms using real-time solar wind data. The method estimates the IMF and the geoeffectiveness of the solar wind upstream of a monitor and can provide warning times that range from a few hours to more than 10 hours. The model uses the physical feature of solar wind structures that causes large storms: long durations of southward interplanetary magnetic field. It is currently undergoing testing, improvement, and validation at NOAA/SEC in an effort to transition it into a real-time space weather forecasting tool. The original version of the model has been modified so that it now makes hourly (as opposed to daily) predictions, and it has been improved in an effort to enhance both its predictive capability and reliability. In this paper, we report on the results of a 2-year historical verification study of the model using ACE real-time data. The prediction performances of the original and improved versions of the model are then compared. A real-time prediction web page has been developed and is online at NOAA/SEC. *Work supported by ONR.

  11. Development and Implementation of Dynamic Scripts to Support Local Model Verification at National Weather Service Weather Forecast Offices

    NASA Technical Reports Server (NTRS)

    Zavordsky, Bradley; Case, Jonathan L.; Gotway, John H.; White, Kristopher; Medlin, Jeffrey; Wood, Lance; Radell, Dave

    2014-01-01

    Local modeling with a customized configuration is conducted at National Weather Service (NWS) Weather Forecast Offices (WFOs) to produce high-resolution numerical forecasts that can better simulate local weather phenomena and complement larger scale global and regional models. The advent of the Environmental Modeling System (EMS), which provides a pre-compiled version of the Weather Research and Forecasting (WRF) model and wrapper Perl scripts, has enabled forecasters to easily configure and execute the WRF model on local workstations. NWS WFOs often use EMS output to help in forecasting highly localized, mesoscale features such as convective initiation, the timing and inland extent of lake effect snow bands, lake and sea breezes, and topographically-modified winds. However, quantitatively evaluating model performance to determine errors and biases still proves to be one of the challenges in running a local model. Developed at the National Center for Atmospheric Research (NCAR), the Model Evaluation Tools (MET) verification software makes performing these types of quantitative analyses easier, but operational forecasters do not generally have time to familiarize themselves with navigating the sometimes complex configurations associated with the MET tools. To assist forecasters in running a subset of MET programs and capabilities, the Short-term Prediction Research and Transition (SPoRT) Center has developed and transitioned a set of dynamic, easily configurable Perl scripts to collaborating NWS WFOs. The objective of these scripts is to provide SPoRT collaborating partners in the NWS with the ability to evaluate the skill of their local EMS model runs in near real time with little prior knowledge of the MET package. The ultimate goal is to make these verification scripts available to the broader NWS community in a future version of the EMS software. 
This paper provides an overview of the SPoRT MET scripts, instructions for how the scripts are run, and example use cases.

  12. National Centers for Environmental Prediction

    Science.gov Websites


  13. Development of an Adaptable Display and Diagnostic System for the Evaluation of Tropical Cyclone Forecasts

    NASA Astrophysics Data System (ADS)

    Kucera, P. A.; Burek, T.; Halley-Gotway, J.

    2015-12-01

NCAR's Joint Numerical Testbed Program (JNTP) focuses on the evaluation of experimental forecasts of tropical cyclones (TCs), with the goal of developing new research tools and diagnostic evaluation methods that can be transitioned to operations. Recent activities include the development of new TC forecast verification methods and of an adaptable TC display and diagnostic system. The next-generation display and diagnostic system is being developed to support the evaluation needs of the U.S. National Hurricane Center (NHC) and the broader TC research community. The new hurricane display and diagnostic capabilities allow forecasters and research scientists to examine the performance of operational and experimental models in greater depth. The system is built upon modern and flexible technology, including platform-independent OpenLayers mapping tools. The forecast track and intensity, along with the associated observed track information, are stored in an efficient MySQL database. The system provides an easy-to-use interactive display and diagnostic tools to examine forecast tracks stratified by intensity. Consensus forecasts can be computed and displayed interactively. The system is designed to display information for both real-time and historical TCs. The display configurations are easily adaptable to end-user preferences. Ongoing enhancements include improved capabilities for stratification and evaluation of historical best tracks, development and implementation of additional methods to stratify and compute consensus hurricane track and intensity forecasts, and improved graphical display tools. The display is also being enhanced to incorporate gridded forecast, satellite, and sea surface temperature fields. The presentation will provide an overview of the display and diagnostic system development and a demonstration of its current capabilities.

  14. A COMPARISON OF FLARE FORECASTING METHODS. I. RESULTS FROM THE “ALL-CLEAR” WORKSHOP

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Barnes, G.; Leka, K. D.; Dunn, T.

    2016-10-01

Solar flares produce radiation that can have an almost immediate effect on the near-Earth environment, making it crucial to forecast flares in order to mitigate their negative effects. The number of published approaches to flare forecasting using photospheric magnetic field observations has proliferated, with varying claims about how well each works. Because of the different analysis techniques and data sets used, it is essentially impossible to compare the results from the literature. This problem is exacerbated by the low event rates of large solar flares. The challenges of forecasting rare events have long been recognized in the meteorology community, but have yet to be fully acknowledged by the space weather community. During the interagency workshop on “all clear” forecasts held in Boulder, CO in 2009, the performance of a number of existing algorithms was compared on common data sets, specifically line-of-sight magnetic field and continuum intensity images from the Michelson Doppler Imager, with consistent definitions of what constitutes an event. We demonstrate the importance of making such systematic comparisons, and of using standard verification statistics to determine what constitutes a good prediction scheme. When a comparison was made in this fashion, no one method clearly outperformed all others, which may in part be due to the strong correlations among the parameters used by different methods to characterize an active region. For M-class flares and above, the set of methods tends toward a weakly positive skill score (as measured with several distinct metrics), with no participating method proving substantially better than climatological forecasts.
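The standard verification statistics used in such comparisons follow directly from a 2x2 contingency table of forecast versus observed events; a sketch with illustrative counts:

```python
def dichotomous_scores(hits, misses, false_alarms, correct_negatives):
    """Standard verification measures from a 2x2 contingency table."""
    h, m, f, c = hits, misses, false_alarms, correct_negatives
    pod = h / (h + m)            # probability of detection
    far = f / (h + f)            # false alarm ratio
    csi = h / (h + m + f)        # critical success index
    bias = (h + f) / (h + m)     # frequency bias
    pofd = f / (f + c)           # probability of false detection
    peirce = pod - pofd          # Peirce (Hanssen-Kuipers) skill score
    return {"POD": pod, "FAR": far, "CSI": csi, "bias": bias, "PSS": peirce}

# Illustrative counts for a rare-event forecast (not workshop data)
scores = dichotomous_scores(hits=82, misses=38, false_alarms=23, correct_negatives=222)
for name, value in scores.items():
    print(f"{name}: {value:.3f}")
```

For rare events such as large flares, scores like the Peirce skill score are preferred over raw accuracy because a trivial "never flare" forecast already scores high on proportion correct.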

  15. VERIFICATION OF SURFACE LAYER OZONE FORECASTS IN THE NOAA/EPA AIR QUALITY FORECAST SYSTEM IN DIFFERENT REGIONS UNDER DIFFERENT SYNOPTIC SCENARIOS

    EPA Science Inventory

    An air quality forecast (AQF) system has been established at NOAA/NCEP since 2003 as a collaborative effort of NOAA and EPA. The system is based on NCEP's Eta mesoscale meteorological model and EPA's CMAQ air quality model (Davidson et al, 2004). The vision behind this system is ...

  16. The impact of satellite temperature soundings on the forecasts of a small national meteorological service

    NASA Technical Reports Server (NTRS)

    Wolfson, N.; Thomasell, A.; Alperson, Z.; Brodrick, H.; Chang, J. T.; Gruber, A.; Ohring, G.

    1984-01-01

    The impact of introducing satellite temperature sounding data on a numerical weather prediction model of a national weather service is evaluated. A dry, five-level primitive-equation model, which covers most of the Northern Hemisphere, is used for these experiments. Series of parallel forecast runs out to 48 hours are made with three different sets of initial conditions: (1) NOSAT runs, in which only conventional surface and upper air observations are used; (2) SAT runs, in which satellite soundings are added to the conventional data over oceanic regions and North Africa; and (3) ALLSAT runs, in which the conventional upper air observations are replaced by satellite soundings over the entire model domain. The impact on the forecasts is evaluated by three verification methods: the RMS errors in sea level pressure forecasts, systematic errors in sea level pressure forecasts, and errors in subjective forecasts of significant weather elements for a selected portion of the model domain. For the relatively short range of the present forecasts, the major beneficial impacts on the sea level pressure forecasts are found precisely in those areas where the satellite soundings are inserted and where conventional upper air observations are sparse. The RMS and systematic errors are reduced in these regions. The subjective forecasts of significant weather elements are improved with the use of the satellite data. It is found that the ALLSAT forecasts are of a quality comparable to the SAT forecasts.

  17. Visualization and Nowcasting for Aviation using online verified ensemble weather radar extrapolation.

    NASA Astrophysics Data System (ADS)

    Kaltenboeck, Rudolf; Kerschbaum, Markus; Hennermann, Karin; Mayer, Stefan

    2013-04-01

    Nowcasting of precipitation events, especially thunderstorm events or winter storms, has a high impact on flight safety and efficiency for air traffic management. Future strategic planning by air traffic control will result in circumnavigation of potentially hazardous areas, reduction of load around efficiency hot spots by offering alternatives, increased handling capacity, anticipation of avoidance manoeuvres, and increased awareness before dangerous areas are entered by aircraft. To facilitate this, rapid-update forecasts of the location, intensity, size, movement and development of local storms are necessary. Weather radar data deliver precipitation analyses of high temporal and spatial resolution close to real time by using clever scanning strategies. These data are the basis for generating rapid-update forecasts in a time frame of up to 2 hours and more for applications in aviation meteorological service provision, such as optimizing safety and economic impact in the context of sub-scale phenomena. On the basis of tracking radar echoes by correlation, the movement vectors of successive weather radar images are calculated. For every new radar image, a set of ensemble precipitation fields is collected by using different parameter sets, such as pattern-match size, different time steps, filter methods, and an implementation of the history of tracking vectors and plausibility checks. This method accounts for the uncertainty in rain-field displacement and for different scales in time and space. By manually validating a set of case studies, the best verification method and skill score are defined and implemented into an online verification scheme which calculates the optimized forecasts for different time steps and different areas by using different extrapolation ensemble members. To obtain information about the quality and reliability of the extrapolation process, additional data-quality information (e.g. shielding in Alpine areas) is extrapolated and combined into an extrapolation-quality index. Subsequently, the probability and quality information of the forecast ensemble is available, and flexible blending into a numerical prediction model for each subarea is possible. Simultaneously with the automatic processing, the ensemble nowcasting product is visualized in a new, innovative way which combines the intensity, probability and quality information for different subareas in one forecast image.

  18. Space Weather Forecasting at IZMIRAN

    NASA Astrophysics Data System (ADS)

    Gaidash, S. P.; Belov, A. V.; Abunina, M. A.; Abunin, A. A.

    2017-12-01

    Since 1998, the Institute of Terrestrial Magnetism, Ionosphere, and Radio Wave Propagation (IZMIRAN) has had an operating heliogeophysical service—the Center for Space Weather Forecasts. This center transfers the results of basic research in solar-terrestrial physics into daily forecasting of various space weather parameters for various lead times. The forecasts are promptly available to interested consumers. This article describes the center and the main types of forecasts it provides: solar and geomagnetic activity, magnetospheric electron fluxes, and probabilities of proton increases. The challenges associated with the forecasting of effects of coronal mass ejections and coronal holes are discussed. Verification data are provided for the center's forecasts.

  19. Value of the GENS Forecast Ensemble as a Tool for Adaptation of Economic Activity to Climate Change

    NASA Astrophysics Data System (ADS)

    Hancock, L. O.; Alpert, J. C.; Kordzakhia, M.

    2009-12-01

    In an atmosphere of uncertainty as to the magnitude and direction of climate change in upcoming decades, one adaptation mechanism has emerged with consensus support: the upgrade and dissemination of spatially-resolved, accurate forecasts tailored to the needs of users. Forecasting can facilitate the changeover from dependence on climatology that is increasingly out of date. The best forecasters are local, but local forecasters face great constraints in some countries. Indeed, it is no coincidence that some areas subject to great weather variability and strong processes of climate change are economically vulnerable: mountainous regions, for example, where heavy and erratic flooding can destroy the value built up by households over years. It follows that those best placed to benefit from forecasting upgrades may not be those who have invested in the greatest capacity to date. More-flexible use of the global forecasts may contribute to adaptation. NOAA anticipated several years ago that their forecasts could be used in new ways in the future, and accordingly prepared sockets for easy access to their archives. These could be used to empower various national and regional capacities. Verification to identify practical lead times for the economically important variables is a needed first step. This presentation describes the verification that our team has undertaken, a pilot effort in which we considered variables of interest to economic actors in several lower-income countries, e.g., shepherds in a remote area of Central Asia, and verified the ensemble forecasts of those variables.

  20. NCEP Air Quality Forecast (AQF) Verification. NOAA/NWS/NCEP/EMC

    Science.gov Websites


  1. Exploring the calibration of a wind forecast ensemble for energy applications

    NASA Astrophysics Data System (ADS)

    Heppelmann, Tobias; Ben Bouallegue, Zied; Theis, Susanne

    2015-04-01

    In the German research project EWeLiNE, Deutscher Wetterdienst (DWD) and the Fraunhofer Institute for Wind Energy and Energy System Technology (IWES) are collaborating with three German Transmission System Operators (TSOs) in order to provide the TSOs with improved probabilistic power forecasts. Probabilistic power forecasts are derived from probabilistic weather forecasts, themselves derived from ensemble prediction systems (EPS). Since the considered raw ensemble wind forecasts suffer from underdispersiveness and bias, calibration methods are developed for the correction of the model bias and the ensemble spread bias. The overall aim is to improve the ensemble forecasts such that the uncertainty of the possible weather development is depicted by the ensemble spread from the first forecast hours onward. Additionally, the ensemble members after calibration should remain physically consistent scenarios. We focus on probabilistic hourly wind forecasts with a horizon of 21 h delivered by the convection-permitting high-resolution ensemble system COSMO-DE-EPS, which became operational at DWD in 2012. The ensemble consists of 20 members driven by four different global models. The model area includes the whole of Germany and parts of Central Europe, with a horizontal resolution of 2.8 km and a vertical resolution of 50 model levels. For verification we use wind-mast measurements at around 100 m height, which corresponds to the hub height of the wind turbines in wind farms within the model area. Calibration of the ensemble forecasts can be performed by different statistical methods applied to the raw ensemble output. Here, we explore local bivariate Ensemble Model Output Statistics at individual sites and quantile regression with different predictors. Applying these methods, we show an improvement of ensemble wind forecasts from COSMO-DE-EPS for energy applications.
In addition, an ensemble copula coupling approach transfers the time-dependencies of the raw ensemble to the calibrated ensemble. The calibrated wind forecasts are evaluated first with univariate probabilistic scores and additionally with diagnostics of wind ramps in order to assess the time-consistency of the calibrated ensemble members.
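Ensemble calibration of the general kind described above can be illustrated with a toy sketch: each member's deviation from the ensemble mean is inflated and a bias is removed. This is only a stand-in for EMOS-style calibration; the bias, spread factor, and wind values below are invented (in practice both corrections are fitted on a training period):

```python
def calibrate_members(members, bias, spread_factor):
    """Debias an ensemble and inflate its spread about the ensemble mean.

    A toy stand-in for EMOS-style calibration: `bias` and `spread_factor`
    would normally be estimated from a training period; here they are fixed.
    Members keep their rank order, so they remain consistent scenarios.
    """
    mean = sum(members) / len(members)
    center = mean - bias                       # remove the systematic bias
    return [center + spread_factor * (m - mean) for m in members]

# Invented 100 m wind speeds (m/s) from an underdispersive ensemble
raw = [8.1, 8.3, 8.4, 8.6, 8.7]
cal = calibrate_members(raw, bias=-0.5, spread_factor=1.8)
print([round(m, 2) for m in cal])  # shifted by +0.5 m/s, spread widened 1.8x
```

Because the transformation is monotonic, the calibrated members preserve the rank structure of the raw ensemble, which is the property the copula-coupling step relies on.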

  2. Application of a Fuzzy Verification Technique for Assessment of the Weather Running Estimate-Nowcast (WRE-N) Model

    DTIC Science & Technology

    2016-10-01

    comes when considering numerous scores and statistics during a preliminary evaluation of the applicability of the fuzzy-verification minimum coverage... The selection of thresholds with which to generate categorical-verification scores and statistics from the application of both traditional and... of statistically significant numbers of cases; the latter presents a challenge of limited application for assessment of the forecast models’ ability

  3. The surface drifter program for real time and off-line validation of ocean forecasts and reanalyses

    NASA Astrophysics Data System (ADS)

    Hernandez, Fabrice; Regnier, Charly; Drévillon, Marie

    2017-04-01

    As part of the Global Ocean Observing System, the Global Drifter Program (GDP) comprises an array of about 1250 drifting buoys spread over the global ocean that provide operational, near-real-time surface velocity, sea surface temperature (SST) and sea-level pressure observations. This information is mainly used for numerical weather forecasting, research, and in-situ calibration/verification of satellite observations. Since 2013, the drifting-buoy SST measurements have been used for near-real-time assessment of global forecasting systems from Canada, France, the UK, the USA and Australia in the framework of the GODAE OceanView Intercomparison and Validation Task. For most of these operational systems, these data are not assimilated and therefore offer an independent observational assessment. This approach mimics the validation performed for SST satellite products. More recently, validation procedures have been proposed in order to assess the surface dynamics of Mercator Océan global and regional forecasts and reanalyses. Velocities deduced from drifter trajectories are used in two ways. First, in the Eulerian approach, buoy and ocean-model velocity values are compared at the positions of the drifters; statistics computed from the discrepancies provide an evaluation of the reliability of the ocean model's surface dynamics. Second, in the Lagrangian approach, drifter trajectories are simulated at each location of the real drifter trajectory using the ocean-model velocity fields; then, on a daily basis, real and simulated drifter trajectories are compared by analyzing the spread after one day, two days, etc. The cumulated statistics over specific geographical boxes are evaluated in terms of the dispersion properties of the "real ocean" as captured by the drifters, and of those same properties in the ocean model.
    This approach allows better evaluation of forecast skill for surface-dispersion applications, such as search and rescue, oil-spill forecasting, drift of other objects or contaminants, larval dispersion, etc. These Eulerian and Lagrangian validation approaches can be applied for real-time or offline assessment of ocean velocity products. In real time, the main limitation is our capability to detect the loss of a drifter's drogue, which causes erroneous assessments. Several methods are used for this, e.g., comparison with the expected wind entrainment or with other velocity estimates such as those from satellite altimetry. These Eulerian and Lagrangian surface-velocity validation methods are planned to be adopted by the GODAE OceanView operational community in order to offer independent verification of surface current forecasts.
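The Lagrangian comparison reduces, at each lead time, to a great-circle separation between the observed drifter position and its model-advected counterpart. A toy sketch (all positions below are invented; a real implementation would advect the virtual drifter with the model velocity fields):

```python
import math

def separation_km(lat1, lon1, lat2, lon2):
    """Great-circle (haversine) distance between two positions, in km."""
    r = 6371.0  # mean Earth radius, km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

# Observed vs. model-advected drifter positions at 0, 1 and 2 days (invented)
observed  = [(45.00, -30.00), (45.10, -29.70), (45.25, -29.35)]
simulated = [(45.00, -30.00), (45.05, -29.80), (45.10, -29.60)]
spreads = [separation_km(o[0], o[1], s[0], s[1])
           for o, s in zip(observed, simulated)]
print([round(d, 1) for d in spreads])  # separation grows with lead time
```

Averaging such separations over many drifters in a geographical box gives the dispersion statistic described in the abstract.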

  4. Skill of Predicting Heavy Rainfall Over India: Improvement in Recent Years Using UKMO Global Model

    NASA Astrophysics Data System (ADS)

    Sharma, Kuldeep; Ashrit, Raghavendra; Bhatla, R.; Mitra, A. K.; Iyengar, G. R.; Rajagopal, E. N.

    2017-11-01

    The quantitative precipitation forecast (QPF) performance for heavy rains is still a challenge, even for the most advanced state-of-the-art high-resolution Numerical Weather Prediction (NWP) modeling systems. This study aims to evaluate the performance of the UK Met Office Unified Model (UKMO) over India for the prediction of high rainfall amounts (>2 and >5 cm/day) during the monsoon period (JJAS) from 2007 to 2015 in short-range forecasts up to Day 3. Among the various modeling upgrades and improvements in the parameterizations during this period, the model horizontal resolution improved from 40 km in 2007 to 17 km in 2015. The skill of short-range rainfall forecasts in the UKMO model has improved in recent years, mainly due to increased horizontal and vertical resolution along with improved physics schemes. Categorical verification carried out using four verification metrics, namely probability of detection (POD), false alarm ratio (FAR), frequency bias (Bias) and Critical Success Index, indicates that QPF has improved by >29% (POD) and >24% (FAR). Additionally, verification scores like EDS (Extreme Dependency Score), EDI (Extremal Dependence Index) and SEDI (Symmetric EDI) are used, with special emphasis on the verification of extreme and rare rainfall events. These scores also show an improvement, by 60% (EDS) and >34% (EDI and SEDI), during the period of study, suggesting an improved skill in predicting heavy rains.
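The contingency-table scores named above can all be computed from the four counts of a 2x2 table. A minimal sketch (the counts are invented for illustration, not taken from the UKMO study):

```python
import math

def categorical_scores(hits, false_alarms, misses, correct_negs):
    """POD, FAR, frequency bias, CSI and SEDI from a 2x2 contingency table."""
    pod  = hits / (hits + misses)                        # probability of detection
    far  = false_alarms / (hits + false_alarms)          # false alarm ratio
    bias = (hits + false_alarms) / (hits + misses)       # frequency bias
    csi  = hits / (hits + false_alarms + misses)         # critical success index
    f    = false_alarms / (false_alarms + correct_negs)  # false alarm *rate*
    # Symmetric extremal dependence index (Ferro & Stephenson, 2011)
    sedi = ((math.log(f) - math.log(pod) - math.log(1 - f) + math.log(1 - pod))
            / (math.log(f) + math.log(pod) + math.log(1 - f) + math.log(1 - pod)))
    return pod, far, bias, csi, sedi

# Invented counts for ">5 cm/day" rain events over a verification domain
pod, far, bias, csi, sedi = categorical_scores(20, 10, 5, 65)
print(round(pod, 3), round(far, 3), round(bias, 2), round(csi, 3), round(sedi, 3))
# 0.8 0.333 1.2 0.571 0.816
```

Unlike POD and CSI, SEDI is designed to remain informative as the event becomes rare, which is why it is favored for extreme-rainfall verification.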

  5. Evaluating the improvements of the BOLAM meteorological model operational at ISPRA: A case study approach - preliminary results

    NASA Astrophysics Data System (ADS)

    Mariani, S.; Casaioli, M.; Lastoria, B.; Accadia, C.; Flavoni, S.

    2009-04-01

    The Institute for Environmental Protection and Research - ISPRA (formerly the Agency for Environmental Protection and Technical Services - APAT) has operationally run, since 2000, an integrated meteo-marine forecasting chain named the Hydro-Meteo-Marine Forecasting System (Sistema Idro-Meteo-Mare - SIMM), formed by a cascade of four numerical models telescoping from the Mediterranean basin to the Venice Lagoon and initialized with analyses and forecasts from the European Centre for Medium-Range Weather Forecasts (ECMWF). The operational integrated system consists of a meteorological model, the parallel version of the BOlogna Limited Area Model (BOLAM), coupled over the Mediterranean Sea with a WAve Model (WAM); a high-resolution shallow-water model of the Adriatic and Ionian Seas, namely the Princeton Ocean Model (POM); and a finite-element version of the same model (VL-FEM) on the Venice Lagoon, aimed at forecasting the acqua alta events. Recently, the physically based, fully distributed, rainfall-runoff TOPographic Kinematic APproximation and Integration (TOPKAPI) model has been integrated into the system, coupled to BOLAM, over two river basins located in the central and northeastern parts of Italy, respectively. However, at the present time, this latter part of the forecasting chain is not operational and is used in a research configuration. BOLAM was originally implemented in 2000 on the Quadrics parallel supercomputer (and for this reason is also referred to as QBOLAM); only at the end of 2006 was it ported (together with the other operational marine models of the forecasting chain) onto a Silicon Graphics Inc. (SGI) Altix 8-processor machine. In particular, due to the Quadrics implementation, the Kuo scheme was originally used in QBOLAM for the cumulus convection parameterization.
    By contrast, when porting SIMM onto the Altix Linux cluster, it became possible to implement in QBOLAM the more advanced convection parameterization of Kain and Fritsch. A fully updated serial version of the BOLAM code has recently been acquired. Code improvements include a more accurate advection scheme (Weighted Average Flux), explicit advection of five hydrometeors, and state-of-the-art parameterization schemes for radiation, convection, boundary-layer turbulence and soil processes (with a possible choice among different available schemes). The operational implementation of the new code into the SIMM model chain, which requires the development of a parallel version, will be achieved during 2009. In view of this goal, the comparative verification of the skill of the different model versions represents a fundamental task. For this purpose, it was decided to evaluate the performance improvement of the new BOLAM code (in the available serial version, hereinafter BOLAM 2007) with respect to the version with the Kain-Fritsch scheme (hereinafter KF version) and to the older one employing the Kuo scheme (hereinafter Kuo version). In the present work, verification of precipitation forecasts from the three BOLAM versions is carried out using a case-study approach. The intense rainfall episode that occurred over Italy on 10-17 December 2008 is considered. This event indeed produced severe damage in Rome and its surrounding areas. Objective and subjective verification methods have been employed in order to evaluate model performance against an observational dataset including rain gauge observations and satellite imagery. Subjective comparison of observed and forecast precipitation fields is suitable for giving an overall description of the forecast quality. Spatial errors (e.g., shifting and pattern errors) and rainfall volume errors can be assessed quantitatively by means of object-oriented methods.
    By comparing satellite images with model forecast fields, it is possible to investigate the differences between the evolution of the observed weather system and the predicted one, and their sensitivity to the improvements in the model code. Finally, the error in forecasting the cyclone evolution can be tentatively related to the precipitation forecast error.

  6. Forbush Decrease Prediction Based on Remote Solar Observations

    NASA Astrophysics Data System (ADS)

    Dumbovic, Mateja; Vrsnak, Bojan; Calogovic, Jasa

    2016-04-01

    We study the relation between remote observations of coronal mass ejections (CMEs), their associated solar flares, and short-term depressions in the galactic cosmic-ray flux (so-called Forbush decreases). Statistical relations between Forbush decrease magnitude and several CME/flare parameters are examined. In general, we find that the Forbush decrease magnitude is larger for faster CMEs with larger apparent width, associated with stronger flares, originating close to the center of the solar disk, and (possibly) involved in a CME-CME interaction. The statistical relations are quantified and employed to forecast the expected Forbush decrease magnitude range based on the selected remote solar observations of the CME and associated solar flare. Several verification measures are used to evaluate the forecast method. We find that the forecast is most reliable in predicting whether or not a CME will produce a Forbush decrease with a magnitude >3 %. The main advantage of the method is that it provides an early prediction, 1-4 days in advance. Based on the presented research, an online forecast tool was developed (Forbush Decrease Forecast Tool, FDFT), available at the Hvar Observatory web page: http://oh.geof.unizg.hr/FDFT/fdft.php. We acknowledge the support of the Croatian Science Foundation under the project 6212 "Solar and Stellar Variability" and of the European Social Fund under the project "PoKRet".

  7. Real-Time Kennedy Space Center and Cape Canaveral Air Force Station High-Resolution Model Implementation and Verification

    NASA Technical Reports Server (NTRS)

    Shafer, Jaclyn A.; Watson, Leela R.

    2015-01-01

    Customer: NASA's Launch Services Program (LSP), Ground Systems Development and Operations (GSDO), and Space Launch System (SLS) programs. NASA's LSP, GSDO, SLS and other programs at Kennedy Space Center (KSC) and Cape Canaveral Air Force Station (CCAFS) use the daily and weekly weather forecasts issued by the 45th Weather Squadron (45 WS) as decision tools for their day-to-day and launch operations on the Eastern Range (ER), for example to determine whether they need to limit activities such as vehicle transport to the launch pad, protect people, structures or exposed launch vehicles given a threat of severe weather, or reschedule other critical operations. The 45 WS uses numerical weather prediction models as a guide for these weather forecasts, particularly the Air Force Weather Agency (AFWA) 1.67-kilometer Weather Research and Forecasting (WRF) model. Considering the 45 WS forecasters' and Launch Weather Officers' (LWO) extensive use of the AFWA model, the 45 WS proposed a task at the September 2013 Applied Meteorology Unit (AMU) Tasking Meeting requesting that the AMU verify this model. Due to the lack of archived model data available from AFWA, verification is not yet possible. Instead, the AMU proposed to implement and verify the performance of an ER version of the AMU high-resolution WRF Environmental Modeling System (EMS) model (Watson 2013) in real time. The tasking group agreed to this proposal; therefore the AMU implemented the WRF-EMS model on the second of two NASA AMU modeling clusters. The model was set up with a triple-nested grid configuration over KSC/CCAFS based on previous AMU work (Watson 2013). The outer domain (D01) has 12-kilometer grid spacing, the middle domain (D02) has 4-kilometer grid spacing, and the inner domain (D03) has 1.33-kilometer grid spacing. The model runs a 12-hour forecast every hour; D01 and D02 outputs are available once an hour and D03 every 15 minutes during the forecast period.
    The AMU assessed the WRF-EMS 1.33-kilometer domain model performance for the 2014 warm season (May-September). Verification statistics were computed using the Model Evaluation Tools, which compared the model forecasts to observations. The mean error values were close to 0 and the root mean square error values were less than 1.8 for mean sea-level pressure (millibars), temperature (degrees Kelvin), dewpoint temperature (degrees Kelvin), and wind speed (meters per second), all very small differences between the forecast and observations considering the normal magnitudes of the parameters. The precipitation forecast verification results showed consistent under-forecasting of the precipitation object size. This could be an artifact of calculating the statistics for each hour rather than for the entire 12-hour period. The AMU will continue to generate verification statistics for the 1.33-kilometer WRF-EMS domain as data become available in future cool and warm seasons. More data will produce more robust statistics and reveal a more accurate assessment of model performance. Once the formal task was complete, the AMU conducted additional work to better understand the wind direction results. The results were stratified diurnally and by wind speed to determine what effects the stratifications would have on the model wind direction verification statistics. The results are summarized in the addendum at the end of this report. In addition to verifying the model's performance, the AMU also made the output available in the Advanced Weather Interactive Processing System II (AWIPS II). This allows the 45 WS and AMU staff to customize the model output display on the AMU and Range Weather Operations AWIPS II client computers and conduct real-time subjective analyses. In the future, the AMU will implement an updated version of the WRF-EMS model that incorporates local data assimilation. This model will also run in real-time and be made available in AWIPS II.
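The mean error and RMSE statistics reported above are straightforward to compute; a minimal sketch (the forecast/observation pairs below are invented, not AMU data):

```python
import math

def mean_error(forecasts, observations):
    """Mean (signed) forecast error: positive means over-forecasting."""
    return sum(f - o for f, o in zip(forecasts, observations)) / len(forecasts)

def rmse(forecasts, observations):
    """Root mean square error between paired forecasts and observations."""
    return math.sqrt(sum((f - o) ** 2 for f, o in zip(forecasts, observations))
                     / len(forecasts))

# Invented hourly 2 m temperatures (K): forecast vs. observed
fcst = [295.2, 296.0, 297.1, 298.4, 299.0]
obs  = [295.0, 296.3, 296.8, 298.9, 299.2]
print(round(mean_error(fcst, obs), 3), round(rmse(fcst, obs), 3))  # -0.1 0.319
```

The signed mean error exposes systematic bias, while RMSE summarizes total error magnitude; a mean error near zero with a larger RMSE (as in the report) indicates scatter rather than bias.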

  8. A Comparative Verification of Forecasts from Two Operational Solar Wind Models

    DTIC Science & Technology

    2010-12-16

    knowing how much confidence to place on predicted parameters. Cost/benefit information is provided to administrators who decide to sustain or... components of the magnetic field vector in the geocentric solar magnetospheric (GSM) coordinate system at each hour of forecast time. For an example of a

  9. National Centers for Environmental Prediction

    Science.gov Websites


  10. Growth of Errors and Uncertainties in Medium Range Ensemble Forecasts of U.S. East Coast Cool Season Extratropical Cyclones

    NASA Astrophysics Data System (ADS)

    Zheng, Minghua

    Cool-season extratropical cyclones near the U.S. East Coast often have significant impacts on the safety, health, environment and economy of this most densely populated region. Hence it is of vital importance to forecast these high-impact winter storm events as accurately as possible by numerical weather prediction (NWP), including in the medium-range. Ensemble forecasts are appealing to operational forecasters when forecasting such events because they can provide an envelope of likely solutions to serve user communities. However, it is generally accepted that ensemble outputs are not used efficiently in NWS operations mainly due to the lack of simple and quantitative tools to communicate forecast uncertainties and ensemble verification to assess model errors and biases. Ensemble sensitivity analysis (ESA), which employs a linear correlation and regression between a chosen forecast metric and the forecast state vector, can be used to analyze the forecast uncertainty development for both short- and medium-range forecasts. The application of ESA to a high-impact winter storm in December 2010 demonstrated that the sensitivity signals based on different forecast metrics are robust. In particular, the ESA based on the leading two EOF PCs can separate sensitive regions associated with cyclone amplitude and intensity uncertainties, respectively. The sensitivity signals were verified using the leave-one-out cross validation (LOOCV) method based on a multi-model ensemble from CMC, ECMWF, and NCEP. The climatology of ensemble sensitivities for the leading two EOF PCs based on 3-day and 6-day forecasts of historical cyclone cases was presented. It was found that the EOF1 pattern often represents the intensity variations while the EOF2 pattern represents the track variations along west-southwest and east-northeast direction. 
    For PC1, the upper-level trough associated with the East Coast cyclone and its downstream ridge are important to the forecast uncertainty in cyclone strength. The initial differences in forecasting the ridge along the west coast of North America impact the EOF1 pattern most. For PC2, it was shown that the shift of the tri-polar structure is most significantly related to the cyclone track forecasts. The EOF/fuzzy clustering tool was applied to diagnose the scenarios in operational ensemble forecasts of East Coast winter storms. It was shown that the clustering method could efficiently separate the forecast scenarios associated with East Coast storms based on the 90-member multi-model ensemble. A scenario-based ensemble verification method has been proposed and applied to examine the capability of different EPSs in capturing the analysis scenarios for historical East Coast cyclone cases at lead times of 1-9 days. The results suggest that the NCEP model performs better in short-range forecasts in capturing the analysis scenario, although it is under-dispersed. The ECMWF ensemble shows the best performance in the medium range. The CMC model is found to show the smallest percentage of members in the analysis group and a relatively high missing rate, suggesting that it is less reliable in capturing the analysis scenario when compared with the other two EPSs. A combination of the NCEP and CMC models has been found to reduce the missing rate and improve the error-spread skill in medium- to extended-range forecasts. Based on the orthogonal features of the EOF patterns, the model errors for 1-6-day forecasts have been decomposed onto the leading two EOF patterns. The results of the error decomposition show that the NCEP model tends to better represent both the EOF1 and EOF2 patterns, with smaller intensity and displacement errors during days 1-3. The ECMWF model is found to have the smallest errors in both the EOF1 and EOF2 patterns during days 4-6.
    We have also found that East Coast cyclones in the ECMWF forecast tend to lie to the southwest of those in the other two models in representing the EOF2 pattern, which is associated with the southwest-northeast shifting of the cyclone. This result suggests that the ECMWF model may have a tendency toward a closer-to-shore solution in forecasting East Coast winter storms. The downstream impacts of Rossby wave packets (RWPs) on the predictability of winter storms are investigated to explore the sources of ensemble uncertainty. The composited RWPA anomalies show that there are enhanced RWPs propagating across the Pacific in both large-error and large-spread cases over the verification regions. There are also indications that the errors might propagate with a speed comparable to the group velocity of RWPs. Based on the composite results as well as our observations of the operational daily RWPA, a conceptual model of error/uncertainty development associated with RWPs has been proposed to serve as a practical tool for understanding the evolution of forecast errors and uncertainties associated with coherent RWPs originating from as far upstream as the western Pacific. (Abstract shortened by ProQuest.)

  11. The forecaster's added value in QPF

    NASA Astrophysics Data System (ADS)

    Turco, M.; Milelli, M.

    2009-04-01

    To the authors' knowledge there are relatively few studies that try to answer this topic: "Are humans able to add value to computer-generated forecasts and warnings ?". Moreover, the answers are not always positive. In particular some postprocessing method is competitive or superior to human forecast (see for instance Baars et al., 2005, Charba et al., 2002, Doswell C., 2003, Roebber et al., 1996, Sanders F., 1986). Within the alert system of ARPA Piemonte it is possible to study in an objective manner if the human forecaster is able to add value with respect to computer-generated forecasts. Every day the meteorology group of the Centro Funzionale of Regione Piemonte produces the HQPF (Human QPF) in terms of an areal average for each of the 13 regional warning areas, which have been created according to meteo-hydrological criteria. This allows the decision makers to produce an evaluation of the expected effects by comparing these HQPFs with predefined rainfall thresholds. Another important ingredient in this study is the very dense non-GTS network of rain gauges available that makes possible a high resolution verification. In this context the most useful verification approach is the measure of the QPF and HQPF skills by first converting precipitation expressed as continuous amounts into ‘‘exceedance'' categories (yes-no statements indicating whether precipitation equals or exceeds selected thresholds) and then computing the performances for each threshold. In particular in this work we compare the performances of the latest three years of QPF derived from two meteorological models COSMO-I7 (the Italian version of the COSMO Model, a mesoscale model developed in the framework of the COSMO Consortium) and IFS (the ECMWF global model) with the HQPF. 
In this analysis it is possible to introduce the hypothesis test developed by Hamill (1999), in which a confidence interval is calculated with the bootstrap method in order to establish the real difference between the skill scores of two competing forecasts. It is important to underline that the conclusions refer to the analysis of the Piemonte operational alert system, so they cannot be directly taken as universally true. But we think that some of the main lessons that can be derived from this study could be useful for the meteorological community. In detail, the main conclusions are the following: · despite the overall improvement of global-scale models and the fact that the resolution of the limited-area models has increased considerably over recent years, the QPF produced by the meteorological models involved in this study has not improved enough to allow its direct use: the subjective HQPF continues to offer the best performance; · in the forecast process, the step where humans add the largest value with respect to mathematical models is communication. In fact the human characterisation and communication of the forecast uncertainty to end users cannot be replaced by any computer code; · QPF verification is one of the most important activities of a Centro Funzionale because it allows a better understanding of model behaviour in different meteorological configurations, highlights systematic characteristics, and helps in evaluating reliability, for average or extreme values, over the long term or in current situations; · finally, although there is no novelty in this study, we would like to show that the correct application of appropriate statistical techniques permits a better definition and quantification of the errors and, most importantly, allows a correct (unbiased) communication between forecasters and decision makers.
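Hamill's test rests on resampling matched forecast-observation records and checking whether the resulting confidence interval for the skill-score difference contains zero. The following is a minimal sketch of that idea, not the operational verification code; the threat score is used as an illustrative metric and the data representation (per-day yes/no exceedance pairs) is an assumption:

```python
import random

def threat_score(pairs):
    # pairs: list of (forecast_yes, observed_yes) booleans for one threshold
    hits = sum(1 for f, o in pairs if f and o)
    misses = sum(1 for f, o in pairs if not f and o)
    false_alarms = sum(1 for f, o in pairs if f and not o)
    denom = hits + misses + false_alarms
    return hits / denom if denom else 0.0

def bootstrap_score_difference(pairs_a, pairs_b, n_boot=1000, seed=42):
    """Resample days (paired, so correlation between the two competing
    forecasts is preserved) and return the 95% bootstrap interval for
    TS(A) - TS(B); if it excludes 0, the difference is unlikely to be
    due to sampling alone."""
    rng = random.Random(seed)
    n = len(pairs_a)
    diffs = []
    for _ in range(n_boot):
        idx = [rng.randrange(n) for _ in range(n)]
        diffs.append(threat_score([pairs_a[i] for i in idx])
                     - threat_score([pairs_b[i] for i in idx]))
    diffs.sort()
    return diffs[int(0.025 * n_boot)], diffs[int(0.975 * n_boot)]
```

Resampling the same days for both forecasts is the essential detail: it accounts for the fact that competing QPFs verified on the same events are strongly correlated.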

  12. Verification of FLYSAFE Clear Air Turbulence (CAT) objects against aircraft turbulence measurements

    NASA Astrophysics Data System (ADS)

    Lunnon, R.; Gill, P.; Reid, L.; Mirza, A.

    2009-09-01

Prediction of gridded CAT fields: The main causes of CAT are (a) vertical wind shear (low Richardson number), (b) mountain waves, and (c) convection. All three causes contribute roughly equally to CAT occurrences globally. Prediction of shear-induced CAT: The prediction of shear-induced CAT has a longer history than that of either mountain-wave-induced or convectively induced CAT. Both Global Aviation Forecasting Centres are currently using the Ellrod TI1 algorithm (Ellrod and Knapp, 1992). This predictor is the scalar product of deformation and vertical wind shear. More sophisticated algorithms can amplify errors in non-linear, differentiated quantities, so it is very likely that Ellrod will out-perform other algorithms when verified globally. Prediction of mountain wave CAT: The Global Aviation Forecasting Centre in the UK has been generating automated forecasts of mountain wave CAT since the late 1990s, based on the diagnosis of gravity wave drag. Generation of CAT objects: In the FLYSAFE project it was decided at an early stage that short range forecasts of meteorological hazards, i.e. icing, Clear Air Turbulence and Cumulonimbus clouds, should be represented as weather objects, that is, descriptions of individual hazardous volumes of airspace. For CAT, the forecast information on which the weather objects were based was gridded, comprising a representation of a hazard level for all points in a pre-defined 3-D grid, for a range of forecast times. A "grid-to-objects" capability was generated. This is discussed further in Mirza and Drouin (this conference). Verification of CAT forecasts: Verification was performed using digital accelerometer data from aircraft in the British Airways Boeing 747 fleet. A preliminary processing of the aircraft data was performed to generate a truth field on a scale similar to that used to provide gridded forecasts to airlines. This truth field was binary, i.e. 
each flight segment was characterised as being either "turbulent" or "benign". A gridded forecast field is a continuously changing variable. In contrast, a simple weather object must be characterised by a specific threshold. For a gridded forecast and a binary truth measure it is possible to generate Relative Operating Characteristic (ROC) curves. For weather objects, a single point in the hit-rate/false-alarm-rate space can be generated. If this point is plotted on a ROC curve graph then the skill of the forecast using weather objects can be compared with the skill of the gridded forecast.
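The contrast drawn here between a ROC curve for the gridded forecast and a single point for the thresholded weather object can be sketched as follows. This is a toy illustration, not the FLYSAFE verification code; the turbulence-index values and thresholds are made up:

```python
def rates(forecast_vals, truth, threshold):
    """Hit rate (POD) and false-alarm rate (POFD) for one decision threshold
    applied to a continuous forecast field against binary truth segments."""
    hits = misses = false_alarms = correct_negatives = 0
    for f, o in zip(forecast_vals, truth):
        warn = f >= threshold
        if warn and o:
            hits += 1
        elif not warn and o:
            misses += 1
        elif warn and not o:
            false_alarms += 1
        else:
            correct_negatives += 1
    pod = hits / (hits + misses) if hits + misses else 0.0
    pofd = false_alarms / (false_alarms + correct_negatives) if false_alarms + correct_negatives else 0.0
    return pofd, pod

def roc_points(forecast_vals, truth, thresholds):
    # Sweeping the threshold over the gridded index traces the ROC curve;
    # a weather object corresponds to one fixed threshold, i.e. one point.
    return [rates(forecast_vals, truth, t) for t in thresholds]
```

Plotting the single object point on the same axes as the swept-threshold curve is exactly the comparison the abstract describes.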

  13. Seasonal forecast of St. Louis encephalitis virus transmission, Florida.

    PubMed

    Shaman, Jeffrey; Day, Jonathan F; Stieglitz, Marc; Zebiak, Stephen; Cane, Mark

    2004-05-01

Disease transmission forecasts can help minimize human and domestic animal health risks by indicating where disease control and prevention efforts should be focused. For disease systems in which weather-related variables affect pathogen proliferation, dispersal, or transmission, the potential for disease forecasting exists. We present a seasonal forecast of St. Louis encephalitis virus (SLEV) transmission in Indian River County, Florida. We derive an empirical relationship between modeled land surface wetness and levels of SLEV transmission in humans. We then use these data to forecast SLEV transmission with a seasonal lead. Forecast skill is demonstrated, and a real-time seasonal forecast of epidemic SLEV transmission is presented. This study demonstrates how weather and climate forecast skill verification analyses may be applied to test the predictability of an empirical disease forecast model.

  14. Seasonal Forecast of St. Louis Encephalitis Virus Transmission, Florida

    PubMed Central

    Day, Jonathan F.; Stieglitz, Marc; Zebiak, Stephen; Cane, Mark

    2004-01-01

    Disease transmission forecasts can help minimize human and domestic animal health risks by indicating where disease control and prevention efforts should be focused. For disease systems in which weather-related variables affect pathogen proliferation, dispersal, or transmission, the potential for disease forecasting exists. We present a seasonal forecast of St. Louis encephalitis virus transmission in Indian River County, Florida. We derive an empirical relationship between modeled land surface wetness and levels of SLEV transmission in humans. We then use these data to forecast SLEV transmission with a seasonal lead. Forecast skill is demonstrated, and a real-time seasonal forecast of epidemic SLEV transmission is presented. This study demonstrates how weather and climate forecast skill verification analyses may be applied to test the predictability of an empirical disease forecast model. PMID:15200812

  15. Assessing Applications of GPM and IMERG Passive Microwave Rain Rates in Modeling and Operational Forecasting

    NASA Astrophysics Data System (ADS)

    Zavodsky, B.; Le Roy, A.; Smith, M. R.; Case, J.

    2016-12-01

In support of NASA's recently launched GPM 'core' satellite, the NASA-SPoRT project is leveraging experience in research-to-operations transitions and training to provide feedback on the operational utility of GPM products. Thus far, SPoRT has focused on evaluating the Level 2 GPROF passive microwave and IMERG rain rate estimates. Formal evaluations with end-users have occurred, as well as internal evaluations of the datasets. One set of end users for these products is National Weather Service Forecast Offices (WFOs) and National Weather Service River Forecast Centers (RFCs), comprising forecasters and hydrologists. SPoRT has hosted a series of formal assessments to determine uses and utility of these datasets for NWS operations at specific offices. Forecasters primarily have used Level 2 swath rain rates to observe rainfall in otherwise data-void regions and to confirm model QPF for their nowcasting or short-term forecasting. Hydrologists have been evaluating both the Level 2 rain rates and the IMERG rain rates, including rain rate accumulations derived from IMERG; hydrologists have used these data to supplement gauge data for post-event analysis as well as for longer-term forecasting. Results from specific evaluations will be presented. Another evaluation of the GPM passive microwave rain rates has been in using the data within other products that are currently transitioned to end-users, rather than as stand-alone observations. For example, IMERG Early data is being used as a forcing mechanism in the NASA Land Information System (LIS) for a real-time soil moisture product over eastern Africa. IMERG is providing valuable precipitation information to LIS in an otherwise data-void region. Results and caveats will briefly be discussed. A third application of GPM data is using the IMERG Late and Final products for model verification in remote regions where high-quality gridded precipitation fields are not readily available. 
These datasets can now be used to verify NWP model forecasts over Eastern Africa using the SPoRT-MET scripts verification package, a wrapper around the NCAR Model Evaluation Toolkit (MET) verification software.
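A verification step of this kind, comparing a model QPF grid against a satellite-based analysis such as IMERG at a chosen accumulation threshold, reduces to contingency-table statistics of the sort MET computes. A minimal sketch, not the SPoRT-MET/MET implementation; the grids are flattened to 1-D lists and the threshold is illustrative:

```python
def contingency_counts(forecast, observed, thresh):
    """Count hits, misses, false alarms and correct negatives for one
    exceedance threshold over co-located forecast and analysis values."""
    hits = misses = false_alarms = correct_negatives = 0
    for f, o in zip(forecast, observed):
        fy, oy = f >= thresh, o >= thresh
        if fy and oy:
            hits += 1
        elif not fy and oy:
            misses += 1
        elif fy and not oy:
            false_alarms += 1
        else:
            correct_negatives += 1
    return hits, misses, false_alarms, correct_negatives

def pod(h, m, fa, cn):            # probability of detection
    return h / (h + m) if h + m else 0.0

def far(h, m, fa, cn):            # false alarm ratio
    return fa / (h + fa) if h + fa else 0.0

def csi(h, m, fa, cn):            # critical success index (threat score)
    return h / (h + m + fa) if h + m + fa else 0.0

def frequency_bias(h, m, fa, cn): # >1: over-forecasting the event area
    return (h + fa) / (h + m) if h + m else 0.0
```

Repeating the counts over a range of thresholds gives the usual threshold-dependent performance curves for model QPF against the IMERG analysis.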

  16. Upgrade Summer Severe Weather Tool

    NASA Technical Reports Server (NTRS)

    Watson, Leela

    2011-01-01

The goal of this task was to upgrade the existing severe weather database by adding observations from the 2010 warm season, update the verification dataset with results from the 2010 warm season, apply statistical logistic regression analysis to the database, and develop a new forecast tool. The AMU analyzed 7 stability parameters that showed the possibility of providing guidance in forecasting severe weather, calculated verification statistics for the Total Threat Score (TTS), and calculated warm season verification statistics for the 2010 season. The AMU also performed statistical logistic regression analysis on the 22-year severe weather database. The results indicated that the logistic regression equation did not show an increase in skill over the previously developed TTS. The equation showed less accuracy than TTS at predicting severe weather, little ability to distinguish between severe and non-severe weather days, and worse standard categorical accuracy measures and skill scores than TTS.
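The comparison described here pairs a logistic regression on stability parameters against a categorical skill score. The following is a hedged sketch of that workflow: the AMU's actual predictors, data and software are not shown, and the gradient-descent fit, toy data and the use of the Heidke skill score as the comparison metric are illustrative assumptions:

```python
import math

def fit_logistic(X, y, lr=0.1, epochs=2000):
    """Fit P(severe) = sigmoid(w0 + w·x) by stochastic gradient ascent
    on the log-likelihood; X is a list of predictor vectors, y is 0/1."""
    w = [0.0] * (len(X[0]) + 1)
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            z = w[0] + sum(wj * xj for wj, xj in zip(w[1:], xi))
            p = 1.0 / (1.0 + math.exp(-z))
            err = yi - p
            w[0] += lr * err
            for j in range(len(xi)):
                w[j + 1] += lr * err * xi[j]
    return w

def predict_prob(w, x):
    z = w[0] + sum(wj * xj for wj, xj in zip(w[1:], x))
    return 1.0 / (1.0 + math.exp(-z))

def heidke_skill_score(hits, misses, false_alarms, correct_negatives):
    """Categorical skill relative to random chance; 1 = perfect, 0 = no skill."""
    n = hits + misses + false_alarms + correct_negatives
    expected = ((hits + misses) * (hits + false_alarms)
                + (correct_negatives + misses) * (correct_negatives + false_alarms)) / n
    return (hits + correct_negatives - expected) / (n - expected)
```

Thresholding `predict_prob` at 0.5 (or any chosen probability) turns the regression into a yes/no severe-weather forecast whose skill scores can then be compared against a tool like the TTS.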

  17. A Four-Stage Hybrid Model for Hydrological Time Series Forecasting

    PubMed Central

    Di, Chongli; Yang, Xiaohua; Wang, Xiaochao

    2014-01-01

    Hydrological time series forecasting remains a difficult task due to its complicated nonlinear, non-stationary and multi-scale characteristics. To solve this difficulty and improve the prediction accuracy, a novel four-stage hybrid model is proposed for hydrological time series forecasting based on the principle of ‘denoising, decomposition and ensemble’. The proposed model has four stages, i.e., denoising, decomposition, components prediction and ensemble. In the denoising stage, the empirical mode decomposition (EMD) method is utilized to reduce the noises in the hydrological time series. Then, an improved method of EMD, the ensemble empirical mode decomposition (EEMD), is applied to decompose the denoised series into a number of intrinsic mode function (IMF) components and one residual component. Next, the radial basis function neural network (RBFNN) is adopted to predict the trend of all of the components obtained in the decomposition stage. In the final ensemble prediction stage, the forecasting results of all of the IMF and residual components obtained in the third stage are combined to generate the final prediction results, using a linear neural network (LNN) model. For illustration and verification, six hydrological cases with different characteristics are used to test the effectiveness of the proposed model. The proposed hybrid model performs better than conventional single models, the hybrid models without denoising or decomposition and the hybrid models based on other methods, such as the wavelet analysis (WA)-based hybrid models. In addition, the denoising and decomposition strategies decrease the complexity of the series and reduce the difficulties of the forecasting. With its effective denoising and accurate decomposition ability, high prediction precision and wide applicability, the new model is very promising for complex time series forecasting. This new forecast model is an extension of nonlinear prediction models. PMID:25111782

  18. A four-stage hybrid model for hydrological time series forecasting.

    PubMed

    Di, Chongli; Yang, Xiaohua; Wang, Xiaochao

    2014-01-01

    Hydrological time series forecasting remains a difficult task due to its complicated nonlinear, non-stationary and multi-scale characteristics. To solve this difficulty and improve the prediction accuracy, a novel four-stage hybrid model is proposed for hydrological time series forecasting based on the principle of 'denoising, decomposition and ensemble'. The proposed model has four stages, i.e., denoising, decomposition, components prediction and ensemble. In the denoising stage, the empirical mode decomposition (EMD) method is utilized to reduce the noises in the hydrological time series. Then, an improved method of EMD, the ensemble empirical mode decomposition (EEMD), is applied to decompose the denoised series into a number of intrinsic mode function (IMF) components and one residual component. Next, the radial basis function neural network (RBFNN) is adopted to predict the trend of all of the components obtained in the decomposition stage. In the final ensemble prediction stage, the forecasting results of all of the IMF and residual components obtained in the third stage are combined to generate the final prediction results, using a linear neural network (LNN) model. For illustration and verification, six hydrological cases with different characteristics are used to test the effectiveness of the proposed model. The proposed hybrid model performs better than conventional single models, the hybrid models without denoising or decomposition and the hybrid models based on other methods, such as the wavelet analysis (WA)-based hybrid models. In addition, the denoising and decomposition strategies decrease the complexity of the series and reduce the difficulties of the forecasting. With its effective denoising and accurate decomposition ability, high prediction precision and wide applicability, the new model is very promising for complex time series forecasting. This new forecast model is an extension of nonlinear prediction models.
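The 'denoising, decomposition and ensemble' pipeline of the two records above can be sketched end-to-end. The stand-ins below are loudly simplified assumptions: a short moving average replaces the EMD denoising, successive moving-average smoothing replaces EEMD (the components still sum exactly to the input), a least-squares AR(1) step replaces the RBFNN component predictors, and a plain sum replaces the LNN combination. Only the four-stage structure matches the paper:

```python
def moving_average(series, window):
    half = window // 2
    out = []
    for i in range(len(series)):
        seg = series[max(0, i - half): i + half + 1]
        out.append(sum(seg) / len(seg))
    return out

def decompose(series, windows=(3, 9)):
    """Stand-in for EEMD: each smoothing pass peels off a detail component;
    the components plus the final residual sum back to the input series."""
    components = []
    residual = list(series)
    for w in windows:
        smooth = moving_average(residual, w)
        components.append([r - s for r, s in zip(residual, smooth)])
        residual = smooth
    components.append(residual)
    return components

def ar1_forecast(series):
    """Stand-in for the RBFNN: least-squares AR(1) one-step forecast."""
    x, y = series[:-1], series[1:]
    mx, my = sum(x) / len(x), sum(y) / len(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    var = sum((a - mx) ** 2 for a in x)
    phi = cov / var if var else 0.0
    return my + phi * (series[-1] - mx)

def hybrid_forecast(series):
    denoised = moving_average(series, 3)          # stage 1: denoising stand-in
    comps = decompose(denoised)                   # stage 2: decomposition stand-in
    preds = [ar1_forecast(c) for c in comps]      # stage 3: per-component prediction
    return sum(preds)                             # stage 4: combination stand-in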

  19. Estimating predictive hydrological uncertainty by dressing deterministic and ensemble forecasts; a comparison, with application to Meuse and Rhine

    NASA Astrophysics Data System (ADS)

    Verkade, J. S.; Brown, J. D.; Davids, F.; Reggiani, P.; Weerts, A. H.

    2017-12-01

    Two statistical post-processing approaches for estimation of predictive hydrological uncertainty are compared: (i) 'dressing' of a deterministic forecast by adding a single, combined estimate of both hydrological and meteorological uncertainty and (ii) 'dressing' of an ensemble streamflow forecast by adding an estimate of hydrological uncertainty to each individual streamflow ensemble member. Both approaches aim to produce an estimate of the 'total uncertainty' that captures both the meteorological and hydrological uncertainties. They differ in the degree to which they make use of statistical post-processing techniques. In the 'lumped' approach, both sources of uncertainty are lumped by post-processing deterministic forecasts using their verifying observations. In the 'source-specific' approach, the meteorological uncertainties are estimated by an ensemble of weather forecasts. These ensemble members are routed through a hydrological model and a realization of the probability distribution of hydrological uncertainties (only) is then added to each ensemble member to arrive at an estimate of the total uncertainty. The techniques are applied to one location in the Meuse basin and three locations in the Rhine basin. Resulting forecasts are assessed for their reliability and sharpness, as well as compared in terms of multiple verification scores including the relative mean error, Brier Skill Score, Mean Continuous Ranked Probability Skill Score, Relative Operating Characteristic Score and Relative Economic Value. The dressed deterministic forecasts are generally more reliable than the dressed ensemble forecasts, but the latter are sharper. On balance, however, they show similar quality across a range of verification metrics, with the dressed ensembles coming out slightly better. Some additional analyses are suggested. 
Notably, these include statistical post-processing of the meteorological forecasts in order to increase their reliability, thus increasing the reliability of the streamflow forecasts produced with ensemble meteorological forcings.
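Of the scores listed above, the Continuous Ranked Probability Score is the one that puts deterministic and ensemble forecasts on the same footing, since for a single member it reduces to the absolute error. A minimal sketch of its standard empirical ensemble estimator:

```python
def crps_ensemble(members, obs):
    """Empirical CRPS: mean |x_i - y| minus half the mean pairwise member
    spread. Lower is better; for one member it equals |x - y|."""
    m = len(members)
    term_error = sum(abs(x - obs) for x in members) / m
    term_spread = sum(abs(a - b) for a in members for b in members) / (2.0 * m * m)
    return term_error - term_spread
```

A skill score against a reference such as climatology then follows as 1 - CRPS/CRPS_ref, which is the form in which the comparison between dressed deterministic and dressed ensemble forecasts is usually reported.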

  20. NCEP Air Quality Forecast(AQF) Verification. NOAA/NWS/NCEP/EMC

    Science.gov Websites


  1. A Wind Forecasting System for Energy Application

    NASA Astrophysics Data System (ADS)

    Courtney, Jennifer; Lynch, Peter; Sweeney, Conor

    2010-05-01

Accurate forecasting of available energy is crucial for the efficient management and use of wind power in the national power grid. With energy output critically dependent upon wind strength, there is a need to reduce the errors associated with wind forecasting. The objective of this research is to obtain the best possible wind forecasts for the wind energy industry. To achieve this goal, three methods are being applied. First, a mesoscale numerical weather prediction (NWP) model called WRF (Weather Research and Forecasting) is being used to predict wind values over Ireland. Currently, a grid resolution of 10 km is used, and higher model resolutions are being evaluated to establish whether they are economically viable given the forecast skill improvement they produce. Second, the WRF model is being used in conjunction with ECMWF (European Centre for Medium-Range Weather Forecasts) ensemble forecasts to produce a probabilistic weather forecasting product. Due to the chaotic nature of the atmosphere, a single, deterministic weather forecast can only have limited skill. The ECMWF ensemble method produces an ensemble of 51 global forecasts, twice a day, by perturbing the initial conditions of a 'control' forecast which is the best estimate of the initial state of the atmosphere. This method provides an indication of the reliability of the forecast and a quantitative basis for probabilistic forecasting. The limitation of ensemble forecasting lies in the fact that the perturbed model runs behave differently under different weather patterns and each model run is equally likely to be closest to the observed weather situation. Models have biases, and involve assumptions about physical processes and forcing factors such as underlying topography. Third, Bayesian Model Averaging (BMA) is being applied to the output from the ensemble forecasts in order to statistically post-process the results and achieve a better wind forecasting system. 
BMA is a promising technique that will offer calibrated probabilistic wind forecasts which will be invaluable in wind energy management. In brief, this method turns the ensemble forecasts into a calibrated predictive probability distribution. Each ensemble member is provided with a 'weight' determined by its relative predictive skill over a training period of around 30 days. Verification of data is carried out using observed wind data from operational wind farms. These are then compared to existing forecasts produced by ECMWF and Met Eireann in relation to skill scores. We are developing decision-making models to show the benefits achieved using the data produced by our wind energy forecasting system. An energy trading model will be developed, based on the rules currently used by the Single Electricity Market Operator for energy trading in Ireland. This trading model will illustrate the potential for financial savings by using the forecast data generated by this research.
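BMA's weighting step can be caricatured without the full EM algorithm the method normally uses. The sketch below is an assumption-laden stand-in, not the authors' system: it uses a fixed kernel width and simple likelihood weights over a training window in place of EM-estimated parameters, purely to show how a calibrated mixture forecast is assembled from ensemble members:

```python
import math

def bma_weights(training_errors, sigma=1.0):
    """Simplified stand-in for BMA's EM step: weight each ensemble member
    by the average Gaussian likelihood of its past forecast errors over
    the training period (e.g. ~30 days). Weights sum to 1."""
    likelihoods = []
    for errs in training_errors:
        mean_loglik = sum(-0.5 * (e / sigma) ** 2 for e in errs) / len(errs)
        likelihoods.append(math.exp(mean_loglik))
    total = sum(likelihoods)
    return [l / total for l in likelihoods]

def bma_mixture_pdf(x, member_forecasts, weights, sigma=1.0):
    """Predictive density: a weighted mixture of Gaussians centred on the
    current member forecasts."""
    norm = 1.0 / (sigma * math.sqrt(2.0 * math.pi))
    return sum(w * norm * math.exp(-0.5 * ((x - f) / sigma) ** 2)
               for f, w in zip(member_forecasts, weights))
```

Members with smaller training errors receive larger weights, so the predictive distribution sharpens around the historically more skilful models, which is the calibration effect the abstract describes.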

  2. Verification of ECMWF System 4 for seasonal hydrological forecasting in a northern climate

    NASA Astrophysics Data System (ADS)

    Bazile, Rachel; Boucher, Marie-Amélie; Perreault, Luc; Leconte, Robert

    2017-11-01

Hydropower production requires optimal dam and reservoir management to prevent flooding damage and avoid operation losses. In a northern climate, where spring freshet constitutes the main inflow volume, seasonal forecasts can help to establish a yearly strategy. Long-term hydrological forecasts often rely on past observations of streamflow or meteorological data. Another alternative is to use ensemble meteorological forecasts produced by climate models. In this paper, those produced by the ECMWF (European Centre for Medium-Range Weather Forecasts) System 4 are examined and their bias is characterized. Bias correction, through the linear scaling method, improves the performance of the raw ensemble meteorological forecasts in terms of the continuous ranked probability score (CRPS). Then, three seasonal ensemble hydrological forecasting systems are compared: (1) the climatology of simulated streamflow, (2) the ensemble hydrological forecasts based on climatology (ESP) and (3) the hydrological forecasts based on bias-corrected ensemble meteorological forecasts from System 4 (corr-DSP). Simulated streamflow computed using observed meteorological data is used as the benchmark. Accounting for initial conditions is valuable even for long-term forecasts. ESP and corr-DSP both outperform the climatology of simulated streamflow for lead times from 1 to 5 months, depending on the season and watershed. Integrating information about future meteorological conditions also improves monthly volume forecasts. For the 1-month lead time, a gain exists for almost all watersheds during winter, summer and fall. However, volume forecast performance for spring varies from one watershed to another; for most of them, the performance is close to that of ESP. For longer lead times, the CRPS skill score is mostly in favour of ESP, even though for many watersheds ESP and corr-DSP have comparable skill. Corr-DSP appears quite reliable but, in some cases, under-dispersion or bias is observed. 
A more complex bias-correction method should be further investigated to remedy this weakness and take more advantage of the ensemble forecasts produced by the climate model. Overall, in this study, bias-corrected ensemble meteorological forecasts appear to be an interesting source of information for hydrological forecasting for lead times up to 1 month. They could also complement ESP for longer lead times.
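Linear scaling, the bias-correction method used here, multiplies each forecast by the ratio of the observed to the forecast climatological mean, typically computed per calendar month. A minimal sketch of that correction (the month-keyed dictionaries are an illustrative data layout; precipitation is assumed non-negative):

```python
def monthly_scaling_factors(fcst_by_month, obs_by_month):
    """Linear scaling: one multiplicative factor per calendar month so that
    the corrected forecast climatology matches the observed climatology."""
    return {m: sum(obs_by_month[m]) / sum(fcst_by_month[m])
            for m in fcst_by_month if sum(fcst_by_month[m]) > 0}

def apply_linear_scaling(month, values, factors):
    """Correct a set of raw ensemble forecast values for a given month."""
    return [v * factors[month] for v in values]
```

The method corrects the mean but, being a single multiplicative factor, leaves the shape of the distribution untouched, which is consistent with the residual under-dispersion the abstract reports and its call for a more complex correction.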

  3. A Bayesian joint probability modeling approach for seasonal forecasting of streamflows at multiple sites

    NASA Astrophysics Data System (ADS)

    Wang, Q. J.; Robertson, D. E.; Chiew, F. H. S.

    2009-05-01

    Seasonal forecasting of streamflows can be highly valuable for water resources management. In this paper, a Bayesian joint probability (BJP) modeling approach for seasonal forecasting of streamflows at multiple sites is presented. A Box-Cox transformed multivariate normal distribution is proposed to model the joint distribution of future streamflows and their predictors such as antecedent streamflows and El Niño-Southern Oscillation indices and other climate indicators. Bayesian inference of model parameters and uncertainties is implemented using Markov chain Monte Carlo sampling, leading to joint probabilistic forecasts of streamflows at multiple sites. The model provides a parametric structure for quantifying relationships between variables, including intersite correlations. The Box-Cox transformed multivariate normal distribution has considerable flexibility for modeling a wide range of predictors and predictands. The Bayesian inference formulated allows the use of data that contain nonconcurrent and missing records. The model flexibility and data-handling ability means that the BJP modeling approach is potentially of wide practical application. The paper also presents a number of statistical measures and graphical methods for verification of probabilistic forecasts of continuous variables. Results for streamflows at three river gauges in the Murrumbidgee River catchment in southeast Australia show that the BJP modeling approach has good forecast quality and that the fitted model is consistent with observed data.
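The core of the BJP idea, modelling predictors and predictands jointly as Box-Cox-transformed normal variables and then conditioning on the observed predictors, can be sketched for the bivariate case. This is a heavily simplified stand-in, not the authors' implementation: it assumes a known transformation parameter and known moments, a single predictor and site, and draws from the conditional normal directly instead of using MCMC:

```python
import math
import random

def box_cox(y, lam):
    # assumes y > 0
    return (y ** lam - 1.0) / lam if lam != 0 else math.log(y)

def inv_box_cox(z, lam):
    # assumes lam * z + 1 > 0 for lam != 0
    return (lam * z + 1.0) ** (1.0 / lam) if lam != 0 else math.exp(z)

def conditional_normal(mu, rho, sigmas, x):
    """Given a bivariate normal for (predictor, predictand) in transformed
    space, return the conditional mean and sd of the predictand given x."""
    mu_x, mu_y = mu
    sx, sy = sigmas
    cmean = mu_y + rho * sy / sx * (x - mu_x)
    csd = sy * math.sqrt(1.0 - rho ** 2)
    return cmean, csd

def probabilistic_forecast(x_obs, lam, mu, rho, sigmas, n=1000, seed=1):
    """Sample a probabilistic streamflow forecast: condition in transformed
    space, then back-transform each draw to the original units."""
    rng = random.Random(seed)
    cmean, csd = conditional_normal(mu, rho, sigmas, box_cox(x_obs, lam))
    return [inv_box_cox(rng.gauss(cmean, csd), lam) for _ in range(n)]
```

In the full BJP approach the transformation parameters, means and the covariance matrix (including intersite correlations) are themselves uncertain and are sampled by Markov chain Monte Carlo, so each forecast integrates over parameter uncertainty as well.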

  4. Introducing an operational method to forecast long-term regional drought based on the application of artificial intelligence capabilities

    NASA Astrophysics Data System (ADS)

    Kousari, Mohammad Reza; Hosseini, Mitra Esmaeilzadeh; Ahani, Hossein; Hakimelahi, Hemila

    2017-01-01

An effective forecast of drought definitely gives many advantages with regard to the management of water resources used in agriculture, industry, and household consumption. To introduce such a model with simple data inputs, this study presents a regional drought forecast method based on artificial intelligence capabilities (artificial neural networks) and the Standardized Precipitation Index (SPI in 3-, 6-, 9-, 12-, 18-, and 24-monthly series) for Fars Province, Iran. The precipitation data of 41 rain gauge stations were used to compute SPI values. In addition, weather signals including the Multivariate ENSO Index (MEI), North Atlantic Oscillation (NAO), Southern Oscillation Index (SOI), NINO1+2, anomaly NINO1+2, NINO3, anomaly NINO3, NINO4, anomaly NINO4, NINO3.4, and anomaly NINO3.4 were used as predictor variables to forecast the SPI time series for the next 12 months. Frequent testing and validating steps were used to obtain the best artificial neural network (ANN) models. The forecasted values were mapped in the verification step and compared with the observed maps for the same dates. Results showed considerable spatial and temporal relationships, even among the maps of different SPI time series. Also, the first 6 months of forecasted maps showed an average of 73 % agreement with the observed ones. The most important finding and strong point of this study was that, although the drought forecast for each station and time series was completely independent, the relationships between spatial and temporal predictions remained. This strong point stems mainly from the frequent testing and validating steps used to select the best drought forecast models from the many ANN models produced. Finally, wherever precipitation data are available, practical application of the presented method is possible.
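The SPI underlying this method maps accumulated precipitation through a fitted distribution onto standard-normal quantiles, so that roughly -2 indicates extreme drought and +2 extremely wet conditions. The sketch below substitutes a rank-based (Gringorten) empirical CDF for the gamma fit usually employed, and uses the Abramowitz-Stegun rational approximation for the normal quantile; both substitutions are simplifying assumptions:

```python
import math

def inv_norm_cdf(p):
    """Standard normal quantile via the Abramowitz & Stegun 26.2.23
    rational approximation (absolute error < 4.5e-4), for 0 < p < 1."""
    if p > 0.5:
        return -inv_norm_cdf(1.0 - p)
    t = math.sqrt(-2.0 * math.log(p))
    num = 2.515517 + 0.802853 * t + 0.010328 * t * t
    den = 1.0 + 1.432788 * t + 0.189269 * t * t + 0.001308 * t ** 3
    return -(t - num / den)

def empirical_spi(precip_totals):
    """SPI-like index: rank each accumulation with a Gringorten plotting
    position (a stand-in for the fitted gamma CDF), then map that
    probability to a standard-normal quantile."""
    n = len(precip_totals)
    ranked = sorted(range(n), key=lambda i: precip_totals[i])
    spi = [0.0] * n
    for rank, i in enumerate(ranked):
        p = (rank + 1 - 0.44) / (n + 0.12)
        spi[i] = inv_norm_cdf(p)
    return spi
```

Computing this over 3-, 6-, 9-, 12-, 18- and 24-month accumulation windows yields the multiple SPI series the study feeds to its ANN predictors.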

  5. A New Objective Technique for Verifying Mesoscale Numerical Weather Prediction Models

    NASA Technical Reports Server (NTRS)

    Case, Jonathan L.; Manobianco, John; Lane, John E.; Immer, Christopher D.

    2003-01-01

This report presents a new objective technique to verify predictions of the sea-breeze phenomenon over east-central Florida by the Regional Atmospheric Modeling System (RAMS) mesoscale numerical weather prediction (NWP) model. The Contour Error Map (CEM) technique identifies sea-breeze transition times in objectively-analyzed grids of observed and forecast wind, verifies the forecast sea-breeze transition times against the observed times, and computes the mean post-sea-breeze wind direction and speed to compare the observed and forecast winds behind the sea-breeze front. The CEM technique is superior to traditional objective verification techniques and previously-used subjective verification methodologies because: it is automated, requiring little manual intervention; it accounts for both spatial and temporal scales and variations; it accurately identifies and verifies the sea-breeze transition times; and it provides verification contour maps and simple statistical parameters for easy interpretation. The CEM uses a parallel lowpass boxcar filter and a high-order bandpass filter to identify the sea-breeze transition times at the observed and model grid points. Once the transition times are identified, the CEM fits a Gaussian function to the histogram of transition-time differences between the model and observations. The fitted parameters of the Gaussian function then describe the timing bias and the variance of the timing differences across the valid comparison domain. Once the transition times are identified at each grid point, the CEM computes the mean wind direction and speed during the remainder of the day for all times and grid points after the sea-breeze transition time. The CEM technique performed quite well when compared to independent meteorological assessments of the sea-breeze transition times and to results from a previously published subjective evaluation. 
The algorithm correctly identified a forecast or observed sea-breeze occurrence or absence 93% of the time during the two-month evaluation period of July and August 2000. Nearly all failures in the CEM were the result of complex precipitation features (observed or forecast) that contaminated the wind field, resulting in a false identification of a sea-breeze transition. A qualitative comparison between the CEM timing errors and the subjectively determined observed and forecast transition times indicates that the algorithm performed very well overall. Most discrepancies between the CEM results and the subjective analysis were again caused by observed or forecast areas of precipitation that led to complex wind patterns. The CEM also failed on a day when the observed sea-breeze transition affected only a very small portion of the verification domain. Based on the CEM results, RAMS tended to predict the onset and movement of the sea-breeze transition too early and/or too quickly. The domain-wide timing biases provided by the CEM indicated an early bias on 30 out of 37 days when both an observed and a forecast sea breeze occurred over the same portions of the analysis domain. These results are consistent with previous subjective verifications of the RAMS sea-breeze predictions. A comparison of the mean post-sea-breeze winds indicates that RAMS has a positive wind-speed bias for all days, which is also consistent with the early bias in the sea-breeze transition time, since the higher wind speeds resulted in a faster inland penetration of the sea breeze compared to reality.
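The Gaussian characterisation of timing errors at the heart of the CEM can be illustrated with a moment-based stand-in. The actual CEM fits a Gaussian function to the histogram of transition-time differences; here the mean and standard deviation are taken directly from the samples, which conveys the same bias/variance summary for well-behaved distributions:

```python
import math

def gaussian_timing_parameters(timing_diffs_minutes):
    """Summarise model-minus-observed sea-breeze transition-time differences:
    the mean is the domain-wide timing bias (negative = model too early),
    the standard deviation is the spread of timing errors."""
    n = len(timing_diffs_minutes)
    bias = sum(timing_diffs_minutes) / n
    variance = sum((d - bias) ** 2 for d in timing_diffs_minutes) / n
    return bias, math.sqrt(variance)
```

Applied day by day, a consistently negative bias is exactly the "too early" signal the report describes for the RAMS sea-breeze onset.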

  6. Utility of flood warning systems for emergency management

    NASA Astrophysics Data System (ADS)

    Molinari, Daniela; Ballio, Francesco; Menoni, Scira

    2010-05-01

The presentation is focused on a simple and crucial question for warning systems: are flood and hydrological modelling and forecasting helpful for managing flood events? Indeed, it is well known that a warning process can be invalidated by inadequate forecasts, so the accuracy and robustness of the forecasting model is a key issue for any flood warning procedure. However, one problem still arises from this perspective: when can forecasts be considered adequate? According to Murphy (1993, Wea. Forecasting 8, 281-293), forecasts hold no intrinsic value but acquire it through their ability to influence the decisions made by their users. Moreover, we can add that forecast value depends on the particular problem at stake, showing, in this way, a multifaceted nature. As a result, forecast verification should not be seen as a universal process; instead it should be tailored to the particular context in which forecasts are implemented. This presentation focuses on warning problems in mountain regions, where the short lead times distinctive of flood events make the provision of adequate forecasts particularly critical. In this context, the quality of a forecast is linked to its capability to reduce the impact of a flood by improving the correctness of the decision about issuing (or not) a warning, as well as of the implementation of a proper set of actions aimed at lowering potential flood damages. The present study evaluates the performance of a real flood forecasting system from this perspective. In detail, a back analysis of past flood events using available verification tools has been implemented. The final objective was to evaluate the system's ability to support appropriate decisions with respect not only to the flood characteristics but also to the peculiarities of the area at risk, as well as to the uncertainty of forecasts. This meant also considering flood damages and forecast uncertainty among the decision variables. 
Last but not least, the presentation explains how the procedure implemented in the case study could support the definition of a proper warning rule.
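The decision-oriented view of forecast value invoked above (Murphy, 1993) is commonly made concrete with the static cost-loss model, in which the relative economic value of a warning system follows from its hit rate, false-alarm rate, the event base rate, and the user's cost/loss ratio. The sketch below is a generic illustration of that model, not the procedure used in this study; all names are illustrative.

```python
def relative_value(hit_rate, false_alarm_rate, base_rate, cost_loss):
    """Relative economic value of a warning system in the static
    cost-loss model: 1 for a perfect system, 0 for climatology.

    cost_loss is the user's cost/loss ratio alpha = C/L, where taking
    protective action costs C and an unprotected event causes loss L.
    """
    s, a = base_rate, cost_loss
    expense_climate = min(a, s)        # cheaper of "always act" / "never act"
    expense_perfect = s * a            # act exactly when the event occurs
    expense_forecast = (false_alarm_rate * (1 - s) * a   # needless protection
                        + hit_rate * s * a               # justified protection
                        + (1 - hit_rate) * s)            # missed events
    return (expense_climate - expense_forecast) / (expense_climate - expense_perfect)
```

A perfect system (hit rate 1, no false alarms) has value 1 for any cost/loss ratio, and the value of an imperfect system varies with the user's cost/loss ratio, which is exactly why verification must be tailored to the decision context.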

  7. The diagnosis and forecast system of hydrometeorological characteristics for the White, Barents, Kara and Pechora Seas

    NASA Astrophysics Data System (ADS)

    Fomin, Vladimir; Diansky, Nikolay; Gusev, Anatoly; Kabatchenko, Ilia; Panasenkova, Irina

    2017-04-01

The diagnosis and forecast system for simulating hydrometeorological characteristics of the Russian Western Arctic seas is presented. It performs atmospheric forcing computation with the regional non-hydrostatic atmosphere model Weather Research and Forecasting (WRF) at a spatial resolution of 15 km, computation of circulation, sea level, temperature, salinity and sea ice with the marine circulation model INMOM (Institute of Numerical Mathematics Ocean Model) at a spatial resolution of 2.7 km, and computation of wind wave parameters with the Russian wind-wave model (RWWM) at a spatial resolution of 5 km. Verification of the meteorological characteristics is done for air temperature, air pressure, wind velocity, water temperature, currents, sea level anomaly, and wave characteristics such as wave height and wave period. The results of the hydrometeorological characteristic verification are presented for both retrospective and forecast computations. The retrospective simulation of the hydrometeorological characteristics of the White, Barents, Kara and Pechora Seas was performed with the diagnosis and forecast system for the period 1986-2015. Important features of the Kara Sea circulation are presented, and water exchange between the Pechora and Kara Seas is described. The importance of using a non-hydrostatic atmospheric circulation model for the atmospheric forcing computation in coastal areas is shown. From the computation results, extreme values of hydrometeorological characteristics were obtained for the Russian Western Arctic seas.

  8. Toward Improved Land Surface Initialization in Support of Regional WRF Forecasts at the Kenya Meteorological Service (KMS)

    NASA Technical Reports Server (NTRS)

    Case, Jonathan L.; Mungai, John; Sakwa, Vincent; Kabuchanga, Eric; Zavodsky, Bradley T.; Limaye, Ashutosh S.

    2014-01-01

SPoRT/SERVIR/RCMRD/KMS collaboration: builds on the strengths of each organization. SPoRT: transition of satellite, modeling and verification capabilities; SERVIR-Africa/RCMRD: international capacity-building expertise; KMS: operational organization with regional weather forecasting expertise in East Africa. Hypothesis: improved land-surface initialization over Eastern Africa can lead to better temperature, moisture, and ultimately precipitation forecasts in NWP models. KMS currently initializes the Weather Research and Forecasting (WRF) model with NCEP Global Forecast System (GFS) 0.5-deg initial/boundary condition data. LIS will provide much higher-resolution land-surface data at a scale more representative of the regional WRF configuration. Future implementation of real-time NESDIS/VIIRS vegetation fraction will further improve land surface representativeness.

  9. Verification of short lead time forecast models: applied to Kp and Dst forecasting

    NASA Astrophysics Data System (ADS)

    Wintoft, Peter; Wik, Magnus

    2016-04-01

In the ongoing EU/H2020 project PROGRESS, models that predict Kp, Dst, and AE from L1 solar wind data will be used as inputs to radiation belt models. The possible lead times from L1 measurements are shorter (tens of minutes to hours) than the typical duration of the physical phenomena to be forecast. Under these circumstances several metrics fail to distinguish model forecasts from trivial ones, such as persistence. In this work we explore metrics and approaches for short lead time forecasts and apply them to current Kp and Dst forecast models. This project has received funding from the European Union's Horizon 2020 research and innovation programme under grant agreement No 637302.

  10. CFS Forecast Verification

    Science.gov Websites

Navigation listing of CFS forecast verification graphics: history of Niño-3.4 SST anomalies of individual forecasts and, per target season (NDJ 2004/2005, DJF 2005, ...), forecast anomaly maps for Niño SST, global SST, global precipitation, global 2-m temperature, US precipitation, US 2-m temperature, US soil moisture, and 200-hPa height (z200).

  11. Evaluation and Quality Control for the Copernicus Seasonal Forecast Systems

    NASA Astrophysics Data System (ADS)

    Manubens, N.; Hunter, A.; Bedia, J.; Bretonnière, P. A.; Bhend, J.; Doblas-Reyes, F. J.

    2017-12-01

The EU-funded Copernicus Climate Change Service (C3S) will provide authoritative information about past, current and future climate for a wide range of users, from climate scientists to stakeholders in sectors such as insurance, energy and transport. It has been recognized that providing information about the products' quality and provenance is paramount to establishing trust in the service and allowing users to make the best use of the available information. This presentation outlines the work being conducted within the Quality Assurance for Multi-model Seasonal Forecast Products project (QA4Seas), whose aim is to develop a strategy for the evaluation and quality control (EQC) of the multi-model seasonal forecasts provided by C3S. First, we present the set of guidelines data providers must comply with, ensuring the data are fully traceable and harmonized across data sets. Second, we discuss ongoing work on defining a provenance and metadata model able to encode such information, which can be extended to describe the steps followed to obtain the final verification products, such as maps and time series of forecast quality measures. The metadata model is based on the Resource Description Framework W3C standard and is thus extensible and reusable. It benefits from widely adopted vocabularies for describing data provenance and workflows, as well as from expert consensus and community support for the development of the verification- and downscaling-specific ontologies. Third, we describe the open-source software being developed to generate fully reproducible and certifiable seasonal forecast products, which also attaches provenance and metadata information to the verification measures and enables users to visually inspect the quality of the C3S products.
QA4Seas is seeking collaboration with similar initiatives, as well as extending the discussion to interested parties outside the C3S community to share experiences and establish global common guidelines or best practices regarding data provenance.

  12. National Centers for Environmental Prediction

    Science.gov Websites

NOAA/NCEP Environmental Modeling Center website navigation: About EMC, Operational Products, Experimental Data, Verification, Development Contacts, Change Log, Events Calendar, People, Numerical Forecast Systems (content coming soon).

  13. The Automation of Nowcast Model Assessment Processes

    DTIC Science & Technology

    2016-09-01

that will automate real-time WRE-N model simulations, collect and quality-control check weather observations for assimilation and verification, and...domains centered near White Sands Missile Range, New Mexico, where the Meteorological Sensor Array (MSA) will be located. The MSA will provide...observations and performing quality-control checks for the pre-forecast data assimilation period. 2. Run the WRE-N model to generate model forecast data

  14. WRF Simulation over the Eastern Africa by use of Land Surface Initialization

    NASA Astrophysics Data System (ADS)

    Sakwa, V. N.; Case, J.; Limaye, A. S.; Zavodsky, B.; Kabuchanga, E. S.; Mungai, J.

    2014-12-01

The East Africa region experiences severe weather events associated with hazards of varying magnitude: heavy precipitation leads to widespread flooding, while lack of sufficient rainfall in some areas results in drought. Flooding and drought are two key forecasting challenges for the Kenya Meteorological Service (KMS). The supply of heat and moisture depends on the state of the land surface, which interacts with the atmospheric boundary layer to produce excessive precipitation or, in its absence, severe drought. The development and evolution of precipitation systems are affected by heat and moisture fluxes from the land surface within weakly-sheared environments, such as in the tropics and sub-tropics. These daytime heat and moisture fluxes can be strongly influenced by land cover, vegetation, and soil moisture content. It is therefore important to represent the land surface state as accurately as possible in numerical weather prediction models. Improved modeling capabilities within the region have the potential to enhance forecast guidance in support of daily operations and high-impact weather over East Africa. KMS currently runs a configuration of the Weather Research and Forecasting (WRF) model in real time to support its daily forecasting operations, invoking the Non-hydrostatic Mesoscale Model (NMM) dynamical core. They make use of the National Oceanic and Atmospheric Administration / National Weather Service Science and Training Resource Center's Environmental Modeling System (EMS) to manage and produce the WRF-NMM model runs on a 7-km regional grid over Eastern Africa. SPoRT and SERVIR provide land surface initialization datasets and a model verification tool. The NASA Land Information System (LIS) provides real-time, daily soil initialization data in place of interpolated Global Forecast System soil moisture and temperature data.
Model verification is done using the Model Evaluation Tools (MET) package to quantify possible improvements in simulated temperature, moisture and precipitation resulting from the experimental land surface initialization. The MET tools enable KMS to monitor model forecast accuracy in near real time. This study highlights verification results of WRF runs over East Africa using the LIS land surface initialization.

  15. Statistical analysis of NWP rainfall data from Poland

    NASA Astrophysics Data System (ADS)

    Starosta, Katarzyna; Linkowska, Joanna

    2010-05-01

The goal of this work is to summarize the latest results of precipitation verification in Poland. At IMGW, COSMO_PL version 4.0 has been running operationally. The model configuration is: 14-km horizontal grid spacing, initial times at 00 UTC and 12 UTC, and a forecast range of 72 h. The model fields were verified against Polish SYNOP stations using a new verification tool. For accumulated precipitation, the indices FBI, POD, FAR and ETS are calculated from the contingency table. This paper presents a comparison of monthly and seasonal verification of 6-h, 12-h and 24-h accumulated precipitation in 2009. Since February 2010 a 7-km grid spacing version of the model has been running at IMGW; results of precipitation verification for the two model resolutions will be shown.
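The contingency-table indices named above (frequency bias FBI, probability of detection POD, false alarm ratio FAR, and equitable threat score ETS) have standard definitions; a minimal sketch, with illustrative function and variable names:

```python
import numpy as np

def contingency_scores(forecast, observed, threshold):
    """FBI, POD, FAR and ETS from paired forecast/observed precipitation
    amounts, with `threshold` defining an event (e.g. 1 mm / 6 h)."""
    f = np.asarray(forecast) >= threshold
    o = np.asarray(observed) >= threshold
    hits = np.sum(f & o)
    false_alarms = np.sum(f & ~o)
    misses = np.sum(~f & o)
    correct_negatives = np.sum(~f & ~o)
    n = hits + false_alarms + misses + correct_negatives
    # hits expected by chance, used by the equitable threat score
    hits_random = (hits + misses) * (hits + false_alarms) / n
    return {
        "FBI": (hits + false_alarms) / (hits + misses),   # frequency bias
        "POD": hits / (hits + misses),                    # probability of detection
        "FAR": false_alarms / (hits + false_alarms),      # false alarm ratio
        "ETS": (hits - hits_random)
               / (hits + false_alarms + misses - hits_random),
    }
```

An unbiased forecast has FBI = 1; ETS is 1 for a perfect forecast and 0 for a random one.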

  16. A Beacon Transmission Power Control Algorithm Based on Wireless Channel Load Forecasting in VANETs.

    PubMed

    Mo, Yuanfu; Yu, Dexin; Song, Jun; Zheng, Kun; Guo, Yajuan

    2015-01-01

In a vehicular ad hoc network (VANET), the periodic exchange of single-hop status information broadcasts (beacon frames) produces channel loading, which causes channel congestion and induces information conflict problems. To guarantee fairness in beacon transmissions from each node and maximum network connectivity, adjusting the beacon transmission power is an effective method for reducing and preventing channel congestion. In this study, the primary factors that influence wireless channel loading are selected to construct the KF-BCLF, a channel load forecasting algorithm that is based on a recursive Kalman filter and employs a multiple regression equation. By pre-adjusting the transmission power based on the forecasted channel load, the channel load is kept within a predefined range and channel congestion is thereby prevented. Based on this method, the CLF-BTPC, a transmission power control algorithm, is proposed. To verify the KF-BCLF algorithm, a traffic survey involving the collection of floating-car data along a major traffic road in Changchun City was employed; by comparing the forecasts with the measured channel loads, the proposed KF-BCLF algorithm is shown to be effective. In addition, the CLF-BTPC algorithm is verified by simulating a section of eight-lane highway and a signal-controlled urban intersection. The results of the two verification processes indicate that this distributed CLF-BTPC algorithm can effectively control channel load, prevent channel congestion, and enhance the stability and robustness of wireless beacon transmission in a vehicular network.
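The abstract does not give the KF-BCLF state model or regressors, so the sketch below only illustrates the recursive Kalman-filter idea on a scalar random-walk channel-load model; the function name and noise values are assumptions, not the published algorithm:

```python
def kalman_forecast(loads, q=1e-3, r=1e-2):
    """One-step-ahead channel-load forecasts from a scalar random-walk
    Kalman filter; q is the process-noise variance, r the measurement-noise
    variance. A stand-in for the multi-regressor KF-BCLF state model."""
    x, p = loads[0], 1.0              # state estimate and its variance
    forecasts = []
    for z in loads[1:]:
        # predict: under a random walk the forecast is the current state
        x_pred, p_pred = x, p + q
        forecasts.append(x_pred)
        # update the state with the new measurement z
        k = p_pred / (p_pred + r)     # Kalman gain
        x = x_pred + k * (z - x_pred)
        p = (1 - k) * p_pred
    return forecasts
```

A power controller in the CLF-BTPC spirit would then compare each forecast against the predefined load range and lower the beacon power before the congestion occurs, rather than reacting to it.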

  18. Forecasting of monsoon heavy rains: challenges in NWP

    NASA Astrophysics Data System (ADS)

    Sharma, Kuldeep; Ashrit, Raghavendra; Iyengar, Gopal; Bhatla, R.; Rajagopal, E. N.

    2016-05-01

The last decade has seen a tremendous improvement in the forecasting skill of numerical weather prediction (NWP) models, attributable to the increased sophistication of the models, which resolve complex physical processes, as well as to advanced data assimilation, increased grid resolution and satellite observations. However, prediction of heavy rain is still a challenge, since the models exhibit large errors in rainfall amount as well as in its spatial and temporal distribution. Two state-of-the-art NWP models have been investigated over the Indian monsoon region to assess their ability to predict heavy rainfall events: the unified model operational at the National Centre for Medium Range Weather Forecasting (NCUM) and the unified model operational at the Australian Bureau of Meteorology (Australian Community Climate and Earth-System Simulator -- Global, ACCESS-G). The recent (JJAS 2015) Indian monsoon season witnessed 6 depressions and 2 cyclonic storms, which resulted in heavy rain and flooding. The CRA (contiguous rain area) method of verification allows the decomposition of forecast errors into errors in rainfall volume, pattern and location. The case-by-case study using the CRA technique shows that the contributions to rainfall error from pattern and displacement are large, while the contribution from error in predicted rainfall volume is smallest.
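The CRA decomposition splits the total MSE of a matched rain area into displacement, volume and pattern terms by translating the forecast to its best-fit position. A minimal 1-D sketch in the spirit of that decomposition, assuming periodic integer shifts via np.roll (the published method operates on 2-D rain areas with proper boundary handling):

```python
import numpy as np

def cra_decompose(f, o, max_shift=5):
    """Decompose MSE(f, o) into displacement, volume and pattern parts
    by finding the integer shift of the forecast f that minimizes the
    MSE against the observation o (1-D sketch of the CRA idea)."""
    f, o = np.asarray(f, float), np.asarray(o, float)
    mse_total = np.mean((f - o) ** 2)
    # best-fit position: the periodic shift with the lowest MSE
    shifts = range(-max_shift, max_shift + 1)
    mse_shifted, best = min(
        (np.mean((np.roll(f, s) - o) ** 2), s) for s in shifts
    )
    f_best = np.roll(f, best)
    mse_volume = (f_best.mean() - o.mean()) ** 2   # error in mean rain amount
    return {
        "displacement": mse_total - mse_shifted,   # removed by relocating f
        "volume": mse_volume,
        "pattern": mse_shifted - mse_volume,       # residual shape error
        "shift": best,
    }
```

For a forecast that is a pure translation of the observed rain, the whole error shows up in the displacement term, matching the paper's finding that displacement and pattern dominate.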

  19. The Experimental Regional Ensemble Forecast System (ExREF): Its Use in NWS Forecast Operations and Preliminary Verification

    NASA Technical Reports Server (NTRS)

    Reynolds, David; Rasch, William; Kozlowski, Daniel; Burks, Jason; Zavodsky, Bradley; Bernardet, Ligia; Jankov, Isidora; Albers, Steve

    2014-01-01

The Experimental Regional Ensemble Forecast (ExREF) system is a tool for the development and testing of new Numerical Weather Prediction (NWP) methodologies. ExREF is run in near-realtime by the Global Systems Division (GSD) of the NOAA Earth System Research Laboratory (ESRL), and its products are made available through a website, an ftp site, and the Unidata Local Data Manager (LDM). The ExREF domain covers most of North America with 9-km horizontal grid spacing. The ensemble has eight members, all employing WRF-ARW. It uses a variety of initial conditions from LAPS and the Global Forecast System (GFS) and multiple boundary conditions from the GFS ensemble. Additionally, a diversity of physical parameterizations is used to increase ensemble spread and to account for the uncertainty in forecasting extreme precipitation events. ExREF has been a component of the Hydrometeorology Testbed (HMT) NWP suite in the 2012-2013 and 2013-2014 winters. A smaller domain covering just the West Coast was created to minimize bandwidth consumption for the NWS. This smaller domain has been and is being distributed to the National Weather Service (NWS) Weather Forecast Office and the California Nevada River Forecast Center in Sacramento, California, where it is ingested into the Advanced Weather Interactive Processing System (AWIPS I and II) to provide guidance on forecasting extreme precipitation events. This paper reviews the cooperative effort by NOAA ESRL, NASA SPoRT (Short-term Prediction Research and Transition Center), and the NWS to facilitate the ingest and display of ExREF data using the AWIPS I and II D2D and GFE (Graphical Forecast Editor) software.
Within GFE is a useful verification software package called BoiVer that allows the NWS to compare the River Forecast Center's 4-km gridded QPE with the 6-hr QPF from all operational NWP models, along with the ExREF mean 6-hr QPF, so that forecasters can build confidence in using ExREF when preparing their rainfall forecasts. Preliminary results will be presented.

  20. Probabilistic forecasting of extreme weather events based on extreme value theory

    NASA Astrophysics Data System (ADS)

    Van De Vyver, Hans; Van Schaeybroeck, Bert

    2016-04-01

Extreme events in weather and climate, such as high wind gusts, heavy precipitation or extreme temperatures, are commonly associated with high impacts on both environment and society. Forecasting extreme weather events is difficult, and very high-resolution models are needed to describe extreme weather phenomena explicitly. A prediction system for such events should therefore preferably be probabilistic in nature. Probabilistic forecasts and state estimations are nowadays common in the numerical weather prediction community. In this work, we develop a new probabilistic framework based on extreme value theory that aims to provide early warnings up to several days in advance. We consider pairs (X, Y) of extreme events, where X is the deterministic forecast and Y the observed variable (for instance wind speed). More specifically, two problems are addressed: (1) given a high forecast X = x_0, what is the probability that Y exceeds a high threshold y, i.e., provide inference on the conditional probability Pr{Y > y | X = x_0}; and (2) given a probabilistic model for Problem 1, what is the impact on the verification analysis of extreme events? These problems can be solved with bivariate extremes (Coles, 2001) and the verification analysis of Ferro (2007). We apply the Ramos and Ledford (2009) parametric model for bivariate tail estimation of the pair (X, Y). The model accommodates different types of extremal dependence and asymmetry within a parsimonious representation. Results are presented using the ensemble reforecast system of the European Centre for Medium-Range Weather Forecasts (Hagedorn, 2008).
References: Coles, S. (2001) An Introduction to Statistical Modelling of Extreme Values. Springer-Verlag. Ferro, C.A.T. (2007) A probability model for verifying deterministic forecasts of extreme events. Wea. Forecasting, 22, 1089-1100. Hagedorn, R. (2008) Using the ECMWF reforecast dataset to calibrate EPS forecasts. ECMWF Newsletter, 117, 8-13. Ramos, A. and Ledford, A. (2009) A new class of models for bivariate joint tails. J. R. Statist. Soc. B, 71, 219-241.
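Before fitting a parametric bivariate tail model, the conditional exceedance probability in Problem 1 can be checked against a naive empirical estimate from the forecast-observation pairs. The counting sketch below is an illustration only, with exceedance of a forecast threshold standing in for the conditioning event (it is not the Ramos-Ledford model):

```python
import numpy as np

def cond_exceedance(x, y, x_thresh, y_thresh):
    """Empirical estimate of Pr(Y > y_thresh | X > x_thresh) from paired
    forecasts x and observations y (naive counting, no tail model)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    above = x > x_thresh
    if not above.any():
        return float("nan")   # no forecast exceedances to condition on
    return float(np.mean(y[above] > y_thresh))
```

For the very high thresholds of interest, such counts become too sparse to be reliable, which is precisely why the study turns to a parametric bivariate extreme-value model.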

  1. Comparison of three different methods of perturbing the potential vorticity field in mesoscale forecasts of Mediterranean heavy precipitation events: PV-gradient, PV-adjoint and PV-satellite

    NASA Astrophysics Data System (ADS)

    Vich, M.; Romero, R.; Richard, E.; Arbogast, P.; Maynard, K.

    2010-09-01

Heavy precipitation events occur regularly in the western Mediterranean region and often have a high impact on society through economic and personal losses. Improving the mesoscale numerical forecasts of these events can help prevent or minimize that impact. In previous studies, two ensemble prediction systems (EPSs) based on perturbing the model initial and boundary conditions were developed and tested for a collection of high-impact MEDEX cyclonic episodes. These EPSs perturb the initial and boundary potential vorticity (PV) field through a PV inversion algorithm; this technique ensures modifications of all the meteorological fields without compromising the mass-wind balance. One EPS introduces the perturbations along the zones of the three-dimensional PV structure presenting the locally most intense values and gradients of the field (a semi-objective choice, PV-gradient), while the other perturbs the PV field over sensitivity zones calculated with the MM5 adjoint model (an objective method, PV-adjoint). The PV perturbations are set from a PV error climatology (PVEC) that characterizes typical PV errors in the ECMWF forecasts, both in intensity and displacement. The intensity and displacement perturbation of the PV field is chosen randomly, while its location is given by the perturbation zones defined by each ensemble generation method. Encouraged by the good results obtained by these two EPSs, a new approach based on a manual perturbation of the PV field has been tested and compared with the previous results. This technique uses satellite water vapor (WV) observations to guide the correction of initial PV structures. The correction of the PV field intends to improve the match between the PV distribution and the WV image, taking advantage of the relation between dark and bright features of WV images and PV anomalies, under some assumptions.
Afterwards, the PV inversion algorithm is applied to run a forecast from the corresponding perturbed initial state (PV-satellite). The non-hydrostatic MM5 mesoscale model has been used to run all forecasts. The simulations are performed for a two-day period on a 22.5-km resolution domain (Domain 1 in http://mm5forecasts.uib.es) nested in the ECMWF large-scale forecast fields. The MEDEX cyclone of 10 June 2000, also known as the Montserrat case, is a suitable testbed to compare the performance of each ensemble and of the PV-satellite method. This case is characterized by an Atlantic upper-level trough and a low-level cold front, which generated a stationary mesoscale cyclone over the Spanish Mediterranean coast, advecting warm and moist air from the Mediterranean Sea toward Catalonia. The resulting mesoscale convective system produced 6-h accumulated rainfall amounts of 180 mm, with material losses estimated by the media to exceed 65 million euros. The performance of both ensemble forecasting systems and of the PV-satellite technique for our case study is evaluated through verification of the rainfall field. Since the EPSs are probabilistic forecasts and PV-satellite is deterministic, their comparison is done using the individual ensemble members; the verification procedure therefore uses deterministic scores, such as the ROC curve, the Taylor diagram and the Q-Q plot. These scores cover different quality attributes of the forecast, such as reliability, resolution, uncertainty and sharpness. The results show that the performance of the PV-satellite technique lies within the performance range obtained by both ensembles; it is even better than that of the non-perturbed ensemble member. Thus, perturbing randomly using the PV error climatology and introducing the perturbations in the zones given by each EPS captures the mismatch between PV and WV fields better than manual perturbations made by an expert forecaster, at least for this case study.

  2. Assessing the value of increased model resolution in forecasting fire danger

    Treesearch

    Jeanne Hoadley; Miriam Rorig; Ken Westrick; Larry Bradshaw; Sue Ferguson; Scott Goodrick; Paul Werth

    2003-01-01

The fire season of 2000 was used as a case study to assess the value of increasing mesoscale model resolution for fire weather and fire danger forecasting. With a domain centered on Western Montana and Northern Idaho, MM5 simulations were run at 36-, 12-, and 4-km resolutions for a 30-day period at the height of the fire season. Verification analyses for meteorological...

  3. An Extended Objective Evaluation of the 29-km Eta Model for Weather Support to the United States Space Program

    NASA Technical Reports Server (NTRS)

    Nutter, Paul; Manobianco, John

    1998-01-01

    This report describes the Applied Meteorology Unit's objective verification of the National Centers for Environmental Prediction 29-km eta model during separate warm and cool season periods from May 1996 through January 1998. The verification of surface and upper-air point forecasts was performed at three selected stations important for 45th Weather Squadron, Spaceflight Meteorology Group, and National Weather Service, Melbourne operational weather concerns. The statistical evaluation identified model biases that may result from inadequate parameterization of physical processes. Since model biases are relatively small compared to the random error component, most of the total model error results from day-to-day variability in the forecasts and/or observations. To some extent, these nonsystematic errors reflect the variability in point observations that sample spatial and temporal scales of atmospheric phenomena that cannot be resolved by the model. On average, Meso-Eta point forecasts provide useful guidance for predicting the evolution of the larger scale environment. A more substantial challenge facing model users in real time is the discrimination of nonsystematic errors that tend to inflate the total forecast error. It is important that model users maintain awareness of ongoing model changes. Such changes are likely to modify the basic error characteristics, particularly near the surface.

  4. The GISS sounding temperature impact test

    NASA Technical Reports Server (NTRS)

    Halem, M.; Ghil, M.; Atlas, R.; Susskind, J.; Quirk, W. J.

    1978-01-01

The impact of DST 5 and DST 6 satellite sounding data on mid-range forecasting was studied. The GISS temperature sounding technique, the GISS time-continuous four-dimensional assimilation procedure based on optimal statistical analysis, the GISS forecast model, and the verification techniques developed, including the impact on local precipitation forecasts, are described. It is found that the impact of sounding data was substantial and beneficial for the winter test period, Jan. 29 to Feb. 21, 1976. Forecasts started from initial states obtained with the aid of satellite data showed a mean improvement of about 4 points in the 48- and 72-hour S1 scores as verified over North America and Europe. This corresponds to an 8- to 12-hour improvement in forecast range at 48 hours. An automated local precipitation forecast model applied to 128 cities in the United States showed an average 15% improvement when satellite data were used for the numerical forecasts; the improvement was 75% in the Midwest.

  5. MesoNAM Verification Phase II

    NASA Technical Reports Server (NTRS)

    Watson, Leela R.

    2011-01-01

The 45th Weather Squadron Launch Weather Officers use the 12-km resolution North American Mesoscale model (MesoNAM) forecasts to support launch weather operations. In Phase I, the performance of the model at KSC/CCAFS was measured objectively by conducting a detailed statistical analysis of model output compared to observed values. The objective analysis compared the MesoNAM forecast winds, temperature, and dew point to the observed values from the sensors in the KSC/CCAFS wind tower network. In Phase II, the AMU modified the tool by adding an additional 15 months of model output to the database and recalculating the verification statistics. The bias, standard deviation of the bias, root mean square error, and a hypothesis test for bias were calculated to verify the performance of the model. The results indicated that accuracy decreased as the forecast progressed, that there was a diurnal signal in temperature with a cool bias during the late night and a warm bias during the afternoon, and that there was a diurnal signal in dew point temperature with a low bias during the afternoon and a high bias during the late night.
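The continuous-variable statistics named above (bias, standard deviation of the bias, RMSE) can be sketched as follows; function and key names are illustrative, not the AMU's actual tool:

```python
import numpy as np

def verification_stats(forecast, observed):
    """Bias, sample standard deviation of the bias, and RMSE of paired
    forecast/observed values (e.g. tower winds, temperature, dew point)."""
    err = np.asarray(forecast, float) - np.asarray(observed, float)
    return {
        "bias": float(err.mean()),            # mean forecast minus observed
        "bias_std": float(err.std(ddof=1)),   # spread of the error about the bias
        "rmse": float(np.sqrt(np.mean(err ** 2))),
    }
```

Stratifying such statistics by forecast hour and time of day is what exposes the diurnal bias signals reported here.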

  6. Analog-Based Postprocessing of Navigation-Related Hydrological Ensemble Forecasts

    NASA Astrophysics Data System (ADS)

    Hemri, S.; Klein, B.

    2017-11-01

Inland waterway transport benefits from probabilistic forecasts of water levels, as they allow ship loads to be optimized and, hence, transport costs to be minimized. Probabilistic state-of-the-art hydrologic ensemble forecasts inherit biases and dispersion errors from the atmospheric ensemble forecasts they are driven with. The use of statistical postprocessing techniques like ensemble model output statistics (EMOS) allows these systematic errors to be reduced by fitting a statistical model to training data. In this study, training periods for EMOS are selected based on forecast analogs, i.e., historical forecasts that are similar to the forecast to be verified. Due to the strong autocorrelation of water levels, forecast analogs have to be selected based on entire forecast hydrographs in order to guarantee similar hydrograph shapes. Custom-tailored measures of similarity for forecast hydrographs comprise the hydrological series distance (SD), the hydrological matching algorithm (HMA), and dynamic time warping (DTW). Verification against observations reveals that EMOS forecasts for water level at three gauges along the river Rhine, with training periods selected based on SD, HMA, and DTW, compare favorably with reference EMOS forecasts based on either seasonal training periods or on training periods obtained by dividing the hydrological forecast trajectories into runoff regimes.
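Of the three hydrograph-similarity measures, dynamic time warping is the most generic; a minimal O(n·m) DTW sketch with absolute difference as the local cost (the SD and HMA measures are hydrology-specific and not reproduced here):

```python
import numpy as np

def dtw_distance(a, b):
    """Classic dynamic-time-warping distance between two hydrographs
    (1-D sequences), allowing local stretching of the time axis."""
    n, m = len(a), len(b)
    d = np.full((n + 1, m + 1), np.inf)   # cumulative-cost matrix
    d[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # extend the cheapest of the three admissible warping steps
            d[i, j] = cost + min(d[i - 1, j], d[i, j - 1], d[i - 1, j - 1])
    return float(d[n, m])
```

Ranking the archive of historical forecast hydrographs by such a distance to the current forecast, and keeping the closest ones as the EMOS training set, is the analog-selection idea described above.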

  7. Meteorological and Environmental Inputs to Aviation Systems

    NASA Technical Reports Server (NTRS)

    Camp, Dennis W. (Editor); Frost, Walter (Editor)

    1988-01-01

    Reports on aviation meteorology, most of them informal, are presented by representatives of the National Weather Service, the Bracknell (England) Meteorological Office, the NOAA Wave Propagation Lab., the Fleet Numerical Oceanography Center, and the Aircraft Owners and Pilots Association. Additional presentations are included on aircraft/lidar turbulence comparison, lightning detection and locating systems, objective detection and forecasting of clear air turbulence, comparative verification between the Generalized Exponential Markov (GEM) Model and official aviation terminal forecasts, the evaluation of the Prototype Regional Observation and Forecast System (PROFS) mesoscale weather products, and the FAA/MIT Lincoln Lab. Doppler Weather Radar Program.

  8. The quality and value of seasonal precipitation forecasts for an early warning of large-scale droughts and floods in West Africa

    NASA Astrophysics Data System (ADS)

    Bliefernicht, Jan; Seidel, Jochen; Salack, Seyni; Waongo, Moussa; Laux, Patrick; Kunstmann, Harald

    2017-04-01

    Seasonal precipitation forecasts are a crucial source of information for early warning of hydro-meteorological extremes in West Africa. However, the current seasonal forecasting system used by the West African weather services in the framework of the West African Climate Outlook Forum (PRESAO) is limited to probabilistic precipitation forecasts with a 1-month lead time. To improve this provision, we use an ensemble-based quantile-quantile transformation for bias correction of precipitation forecasts provided by a global seasonal ensemble prediction system, the Climate Forecast System Version 2 (CFS2). The statistical technique eliminates systematic differences between global forecasts and observations, with the potential to preserve the signal from the model. It also has the advantage that it can be easily implemented at national weather services with limited capacity. The technique is used to generate probabilistic forecasts of monthly and seasonal precipitation amounts and other precipitation indices useful for early warning of large-scale droughts and floods in West Africa. The evaluation is done using CFS hindcasts (1982 to 2009) in cross-validation mode to determine the performance of the precipitation forecasts for several lead times, focusing on drought and flood events over the Volta and Niger basins. In addition, operational forecasts provided by PRESAO are analyzed from 1998 to 2015. The precipitation forecasts are compared to low-skill reference forecasts generated from gridded observations (i.e., GPCC, CHIRPS) and a novel in-situ gauge database from national observation networks (see Poster EGU2017-10271). The forecasts are evaluated using state-of-the-art verification techniques to determine specific quality attributes of probabilistic forecasts such as reliability, accuracy, and skill.
In addition, cost-loss approaches are used to determine the value of probabilistic forecasts for multiple users in warning situations. The outcomes of the hindcast experiment for the Volta basin illustrate that the statistical technique can clearly improve the CFS precipitation forecasts, with the potential to provide skillful and valuable early precipitation warnings for large-scale drought and flood situations several months ahead. In this presentation we give a detailed overview of the ensemble-based quantile-quantile transformation, its validation and verification, and the possibilities of this technique to complement PRESAO. We also highlight the performance of the technique for extremes such as the Sahel drought of the 1980s, in comparison to the various reference data sets (e.g., CFS2, PRESAO, observational data sets) used in this study.
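
    A basic quantile-quantile transformation of the kind referred to above can be illustrated as follows. This is a simplified sketch of the general idea, not the PRESAO/CFS2 implementation, and the climatologies are invented numbers:

```python
import numpy as np

def qq_transform(fcst, hind, obs):
    """Map each forecast value onto the observed climatology: find its
    empirical quantile in the hindcast distribution, then return the
    corresponding quantile of the observations."""
    hind = np.sort(np.asarray(hind, dtype=float))
    obs = np.sort(np.asarray(obs, dtype=float))
    # empirical non-exceedance probability of each forecast value
    p = np.searchsorted(hind, fcst, side="right") / (len(hind) + 1)
    return np.quantile(obs, np.clip(p, 0.0, 1.0))

hind = [0, 10, 20, 40, 80, 120]   # model climatology (mm/month), invented
obs = [5, 30, 60, 90, 150, 200]   # observed climatology (mm/month), invented
corrected = qq_transform(np.array([40.0, 120.0]), hind, obs)
```

    Applied member by member to an ensemble, the mapping removes systematic amplitude errors while leaving the rank information of the forecast intact.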

  9. Climatology and Predictability of Cool-Season High Wind Events in the New York City Metropolitan and Surrounding Area

    NASA Astrophysics Data System (ADS)

    Layer, Michael

    Damaging wind events not associated with severe convective storms or tropical cyclones can occur over the Northeast U.S. during the cool season and can cause significant problems for transportation, infrastructure, and public safety. These non-convective wind events (NCWEs) are difficult for operational forecasters to predict in the NYC region, as revealed by relatively poor verification statistics in recent years. This study investigates the climatology of NCWEs occurring between 15 September and 15 May over 13 seasons, from 2000-2001 through 2012-2013. The events are broken down into three distinct types commonly observed in the region: pre-cold frontal (PRF), post-cold frontal (POF), and nor'easter/coastal storm (NEC) cases. Relationships between observed winds and atmospheric parameters such as the 900 hPa height gradient, 3-hour MSLP tendency, low-level wind profile, and stability are also studied. Overall, PRF and NEC events exhibit stronger height gradients, stronger low-level winds, and stronger low-level stability than POF events. Model verification is also conducted over the 2009-2014 period using the Short Range Ensemble Forecast system (SREF) from the National Centers for Environmental Prediction (NCEP). Both deterministic and probabilistic verification metrics are used to evaluate the performance of the ensemble during NCWEs. Although the SREF has better forecast skill than most of the deterministic SREF control members, it is rather poorly calibrated and exhibits significant overforecasting, i.e., a positive wind speed bias in the lower atmosphere.

  10. Hydrologic Forecasting in the 21st Century: Challenges and Directions of Research

    NASA Astrophysics Data System (ADS)

    Restrepo, P.; Schaake, J.

    2009-04-01

    Traditionally, the role of the Hydrology program of the National Weather Service has centered on forecasting floods, in order to minimize loss of life and damage to property, as well as forecasting water levels for navigable rivers and water supply in some areas of the country. A number of factors, including shifting population patterns, widespread drought, and concerns about climate change, have made it imperative to widen this focus to cover forecasting flows ranging from droughts to floods and everything in between. Because of these concerns, it is imperative to develop models that rely more on the physical characteristics of the watershed for parameterization and less on historical observations. Furthermore, it is also critical to consider explicitly the sources of uncertainty in the forecasting process, including parameter values, model structure, forcings (both observations and forecasts), initial conditions, and streamflow observations. A consequence of the more widespread occurrence of low flows, resulting either from the already evident earlier snowmelt in the Western United States or from predicted changes in precipitation patterns, is the issue of water quality: lower flows will have higher concentrations of certain pollutants. This paper describes the current projects and future directions of research for hydrologic forecasting in the United States. Ongoing projects on quantitative precipitation and temperature estimates and forecasts, uncertainty modeling by the use of ensembles, data assimilation, verification, and distributed conceptual modeling are reviewed. Broad goals of the research directions are: 1) reliable modeling of the different sources of uncertainty;
2) a more expeditious and cost-effective approach that reduces the effort required for model calibration; 3) improvements in forecast lead time and accuracy; 4) an approach for rapid adjustment of model parameters to account for changes in the watershed, both rapid, such as those resulting from forest fires or levee breaches, and slow, such as those resulting from watershed reforestation or urban development; 5) an expanded suite of products, including soil moisture and temperature forecasts and water quality constituents; and 6) a comprehensive verification system to assess the effectiveness of the other five goals. To this end, the research plan places an emphasis on models with parameters that can be derived from physical watershed characteristics. Purely physically based models may be unattainable or impractical, and, therefore, models resulting from a combination of physically and conceptually modeled processes may be required. With respect to the hydrometeorological forcings, the research plan emphasizes the development of improved precipitation estimation techniques through the synthesis of radar, rain gauge, satellite, and numerical weather prediction model output, particularly in those areas where ground-based sensors are inadequate to detect spatial variability in precipitation. Better estimation and forecasting of precipitation are most likely to be achieved by statistical merging of remote-sensor observations and forecasts from high-resolution numerical prediction models. Enhancements to the satellite-based precipitation products will include use of TRMM precipitation data in preparation for information to be supplied by the Global Precipitation Mission satellites not yet deployed.
Because of a growing need for services in water resources, including low-flow forecasts for water supply customers, we will be directing research into coupled surface-groundwater models that will eventually replace the groundwater component of the existing models, and will be part of the new generation of models. Finally, the research plan covers the directions of research for probabilistic forecasting using ensembles, data assimilation and the verification and validation of both deterministic and probabilistic forecasts.

  11. The forecaster's added value

    NASA Astrophysics Data System (ADS)

    Turco, M.; Milelli, M.

    2009-09-01

    To the authors' knowledge there are relatively few studies that try to answer the question: "Are humans able to add value to computer-generated forecasts and warnings?" Moreover, the answers are not always positive; some postprocessing methods are competitive with or superior to human forecasts (see for instance Baars et al., 2005; Charba et al., 2002; Doswell, 2003; Roebber et al., 1996; Sanders, 1986). Within the alert system of ARPA Piemonte it is possible to study in an objective manner whether the human forecaster is able to add value with respect to computer-generated forecasts. Every day the meteorology group of the Centro Funzionale of Regione Piemonte produces the HQPF (Human QPF) as an areal average for each of the 13 regional warning areas, which have been defined according to meteo-hydrological criteria. This allows the decision makers to evaluate the expected effects by comparing these HQPFs with predefined rainfall thresholds. Another important ingredient in this study is the very dense non-GTS network of rain gauges, which makes a high-resolution verification possible. In this context the most useful verification approach is to measure the QPF and HQPF skill by first converting precipitation expressed as continuous amounts into "exceedance" categories (yes-no statements indicating whether precipitation equals or exceeds selected thresholds) and then computing the performance for each threshold. In particular, in this work we compare the performance over the latest three years of the QPF derived from two meteorological models, COSMO-I7 (the Italian version of the COSMO Model, a mesoscale model developed in the framework of the COSMO Consortium) and IFS (the ECMWF global model), with the HQPF.
In this analysis it is possible to introduce the hypothesis test developed by Hamill (1999), in which a confidence interval is calculated with the bootstrap method in order to establish whether there is a real difference between the skill scores of two competing forecasts. It is important to underline that the conclusions refer to the analysis of the Piemonte operational alert system, so they cannot be taken as universally true; but we think that some of the main lessons derived from this study could be useful for the meteorological community. In detail, the main conclusions are the following: despite the overall improvement at the global scale and the fact that the resolution of limited-area models has increased considerably over recent years, the QPF produced by the meteorological models involved in this study has not improved enough to allow its direct use; that is, the subjective HQPF continues to offer the best performance. In the forecast process, the step where humans add the largest value with respect to mathematical models is communication: the human characterisation and communication of forecast uncertainty to end users cannot be replaced by any computer code. Finally, although there is no novelty in this result, we would like to show that the correct application of appropriate statistical techniques permits a better definition and quantification of the errors and, most importantly, allows a correct (unbiased) communication between forecasters and decision makers.
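
    The Hamill (1999) bootstrap comparison referred to above can be sketched as follows for a single threshold. The critical success index is used here as an example score, and the synthetic forecast/observation series are illustrative only:

```python
import numpy as np

rng = np.random.default_rng(0)

def csi(f, o):
    """Critical success index: hits / (hits + false alarms + misses)."""
    hits = np.sum(f & o)
    denom = np.sum(f | o)        # hits + false alarms + misses
    return hits / denom if denom else 0.0

def bootstrap_diff_ci(f1, f2, o, n_boot=2000, alpha=0.05):
    """Bootstrap confidence interval for the CSI difference of two
    competing forecasts, resampling forecast-observation triples."""
    n = len(o)
    diffs = np.empty(n_boot)
    for b in range(n_boot):
        idx = rng.integers(0, n, n)      # resample with replacement
        diffs[b] = csi(f1[idx], o[idx]) - csi(f2[idx], o[idx])
    return np.quantile(diffs, [alpha / 2, 1 - alpha / 2])

# synthetic exceedance events: f1 agrees with the observations 90% of
# the time, f2 is an unskilled random forecast
n = 200
o = rng.random(n) < 0.4
f1 = o ^ (rng.random(n) < 0.1)
f2 = rng.random(n) < 0.4
lo, hi = bootstrap_diff_ci(f1, f2, o)
```

    If the resulting interval excludes zero, the difference in skill between the two forecasts is judged significant at the chosen level.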

  12. Using Bayesian Model Averaging (BMA) to calibrate probabilistic surface temperature forecasts over Iran

    NASA Astrophysics Data System (ADS)

    Soltanzadeh, I.; Azadi, M.; Vakili, G. A.

    2011-07-01

    Using Bayesian Model Averaging (BMA), an attempt was made to obtain calibrated probabilistic numerical forecasts of 2-m temperature over Iran. The ensemble employs three limited-area models (WRF, MM5, and HRM), with WRF used in five different configurations. Initial and boundary conditions for MM5 and WRF are obtained from the National Centers for Environmental Prediction (NCEP) Global Forecast System (GFS); for HRM, they come from the analysis of the Global Model Europe (GME) of the German Weather Service. The resulting seven-member ensemble was run for a period of six months (December 2008 to May 2009) over Iran. The 48-h raw ensemble outputs were calibrated with the BMA technique for 120 days, using a 40-day training sample of forecasts and the corresponding verification data. The calibrated probabilistic forecasts were assessed using rank histograms and attributes diagrams. Results showed that applying BMA improved the reliability of the raw ensemble. Using the weighted ensemble mean as a deterministic forecast, it was found that the deterministic-style BMA forecasts usually performed better than the deterministic forecast of the best member.
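
    The core of a Gaussian BMA calibration can be sketched with a small EM routine. This is a generic illustration in the spirit of Raftery et al. (2005) with toy data, not the configuration used in the study (which, for instance, would also include member-specific bias correction):

```python
import numpy as np

def bma_fit(F, y, n_iter=200):
    """EM algorithm for Gaussian BMA: estimate mixture weights and a
    common predictive variance for K member forecasts F (K x T)
    against observations y (length T)."""
    K, T = F.shape
    w = np.full(K, 1.0 / K)
    s2 = np.var(y - F.mean(axis=0)) + 1e-6
    for _ in range(n_iter):
        # E-step: responsibility of member k for case t
        dens = np.exp(-0.5 * (y - F) ** 2 / s2) / np.sqrt(2 * np.pi * s2)
        z = w[:, None] * dens
        z /= z.sum(axis=0, keepdims=True)
        # M-step: update weights and common variance
        w = z.mean(axis=1)
        s2 = np.sum(z * (y - F) ** 2) / T
    return w, s2

rng = np.random.default_rng(1)
y = rng.normal(20.0, 5.0, 300)                 # toy 2-m temperatures
F = np.vstack([y + rng.normal(0, 0.5, 300),    # accurate member
               y + rng.normal(0, 4.0, 300)])   # noisy member
w, s2 = bma_fit(F, y)
bma_mean = w @ F                               # weighted ensemble mean
```

    The fitted weights concentrate on the member with the smaller forecast errors, which is what makes the BMA weighted mean competitive with the best single member.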

  13. Urban flood early warning systems: approaches to hydrometeorological forecasting and communicating risk

    NASA Astrophysics Data System (ADS)

    Cranston, Michael; Speight, Linda; Maxey, Richard; Tavendale, Amy; Buchanan, Peter

    2015-04-01

    One of the main challenges for the flood forecasting community remains the provision of reliable early warnings of surface (or pluvial) flooding. The Scottish Flood Forecasting Service has been developing approaches for forecasting the risk of surface water flooding, capitalising on the latest developments in quantitative precipitation forecasting from the Met Office. A probabilistic Heavy Rainfall Alert decision support tool helps operational forecasters assess the likelihood of surface water flooding against regional rainfall depth-duration estimates from MOGREPS-UK linked to historical short-duration flooding in Scotland. The surface water flood risk is communicated through the daily Flood Guidance Statement to emergency responders. A more recent development is an innovative risk-based hydrometeorological approach that links 24-hour ensemble rainfall forecasts through a hydrological model (Grid-to-Grid) to a library of impact assessments (Speight et al., 2015). The early warning tool, FEWS Glasgow, presents the risk of flooding to people, property, and transport across a 1 km grid over the city of Glasgow with a lead time of 24 hours. The risk was communicated in a bespoke surface water flood forecast product designed around emergency responder requirements and trialled during the 2014 Commonwealth Games in Glasgow. These new approaches to surface water flood forecasting are leading to improved methods of communicating risk and to better early-warning performance, with false alarm rates for summer flood guidance falling from 81% in 2013 to 67% in 2014, although verification of instances of surface water flooding remains difficult. However, the introduction of more demanding hydrometeorological capabilities, with their associated greater levels of uncertainty, does increase the demand on operational flood forecasting skills and resources.
Speight, L., Cole, S.J., Moore, R.J., Pierce, C., Wright, B., Golding, B., Cranston, M., Tavendale, A., Ghimire, S., and Dhondia, J. (2015) Developing surface water flood forecasting capabilities in Scotland: an operational pilot for the 2014 Commonwealth Games in Glasgow. Journal of Flood Risk Management, In Press.

  14. Post-processing ECMWF precipitation and temperature ensemble reforecasts for operational hydrologic forecasting at various spatial scales

    NASA Astrophysics Data System (ADS)

    Verkade, J. S.; Brown, J. D.; Reggiani, P.; Weerts, A. H.

    2013-09-01

    The ECMWF temperature and precipitation ensemble reforecasts are evaluated for biases in the mean, spread, and forecast probabilities, and for how these biases propagate to streamflow ensemble forecasts. The forcing ensembles are subsequently post-processed to reduce bias and increase skill, and to investigate whether this leads to improved streamflow ensemble forecasts. Multiple post-processing techniques are used: the quantile-to-quantile transform, linear regression with an assumption of bivariate normality, and logistic regression. Both the raw and post-processed ensembles are run through a hydrologic model of the river Rhine to create streamflow ensembles. The results are compared using multiple verification metrics and skill scores: relative mean error, the Brier skill score and its decompositions, the mean continuous ranked probability skill score and its decomposition, and the ROC score. Verification of the streamflow ensembles is performed at multiple spatial scales: relatively small headwater basins, large tributaries, and the Rhine outlet at Lobith. The streamflow ensembles are verified against simulated streamflow, in order to isolate the effects of biases in the forcing ensembles and of any improvements therein. The results indicate that the forcing ensembles contain significant biases and that these cascade to the streamflow ensembles. Some of the bias in the forcing ensembles is unconditional in nature; this was resolved by a simple quantile-to-quantile transform. Improvements in conditional bias and skill of the forcing ensembles vary with forecast lead time, amount, and spatial scale, but are generally moderate. The translation to streamflow forecast skill is further muted, and several explanations are considered, including limitations in the modelling of the space-time covariability of the forcing ensembles and the presence of storages.
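
    Of the scores listed, the continuous ranked probability score can be computed directly from the ensemble members. The following is a generic sketch using the standard kernel identity, not the verification system used in the paper:

```python
import numpy as np

def crps_ensemble(members, obs):
    """Sample CRPS of an ensemble for one observation, via the identity
    CRPS = E|X - y| - 0.5 * E|X - X'| (Gneiting and Raftery, 2007)."""
    x = np.asarray(members, dtype=float)
    return np.mean(np.abs(x - obs)) - 0.5 * np.mean(np.abs(x[:, None] - x))

score = crps_ensemble([0.0, 1.0], 0.5)
```

    The corresponding skill score is then 1 - CRPS_forecast / CRPS_reference, with the reference typically being climatology or, as in the paper, a baseline forecast.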

  15. Solar activity simulation and forecast with a flux-transport dynamo

    NASA Astrophysics Data System (ADS)

    Macario-Rojas, Alejandro; Smith, Katharine L.; Roberts, Peter C. E.

    2018-06-01

    We present the assessment of a diffusion-dominated, mean-field, axisymmetric dynamo model in reproducing historical solar activity and forecasting solar cycle 25. Previous studies point to the Sun's polar magnetic field as an important proxy for solar activity prediction. Extended research using this proxy has been impeded by the limited observational record, available only from 1976 onward. However, there is a recognised need for a solar dynamo model with ample verification over various activity scenarios to improve theoretical standards. The present study explores the use of helioseismology data and a reconstructed solar polar magnetic field to foster the development of robust solar activity forecasts. The research is based on observationally inferred differential rotation morphology, as well as observed polar field and polar field reconstructed with artificial neural network methods from the hemispheric sunspot area record. Results show consistent reproduction of historical solar activity trends, with enhanced results obtained by introducing a precursor rise-time coefficient. A weak solar cycle 25 is predicted, with a slow rise time and a maximum activity of -14.4% (±19.5%) with respect to the current cycle 24.

  16. Towards guided data assimilation for operational hydrologic forecasting in the US Tennessee River basin

    NASA Astrophysics Data System (ADS)

    Weerts, A.; Wood, A. W.; Clark, M. P.; Carney, S.; Day, G. N.; Lemans, M.; Sumihar, J.; Newman, A. J.

    2014-12-01

    In the US, the forecasting approach used by the NWS River Forecast Centers and other regional organizations, such as the Bonneville Power Administration (BPA) or the Tennessee Valley Authority (TVA), has traditionally involved manual model input and state modifications made by forecasters in real time. This process is time consuming and requires expert knowledge and experience. The benefits of automated data assimilation (DA) as a strategy for avoiding manual modification have been demonstrated in research studies (e.g., Seo et al., 2009). This study explores the use of various ensemble DA algorithms within the operational platform used by TVA. The final goal is to identify a DA algorithm that will guide the manual modification process used by TVA forecasters and realize considerable time savings within the forecast process, without loss of (or even with enhanced) forecast quality. We evaluate the usability of several popular DA algorithms that have so far seen only limited use in operational hydrology. To this end, Delft-FEWS was wrapped (via piwebservice) in OpenDA to enable execution of FEWS workflows (and the chained models within these workflows, including SACSMA, UNITHG, and LAGK) in a DA framework. Within OpenDA, several filter methods are available; we considered four algorithms: a particle filter (RRF), the Ensemble Kalman Filter, and the Asynchronous Ensemble Kalman and particle filters. Retrospective simulation results for one location and algorithm (AEnKF) are illustrated in Figure 1. The initial results are promising. We will present verification results for these methods (and possibly more) for a variety of sub-basins in the Tennessee River basin. Finally, we will offer recommendations for guided DA based on our results. References: Seo, D.-J., L. Cajina, R. Corby and T. Howieson, 2009: Automatic State Updating for Operational Streamflow Forecasting via Variational Data Assimilation, Journal of Hydrology, 367, 255-275. Figure 1.
Retrospectively simulated streamflow for the headwater basin above Powell River at Jonesville (red is observed flow, blue is simulated flow without DA, black is simulated flow with DA)

  17. Calibration of limited-area ensemble precipitation forecasts for hydrological predictions

    NASA Astrophysics Data System (ADS)

    Diomede, Tommaso; Marsigli, Chiara; Montani, Andrea; Nerozzi, Fabrizio; Paccagnella, Tiziana

    2015-04-01

    The main objective of this study is to investigate the impact of calibration on limited-area ensemble precipitation forecasts used to drive discharge predictions up to 5 days in advance. A 30-year reforecast dataset based on the Consortium for Small-Scale Modeling Limited-Area Ensemble Prediction System (COSMO-LEPS) was used for testing the calibration strategy. Three calibration techniques were applied: quantile-to-quantile mapping, linear regression, and analogs. The performance of these methodologies was evaluated in terms of statistical scores for the precipitation forecasts operationally provided by COSMO-LEPS in the years 2003-2007 over Germany, Switzerland, and the Emilia-Romagna region (northern Italy). The analog-based method was preferred because of its capability to correct position errors and spread deficiencies. A suitable spatial domain for the analog search can help to handle model spatial errors as systematic errors. However, the performance of the analog-based method may degrade when only a limited training dataset is available; a sensitivity test on the length of the training dataset over which the analog search is performed was therefore carried out. The quantile-to-quantile mapping and linear regression methods were less effective, mainly because the forecast-analysis relation was not strong for the available training dataset. A comparison between calibration based on the deterministic reforecast and calibration based on the full operational ensemble as training dataset was also considered, with the aim of evaluating whether reforecasts are really worthwhile for calibration, given their considerable computational cost. The calibration process was then verified by coupling the ensemble precipitation forecasts with a distributed rainfall-runoff model.
This test was carried out for a medium-sized catchment located in Emilia-Romagna, showing a beneficial impact of the analog-based method on the reduction of missed events for discharge predictions.

  18. Comparative verification between GEM model and official aviation terminal forecasts

    NASA Technical Reports Server (NTRS)

    Miller, Robert G.

    1988-01-01

    The Generalized Exponential Markov (GEM) model uses the local standard airways observation (SAO) to predict hour-by-hour the following elements: temperature, pressure, dew point depression, first and second cloud-layer height and amount, ceiling, total cloud amount, visibility, wind, and present weather conditions. GEM is superior to persistence at all projections for all elements in a large independent sample. A minute-by-minute GEM forecasting system utilizing the Automated Weather Observation System (AWOS) is under development.

  19. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Morley, Steven

    The PyForecastTools package provides Python routines for calculating metrics for model validation, forecast verification, and model comparison. For continuous predictands the package provides functions for calculating bias (mean error, mean percentage error, median log accuracy, symmetric signed bias) and accuracy (mean squared error, mean absolute error, mean absolute scaled error, normalized RMSE, median symmetric accuracy). Convenience routines to calculate the component parts (e.g., forecast error, scaled error) of each metric are also provided. To compare models the package provides a generic skill score and "percent better". Robust measures of scale, including the median absolute deviation, robust standard deviation, robust coefficient of variation, and the Sn estimator, are all provided by the package. Finally, the package implements Python classes for NxN contingency tables. In the case of a multi-class prediction, accuracy and skill metrics such as proportion correct and the Heidke and Peirce skill scores are provided as object methods. The special case of a 2x2 contingency table inherits from the NxN class and provides many additional metrics for binary classification: probability of detection, probability of false detection, false alarm ratio, threat score, equitable threat score, and bias. Confidence intervals for many of these quantities can be calculated using either the Wald method or Agresti-Coull intervals.
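
    For the 2x2 case, the listed measures reduce to simple counts of hits, false alarms, misses, and correct rejections. The sketch below is an independent illustration of the standard definitions, not PyForecastTools code:

```python
import numpy as np

def binary_metrics(fc, ob):
    """Standard verification measures from a 2x2 contingency table."""
    fc, ob = np.asarray(fc, bool), np.asarray(ob, bool)
    a = np.sum(fc & ob)      # hits
    b = np.sum(fc & ~ob)     # false alarms
    c = np.sum(~fc & ob)     # misses
    d = np.sum(~fc & ~ob)    # correct rejections
    return {
        "POD": a / (a + c),                # probability of detection
        "POFD": b / (b + d),               # probability of false detection
        "FAR": b / (a + b),                # false alarm ratio
        "TS": a / (a + b + c),             # threat score
        "bias": (a + b) / (a + c),         # frequency bias
        "PSS": a / (a + c) - b / (b + d),  # Peirce skill score
    }

m = binary_metrics([1, 1, 0, 0, 1], [1, 0, 0, 0, 1])
```

    A real package would additionally guard against empty marginals (e.g., no observed events) before dividing.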

  20. Life beyond MSE and R2 — improving validation of predictive models with observations

    NASA Astrophysics Data System (ADS)

    Papritz, Andreas; Nussbaum, Madlene

    2017-04-01

    Machine learning and statistical predictive methods are evaluated by the closeness of their predictions to the observations of a test dataset. Common criteria for rating predictive methods are bias and mean square error (MSE), characterizing systematic and random prediction errors. Many studies also report R2-values, but their meaning is not always clear (correlation between observations and predictions, or MSE skill score; Wilks, 2011). The same criteria are also used for choosing tuning parameters of predictive procedures by cross-validation and bagging (e.g., Hastie et al., 2009). For evident reasons, the atmospheric sciences have developed a rich toolbox for forecast verification. Specific criteria have been proposed for evaluating deterministic and probabilistic predictions of binary, multinomial, ordinal, and continuous responses (see the reviews by Wilks, 2011, Jolliffe and Stephenson, 2012, and Gneiting et al., 2007). It appears that these techniques are not well known in the part of the geosciences community interested in machine learning. In our presentation we review techniques that offer more insight into the proximity of data and predictions than bias, MSE, and R2 alone. We mention only two examples here: (i) graphing observations vs. predictions is usually more appropriate than the reverse (Piñeiro et al., 2008); (ii) the decomposition of the Brier score (= MSE for probabilistic predictions of binary yes/no data) into reliability and resolution reveals (conditional) bias and the capability of the predictions to discriminate yes/no observations. We illustrate the approaches with applications from digital soil mapping studies. Gneiting, T., Balabdaoui, F., and Raftery, A. E. (2007). Probabilistic forecasts, calibration and sharpness. Journal of the Royal Statistical Society Series B, 69, 243-268. Hastie, T., Tibshirani, R., and Friedman, J. (2009). The Elements of Statistical Learning: Data Mining, Inference and Prediction. Springer, New York, second edition. Jolliffe, I. T.
and Stephenson, D. B., editors (2012). Forecast Verification: A Practitioner's Guide in Atmospheric Science. Wiley-Blackwell, second edition. Piñeiro, G., Perelman, S., Guerschman, J., and Paruelo, J. (2008). How to evaluate models: Observed vs. predicted or predicted vs. observed? Ecological Modelling, 216, 316-322. Wilks, D. S. (2011). Statistical Methods in the Atmospheric Sciences. Academic Press, third edition.
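
    The Brier score decomposition into reliability and resolution (plus the uncertainty term) mentioned above can be sketched as follows; the binning scheme and toy data are illustrative only:

```python
import numpy as np

def brier_decomposition(p, o, bins=np.linspace(0, 1, 11)):
    """Murphy decomposition BS = reliability - resolution + uncertainty,
    obtained by binning forecast probabilities p against outcomes o."""
    p, o = np.asarray(p, float), np.asarray(o, float)
    n, obar = len(o), o.mean()
    idx = np.clip(np.digitize(p, bins) - 1, 0, len(bins) - 2)
    rel = res = 0.0
    for k in range(len(bins) - 1):
        m = idx == k
        if m.any():
            pk, ok = p[m].mean(), o[m].mean()   # bin mean forecast / freq.
            rel += m.sum() / n * (pk - ok) ** 2
            res += m.sum() / n * (ok - obar) ** 2
    return rel, res, obar * (1 - obar)

p = np.array([0.1] * 10 + [0.8] * 10)                    # forecast probs
o = np.array([1] + [0] * 9 + [1] * 8 + [0] * 2, float)   # binary outcomes
rel, res, unc = brier_decomposition(p, o)                # BS = rel - res + unc
```

    In this toy example the forecasts are perfectly reliable (bin frequencies match the forecast probabilities), so the reliability term vanishes and the score is carried by resolution and uncertainty.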

  1. Evaluation of WRF-based convection-permitting multi-physics ensemble forecasts over China for an extreme rainfall event on 21 July 2012 in Beijing

    NASA Astrophysics Data System (ADS)

    Zhu, Kefeng; Xue, Ming

    2016-11-01

    On 21 July 2012, an extreme rainfall event, with a recorded maximum 24-h rainfall of 460 mm, occurred in Beijing, China. Most operational models failed to predict such an extreme amount. In this study, a convection-permitting ensemble forecast system (CEFS) at 4-km grid spacing, covering the entire mainland of China, is applied to this extreme rainfall case. CEFS consists of 22 members and uses multiple physics parameterizations. For the event, the predicted maximum is 415 mm d-1 in the probability-matched ensemble mean. The predicted high-probability heavy-rain region is located in southwest Beijing, as was observed. Ensemble-based verification scores are then investigated. For a small verification domain covering Beijing and its surrounding areas, the precipitation rank histogram of CEFS is much flatter than that of a reference global ensemble. CEFS has a lower Brier score and a higher resolution than the global ensemble for precipitation, indicating more reliable probabilistic forecasting by CEFS. Additionally, the forecasts of different ensemble members are compared and discussed. Most of the extreme rainfall comes from convection in the warm sector east of an approaching cold front. A few members of CEFS successfully reproduce such precipitation, and orographic lift of highly moist low-level flows with a significant southeasterly component is suggested to have played an important role in producing the initial convection. Comparisons between good and bad forecast members indicate a strong sensitivity of the extreme rainfall to the mesoscale environmental conditions and, to a lesser extent, the model physics.
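
    The rank (Talagrand) histogram used in this kind of ensemble verification can be computed as below. This is a generic sketch with synthetic Gaussian data, not the study's precipitation verification:

```python
import numpy as np

def rank_histogram(ens, obs):
    """Tally the rank of each observation within its sorted ensemble;
    a flat histogram indicates a statistically consistent ensemble."""
    ens, obs = np.asarray(ens), np.asarray(obs)   # (cases, members), (cases,)
    ranks = np.sum(ens < obs[:, None], axis=1)    # rank 0 .. n_members
    return np.bincount(ranks, minlength=ens.shape[1] + 1)

# well-dispersed toy ensemble: members and observations drawn from the
# same distribution, so the histogram should be roughly flat
rng = np.random.default_rng(0)
ens = rng.normal(size=(5000, 9))
obs = rng.normal(size=5000)
hist = rank_histogram(ens, obs)
```

    U-shaped histograms instead indicate under-dispersion and dome shapes over-dispersion, which is how the flatness comparison against the reference global ensemble is read.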

  2. Cyber Surveillance for Flood Disasters

    PubMed Central

    Lo, Shi-Wei; Wu, Jyh-Horng; Lin, Fang-Pang; Hsu, Ching-Han

    2015-01-01

    Regional heavy rainfall is usually caused by extreme weather conditions. Intense heavy rainfall often results in the flooding of rivers and neighboring low-lying areas, which is responsible for a large number of casualties and considerable property loss. Existing precipitation forecast systems mostly focus on the analysis and forecasting of large-scale areas but do not provide precise, instant, automatic monitoring and alert feedback for individual river areas and sections. Therefore, in this paper, we propose a simple method to automatically monitor flooding in a specific area, based on the currently widespread remote cyber surveillance systems and image processing methods, in order to obtain instant feedback on flooding and waterlogging events. The intrusion detection mode of these surveillance systems is used in this study, wherein a flood is treated as a possible intruding object. Through the detection and verification of flood objects, automatic flood risk-level monitoring of specific individual river segments, as well as automatic urban inundation detection, becomes possible. The proposed method can better meet the practical needs of disaster prevention than large-area forecasting. It also has several other advantages, such as flexibility in location selection, no requirement for a standard water-level ruler, and a relatively large field of view compared with traditional water-level measurements using video screens. The results can offer prompt reference for appropriate disaster warning actions in small areas, making them more accurate and effective. PMID:25621609

  3. Integrated Medical Model Verification, Validation, and Credibility

    NASA Technical Reports Server (NTRS)

    Walton, Marlei; Kerstman, Eric; Foy, Millennia; Shah, Ronak; Saile, Lynn; Boley, Lynn; Butler, Doug; Myers, Jerry

    2014-01-01

    The Integrated Medical Model (IMM) was designed to forecast relative changes for a specified set of crew health and mission success risk metrics by using a probabilistic (stochastic process) model based on historical data, cohort data, and subject matter expert opinion. A probabilistic approach is taken since exact (deterministic) results would not appropriately reflect the uncertainty in the IMM inputs. Once the IMM was conceptualized, a plan was needed to rigorously assess input information, framework and code, and output results of the IMM, and ensure that end user requests and requirements were considered during all stages of model development and implementation. METHODS: In 2008, the IMM team developed a comprehensive verification and validation (V&V) plan, which specified internal and external review criteria encompassing 1) verification of data and IMM structure to ensure proper implementation of the IMM, 2) several validation techniques to confirm that the simulation capability of the IMM appropriately represents occurrences and consequences of medical conditions during space missions, and 3) credibility processes to develop user confidence in the information derived from the IMM. When the NASA-STD-7009 (7009) was published, the IMM team updated their verification, validation, and credibility (VVC) project plan to meet 7009 requirements and include 7009 tools in reporting VVC status of the IMM. RESULTS: IMM VVC updates are compiled recurrently and include 7009 Compliance and Credibility matrices, IMM V&V Plan status, and a synopsis of any changes or updates to the IMM during the reporting period. Reporting tools have evolved over the lifetime of the IMM project to better communicate VVC status. This has included refining the original 7009 methodology with augmentation from the NASA-STD-7009 Guidance Document. 
End user requests and requirements are being satisfied as evidenced by ISS Program acceptance of IMM risk forecasts, transition to an operational model and simulation tool, and completion of service requests from a broad end user consortium including Operations, Science and Technology Planning, and Exploration Planning. CONCLUSIONS: The VVC approach established by the IMM project of combining the IMM V&V Plan with 7009 requirements is comprehensive and includes the involvement of end users at every stage in IMM evolution. Methods and techniques used to quantify the VVC status of the IMM have not only received approval from the local NASA community but have also garnered recognition by other federal agencies seeking to develop similar guidelines in the medical modeling community.

  4. Evaluation of the NCEP CFSv2 45-day Forecasts for Predictability of Intraseasonal Tropical Storm Activities

    NASA Astrophysics Data System (ADS)

    Schemm, J. E.; Long, L.; Baxter, S.

    2013-12-01

    Predictability of intraseasonal tropical storm (TS) activity is assessed using the 1999-2010 CFSv2 hindcast suite. Weekly TS activity in the CFSv2 45-day forecasts was determined using the TS detection and tracking method devised by Camargo and Zebiak (2002). The forecast periods are divided into weekly intervals for Week 1 through Week 6, plus the 30-day mean. The TS activities in those intervals are compared to the observed activities based on the NHC HURDAT and JTWC Best Track datasets. The CFSv2 45-day hindcast suite comprises forecast runs initialized at 00, 06, 12 and 18Z every day during the 1999-2010 period. For the predictability evaluation, forecast TS activities are analyzed based on 20-member ensemble forecasts composed of 45-day runs made during the most recent 5 days prior to the verification period. The forecast TS activities are evaluated in terms of the number of storms, genesis locations and storm tracks during the weekly periods. The CFSv2 forecasts are shown to have a fair level of skill in predicting the number of storms over the Atlantic Basin, with temporal correlation scores ranging from 0.73 for Week 1 forecasts to 0.63 for Week 6, and average RMS errors ranging from 0.86 to 1.07, during the 1999-2010 hurricane seasons. Also, the forecast track density distribution and false alarm statistics are compiled using the hindcast analyses. In real-time applications of the intraseasonal TS activity forecasts, the climatological TS forecast statistics will be used to correct model biases in the storm counts and track distribution and to remove false alarms. 
    An operational implementation of the weekly TS activity prediction is planned for early 2014 to provide an objective input for the CPC's Global Tropical Hazards Outlooks.
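    The two headline verification measures for the weekly storm counts, temporal correlation and RMS error, reduce to a few lines. A generic sketch with made-up counts, not the study's data:

```python
import numpy as np

def count_skill(forecast_counts, observed_counts):
    """Temporal correlation and RMS error of weekly storm counts."""
    f = np.asarray(forecast_counts, float)
    o = np.asarray(observed_counts, float)
    corr = np.corrcoef(f, o)[0, 1]
    rmse = np.sqrt(np.mean((f - o) ** 2))
    return corr, rmse

# Forecasts that are one storm too high every week: perfectly
# correlated with the observations, but with an RMSE of 1.
corr, rmse = count_skill([2, 3, 4, 5], [1, 2, 3, 4])
```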

  5. On the predictability of outliers in ensemble forecasts

    NASA Astrophysics Data System (ADS)

    Siegert, S.; Bröcker, J.; Kantz, H.

    2012-03-01

    In numerical weather prediction, ensembles are used to retrieve probabilistic forecasts of future weather conditions. We consider events where the verification is smaller than the smallest, or larger than the largest ensemble member of a scalar ensemble forecast. These events are called outliers. In a statistically consistent K-member ensemble, outliers should occur with a base rate of 2/(K+1). In operational ensembles this base rate tends to be higher. We study the predictability of outlier events in terms of the Brier Skill Score and find that forecast probabilities can be calculated which are more skillful than the unconditional base rate. This is shown analytically for statistically consistent ensembles. Using logistic regression, forecast probabilities for outlier events in an operational ensemble are calculated. These probabilities exhibit positive skill which is quantitatively similar to the analytical results. Possible causes of these results as well as their consequences for ensemble interpretation are discussed.
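    The 2/(K+1) base rate and the skill comparison can be checked with a short simulation; a sketch assuming a statistically consistent Gaussian ensemble, not the operational data:

```python
import numpy as np

def outlier_rate(ensemble, obs):
    """Fraction of cases whose verification lies below the smallest or
    above the largest of the K ensemble members."""
    ensemble = np.asarray(ensemble)
    obs = np.asarray(obs)
    out = (obs < ensemble.min(axis=1)) | (obs > ensemble.max(axis=1))
    return out.mean()

def brier_skill_score(prob, event, base_rate):
    """Skill of probability forecasts relative to the constant
    base-rate (climatological) forecast."""
    prob, event = np.asarray(prob, float), np.asarray(event, float)
    bs = np.mean((prob - event) ** 2)
    bs_ref = np.mean((base_rate - event) ** 2)
    return 1.0 - bs / bs_ref

# Consistent K=9 member ensemble: outliers should occur at
# the base rate 2/(K+1) = 0.2.
rng = np.random.default_rng(1)
rate = outlier_rate(rng.normal(size=(20000, 9)), rng.normal(size=20000))
```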

  6. Discriminant analysis forecasting model of first trimester pregnancy outcomes developed by following 9,963 infertile patients after in vitro fertilization.

    PubMed

    Yi, Yan; Li, Xihong; Ouyang, Yan; Lin, Ge; Lu, Guangxiu; Gong, Fei

    2016-05-01

    To investigate a forecasting method developed to predict first trimester pregnancy outcomes using the first routine ultrasound scan for early pregnancy on days 27-29 after embryo transfer (ET) and to determine whether to perform a repeated scan several days later based on this forecasting method. Prospective analysis. Infertile patients at an assisted reproductive technology center. A total of 9,963 patients with an early singleton pregnancy after in vitro fertilization (IVF)-ET. None. Ongoing pregnancy >12 weeks of gestation. The classification score of ongoing pregnancy was equal to (1.57 × Maternal age) + (1.01 × Mean sac diameter) + (-0.19 × Crown-rump length) + 25.15 (if cardiac activity is present) + 1.30 (if intrauterine hematomas are present) - 47.35. The classification score of early pregnancy loss was equal to (1.66 × Maternal age) + (0.84 × Mean sac diameter) + (-0.38 × Crown-rump length) + 8.69 (if cardiac activity is present) + 1.60 (if intrauterine hematomas are present) - 34.77. In the verification samples, 94.44% of cases were correctly classified using these forecasting models. The discriminant forecasting models are accurate in predicting first trimester pregnancy outcomes based on the first scan for early pregnancy after ET. When the predicted outcome is ongoing pregnancy, a second scan can be postponed until 11-14 weeks if no symptoms of abdominal pain or vaginal bleeding are present. When the predicted outcome is early pregnancy loss, repeated scans are imperative to avoid a misdiagnosis before evacuating the uterus. Copyright © 2016 American Society for Reproductive Medicine. Published by Elsevier Inc. All rights reserved.
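    The two discriminant functions quoted above can be transcribed directly into code; the predicted class is the one with the larger score. A sketch using the published coefficients; the function name and the example values are illustrative:

```python
def classify_pregnancy(age, mean_sac_diameter, crown_rump_length,
                       cardiac_activity, hematoma):
    """Apply the two published discriminant scores and predict the
    class with the larger score.  The last two arguments are 1 if
    the finding is present, 0 otherwise."""
    ongoing = (1.57 * age + 1.01 * mean_sac_diameter
               - 0.19 * crown_rump_length
               + 25.15 * cardiac_activity + 1.30 * hematoma - 47.35)
    loss = (1.66 * age + 0.84 * mean_sac_diameter
            - 0.38 * crown_rump_length
            + 8.69 * cardiac_activity + 1.60 * hematoma - 34.77)
    label = "ongoing pregnancy" if ongoing > loss else "early pregnancy loss"
    return label, ongoing, loss

# Illustrative case: age 30, MSD 20 mm, CRL 10 mm, cardiac activity
# present, no hematoma.
label, s_ongoing, s_loss = classify_pregnancy(30, 20, 10, 1, 0)
```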

  7. Verification of Global Assimilation of Ionospheric Measurements Gauss Markov (GAIM-GM) Model Forecast Accuracy

    DTIC Science & Technology

    2011-09-01

    [Snippet includes residue of Figure 25, a scatter plot with a number-of-occurrences axis and panels (a) Kp 0-3 and (b) Kp 4-9.] ...time-dependent, physics-based model that uses the Ionospheric Forecast Model (IFM) as a background model upon which perturbations are imposed via a Kalman filter... vertical output resolution as the IFM. GAIM-GM can also be run in a regional mode with a finer resolution (Scherliess et al., 2006). GAIM-GM is...

  8. Bayesian quantitative precipitation forecasts in terms of quantiles

    NASA Astrophysics Data System (ADS)

    Bentzien, Sabrina; Friederichs, Petra

    2014-05-01

    Ensemble prediction systems (EPS) for numerical weather prediction on the mesoscale are developed in particular to obtain probabilistic guidance for high-impact weather. An EPS issues not only a deterministic future state of the atmosphere but a sample of possible future states. Ensemble postprocessing then translates such a sample of forecasts into probabilistic measures. This study focuses on probabilistic quantitative precipitation forecasts in terms of quantiles. Quantiles are particularly suitable for describing precipitation at various locations, since no assumption is required on the distribution of precipitation. The focus is on prediction during high-impact events, and the work is related to the Volkswagen Stiftung funded project WEX-MOP (Mesoscale Weather Extremes - Theory, Spatial Modeling and Prediction). Quantile forecasts are derived from the raw ensemble and via quantile regression. The neighborhood method and time-lagging are effective tools to inexpensively increase the ensemble spread, which results in more reliable forecasts, especially for extreme precipitation events. Since an EPS provides a large number of potentially informative predictors, a variable selection is required in order to obtain a stable statistical model. A Bayesian formulation of quantile regression allows for inference about the selection of predictive covariates through the use of appropriate prior distributions. Moreover, the implementation of an additional process layer for the regression parameters accounts for spatial variations of the parameters. Bayesian quantile regression and its spatially adaptive extension are illustrated for the German-focused mesoscale weather prediction ensemble COSMO-DE-EPS, which has run (pre)operationally since December 2010 at the German Meteorological Service (DWD). Objective out-of-sample verification uses the quantile score (QS), a weighted absolute error between quantile forecasts and observations. 
    The QS is a proper scoring function and can be decomposed into reliability, resolution and uncertainty parts. A quantile reliability plot gives detailed insight into the predictive performance of the quantile forecasts.
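    The quantile score described above is the mean check (pinball) loss, the weighted absolute error that is proper for the tau-quantile; a minimal sketch:

```python
import numpy as np

def quantile_score(q_forecast, obs, tau):
    """Mean pinball (check) loss for quantile level tau: underprediction
    of the tau-quantile is weighted by tau, overprediction by 1-tau."""
    q = np.asarray(q_forecast, float)
    y = np.asarray(obs, float)
    u = y - q
    return np.mean(np.where(u >= 0, tau * u, (tau - 1) * u))
```

    A perfect quantile forecast scores 0; for tau = 0.9, missing low by 2 mm costs 0.9 * 2, while missing high by 2 mm costs only 0.1 * 2.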

  9. Extended Range Prediction of Indian Summer Monsoon: Current status

    NASA Astrophysics Data System (ADS)

    Sahai, A. K.; Abhilash, S.; Borah, N.; Joseph, S.; Chattopadhyay, R.; S, S.; Rajeevan, M.; Mandal, R.; Dey, A.

    2014-12-01

    The main focus of this study is to develop a forecast consensus for the extended range prediction (ERP) of monsoon intraseasonal oscillations using a suite of different variants of the Climate Forecast System (CFS) model. In this CFS-based Grand MME prediction system (CGMME), the ensemble members are generated by perturbing the initial condition and using different configurations of CFSv2. This is to address the role of different physical mechanisms known to control error growth in ERP on the 15-20 day time scale. The final formulation of CGMME is based on 21 ensembles of the standalone Global Forecast System (GFS) forced with bias-corrected forecasted SST from CFS, 11 low-resolution CFST126 members and 11 high-resolution CFST382 members. Thus, we develop the multi-model consensus forecast for the ERP of the Indian summer monsoon (ISM) using a suite of different variants of the CFS model. This coordinated international effort has led toward the development of specific tailor-made regional forecast products over the Indian region. The skill of deterministic and probabilistic categorical rainfall forecasts, as well as the verification of large-scale low-frequency monsoon intraseasonal oscillations, has been assessed using hindcasts from 2001-2012 for the monsoon season, in which all models are initialized every five days from 16 May to 28 September. The skill of the deterministic forecast from CGMME is better than that of the best participating single model ensemble configuration (SME). The CGMME approach is believed to quantify the uncertainty in both initial conditions and model formulation. The main improvement is attained in the probabilistic forecast because of an increase in the ensemble spread, thereby reducing the error due to over-confident ensembles in a single model configuration. For the probabilistic forecast, three tercile ranges are determined by a ranking method based on the percentage of ensemble members from all the participating models that fall in those three categories. 
    CGMME further adds value to both deterministic and probabilistic forecasts compared to the raw SMEs, and this better skill likely flows from the larger spread and improved spread-error relationship. The CGMME system is currently capable of generating ER predictions in real time and has successfully delivered its experimental operational ER forecast of the ISM for the last few years.
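    The tercile probabilities described above, the fraction of members from all participating models falling in each category, can be sketched as follows. This is an illustrative implementation in which the tercile bounds are estimated from a climatological reference sample:

```python
import numpy as np

def tercile_probabilities(members, climatology):
    """Probabilistic tercile forecast: fraction of pooled ensemble
    members falling below, within, and above the climatological
    tercile bounds."""
    lo, hi = np.percentile(climatology, [100 / 3, 200 / 3])
    m = np.asarray(members, float)
    p_below = (m < lo).mean()
    p_above = (m > hi).mean()
    return p_below, 1.0 - p_below - p_above, p_above

# Toy climatology and a pooled 4-member forecast.
climo = np.arange(300.0)
p_below, p_normal, p_above = tercile_probabilities([0, 50, 150, 250], climo)
```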

  10. Nowcasting of deep convective clouds and heavy precipitation: Comparison study between NWP model simulation and extrapolation

    NASA Astrophysics Data System (ADS)

    Bližňák, Vojtěch; Sokol, Zbyněk; Zacharov, Petr

    2017-02-01

    An evaluation of convective cloud forecasts performed with the numerical weather prediction (NWP) model COSMO and extrapolation of cloud fields is presented using observed data derived from the geostationary satellite Meteosat Second Generation (MSG). The present study focuses on the nowcasting range (1-5 h) for five severe convective storms in their developing stage that occurred during the warm season in the years 2012-2013. Radar reflectivity and extrapolated radar reflectivity data were assimilated for at least 6 h, depending on the time of occurrence of convection. Synthetic satellite imagery was calculated using the radiative transfer model RTTOV v10.2, which was implemented in the COSMO model. NWP model simulations of IR10.8 μm and WV06.2 μm brightness temperatures (BTs) with a horizontal resolution of 2.8 km were interpolated onto the satellite projection and objectively verified against observations using Root Mean Square Error (RMSE), correlation coefficient (CORR) and Fractions Skill Score (FSS) values. Naturally, the extrapolation of cloud fields yielded an approximately 25% lower RMSE, 20% higher CORR and 15% higher FSS at the beginning of the second forecast hour compared to the NWP model forecasts. On the other hand, comparable scores were observed for the third hour, whereas the NWP forecasts outperformed the extrapolation by 10% for RMSE, 15% for CORR and up to 15% for FSS during the fourth forecast hour, and by 15% for RMSE, 27% for CORR and up to 15% for FSS during the fifth forecast hour. The analysis was completed by a verification of the precipitation forecasts, yielding approximately 8% higher RMSE, 15% higher CORR and up to 45% higher FSS when the NWP model simulation is used compared to the extrapolation for the first hour. Both methods yielded an unsatisfactory level of precipitation forecast accuracy from the fourth forecast hour onward.
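    The Fractions Skill Score compares event fractions in spatial neighborhoods of the two binarized fields; a compact generic implementation using a cumulative-sum box filter, not the study's code:

```python
import numpy as np

def neighborhood_fractions(binary, n):
    """Event fraction in every n-by-n window, via 2-D cumulative sums."""
    cs = np.pad(binary.astype(float), ((1, 0), (1, 0))).cumsum(0).cumsum(1)
    return (cs[n:, n:] - cs[:-n, n:] - cs[n:, :-n] + cs[:-n, :-n]) / n**2

def fss(fcst, obs, thresh, n):
    """Fractions Skill Score: 1 for a perfect forecast, 0 for no skill."""
    pf = neighborhood_fractions(fcst >= thresh, n)
    po = neighborhood_fractions(obs >= thresh, n)
    mse = np.mean((pf - po) ** 2)
    mse_ref = np.mean(pf ** 2) + np.mean(po ** 2)
    return 1.0 - mse / mse_ref if mse_ref > 0 else np.nan
```

    A forecast identical to the observations scores 1; two fields whose events never share a neighborhood score 0.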

  11. ICE CONTROL - Towards optimizing wind energy production during icing events

    NASA Astrophysics Data System (ADS)

    Dorninger, Manfred; Strauss, Lukas; Serafin, Stefano; Beck, Alexander; Wittmann, Christoph; Weidle, Florian; Meier, Florian; Bourgeois, Saskia; Cattin, René; Burchhart, Thomas; Fink, Martin

    2017-04-01

    Forecasts of wind power production loss caused by icing weather conditions are produced by a chain of physical models. The model chain consists of a numerical weather prediction model, an icing model and a production loss model. Each element of the model chain is affected by significant uncertainty, which can be quantified using targeted observations and a probabilistic forecasting approach. In this contribution, we present preliminary results from the recently launched project ICE CONTROL, an Austrian research initiative on measurements, probabilistic forecasting, and verification of icing on wind turbine blades. ICE CONTROL includes an experimental field phase, consisting of measurement campaigns in a wind park in Rhineland-Palatinate, Germany, in the winters 2016/17 and 2017/18. Instruments deployed during the campaigns consist of a conventional icing detector on the turbine hub and newly devised ice sensors (eologix Sensor System) on the turbine blades, as well as meteorological sensors for wind, temperature, humidity, visibility, and precipitation type and spectra. Liquid water content and spectral characteristics of super-cooled water droplets are measured using a Fog Monitor FM-120. Three cameras document the icing conditions on the instruments and on the blades. Different modelling approaches are used to quantify the components of the model-chain uncertainties. The uncertainty related to the initial conditions of the weather prediction is evaluated using the existing global ensemble prediction system (EPS) of the European Centre for Medium-Range Weather Forecasts (ECMWF). Furthermore, observation system experiments are conducted with the AROME model and its 3D-Var data assimilation to investigate the impact of additional observations (such as Mode-S aircraft data, SCADA data and MSG cloud mask initialization) on the numerical icing forecast. 
The uncertainty related to model formulation is estimated from multi-physics ensembles based on the Weather Research and Forecasting model (WRF) by perturbing parameters in the physical parameterization schemes. In addition, uncertainties of the icing model and of its adaptations to the rotating turbine blade are addressed. The model forecasts combined with the suite of instruments and their measurements make it possible to conduct a step-wise verification of all the components of the model chain - a novel aspect compared to similar ongoing and completed forecasting projects.

  12. NCEP HYSPLIT SMOKE & DUST Verification. NOAA/NWS/NCEP/EMC

    Science.gov Websites

    Monthly verification pages (April through December, plus summer averages) comparing PROD and PARA runs for Summer 2013 over CA/MX, Hawaii, and all regions, with selectable averaging period and forecast hour.

  13. Soundscapes

    DTIC Science & Technology

    2014-09-30

    Soundscapes ...global oceanographic models to provide hindcasts, nowcasts, and forecasts of the time-evolving soundscape. In terms of the types of sound sources, we...other types of sources. APPROACH: The research has two principal thrusts: 1) the modeling of the soundscape, and 2) verification using datasets that

  14. A comparison of breeding and ensemble transform vectors for global ensemble generation

    NASA Astrophysics Data System (ADS)

    Deng, Guo; Tian, Hua; Li, Xiaoli; Chen, Jing; Gong, Jiandong; Jiao, Meiyan

    2012-02-01

    To compare initial perturbation techniques using breeding vectors and ensemble transform vectors, three ensemble prediction systems using both initial perturbation methods but with different ensemble member sizes, based on the spectral model T213/L31, are constructed at the National Meteorological Center, China Meteorological Administration (NMC/CMA). A series of ensemble verification scores, such as forecast skill of the ensemble mean, ensemble resolution, and ensemble reliability, are introduced to identify the most important attributes of ensemble forecast systems. The results indicate that the ensemble transform technique is superior to the breeding vector method in light of the anomaly correlation coefficient (ACC), a deterministic attribute of the ensemble mean; the root-mean-square error (RMSE) and spread, which are probabilistic attributes; and the continuous ranked probability score (CRPS) and its decomposition. The advantage of the ensemble transform approach is attributed to the orthogonality among its ensemble perturbations as well as its consistency with the data assimilation system. Therefore, this study may serve as a reference for configuring the best ensemble prediction system to be used in operations.
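    The CRPS for a scalar ensemble forecast can be estimated directly from the members; a sketch of the standard sample estimator, not the study's code:

```python
import numpy as np

def crps_ensemble(members, obs):
    """Sample-based CRPS for one scalar verification and a K-member
    ensemble: E|X - y| - 0.5 E|X - X'|."""
    x = np.asarray(members, float)
    term1 = np.mean(np.abs(x - obs))
    term2 = 0.5 * np.mean(np.abs(x[:, None] - x[None, :]))
    return term1 - term2
```

    For a one-member (deterministic) forecast the spread term vanishes and the CRPS reduces to the absolute error.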

  15. A Decision Support System for effective use of probability forecasts

    NASA Astrophysics Data System (ADS)

    De Kleermaeker, Simone; Verkade, Jan

    2013-04-01

    Often, water management decisions are based on hydrological forecasts. These forecasts, however, are affected by inherent uncertainties. It is increasingly common for forecasting agencies to make explicit estimates of these uncertainties and thus produce probabilistic forecasts. Associated benefits include the decision makers' increased awareness of forecasting uncertainties and the potential for risk-based decision-making. Also, a stricter separation of responsibilities between forecasters and decision makers can be made. However, simply having probabilistic forecasts available is not sufficient to realise the associated benefits. Additional effort is required in areas such as forecast visualisation and communication, decision-making under uncertainty, and forecast verification. Also, the revised separation of responsibilities requires a shift in institutional arrangements and responsibilities. A recent study identified a number of additional issues related to the effective use of probability forecasts. When moving from deterministic to probability forecasting, a dimension is added to an already multi-dimensional problem; this makes it increasingly difficult for forecast users to extract relevant information from a forecast. A second issue is that while probability forecasts provide a necessary ingredient for risk-based decision-making, other ingredients may not be present. For example, in many cases no estimates of flood damage, of costs of management measures, or of damage reduction are available. This paper presents the results of the study, including some suggestions for resolving these issues and the integration of those solutions in a prototype decision support system (DSS). A pathway for further development of the DSS is outlined.
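    The missing "other ingredients" are exactly what a risk-based decision rule needs. The classic cost-loss model makes this concrete (a textbook sketch, not part of the described DSS): take a protective measure when the forecast probability exceeds the cost/loss ratio.

```python
def act_on_forecast(prob_flood, cost, loss_avoided):
    """Cost-loss decision rule: a measure costing `cost` is worthwhile
    when the expected avoided damage prob_flood * loss_avoided
    exceeds it, i.e. when prob_flood > cost / loss_avoided."""
    return prob_flood > cost / loss_avoided

# With a measure costing 10 that avoids a loss of 100, act whenever
# the flood probability exceeds 0.1.
decision_high = act_on_forecast(0.3, cost=10.0, loss_avoided=100.0)
decision_low = act_on_forecast(0.05, cost=10.0, loss_avoided=100.0)
```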

  16. Evaluations of Extended-Range tropical Cyclone Forecasts in the Western North Pacific by using the Ensemble Reforecasts: Preliminary Results

    NASA Astrophysics Data System (ADS)

    Tsai, Hsiao-Chung; Chen, Pang-Cheng; Elsberry, Russell L.

    2017-04-01

    The objective of this study is to evaluate the predictability of extended-range forecasts of tropical cyclones (TCs) in the western North Pacific using reforecasts from the National Centers for Environmental Prediction (NCEP) Global Ensemble Forecast System (GEFS) during 1996-2015, and from the Climate Forecast System (CFS) during 1999-2010. Tsai and Elsberry have demonstrated that an opportunity exists to support hydrological operations by using the extended-range TC formation and track forecasts in the western North Pacific from the ECMWF 32-day ensemble. To demonstrate this potential for the decision-making processes regarding water resource management and hydrological operations in Taiwan's reservoir watershed areas, special attention is given to the skill of the NCEP GEFS and CFS models in predicting the TCs affecting the Taiwan area. The first objective of this study is to analyze the skill of NCEP GEFS and CFS TC forecasts and quantify the forecast uncertainties via verification of categorical binary forecasts and probabilistic forecasts. The second objective is to investigate the relationships among large-scale environmental factors [e.g., El Niño Southern Oscillation (ENSO), Madden-Julian Oscillation (MJO), etc.] and the model forecast errors by using the reforecasts. Preliminary results indicate that the skill of the TC activity forecasts based on the raw forecasts can be further improved if the model biases are minimized by utilizing these reforecasts.
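    Verification of categorical binary forecasts like these is typically based on a 2x2 contingency table; a minimal sketch of three standard measures (the counts are illustrative):

```python
def binary_scores(hits, misses, false_alarms):
    """Common categorical verification measures from a 2x2 contingency
    table: probability of detection, false alarm ratio, and frequency
    bias."""
    pod = hits / (hits + misses)
    far = false_alarms / (hits + false_alarms)
    bias = (hits + false_alarms) / (hits + misses)
    return pod, far, bias

# 8 hits, 2 misses, 4 false alarms.
pod, far, bias = binary_scores(8, 2, 4)
```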

  17. Assessment of wind energy potential in Poland

    NASA Astrophysics Data System (ADS)

    Starosta, Katarzyna; Linkowska, Joanna; Mazur, Andrzej

    2014-05-01

    The aim of the presentation is to show the suitability of using numerical model wind speed forecasts for wind power industry applications in Poland. In accordance with the guidelines of the European Union, the use of wind energy in Poland is rapidly increasing. According to the report of the Energy Regulatory Office from 30 March 2013, the installed capacity of wind power in Poland was 2807 MW from 765 wind power stations. Wind energy is strongly dependent on the meteorological conditions. Based on climatological wind speed data, potential energy zones within the area of Poland have been delineated (H. Lorenc). They are the first criterion for assessing the location of a wind farm. However, for exact monitoring of a given wind farm location, prognostic data from numerical model forecasts are necessary. For practical interpretation and further post-processing, verification of the model data is very important. The Polish Institute of Meteorology and Water Management - National Research Institute (IMWM-NRI) runs an operational model COSMO (Consortium for Small-scale Modelling, version 4.8) using two nested domains at horizontal resolutions of 7 km and 2.8 km. The model produces 36-hour and 78-hour forecasts from 00 UTC, for the 2.8 km and 7 km domain resolutions respectively. Numerical forecasts were compared with observations from 60 SYNOP and 3 TEMP stations in Poland, using VERSUS2 (Unified System Verification Survey 2) and the R package. For every zone, a set of statistical indices (ME, MAE, RMSE) was calculated. Forecast errors for aerological profiles are shown for the Polish TEMP stations at Wrocław, Legionowo and Łeba. The current studies are connected with the COST Action ES1002 WIRE (Weather Intelligence for Renewable Energies).

  18. Toward Improved Land Surface Initialization in Support of Regional WRF Forecasts at the Kenya Meteorological Department

    NASA Technical Reports Server (NTRS)

    Case, Jonathan; Mungai, John; Sakwa, Vincent; Kabuchanga, Eric; Zavodsky, Bradley T.; Limaye, Ashutosh S.

    2014-01-01

    Flooding and drought are two key forecasting challenges for the Kenya Meteorological Department (KMD). Atmospheric processes leading to excessive precipitation and/or prolonged drought can be quite sensitive to the state of the land surface, which interacts with the boundary layer of the atmosphere providing a source of heat and moisture. The development and evolution of precipitation systems are affected by heat and moisture fluxes from the land surface within weakly-sheared environments, such as in the tropics and sub-tropics. These heat and moisture fluxes during the day can be strongly influenced by land cover, vegetation, and soil moisture content. Therefore, it is important to represent the land surface state as accurately as possible in numerical weather prediction models. Enhanced regional modeling capabilities have the potential to improve forecast guidance in support of daily operations and high-end events over east Africa. KMD currently runs a configuration of the Weather Research and Forecasting (WRF) model in real time to support its daily forecasting operations, invoking the Nonhydrostatic Mesoscale Model (NMM) dynamical core. They make use of the National Oceanic and Atmospheric Administration / National Weather Service Science and Training Resource Center's Environmental Modeling System (EMS) to manage and produce the WRF-NMM model runs on a 7-km regional grid over eastern Africa. Two organizations at the National Aeronautics and Space Administration Marshall Space Flight Center in Huntsville, AL, SERVIR and the Short-term Prediction Research and Transition (SPoRT) Center, have established a working partnership with KMD for enhancing its regional modeling capabilities. To accomplish this goal, SPoRT and SERVIR will provide experimental land surface initialization datasets and model verification capabilities to KMD. 
To produce a land-surface initialization more consistent with the resolution of the KMD-WRF runs, the NASA Land Information System (LIS) will be run at a comparable resolution to provide real-time, daily soil initialization data in place of interpolated Global Forecast System soil moisture and temperature data. Additionally, real-time green vegetation fraction data from the Visible Infrared Imaging Radiometer Suite will be incorporated into the KMD-WRF runs, once it becomes publicly available from the National Environmental Satellite Data and Information Service. Finally, model verification capabilities will be transitioned to KMD using the Model Evaluation Tools (MET) package, in order to quantify possible improvements in simulated temperature, moisture and precipitation resulting from the experimental land surface initialization. The transition of these MET tools will enable KMD to monitor model forecast accuracy in near real time. This presentation will highlight preliminary verification results of WRF runs over east Africa using the LIS land surface initialization.

  19. Forecasting of cyanobacterial density in Torrão reservoir using artificial neural networks.

    PubMed

    Torres, Rita; Pereira, Elisa; Vasconcelos, Vítor; Teles, Luís Oliva

    2011-06-01

    The ability of general regression neural networks (GRNN) to forecast the density of cyanobacteria in the Torrão reservoir (Tâmega river, Portugal) over a period of 15 days, based on three years of collected physical and chemical data, was assessed. Several models were developed, and 176 were selected based on their correlation values for the verification series. A time lag of 11 was used, equivalent to one sample (periods of 15 days in the summer and 30 days in the winter). Several combinations of the series were used. Input and output data collected from three depths of the reservoir were applied (surface, euphotic zone limit and bottom). The model that presented the highest average correlation value had correlations of 0.991, 0.843 and 0.978 for the training, verification and test series. This model had the three series independent in time: first the test series, then the verification series and, finally, the training series. Only six input variables were considered significant to the performance of this model: ammonia, phosphates, dissolved oxygen, water temperature, pH and water evaporation, physical and chemical parameters referring to the three depths of the reservoir. These variables are common to the next four best models produced and, although these included other input variables, their performance was not better than that of the selected best model.
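    A GRNN in Specht's formulation is a Gaussian-kernel-weighted average of the training targets; a minimal sketch (the study's actual inputs, lag structure and smoothing parameter are not reproduced here):

```python
import numpy as np

def grnn_predict(X_train, y_train, X_new, sigma=1.0):
    """General regression neural network: predictions are averages of
    training targets, weighted by a Gaussian kernel of the distance
    between the query and each training pattern."""
    X_train = np.asarray(X_train, float)
    y_train = np.asarray(y_train, float)
    X_new = np.atleast_2d(np.asarray(X_new, float))
    d2 = ((X_new[:, None, :] - X_train[None, :, :]) ** 2).sum(-1)
    w = np.exp(-d2 / (2 * sigma ** 2))
    return (w @ y_train) / w.sum(axis=1)

# Two training patterns; a query near the first pattern should
# recover its target, a query midway should average the two.
pred_near = grnn_predict([[0.0], [10.0]], [1.0, 3.0], [[0.0]], sigma=1.0)
pred_mid = grnn_predict([[0.0], [10.0]], [1.0, 3.0], [[5.0]], sigma=1.0)
```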

  20. Canadian Operational Air Quality Forecasting Systems: Status, Recent Progress, and Challenges

    NASA Astrophysics Data System (ADS)

    Pavlovic, Radenko; Davignon, Didier; Ménard, Sylvain; Munoz-Alpizar, Rodrigo; Landry, Hugo; Beaulieu, Paul-André; Gilbert, Samuel; Moran, Michael; Chen, Jack

    2017-04-01

    ECCC's Canadian Meteorological Centre Operations (CMCO) division runs a number of operational air quality (AQ)-related systems that revolve around the Regional Air Quality Deterministic Prediction System (RAQDPS). The RAQDPS generates 48-hour AQ forecasts and outputs hourly concentration fields of O3, PM2.5, NO2, and other pollutants twice daily on a North American domain with 10-km horizontal grid spacing and 80 vertical levels. A closely related AQ forecast system with near-real-time wildfire emissions, known as FireWork, has been run by CMCO during the Canadian wildfire season (April to October) since 2014. This system became operational in June 2016. CMCO's operational AQ forecast systems also benefit from several support systems, such as a statistical post-processing model called UMOS-AQ that is applied to enhance forecast reliability at point locations with AQ monitors. The Regional Deterministic Air Quality Analysis (RDAQA) system has also been connected to the RAQDPS since February 2013, and hourly surface objective analyses are now available for O3, PM2.5, NO2, PM10, SO2 and, indirectly, the Canadian Air Quality Health Index. As of June 2015, another version of the RDAQA has been connected to FireWork (RDAQA-FW). For verification purposes, CMCO developed a third support system called Verification for Air QUality Models (VAQUM), which has a geospatial relational database core and enables continuous monitoring of the AQ forecast systems' performance. Urban environments are particularly subject to air pollution. In order to improve the services offered, ECCC has recently been investing effort in developing a high-resolution air quality prediction capability for urban areas in Canada. In this presentation, a comprehensive description of the ECCC AQ systems will be provided, along with a discussion of AQ system performance. Recent improvements, current challenges, and future directions of the Canadian operational AQ program will also be discussed.

  1. Operational 0-3 h probabilistic quantitative precipitation forecasts: Recent performance and potential enhancements

    NASA Astrophysics Data System (ADS)

    Sokol, Z.; Kitzmiller, D.; Pešice, P.; Guan, S.

    2009-05-01

    The NOAA National Weather Service has maintained an automated, centralized 0-3 h prediction system for probabilistic quantitative precipitation forecasts since 2001. This advective-statistical system (ADSTAT) produces probabilities that rainfall will exceed multiple threshold values up to 50 mm at some location within a 40-km grid box. Operational characteristics and development methods for the system are described. Although development data were stratified by season and time of day, ADSTAT utilizes only a single set of nation-wide equations that relate predictor variables derived from radar reflectivity, lightning, satellite infrared temperatures, and numerical prediction model output to rainfall occurrence. A verification study documented herein showed that the operational ADSTAT reliably models regional variations in the relative frequency of heavy rain events. This was true even in the western United States, where no regional-scale, gridded hourly precipitation data were available during the development period in the 1990s. An effort was recently launched to improve the quality of ADSTAT forecasts by regionalizing the prediction equations and to adapt the model for application in the Czech Republic. We have experimented with incorporating various levels of regional specificity in the probability equations. The geographic localization study showed that in the warm season, regional climate differences and variations in the diurnal temperature cycle have a marked effect on the predictor-predictand relationships, and thus regionalization would lead to better statistical reliability in the forecasts.
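Threshold-exceedance probability forecasts like ADSTAT's are commonly scored for accuracy and skill with the Brier score and the Brier skill score against climatology. The abstract does not give ADSTAT's actual verification code, so this is a generic sketch with invented forecast/observation values:

```python
import numpy as np

def brier_score(probs, outcomes):
    """Mean squared error of probability forecasts against binary outcomes."""
    p = np.asarray(probs, dtype=float)
    o = np.asarray(outcomes, dtype=float)
    return float(np.mean((p - o) ** 2))

def brier_skill_score(probs, outcomes):
    """Skill relative to a constant climatological (base-rate) forecast."""
    o = np.asarray(outcomes, dtype=float)
    bs = brier_score(probs, o)
    bs_clim = brier_score(np.full(o.shape, o.mean()), o)
    return 1.0 - bs / bs_clim

# Invented example: forecast probabilities that rainfall exceeds a
# threshold in a 40-km grid box, and whether it actually did.
probs = [0.1, 0.8, 0.3, 0.9, 0.05, 0.6]
obs = [0, 1, 0, 1, 0, 1]
bs = brier_score(probs, obs)          # lower is better; 0 is perfect
bss = brier_skill_score(probs, obs)   # 1 is perfect; <= 0 means no skill
```

Regionalizing the prediction equations, as proposed in the abstract, would be expected to show up in such scores as improved reliability (smaller conditional bias of the probabilities) in each region.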

  2. Nationwide validation of ensemble streamflow forecasts from the Hydrologic Ensemble Forecast Service (HEFS) of the U.S. National Weather Service

    NASA Astrophysics Data System (ADS)

    Lee, H. S.; Liu, Y.; Ward, J.; Brown, J.; Maestre, A.; Herr, H.; Fresch, M. A.; Wells, E.; Reed, S. M.; Jones, E.

    2017-12-01

    The National Weather Service's (NWS) Office of Water Prediction (OWP) recently launched a nationwide effort to verify streamflow forecasts from the Hydrologic Ensemble Forecast Service (HEFS) for a majority of forecast locations across the 13 River Forecast Centers (RFCs). Known as the HEFS Baseline Validation (BV), the project involves a joint effort between the OWP and the RFCs. It aims to provide a geographically consistent, statistically robust validation, and a benchmark to guide the operational implementation of the HEFS, inform practical applications, such as impact-based decision support services, and to provide an objective framework for evaluating strategic investments in the HEFS. For the BV, HEFS hindcasts are issued once per day on a 12Z cycle for the period of 1985-2015 with a forecast horizon of 30 days. For the first two weeks, the hindcasts are forced with precipitation and temperature ensemble forecasts from the Global Ensemble Forecast System of the National Centers for Environmental Prediction, and by resampled climatology for the remaining period. The HEFS-generated ensemble streamflow hindcasts are verified using the Ensemble Verification System. Skill is assessed relative to streamflow hindcasts generated from NWS' current operational system, namely climatology-based Ensemble Streamflow Prediction. In this presentation, we summarize the results and findings to date.
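Skill of ensemble hindcasts relative to a climatology-based reference, as assessed here against Ensemble Streamflow Prediction, is typically summarized with the continuous ranked probability score (CRPS) and its skill score (CRPSS). The abstract does not specify the exact formulation used by the Ensemble Verification System, so the following is a generic sketch using the standard energy form of the sample CRPS, with invented ensembles:

```python
import numpy as np

def crps_ensemble(members, obs):
    """Sample CRPS in the energy form: E|X - y| - 0.5 * E|X - X'|,
    where X, X' are independent draws from the ensemble."""
    x = np.asarray(members, dtype=float)
    term1 = np.mean(np.abs(x - obs))
    term2 = 0.5 * np.mean(np.abs(x[:, None] - x[None, :]))
    return float(term1 - term2)

def crpss(crps_forecast, crps_reference):
    """CRPS skill score: 1 is perfect, 0 matches the reference, <0 is worse."""
    return 1.0 - crps_forecast / crps_reference

# Invented example: a sharp 5-member streamflow hindcast scored against
# a wider climatology-like reference ensemble, for one observation.
hefs_like = [10.0, 11.0, 12.0, 13.0, 14.0]
clim_like = [2.0, 7.0, 12.0, 17.0, 22.0]
observed = 12.0
skill = crpss(crps_ensemble(hefs_like, observed),
              crps_ensemble(clim_like, observed))
```

Averaging the CRPS over all hindcast dates and locations before forming the skill score is the usual practice for multi-site validations like the one described.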

  3. A Study of Subseasonal Predictability of the Atmospheric Circulation Low-frequency Modes based on SL-AV forecasts

    NASA Astrophysics Data System (ADS)

    Kruglova, Ekaterina; Kulikova, Irina; Khan, Valentina; Tischenko, Vladimir

    2017-04-01

The subseasonal predictability of low-frequency modes and atmospheric circulation regimes is investigated using output from the global semi-Lagrangian (SL-AV) model of the Hydrometcentre of Russia and the Institute of Numerical Mathematics of the Russian Academy of Sciences. Teleconnection indices (AO, WA, EA, NAO, EU, WP, PNA) are used as quantitative characteristics of low-frequency variability to identify zonal and meridional flow regimes, with a focus on their control of the distribution of high-impact weather patterns over Northern Eurasia. The predictability of weekly and monthly averaged indices is estimated by diagnostic verification of forecast and reanalysis data covering the hindcast period, using the quantitative criteria recommended by the WMO. Characteristics of the low-frequency variability are discussed. In particular, the meridional flow regimes are reproduced by SL-AV better in summer than in winter. The model's deterministic (ensemble mean) forecast skill at week 1 (days 1-7) is noticeably better than that of climatological forecasts. The decrease of skill scores at week 2 (days 8-14) and week 3 (days 15-21) is explained by deficiencies in the modeling system and inaccurate initial conditions. A slight improvement of model skill is seen at week 4 (days 22-28), when the state of the atmosphere is determined more by external forcing. The reliability of forecasts of monthly (days 1-30) averaged indices is comparable to that at week 1 (days 1-7). Numerical experiments demonstrated that forecast accuracy can be improved (and thus the limit of practical predictability extended) through a probabilistic approach based on ensemble forecasts. The quality of forecasts of circulation regimes such as blocking is higher than that of zonal flow.

  4. Accuracy of short‐term sea ice drift forecasts using a coupled ice‐ocean model

    PubMed Central

    Zhang, Jinlun

    2015-01-01

    Abstract Arctic sea ice drift forecasts of 6 h–9 days for the summer of 2014 are generated using the Marginal Ice Zone Modeling and Assimilation System (MIZMAS); the model is driven by 6 h atmospheric forecasts from the Climate Forecast System (CFSv2). Forecast ice drift speed is compared to drifting buoys and other observational platforms. Forecast positions are compared with actual positions 24 h–8 days since forecast. Forecast results are further compared to those from the forecasts generated using an ice velocity climatology driven by multiyear integrations of the same model. The results are presented in the context of scheduling the acquisition of high‐resolution images that need to follow buoys or scientific research platforms. RMS errors for ice speed are on the order of 5 km/d for 24–48 h since forecast using the sea ice model compared with 9 km/d using climatology. Predicted buoy position RMS errors are 6.3 km for 24 h and 14 km for 72 h since forecast. Model biases in ice speed and direction can be reduced by adjusting the air drag coefficient and water turning angle, but the adjustments do not affect verification statistics. This suggests that improved atmospheric forecast forcing may further reduce the forecast errors. The model remains skillful for 8 days. Using the forecast model increases the probability of tracking a target drifting in sea ice with a 10 km × 10 km image from 60 to 95% for a 24 h forecast and from 27 to 73% for a 48 h forecast. PMID:27818852
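The RMS statistics quoted above (drift-speed errors in km/day and buoy-position errors in km) follow from elementary definitions. A minimal sketch with hypothetical values, not the MIZMAS verification data:

```python
import numpy as np

def rms_error(forecast, observed):
    """Root-mean-square error of scalar forecasts (e.g., drift speed, km/d)."""
    f = np.asarray(forecast, dtype=float)
    o = np.asarray(observed, dtype=float)
    return float(np.sqrt(np.mean((f - o) ** 2)))

def rms_position_error(dx_km, dy_km):
    """RMS position error from eastward/northward displacement errors (km)."""
    dx = np.asarray(dx_km, dtype=float)
    dy = np.asarray(dy_km, dtype=float)
    return float(np.sqrt(np.mean(dx ** 2 + dy ** 2)))

# Hypothetical forecast vs. buoy-derived drift speeds (km/day).
speed_rmse = rms_error([6.0, 10.0, 4.0, 8.0], [5.0, 8.0, 6.0, 7.0])
# Hypothetical forecast-minus-observed buoy displacements (km).
pos_rmse = rms_position_error([3.0, -1.0], [4.0, 2.0])
```

A position RMS error of this kind maps directly onto the image-scheduling question in the abstract: the smaller it is relative to the image half-width, the higher the probability that a drifting target stays inside the acquired scene.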

  5. Regional Earthquake Likelihood Models: A realm on shaky grounds?

    NASA Astrophysics Data System (ADS)

    Kossobokov, V.

    2005-12-01

Seismology is juvenile, and its statistical tools to date may have a "medieval flavor" for those who hurry to apply the refined language of a highly developed probability theory. To become "quantitatively probabilistic," earthquake forecasts/predictions must be defined with scientific accuracy. Following the most popular objectivist viewpoint on probability, we cannot claim "probabilities" adequate without a long series of "yes/no" forecast/prediction outcomes. Without the "antiquated binary language" of "yes/no" certainty we cannot judge an outcome ("success/failure") and therefore cannot objectively quantify the performance of a forecast/prediction method. Likelihood scoring is one of the delicate tools of statistics, which can be worthless or even misleading when inappropriate probability models are used. This is a basic loophole for the misuse of likelihood, as well as other statistical methods, in practice. The flaw can be avoided by careful verification of generic probability models against the empirical data. This is not an easy task within the framework of the Regional Earthquake Likelihood Models (RELM) methodology, which neither defines the forecast precision nor provides a means to judge ultimate success or failure in specific cases. Hopefully, the RELM group realizes the problem, and its members will do their best to close the hole with an adequate, data-supported choice. Regretfully, this is not the case with the erroneous choice of Gerstenberger et al., who started the public web site with forecasts of expected ground shaking for "tomorrow" (Nature 435, 19 May 2005). Gerstenberger et al. have inverted the critical evidence of their study, i.e., the 15 years of recent seismic record accumulated in just one figure, which suggests rejecting with confidence above 97% "the generic California clustering model" used in the automatic calculations.
As a result, since the date of publication in Nature, the United States Geological Survey website has been delivering to the public, emergency planners, and the media a forecast product that is based on wrong assumptions violating the best-documented earthquake statistics in California, whose accuracy was not investigated, and whose forecasts were not tested in a rigorous way.

  6. A Diagnostics Tool to detect ensemble forecast system anomaly and guide operational decisions

    NASA Astrophysics Data System (ADS)

    Park, G. H.; Srivastava, A.; Shrestha, E.; Thiemann, M.; Day, G. N.; Draijer, S.

    2017-12-01

The hydrologic community is moving toward using ensemble forecasts to take uncertainty into account during the decision-making process. The New York City Department of Environmental Protection (DEP) implements several types of ensemble forecasts in its decision-making process: ensemble products for a statistical model (Hirsch and enhanced Hirsch); the National Weather Service (NWS) Advanced Hydrologic Prediction Service (AHPS) forecasts based on the classical Ensemble Streamflow Prediction (ESP) technique; and the new NWS Hydrologic Ensemble Forecasting Service (HEFS) forecasts. To remove structural error and apply the forecasts to additional forecast points, the DEP post-processes both the AHPS and the HEFS forecasts. These ensemble forecasts produce large quantities of complex data, and drawing conclusions from them is time-consuming and difficult. The complexity of these forecasts also makes it difficult to identify system failures resulting from poor data, missing forecasts, and server breakdowns. To address these issues, we developed a diagnostic tool that summarizes ensemble forecasts and provides additional information such as historical forecast statistics, forecast skill, and model forcing statistics. This additional information highlights the key details that enable operators to evaluate the forecast in real time, dynamically interact with the data, and review additional statistics, if needed, to make better decisions. We used Bokeh, a Python interactive visualization library, and a multi-database management system to create this interactive tool. The tool compiles and stores data into HTML pages that allow operators to readily analyze the data with built-in user-interaction features. This paper will present a brief description of the ensemble forecasts, forecast verification results, and the intended applications of the diagnostic tool.

  7. Assessment of SWE data assimilation for ensemble streamflow predictions

    NASA Astrophysics Data System (ADS)

    Franz, Kristie J.; Hogue, Terri S.; Barik, Muhammad; He, Minxue

    2014-11-01

    An assessment of data assimilation (DA) for Ensemble Streamflow Prediction (ESP) using seasonal water supply hindcasting in the North Fork of the American River Basin (NFARB) and the National Weather Service (NWS) hydrologic forecast models is undertaken. Two parameter sets, one from the California Nevada River Forecast Center (RFC) and one from the Differential Evolution Adaptive Metropolis (DREAM) algorithm, are tested. For each parameter set, hindcasts are generated using initial conditions derived with and without the inclusion of a DA scheme that integrates snow water equivalent (SWE) observations. The DREAM-DA scenario uses an Integrated Uncertainty and Ensemble-based data Assimilation (ICEA) framework that also considers model and parameter uncertainty. Hindcasts are evaluated using deterministic and probabilistic forecast verification metrics. In general, the impact of DA on the skill of the seasonal water supply predictions is mixed. For deterministic (ensemble mean) predictions, the Percent Bias (PBias) is improved with integration of the DA. DREAM-DA and the RFC-DA have the lowest biases and the RFC-DA has the lowest Root Mean Squared Error (RMSE). However, the RFC and DREAM-DA have similar RMSE scores. For the probabilistic predictions, the RFC and DREAM have the highest Continuous Ranked Probability Skill Scores (CRPSS) and the RFC has the best discrimination for low flows. Reliability results are similar between the non-DA and DA tests and the DREAM and DREAM-DA have better reliability than the RFC and RFC-DA for forecast dates February 1 and later. Despite producing improved streamflow simulations in previous studies, the hindcast analysis suggests that the DA method tested may not result in obvious improvements in streamflow forecasts. We advocate that integration of hindcasting and probabilistic metrics provides more rigorous insight on model performance for forecasting applications, such as in this study.
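The deterministic metrics used in this study, Percent Bias (PBias) and RMSE, have standard definitions that can be sketched directly. Sign conventions for PBias vary across the literature; in the sketch below, positive values indicate overestimation, and the data are hypothetical:

```python
import numpy as np

def percent_bias(sim, obs):
    """PBias (%). Positive values indicate overestimation under this
    convention; some authors use the opposite sign."""
    s = np.asarray(sim, dtype=float)
    o = np.asarray(obs, dtype=float)
    return float(100.0 * np.sum(s - o) / np.sum(o))

def rmse(sim, obs):
    """Root-mean-square error of (e.g., ensemble-mean) predictions."""
    s = np.asarray(sim, dtype=float)
    o = np.asarray(obs, dtype=float)
    return float(np.sqrt(np.mean((s - o) ** 2)))

# Hypothetical seasonal water-supply volumes (arbitrary units).
predicted = [410.0, 380.0, 520.0]
observed = [400.0, 400.0, 500.0]
pb = percent_bias(predicted, observed)
err = rmse(predicted, observed)
```

Because PBias can cancel positive and negative errors while RMSE cannot, a hindcast can improve one while leaving the other unchanged, which is consistent with the mixed DA results reported above.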

  8. Examining Atmospheric and Ecological Drivers of Wildfires, Modeling Wildfire Occurrence in the Southwest United States, and Using Atmospheric Sounding Observations to Verify National Weather Service Spot Forecasts

    NASA Astrophysics Data System (ADS)

    Nauslar, Nicholas J.

This dissertation comprises three papers that all pertain to wildland fire applications. The first paper performs a verification analysis of mixing height, transport wind, and Haines Index spot forecasts issued by the National Weather Service across the United States. The final two papers, which are closely related, examine atmospheric and ecological drivers of wildfire for the Southwest Area (SWA) (Arizona, New Mexico, west Texas, and the Oklahoma panhandle) to better equip operational fire meteorologists and managers to make informed decisions on wildfire potential in this region. The verification analysis utilizes NWS spot forecasts of mixing height, transport winds, and Haines Index from 2009-2013 issued for locations within 50 km of an upper-air sounding site and valid for the day of the fire event. Mixing height was calculated from the 0000 UTC sounding via the Stull, Holzworth, and Richardson methods. Transport wind speeds were determined by averaging the wind speed through the boundary layer as determined by the three mixing height methods from the 0000 UTC sounding. Haines Index was calculated at low, mid, and high elevation based on the elevation of the sounding and spot forecast locations. Mixing height forecasts exhibited large mean absolute errors and were biased toward overforecasting. Forecasts of transport wind speeds and Haines Index outperformed mixing height forecasts, with smaller errors relative to their respective means. The rainfall and lightning associated with the North American Monsoon (NAM) can vary greatly intra- and inter-annually and have a large impact on wildfire activity across the SWA by igniting or suppressing wildfires. NAM onset thresholds and subsequent onset dates are determined for the SWA and for each Predictive Service Area (PSA), the sub-regions used by operational fire meteorologists to predict wildfire potential within the SWA, for April through September of 1995-2013.
Thresholds on the number of wildfires and large wildfires were used to identify days or periods of increased wildfire activity for each PSA and for the SWA. Self-organizing maps (SOMs) of 500 and 700 hPa geopotential heights and precipitable water were implemented to identify atmospheric patterns contributing to NAM onset and to busy days/periods for each PSA and the SWA. The resulting SOM types also captured the transitions into, during, and out of the NAM. Northward and eastward displacements of the subtropical ridge (i.e., the four-corners high) over the SWA were associated with NAM onset, while a suppressed subtropical ridge and breakdown-of-the-ridge map types over the SWA were associated with increased wildfire activity. We implemented boosted regression trees (BRT) to model wildfire occurrence for all wildfires and for large wildfires of different types (i.e., lightning-caused, human-caused) across the SWA by PSA. BRT models for all wildfires demonstrated relatively small mean and mean absolute errors and showed better predictability on days with wildfires. Cross-validated accuracy assessments for large wildfires demonstrated the ability to discriminate between large-wildfire and non-large-wildfire days across all wildfire types. Measurements describing fuel conditions (i.e., 100- and 1000-hour dead fuel moisture, energy release component) were the most important predictors when considering all wildfire types and sizes. However, a combination of fuel and atmospheric predictors (i.e., lightning, temperature) proved most predictive for large wildfire occurrence, and the number of relevant predictors increased for large wildfires, indicating that more conditions need to align to support large wildfires.

  9. Verification of an ensemble prediction system for storm surge forecast in the Adriatic Sea

    NASA Astrophysics Data System (ADS)

    Mel, Riccardo; Lionello, Piero

    2014-12-01

In the Adriatic Sea, storm surges present a significant threat to Venice and to the flat coastal areas of the northern coast of the basin. Sea level forecast is of paramount importance for the management of daily activities and for operating the movable barriers that are presently being built for the protection of the city. In this paper, an EPS (ensemble prediction system) for operational forecasting of storm surge in the northern Adriatic Sea is presented and applied to a 3-month-long period (October-December 2010). The sea level EPS is based on the HYPSE (hydrostatic Padua Sea elevation) model, which is a standard single-layer nonlinear shallow water model, whose forcings (mean sea level pressure and surface wind fields) are provided by the ensemble members of the ECMWF (European Center for Medium-Range Weather Forecasts) EPS. Results are verified against observations at five tide gauges located along the Croatian and Italian coasts of the Adriatic Sea. Forecast uncertainty increases with the predicted value of the storm surge and with the forecast lead time. The EMF (ensemble mean forecast) provided by the EPS has an rms (root mean square) error lower than the DF (deterministic forecast), especially for short (up to 3 days) lead times. Uncertainty for short lead times of the forecast and for small storm surges is mainly caused by uncertainty in the initial condition of the hydrodynamical model. Uncertainty for large lead times and large storm surges is mainly caused by uncertainty in the meteorological forcings. The EPS spread increases with the rms error of the forecast. For large lead times the EPS spread and the forecast error substantially coincide. However, the EPS spread in this study, which does not account for uncertainty in the initial condition, underestimates the error during the early part of the forecast and for small storm surge values. On the contrary, it overestimates the rms error for large surge values.
The PF (probability forecast) of the EPS has a clear skill in predicting the actual probability distribution of sea level, and it outperforms simple "dressed" PF methods. A probability estimate based on the single DF is shown to be inadequate. However, a PF obtained with a prescribed Gaussian distribution and centered on the DF value performs very similarly to the EPS-based PF.
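The spread-skill comparison described above (EPS spread approaching the rms error of the ensemble mean at long lead times) is a standard diagnostic that can be sketched in a few lines. This is an illustrative sketch with made-up sea level numbers, not the HYPSE/ECMWF verification code:

```python
import numpy as np

def spread_skill(ensemble, observations):
    """Return (mean ensemble spread, RMS error of the ensemble-mean forecast).
    ensemble: shape (n_forecasts, n_members); observations: (n_forecasts,)."""
    ens = np.asarray(ensemble, dtype=float)
    obs = np.asarray(observations, dtype=float)
    emf = ens.mean(axis=1)                           # ensemble mean forecast (EMF)
    rmse = np.sqrt(np.mean((emf - obs) ** 2))
    spread = np.sqrt(np.mean(ens.var(axis=1, ddof=1)))
    return float(spread), float(rmse)

# Invented sea level values (cm): 3 forecast dates, 4 members each.
ens = [[40.0, 50.0, 60.0, 50.0],
       [10.0, 20.0, 15.0, 15.0],
       [80.0, 95.0, 90.0, 85.0]]
obs = [55.0, 12.0, 88.0]
spread, err = spread_skill(ens, obs)   # compare spread against err by lead time
```

In a well-calibrated EPS the two quantities roughly coincide when binned by lead time; spread below (above) the rms error indicates under- (over-) dispersion, matching the under- and overestimation regimes reported in the abstract.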

  10. Applications systems verification and transfer project. Volume 1: Operational applications of satellite snow cover observations: Executive summary. [usefulness of satellite snow-cover data for water yield prediction

    NASA Technical Reports Server (NTRS)

    Rango, A.

    1981-01-01

    Both LANDSAT and NOAA satellite data were used in improving snowmelt runoff forecasts. When the satellite snow cover data were tested in both empirical seasonal runoff estimation and short term modeling approaches, a definite potential for reducing forecast error was evident. A cost benefit analysis run in conjunction with the snow mapping indicated a $36.5 million annual benefit accruing from a one percent improvement in forecast accuracy using the snow cover data for the western United States. The annual cost of employing the system would be $505,000. The snow mapping has proven that satellite snow cover data can be used to reduce snowmelt runoff forecast error in a cost effective manner once all operational satellite data are available within 72 hours after acquisition. Executive summaries of the individual snow mapping projects are presented.

  11. Predictability of the Arctic sea ice edge

    NASA Astrophysics Data System (ADS)

    Goessling, H. F.; Tietsche, S.; Day, J. J.; Hawkins, E.; Jung, T.

    2016-02-01

    Skillful sea ice forecasts from days to years ahead are becoming increasingly important for the operation and planning of human activities in the Arctic. Here we analyze the potential predictability of the Arctic sea ice edge in six climate models. We introduce the integrated ice-edge error (IIEE), a user-relevant verification metric defined as the area where the forecast and the "truth" disagree on the ice concentration being above or below 15%. The IIEE lends itself to decomposition into an absolute extent error, corresponding to the common sea ice extent error, and a misplacement error. We find that the often-neglected misplacement error makes up more than half of the climatological IIEE. In idealized forecast ensembles initialized on 1 July, the IIEE grows faster than the absolute extent error. This means that the Arctic sea ice edge is less predictable than sea ice extent, particularly in September, with implications for the potential skill of end-user relevant forecasts.
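The IIEE and its decomposition can be computed directly from gridded concentration fields. The following is a minimal sketch implementing the definition given above (the function name and test fields are illustrative):

```python
import numpy as np

def iiee(sic_forecast, sic_truth, cell_area, threshold=0.15):
    """Integrated ice-edge error and its decomposition.
    sic_*: sea-ice concentration fields in [0, 1]; cell_area: grid-cell areas.
    Returns (IIEE, absolute extent error, misplacement error)."""
    fc = np.asarray(sic_forecast) > threshold
    tr = np.asarray(sic_truth) > threshold
    a = np.asarray(cell_area, dtype=float)
    over = float(np.sum(a[fc & ~tr]))    # ice forecast where none is observed
    under = float(np.sum(a[~fc & tr]))   # ice observed but not forecast
    total = over + under                 # IIEE
    aee = abs(over - under)              # absolute (sea ice) extent error
    return total, aee, total - aee       # misplacement error = 2 * min(over, under)
```

A displaced but correctly sized ice edge gives equal over- and underestimated areas, so the extent error vanishes while the IIEE does not; this is exactly the misplacement component the abstract says is often neglected.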

  12. Development of a High Resolution Weather Forecast Model for Mesoamerica Using the NASA Nebula Cloud Computing Environment

    NASA Technical Reports Server (NTRS)

    Molthan, Andrew L.; Case, Jonathan L.; Venner, Jason; Moreno-Madrinan, Max. J.; Delgado, Francisco

    2012-01-01

Over the past two years, scientists in the Earth Science Office at NASA's Marshall Space Flight Center (MSFC) have explored opportunities to apply cloud computing concepts to support near real-time weather forecast modeling via the Weather Research and Forecasting (WRF) model. Collaborators at NASA's Short-term Prediction Research and Transition (SPoRT) Center and the SERVIR project at Marshall Space Flight Center have established a framework that provides high resolution, daily weather forecasts over Mesoamerica through use of the NASA Nebula Cloud Computing Platform at Ames Research Center. Supported by experts at Ames, staff at SPoRT and SERVIR have established daily forecasts complete with web graphics and a user interface that allows SERVIR partners access to high resolution depictions of weather in the next 48 hours, useful for monitoring and mitigating meteorological hazards such as thunderstorms, heavy precipitation, and tropical weather that can lead to other disasters such as flooding and landslides. This presentation will describe the framework for establishing and providing WRF forecasts, example applications of output provided via the SERVIR web portal, and early results of forecast model verification against available surface- and satellite-based observations.

  13. Development of a High Resolution Weather Forecast Model for Mesoamerica Using the NASA Nebula Cloud Computing Environment

    NASA Astrophysics Data System (ADS)

    Molthan, A.; Case, J.; Venner, J.; Moreno-Madriñán, M. J.; Delgado, F.

    2012-12-01

    Over the past two years, scientists in the Earth Science Office at NASA's Marshall Space Flight Center (MSFC) have explored opportunities to apply cloud computing concepts to support near real-time weather forecast modeling via the Weather Research and Forecasting (WRF) model. Collaborators at NASA's Short-term Prediction Research and Transition (SPoRT) Center and the SERVIR project at Marshall Space Flight Center have established a framework that provides high resolution, daily weather forecasts over Mesoamerica through use of the NASA Nebula Cloud Computing Platform at Ames Research Center. Supported by experts at Ames, staff at SPoRT and SERVIR have established daily forecasts complete with web graphics and a user interface that allows SERVIR partners access to high resolution depictions of weather in the next 48 hours, useful for monitoring and mitigating meteorological hazards such as thunderstorms, heavy precipitation, and tropical weather that can lead to other disasters such as flooding and landslides. This presentation will describe the framework for establishing and providing WRF forecasts, example applications of output provided via the SERVIR web portal, and early results of forecast model verification against available surface- and satellite-based observations.

  14. NWS Marine Contacts

    Science.gov Websites

Marine forecast verification contact: Richard May, 301-427-9378 (fax 301-713-1520), richard.may@noaa.gov (Coastal Weather, Great Lakes, Ice). For questions of an operational nature relating to near-shore and coastal forecasts, contact your local National Weather Service office.

  15. Assessing the performance of eight real-time updating models and procedures for the Brosna River

    NASA Astrophysics Data System (ADS)

    Goswami, M.; O'Connor, K. M.; Bhattarai, K. P.; Shamseldin, A. Y.

    2005-10-01

The flow forecasting performance of eight updating models, incorporated in the Galway River Flow Modelling and Forecasting System (GFMFS), was assessed using daily data (rainfall, evaporation and discharge) of the Irish Brosna catchment (1207 km2), considering their one- to six-day lead-time discharge forecasts. The Perfect Forecast of Input over the Forecast Lead-time scenario was adopted, where required, in place of actual rainfall forecasts. The eight updating models were: (i) the standard linear Auto-Regressive (AR) model, applied to the forecast errors (residuals) of a simulation (non-updating) rainfall-runoff model; (ii) the Neural Network Updating (NNU) model, also using such residuals as input; (iii) the Linear Transfer Function (LTF) model, applied to the simulated and the recently observed discharges; (iv) the Non-linear Auto-Regressive eXogenous-Input Model (NARXM), also a neural network-type structure, but having wide options of using recently observed values of one or more of the three data series, together with non-updated simulated outflows, as inputs; (v) the Parametric Simple Linear Model (PSLM), of LTF-type, using recent rainfall and observed discharge data; (vi) the Parametric Linear Perturbation Model (PLPM), also of LTF-type, using recent rainfall and observed discharge data; (vii) n-AR, an AR model applied to the observed discharge series only, as a naïve updating model; and (viii) n-NARXM, a naïve form of the NARXM, using only the observed discharge data, excluding exogenous inputs. The five GFMFS simulation (non-updating) models used were the non-parametric and parametric forms of the Simple Linear Model and of the Linear Perturbation Model, the Linearly-Varying Gain Factor Model, the Artificial Neural Network Model, and the conceptual Soil Moisture Accounting and Routing (SMAR) model.
As the SMAR model performance was found to be the best among these models, in terms of the Nash-Sutcliffe R2 value, both in calibration and in verification, the simulated outflows of this model only were selected for the subsequent exercise of producing updated discharge forecasts. All the eight forms of updating models for producing lead-time discharge forecasts were found to be capable of producing relatively good lead-1 (1-day ahead) forecasts, with R2 values almost 90% or above. However, for higher lead time forecasts, only three updating models, viz., NARXM, LTF, and NNU, were found to be suitable, with lead-6 values of R2 about 90% or higher. Graphical comparisons were made of the lead-time forecasts for the two largest floods, one in the calibration period and the other in the verification period.
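The Nash-Sutcliffe R2 used above to rank the simulation and updating models has a standard definition; a minimal sketch with hypothetical discharge values follows:

```python
import numpy as np

def nash_sutcliffe(simulated, observed):
    """Nash-Sutcliffe efficiency: 1 is a perfect fit, 0 means no better
    than forecasting the observed mean, and negative values are worse."""
    s = np.asarray(simulated, dtype=float)
    o = np.asarray(observed, dtype=float)
    return float(1.0 - np.sum((o - s) ** 2) / np.sum((o - o.mean()) ** 2))

# Hypothetical daily discharges (m3/s): lead-1 updated forecasts vs. observations.
r2 = nash_sutcliffe([102.0, 95.0, 110.0, 98.0], [100.0, 97.0, 108.0, 99.0])
```

Because the denominator is the variance of the observations about their mean, the "almost 90% or above" values quoted above correspond to forecast error variance under roughly one tenth of the observed flow variance.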

  16. How well should probabilistic seismic hazard maps work?

    NASA Astrophysics Data System (ADS)

    Vanneste, K.; Stein, S.; Camelbeeck, T.; Vleminckx, B.

    2016-12-01

Recent large earthquakes that gave rise to shaking much stronger than shown in earthquake hazard maps have stimulated discussion about how well these maps forecast future shaking. These discussions have brought home the fact that although the maps are designed to achieve certain goals, we know little about how well they actually perform. As for any other forecast, this question involves verification and validation. Verification involves assessing how well the algorithm used to produce hazard maps implements the conceptual PSHA model ("have we built the model right?"). Validation asks how well the model forecasts the shaking that actually occurs ("have we built the right model?"). We explore the verification issue by simulating the shaking history of an area with an assumed distribution of earthquakes, frequency-magnitude relation, temporal occurrence model, and ground-motion prediction equation. We compare the "observed" shaking at many sites over time to that predicted by a hazard map generated for the same set of parameters. PSHA predicts that the fraction of sites at which shaking will exceed the mapped value is p = 1 - exp(-t/T), where t is the duration of observations and T is the map's return period. This implies that shaking in large earthquakes is typically greater than shown on hazard maps, as has occurred in a number of cases. A large number of simulated earthquake histories yield distributions of shaking consistent with this forecast, with a scatter about this value that decreases as t/T increases. The median results are somewhat lower than predicted for small values of t/T and approach the predicted value for larger values of t/T. Hence, the algorithm appears to be internally consistent and can be regarded as verified for this set of simulations.
Validation is more complicated because a real observed earthquake history can yield a fractional exceedance significantly higher or lower than that predicted while still being consistent with the hazard map in question. As a result, given that in the real world we have only a single sample, it is hard to assess whether a misfit between a map and observations arises by chance or reflects a biased map.
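The predicted fractional exceedance can be checked with a small Monte Carlo experiment: if exceedances at each site follow a Poisson process with rate 1/T, the fraction of sites with at least one exceedance in t years should approach p = 1 - exp(-t/T). The following sketch uses illustrative parameter values (not the paper's actual simulation setup):

```python
import numpy as np

# Illustrative parameters: a 475-year return-period map observed for 50 years.
T = 475.0        # map return period (years)
t = 50.0         # duration of observations (years)
n_sites = 200_000

# Under a Poisson occurrence model, the number of exceedances at a site
# over t years is Poisson-distributed with mean t/T.
rng = np.random.default_rng(0)
counts = rng.poisson(t / T, size=n_sites)

observed_fraction = float(np.mean(counts > 0))
predicted_fraction = float(1.0 - np.exp(-t / T))   # p = 1 - exp(-t/T)
# The two fractions should agree to within sampling noise.
```

The scatter of `observed_fraction` across repeated histories shrinks as t/T grows, which mirrors the verification result described above; the validation problem is that the real world provides only a single such history.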

  17. Real-Time Kennedy Space Center and Cape Canaveral Air Force Station High-Resolution Model Implementation and Verification

    NASA Technical Reports Server (NTRS)

    Shafer, Jaclyn; Watson, Leela R.

    2015-01-01

    NASA's Launch Services Program, Ground Systems Development and Operations, Space Launch System and other programs at Kennedy Space Center (KSC) and Cape Canaveral Air Force Station (CCAFS) use the daily and weekly weather forecasts issued by the 45th Weather Squadron (45 WS) as decision tools for their day-to-day and launch operations on the Eastern Range (ER). Examples include determining whether to limit activities such as vehicle transport to the launch pad, protect people, structures or exposed launch vehicles given a threat of severe weather, or reschedule other critical operations. The 45 WS uses numerical weather prediction models as guidance for these weather forecasts, particularly the Air Force Weather Agency (AFWA) 1.67 km Weather Research and Forecasting (WRF) model. Considering the 45 WS forecasters' and Launch Weather Officers' (LWOs') extensive use of the AFWA model, the 45 WS proposed a task at the September 2013 Applied Meteorology Unit (AMU) Tasking Meeting requesting that the AMU verify this model. Due to the lack of archived model data available from AFWA, verification was not yet possible. Instead, the AMU proposed to implement and verify the performance of an ER version of the high-resolution WRF Environmental Modeling System (EMS) model configured by the AMU (Watson 2013) in real time. Implementing a real-time version of the ER WRF-EMS would generate a larger database of model output for determining model performance than the previous AMU task, and would allow the AMU more control over, and access to, the model output archive. The tasking group agreed to this proposal; therefore, the AMU implemented the WRF-EMS model on the second of two NASA AMU modeling clusters. The AMU also calculated verification statistics to determine model performance compared to observational data. 
Finally, the AMU made the model output available on the AMU Advanced Weather Interactive Processing System II (AWIPS II) servers, which allows the 45 WS and AMU staff to customize the model output display on the AMU and Range Weather Operations (RWO) AWIPS II client computers and conduct real-time subjective analyses.

  18. Dust storm events over Delhi: verification of dust AOD forecasts with satellite and surface observations

    NASA Astrophysics Data System (ADS)

    Singh, Aditi; Iyengar, Gopal R.; George, John P.

    2016-05-01

    The Thar Desert, located in the northwestern part of India, is considered one of the major dust sources. Dust storms originating in the Thar Desert during the pre-monsoon season affect large parts of the Indo-Gangetic (IG) plains. High dust loading causes deterioration of ambient air quality and degradation in visibility. The present study focuses on the identification of dust events and the verification of dust event forecasts over Delhi and the western IG plains during the pre-monsoon season of 2015. Three dust events were identified over Delhi during the study period. For all the selected days, Terra-MODIS AOD at 550 nm is found to be close to 1.0, while AURA-OMI AI shows high values. Dust AOD forecasts from the NCMRWF Unified Model (NCUM) for the three selected dust events are verified against satellite (MODIS) and ground-based observations (AERONET). Comparison of observed AODs at 550 nm from MODIS with NCUM-predicted AODs reveals that NCUM is able to predict the spatial and temporal distribution of dust AOD in these cases. Good correlation (~0.67) is obtained between the NCUM-predicted dust AODs and location-specific observations available from AERONET. The model under-predicted the AODs compared to the AERONET observations, mainly because the model accounts only for dust and no anthropogenic sources are considered. The results of the present study emphasize the need for a more realistic representation of local dust emission, of both natural and anthropogenic origin, in the model to improve NCUM dust forecasts during dust events.
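    The kind of model-versus-observation comparison reported here (correlation, under-prediction bias) can be sketched as follows; the collocated AOD values below are invented for illustration and are not the study's data:

```python
import numpy as np

# Hypothetical collocated AOD pairs at 550 nm (AERONET obs vs. model);
# illustrative values only, chosen to mimic a correlated under-prediction.
obs   = np.array([0.85, 1.10, 0.95, 1.30, 0.70, 1.05])
model = np.array([0.60, 0.90, 0.70, 1.00, 0.55, 0.80])

corr = np.corrcoef(model, obs)[0, 1]   # association between model and obs
mean_bias = np.mean(model - obs)       # negative => model under-predicts
rmse = np.sqrt(np.mean((model - obs) ** 2))
```

    A high correlation with a negative mean bias, as in this toy sample, is the signature described in the abstract: the model tracks the temporal variation of AOD but systematically under-predicts its magnitude.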

  19. An experimental system for flood risk forecasting and monitoring at global scale

    NASA Astrophysics Data System (ADS)

    Dottori, Francesco; Alfieri, Lorenzo; Kalas, Milan; Lorini, Valerio; Salamon, Peter

    2017-04-01

    Global flood forecasting and monitoring systems are nowadays a reality and are being applied by a wide range of users and practitioners in disaster risk management. Furthermore, there is an increasing demand from users to integrate flood early warning systems with risk-based forecasting, combining streamflow estimations with expected inundated areas and flood impacts. Finally, emerging technologies such as crowdsourcing and social media monitoring can play a crucial role in flood disaster management and preparedness. Here, we present some recent advances in an experimental procedure for near-real-time flood mapping and impact assessment. The procedure translates in near real time the daily streamflow forecasts issued by the Global Flood Awareness System (GloFAS) into event-based flood hazard maps, which are then combined with exposure and vulnerability information at the global scale to derive risk forecasts. Impacts of the forecasted flood events are evaluated in terms of flood-prone areas, potential economic damage, and affected population, infrastructure and cities. To increase the reliability of our forecasts we propose the integration of model-based estimations with an innovative methodology for social media monitoring, which allows for real-time verification and correction of impact forecasts. Finally, we present the results of preliminary tests which show the potential of the proposed procedure in supporting emergency response and management.

  20. A Comparative Verification of Forecasts from Two Operational Solar Wind Models (Postprint)

    DTIC Science & Technology

    2012-02-08

    much confidence to place on predicted parameters. Cost /benefit information is provided to administrators who decide to sustain or replace existing...magnetic field magnitude and three components of the magnetic field vector in the geocentric solar magnetospheric (GSM) coordinate system at each hour of

  1. Multiresolution comparison of precipitation datasets for large-scale models

    NASA Astrophysics Data System (ADS)

    Chun, K. P.; Sapriza Azuri, G.; Davison, B.; DeBeer, C. M.; Wheater, H. S.

    2014-12-01

    Gridded precipitation datasets are crucial for driving the large-scale models used in weather forecasting and climate research. However, the quality of precipitation products is usually validated individually. Comparing gridded precipitation products alongside ground observations provides another avenue for investigating how precipitation uncertainty affects the performance of large-scale models. In this study, using data from a set of precipitation gauges over British Columbia and Alberta, we evaluate several widely used North American gridded products, including the Canadian Gridded Precipitation Anomalies (CANGRD), the National Centers for Environmental Prediction (NCEP) reanalysis, the Water and Global Change (WATCH) project, the thin-plate spline smoothing algorithm (ANUSPLIN) and the Canadian Precipitation Analysis (CaPA). Based on verification criteria at various temporal and spatial scales, the results provide an assessment of possible applications for the various precipitation datasets. For long-term climate variation studies (~100 years), CANGRD, NCEP, WATCH and ANUSPLIN have different comparative advantages in terms of resolution and accuracy. For synoptic and mesoscale precipitation patterns, CaPA provides appealing performance in terms of spatial coherence. In addition to the product comparison, various downscaling methods are also surveyed to explore new verification and bias-reduction methods for improving gridded precipitation outputs for large-scale models.

  2. Nowcasting Aircraft Icing Conditions in Moscow Region Using Geostationary Meteorological Satellite Data

    NASA Astrophysics Data System (ADS)

    Barabanova, Olga

    2013-04-01

    Nowadays the Main Aviation Meteorological Centre in Moscow (MAMC) provides forecasts of icing conditions at Moscow Region airports using information from the surface observation network, weather radars and atmospheric sounding. Unfortunately, satellite information is not used properly in aviation meteorological offices in the Moscow Region: weather forecasters deal with satellite images of cloudiness only. The main forecasters of MAMC realise that it is necessary to employ meteorological satellite numerical data from different channels in aviation forecasting, and especially in nowcasting. An algorithm for nowcasting aircraft in-flight icing conditions has been developed using data from the geostationary meteorological satellites "Meteosat-7" and "Meteosat-9". The algorithm is based on brightness temperature differences, whose calculation helps to discriminate clouds with supercooled large drops, where severe icing conditions are most likely. Due to the lack of visible channel data, the satellite icing detection methods are less accurate at night. In addition, the method is limited by optically thick ice clouds, for which it is not possible to determine the extent to which supercooled large drops exist within the underlying clouds. However, we determined that most of the optically thick cases are associated with convection or mid-latitude cyclones, and these will nearly always have a layer in which supercooled large drops exist, posing an icing threat. This product is created hourly for the Moscow airspace and marks zones with moderate or severe icing hazards. The results were compared with output of the mesoscale numerical atmospheric model COSMO-RU. Verification of the algorithm's results using aircraft pilot reports shows that it is a good instrument for operational practice in aviation meteorological offices in the Moscow Region. 
The satellite-based algorithms presented here can be used in real time to diagnose areas of icing for pilots to avoid.
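    A brightness-temperature-difference test of this kind can be sketched as follows; the channel pair and threshold values are illustrative assumptions, not the operational MAMC settings:

```python
import numpy as np

def icing_mask(bt_108, bt_039, t_cold=233.15, t_warm=273.15, btd_min=8.0):
    """Flag pixels where supercooled, water-topped cloud (and hence an
    icing risk) is likely.  Illustrative logic only: a pixel is flagged
    when its 10.8-um brightness temperature lies in the supercooled
    range and the 3.9-minus-10.8-um difference exceeds a value typical
    of water-droplet cloud tops.  Temperatures are in kelvin."""
    bt_108 = np.asarray(bt_108)
    supercooled = (bt_108 > t_cold) & (bt_108 < t_warm)
    water_top = (np.asarray(bt_039) - bt_108) > btd_min
    return supercooled & water_top
```

    Applied pixel-by-pixel to each hourly Meteosat scene, such a mask yields the zones of likely moderate-or-severe icing described in the abstract.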

  3. Flood Forecasting in Wales: Challenges and Solutions

    NASA Astrophysics Data System (ADS)

    How, Andrew; Williams, Christopher

    2015-04-01

    With steep, fast-responding river catchments, exposed coastal reaches with large tidal ranges, and large population densities in some of the most at-risk areas, flood forecasting in Wales presents many varied challenges. Utilising advances in computing power and learning from best practice within the United Kingdom and abroad has brought significant improvements in recent years; however, many challenges still remain. Developments in computing and increased processing power come with a significant price tag; greater numbers of data sources and ensemble feeds bring a better understanding of uncertainty, but the wealth of data needs careful management to ensure a clear message of risk is disseminated; new modelling techniques utilise better and faster computation, but lack the history of record and experience gained from the continued use of more established forecasting models. As a flood forecasting team we work to develop coastal and fluvial forecasting models, set them up for operational use and manage the duty role that runs the models in real time. An overview of our current operational flood forecasting system will be presented, along with a discussion of some of the solutions we have in place to address the challenges we face. These include:
    • real-time updating of fluvial models
    • rainfall forecasting verification
    • ensemble forecast data
    • longer range forecast data
    • contingency models
    • offshore to nearshore wave transformation
    • calculation of wave overtopping

  4. Assessing the Predictability of Convection using Ensemble Data Assimilation of Simulated Radar Observations in an LETKF system

    NASA Astrophysics Data System (ADS)

    Lange, Heiner; Craig, George

    2014-05-01

    This study uses the Local Ensemble Transform Kalman Filter (LETKF) to perform storm-scale data assimilation of simulated Doppler radar observations into the non-hydrostatic, convection-permitting COSMO model. In perfect-model experiments (OSSEs), it is investigated how the limited predictability of convective storms affects precipitation forecasts. The study compares a fine analysis scheme with small RMS errors to a coarse scheme that allows for errors in position, shape and occurrence of storms in the ensemble. The coarse scheme uses superobservations, a coarser grid for analysis weights, a larger localization radius and a larger observation error, which together allow a broadening of the Gaussian error statistics. Three-hour forecasts of convective systems (with typical lifetimes exceeding 6 hours) from the detailed analyses of the fine scheme are found to be advantageous over those of the coarse scheme during the first 1-2 hours with respect to the predicted storm positions. After 3 hours in the convective regime used here, the forecast quality of the two schemes appears indiscernible, judging by RMSE and by verification methods for rain fields and objects. It is concluded that, for operational assimilation systems, the analysis scheme might not necessarily need to be detailed to the grid scale of the model. Depending on the forecast lead time, and on the presence of orographic or synoptic forcing that enhances the predictability of storm occurrence, analyses from a coarser scheme might suffice.

  5. Long-term ensemble forecast of snowmelt inflow into the Cheboksary Reservoir under two different weather scenarios

    NASA Astrophysics Data System (ADS)

    Gelfan, Alexander; Moreydo, Vsevolod; Motovilov, Yury; Solomatine, Dimitri P.

    2018-04-01

    A long-term forecasting ensemble methodology, applied to water inflows into the Cheboksary Reservoir (Russia), is presented. The methodology is based on a version of the semi-distributed hydrological model ECOMAG (ECOlogical Model for Applied Geophysics) that allows for the calculation of an ensemble of inflow hydrographs using two different sets of weather ensembles for the lead time period: observed weather data, constructed on the basis of the Ensemble Streamflow Prediction methodology (ESP-based forecast), and synthetic weather data, simulated by a multi-site weather generator (WG-based forecast). We have studied the following: (1) whether there is any advantage of the developed ensemble forecasts in comparison with the currently issued operational forecasts of water inflow into the Cheboksary Reservoir, and (2) whether there is any noticeable improvement in probabilistic forecasts when using the WG-simulated ensemble compared to the ESP-based ensemble. We have found that for a 35-year period beginning from the reservoir filling in 1982, both continuous and binary model-based ensemble forecasts (issued in the deterministic form) outperform the operational forecasts of the April-June inflow volume actually used and, additionally, provide acceptable forecasts of additional water regime characteristics besides the inflow volume. We have also demonstrated that the model performance measures (in the verification period) obtained from the WG-based probabilistic forecasts, which are based on a large number of possible weather scenarios, appeared to be more statistically reliable than the corresponding measures calculated from the ESP-based forecasts based on the observed weather scenarios.
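    The ESP idea underlying the first of the two ensembles can be sketched as follows; the linear-reservoir model is a toy stand-in for ECOMAG (whose code is not reproduced here), and the forcing traces are placeholders for observed historical weather years:

```python
import numpy as np

def esp_ensemble(model, initial_state, historical_forcings):
    """ESP sketch: run one hydrological model from the same current
    initial state once per historical weather trace; the spread of the
    resulting hydrographs forms the forecast ensemble."""
    return [model(initial_state, forcing) for forcing in historical_forcings]

def toy_model(storage, precip, k=0.2):
    """Single linear reservoir standing in for the real model:
    outflow each step is a fixed fraction k of current storage."""
    flows = []
    for p in precip:
        storage += p
        q = k * storage
        storage -= q
        flows.append(q)
    return np.array(flows)

# One ensemble member per historical year of forcing (placeholder traces)
forcings = [np.ones(5), np.zeros(5), 2.0 * np.ones(5)]
ensemble = esp_ensemble(toy_model, 10.0, forcings)
```

    A WG-based forecast differs only in where `historical_forcings` comes from: the weather generator supplies many synthetic traces instead of the limited set of observed years.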

  6. The Hydrologic Ensemble Prediction Experiment (HEPEX)

    NASA Astrophysics Data System (ADS)

    Wood, Andy; Wetterhall, Fredrik; Ramos, Maria-Helena

    2015-04-01

    The Hydrologic Ensemble Prediction Experiment was established in March, 2004, at a workshop hosted by the European Center for Medium Range Weather Forecasting (ECMWF), and co-sponsored by the US National Weather Service (NWS) and the European Commission (EC). The HEPEX goal was to bring the international hydrological and meteorological communities together to advance the understanding and adoption of hydrological ensemble forecasts for decision support. HEPEX pursues this goal through research efforts and practical implementations involving six core elements of a hydrologic ensemble prediction enterprise: input and pre-processing, ensemble techniques, data assimilation, post-processing, verification, and communication and use in decision making. HEPEX has grown through meetings that connect the user, forecast producer and research communities to exchange ideas, data and methods; the coordination of experiments to address specific challenges; and the formation of testbeds to facilitate shared experimentation. In the last decade, HEPEX has organized over a dozen international workshops, as well as sessions at scientific meetings (including AMS, AGU and EGU) and special issues of scientific journals where workshop results have been published. Through these interactions and an active online blog (www.hepex.org), HEPEX has built a strong and active community of nearly 400 researchers & practitioners around the world. This poster presents an overview of recent and planned HEPEX activities, highlighting case studies that exemplify the focus and objectives of HEPEX.

  7. Verification of the NWP models operated at ICM, Poland

    NASA Astrophysics Data System (ADS)

    Melonek, Malgorzata

    2010-05-01

    The Interdisciplinary Centre for Mathematical and Computational Modelling, University of Warsaw (ICM) started its activity in the field of NWP in May 1997. Since then, numerical weather forecasts covering Central Europe have been routinely published on our publicly available website. The first NWP model used at ICM was the hydrostatic Unified Model developed by the UK Meteorological Office, run as a mesoscale version with a horizontal resolution of 17 km and 31 vertical levels. At present, two non-hydrostatic NWP models run in a quasi-operational regime. The main one, the new UM model with 4 km horizontal resolution, 38 vertical levels and a forecast range of 48 hours, runs four times a day. The second, the COAMPS model (Coupled Ocean/Atmosphere Mesoscale Prediction System) developed by the US Naval Research Laboratory, configured with three nested grids (with corresponding resolutions of 39 km, 13 km and 4.3 km, and 30 vertical levels), runs twice a day (for 00 and 12 UTC). The second grid covers Central Europe and has a forecast range of 84 hours. Results of both NWP models, i.e. COAMPS computed on the 13 km mesh and UM, are verified against observations from Polish synoptic stations. Verification uses surface observations and nearest-grid-point forecasts. The following meteorological elements are verified: air temperature at 2 m, mean sea level pressure, wind speed and direction at 10 m, and 12-hour accumulated precipitation. Different statistical indices are presented: for continuous variables, the Mean Error (ME), Mean Absolute Error (MAE) and Root Mean Squared Error (RMSE) are computed in 6-hour intervals; for precipitation, contingency tables for different thresholds are computed and verification scores such as FBI, ETS, POD and FAR are presented graphically. The verification sample covers nearly one year.
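    The scores named above can be computed from forecast-observation pairs as follows; the data are hypothetical, and the score formulas are the standard continuous-error and contingency-table definitions:

```python
import numpy as np

def continuous_scores(fcst, obs):
    """ME, MAE and RMSE for a continuous variable such as 2 m temperature."""
    err = np.asarray(fcst, float) - np.asarray(obs, float)
    return {"ME": err.mean(), "MAE": np.abs(err).mean(),
            "RMSE": np.sqrt((err ** 2).mean())}

def precip_scores(fcst, obs, threshold):
    """FBI, POD, FAR and ETS from the contingency table built by
    thresholding accumulated precipitation."""
    f = np.asarray(fcst) >= threshold
    o = np.asarray(obs) >= threshold
    hits = np.sum(f & o)
    false_alarms = np.sum(f & ~o)
    misses = np.sum(~f & o)
    n = f.size
    hits_random = (hits + false_alarms) * (hits + misses) / n  # chance hits
    return {"FBI": (hits + false_alarms) / (hits + misses),
            "POD": hits / (hits + misses),
            "FAR": false_alarms / (hits + false_alarms),
            "ETS": (hits - hits_random)
                   / (hits + false_alarms + misses - hits_random)}
```

    For example, six 12-hour precipitation pairs with one false alarm and one miss give FBI = 1 (unbiased event frequency) despite POD < 1.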

  8. Global scale predictability of floods

    NASA Astrophysics Data System (ADS)

    Weerts, Albrecht; Gijsbers, Peter; Sperna Weiland, Frederiek

    2016-04-01

    Flood (and storm surge) forecasting at the continental and global scale has only become possible in recent years (Emmerton et al., 2016; Verlaan et al., 2015) due to the availability of meteorological forecasts, global-scale precipitation products, and global-scale hydrologic and hydrodynamic models. Deltares has set up GLOFFIS, a research-oriented multi-model operational flood forecasting system based on Delft-FEWS, in an open experimental ICT facility called Id-Lab. In GLOFFIS both the W3RA and PCRGLOB-WB models are run in ensemble mode using GEFS and ECMWF-EPS (latency 2 days). GLOFFIS will be used for experiments into the predictability of floods (and droughts) and its dependency on initial state estimation, meteorological forcing and the hydrologic model used. Here we present initial results of the verification of ensemble flood forecasts derived with the GLOFFIS system. References: Emmerton, R., Stephens, L., Pappenberger, F., Pagano, T., Weerts, A., Wood, A., Salamon, P., Brown, J., Hjerdt, N., Donnelly, C., Cloke, H.: Continental and global scale flood forecasting systems, WIREs Water (accepted), 2016. Verlaan, M., De Kleermaeker, S., Buckman, L.: GLOSSIS: Global storm surge forecasting and information system, Australasian Coasts & Ports Conference, 15-18 September 2015, Auckland, New Zealand.

  9. Establishing NWP capabilities in African Small Island States (SIDs)

    NASA Astrophysics Data System (ADS)

    Rögnvaldsson, Ólafur

    2017-04-01

    Íslenskar orkurannsóknir (ÍSOR), in collaboration with Belgingur Ltd. and the United Nations Economic Commission for Africa (UNECA), signed a Letter of Agreement in 2015 regarding collaboration in the "Establishing Operational Capacity for Building, Deploying and Using Numerical Weather and Seasonal Prediction Systems in Small Island States in Africa (SIDs)" project. The specific objectives of the collaboration were the following: - Build capacity of National Meteorological and Hydrology Services (NMHS) staff on the use of the WRF atmospheric model for weather and seasonal forecasting, interpretation of model results, and the use of observations to verify and improve model simulations. - Establish a platform for integrating short to medium range weather forecasts, as well as seasonal forecasts, into already existing infrastructure at NMHS and Regional Climate Centres. - Improve understanding of existing model results and forecast verification, for improving decision-making on the time scale of days to weeks. To meet these challenges, the operational Weather On Demand (WOD) forecasting system, developed by Belgingur, is being installed in a number of SIDs countries (Cabo Verde, Guinea-Bissau, and Seychelles), as well as being deployed for the Pan-Africa region, with forecasts being disseminated to collaborating NMHSs.

  10. An Objective Verification of the North American Mesoscale Model for Kennedy Space Center and Cape Canaveral Air Force Station

    NASA Technical Reports Server (NTRS)

    Bauman, William H., III

    2010-01-01

    The 45th Weather Squadron (45 WS) Launch Weather Officers use the 12-km resolution North American Mesoscale (NAM) model (MesoNAM) text and graphical product forecasts extensively to support launch weather operations. However, the actual performance of the model at Kennedy Space Center (KSC) and Cape Canaveral Air Force Station (CCAFS) has not been measured objectively. In order to have tangible evidence of model performance, the 45 WS tasked the Applied Meteorology Unit to conduct a detailed statistical analysis of model output compared to observed values. The model products are provided to the 45 WS by ACTA, Inc. and include hourly forecasts from 0 to 84 hours based on model initialization times of 00, 06, 12 and 18 UTC. The objective analysis compared the MesoNAM forecast winds, temperature and dew point, as well as the changes in these parameters over time, to the observed values from the sensors in the KSC/CCAFS wind tower network. Objective statistics will give the forecasters knowledge of the model's strengths and weaknesses, which will result in improved forecasts for operations.

  11. An Automated System to Quantify Convectively induced Aircraft encounters with Turbulence over Europe and North Atlantic

    NASA Astrophysics Data System (ADS)

    Meneguz, Elena; Turp, Debi; Wells, Helen

    2015-04-01

    It is well known that encounters with moderate or severe turbulence can lead to passenger injuries and incur high costs for airlines from compensation and litigation. As one of two World Area Forecast Centres (WAFCs), the Met Office has responsibility for forecasting en-route weather hazards worldwide for aviation above a height of 10,000 ft. Observations from commercial aircraft provide a basis for gaining a better understanding of turbulence and for improving turbulence forecasts through verification. However, there is currently a lack of information regarding the possible cause of the observed turbulence, or whether the turbulence occurred within cloud. Such information would be invaluable for the development of forecasting techniques for particular types of turbulence and for forecast verification. Of all the possible sources of turbulence, convective activity is believed to be a major cause, but its relative importance over the Europe and North Atlantic area has not yet been quantified in a systematic way. In this study, a new approach is developed to automate the identification of turbulent encounters in the proximity of convective clouds. Observations of convection are provided by two independent sources: a surface-based lightning network and satellite imagery. Lightning observations are taken from the Met Office Arrival Time Detections network (ATDnet). ATDnet has been designed to identify cloud-to-ground flashes over Europe but also detects (a smaller fraction of) strikes over the North Atlantic. Meteosat Second Generation (MSG) satellite products are used to identify convective clouds by applying a brightness temperature filtering technique. The morphological features of cold cloud tops are also investigated. The system is run for all in situ turbulence reports received from airlines over a total of 12 months during summer 2013 and 2014 for the domain of interest. 
    Results of this preliminary short-term climatological study show significant intra-seasonal variability; on average, 15% of all aircraft encounters with turbulence are found in the proximity of convective clouds.

  12. Implementation of the Short-Term Ensemble Prediction System (STEPS) in Belgium and verification of case studies

    NASA Astrophysics Data System (ADS)

    Foresti, Loris; Reyniers, Maarten; Delobbe, Laurent

    2014-05-01

    The Short-Term Ensemble Prediction System (STEPS) is a probabilistic precipitation nowcasting scheme developed at the Australian Bureau of Meteorology in collaboration with the UK Met Office. In order to account for the multiscaling nature of rainfall structures, the radar field is decomposed into an 8-level multiplicative cascade using a Fast Fourier Transform. The cascade is advected using the velocity field estimated with optical flow and evolves stochastically according to a hierarchy of auto-regressive processes. This reproduces the empirical observation that the small scales evolve faster in time than the large scales. The uncertainty in radar rainfall measurement and the unknown future development of the velocity field are also accounted for by stochastic modelling, in order to reflect their typical spatial and temporal variability. Recently, a four-year national research program, PLURISK ("forecasting and management of extreme rainfall induced risks in the urban environment"), has been initiated by the University of Leuven, the Royal Meteorological Institute (RMI) of Belgium and three other partners. The project deals with the nowcasting of rainfall and subsequent urban inundations, as well as socio-economic risk quantification, communication, warning and prevention. At the urban scale it is widely recognized that the uncertainty of hydrological and hydraulic models is largely driven by the uncertainty of the input rainfall estimates and forecasts. In support of the PLURISK project, the RMI aims at integrating STEPS into the current operational deterministic precipitation nowcasting system INCA-BE (Integrated Nowcasting through Comprehensive Analysis). This contribution will illustrate examples of STEPS ensemble and probabilistic nowcasts for a few selected case studies of stratiform and convective rain in Belgium. 
The paper focuses on the development of STEPS products for potential hydrological users and a preliminary verification of the nowcasts, especially to analyze the spatial distribution of forecast errors. The analysis of nowcast biases reveals the locations where the convective initiation, rainfall growth and decay processes significantly reduce the forecast accuracy, but also points out the need for improving the radar-based quantitative precipitation estimation product that is used both to generate and verify the nowcasts. The collection of fields of verification statistics is implemented using an online update strategy, which potentially enables the system to learn from forecast errors as the archive of nowcasts grows. The study of the spatial or temporal distribution of nowcast errors is a key step to convey to the users an overall estimation of the nowcast accuracy and to drive future model developments.
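    The cascade decomposition at the heart of STEPS can be sketched as follows; this simplified version uses hard annular masks in Fourier space, whereas STEPS itself uses smooth band-pass filters:

```python
import numpy as np

def fft_cascade(field, n_levels=8):
    """Split a 2-D rain field into n_levels spectral bands, a
    simplified stand-in for the STEPS cascade decomposition.  Each
    level keeps one annulus of wavenumbers; summing the returned
    levels recovers the original field."""
    ny, nx = field.shape
    F = np.fft.fft2(field)
    ky = np.fft.fftfreq(ny)[:, None]
    kx = np.fft.fftfreq(nx)[None, :]
    k = np.sqrt(kx ** 2 + ky ** 2)
    k_pos = k[k > 0]
    # Logarithmically spaced band edges covering all wavenumbers
    edges = np.concatenate(([0.0],
                            np.geomspace(k_pos.min() * 1.01,
                                         k.max() * 1.01, n_levels)))
    return [np.real(np.fft.ifft2(F * ((k >= lo) & (k < hi))))
            for lo, hi in zip(edges[:-1], edges[1:])]
```

    In the full scheme each level then gets its own auto-regressive temporal evolution, so the finer levels decorrelate faster than the coarse ones.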

  13. Toward Better Intraseasonal and Seasonal Prediction: Verification and Evaluation of the NOGAPS Model Forecasts

    DTIC Science & Technology

    2012-09-30

    package developed by the Cloud Feedback Model Intercomparison Project (CFMIP), COSP (BODAS-SALCEDO et al. 2011). COSP will convert the model hydrometeors ...and infrared data at high spatial and temporal resolution. J. Hydromet., 5, 487-503. Kay, J. E. et al., 2012: Exposing global cloud biases in the

  14. Soundscapes

    DTIC Science & Technology

    2013-09-30

    STATEMENT A. Approved for public release; distribution is unlimited. . Soundscapes Michael B...models to provide hindcasts, nowcasts, and forecasts of the time-evolving soundscape . In terms of the types of sound sources, we will focus initially on...APPROACH The research has two principle thrusts: 1) the modeling of the soundscape , and 2) verification using datasets that have been collected

  15. Soundscapes

    DTIC Science & Technology

    2012-09-30

    STATEMENT A. Approved for public release; distribution is unlimited. . Soundscapes Michael B...models to provide hindcasts, nowcasts, and forecasts of the time-evolving soundscape . In terms of the types of sound sources, we will focus initially on...APPROACH The research has two principle thrusts: 1) the modeling of the soundscape , and 2) verification using datasets that have been collected

  16. Analysis and verification of a prediction model of solar energetic proton events

    NASA Astrophysics Data System (ADS)

    Wang, J.; Zhong, Q.

    2017-12-01

    Solar energetic particle events can cause severe radiation damage near Earth. Alerts and summary products for solar energetic proton events are provided by the Space Environment Prediction Center (SEPC) according to the flux of greater-than-10 MeV protons measured by the GOES satellite in geosynchronous orbit. The start of a solar energetic proton event is defined as the time when the flux of greater-than-10 MeV protons equals or exceeds 10 proton flux units (pfu). In this study, a model was developed to predict solar energetic proton events and provide warnings at least minutes in advance, based on both the soft X-ray flux and the integral proton flux measured by GOES. The quality of the forecast model was measured against verification criteria of accuracy, reliability, discrimination capability, and forecast skill. The peak flux and rise time of the solar energetic proton events in six channels (>1 MeV, >5 MeV, >10 MeV, >30 MeV, >50 MeV, >100 MeV) were also simulated and analyzed.
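    The event definition used here amounts to a threshold crossing in the proton flux time series; a minimal sketch (illustrative only, since operational alerting may apply persistence criteria not given in the abstract):

```python
import numpy as np

def proton_event_start(times, flux, threshold=10.0):
    """Return the first time at which the >10 MeV integral proton flux
    reaches the SEPC event threshold of 10 pfu, or None if it never
    does within the series."""
    flux = np.asarray(flux, dtype=float)
    idx = np.flatnonzero(flux >= threshold)
    return times[idx[0]] if idx.size else None
```

    A forecast model of the kind described would aim to issue its warning before this crossing time, using the soft X-ray flux as a leading indicator.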

  17. The HEPEX Seasonal Streamflow Forecast Intercomparison Project

    NASA Astrophysics Data System (ADS)

    Wood, A. W.; Schepen, A.; Bennett, J.; Mendoza, P. A.; Ramos, M. H.; Wetterhall, F.; Pechlivanidis, I.

    2016-12-01

    The Hydrologic Ensemble Prediction Experiment (HEPEX; www.hepex.org) has launched an international seasonal streamflow forecasting intercomparison project (SSFIP) with the goal of broadening community knowledge about the strengths and weaknesses of various operational approaches being developed around the world. While some of these approaches have existed for decades (e.g. Ensemble Streamflow Prediction, or ESP, in the United States and elsewhere), recent years have seen the proliferation of new operational and experimental streamflow forecasting approaches. These have largely been developed independently in each country, so it is difficult to assess whether the approaches employed in some centers offer more promise for development than others. This motivates us to establish a forecasting testbed to facilitate a diagnostic evaluation of a range of different streamflow forecasting approaches and their components over a common set of catchments, using a common set of validation methods. Rather than prescribing a set of scientific questions from the outset, we are letting the hindcast results and notable differences in methodologies on a watershed-specific basis motivate more targeted analyses and sub-experiments that may provide useful insights. The initial pilot of the testbed involved two approaches - CSIRO's Bayesian joint probability (BJP) and NCAR's sequential regression - for two catchments, each designated by one of the teams (the Murray River, Australia, and the Hungry Horse reservoir drainage area, USA). Additional catchments and approaches are in the process of being added to the testbed. To support this, CSIRO and NCAR have developed data and analysis tools, data standards and protocols to formalize the experiment. These include requirements for cross-validation, verification, reference climatologies, and common predictands. This presentation describes the SSFIP experiments, pilot basin results and scientific findings to date.

  18. Development of a multi-sensor based urban discharge forecasting system using remotely sensed data: A case study of extreme rainfall in South Korea

    NASA Astrophysics Data System (ADS)

    Yoon, Sunkwon; Jang, Sangmin; Park, Kyungwon

    2017-04-01

    Extreme weather due to a changing climate is a major source of water-related disasters such as flooding and inundation, and the resulting damage is expected to grow worldwide. To prevent water-related disasters and mitigate their damage in urban areas, we developed a multi-sensor based real-time discharge forecasting system using remotely sensed data from radar and satellite. We used the Communication, Ocean and Meteorological Satellite (COMS) and Korea Meteorological Administration (KMA) weather radar for quantitative precipitation estimation. The Automatic Weather System (AWS) and the McGill Algorithm for Precipitation Nowcasting by Lagrangian Extrapolation (MAPLE) were used for verification of rainfall accuracy. For the optimal Z-R relation, the tropical Z-R relationship (Z = 32R^1.65) was applied, and it was confirmed that accuracy improves for extreme rainfall events. In addition, the performance of the blended multi-sensor rainfall was improved for heavy rainfall events of 60 mm/h and stronger. Moreover, we forecast urban discharge using the Storm Water Management Model (SWMM). Several statistical methods were used to assess the model simulation against observations; in terms of the correlation coefficient and r-squared, observed and forecasted discharge were highly correlated. Based on this study, we demonstrated the feasibility of a real-time urban discharge forecasting system using remotely sensed data and its utilization for real-time flood warning. Acknowledgement: This research was supported by a grant (13AWMP-B066744-01) from the Advanced Water Management Research Program (AWMP) funded by the Ministry of Land, Infrastructure and Transport (MOLIT) of the Korean government.
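    The Z-R inversion above can be sketched in a few lines. This is a generic illustration, not the authors' code; it assumes reflectivity arrives in dBZ and that the quoted tropical relation Z = 32R^1.65 applies:

    ```python
    def rain_rate_from_dbz(dbz, a=32.0, b=1.65):
        """Invert a Z-R power law Z = a * R**b for rain rate R (mm/h).

        Defaults follow the tropical relation Z = 32R^1.65 quoted above; the
        input units and any operational quality control are assumptions here.
        """
        z = 10.0 ** (dbz / 10.0)        # dBZ -> linear reflectivity (mm^6 m^-3)
        return (z / a) ** (1.0 / b)

    # Example: rain rate for a 50 dBZ echo under the tropical relation
    print(round(rain_rate_from_dbz(50.0), 1))
    ```

    Lowering the exponent b, as tropical relations do relative to the classical Marshall-Palmer Z = 200R^1.6, yields higher rain rates for the same strong echo, which is consistent with the improvement the authors report for extreme events.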

  19. An experimental system for flood risk forecasting at global scale

    NASA Astrophysics Data System (ADS)

    Alfieri, L.; Dottori, F.; Kalas, M.; Lorini, V.; Bianchi, A.; Hirpa, F. A.; Feyen, L.; Salamon, P.

    2016-12-01

    Global flood forecasting and monitoring systems are nowadays a reality and are being applied by an increasing range of users and practitioners in disaster risk management. Furthermore, there is an increasing demand from users to integrate flood early warning systems with risk based forecasts, combining streamflow estimations with expected inundated areas and flood impacts. To this end, we have developed an experimental procedure for near-real time flood mapping and impact assessment based on the daily forecasts issued by the Global Flood Awareness System (GloFAS). The methodology translates GloFAS streamflow forecasts into event-based flood hazard maps based on the predicted flow magnitude and the forecast lead time and a database of flood hazard maps with global coverage. Flood hazard maps are then combined with exposure and vulnerability information to derive flood risk. Impacts of the forecasted flood events are evaluated in terms of flood prone areas, potential economic damage, and affected population, infrastructures and cities. To further increase the reliability of the proposed methodology we integrated model-based estimations with an innovative methodology for social media monitoring, which allows for real-time verification of impact forecasts. The preliminary tests provided good results and showed the potential of the developed real-time operational procedure in helping emergency response and management. In particular, the link with social media is crucial for improving the accuracy of impact predictions.
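    The hazard-exposure-vulnerability combination described above can be shown schematically. The 2x2 grids and the depth-damage curve below are invented for illustration; they are not the GloFAS hazard maps or the actual damage functions used:

    ```python
    import numpy as np

    # Hypothetical per-cell flood depth (m) and population for a tiny domain
    depth = np.array([[0.0, 0.5], [1.2, 2.5]])
    population = np.array([[100, 250], [80, 40]])

    def damage_fraction(d):
        """Toy depth-damage curve: zero below 0.1 m, saturating at 1."""
        return np.clip((d - 0.1) / 2.0, 0.0, 1.0)

    affected = np.where(depth > 0.1, population, 0)        # exposed population
    expected_damage = damage_fraction(depth) * population  # vulnerability-weighted

    print(int(affected.sum()))  # → 370
    ```

    In an operational chain the depth grid would come from the event-based hazard map selected by the forecast flow magnitude, and the damage term would use economic exposure rather than population.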

  20. Three-model ensemble wind prediction in southern Italy

    NASA Astrophysics Data System (ADS)

    Torcasio, Rosa Claudia; Federico, Stefano; Calidonna, Claudia Roberta; Avolio, Elenio; Drofa, Oxana; Landi, Tony Christian; Malguzzi, Piero; Buzzi, Andrea; Bonasoni, Paolo

    2016-03-01

    Quality of wind prediction is of great importance since a good wind forecast allows the prediction of available wind power, improving the penetration of renewable energies into the energy market. Here, a 1-year (1 December 2012 to 30 November 2013) three-model ensemble (TME) experiment for wind prediction is considered. The models employed, run operationally at the National Research Council - Institute of Atmospheric Sciences and Climate (CNR-ISAC), are RAMS (Regional Atmospheric Modelling System), BOLAM (BOlogna Limited Area Model), and MOLOCH (MOdello LOCale in H coordinates). The area considered for the study is southern Italy and the measurements used for the forecast verification are those of the GTS (Global Telecommunication System). Comparison with observations is made every 3 h up to 48 h of forecast lead time. Results show that the three-model ensemble outperforms the forecast of each individual model. The RMSE improvement compared to the best model is between 22 and 30%, depending on the season. It is also shown that the three-model ensemble outperforms the IFS (Integrated Forecasting System) of the ECMWF (European Centre for Medium-Range Weather Forecasts) for surface wind forecasts. Notably, the three-model ensemble forecast performs better than each unbiased model, showing the added value of the ensemble technique. Finally, the sensitivity of the three-model ensemble RMSE to the length of the training period is analysed.
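    The added value of a multi-model ensemble mean can be illustrated with synthetic data: when member errors are quasi-independent, averaging reduces the RMSE below that of any single member. This sketch uses random toy forecasts, not the RAMS/BOLAM/MOLOCH output:

    ```python
    import numpy as np

    # Three synthetic "model" forecasts of wind speed with independent errors
    rng = np.random.default_rng(0)
    truth = 8.0 + 2.0 * np.sin(np.linspace(0.0, 6.0, 200))   # "observed" wind (m/s)
    models = [truth + rng.normal(0.0, 1.0, truth.size) for _ in range(3)]

    def rmse(forecast, observed):
        return float(np.sqrt(np.mean((forecast - observed) ** 2)))

    ensemble_mean = np.mean(models, axis=0)
    print(round(rmse(ensemble_mean, truth), 2),
          [round(rmse(m, truth), 2) for m in models])
    ```

    With fully independent, equal-variance errors the ensemble-mean error standard deviation scales as 1/sqrt(n); real models share errors, so operational gains such as the 22-30% reported above are smaller than this idealized limit.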

  1. Assessment of a new seasonal to inter-annual operational Great Lakes water supply, water levels, and connecting channel flow forecasting system

    NASA Astrophysics Data System (ADS)

    Gronewold, A.; Fry, L. M.; Hunter, T.; Pei, L.; Smith, J.; Lucier, H.; Mueller, R.

    2017-12-01

    The U.S. Army Corps of Engineers (USACE) has recently operationalized a suite of ensemble forecasts of Net Basin Supply (NBS), water levels, and connecting channel flows that was developed through a collaboration among USACE, NOAA's Great Lakes Environmental Research Laboratory, Ontario Power Generation (OPG), New York Power Authority (NYPA), and the Niagara River Control Center (NRCC). These forecasts are meant to provide reliable projections of potential extremes in daily discharge in the Niagara and St. Lawrence Rivers over a long time horizon (5 years). The suite of forecasts includes eight configurations that vary by (a) NBS model configuration, (b) meteorological forcings, and (c) incorporation of seasonal climate projections through the use of weighting. Forecasts are updated on a weekly basis, and represent the first operational forecasts of Great Lakes water levels and flows that span daily to inter-annual horizons and employ realistic regulation logic and lake-to-lake routing. We will present results from a hindcast assessment conducted during the transition from research to operation, as well as early indications of success rates determined through operational verification of forecasts. Assessment will include an exploration of the relative skill of various forecast configurations at different time horizons and the potential for application to hydropower decision making and Great Lakes water management.

  2. Impact of SST on heavy rainfall events on eastern Adriatic during SOP1 of HyMeX

    NASA Astrophysics Data System (ADS)

    Ivatek-Šahdan, Stjepan; Stanešić, Antonio; Tudor, Martina; Odak Plenković, Iris; Janeković, Ivica

    2018-02-01

    The late summer and autumn season is favourable for intensive precipitation events (IPEs) in the central Mediterranean. During that period the sea surface is warm and contributes to warming and moistening of the lowest portion of the atmosphere, particularly the planetary boundary layer (PBL). The Adriatic Sea is surrounded by mountains and the area often receives substantial amounts of precipitation in a short time (24 h). The IPEs are a consequence of convection triggered by topography acting on the southerly flow that has brought unstable air to the coastline. Improvement in the prediction of high impact weather events is one of the goals of the Hydrological cycle in the Mediterranean eXperiment (HyMeX). This study examines how precipitation patterns change in response to different SST forcing. We focus on the IPEs that occurred on the eastern Adriatic coast during the first HyMeX Special Observing Period (SOP1, 6 September to 5 November 2012). The operational forecast model ALADIN takes both its SST and its forecast lateral boundary conditions (LBCs) from the global meteorological model (ARPEGE, Météo-France). First we assess the SST used by the operational atmospheric model ALADIN and compare it to in situ measurements, the ROMS ocean model, and the OSTIA and MUR analyses. Results of this assessment show that SST in the eastern Adriatic was overestimated by up to 10 K during the HyMeX SOP1 period. We then examine the sensitivity of 8 km and 2 km resolution forecasts of IPEs to changes in the SST over the whole of SOP1, with special attention to the intensive precipitation event in Rijeka. Forecast runs at both resolutions are performed for the whole of SOP1 using different SST fields prescribed at initial time and kept constant during the model forecast. Categorical verification of 24 h accumulated precipitation did not show substantial improvement in verification scores when more realistic SST was used.
Furthermore, the results show that the impact of introducing improved SST in the analysis on the precipitation forecast varies from case to case. There is generally larger sensitivity to the SST at high resolution than at the lower one, although the forecast period of the latter is longer.
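    Categorical verification of accumulated precipitation, as used above, is built on a 2x2 contingency table per threshold. The counts below are hypothetical (the abstract does not list the specific scores computed); the formulas are the standard ones:

    ```python
    def categorical_scores(hits, false_alarms, misses, correct_negatives):
        """Standard scores from a 2x2 contingency table, as used in categorical
        verification of threshold exceedance (e.g. 24 h accumulated precipitation)."""
        n = hits + false_alarms + misses + correct_negatives
        return {
            "POD": hits / (hits + misses),                    # probability of detection
            "FAR": false_alarms / (hits + false_alarms),      # false alarm ratio
            "CSI": hits / (hits + misses + false_alarms),     # critical success index
            "bias": (hits + false_alarms) / (hits + misses),  # frequency bias
            "PC": (hits + correct_negatives) / n,             # proportion correct
        }

    # Hypothetical counts for one threshold over one verification period
    print(categorical_scores(hits=42, false_alarms=18, misses=8, correct_negatives=132))
    ```

    Comparing such scores between runs with operational and corrected SST is one way to quantify "no substantial improvement" objectively.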

  3. Forecasting Global Rainfall for Points Using ECMWF's Global Ensemble and Its Applications in Flood Forecasting

    NASA Astrophysics Data System (ADS)

    Pillosu, F. M.; Hewson, T.; Mazzetti, C.

    2017-12-01

    Prediction of local extreme rainfall has historically been the remit of nowcasting and high resolution limited area modelling, which represent only limited areas, may not be spatially accurate, give reasonable results only for limited lead times (<2 days) and become prohibitively expensive at global scale. ECMWF/EFAS/GLOFAS have developed a novel, cost-effective and physically-based statistical post-processing software ("ecPoint-Rainfall", ecPR; operational in 2017) that uses ECMWF Ensemble (ENS) output to deliver global probabilistic rainfall forecasts for points up to day 10. Firstly, ecPR applies a new notion of "remote calibration", which 1) allows us to replicate a multi-centennial training period using only one year of data, and 2) provides forecasts for anywhere in the world. Secondly, the software applies an understanding of how different rainfall generation mechanisms lead to different degrees of sub-grid variability in rainfall totals, and of where biases in the model can be improved upon. Long-term verification has shown that the post-processed rainfall has better reliability and resolution at every lead time compared with the raw ENS, and for large totals, ecPR outputs have the same skill at day 5 that the raw ENS has at day 1 (ROC area metric). ecPR could be used as input for hydrological models if its probabilistic output is adapted to the input requirements of those models. Indeed, ecPR does not provide information on where the highest total is likely to occur inside the gridbox, nor on the spatial distribution of rainfall values nearby. "Scenario forecasts" could be a solution. They are derived by locating the rainfall peak in sensitive positions (e.g. urban areas), and then redistributing the remaining quantities in the gridbox, modifying traditional spatial correlation characterization methodologies (e.g. variogram analysis) to take into account, for instance, the type of rainfall forecast (stratiform, convective).
Such an approach could be a turning point in the field of medium-range global real-time riverine flood forecasts. This presentation will illustrate for ecPR 1) system calibration, 2) operational implementation, 3) long-term verification, 4) future developments, and 5) early ideas for the application of ecPR outputs in hydrological models.

  4. Long-term weather predictability: Ural case study

    NASA Astrophysics Data System (ADS)

    Kubyshen, Alexander; Shopin, Sergey

    2016-04-01

    The accuracy of state-of-the-art long-term meteorological forecasts (at the seasonal level) is still low. Here we present an approach (the RAMES method) based on a different forecasting methodology. It provides a prediction horizon of up to 19-22 years under equal probabilities of determination of parameters in every analyzed period [1]. The basic statements of the method are the following. 1. Long-term forecasting on the basis of numerical modeling of the global meteorological process is impossible in principle. Extension of the long-term prediction horizon can be obtained only by revealing and using the periodicity of meteorological situations at one point of observation. 2. The conventional calendar is unsuitable for generalization of meteorological data and for revealing the cyclicity of meteorological processes. The RAMES method uses natural time intervals: one day, the synodic month and one year. A set of special calendars was developed using these natural periods and the Metonic cycle. 3. A long-term time series of meteorological data is not a uniform universal set; it is a sequence of 28 universal sets successively superseding each other in time. The specifics of the method are: 1. Usage of an original research toolkit consisting of: a set of calendars based on the Metonic cycle; a set of charts (coordinate systems) for the construction of sequence diagrams (of daily variability of a meteorological parameter during the analyzed year; of daily variability of a meteorological parameter using long-term dynamical time series of analogue periods; of monthly and yearly variability of the accumulated value of a meteorological parameter). 2. Identification and usage of new virtual meteorological objects with several degrees of generalization, appropriately located in the used coordinate systems. 3. Integration of all calculations into a single technological scheme providing comparison and mutual verification of calculation results.
During prolonged testing in the Ural region, the efficiency of the method was demonstrated for forecasting the following meteorological parameters: air temperature (minimum, maximum, daily mean, diurnal variation, last spring and first autumn freeze); periods of winds with speeds of >5 m/s and the maximum expected wind speed; precipitation periods and amounts; relative humidity; and atmospheric pressure. Atmospheric events (thunderstorms, fog) and hydrometeors also occupy appropriate positions in the sequence diagrams, which provides a possibility of long-term forecasting for these events as well. Forecast accuracy was tested over the years 2006-2009. The difference between the forecasted monthly mean temperature and actual values was <0.5°C in 40.9% of cases, between 0.5°C and 1°C in 18.2% of cases, between 1°C and 1.5°C in 18.2% of cases, and <2°C in 86% of cases. The RAMES method provides a toolkit to forecast weather conditions several years in advance. 1. A.F. Kubyshen, "RAMES method: revealing the periodicity of meteorological processes and its usage for long-term forecasting [Metodika «RAMES»: vyjavlenie periodichnosti meteorologicheskih processov i ee ispol'zovanie dlja dolgosrochnogo prognozirovanija]", in A.E. Fedorov (ed.), Sistema «Planeta Zemlja»: 200 let so dnja rozhdenija Izmaila Ivanovicha Sreznevskogo. 100 let so dnja izdanija ego slovarja drevnerusskogo jazyka. LENAND. Moscow. pp. 305-311. (In Russian)

  5. Statistical and dynamical forecast of regional precipitation after mature phase of ENSO

    NASA Astrophysics Data System (ADS)

    Sohn, S.; Min, Y.; Lee, J.; Tam, C.; Ahn, J.

    2010-12-01

    While the seasonal predictability of general circulation models (GCMs) has improved, current model atmospheres in the mid-latitudes do not respond correctly to external forcing such as tropical sea surface temperature (SST), particularly over the East Asian and western North Pacific summer monsoon regions. In addition, the time-scale of the prediction scope is considerably limited and model forecast skill is still very poor beyond two weeks. Although recent studies indicate that coupled-model-based multi-model ensemble (MME) forecasts show better performance, long-lead forecasts exceeding 9 months still show a dramatic decrease in seasonal predictability. This study aims at diagnosing dynamical MME forecasts comprised of state-of-the-art 1-tier models as well as comparing them with statistical model forecasts, focusing on East Asian summer precipitation predictions after the mature phase of ENSO. The lagged impact of El Niño, as a major climate contributor, on the summer monsoon in model environments is also evaluated in the sense of conditional probabilities. To evaluate the probability forecast skills, the reliability (attributes) diagram and the relative operating characteristic, following the recommendations of the World Meteorological Organization (WMO) Standardized Verification System for Long-Range Forecasts, are used in this study. The results should shed light on the prediction skill of the dynamical models, and also of the statistical model, in forecasting East Asian summer monsoon rainfall at long lead times.
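    Probability forecast skill of the kind evaluated above (reliability, discrimination) is commonly summarized by the Brier score; a minimal sketch with made-up probabilities, not the study's data:

    ```python
    import numpy as np

    def brier_score(p, o):
        """Brier score for probability forecasts p of a binary event o (0/1);
        lower is better, 0 is a perfect deterministic forecast."""
        p, o = np.asarray(p, dtype=float), np.asarray(o, dtype=float)
        return float(np.mean((p - o) ** 2))

    # Hypothetical probability forecasts of an above-normal rainfall event
    p = [0.9, 0.7, 0.2, 0.1, 0.6, 0.3]
    o = [1, 1, 0, 0, 1, 0]
    print(round(brier_score(p, o), 3))  # → 0.067
    ```

    The reliability diagram and ROC used in the study decompose this kind of score further: reliability bins forecasts by issued probability and compares against observed frequency, while the ROC varies a probability threshold and traces hit rate against false alarm rate.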

  6. An Approach to Assess Observation Impact Based on Observation-Minus-Forecast Residuals

    NASA Technical Reports Server (NTRS)

    Todling, Ricardo

    2009-01-01

    Langland and Baker (2004) introduced an approach to assess the impact of observations on forecasts. In that approach, a state-space aspect of the forecast is defined and a procedure is derived that relates changes in the aspect to changes in the initial conditions associated with the assimilation of observations, ultimately providing information about the impact of individual observations on the forecast. Some features of the approach are to be noted. The typical choice of forecast aspect employed in related works is rather arbitrary and leads to an incomplete assessment of the observing system. Furthermore, the state-space forecast aspect requires availability of a verification state that should ideally be uncorrelated with the forecast but in practice is not. Lastly, the approach involves the adjoint operator of the entire data assimilation system and as such is constrained by the validity of this operator. In this presentation, an observation-space metric is used that, for a relatively time-homogeneous observing system, allows inferring observation impact on the forecast without some of the limitations above. Specifically, using observation-minus-forecast residuals leads to an approach with the following features: (i) it suggests a rather natural choice of forecast aspect, directly linked to the analysis system and providing a full assessment of the observations; (ii) it naturally avoids introducing undesirable correlations in the forecast aspect by verifying against the observations; and (iii) it does not involve linearization or the use of adjoints, and is therefore applicable to forecasts of any length. The state-space and observation-space approaches might be complementary to some degree, and involve different limitations and complexities. Illustrations are given using NASA GEOS-5 data.
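    A minimal sketch of observation-space diagnostics of the kind described: mean (bias) and RMS of observation-minus-forecast residuals, grouped by observation type. The groups and values are hypothetical, not the GEOS-5 data layout:

    ```python
    import numpy as np

    # Hypothetical OmF (observation minus forecast) residuals by observation type
    omf_by_type = {
        "radiosonde_T": np.array([0.3, -0.1, 0.4, -0.2]),   # K
        "aircraft_T":   np.array([0.8, 0.6, 0.7]),          # K
    }

    for obs_type, omf in omf_by_type.items():
        bias = float(np.mean(omf))                  # systematic departure
        rms = float(np.sqrt(np.mean(omf ** 2)))     # total departure magnitude
        print(obs_type, round(bias, 3), round(rms, 3))
    ```

    Tracking how such statistics change with forecast length, per observation type, is the kind of full-observing-system assessment the observation-space metric enables without adjoints.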

  7. New smoke predictions for Alaska in NOAA’s National Air Quality Forecast Capability

    NASA Astrophysics Data System (ADS)

    Davidson, P. M.; Ruminski, M.; Draxler, R.; Kondragunta, S.; Zeng, J.; Rolph, G.; Stajner, I.; Manikin, G.

    2009-12-01

    Smoke from wildfire is an important component of fine particle pollution, which is responsible for tens of thousands of premature deaths each year in the US. In Alaska, wildfire smoke is the leading cause of poor air quality in summer. Smoke forecast guidance helps air quality forecasters and the public take steps to limit exposure to airborne particulate matter. A new smoke forecast guidance tool, built by a cross-NOAA team, leverages efforts of NOAA's partners at the USFS on wildfire emissions information, and with the EPA, in coordinating with state and local air quality forecasters. Required operational deployment criteria, in the categories of objective verification, subjective feedback, and production readiness, have been demonstrated in experimental testing during 2008-2009, for addition to the operational products in NOAA's National Air Quality Forecast Capability. The Alaska smoke forecast tool is an adaptation of NOAA's smoke predictions implemented operationally for the lower 48 states (CONUS) in 2007. The tool integrates satellite information on the location of wildfires with weather (North American Mesoscale model) and smoke dispersion (HYSPLIT) models to produce daily predictions of smoke transport for Alaska, in binary and graphical formats. Hour-by-hour predictions at 12 km grid resolution of smoke at the surface and in the column are provided each day by 13 UTC, extending through midnight the next day. Forecast accuracy and reliability are monitored against benchmark criteria. While wildfire activity in the CONUS is year-round, the intense wildfire activity in Alaska is limited to the summer. Initial experimental testing during summer 2008 was hindered by unusually limited wildfire activity and very cloudy conditions. In contrast, heavier than average wildfire activity during summer 2009 provided a representative basis (more than 60 days of wildfire smoke) for demonstrating the required prediction accuracy.
A new satellite observation product was developed for routine near-real-time verification of these predictions. The footprint of the predicted smoke from identified fires is verified against satellite observations of the spatial extent of smoke aerosols (5 km resolution). These observations are based on geostationary aerosol optical depth measurements that provide good time resolution of the horizontal spatial extent of the plumes, but they do not yield quantitative concentrations of smoke particles at the surface. Predicted surface smoke concentrations are consistent with the limited number of in situ observations of total fine particle mass from all sources; however, they are much higher than those predicted for most CONUS fires. To assess uncertainty associated with fire emissions estimates, sensitivity analyses are in progress.

  8. Breaks in MODIS time series portend vegetation change: verification using long-term data in an arid grassland ecosystem

    USDA-ARS?s Scientific Manuscript database

    Frequency and severity of extreme climatic events are forecast to increase in the 21st century. Predicting how managed ecosystems may respond to climatic extremes is intensified by uncertainty associated with knowing when, where, and how long effects of the extreme events will be manifest in the eco...

  9. Performance of the multi-model SREPS precipitation probabilistic forecast over Mediterranean area

    NASA Astrophysics Data System (ADS)

    Callado, A.; Escribà, P.; Santos, C.; Santos-Muñoz, D.; Simarro, J.; García-Moya, J. A.

    2009-09-01

    The performance of the Short-Range Ensemble Prediction System (SREPS) probabilistic precipitation forecast over the Mediterranean area has been evaluated by comparison with both an Atlantic-European area excluding the first, and a more general area including the two previous ones. The main aim is to assess whether the performance of the system, given its meso-alpha horizontal resolution of 25 kilometres, is affected over the Mediterranean area, where mesoscale meteorological events play a more important role than in the Atlantic-European area, which is more related to the synoptic scale with an Atlantic influence. Furthermore, two different verification methods have been applied and compared for the three areas in order to assess their performance. SREPS is a daily experimental LAM EPS focused on the short range (up to 72 hours), developed at the Spanish Meteorological Agency (AEMET). To take model errors into account implicitly, five independent limited area models are used (COSMO (COSMO), HIRLAM (HIRLAM Consortium), HRM (DWD), MM5 (NOAA) and UM-NAE (UKMO)), and in order to sample the initial and boundary condition uncertainties each model is integrated using data from four different global deterministic models (GFS (NCEP), GME (DWD), IFS (ECMWF) and UM (UKMO)). Crossing models and initial conditions, the EPS is thus composed of 20 members. The underlying idea is that ensemble performance improves insofar as each member itself performs as well as possible, i.e. the best operational configurations of the limited area models are combined with the best global deterministic model configurations initialized with the best analyses. For this reason, neither a global EPS for initial conditions nor varied model settings such as multi-parameterizations or multi-parameters are used to generate SREPS.
The performance over the three areas has been assessed focusing on 24 hour accumulated precipitation with four usual forecasting thresholds: 1, 5, 10 and 20 mm. A standard probabilistic verification exercise (following ECMWF recommendations) has been carried out, assessing quality through well known properties like reliability, resolution and discrimination, using the usual performance measures: the reliability (attributes) diagram, the Brier score and Brier skill score decomposition, the relative operating characteristic (ROC) and the ROC area. The value of the forecasts with respect to sample climatology is shown with relative value envelopes. This exercise has been carried out for a one-year period (May 2007 to May 2008). Observed precipitation data from high resolution (HR) networks over Europe have been used as reference. To avoid a potential lack of statistical significance due to spatial dependence between close observations, up-scaled observations have been used, provided by ECMWF, which collects the raw data from different member and cooperating states over Europe. This up-scaling methodology is less dependent on the density of precipitation observations than the more classical approach of simply interpolating model outputs to the observation station points. In particular, the observations have been up-scaled to 0.25°x0.25° boxes, taking each box as representative only when more than five observations are available in it. In the first verification method the box average is taken; in the second, a set of quantiles is considered, specifically the 10, 25, 50, 75 and 90 quantiles. The difference between the two methods is that the first takes a single value as representative of precipitation over each box, whereas the second takes a probability density function as the representation of precipitation over the box, thus introducing uncertainty (related to spatial distribution) into the observations.
The results are consistent, and show that in general SREPS is a reliable probabilistic forecasting system for the three selected areas. Concerning performance over different regions, the SREPS probabilistic precipitation forecasts over the selected Mediterranean area have slightly less reliability and resolution than over the North European area, especially for the higher thresholds of 10 and 20 mm. These results suggest that the representation of mesoscale meteorological events around the Mediterranean basin has to be improved in SREPS, and probably also orography-related processes such as the orographic enhancement of precipitation. It is therefore suggested that the skill of the SREPS system around the Mediterranean could be expected to improve if the horizontal and vertical resolution of each limited area model of the system were increased in order to capture the meso-beta scale. When comparing the two verification methods, one using the up-scaled box average and the other using an up-scaled set of quantiles (i.e. a box PDF), it is shown that validation of the probabilistic forecast is considerably more consistent in the latter method, where uncertainties in the observations are introduced, and probably gives a more realistic idea of performance.
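    The two up-scaling verification methods described above can be sketched side by side. Per the abstract, station observations in a 0.25°x0.25° box are summarized either by the box average or by a set of quantiles, and a box is used only when more than five observations fall in it; the observation values below are invented for illustration:

    ```python
    import numpy as np

    # Hypothetical station rainfall observations within one grid box (mm / 24 h)
    obs_in_box = np.array([0.0, 0.5, 1.2, 3.4, 7.8, 12.1])

    if obs_in_box.size > 5:                                   # >5 obs required
        box_average = float(obs_in_box.mean())                # method 1: single value
        box_quantiles = np.percentile(obs_in_box, [10, 25, 50, 75, 90])  # method 2: box PDF
        print(round(box_average, 2), np.round(box_quantiles, 2))
    ```

    The quantile set preserves the sub-box spread that the average discards, which is why verifying against it injects observation uncertainty into the scores.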

  10. An Experimental High-Resolution Forecast System During the Vancouver 2010 Winter Olympic and Paralympic Games

    NASA Astrophysics Data System (ADS)

    Mailhot, J.; Milbrandt, J. A.; Giguère, A.; McTaggart-Cowan, R.; Erfani, A.; Denis, B.; Glazer, A.; Vallée, M.

    2014-01-01

    Environment Canada ran an experimental numerical weather prediction (NWP) system during the Vancouver 2010 Winter Olympic and Paralympic Games, consisting of nested high-resolution (down to 1-km horizontal grid-spacing) configurations of the GEM-LAM model, with improved geophysical fields, cloud microphysics and radiative transfer schemes, and several new diagnostic products such as density of falling snow, visibility, and peak wind gust strength. The performance of this experimental NWP system has been evaluated in these winter conditions over complex terrain using the enhanced mesoscale observing network in place during the Olympics. As compared to the forecasts from the operational regional 15-km GEM model, objective verification generally indicated significant added value of the higher-resolution models for near-surface meteorological variables (wind speed, air temperature, and dewpoint temperature) with the 1-km model providing the best forecast accuracy. Appreciable errors were noted in all models for the forecasts of wind direction and humidity near the surface. Subjective assessment of several cases also indicated that the experimental Olympic system was skillful at forecasting meteorological phenomena at high-resolution, both spatially and temporally, and provided enhanced guidance to the Olympic forecasters in terms of better timing of precipitation phase change, squall line passage, wind flow channeling, and visibility reduction due to fog and snow.

  11. Wave ensemble forecast system for tropical cyclones in the Australian region

    NASA Astrophysics Data System (ADS)

    Zieger, Stefan; Greenslade, Diana; Kepert, Jeffrey D.

    2018-05-01

    Forecasting of waves under extreme conditions such as tropical cyclones is vitally important for many offshore industries, but there remain many challenges. For Northwest Western Australia (NW WA), wave forecasts issued by the Australian Bureau of Meteorology have previously been limited to products from deterministic operational wave models forced by deterministic atmospheric models. The wave models are run over global (resolution 1/4°) and regional (resolution 1/10°) domains with forecast ranges of +7 and +3 days, respectively. Because of this relatively coarse resolution (both in the wave models and in the forcing fields), the accuracy of these products is limited under tropical cyclone conditions. Given this limited accuracy, a new ensemble-based wave forecasting system for the NW WA region has been developed. To achieve this, a new dedicated 8-km resolution grid was nested in the global wave model. Over this grid, the wave model is forced with winds from a bias-corrected European Centre for Medium-Range Weather Forecasts atmospheric ensemble that comprises 51 ensemble members, to take into account the uncertainties in location, intensity and structure of a tropical cyclone system. A unique technique is used to select restart files for each wave ensemble member. The system is designed to operate in real time during the cyclone season, providing +10-day forecasts. This paper describes the wave forecast components of this system and presents the verification metrics and skill for specific events.
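    The abstract says only that the forcing winds are "bias-corrected" without describing the scheme; as a minimal illustration of the idea, a mean-bias correction of ensemble winds against a past hindcast might look like the following (all numbers invented, and the operational correction is likely more sophisticated):

    ```python
    import numpy as np

    # Hypothetical past forecast/observation pairs used to estimate the bias
    hindcast_wind = np.array([18.0, 22.0, 25.0, 30.0])   # past forecasts (m/s)
    observed_wind = np.array([20.0, 24.0, 28.0, 34.0])   # matching observations

    bias = float(np.mean(hindcast_wind - observed_wind))  # systematic error
    members = np.array([21.0, 26.0, 31.0])                # new ensemble members
    corrected = members - bias                            # shift out the mean bias
    print(bias, corrected)
    ```

    Under tropical cyclone conditions such a constant shift only addresses the systematic underestimation of strong winds; it does not correct errors in storm position or structure, which is why the 51-member ensemble spread remains essential.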

  12. An Objective Verification of the North American Mesoscale Model for Kennedy Space Center and Cape Canaveral Air Force Station

    NASA Technical Reports Server (NTRS)

    Bauman, William H., III

    2010-01-01

    The 45th Weather Squadron (45 WS) Launch Weather Officers (LWOs) use the 12-km resolution North American Mesoscale (NAM) model (MesoNAM) text and graphical product forecasts extensively to support launch weather operations. However, the actual performance of the model at Kennedy Space Center (KSC) and Cape Canaveral Air Force Station (CCAFS) has not been measured objectively. In order to have tangible evidence of model performance, the 45 WS tasked the Applied Meteorology Unit (AMU; Bauman et al., 2004) to conduct a detailed statistical analysis of model output compared to observed values. The model products are provided to the 45 WS by ACTA, Inc. and include hourly forecasts from 0 to 84 hours based on model initialization times of 00, 06, 12 and 18 UTC. The objective analysis compared the MesoNAM forecast winds, temperature (T) and dew point (Td), as well as the changes in these parameters over time, to the observed values from the sensors in the KSC/CCAFS wind tower network shown in Table 1. These objective statistics give the forecasters knowledge of the model's strengths and weaknesses, which will result in improved forecasts for operations.
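
    Objective statistics of the kind described above (bias and RMSE of forecast winds and temperatures against tower observations) are straightforward to compute. A minimal sketch, not the AMU's actual code, with hypothetical hourly temperature values:

```python
import numpy as np

def bias(forecast, observed):
    """Mean error (forecast minus observed); positive means overforecasting."""
    return float(np.mean(np.asarray(forecast) - np.asarray(observed)))

def rmse(forecast, observed):
    """Root-mean-square error of the forecast against observations."""
    diff = np.asarray(forecast) - np.asarray(observed)
    return float(np.sqrt(np.mean(diff ** 2)))

# Hypothetical hourly 2-m temperature forecasts vs. tower observations (deg C)
fcst = np.array([20.1, 21.3, 22.8, 24.0, 24.9])
obs = np.array([19.5, 21.0, 22.0, 23.8, 25.2])

print(f"bias = {bias(fcst, obs):+.2f} degC, RMSE = {rmse(fcst, obs):.2f} degC")
```

    The same two functions applied to parameter *changes* over time (e.g., hour-to-hour temperature tendency) give the tendency statistics the abstract mentions.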

  13. The Hydrologic Ensemble Prediction Experiment (HEPEX)

    NASA Astrophysics Data System (ADS)

    Wood, A. W.; Thielen, J.; Pappenberger, F.; Schaake, J. C.; Hartman, R. K.

    2012-12-01

    The Hydrologic Ensemble Prediction Experiment was established in March 2004, at a workshop hosted by the European Centre for Medium-Range Weather Forecasts (ECMWF). With support from the US National Weather Service (NWS) and the European Commission (EC), the HEPEX goal was to bring the international hydrological and meteorological communities together to advance the understanding and adoption of hydrological ensemble forecasts for decision support in emergency management and water resources sectors. The strategy to meet this goal includes meetings that connect the user, forecast producer and research communities to exchange ideas, data and methods; the coordination of experiments to address specific challenges; and the formation of testbeds to facilitate shared experimentation. HEPEX has organized about a dozen international workshops, as well as sessions at scientific meetings (including AMS, AGU and EGU) and special issues of scientific journals where workshop results have been published. Today, the HEPEX mission is to demonstrate the added value of hydrological ensemble prediction systems (HEPS) for emergency management and water resources sectors to make decisions that have important consequences for economy, public health, safety, and the environment. HEPEX is now organised around six major themes that represent core elements of a hydrologic ensemble prediction enterprise: input and pre-processing, ensemble techniques, data assimilation, post-processing, verification, and communication and use in decision making. This poster presents an overview of recent and planned HEPEX activities, highlighting case studies that exemplify the focus and objectives of HEPEX.

  14. The use of seasonal forecasts in a crop failure early warning system for West Africa

    NASA Astrophysics Data System (ADS)

    Nicklin, K. J.; Challinor, A.; Tompkins, A.

    2011-12-01

    Seasonal rainfall in semi-arid West Africa is highly variable. Farming systems in the region are heavily dependent on the monsoon rains, leading to large variability in crop yields and a population that is vulnerable to drought. The existing crop yield forecasting system uses observed weather to calculate a water satisfaction index, which is then related to expected crop yield (Traore et al., 2006). Seasonal climate forecasts may be able to increase the lead time of yield forecasts and reduce the humanitarian impact of drought. This study assesses the potential for a crop failure early warning system that uses dynamic seasonal forecasts and a process-based crop model. Two sets of simulations are presented. In the first, the crop model is driven with observed weather as a control run. Observed rainfall is provided by the GPCP 1DD data set, whilst observed temperature and solar radiation data are given by the ERA-Interim reanalysis. The crop model used is the groundnut version of the General Large Area Model for annual crops (GLAM), which has been designed to operate on the grids used by seasonal weather forecasts (Challinor et al., 2004). GLAM is modified for use in West Africa by allowing multiple planting dates each season, replanting failed crops and producing parameter sets for Spanish- and Virginia-type West African groundnut. Crop yields are simulated for three different assumptions concerning the distribution and relative abundance of Spanish- and Virginia-type groundnut. Model performance varies with location, but overall shows positive skill in reproducing observed crop failure. The results for the three assumptions are similar, suggesting that the performance of the system is limited by something other than information on the type of groundnut grown. In the second set of simulations the crop model is driven with observed weather up to the forecast date, followed by ECMWF System 3 seasonal forecasts until harvest.
The variation of skill with forecast date is assessed along with the extent to which forecasts can be improved by bias correction of the rainfall data. Two forms of bias correction are applied: a novel method of spatially bias correcting daily data, and statistical bias correction of the frequency and intensity distribution. Results are presented using both observed yields and the control run as the reference for verification. The potential for current dynamic seasonal forecasts to form part of an operational system giving timely and accurate warnings of crop failure is discussed. Traore S.B. et al., 2006. A Review of Agrometeorological Monitoring Tools and Methods Used in the West African Sahel. In: Motha R.P. et al., Strengthening Operational Agrometeorological Services at the National Level. Technical Bulletin WAOB-2006-1 and AGM-9, WMO/TD No. 1277. Pages 209-220. www.wamis.org/agm/pubs/agm9/WMO-TD1277.pdf Challinor A.J. et al., 2004. Design and optimisation of a large-area process based model for annual crops. Agric. For. Meteorol. 124, 99-120.

  15. EMC MODEL FORECAST VERIFICATION STATS

    Science.gov Websites

    [Science.gov site listing: verification graphics pages with loops of 500-mb height BIAS and RMSE over the CONUS, and surface wind vector BIAS and RMSE by sub-region (GMC: Gulf of Mexico Coast; SEC: Southeast Coast; NEC: Northeast Coast; ...), valid at 00Z and 12Z, for forecast leads of 48 to 84 hours.]

  16. NCEP Air Quality Forecast(AQF) for Alaska/Hawaii. NOAA/NWS/NCEP/EMC

    Science.gov Websites

    [Science.gov site interface: daily surface maximum or average 2-D fields with quick verification of O3/PM2.5 and meteorology, selectable by year (2012-2015), month, and day.]

  17. Verifying and Postprocessing the Ensemble Spread-Error Relationship

    NASA Astrophysics Data System (ADS)

    Hopson, Tom; Knievel, Jason; Liu, Yubao; Roux, Gregory; Wu, Wanli

    2013-04-01

    With the increased utilization of ensemble forecasts in weather and hydrologic applications, there is a need to verify their benefit over less expensive deterministic forecasts. One such potential benefit of ensemble systems is their capacity to forecast their own forecast error through the ensemble spread-error relationship. The paper begins by revisiting the limitations of the Pearson correlation alone in assessing this relationship. Next, we introduce two new metrics to consider in assessing the utility of an ensemble's varying dispersion. We argue there are two aspects of an ensemble's dispersion that should be assessed. First, and perhaps more fundamentally: is there enough variability in the ensemble's dispersion to justify the maintenance of an expensive ensemble prediction system (EPS), irrespective of whether the EPS is well-calibrated or not? To diagnose this, the factor that controls the theoretical upper limit of the spread-error correlation can be useful. Secondly, does the variable dispersion of an ensemble relate to a variable expectation of forecast error? Representing the spread-error correlation in relation to its theoretical limit can provide a simple diagnostic of this attribute. A context for these concepts is provided by assessing two operational ensembles: 30-member Western US temperature forecasts for the U.S. Army Test and Evaluation Command and 51-member Brahmaputra River flow forecasts of the Climate Forecast and Applications Project for Bangladesh. Both of these systems utilize a postprocessing technique based on quantile regression (QR) under a step-wise forward selection framework, leading to ensemble forecasts with both good reliability and sharpness. In addition, the methodology utilizes the ensemble's ability to self-diagnose forecast instability to produce calibrated forecasts with informative skill-spread relationships.
We will describe both ensemble systems briefly, review the steps used to calibrate the ensemble forecast, and present verification statistics using error-spread metrics, along with figures from operational ensemble forecasts before and after calibration.
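
    The spread-error correlation discussed above can be illustrated with synthetic data. A minimal sketch, assuming a toy ensemble whose per-case dispersion genuinely varies (all numbers are hypothetical, not the ATEC temperature or Brahmaputra flow systems):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical ensemble: 200 forecast cases x 30 members. The "true" spread
# varies from case to case, so spread should predict error magnitude.
n_cases, n_members = 200, 30
true_spread = rng.uniform(0.5, 3.0, n_cases)            # case-dependent uncertainty
truth = rng.normal(0.0, 1.0, n_cases)                   # verifying observations
members = truth[:, None] + rng.normal(
    0.0, true_spread[:, None], (n_cases, n_members))    # dispersed members

ens_mean = members.mean(axis=1)
spread = members.std(axis=1, ddof=1)                    # ensemble spread per case
abs_error = np.abs(ens_mean - truth)                    # error of the ensemble mean

# Pearson spread-error correlation: the diagnostic whose limitations,
# and theoretical upper bound, the abstract revisits.
r = np.corrcoef(spread, abs_error)[0, 1]
print(f"spread-error correlation r = {r:.2f}")
```

    Even with a perfectly calibrated synthetic ensemble, r falls well short of 1, which is the motivation for comparing the observed correlation against its theoretical limit rather than judging it in isolation.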

  18. Performance of the operational high-resolution numerical weather predictions of the Daphne project

    NASA Astrophysics Data System (ADS)

    Tegoulias, Ioannis; Pytharoulis, Ioannis; Karacostas, Theodore; Kartsios, Stergios; Kotsopoulos, Stelios; Bampzelis, Dimitrios

    2015-04-01

    In the framework of the DAPHNE project, the Department of Meteorology and Climatology (http://meteo.geo.auth.gr) of the Aristotle University of Thessaloniki, Greece, utilizes the nonhydrostatic Weather Research and Forecasting model with the Advanced Research dynamic solver (WRF-ARW) in order to produce high-resolution weather forecasts over Thessaly in central Greece. The aim of the DAPHNE project is to tackle the problem of drought in this area by means of weather modification. Cloud seeding helps convective clouds to produce rain more efficiently or reduces hailstone size in favour of raindrops. The most favourable conditions for such a weather modification program in Thessaly occur in the period from March to October, when convective clouds are triggered more frequently. Three model domains, using 2-way telescoping nesting, cover: i) Europe, the Mediterranean Sea and northern Africa (D01), ii) Greece (D02) and iii) the wider region of Thessaly (D03; at selected periods) at horizontal grid-spacings of 15 km, 5 km and 1 km, respectively. This research work intends to describe the atmospheric model setup and analyse its performance during a selected period of the operational phase of the project. The statistical evaluation of the high-resolution operational forecasts is performed using surface observations, gridded fields and radar data. Well-established point verification methods, combined with novel object-based methods built upon them, provide an in-depth analysis of the model skill. Spatial characteristics are adequately captured but a variable time lag between forecast and observation is noted.
Acknowledgments: This research work has been co-financed by the European Union (European Regional Development Fund) and Greek national funds, through the action "COOPERATION 2011: Partnerships of Production and Research Institutions in Focused Research and Technology Sectors" (contract number 11SYN_8_1088 - DAPHNE) in the framework of the operational programme "Competitiveness and Entrepreneurship" and Regions in Transition (OPC II, NSRF 2007-2013)

  19. Adaptation of Mesoscale Weather Models to Local Forecasting

    NASA Technical Reports Server (NTRS)

    Manobianco, John T.; Taylor, Gregory E.; Case, Jonathan L.; Dianic, Allan V.; Wheeler, Mark W.; Zack, John W.; Nutter, Paul A.

    2003-01-01

    Methodologies have been developed for (1) configuring mesoscale numerical weather-prediction models for execution on high-performance computer workstations to make short-range weather forecasts for the vicinity of the Kennedy Space Center (KSC) and the Cape Canaveral Air Force Station (CCAFS) and (2) evaluating the performances of the models as configured. These methodologies have been implemented as part of a continuing effort to improve weather forecasting in support of operations of the U.S. space program. The models, methodologies, and results of the evaluations also have potential value for commercial users who could benefit from tailoring their operations and/or marketing strategies based on accurate predictions of local weather. More specifically, the purpose of developing the methodologies for configuring the models to run on computers at KSC and CCAFS is to provide accurate forecasts of winds, temperature, and such specific thunderstorm-related phenomena as lightning and precipitation. The purpose of developing the evaluation methodologies is to maximize the utility of the models by providing users with assessments of the capabilities and limitations of the models. The models used in this effort thus far include the Mesoscale Atmospheric Simulation System (MASS), the Regional Atmospheric Modeling System (RAMS), and the National Centers for Environmental Prediction Eta Model (Eta for short). The configuration of the MASS and RAMS is designed to run the models at very high spatial resolution and incorporate local data to resolve fine-scale weather features. Model preprocessors were modified to incorporate surface, ship, buoy, and rawinsonde data as well as data from local wind towers, wind profilers, and conventional or Doppler radars. The overall evaluation of the MASS, Eta, and RAMS was designed to assess the utility of these mesoscale models for satisfying the weather-forecasting needs of the U.S. space program. 
The evaluation methodology includes objective and subjective verification methodologies. Objective (e.g., statistical) verification of point forecasts is a stringent measure of model performance, but when used alone, it is not usually sufficient for quantifying the value of the overall contribution of the model to the weather-forecasting process. This is especially true for mesoscale models with enhanced spatial and temporal resolution that may be capable of predicting meteorologically consistent, though not necessarily accurate, fine-scale weather phenomena. Therefore, subjective (phenomenological) evaluation, focusing on selected case studies and specific weather features, such as sea breezes and precipitation, has been performed to help quantify the added value that cannot be inferred solely from objective evaluation.

  20. [Research on rapid and quantitative detection method for organophosphorus pesticide residue].

    PubMed

    Sun, Yuan-Xin; Chen, Bing-Tai; Yi, Sen; Sun, Ming

    2014-05-01

    Traditional pesticide residue detection adopts physical-chemical inspection methods, which require many pretreatment processes and are time-consuming and complicated. In the present study, the authors take chlorpyrifos, widely applied in present-day agriculture, as the research object and propose a rapid and quantitative detection method for organophosphorus pesticide residues. First, according to the chemical characteristics of chlorpyrifos and the chromogenic effects and secondary pollution of several colorimetric reagents, a pretreatment scheme based on the chromogenic reaction of chlorpyrifos with resorcin in a weakly alkaline environment was determined. Second, by analyzing UV-Vis spectrum data of chlorpyrifos samples whose contents were between 0.5 and 400 mg kg-1, it was confirmed that the characteristic information after the color reaction was concentrated mainly between 360 and 400 nm. Third, a full-spectrum forecasting model was established based on partial least squares, whose correlation coefficient of calibration was 0.9996, correlation coefficient of prediction reached 0.9956, standard deviation of calibration (RMSEC) was 2.8147 mg kg-1, and standard deviation of verification (RMSEP) was 8.0124 mg kg-1. Fourth, the wavelength band centered at 400 nm was extracted as the characteristic region to build a forecasting model, whose correlation coefficient of calibration was 0.9996, correlation coefficient of prediction reached 0.9993, standard deviation of calibration (RMSEC) was 2.5667 mg kg-1, and standard deviation of verification (RMSEP) was 4.8866 mg kg-1. Finally, by analyzing the near-infrared spectrum data of chlorpyrifos samples with contents between 0.5 and 16 mg kg-1, the authors found that although the characteristics of the chromogenic functional group are not obvious, the absorption peaks of resorcin itself change in the neighborhood of 5200 cm-1. These experimental results show that the proposed method is effective and feasible for rapid and quantitative detection of organophosphorus pesticide residues. In this method, the information in the full spectrum, especially the UV-Vis spectrum, is strengthened by the chromogenic reaction of a colorimetric reagent, which provides a new way for rapid detection of pesticide residues in agricultural products.
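
    The RMSEC/RMSEP bookkeeping above can be sketched in a few lines. Note that the study used partial least squares on full spectra; for brevity this example substitutes a single-wavelength linear calibration on hypothetical absorbance data, so only the calibration-versus-verification error accounting carries over, not the actual model:

```python
import numpy as np

# Hypothetical Beer-Lambert-style data: absorbance near 400 nm vs.
# chlorpyrifos concentration (mg/kg), with small synthetic noise.
conc_cal = np.array([0.5, 5, 25, 50, 100, 200, 400], dtype=float)
abs_cal = 0.004 * conc_cal + 0.02 + np.array(
    [0.001, -0.002, 0.003, -0.001, 0.002, -0.003, 0.001])

conc_val = np.array([10, 75, 300], dtype=float)           # held-out samples
abs_val = 0.004 * conc_val + 0.02 + np.array([0.004, -0.005, 0.006])

# Inverse calibration: predict concentration from absorbance.
slope, intercept = np.polyfit(abs_cal, conc_cal, 1)

def rmse(pred, ref):
    return float(np.sqrt(np.mean((pred - ref) ** 2)))

rmsec = rmse(slope * abs_cal + intercept, conc_cal)       # calibration error
rmsep = rmse(slope * abs_val + intercept, conc_val)       # verification error
print(f"RMSEC = {rmsec:.2f} mg/kg, RMSEP = {rmsep:.2f} mg/kg")
```

    As in the abstract, RMSEP on held-out samples is larger than RMSEC on the calibration set, which is the expected ordering for any regression-based calibration.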

  1. The forecaster's added value in QPF

    NASA Astrophysics Data System (ADS)

    Turco, M.; Milelli, M.

    2010-03-01

    To the authors' knowledge there are relatively few studies that try to answer this question: "Are humans able to add value to computer-generated forecasts and warnings?". Moreover, the answers are not always positive: in particular, some postprocessing methods are competitive with or superior to human forecasts. Within the alert system of ARPA Piemonte it is possible to study in an objective manner whether the human forecaster is able to add value with respect to computer-generated forecasts. Every day the meteorology group of the Centro Funzionale of Regione Piemonte produces the HQPF (Human Quantitative Precipitation Forecast) in terms of an areal average and maximum value for each of the 13 warning areas, which have been created according to meteo-hydrological criteria. This allows the decision makers to produce an evaluation of the expected effects by comparing these HQPFs with predefined rainfall thresholds. Another important ingredient in this study is the very dense non-GTS (Global Telecommunication System) network of rain gauges available, which makes possible a high-resolution verification. In this work we compare the performances over the latest three years of QPF derived from the meteorological models COSMO-I7 (the Italian version of the COSMO Model, a mesoscale model developed in the framework of the COSMO Consortium) and IFS (the ECMWF global model) with the HQPF. In this analysis it is possible to introduce the hypothesis test developed by Hamill (1999), in which a confidence interval is calculated with the bootstrap method in order to establish the real difference between the skill scores of two competing forecasts. It is important to underline that the conclusions refer to the analysis of the Piemonte operational alert system, so they cannot be directly taken as universally true. But we think that some of the main lessons that can be derived from this study could be useful for the meteorological community. 
In detail, the main conclusions are the following: - despite the overall improvement at the global scale and the fact that the resolution of the limited-area models has increased considerably over recent years, the QPF produced by the meteorological models involved in this study has not improved enough to allow its direct use: the subjective HQPF continues to offer the best performance for the period +24 h/+48 h (i.e. the warning period in the Piemonte system); - in the forecast process, the step where humans add the largest value with respect to mathematical models is communication: the human characterization and communication of forecast uncertainty to end users cannot be replaced by any computer code; - finally, although there is no novelty in this study, we would like to show that the correct application of appropriate statistical techniques permits a better definition and quantification of the errors and, most importantly, allows correct (unbiased) communication between forecasters and decision makers.
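
    A Hamill (1999)-style bootstrap comparison of two competing forecasts can be sketched as follows. The error series here are hypothetical placeholders, not the actual Piemonte HQPF and model data:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical paired daily absolute QPF errors (mm) for two competing
# forecasts over the same events: a model and a human (HQPF) forecast,
# with the human forecast slightly better by construction.
err_model = rng.gamma(shape=2.0, scale=5.0, size=365)
err_human = np.maximum(err_model - rng.normal(2.0, 3.0, size=365), 0.0)

def bootstrap_ci(paired_diff, n_boot=10000, alpha=0.05):
    """Percentile bootstrap CI for the mean paired difference."""
    n = len(paired_diff)
    idx = rng.integers(0, n, (n_boot, n))          # resample cases with replacement
    means = paired_diff[idx].mean(axis=1)
    return np.quantile(means, [alpha / 2, 1 - alpha / 2])

diff = err_model - err_human          # positive = human forecast has lower error
lo, hi = bootstrap_ci(diff)
significant = lo > 0 or hi < 0        # CI excluding zero -> real skill difference
print(f"95% CI for mean error difference: [{lo:.2f}, {hi:.2f}] mm")
```

    Resampling paired differences (rather than the two error series independently) preserves the day-to-day correlation between the competing forecasts, which is the point of the Hamill test.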

  2. Forecasting Global Point Rainfall using ECMWF's Ensemble Forecasting System

    NASA Astrophysics Data System (ADS)

    Pillosu, Fatima; Hewson, Timothy; Zsoter, Ervin; Baugh, Calum

    2017-04-01

    ECMWF (the European Centre for Medium-Range Weather Forecasts), in collaboration with the EFAS (European Flood Awareness System) and GLOFAS (GLObal Flood Awareness System) teams, has developed a new operational system that post-processes grid-box rainfall forecasts from its ensemble forecasting system to provide global probabilistic point-rainfall predictions. The project attains higher forecast skill by applying an understanding of how different rainfall generation mechanisms lead to different degrees of sub-grid variability in rainfall totals. In turn this approach facilitates identification of cases in which very localized extreme totals are much more likely. This approach also aims to improve the rainfall input required in different hydro-meteorological applications. Flash flood forecasting, in particular in urban areas, is a good example. In flash flood scenarios precipitation is typically characterised by high spatial variability and response times are short. In this case, to move beyond radar-based nowcasting, the classical approach has been to use very high resolution hydro-meteorological models. Of course these models are valuable but they can represent only very limited areas, may not be spatially accurate and may give reasonable results only for limited lead times. On the other hand, our method aims to use a very cost-effective approach to downscale global rainfall forecasts to a point scale. It needs only rainfall totals from standard global reporting stations and forecasts over a relatively short period to train it, and it can give good results even up to day 5. For these reasons we believe that this approach better satisfies user needs around the world. This presentation aims to describe two phases of the project: the first phase, already completed, is the implementation of this new system to provide 6- and 12-hourly point-rainfall accumulation probabilities. 
To do this we use a limited number of physically relevant global model parameters (i.e. convective precipitation ratio, speed of steering winds, CAPE - Convective Available Potential Energy - and solar radiation), alongside the rainfall forecasts themselves, to define the "weather types" that in turn define the expected sub-grid variability. The calibration and computational strategy intrinsic to the system will be illustrated. The quality of the global point rainfall forecasts is also illustrated by analysing recent case studies in which extreme totals and a greatly elevated flash flood risk could be foreseen some days in advance but especially by a longer-term verification that arises out of retrospective global point rainfall forecasting for 2016. The second phase, currently in development, is focussing on the relationships with other relevant geographical aspects, for instance, orography and coastlines. Preliminary results will be presented. These are promising but need further study to fully understand their impact on the spatial distribution of point rainfall totals.

  3. Ensemble sea ice forecast for predicting compressive situations in the Baltic Sea

    NASA Astrophysics Data System (ADS)

    Lehtiranta, Jonni; Lensu, Mikko; Kokkonen, Iiro; Haapala, Jari

    2017-04-01

    Forecasting of sea ice hazards is important for winter shipping in the Baltic Sea. Current numerical models capture the ice thickness distribution and drift well, but compressive situations are often missing from forecast products. Their inclusion is requested by the shipping community: compressing ice can stop ships for days and even damage them, so forecasting compression is vital. However, we have found that compression cannot be predicted well in a deterministic forecast, since it can be a local and quickly changing phenomenon. It is also very sensitive to small changes in the wind speed and direction, the prevailing ice conditions, and the model parameters. Thus, a probabilistic ensemble simulation is needed to produce a meaningful compression forecast. An ensemble model setup was developed in the SafeWIN project for this purpose. It uses the HELMI multicategory ice model, which was amended to run simulations in parallel. The ensemble was built by perturbing the atmospheric forcing and the physical parameters of the ice pack. The model setup will provide probabilistic forecasts of compression in the Baltic sea ice. Additionally, the setup provides insight into the uncertainties related to different model parameters and their impact on the model results. We have completed several hindcast simulations for the Baltic Sea for verification purposes. These results are shown to match compression reports gathered from ships. In addition, an ensemble forecast is in a preoperational testing phase and its first evaluation will be presented in this work.

  4. The predictability of Iowa's hydroclimate through analog forecasts

    NASA Astrophysics Data System (ADS)

    Rowe, Scott Thomas

    Iowa has long been affected by periods characterized by extreme drought and flood. In 2008, Cedar Rapids, Iowa was devastated by a record flood with damages of around $3 billion. Several years later, Iowa was affected by severe drought in 2012, causing upwards of $30 billion in damages and losses across the United States. These climatic regimes can quickly transition from one regime to another, as was observed in the shift from the June 2013 major floods to the late summer 2013 severe drought across eastern Iowa. Though it is not possible to prevent a natural disaster from occurring, we explore how predictable these events are by using forecast models and analogs. Iowa's climate records are analyzed from 1950 to 2012 to determine if there are specific surface and upper-air pressure patterns linked to climate regimes (i.e., cold/hot and dry/wet conditions for a given month). We found that opposing climate regimes in Iowa have reversed anomalies in certain geographical regions of the northern hemisphere. These defined patterns and waves suggested to us that it could be possible to forecast extreme temperature and precipitation periods over Iowa if given a skillful forecast system. We examined the CMC, COLA, and GFDL models within the National Multi-Model Ensemble suite to create analog forecasts based on either surface or upper-air pressure forecasts. The verification results show that some analogs have predictability skill at the 0.5-month lead time exceeding random chance, but our overall confidence in the analog forecasts is not high enough to allow us to issue statewide categorical temperature and precipitation climate forecasts.
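
    The analog-selection step described above can be sketched as a pattern-correlation search over a library of past anomaly fields. All arrays here are synthetic placeholders, not the actual NMME or reanalysis data:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical library of monthly pressure-anomaly patterns
# (63 past years x flattened grid) and one forecast pattern to match.
n_years, n_grid = 63, 500
library = rng.normal(0.0, 1.0, (n_years, n_grid))
# Target pattern built to resemble "year 10" of the library.
target = 0.8 * library[10] + 0.2 * rng.normal(0.0, 1.0, n_grid)

def pattern_corr(a, b):
    """Centered spatial (pattern) correlation between two fields."""
    a = a - a.mean()
    b = b - b.mean()
    return float(a @ b / np.sqrt((a @ a) * (b @ b)))

scores = np.array([pattern_corr(target, past) for past in library])
best = np.argsort(scores)[::-1][:5]          # five closest analog years
print("top analog years:", best)
```

    In an actual analog system the observed temperature and precipitation of the best-matching past months would then form the categorical forecast, and verification against random chance proceeds as in the abstract.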

  5. Evaluation and Applications of the Prediction of Intensity Model Error (PRIME) Model

    NASA Astrophysics Data System (ADS)

    Bhatia, K. T.; Nolan, D. S.; Demaria, M.; Schumacher, A.

    2015-12-01

    Forecasters and end users of tropical cyclone (TC) intensity forecasts would greatly benefit from a reliable expectation of model error to counteract the lack of consistency in TC intensity forecast performance. As a first step towards producing error predictions to accompany each TC intensity forecast, Bhatia and Nolan (2013) studied the relationship between synoptic parameters, TC attributes, and forecast errors. In this study, we build on previous results of Bhatia and Nolan (2013) by testing the ability of the Prediction of Intensity Model Error (PRIME) model to forecast the absolute error and bias of four leading intensity models available for guidance in the Atlantic basin. PRIME forecasts are independently evaluated at each 12-hour interval from 12 to 120 hours during the 2007-2014 Atlantic hurricane seasons. The absolute error and bias predictions of PRIME are compared to their respective climatologies to determine their skill. In addition to these results, we will present the performance of the operational version of PRIME run during the 2015 hurricane season. PRIME verification results show that it can reliably anticipate situations where particular models excel, and therefore could lead to a more informed protocol for hurricane evacuations and storm preparations. These positive conclusions suggest that PRIME forecasts also have the potential to lower the error in the original intensity forecasts of each model. As a result, two techniques are proposed to develop a post-processing procedure for a multimodel ensemble based on PRIME. The first approach is to inverse-weight models using PRIME absolute error predictions (higher predicted absolute error corresponds to lower weights). The second multimodel ensemble applies PRIME bias predictions to each model's intensity forecast and the mean of the corrected models is evaluated. 
The forecasts of both of these experimental ensembles are compared to those of the equal-weight ICON ensemble, which currently provides the most reliable forecasts in the Atlantic basin.
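
    The first proposed ensemble technique, inverse weighting by PRIME's predicted absolute error, can be sketched with hypothetical numbers (these are illustrative values, not actual model output):

```python
import numpy as np

# Hypothetical TC intensity forecasts (kt) from four guidance models,
# plus a PRIME-style predicted absolute error for each.
model_fcst = np.array([95.0, 105.0, 88.0, 100.0])
pred_abs_err = np.array([8.0, 12.0, 20.0, 10.0])    # predicted |error|, kt

# Higher predicted absolute error -> lower weight.
weights = 1.0 / pred_abs_err
weights /= weights.sum()                             # normalize to sum to 1

weighted = float(weights @ model_fcst)               # inverse-error consensus
equal = float(model_fcst.mean())                     # equal-weight baseline
print(f"inverse-error ensemble = {weighted:.1f} kt, equal-weight = {equal:.1f} kt")
```

    The consensus is pulled toward the models with low predicted error; comparing this against the equal-weight mean mirrors the ICON comparison the abstract describes.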

  6. A Framework for Assessing Operational Madden–Julian Oscillation Forecasts: A CLIVAR MJO Working Group Project

    DOE PAGES

    Gottschalck, J.; Wheeler, M.; Weickmann, K.; ...

    2010-09-01

    The U.S. Climate Variability and Predictability (CLIVAR) MJO Working Group (MJOWG) has taken steps to promote the adoption of a uniform diagnostic and set of skill metrics for analyzing and assessing dynamical forecasts of the MJO. Here we describe the framework and initial implementation of the approach using real-time forecast data from multiple operational numerical weather prediction (NWP) centers. The objectives of this activity are to provide a means to i) quantitatively compare skill of MJO forecasts across operational centers, ii) measure gains in forecast skill over time by a given center and the community as a whole, and iii) facilitate the development of a multimodel forecast of the MJO. The MJO diagnostic is based on extensive deliberations among the MJOWG in conjunction with input from a number of operational centers and makes use of the MJO index of Wheeler and Hendon. This forecast activity has been endorsed by the Working Group on Numerical Experimentation (WGNE), the international body that fosters the development of atmospheric models for NWP and climate studies. The Climate Prediction Center (CPC) within the National Centers for Environmental Prediction (NCEP) is hosting the acquisition of the forecast data, application of the MJO diagnostic, and real-time display of the standardized forecasts. The activity has contributed to the production of 1–2-week operational outlooks at NCEP and activities at other centers. Further enhancements of the diagnostic's implementation, including more extensive analysis, comparison, illustration, and verification of the contributions from the participating centers, will increase the usefulness and application of these forecasts and potentially lead to more skillful predictions of the MJO and indirectly extratropical and other weather variability (e.g., tropical cyclones) influenced by the MJO. 
The purpose of this article is to inform the larger scientific and operational forecast communities of the MJOWG forecast effort and invite participation from additional operational centers.

  7. Evaluation of Multi-Model Ensemble System for Seasonal and Monthly Prediction

    NASA Astrophysics Data System (ADS)

    Zhang, Q.; Van den Dool, H. M.

    2013-12-01

    Since August 2011, the real-time seasonal forecasts of the U.S. National Multi-Model Ensemble (NMME) have been made on the 8th of each month by the NCEP Climate Prediction Center (CPC). During the first year, the participating models in the real-time NMME forecast were NCEP/CFSv1&2, GFDL/CM2.2, NCAR/U.Miami/COLA/CCSM3, NASA/GEOS5, and IRI/ECHAM-a & ECHAM-f. The Canadian Meteorological Center models CanCM3 and CanCM4 replaced CFSv1 and IRI's models in the second year. The NMME team at CPC collects three variables (precipitation, 2-meter temperature, and sea surface temperature) from each modeling center on a 1x1 degree global grid, removes systematic errors, forms the grand ensemble mean with equal weight for each model, and constructs a probability forecast with equal weight for each member. The team then provides the NMME forecast to the operational CPC forecaster responsible for the seasonal and monthly outlook each month. Verification of the seasonal and monthly predictions from NMME is conducted by calculating the anomaly correlation (AC) from the 30-year hindcasts (1982-2011) of each individual model and of the NMME ensemble. The motivation of this study is to provide skill benchmarks for future improvements of the NMME seasonal and monthly prediction system. The experimental (Phase I) stage of the project already supplies routine guidance to users of the NMME forecasts.
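The anomaly correlation used for this kind of hindcast verification rewards forecasts whose departures from climatology covary with the observed departures. A minimal sketch of a centered anomaly correlation over a hindcast period follows; it is a generic estimator, not the exact CPC/NMME implementation, and the function name is illustrative:

```python
import numpy as np

def anomaly_correlation(forecasts, observations):
    """Centered anomaly correlation over a hindcast period.

    Anomalies are departures from each series' own hindcast-period mean;
    a generic sketch, not the specific CPC/NMME code.
    """
    f = np.asarray(forecasts, dtype=float)
    o = np.asarray(observations, dtype=float)
    fa = f - f.mean()  # forecast anomalies
    oa = o - o.mean()  # observed anomalies
    return (fa * oa).sum() / np.sqrt((fa ** 2).sum() * (oa ** 2).sum())
```

A perfectly in-phase hindcast yields AC = 1, an exactly out-of-phase one AC = -1.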

  8. Performance assessment of a Bayesian Forecasting System (BFS) for real-time flood forecasting

    NASA Astrophysics Data System (ADS)

    Biondi, D.; De Luca, D. L.

    2013-02-01

    The paper evaluates, for a number of flood events, the performance of a Bayesian Forecasting System (BFS), with the aim of quantifying total uncertainty in real-time flood forecasting. The predictive uncertainty of future streamflow is estimated through the Bayesian integration of two separate processors: the first evaluates the propagation of input uncertainty onto simulated river discharge, while the second computes the hydrological uncertainty of actual river discharge associated with all other possible sources of error. A stochastic model and a distributed rainfall-runoff model were adopted for rainfall and hydrological response simulations, respectively. A case study was carried out for a small basin in the Calabria region (southern Italy). The performance assessment of the BFS was performed with verification tools suited to probabilistic forecasts of continuous variables such as streamflow. Graphical tools and scalar metrics were used to evaluate several attributes of the quality of the entire time-varying predictive distributions: calibration, sharpness, and accuracy, including the continuous ranked probability score (CRPS). Besides the overall system, which incorporates both sources of uncertainty, other hypotheses derived from the BFS properties were examined, corresponding to (i) a perfect hydrological model; (ii) a non-informative rainfall forecast for predicting streamflow; and (iii) a perfect input forecast. The results emphasize the importance of using different diagnostic approaches to perform comprehensive analyses of predictive distributions, to arrive at a multifaceted view of the attributes of the prediction. 
For the case study, the selected criteria revealed the interaction of the different sources of error, in particular the crucial role of the hydrological uncertainty processor when compensating, at the cost of wider forecast intervals, for the unreliable and biased predictive distribution resulting from the Precipitation Uncertainty Processor.
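The CRPS mentioned above generalizes absolute error to a full predictive distribution. A common sample-based estimator for an ensemble forecast of one value is CRPS = E|X - y| - 0.5 E|X - X'|, sketched below; this is a generic estimator, not the specific processor code used in the BFS study:

```python
import numpy as np

def crps_ensemble(members, obs):
    """Sample-based continuous ranked probability score for one
    ensemble forecast and a scalar observation.

    CRPS = E|X - y| - 0.5 * E|X - X'|, with both expectations
    estimated from the ensemble members. Generic sketch only.
    """
    x = np.asarray(members, dtype=float)
    term1 = np.abs(x - obs).mean()                      # E|X - y|
    term2 = 0.5 * np.abs(x[:, None] - x[None, :]).mean()  # 0.5 E|X - X'|
    return term1 - term2
```

For a single-member "ensemble" the score collapses to the absolute error, and it is zero when every member equals the observation.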

  9. Operational Applications of Satellite Snowcover Observations

    NASA Technical Reports Server (NTRS)

    Rango, A. (Editor); Peterson, R. (Editor)

    1980-01-01

    The history of remote sensing of snow cover is reviewed and the following topics are covered: various techniques for interpreting LANDSAT and NOAA satellite data; the status of future systems for continuing snow hydrology applications; the use of snow cover observations in streamflow forecasts by Applications Systems Verification and Transfer participants and selected foreign investigators; and the benefits of using satellite snow cover data in runoff prediction.

  10. A Modeling and Verification Study of Summer Precipitation Systems Using NASA Surface Initialization Datasets

    NASA Technical Reports Server (NTRS)

    Case, Jonathan L.; Kumar, Sujay V.; Srikishen, Jayanthi; Jedlovec, Gary J.

    2010-01-01

    One of the most challenging weather forecast problems in the southeastern U.S. is daily summertime pulse-type convection. During the summer, atmospheric flow and forcing are generally weak in this region; thus, convection typically initiates in response to local forcing along sea/lake breezes, and other discontinuities often related to horizontal gradients in surface heating rates. Numerical simulations of pulse convection usually have low skill, even in local predictions at high resolution, due to the inherent chaotic nature of these precipitation systems. Forecast errors can arise from assumptions within parameterization schemes, model resolution limitations, and uncertainties in both the initial state of the atmosphere and land surface variables such as soil moisture and temperature. For this study, it is hypothesized that high-resolution, consistent representations of surface properties such as soil moisture, soil temperature, and sea surface temperature (SST) are necessary to better simulate the interactions between the surface and atmosphere, and ultimately improve predictions of summertime pulse convection. This paper describes a sensitivity experiment using the Weather Research and Forecasting (WRF) model. Interpolated land and ocean surface fields from a large-scale model are replaced with high-resolution datasets provided by unique NASA assets in an experimental simulation: the Land Information System (LIS) and Moderate Resolution Imaging Spectroradiometer (MODIS) SSTs. The LIS is run in an offline mode for several years at the same grid resolution as the WRF model to provide compatible land surface initial conditions in an equilibrium state. The MODIS SSTs provide detailed analyses of SSTs over the oceans and large lakes compared to current operational products. 
The WRF model runs initialized with the LIS+MODIS datasets reduce the overprediction of rainfall areas; however, skill is almost equally low in both experiments under traditional verification methodologies. Output from object-based verification within NCAR's Model Evaluation Tools (MET) reveals that the WRF runs initialized with LIS+MODIS data consistently generated precipitation objects that better matched observed precipitation objects, especially at higher precipitation intensities. The LIS+MODIS runs produced on average a 4% increase in matched precipitation areas and a simultaneous 4% decrease in unmatched areas during three months of daily simulations.

  11. Validation and Verification of Operational Land Analysis Activities at the Air Force Weather Agency

    NASA Technical Reports Server (NTRS)

    Shaw, Michael; Kumar, Sujay V.; Peters-Lidard, Christa D.; Cetola, Jeffrey

    2012-01-01

    The NASA-developed Land Information System (LIS) is the Air Force Weather Agency's (AFWA) operational Land Data Assimilation System (LDAS), combining real-time precipitation observations and analyses, global forecast model data, vegetation, terrain, and soil parameters with the community Noah land surface model, along with other hydrology module options, to generate profile analyses of global soil moisture, soil temperature, and other important land surface characteristics. The system (1) ingests a range of satellite data products and surface observations to generate the land analysis products, (2) runs globally at 1/4-degree spatial resolution, and (3) generates model analyses at 3-hour intervals. AFWA recognizes the importance of operational benchmarking and uncertainty characterization for land surface modeling and is developing standard methods, software, and metrics to verify and/or validate LIS output products. To facilitate this and other needs for land analysis activities at AFWA, the Model Evaluation Tools (MET) -- a joint product of the National Center for Atmospheric Research Developmental Testbed Center (NCAR DTC), AFWA, and the user community -- and the Land surface Verification Toolkit (LVT), developed at the Goddard Space Flight Center (GSFC), have been adapted to the operational benchmarking needs of AFWA's land characterization activities.

  12. Medium range forecasting of Hurricane Harvey flash flooding using ECMWF and social vulnerability data

    NASA Astrophysics Data System (ADS)

    Pillosu, F. M.; Jurlina, T.; Baugh, C.; Tsonevsky, I.; Hewson, T.; Prates, F.; Pappenberger, F.; Prudhomme, C.

    2017-12-01

    During Hurricane Harvey the greater east Texas area was affected by extensive flash flooding. The localised nature of these floods meant they were too small for conventional large-scale flood forecasting systems to capture. We are testing the use of two real-time forecast products from the European Centre for Medium-Range Weather Forecasts (ECMWF) in combination with local vulnerability information to provide flash flood forecasting tools at the medium range (up to 7 days ahead). The meteorological forecasts are the total precipitation extreme forecast index (EFI), a measure of how the ensemble forecast probability distribution differs from the model-climate distribution for the chosen location, time of year, and forecast lead time; and the shift of tails (SOT), which complements the EFI by quantifying how extreme an event could potentially be. Both products give the likelihood of flash-flood-generating precipitation. For Hurricane Harvey, 3-day EFI and SOT products for the period 26th-29th August 2017 were used, generated from the twice-daily, 18 km, 51-ensemble-member ECMWF Integrated Forecast System. After regridding to 1 km resolution, the forecasts were combined with vulnerable-area data to produce a flash flood hazard risk area. The vulnerability data were floodplains (EU Joint Research Centre), road networks (Texas Department of Transport), and urban areas (Census Bureau geographic database), together reflecting the susceptibility of the landscape to flash floods. The flash flood hazard risk area forecasts were verified using a traditional approach against observed National Weather Service flash flood reports; a total of 153 flash floods were reported in that period. Forecasts performed best for SOT = 5 (hit ratio = 65%, false alarm ratio = 44%) and EFI = 0.7 (hit ratio = 74%, false alarm ratio = 45%) at 72 h lead time. 
By including the vulnerable areas data, our verification results improved by 5-15%, demonstrating the value of vulnerability information within natural hazard forecasts. This research shows that flash flooding from hurricane Harvey was predictable up to 4 days ahead and that filtering the forecasts to vulnerable areas provides a more focused guidance to civil protection agencies planning their emergency response.
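The hit ratio and false alarm ratio quoted above come from a standard 2x2 contingency table of forecast versus observed events. A minimal sketch of both scores (generic definitions, not the study's own code; the function name is illustrative):

```python
def hit_ratio_far(hits, misses, false_alarms):
    """Contingency-table scores for dichotomous (yes/no) event forecasts.

    hit ratio (probability of detection) = hits / (hits + misses)
    false alarm ratio = false_alarms / (hits + false_alarms)
    Generic definitions, not the paper's verification code.
    """
    pod = hits / (hits + misses)
    far = false_alarms / (hits + false_alarms)
    return pod, far
```

Note that the false alarm ratio is conditioned on forecasts, not on observed non-events (which would be the false alarm *rate*).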

  13. A system approach to the long-term forecasting of climate data in the Baikal region

    NASA Astrophysics Data System (ADS)

    Abasov, N.; Berezhnykh, T.

    2003-04-01

    The Angara River, running from Lake Baikal with a cascade of hydropower plants built on it, plays a peculiar role in the economy of the region. In view of the high variability of water inflow into the rivers and lakes (long low-water periods and catastrophic floods), which is due to climatic peculiarities of water resource formation, long-term forecasting is developed and applied to decrease risk at hydropower plants. The methodology and methods of long-term forecasting of natural-climatic processes employ some ideas of the research schools of Academician I.P. Druzhinin and Prof. A.P. Reznikhov and consist of a detailed investigation of cause-effect relations, the identification of physical analogs, and their application in formalized methods of long-term forecasting. The methods are divided into qualitative (the background method; the method of analogs based on solar activity), probabilistic, and approximative methods (analog-similarity relations; a discrete-continuous model). These forecasting methods have been implemented as analytical tools of the information-forecasting software "GIPSAR", which provides some elements of artificial intelligence. Background forecasts of the runoff of the Ob, the Yenisei, and the Angara Rivers in the south of Siberia are based on space-time regularities revealed by taking account of the phase shifts in the occurrence of secular maxima and minima on integral-difference curves of many-year hydrological processes in the objects compared. Solar activity plays an essential role in investigations of global variations of climatic processes. Its consideration in the method of superimposed epochs has led to the conclusion that a low-water period in the actual inflow to Lake Baikal is more probable on the increasing branch of the 11-year solar activity cycle. A high-water period is more probable on the decreasing branch of solar activity, from the 2nd to the 5th year after its maximum. 
The probabilistic method of forecasting (a year in advance) is based on the property of alternation of series of years with increases and decreases in the observed indicators (characteristic indices) of natural processes. Most of the series (98.4-99.6%) are series of one to three years. The forecasting problem is divided into two parts: 1) a qualitative forecast of the probability that the current series will either continue or be replaced by a new series during the next year, based on the frequency characteristics of series of years with increase or decrease of the forecasted sequence; 2) a quantitative estimate of the forecasted value, in the form of a curve of conditional frequencies, made on the basis of intra-sequence interrelations among hydrometeorological elements: by differentiating them with respect to series of years of increase or decrease, by constructing particular curves of conditional frequencies of the runoff for each expected variant of series development, and by subsequently constructing a generalized curve. Approximative learning methods form forecasted trajectories of the studied process indices for a long-term perspective. The method of analog-similarity relations is based on the fact that long periods of observation reveal similarities in the character of variability of indices for some fragments of the sequence x(t) by definite criteria. The idea of the method is to estimate the similarity of such fragments of the sequence, which are called analogs. The method applies multistage optimization of external parameters (e.g., the number of iterations of the sliding averaging needed to decompose the sequence into two components: a smoothed one with isolated periodic oscillations, and a residual, or random, one). The method is applicable to forecast horizons from the current term up to the double solar cycle. 
Using a special integration procedure, it selects the terms with the best results for the given optimization subsample. Several optimal parameter vectors obtained in this way are tested on the examination (verification) subsample. If the procedure is successful, the forecast is then made by integrating the several best solutions. Peculiarities of forecasting extreme processes: the methods of long-term forecasting allow sufficiently reliable forecasts to be made within the interval [x_min + Δ_1, x_max - Δ_2], i.e., in the interval of medium values of the indices. In the intervals close to the extremes, however, the reliability of forecasts is substantially lower. While for medium values the statistics of the 100-year sequence give acceptable results, owing to a sufficiently large number of revealed analogs corresponding to the prognostic samples, for extreme values the situation is quite different, first of all because of the scarcity of statistical data. Decreasing the values of Δ_1 and Δ_2 toward zero (by including them among the optimization parameters of the considered forecasting methods) could be one way to improve the reliability of forecasts. Such an approach has been partially realized in the method of analog-similarity relations, which makes it possible to form a range of possible forecasted trajectories between two variants, from the minimum possible trajectory to the maximum possible one. Reliability of long-term forecasts: both the methodology and the methods considered above have been realized as the information-forecasting system "GIPSAR". 
The system includes tools implementing several methods of forecasting and of analyzing initial and forecasted information, a developed database, a set of tools for the verification of algorithms, additional information on the algorithms for statistical processing of sequences (sliding averaging, integral-difference curves, etc.), aids to organize the input of initial information (in its various forms), and aids to draw up output prognostic documents. Risk management: the normal functioning of the Angara cascade is periodically interrupted by risks of two types in the Baikal, Bratsk, and Ust-Ilimsk reservoirs: long low-water periods and sudden periods of extremely high water levels. For example, low-water periods observed in the reservoirs of the Angara cascade can be classified under four risk categories: 1 - acceptable (negligible reduction of electric power generation by hydropower plants; certain difficulty in meeting environmental and navigation requirements); 2 - significant (substantial reduction of electric power generation by hydropower plants; certain restrictions on water releases for navigation; violation of environmental requirements in some years); 3 - emergency (big losses in electric power generation; limited electricity supply to large consumers; significant restriction of water releases for navigation; threat of exposure of drinking water intakes; violation of environmental requirements for a number of years); 4 - catastrophic (energy crisis; social crisis; exposure of drinking water intakes; termination of navigation; environmental catastrophe). Management of energy systems consists of operative, many-year regulation and perspective planning, and has to take into account the analysis of operative data (water reserves in reservoirs), long-term statistics and relations among natural processes, and also forecasts: short-term (for a day, week, or decade), long-term, and/or super-long-term (from a month to several decades). 
Such natural processes as water inflow to reservoirs and air temperatures during heating periods depend in turn on external factors: prevailing types of atmospheric circulation, the intensity of the 11- and 22-year cycles of solar activity, volcanic activity, interaction between the ocean and atmosphere, etc. Until recently, despite the established scientific schools of long-term forecasting (I.P. Druzhinin, A.P. Reznikhov), energy system management has been based only on specially drawn dispatching schedules and long-term hydrometeorological forecasts, without drawing on prospective forecast indices. Inserting a parallel forecast block (based on the analysis of data on natural processes and special forecasting methods) into the scheme can largely smooth the unfavorable consequences of natural processes for the sustainable development of energy systems, and especially for their safe operation. However, the requirements on the reliability and accuracy of long-term forecasts then increase significantly. The considered approach to long-term forecasting can be used to predict mean winter and summer air temperatures, droughts, and forest fires.
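The analog-similarity method above optimizes, among other parameters, the number of sliding-averaging iterations used to split a series into a smoothed component and a residual. A minimal sketch of that decomposition step, with illustrative window and iteration choices rather than the paper's optimized values:

```python
import numpy as np

def smooth_decompose(x, window, iterations):
    """Split a series into a smoothed component plus residual by
    repeated centered moving averaging, as in iterated sliding
    averaging. Illustrative sketch: 'same'-mode convolution
    implicitly zero-pads the edges, so a real implementation would
    treat the boundaries more carefully.
    """
    x = np.asarray(x, dtype=float)
    kernel = np.ones(window) / window
    s = x.copy()
    for _ in range(iterations):
        s = np.convolve(s, kernel, mode="same")
    residual = x - s
    return s, residual
```

Away from the edges, a constant series is returned unchanged with zero residual, which is the sanity check for any smoother of this kind.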

  14. Skill of a global seasonal ensemble streamflow forecasting system

    NASA Astrophysics Data System (ADS)

    Candogan Yossef, Naze; Winsemius, Hessel; Weerts, Albrecht; van Beek, Rens; Bierkens, Marc

    2013-04-01

    Forecasting of water availability and scarcity is a prerequisite for managing the risks and opportunities caused by the inter-annual variability of streamflow. Reliable seasonal streamflow forecasts are necessary to prepare an appropriate response in disaster relief, management of hydropower reservoirs, water supply, agriculture, and navigation. Seasonal hydrological forecasting on a global scale could be valuable especially for developing regions of the world, where effective hydrological forecasting systems are scarce. In this study, we investigate the forecasting skill of the global seasonal streamflow forecasting system FEWS-World, using the global hydrological model PCR-GLOBWB. FEWS-World has been set up within the European Commission 7th Framework Programme project Global Water Scarcity Information Service (GLOWASIS). Skill is assessed in historical simulation mode as well as in retroactive forecasting mode. The assessment in historical simulation mode used a meteorological forcing based on observations from the Climatic Research Unit of the University of East Anglia and the ERA-40 reanalysis of the European Centre for Medium-Range Weather Forecasts (ECMWF). We assessed the skill of PCR-GLOBWB in reproducing past discharge extremes in 20 large rivers of the world. This preliminary assessment concluded that the prospects for seasonal forecasting with PCR-GLOBWB or comparable models are positive. However, this assessment did not include actual meteorological forecasts, so the meteorological forcing errors were not assessed. Yet, in a forecasting setup, the predictive skill of a hydrological forecasting system is affected by errors due to uncertainty from numerical weather prediction models. For the assessment in retroactive forecasting mode, the model is forced with actual ensemble forecasts from the seasonal forecast archives of ECMWF. 
Skill is assessed at 78 stations on large river basins across the globe, for all the months of the year and for lead times up to 6 months. The forecasted discharges are compared with observed monthly streamflow records using the ensemble verification measures Brier Skill Score (BSS) and Continuous Ranked Probability Score (CRPS). The eventual goal is to transfer FEWS-World to operational forecasting mode, where the system will use operational seasonal forecasts from ECMWF. The results will be disseminated on the internet, and hopefully provide information that is valuable for users in data and model-poor regions of the world.
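The Brier Skill Score used here compares the Brier score of the forecast with that of a reference, typically climatology. A minimal sketch of both quantities for a binary event (generic definitions, not the FEWS-World verification code):

```python
import numpy as np

def brier_score(probs, outcomes):
    """Brier score: mean squared error of probabilistic forecasts of a
    binary event (outcomes coded 0/1). Generic definition."""
    p = np.asarray(probs, dtype=float)
    o = np.asarray(outcomes, dtype=float)
    return ((p - o) ** 2).mean()

def brier_skill_score(probs, outcomes, ref_probs):
    """BSS = 1 - BS / BS_ref; positive values indicate skill relative
    to the reference forecast (often climatology)."""
    return 1.0 - brier_score(probs, outcomes) / brier_score(ref_probs, outcomes)
```

A perfect forecast scores BSS = 1; a forecast no better than the reference scores 0, and worse-than-reference forecasts are negative.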

  15. Extending to seasonal scales the current usage of short range weather forecasts and climate projections for water management in Spain

    NASA Astrophysics Data System (ADS)

    Rodriguez-Camino, Ernesto; Voces, José; Sánchez, Eroteida; Navascues, Beatriz; Pouget, Laurent; Roldan, Tamara; Gómez, Manuel; Cabello, Angels; Comas, Pau; Pastor, Fernando; Concepción García-Gómez, M.°; José Gil, Juan; Gil, Delfina; Galván, Rogelio; Solera, Abel

    2016-04-01

    This presentation first briefly describes the current use of weather forecasts and climate projections delivered by AEMET for water management in Spain. The potential use of seasonal climate predictions for water management, in particular of dams, is then discussed in more depth, using a pilot experience carried out by a multidisciplinary group coordinated by AEMET and the DG for Water of Spain. This initiative is being developed in the framework of the national implementation of the GFCS and the European project EUPORIAS. The main components of this experience include meteorological and hydrological observations and an empirical seasonal forecasting technique that provides an ensemble of water reservoir inflows. These forecasted inflows feed a prediction model for the dam state that has been adapted for this purpose. The full system is being tested retrospectively, over several decades, for selected water reservoirs located in different Spanish river basins. The assessment includes an objective verification of the probabilistic seasonal forecasts using standard metrics, and an evaluation of the potential social and economic benefits, with special attention to drought and flooding conditions. The methodology for incorporating these seasonal predictions into the decision-making process is being developed in close collaboration with the final users participating in this pilot experience.

  16. Global Turbulence Decision Support for Aviation

    NASA Astrophysics Data System (ADS)

    Williams, J.; Sharman, R.; Kessinger, C.; Feltz, W.; Wimmers, A.

    2009-09-01

    Turbulence is widely recognized as the leading cause of injuries to flight attendants and passengers on commercial air carriers, yet legacy decision support products such as SIGMETs and SIGWX charts provide relatively low spatial- and temporal-resolution assessments and forecasts of turbulence, with limited usefulness for strategic planning and tactical turbulence avoidance. A new effort is underway to develop an automated, rapid-update, gridded global turbulence diagnosis and forecast system that addresses upper-level clear-air turbulence, mountain-wave turbulence, and convectively induced turbulence. This NASA-funded effort, modeled on the U.S. Federal Aviation Administration's Graphical Turbulence Guidance (GTG) and GTG Nowcast systems, employs NCEP Global Forecast System (GFS) model output and data from NASA and operational satellites to produce quantitative turbulence nowcasts and forecasts. A convective nowcast element based on GFS forecasts and satellite data provides a basis for diagnosing convective turbulence. An operational prototype "Global GTG" system has been running in real time at the U.S. National Center for Atmospheric Research since the spring of 2009. Initial verification based on data from TRMM, CloudSat, and MODIS (for the convection nowcasting) and on AIREPs and AMDAR data (for turbulence) is presented. This product aims to provide the "single authoritative source" for global turbulence information for the U.S. Next Generation Air Transportation System.

  17. GPS Estimates of Integrated Precipitable Water Aid Weather Forecasters

    NASA Technical Reports Server (NTRS)

    Moore, Angelyn W.; Gutman, Seth I.; Holub, Kirk; Bock, Yehuda; Danielson, David; Laber, Jayme; Small, Ivory

    2013-01-01

    Global Positioning System (GPS) meteorology provides enhanced-density, low-latency (30-min resolution) integrated precipitable water (IPW) estimates to NOAA NWS (National Oceanic and Atmospheric Administration National Weather Service) Weather Forecast Offices (WFOs) to provide improved model and satellite data verification capability and more accurate forecasts of extreme weather such as flooding. An early activity of this project was to increase the number of stations contributing to the NOAA Earth System Research Laboratory (ESRL) GPS meteorology observing network in Southern California by about 27 stations. Following this, the Los Angeles/Oxnard and San Diego WFOs began using the enhanced GPS-based IPW measurements provided by ESRL in the 2012 and 2013 monsoon seasons. Forecasters found GPS IPW to be an effective tool for evaluating model performance and for monitoring monsoon development between weather model runs for improved flood forecasting. GPS stations are multi-purpose, and routine processing for position solutions also yields estimates of tropospheric zenith delays, which can be converted into mm-accuracy PWV (precipitable water vapor) using in situ pressure and temperature measurements; this is the basis for GPS meteorology. NOAA ESRL has implemented this concept with a nationwide distribution of more than 300 "GPSMet" stations providing IPW estimates at sub-hourly resolution, currently used in operational weather models in the U.S.
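The delay-to-PWV conversion mentioned above is commonly done with a dimensionless factor Pi from the Bevis et al. formulation, where Pi depends on the weighted mean temperature of the atmosphere, Tm. A sketch under that assumption, with the commonly quoted constants; this is an illustration of the conversion, not the exact NOAA ESRL processing chain:

```python
def zwd_to_pwv(zwd, tm):
    """Convert zenith wet delay to precipitable water vapor (same
    length units as zwd) via the Bevis-style factor
    Pi = 1e8 / (rho_w * Rv * (k3 / Tm + k2')).

    tm is the weighted mean temperature of the atmosphere in kelvin,
    in practice estimated from surface temperature. Constants are the
    commonly quoted values; an illustrative sketch only.
    """
    rho_w = 1000.0   # density of liquid water, kg m^-3
    rv = 461.5       # specific gas constant of water vapor, J kg^-1 K^-1
    k2_prime = 22.1  # refractivity constant, K hPa^-1
    k3 = 3.739e5     # refractivity constant, K^2 hPa^-1
    pi_factor = 1.0e8 / (rho_w * rv * (k3 / tm + k2_prime))
    return pi_factor * zwd
```

For typical mid-latitude values of Tm, Pi comes out near 0.15, i.e., roughly 6.5 mm of wet delay per 1 mm of PWV.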

  18. Monitoring and seasonal forecasting of meteorological droughts

    NASA Astrophysics Data System (ADS)

    Dutra, Emanuel; Pozzi, Will; Wetterhall, Fredrik; Di Giuseppe, Francesca; Magnusson, Linus; Naumann, Gustavo; Barbosa, Paulo; Vogt, Jurgen; Pappenberger, Florian

    2015-04-01

    Near-real-time drought monitoring can provide decision makers with valuable information for use in several areas, such as water resources management or international aid. Unfortunately, a major constraint on current drought outlooks is the lack of reliable monitoring capability for observed precipitation globally in near-real time. Furthermore, drought monitoring systems require a long record of past observations to provide mean climatological conditions. We address these constraints by developing a novel drought monitoring approach in which monthly mean precipitation is derived from short-range ECMWF probabilistic forecasts and then merged with the long-term precipitation climatology of the Global Precipitation Climatology Centre (GPCC) dataset. Merging the two makes available a real-time global precipitation product from which the Standardized Precipitation Index (SPI) can be estimated and used for global or regional drought monitoring. This approach is robust in that it bypasses problems of latency (lags) in having local rain-gauge measurements available in real time, or lags in satellite precipitation products. Seasonal drought forecasts can also be prepared using the same methodology, based upon two data sources used to provide initial conditions (GPCC and the ECMWF ERA-Interim reanalysis, ERAI) combined with either the current ECMWF seasonal forecast or a climatology based upon ensemble forecasts. Verification of the forecasts as a function of lead time revealed a reduced impact on skill of (i) different initial conditions at long lead times, and (ii) different precipitation forecasts at short lead times. The memory effect of the initial conditions was found to last 1 month of lead time for the SPI-3, 3 to 4 months for the SPI-6, and 5 months for the SPI-12. The results show that dynamical forecasts of precipitation provide added value, with skill similar to or better than climatological forecasts. 
In some cases, particularly for long SPI time scales, it is very difficult to improve on the use of climatological forecasts. However, results presented regionally and globally pinpoint several regions in the world where drought onset forecasting is feasible and skilful.
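The SPI expresses a precipitation accumulation as a standardized departure from its climatology; operationally this is done by fitting a gamma distribution and mapping through the standard normal. As a rough illustration of the idea only, the z-score approximation below standardizes directly against the climatology, which is only defensible when the accumulation distribution is near-normal:

```python
import numpy as np

def spi_zscore(accum, climatology):
    """Crude SPI approximation: standardize one precipitation
    accumulation against its climatological record, assuming normality.

    The operational SPI instead fits a gamma distribution and maps
    through the standard normal quantile function; this z-score
    version is an illustrative stand-in. Values near -1 indicate
    moderate drought, below -2 extreme drought.
    """
    c = np.asarray(climatology, dtype=float)
    return (accum - c.mean()) / c.std(ddof=1)
```

The SPI-3/SPI-6/SPI-12 variants in the text differ only in the accumulation window (3, 6, or 12 months) fed into the standardization.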

  19. Understanding the land-atmospheric interaction in drought forecast from CFSv2 for the 2011 Texas and 2012 Upper Midwest US droughts

    NASA Astrophysics Data System (ADS)

    Zhang, Y.; Roundy, J. K.; Ek, M. B.; Wood, E. F.

    2015-12-01

    Prediction, and thus preparedness, in advance of hydrological extremes such as drought and flood events is crucial for proactively reducing their social and economic impacts. In the summers of 2011 and 2012, Texas and the Upper Midwest, respectively, experienced intense droughts that affected crops and the food market in the US. It is expected that seasonal forecasts with sufficient skill would reduce the negative impacts through planning and preparation. However, the forecast skill of models such as the Climate Forecast System Version 2 (CFSv2) from the National Centers for Environmental Prediction (NCEP) is low over the US, especially during the warm season (Jun-Sep), which restricts their practical use for drought prediction. This study analyzes the processes that lead to premature termination of the 2011 and 2012 US summer droughts in the CFSv2 forecasts, resulting in low forecast skill. Using the North American Land Data Assimilation System version 2 (NLDAS2) and the Climate Forecast System Reanalysis (CFSR) as references, this study investigates the forecast skill of CFSv2 initialized at 00, 06, 12, and 18z from May 15-31 (with leads out to September) for each event in terms of land-atmosphere interaction, through a recently developed Coupling Drought Index (CDI), which is based on the Convective Triggering Potential-Humidity Index-soil moisture (CTP-HI-SM) classification of four climate regimes: wet coupling, dry coupling, transitional, and atmospherically controlled. A recycling model is used to trace the moisture sources in the CFSv2 forecasts of anomalous precipitation, which led to the breakdown of drought conditions and the lack of drought forecasting skill. This is then compared with tracing the moisture sources in CFSR with the same recycling model, used as the verification for the same periods. This helps to identify the parameterization that triggered precipitation in CFSv2 during the 2011 and 2012 summers in the US, and thus has the potential to improve the forecast skill of CFSv2.

  20. Disease Prediction Models and Operational Readiness

    PubMed Central

    Corley, Courtney D.; Pullum, Laura L.; Hartley, David M.; Benedum, Corey; Noonan, Christine; Rabinowitz, Peter M.; Lancaster, Mary J.

    2014-01-01

    The objective of this manuscript is to present a systematic review of biosurveillance models that operate on select agents and can forecast the occurrence of a disease event. We define a disease event to be a biological event, with a focus on the One Health paradigm. These events are characterized by evidence of infection and/or a disease condition. We reviewed models that attempted to predict a disease event, not merely its transmission dynamics, and we considered models involving pathogens of concern as determined by the US National Select Agent Registry (as of June 2011). We searched commercial and government databases and harvested Google search results for eligible models, using terms and phrases provided by public health analysts relating to biosurveillance, remote sensing, risk assessments, spatial epidemiology, and ecological niche modeling. After removal of duplicates and extraneous material, a core collection of 6,524 items was established, and these publications along with their abstracts are presented in a semantic wiki at http://BioCat.pnnl.gov. From this collection, we systematically reviewed 44 papers, and the results are presented in this analysis. We identified 44 models, classified as one or more of the following: event prediction (4), spatial (26), ecological niche (28), diagnostic or clinical (6), spread or response (9), and reviews (3). The model parameters (e.g., etiology, climatic, spatial, cultural) and data sources (e.g., remote sensing, non-governmental organizations, expert opinion, epidemiological) were recorded and reviewed. A component of this review is the identification of the verification and validation (V&V) methods applied to each model, if any V&V method was reported. All models were classified as either having undergone some verification or validation method, or no verification or validation. 
We close by outlining an initial set of operational readiness level guidelines for disease prediction models based upon established Technology Readiness Level definitions. PMID:24647562

  1. A Meso-Climatology Study of the High-Resolution Tower Network Over the Florida Spaceport

    NASA Technical Reports Server (NTRS)

    Case, Jonathan L.; Bauman, William H., III

    2004-01-01

    Forecasters at the US Air Force 45th Weather Squadron (45 WS) use wind and temperature data from the tower network over the Kennedy Space Center (KSC) and Cape Canaveral Air Force Station (CCAFS) to evaluate Launch Commit Criteria and to issue and verify temperature and wind advisories, watches, and warnings for ground operations. The Spaceflight Meteorology Group at the Johnson Space Center in Houston, TX also uses these data when issuing forecasts for shuttle landings at the KSC Shuttle Landing Facility. Systematic biases in these parameters at any of the towers could adversely affect an analysis, forecast, or verification for all of these operations. In addition, substantial geographical variations in temperature and wind speed can occur under specific wind directions. Therefore, the Applied Meteorology Unit (AMU), operated by ENSCO Inc., was tasked to develop a monthly and hourly climatology of temperatures and winds from the tower network, and identify the geographical variation, tower biases, and the magnitude of those biases. This paper presents a sub-set of results from a nine-year climatology of the KSC/CCAFS tower network, highlighting the geographical variations based on location, month, times of day, and specific wind direction regime. Section 2 provides a description of the tower mesonetwork and instrumentation characteristics. Section 3 presents the methodology used to construct the tower climatology including QC methods and data processing. The results of the tower climatology are presented in Section 4 and Section 5 summarizes the paper.

  2. Adapting CALIPSO Climate Measurements for Near Real Time Analyses and Forecasting

    NASA Technical Reports Server (NTRS)

    Vaughan, Mark A.; Trepte, Charles R.; Winker, David M.; Avery, Melody A.; Campbell, James; Hoff, Ray; Young, Stuart; Getzewich, Brian J.; Tackett, Jason L.; Kar, Jayanta

    2011-01-01

    The Cloud-Aerosol Lidar and Infrared Pathfinder Satellite Observations (CALIPSO) mission was originally conceived and designed as a climate measurements mission, with considerable latency between data acquisition and the release of the level 1 and level 2 data products. However, the unique nature of the CALIPSO lidar backscatter profiles quickly led to the qualitative use of CALIPSO's near real time (i.e., "expedited") lidar data imagery in several different forecasting applications. To enable quantitative use of these near real time analyses, the CALIPSO project recently expanded its expedited data catalog to include all of the standard level 1 and level 2 lidar data products. Also included is a new cloud-cleared level 1.5 profile product developed for use by operational forecast centers for verification of aerosol predictions. This paper describes the architecture and content of the CALIPSO expedited data products. The fidelity and accuracy of the expedited products are assessed via comparisons to the standard CALIPSO data products.

  3. Effect of pellet-cladding interaction (PCI) and degradation mechanisms on spent nuclear fuel rod mechanical performance during transportation

    NASA Astrophysics Data System (ADS)

    Peterson, Brittany Ann

    Winter storms can affect millions of people, with impacts such as disruptions to transportation, hazards to human health, reductions in retail sales, and structural damage. Blizzard forecasts for Alberta Clippers can be a particular challenge in the Northern Plains, as these systems typically depart from the Canadian Rockies, intensify, and impact the Northern Plains all within 24 hours. The purpose of this study is to determine whether probabilistic forecasts derived from a local physics-based ensemble can improve specific aspects of winter storm forecasts for three Alberta Clipper cases. Verification is performed on the ensemble members and the ensemble mean, with a focus on quantifying uncertainty in the storm track, two-meter winds, and precipitation using the MERRA and NOHRSC SNODAS datasets. This study finds that additional improvements are needed before proceeding with operational use of the ensemble blizzard products, but the use of a proxy for blizzard conditions yields promising results.

  4. The Super Tuesday Outbreak: Forecast Sensitivities to Single-Moment Microphysics Schemes

    NASA Technical Reports Server (NTRS)

    Molthan, Andrew L.; Case, Jonathan L.; Dembek, Scott R.; Jedlovec, Gary J.; Lapenta, William M.

    2008-01-01

    Forecast precipitation and radar characteristics are used by operational centers to guide the issuance of advisory products. As operational numerical weather prediction is performed at increasingly finer spatial resolution, convective precipitation traditionally represented by sub-grid scale parameterization schemes is now being determined explicitly through single- or multi-moment bulk water microphysics routines. Gains in forecasting skill are expected through improved simulation of clouds and their microphysical processes. High resolution model grids and advanced parameterizations are now available through steady increases in computer resources. As with any parameterization, their reliability must be measured through performance metrics, with errors noted and targeted for improvement. Furthermore, the use of these schemes within an operational framework requires an understanding of limitations and an estimate of biases so that forecasters and model development teams can be aware of potential errors. The National Severe Storms Laboratory (NSSL) Spring Experiments have produced daily, high resolution forecasts used to evaluate forecast skill among an ensemble with varied physical parameterizations and data assimilation techniques. In this research, high resolution forecasts of the 5-6 February 2008 Super Tuesday Outbreak are replicated using the NSSL configuration in order to evaluate two aspects of simulated convection on a large domain: the sensitivity of quantitative precipitation forecasts to assumptions within a single-moment bulk water microphysics scheme, and whether these schemes accurately depict the reflectivity characteristics of well-simulated, organized, cold-frontal convection. As radar returns are sensitive to the amount of hydrometeor mass and the distribution of mass among variably sized targets, radar comparisons may guide potential improvements to a single-moment scheme. 
In addition, object-based verification metrics are evaluated for their utility in gauging model performance and QPF variability.

  5. Similarity-based multi-model ensemble approach for 1-15-day advance prediction of monsoon rainfall over India

    NASA Astrophysics Data System (ADS)

    Jaiswal, Neeru; Kishtawal, C. M.; Bhomia, Swati

    2018-04-01

    The southwest (SW) monsoon season (June, July, August and September) is the major rainfall period over the Indian region. The present study focuses on the development of a new multi-model ensemble approach based on a similarity criterion (SMME) for the prediction of SW monsoon rainfall in the extended range. This approach is based on the assumption that training on similar conditions may provide better forecasts than the sequential training used in conventional MME approaches. In this approach, the training dataset is selected by matching the present-day conditions to an archived dataset; the days with the most similar conditions are identified and used for training the model, and the coefficients thus generated are used for the rainfall prediction. The precipitation forecasts from four general circulation models (GCMs), viz. the European Centre for Medium-Range Weather Forecasts (ECMWF), the United Kingdom Meteorological Office (UKMO), the National Centers for Environmental Prediction (NCEP) and the China Meteorological Administration (CMA), have been used for developing the SMME forecasts. The forecasts for days 1-5, 6-10 and 11-15 were generated using the newly developed approach for each pentad of June-September during the years 2008-2013, and the skill of the model was analysed using verification scores, viz. the equitable threat score (ETS), mean absolute error (MAE), Pearson's correlation coefficient and the Nash-Sutcliffe model efficiency index. Statistical analysis of the SMME forecasts shows superior forecast skill compared to the conventional MME and the individual models for all lead ranges, viz. 1-5, 6-10 and 11-15 days.
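
    The analogue-training step described in this abstract can be sketched roughly as follows. This is a minimal illustration, not the authors' code: the Euclidean-distance similarity measure, the fixed analogue count `k`, and the least-squares combination weights are all assumptions.

    ```python
    import numpy as np

    def smme_forecast(todays_fcsts, archive_fcsts, archive_obs, k=20):
        """Similarity-based multi-model ensemble (illustrative sketch).

        todays_fcsts : (n_models,) forecasts from each GCM for today
        archive_fcsts: (n_days, n_models) archived forecasts from the same GCMs
        archive_obs  : (n_days,) verifying observations for those days
        k            : number of most-similar past days used for training
        """
        # Rank archived days by similarity (here, Euclidean distance) to
        # today's multi-model forecast vector; keep the k closest analogues.
        dist = np.linalg.norm(archive_fcsts - todays_fcsts, axis=1)
        idx = np.argsort(dist)[:k]

        # Fit least-squares combination coefficients on the analogue days
        # only, then apply them to today's member forecasts.
        X = archive_fcsts[idx]
        y = archive_obs[idx]
        w, *_ = np.linalg.lstsq(X, y, rcond=None)
        return float(todays_fcsts @ w)
    ```

    In a conventional MME the regression would instead be trained on the most recent sequential window, regardless of how similar those days are to the current situation.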

  6. Tower Mesonetwork Climatology and Interactive Display Tool

    NASA Technical Reports Server (NTRS)

    Case, Jonathan L.; Bauman, William H., III

    2004-01-01

    Forecasters at the 45th Weather Squadron and Spaceflight Meteorology Group use data from the tower network over the Kennedy Space Center (KSC) and Cape Canaveral Air Force Station (CCAFS) to evaluate Launch Commit Criteria, and issue and verify forecasts for ground operations. Systematic biases in these parameters could adversely affect an analysis, forecast, or verification. Also, substantial geographical variations in temperature and wind speed can occur under specific wind directions. To address these concerns, the Applied Meteorology Unit (AMU) developed a climatology of temperatures and winds from the tower network, and identified the geographical variation and significant tower biases. The mesoclimate is largely driven by the complex land-water interfaces across KSC/CCAFS. Towers in close proximity to water typically had much warmer nocturnal temperatures and higher wind speeds throughout the year. The strongest nocturnal wind speeds occurred from October to March, whereas the strongest mean daytime wind speeds occurred from February to May. The results of this project can be viewed by forecasters through an interactive graphical user interface developed by the AMU. The web-based interface includes graphical and map displays of mean, standard deviation, bias, and data availability for any combination of towers, variables, months, hours, and wind directions.

  7. Relative effects of statistical preprocessing and postprocessing on a regional hydrological ensemble prediction system

    NASA Astrophysics Data System (ADS)

    Sharma, Sanjib; Siddique, Ridwan; Reed, Seann; Ahnert, Peter; Mendoza, Pablo; Mejia, Alfonso

    2018-03-01

    The relative roles of statistical weather preprocessing and streamflow postprocessing in hydrological ensemble forecasting at short- to medium-range forecast lead times (days 1-7) are investigated. For this purpose, a regional hydrologic ensemble prediction system (RHEPS) is developed and implemented. The RHEPS is comprised of the following components: (i) hydrometeorological observations (multisensor precipitation estimates, gridded surface temperature, and gauged streamflow); (ii) weather ensemble forecasts (precipitation and near-surface temperature) from the National Centers for Environmental Prediction 11-member Global Ensemble Forecast System Reforecast version 2 (GEFSRv2); (iii) NOAA's Hydrology Laboratory-Research Distributed Hydrologic Model (HL-RDHM); (iv) heteroscedastic censored logistic regression (HCLR) as the statistical preprocessor; (v) two statistical postprocessors, an autoregressive model with a single exogenous variable (ARX(1,1)) and quantile regression (QR); and (vi) a comprehensive verification strategy. To implement the RHEPS, 1-7 day weather forecasts from GEFSRv2 are used to force HL-RDHM and generate raw ensemble streamflow forecasts. Forecasting experiments are conducted in four nested basins in the US Middle Atlantic region, ranging in size from 381 to 12 362 km2. Results show that the HCLR preprocessed ensemble precipitation forecasts have greater skill than the raw forecasts. These improvements are more noticeable in the warm season at the longer lead times (> 3 days). Both postprocessors, ARX(1,1) and QR, show gains in skill relative to the raw ensemble streamflow forecasts, particularly in the cool season, but QR outperforms ARX(1,1). The scenarios that implement preprocessing and postprocessing separately tend to perform similarly, although the postprocessing-alone scenario is often more effective. The scenario involving both preprocessing and postprocessing consistently outperforms the other scenarios. 
In some cases, however, the differences between this scenario and the scenario with postprocessing alone are not as significant. We conclude that implementing both preprocessing and postprocessing ensures the most skill improvements, but postprocessing alone can often be a competitive alternative.
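
    As an aside on the postprocessing step, quantile regression postprocessors of the kind referenced above are fit by minimizing the pinball (quantile) loss, which can also serve as a verification score for quantile forecasts. A minimal sketch, with an illustrative function name and synthetic data rather than anything from the RHEPS:

    ```python
    import numpy as np

    def pinball_loss(obs, pred, tau):
        """Mean pinball (quantile) loss at quantile level tau.

        Under-prediction of the tau-quantile is penalized by tau, and
        over-prediction by (1 - tau); the loss is minimized, over
        constant predictions, by the empirical tau-quantile of obs.
        """
        diff = np.asarray(obs, dtype=float) - pred
        return float(np.mean(np.maximum(tau * diff, (tau - 1) * diff)))
    ```

    Fitting a QR postprocessor amounts to choosing regression coefficients that minimize this loss over the training streamflow pairs, one fit per quantile level.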

  8. The Good, the Bad, and the Ugly: Numerical Prediction for Hurricane Juan (2003)

    NASA Astrophysics Data System (ADS)

    Gyakum, J.; McTaggart-Cowan, R.

    2004-05-01

    The range of accuracy of the numerical weather prediction (NWP) guidance for the landfall of Hurricane Juan (2003), from nearly perfect to nearly useless, motivates a study of the NWP forecast errors on 28-29 September 2003 in the eastern North Atlantic. Although the forecasts issued over the period were of very high quality, this is primarily because of the diligence of the forecasters, and not related to the reliability of the numerical predictions provided to them by the North American operational centers and the research community. A bifurcation in the forecast fields from various centers and institutes occurred beginning with the 0000 UTC run of 28 September, and continuing until landfall just after 0000 UTC on 29 September. The GFS (NCEP), Eta (NCEP), GEM (Canadian Meteorological Centre; CMC), and MC2 (McGill) forecast models all showed an extremely weak (minimum SLP above 1000 hPa) remnant vortex moving north-northwestward into the Gulf of Maine and merging with a diabatically-developed surface low offshore. The GFS uses a vortex-relocation scheme, the Eta a vortex bogus, and the GEM and MC2 are run on CMC analyses that contain no enhanced vortex. The UK Met Office operational, the GFDL, and the NOGAPS (US Navy) forecast models all ran a small-scale hurricane-like vortex directly into Nova Scotia and verified very well for this case. The UKMO model uses synthetic observations to enhance structures in poorly-forecasted areas during the analysis cycle and both the GFDL and NOGAPS model use advanced idealized vortex bogusing in their initial conditions. The quality of the McGill MC2 forecast is found to be significantly enhanced using a bogusing technique similar to that used in the initialization of the successful forecast models. 
A verification of the improved forecast is presented along with a discussion of the need for operational quality control of the background fields in the analysis cycle and for proper representation of strong, small-scale tropical vortices.

  9. A Real-time 1/16° Global Ocean Nowcast/Forecast System

    NASA Astrophysics Data System (ADS)

    Shriver, J. F.; Rhodes, R. C.; Hurlburt, H. E.; Wallcraft, A. J.; Metzger, E. J.; Smedstad, O. M.; Kara, A. B.

    2001-05-01

    A 1/16° eddy-resolving global ocean prediction system that uses the NRL Layered Ocean Model (NLOM) has been transitioned to the Naval Oceanographic Office (NAVO), Stennis Space Center, MS. The system gives a real time view of the ocean down to the 50-100 mile scale of ocean eddies and the meandering of ocean currents and fronts, a view with unprecedented resolution and clarity, and demonstrated forecast skill for a month or more for many ocean features. It has been running in real time at NAVO since 19 Oct 2000 with assimilation of real-time altimeter sea surface height (SSH) data (currently ERS-2, GFO and TOPEX/POSEIDON) and sea surface temperature (SST). The model is updated daily and 4-day forecasts are made daily. 30-day forecasts are made once a week. Nowcasts and forecasts using this model are viewable on the web, including SSH, SST and 30-day forecast verification statistics for many zoom regions. The NRL web address is http://www7320.nrlssc.navy.mil/global_nlom/index.html. The NAVO web address is: http://www.navo.navy.mil. Click on "Operational Products", then "Product Search Form", then "Product Type View", then select "Model Navy Layered Ocean Model" and a region and click on "Submit Query". This system is used at NAVO for ocean front and eddy analyses and predictions and to provide accurate sea surface height for use in computing synthetic temperature and salinity profiles, among other applications.

  10. The SPoRT-WRF: Evaluating the Impact of NASA Datasets on Convective Forecasts

    NASA Technical Reports Server (NTRS)

    Zavodsky, Bradley; Case, Jonathan; Kozlowski, Danielle; Molthan, Andrew

    2012-01-01

    The Short-term Prediction Research and Transition Center (SPoRT) is a collaborative partnership between NASA and operational forecasting entities, including a number of National Weather Service offices. SPoRT transitions real-time NASA products and capabilities to its partners to address specific operational forecast challenges. One challenge that forecasters face is applying convection-allowing numerical models to predict mesoscale convective weather. In order to address this specific forecast challenge, SPoRT produces real-time mesoscale model forecasts using the Weather Research and Forecasting (WRF) model that includes unique NASA products and capabilities. Currently, the SPoRT configuration of the WRF model (SPoRT-WRF) incorporates the 4-km Land Information System (LIS) land surface data, 1-km SPoRT sea surface temperature analysis and 1-km Moderate resolution Imaging Spectroradiometer (MODIS) greenness vegetation fraction (GVF) analysis, and retrieved thermodynamic profiles from the Atmospheric Infrared Sounder (AIRS). The LIS, SST, and GVF data are all integrated into the SPoRT-WRF through adjustments to the initial and boundary conditions, and the AIRS data are assimilated into a 9-hour SPoRT-WRF forecast each day at 0900 UTC. This study dissects the overall impact of the NASA datasets and the individual surface and atmospheric component datasets on daily mesoscale forecasts. A case study covering the super tornado outbreak across the Central and Southeastern United States during 25-27 April 2011 is examined. Three different forecasts are analyzed including the SPoRT-WRF (NASA surface and atmospheric data), the SPoRT-WRF without AIRS (NASA surface data only), and the operational National Severe Storms Laboratory (NSSL) WRF (control with no NASA data). The forecasts are compared qualitatively by examining simulated versus observed radar reflectivity. 
Differences between the simulated reflectivity are further investigated using convective parameters along with model soundings to determine the impacts of the various NASA datasets. Additionally, quantitative evaluation of select meteorological parameters is performed using the Meteorological Evaluation Tools model verification package to compare forecasts to in situ surface and upper air observations.

  11. A study comparison of two system model performance in estimated lifted index over Indonesia.

    NASA Astrophysics Data System (ADS)

    lestari, Juliana tri; Wandala, Agie

    2018-05-01

    The lifted index (LI) is one of the atmospheric stability indices used for thunderstorm forecasting, and numerical weather prediction (NWP) models are essential for accurate weather forecasts today. This study compares two NWP models, the Weather Research and Forecasting (WRF) model and the Global Forecast System (GFS) model, in estimating LI at 20 locations over Indonesia, and verifies the results against observations. A Taylor diagram was used to compare the skill of the models by showing the standard deviation, correlation coefficient, and root mean square error (RMSE). The study uses data at 00.00 UTC and 12.00 UTC from mid-March to mid-April 2017. From the sample of LI distributions, both models tend to overestimate LI values in almost all regions of Indonesia, while the WRF model captures the observed LI distribution pattern better than the GFS model does. The verification results show that both the WRF and GFS models have only a weak relationship with observations, except at Eltari meteorological station, where the correlation coefficient reaches almost 0.6 with a low RMSE. Overall, the WRF model performs better than the GFS model. This study suggests that WRF-estimated LI can perform well for thunderstorm forecasting over Indonesia in the future; however, the insufficient relationship between model output and observations at certain locations needs further investigation.
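
    The three statistics displayed on a Taylor diagram can be computed as below (an illustrative sketch; `taylor_stats` is a hypothetical helper, not code from this study). The centered RMS error, the two standard deviations, and the correlation satisfy a law-of-cosines identity, which is what allows all of them to be read off a single point on the diagram.

    ```python
    import numpy as np

    def taylor_stats(forecast, observed):
        """Statistics underlying a Taylor diagram.

        Returns (sf, so, r, crmse): forecast and observed standard
        deviations, Pearson correlation, and centered RMS error, which
        satisfy crmse^2 = sf^2 + so^2 - 2*sf*so*r.
        """
        f = np.asarray(forecast, dtype=float)
        o = np.asarray(observed, dtype=float)
        sf, so = f.std(), o.std()
        r = np.corrcoef(f, o)[0, 1]
        # Centered RMSE: remove each series' mean before differencing,
        # so overall bias does not enter the diagram.
        crmse = np.sqrt(np.mean(((f - f.mean()) - (o - o.mean())) ** 2))
        return sf, so, r, crmse
    ```

    A model point close to the observation point on the diagram thus has similar variability (sf near so), high correlation, and small centered RMSE simultaneously.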

  12. Multiphysics superensemble forecast applied to Mediterranean heavy precipitation situations

    NASA Astrophysics Data System (ADS)

    Vich, M.; Romero, R.

    2010-11-01

    The high-impact precipitation events that regularly affect the western Mediterranean coastal regions are still difficult to predict with the current prediction systems. Bearing this in mind, this paper focuses on the superensemble technique applied to the precipitation field. Encouraged by the skill shown by a previous multiphysics ensemble prediction system applied to western Mediterranean precipitation events, the superensemble is fed with this ensemble. The training phase of the superensemble contributes to the actual forecast with weights obtained by comparing the past performance of the ensemble members and the corresponding observed states. The non-hydrostatic MM5 mesoscale model is used to run the multiphysics ensemble. Simulations are performed with a 22.5 km resolution domain (Domain 1 in http://mm5forecasts.uib.es) nested in the ECMWF forecast fields. The period between September and December 2001 is used to train the superensemble, and a collection of 19 MEDEX cyclones is used to test it. The verification procedure involves testing the superensemble performance and comparing it with that of the poor man's and bias-corrected ensemble means and the multiphysics EPS control member. The results emphasize the need for a well-behaved training phase to obtain good results with the superensemble technique. A strategy to obtain this improved training phase is already outlined.
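
    A superensemble of the sort described above derives its weights during the training phase by regressing observed anomalies on member-forecast anomalies, then applies those fixed weights at forecast time. The sketch below is a simplified illustration; the function names and the plain least-squares fit are assumptions, not the authors' implementation.

    ```python
    import numpy as np

    def train_superensemble(train_fcsts, train_obs):
        """Fit superensemble weights on a training period (sketch).

        train_fcsts: (n_days, n_members) training-phase member forecasts
        train_obs  : (n_days,) verifying observations
        Returns the anomaly weights plus the means needed at forecast time.
        """
        f_mean = train_fcsts.mean(axis=0)
        o_mean = train_obs.mean()
        # Regress observed anomalies on member forecast anomalies.
        A = train_fcsts - f_mean
        w, *_ = np.linalg.lstsq(A, train_obs - o_mean, rcond=None)
        return w, f_mean, o_mean

    def superensemble_forecast(fcsts, w, f_mean, o_mean):
        # Forecast = training-period mean + weighted sum of member anomalies.
        return float(o_mean + (fcsts - f_mean) @ w)
    ```

    Because the weights are fixed by the training period, a training phase that is unrepresentative of the test cases degrades the forecast, which is consistent with the paper's emphasis on a well-behaved training phase.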

  13. Toward Better Intraseasonal and Seasonal Prediction: Verification and Evaluation of the NOGAPS Model Forecasts

    DTIC Science & Technology

    2013-09-30

    Circulation (HC) in terms of the meridional streamfunction. The interannual variability of the Atlantic HC in boreal summer was examined using the EOF...large-scale circulations in the NAVGEM model and the source of predictability for the seasonal variation of the Atlantic TCs. We have been working...EOF analysis of Meridional Circulation (JAS). (a) The leading mode (M1); (b) variance explained by the first 10 modes. 9

  14. Evaluation of the 29-km Eta Model. Part I: Objective Verification at Three Selected Stations

    NASA Technical Reports Server (NTRS)

    Manobianco, John; Nutter, Paul

    1998-01-01

    A subjective evaluation of the National Centers for Environmental Prediction 29-km (meso-) eta model during the 1996 warm (May-August) and cool (October-January) seasons is described. The overall evaluation assessed the utility of the model for operational weather forecasting by the U.S. Air Force 45th Weather Squadron, National Weather Service (NWS) Spaceflight Meteorology Group (SMG) and NWS Office in Melbourne, FL.

  15. Individual versus superensemble forecasts of seasonal influenza outbreaks in the United States.

    PubMed

    Yamana, Teresa K; Kandula, Sasikiran; Shaman, Jeffrey

    2017-11-01

    Recent research has produced a number of methods for forecasting seasonal influenza outbreaks. However, differences among the predicted outcomes of competing forecast methods can limit their use in decision-making. Here, we present a method for reconciling these differences using Bayesian model averaging. We generated retrospective forecasts of peak timing, peak incidence, and total incidence for seasonal influenza outbreaks in 48 states and 95 cities using 21 distinct forecast methods, and combined these individual forecasts to create weighted-average superensemble forecasts. We compared the relative performance of these individual and superensemble forecast methods by geographic location, timing of forecast, and influenza season. We find that, overall, the superensemble forecasts are more accurate than any individual forecast method and less prone to producing a poor forecast. Furthermore, we find that these advantages increase when the superensemble weights are stratified according to the characteristics of the forecast or geographic location. These findings indicate that different competing influenza prediction systems can be combined into a single more accurate forecast product for operational delivery in real time.
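
    One simple way to form such Bayesian model averaging weights is to make each method's weight proportional to its likelihood over a set of retrospective forecasts. The sketch below is a deliberately simplified illustration: it assumes a common Gaussian error scale `sigma` for all methods, whereas a full BMA treatment also estimates per-method variances and can stratify the weights, as the paper describes.

    ```python
    import numpy as np

    def bma_weights(train_fcsts, train_obs, sigma=1.0):
        """Simplified Bayesian-model-averaging weights (sketch).

        train_fcsts: (n_days, n_methods) retrospective forecasts
        train_obs  : (n_days,) verifying observations
        Each method's weight is proportional to its Gaussian likelihood
        of the observations, normalized to sum to one.
        """
        err2 = ((train_fcsts - train_obs[:, None]) ** 2).sum(axis=0)
        loglik = -0.5 * err2 / sigma**2
        loglik -= loglik.max()          # stabilize the exponentials
        w = np.exp(loglik)
        return w / w.sum()

    def superensemble_forecast(fcsts, w):
        # Weighted average of the individual method forecasts.
        return float(fcsts @ w)
    ```

    Methods that verified well in the past therefore dominate the weighted average, while a consistently poor method contributes almost nothing, which is why the superensemble is less prone to producing a bad forecast than any single method.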

  16. Individual versus superensemble forecasts of seasonal influenza outbreaks in the United States

    PubMed Central

    Kandula, Sasikiran; Shaman, Jeffrey

    2017-01-01

    Recent research has produced a number of methods for forecasting seasonal influenza outbreaks. However, differences among the predicted outcomes of competing forecast methods can limit their use in decision-making. Here, we present a method for reconciling these differences using Bayesian model averaging. We generated retrospective forecasts of peak timing, peak incidence, and total incidence for seasonal influenza outbreaks in 48 states and 95 cities using 21 distinct forecast methods, and combined these individual forecasts to create weighted-average superensemble forecasts. We compared the relative performance of these individual and superensemble forecast methods by geographic location, timing of forecast, and influenza season. We find that, overall, the superensemble forecasts are more accurate than any individual forecast method and less prone to producing a poor forecast. Furthermore, we find that these advantages increase when the superensemble weights are stratified according to the characteristics of the forecast or geographic location. These findings indicate that different competing influenza prediction systems can be combined into a single more accurate forecast product for operational delivery in real time. PMID:29107987

  17. Monthly and seasonally verification of precipitation in Poland

    NASA Astrophysics Data System (ADS)

    Starosta, K.; Linkowska, J.

    2009-04-01

    The national meteorological service of Poland, the Institute of Meteorology and Water Management (IMWM), joined COSMO (the Consortium for Small-Scale Modelling) in July 2004. In Poland, the COSMO_PL model version 3.5 ran until June 2007; since July 2007, model version 4.0 has been running. The model runs in an operational mode at 14-km grid spacing, twice a day (00 UTC, 12 UTC). For scientific research, a model with 7-km grid spacing is also run. Monthly and seasonal verification of the 24-hour (06 UTC - 06 UTC) accumulated precipitation is presented in this paper. The COSMO_LM precipitation field was verified against the rain gauge network (308 points). The verification was performed for every month and all seasons from December 2007 to December 2008, for three forecast days at selected thresholds: 0.5, 1, 2.5, 5, 10, 20, 25 and 30 mm. The following indices from the contingency table were calculated: FBI (frequency bias), POD (probability of detection), PON (probability of detection of a non-event), FAR (false alarm ratio), TSS (true skill statistic), HSS (Heidke skill score) and ETS (equitable threat score). Percentile ranks and the ROC (relative operating characteristic) are also presented. The ROC is a graph of the hit rate (Y-axis) against the false alarm rate (X-axis) for different decision thresholds.
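
    All of the contingency-table indices listed in this record derive from the four cells of a 2x2 table. A compact sketch using the standard definitions (the a/b/c/d cell labels are the conventional ones; the helper name is illustrative):

    ```python
    def contingency_scores(hits, misses, false_alarms, correct_negs):
        """Verification indices from a 2x2 contingency table.

        hits (a): event forecast and observed; false_alarms (b): forecast
        but not observed; misses (c): observed but not forecast;
        correct_negs (d): neither forecast nor observed.
        """
        a, b, c, d = hits, false_alarms, misses, correct_negs
        n = a + b + c + d
        fbi = (a + b) / (a + c)                    # frequency bias
        pod = a / (a + c)                          # probability of detection
        far = b / (a + b)                          # false alarm ratio
        pon = d / (b + d)                          # detection of non-events
        a_rand = (a + b) * (a + c) / n             # hits expected by chance
        ets = (a - a_rand) / (a + b + c - a_rand)  # equitable threat score
        hss = 2 * (a * d - b * c) / ((a + c) * (c + d) + (a + b) * (b + d))
        tss = pod - b / (b + d)                    # true skill statistic
        return dict(FBI=fbi, POD=pod, FAR=far, PON=pon,
                    ETS=ets, HSS=hss, TSS=tss)
    ```

    One such table is tallied per threshold (0.5 mm, 1 mm, ...) and per forecast day, and the indices are then aggregated monthly or seasonally.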

  18. Monthly and seasonally verification of precipitation in Poland

    NASA Astrophysics Data System (ADS)

    Starosta, K.; Linkowska, J.

    2009-04-01

    The national meteorological service of Poland, the Institute of Meteorology and Water Management (IMWM), joined COSMO (the Consortium for Small-Scale Modelling) in July 2004. In Poland, the COSMO_PL model version 3.5 ran until June 2007; since July 2007, model version 4.0 has been running. The model runs in an operational mode at 14-km grid spacing, twice a day (00 UTC, 12 UTC). For scientific research, a model with 7-km grid spacing is also run. Monthly and seasonal verification of the 24-hour (06 UTC - 06 UTC) accumulated precipitation is presented in this paper. The COSMO_LM precipitation field was verified against the rain gauge network (308 points). The verification was performed for every month and all seasons from December 2007 to December 2008, for three forecast days at selected thresholds: 0.5, 1, 2.5, 5, 10, 20, 25 and 30 mm. The following indices from the contingency table were calculated: FBI (frequency bias), POD (probability of detection), PON (probability of detection of a non-event), FAR (false alarm ratio), TSS (true skill statistic), HSS (Heidke skill score) and ETS (equitable threat score). Percentile ranks and the ROC (relative operating characteristic) are also presented. The ROC is a graph of the hit rate (Y-axis) against the false alarm rate (X-axis) for different decision thresholds.

  19. How accurate are the weather forecasts for Bierun (southern Poland)?

    NASA Astrophysics Data System (ADS)

    Gawor, J.

    2012-04-01

    Weather forecast accuracy has increased in recent times mainly thanks to significant development of numerical weather prediction models. Despite the improvements, the forecasts should be verified to control their quality. The evaluation of forecast accuracy can also be an interesting learning activity for students: it joins natural curiosity about everyday weather with scientific process skills such as problem solving, database technologies, graph construction and graphical analysis. The examination of the weather forecasts has been undertaken by a group of 14-year-old students from Bierun (southern Poland), who participate in the GLOBE program to develop inquiry-based investigations of the local environment. For the atmospheric research, an automatic weather station is used. The observed data were compared with corresponding forecasts produced by two numerical weather prediction models: COAMPS (Coupled Ocean/Atmosphere Mesoscale Prediction System), developed by the Naval Research Laboratory, Monterey, USA, and run operationally at the Interdisciplinary Centre for Mathematical and Computational Modelling in Warsaw, Poland; and COSMO (the Consortium for Small-scale Modelling model), used by the Polish Institute of Meteorology and Water Management. The analysed data included air temperature, precipitation, wind speed, wind chill and sea level pressure. The prediction periods from 0 to 24 hours (Day 1) and from 24 to 48 hours (Day 2) were considered. Verification statistics commonly used in meteorology have been applied: the mean error, also known as bias, for continuous data, and a 2x2 contingency table to get the hit rate and false alarm ratio for a few precipitation thresholds. The results of the aforementioned activity became an interesting basis for discussion. The most important topics are: 1) to what extent can we rely on the weather forecasts? 2) How accurate are the forecasts for the two considered time ranges? 3) Which precipitation threshold is the most predictable? 
4) Why are some weather elements easier to verify than others? 5) What factors may contribute to the quality of the weather forecast?
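    The contingency-table statistics named above are simple to compute. A minimal sketch, with invented daily precipitation values (not data from the study):

```python
# Hit rate and false alarm ratio from a 2x2 contingency table for one
# precipitation threshold, as in the student verification exercise.
# The threshold and the forecast/observation values are illustrative.

def contingency_scores(forecast, observed, threshold):
    """Count hits, misses, false alarms and correct negatives for one
    threshold, then derive the hit rate and false alarm ratio."""
    hits = misses = false_alarms = correct_neg = 0
    for f, o in zip(forecast, observed):
        f_yes, o_yes = f >= threshold, o >= threshold
        if f_yes and o_yes:
            hits += 1
        elif not f_yes and o_yes:
            misses += 1
        elif f_yes and not o_yes:
            false_alarms += 1
        else:
            correct_neg += 1
    hit_rate = hits / (hits + misses) if hits + misses else float("nan")
    far = false_alarms / (hits + false_alarms) if hits + false_alarms else float("nan")
    return hit_rate, far

# Example with made-up daily precipitation amounts (mm):
fcst = [0.0, 2.5, 0.1, 6.0, 0.0, 1.2]
obs  = [0.0, 3.1, 0.0, 0.4, 0.0, 2.0]
print(contingency_scores(fcst, obs, threshold=1.0))
```

    Repeating the computation for each threshold shows how predictability varies with event rarity, which is exactly the students' question 3.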

  20. Objective Lightning Probability Forecasting for Kennedy Space Center and Cape Canaveral Air Force Station, Phase III

    NASA Technical Reports Server (NTRS)

    Crawford, Winifred C.

    2010-01-01

    The AMU created new logistic regression equations in an effort to increase the skill of the Objective Lightning Forecast Tool developed in Phase II (Lambert 2007). One equation was created for each of five sub-seasons based on the daily lightning climatology instead of by month as was done in Phase II. The assumption was that these equations would capture the physical attributes that contribute to thunderstorm formation better than monthly equations would. However, the SS values in Section 5.3.2 showed that the Phase III equations had worse skill than the Phase II equations and, therefore, will not be transitioned into operations. The current Objective Lightning Forecast Tool developed in Phase II will continue to be used operationally in MIDDS. Three warm seasons were added to the Phase II dataset to increase the POR from 17 to 20 years (1989-2008), and data for October were included since the daily climatology showed lightning occurrence extending into that month. None of the three methods tested to determine the start of the sub-season in each individual year were able to discern the start dates with consistent accuracy. Therefore, the start dates were determined by the daily climatology shown in Figure 10 and were the same in every year. The procedures used to create the predictors and develop the equations were identical to those in Phase II. The equations were made up of one to three predictors. TI and the flow regime probabilities were the top predictors, followed by 1-day persistence, then VT and LI. Each equation outperformed four other forecast methods by 7-57% using the verification dataset, but the new equations were outperformed by the Phase II equations in every sub-season. The degradation may stem from the fact that the same sub-season start dates were used in every year.
It is likely there was overlap of sub-season days at the beginning and end of each defined sub-season in each individual year, which could very well affect equation performance.
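    The abstract describes logistic regression equations of one to three predictors that output a lightning probability. A minimal sketch of that functional form; the coefficients and predictor values below are invented for illustration and are not the AMU's operational equations:

```python
import math

# Logistic-regression probability of the form used by objective
# lightning forecast tools: P = 1 / (1 + exp(-(b0 + b1*x1 + ... ))).
# Coefficients and predictors here are hypothetical placeholders
# (e.g. a stability index and a persistence term), not Phase II/III values.

def lightning_probability(coeffs, predictors):
    """coeffs[0] is the intercept; coeffs[1:] pair with the predictors."""
    z = coeffs[0] + sum(b * x for b, x in zip(coeffs[1:], predictors))
    return 1.0 / (1.0 + math.exp(-z))

# Illustrative two-predictor equation:
p = lightning_probability([-1.5, 0.04, 1.2], [30.0, 0.5])
print(round(p, 3))
```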

  1. Probabilistic verification of cloud fraction from three different products with CALIPSO

    NASA Astrophysics Data System (ADS)

    Jung, B. J.; Descombes, G.; Snyder, C.

    2017-12-01

    In this study, we present how Cloud-Aerosol Lidar and Infrared Pathfinder Satellite Observation (CALIPSO) can be used for probabilistic verification of cloud fraction, and apply this probabilistic approach to three cloud fraction products: a) The Air Force Weather (AFW) World Wide Merged Cloud Analysis (WWMCA), b) Satellite Cloud Observations and Radiative Property retrieval Systems (SatCORPS) from NASA Langley Research Center, and c) Multi-sensor Advection Diffusion nowCast (MADCast) from NCAR. Although they differ in their details, both WWMCA and SatCORPS retrieve cloud fraction from satellite observations, mainly of infrared radiances. MADCast utilizes in addition a short-range forecast of cloud fraction (provided by the Model for Prediction Across Scales, assuming cloud fraction is advected as a tracer) and a column-by-column particle filter implemented within the Gridpoint Statistical Interpolation (GSI) data-assimilation system. The probabilistic verification considers the retrieved or analyzed cloud fractions as predicting the probability of cloud at any location within a grid cell and the 5-km vertical feature mask (VFM) from CALIPSO level-2 products as a point observation of cloud.
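    The probabilistic-verification idea above (grid-cell cloud fraction as a probability of cloud, the CALIPSO vertical feature mask as a 0/1 point observation) can be scored with, for example, the Brier score. A minimal sketch with invented values, not actual WWMCA/SatCORPS/MADCast data:

```python
# Treat each retrieved or analyzed grid-cell cloud fraction as the
# forecast probability that cloud is present at a point in that cell,
# and each CALIPSO VFM point as the binary observation. Score with the
# Brier score (0 = perfect, 1 = worst). Values are illustrative.

def brier_score(probabilities, observations):
    """Mean squared difference between probability and 0/1 outcome."""
    n = len(probabilities)
    return sum((p - o) ** 2 for p, o in zip(probabilities, observations)) / n

cloud_fraction = [0.9, 0.2, 0.6, 0.0, 1.0]   # matched grid-cell fractions
calipso_mask   = [1,   0,   1,   0,   1  ]   # cloud present at the point?
print(brier_score(cloud_fraction, calipso_mask))
```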

  2. Air Pollution Forecasts: An Overview

    PubMed Central

    Bai, Lu; Wang, Jianzhou; Lu, Haiyan

    2018-01-01

    Air pollution is defined as the condition in which certain substances in the atmosphere exceed a given concentration and become harmful to the ecological system and to the normal conditions of human existence and development. In the face of increasingly serious environmental pollution problems, scholars have produced a substantial body of related research, in which the forecasting of air pollution has been of paramount importance. As a precaution, the air pollution forecast is the basis for taking effective pollution control measures, and accurate forecasting of air pollution has become an important task. Extensive research indicates that air pollution forecasting methods can be broadly divided into three classical categories: statistical forecasting methods, artificial intelligence methods, and numerical forecasting methods. More recently, hybrid models have been proposed that can improve forecast accuracy. To provide a clear perspective on air pollution forecasting, this study reviews the theory and application of these forecasting models. In addition, based on a comparison of different forecasting methods, the advantages and disadvantages of some methods are provided. This study aims to provide an overview of air pollution forecasting methods for easy access and reference by researchers, which will be helpful in further studies. PMID:29673227

  3. Air Pollution Forecasts: An Overview.

    PubMed

    Bai, Lu; Wang, Jianzhou; Ma, Xuejiao; Lu, Haiyan

    2018-04-17

    Air pollution is defined as the condition in which certain substances in the atmosphere exceed a given concentration and become harmful to the ecological system and to the normal conditions of human existence and development. In the face of increasingly serious environmental pollution problems, scholars have produced a substantial body of related research, in which the forecasting of air pollution has been of paramount importance. As a precaution, the air pollution forecast is the basis for taking effective pollution control measures, and accurate forecasting of air pollution has become an important task. Extensive research indicates that air pollution forecasting methods can be broadly divided into three classical categories: statistical forecasting methods, artificial intelligence methods, and numerical forecasting methods. More recently, hybrid models have been proposed that can improve forecast accuracy. To provide a clear perspective on air pollution forecasting, this study reviews the theory and application of these forecasting models. In addition, based on a comparison of different forecasting methods, the advantages and disadvantages of some methods are provided. This study aims to provide an overview of air pollution forecasting methods for easy access and reference by researchers, which will be helpful in further studies.

  4. Alternative configurations of Quantile Regression for estimating predictive uncertainty in water level forecasts for the Upper Severn River: a comparison

    NASA Astrophysics Data System (ADS)

    Lopez, Patricia; Verkade, Jan; Weerts, Albrecht; Solomatine, Dimitri

    2014-05-01

    Hydrological forecasting is subject to many sources of uncertainty, including those originating in initial state, boundary conditions, model structure and model parameters. Although uncertainty can be reduced, it can never be fully eliminated. Statistical post-processing techniques constitute an often-used approach to estimate the hydrological predictive uncertainty, where a model of forecast error is built using a historical record of past forecasts and observations. The present study focuses on the use of the Quantile Regression (QR) technique as a hydrological post-processor. It estimates the predictive distribution of water levels using deterministic water level forecasts as predictors. This work aims to thoroughly verify uncertainty estimates using the implementation of QR that was applied in an operational setting in the UK National Flood Forecasting System, and to inter-compare forecast quality and skill across several differing configurations of QR. These configurations are (i) 'classical' QR, (ii) QR constrained by a requirement that quantiles do not cross, (iii) QR derived on time series that have been transformed into the Normal domain (Normal Quantile Transformation - NQT), and (iv) a piecewise linear derivation of QR models. The QR configurations are applied to fourteen hydrological stations on the Upper Severn River with different catchment characteristics. Results of each QR configuration are conditionally verified for progressively higher flood levels, in terms of commonly used verification metrics and skill scores. These include the Brier score (BS), the continuous ranked probability score (CRPS) and corresponding skill scores, as well as the Relative Operating Characteristic score (ROCS). Reliability diagrams are also presented and analysed. The results indicate that none of the four Quantile Regression configurations clearly outperforms the others.
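    The core of quantile regression is fitting parameters that minimize the pinball (quantile) loss. A minimal sketch of that idea: for an intercept-only model the minimizer is simply an empirical quantile of the errors. The operational QR configurations above additionally regress on the deterministic forecast; the error values here are invented:

```python
# Pinball (quantile) loss and the intercept-only quantile-regression
# fit: the tau-quantile of the residuals minimizes the average loss.
# Error values (observed minus forecast water level, in metres) are
# synthetic, not Upper Severn data.

def pinball_loss(tau, residuals):
    """Average quantile loss: tau*r for r >= 0, (tau - 1)*r otherwise."""
    return sum(tau * r if r >= 0 else (tau - 1) * r for r in residuals) / len(residuals)

def empirical_quantile(tau, values):
    """Lower empirical tau-quantile by sorting."""
    s = sorted(values)
    return s[min(int(tau * len(s)), len(s) - 1)]

errors = [-0.3, -0.1, 0.0, 0.2, 0.5]      # past forecast errors (m)
q90 = empirical_quantile(0.9, errors)     # offset for a 90% predictive bound
print(q90, pinball_loss(0.9, [e - q90 for e in errors]))
```

    Adding the predicted 90% offset to a new deterministic forecast gives the upper bound of the predictive distribution at that station; repeating for many tau values traces out the full distribution.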

  5. Evaluation of the North American Multi-Model Ensemble System for Monthly and Seasonal Prediction

    NASA Astrophysics Data System (ADS)

    Zhang, Q.

    2014-12-01

    Since August 2011, the real-time seasonal forecasts of the U.S. National Multi-Model Ensemble (NMME) have been made on the 8th of each month by the NCEP Climate Prediction Center (CPC). The participating models in the first year of the real-time NMME forecast were NCEP/CFSv1&2, GFDL/CM2.2, NCAR/U.Miami/COLA/CCSM3, NASA/GEOS5, and IRI/ECHAM-a & ECHAM-f. In the second year, two Canadian coupled models, CMC/CanCM3 and CanCM4, joined, while CFSv1 and IRI's models dropped out. The NMME team at CPC collects monthly means of three variables (precipitation, 2-m temperature, and sea surface temperature) from each modeling center on a 1x1 degree global grid, removes systematic errors, forms the grand ensemble mean with equal weight for each model mean, and forms the probability forecast with equal weight for each member of each model. This provides the NMME forecast, locked into schedule, for the CPC operational seasonal and monthly outlooks. The basic verification metrics of seasonal and monthly prediction of NMME are calculated as an evaluation of skill, including both deterministic and probabilistic forecasts, for the 3-year real-time period (August 2011 - July 2014) and the 30-year retrospective forecasts (1982-2011) of the individual models as well as the NMME ensemble. The motivation of this study is to provide skill benchmarks for future improvements of the NMME seasonal and monthly prediction system. We also want to establish whether the real-time and hindcast periods (used for bias correction in real time) are consistent. The experimental phase I of the project already supplies routine guidance to users of the NMME forecasts.
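    The grand-ensemble construction described above reduces to two small operations: an equal-weight average of the model means, and a pooled member-counting probability. A minimal sketch with invented anomaly values (not NMME output):

```python
# Equal-weight NMME-style combination: average the bias-corrected
# model means with one weight per model, and form the probability
# forecast as the fraction of pooled members exceeding a threshold.
# All numbers below are illustrative placeholders.

def grand_ensemble_mean(model_means):
    """Equal weight for each model's (bias-corrected) mean anomaly."""
    return sum(model_means) / len(model_means)

def probability_above(members, threshold):
    """Equal weight for each member, pooled across all models."""
    return sum(1 for v in members if v > threshold) / len(members)

model_means = [0.4, 0.1, -0.2]                 # e.g. 2-m temperature anomalies (K)
members = [0.5, 0.3, 0.2, -0.1, 0.0, 0.6]      # pooled members from all models
print(grand_ensemble_mean(model_means), probability_above(members, 0.0))
```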

  6. Observations and modeling of the effects of waves and rotors on submeso and turbulence variability within the stable boundary layer over central Pennsylvania

    NASA Astrophysics Data System (ADS)

    Suarez Mullins, Astrid

    Terrain-induced gravity waves and rotor circulations have been hypothesized to enhance the generation of submeso motions (i.e., nonstationary shear events with spatial and temporal scales greater than the turbulence scale and smaller than the meso-gamma scale) and to modulate low-level intermittency in the stable boundary layer (SBL). Intermittent turbulence, generated by submeso motions and/or the waves, can affect the atmospheric transport and dispersion of pollutants and hazardous materials. Thus, the study of these motions and the mechanisms through which they impact the weakly to very stable SBL is crucial for improving air quality modeling and hazard predictions. In this thesis, the effects of waves and rotor circulations on submeso and turbulence variability within the SBL are investigated over the moderate terrain of central Pennsylvania using special observations from a network deployed at Rock Springs, PA and high-resolution Weather Research and Forecasting (WRF) model forecasts. The investigation of waves and rotors over central PA is important because 1) the moderate topography of this region is common to most of the eastern US, and thus the knowledge acquired from this study can be of significance to a large population, 2) there has been little evidence of complex wave structures and rotors reported for this region, and 3) little is known about the waves and rotors generated by smaller and more moderate topographies. Six case studies exhibiting an array of wave and rotor structures are analyzed. Observational evidence of the presence of complex wave structures, resembling nonstationary trapped gravity waves and downslope windstorms, and complex rotor circulations, resembling trapped and jump-type rotors, is presented. These motions and the mechanisms through which they modulate the SBL are further investigated using high-resolution WRF forecasts.
First, the efficacy of the 0.444-km horizontal grid spacing WRF model to reproduce submeso and meso-gamma motions, generated by waves and rotors and hypothesized to impact the SBL, is investigated using a new wavelet-based verification methodology for assessing non-deterministic model skill in the submeso and meso-gamma range to complement standard deterministic measures. This technique allows the verification and/or intercomparison of any two nonstationary stochastic systems without many of the limitations of typical wavelet-based verification approaches (e.g., selection of noise models, testing for significance, etc.). Through this analysis, it is shown that the WRF model largely underestimates the number of small amplitude fluctuations in the small submeso range, as expected; and it overestimates the number of small amplitude fluctuations in the meso-gamma range, generally resulting in forecasts that are too smooth. Investigation of the variability for different initialization strategies shows that deterministic wind speed predictions are less sensitive to the choice of initialization strategy than temperature forecasts. Similarly, investigation of the variability for various planetary boundary layer (PBL) parameterizations reveals that turbulent kinetic energy (TKE)-based schemes have an advantage over the non-local schemes for non-deterministic motions. The larger spread in the verification scores for various PBL parameterizations than initialization strategies indicates that PBL parameterization may play a larger role modulating the variability of non-deterministic motions in the SBL for these cases. These results confirm previous findings that have shown WRF to have limited skill forecasting submeso variability for periods greater than ~20 min. The limited skill of the WRF at these scales in these cases is related to the systematic underestimation of the amplitude of observed fluctuations. 
These results are implemented in the model design and configuration for the investigation of nonstationary waves and rotor structures modulating submeso and mesogamma motions and the SBL. Observations and WRF forecasts of two wave cases characterized by nonstationary waves and rotors are investigated to show the WRF model to have reasonable accuracy forecasting low-level temperature and wind speed in the SBL and to qualitatively produce rotors, similar to those observed, as well as some of the mechanisms modulating their development and evolution. Finally, observations and high-resolution WRF forecasts under different environmental conditions using various initialization strategies are used to investigate the impact of nonlinear gravity waves and rotor structures on the generation of intermittent turbulence and valley transport in the SBL. Evidence of the presence of elevated regions of TKE generated by the complex waves and rotors is presented and investigated using an additional four case studies, exhibiting two synoptic flow regimes and different wave and rotor structures. Throughout this thesis, terrain-induced gravity waves and rotors in the SBL are shown to synergistically interact with the surface cold pool and to enhance low-level turbulence intermittency through the development of submeso and meso-gamma motions. These motions are shown to be an important source of uncertainty for the atmospheric transport and dispersion of pollutants and hazardous materials under very stable conditions. (Abstract shortened by ProQuest.).
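    The wavelet-based verification above compares model and observations by the number of fluctuations per scale rather than point-by-point. As a stand-in for the wavelet analysis actually used in the thesis, a plain Haar transform illustrates the idea; the two series below are synthetic, with the "model" deliberately smoother, mimicking the reported WRF behavior:

```python
# Scale-by-scale fluctuation counting with a Haar transform: decompose
# each series into detail coefficients per scale, then count the
# coefficients whose amplitude exceeds a threshold. This is only a
# sketch of the non-deterministic verification idea, not the thesis
# methodology itself; data and the 0.1 threshold are invented.

def haar_details(signal):
    """Lists of Haar detail coefficients, one list per scale.
    Assumes the length is a power of two (odd tails are dropped)."""
    details, approx = [], list(signal)
    while len(approx) >= 2:
        d = [(x - y) / 2.0 for x, y in zip(approx[0::2], approx[1::2])]
        a = [(x + y) / 2.0 for x, y in zip(approx[0::2], approx[1::2])]
        details.append(d)
        approx = a
    return details

def fluctuation_counts(signal, amplitude=0.1):
    """Number of detail coefficients above the amplitude, per scale."""
    return [sum(1 for c in level if abs(c) > amplitude)
            for level in haar_details(signal)]

obs   = [1.0, 1.4, 0.9, 1.6, 1.1, 1.0, 1.5, 0.8]
model = [1.1, 1.2, 1.1, 1.25, 1.2, 1.1, 1.2, 1.1]   # smoother series
print(fluctuation_counts(obs), fluctuation_counts(model))
```

    The smoother series registers far fewer small-scale fluctuations, which is the kind of amplitude underestimation the verification scores in the thesis quantify.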

  7. IEA Wind Task 36 Forecasting

    NASA Astrophysics Data System (ADS)

    Giebel, Gregor; Cline, Joel; Frank, Helmut; Shaw, Will; Pinson, Pierre; Hodge, Bri-Mathias; Kariniotakis, Georges; Sempreviva, Anna Maria; Draxl, Caroline

    2017-04-01

    Wind power forecasts have been used operationally for over 20 years. Despite this fact, there are still several possibilities to improve the forecasts, both from the weather prediction side and from the usage of the forecasts. The new International Energy Agency (IEA) Task on Wind Power Forecasting aims to organise international collaboration among national weather centres with an interest and/or large projects on wind forecast improvements (NOAA, DWD, UK MetOffice, …) and operational forecasters and forecast users. The Task is divided into three work packages. Firstly, a collaboration on the improvement of the scientific basis for the wind predictions themselves. This includes numerical weather prediction model physics, but also widely distributed information on accessible datasets for verification. Secondly, we will be aiming at an international pre-standard (an IEA Recommended Practice) on benchmarking and comparing wind power forecasts, including probabilistic forecasts, aimed at industry and forecasters alike. This WP will also organise benchmarks, in cooperation with the IEA Task WakeBench. Thirdly, we will be engaging end users, aiming at dissemination of best practice in the usage of wind power predictions, especially probabilistic ones. The Operating Agent is Gregor Giebel of DTU; the Co-Operating Agent is Joel Cline of the US Department of Energy. Collaboration in the task is solicited from everyone interested in the forecasting business. We will collaborate with IEA Task 31 WakeBench, which developed the Windbench benchmarking platform that this task will use for forecasting benchmarks. The task runs for three years, 2016-2018. 
Main deliverables are an up-to-date list of current projects and main project results, including datasets which can be used by researchers around the world to improve their own models, an IEA Recommended Practice on performance evaluation of probabilistic forecasts, a position paper regarding the use of probabilistic forecasts, and one or more benchmark studies implemented on the Windbench platform hosted at CENER. Additionally, spreading relevant information in both the forecaster and user communities is paramount. The poster also shows the work done in the first half of the Task, e.g. the collection of available datasets and the lessons learned from a public workshop on 9 June in Barcelona on Experiences with the Use of Forecasts and Gaps in Research. Participation is open for all interested parties in member states of the IEA Annex on Wind Power; see ieawind.org for the up-to-date list. For collaboration, please contact the author (grgi@dtu.dk).

  8. Transitioning Enhanced Land Surface Initialization and Model Verification Capabilities to the Kenya Meteorological Department (KMD)

    NASA Technical Reports Server (NTRS)

    Case, Jonathan L.; Mungai, John; Sakwa, Vincent; Zavodsky, Bradley T.; Srikishen, Jayanthi; Limaye, Ashutosh; Blankenship, Clay B.

    2016-01-01

    Flooding, severe weather, and drought are key forecasting challenges for the Kenya Meteorological Department (KMD), based in Nairobi, Kenya. Atmospheric processes leading to convection, excessive precipitation and/or prolonged drought can be strongly influenced by land cover, vegetation, and soil moisture content, especially during anomalous conditions and dry/wet seasonal transitions. It is thus important to represent accurately land surface state variables (green vegetation fraction, soil moisture, and soil temperature) in Numerical Weather Prediction (NWP) models. The NASA SERVIR and the Short-term Prediction Research and Transition (SPoRT) programs in Huntsville, AL have established a working partnership with KMD to enhance its regional modeling capabilities. SPoRT and SERVIR are providing experimental land surface initialization datasets and model verification capabilities for capacity building at KMD. To support its forecasting operations, KMD is running experimental configurations of the Weather Research and Forecasting (WRF; Skamarock et al. 2008) model on a 12-km/4-km nested regional domain over eastern Africa, incorporating the land surface datasets provided by NASA SPoRT and SERVIR. SPoRT, SERVIR, and KMD participated in two training sessions in March 2014 and June 2015 to foster the collaboration and use of unique land surface datasets and model verification capabilities. Enhanced regional modeling capabilities have the potential to improve guidance in support of daily operations and high-impact weather and climate outlooks over Eastern Africa. For enhanced land-surface initialization, the NASA Land Information System (LIS) is run over Eastern Africa at 3-km resolution, providing real-time land surface initialization data in place of interpolated global model soil moisture and temperature data available at coarser resolutions. 
    Additionally, real-time green vegetation fraction (GVF) composites from the Suomi-NPP VIIRS instrument are being incorporated into the KMD-WRF runs, using the product generated by NOAA/NESDIS. Model verification capabilities are also being transitioned to KMD using NCAR's Model Evaluation Tools (MET; Brown et al. 2009) software in conjunction with a SPoRT-developed scripting package, in order to quantify and compare errors in simulated temperature, moisture and precipitation in the experimental WRF model simulations. This extended abstract and accompanying presentation summarize the efforts and training done to date to support this unique regional modeling initiative at KMD. To honor the memory of Dr. Peter J. Lamb and his extensive efforts in bolstering weather and climate science and capacity-building in Africa, we offer this contribution to the special Peter J. Lamb symposium. The remainder of this extended abstract is organized as follows. The collaborating international organizations involved in the project are presented in Section 2. Background information on the unique land surface input datasets is presented in Section 3. The hands-on training sessions from March 2014 and June 2015 are described in Section 4. Sample experimental WRF output and verification from the June 2015 training are given in Section 5. A summary is given in Section 6, followed by Acknowledgements and References. (Corresponding author address: Jonathan Case, ENSCO, Inc., 320 Sparkman Dr., Room 3008, Huntsville, AL, 35805. Email: Jonathan.Case-1@nasa.gov)

  9. 24 CFR 985.3 - Indicators, HUD verification methods and ratings.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 24 Housing and Urban Development 4 2010-04-01 2010-04-01 false Indicators, HUD verification..., HUD verification methods and ratings. This section states the performance indicators that are used to assess PHA Section 8 management. HUD will use the verification method identified for each indicator in...

  10. HEPEX - achievements and challenges!

    NASA Astrophysics Data System (ADS)

    Pappenberger, Florian; Ramos, Maria-Helena; Thielen, Jutta; Wood, Andy; Wang, Qj; Duan, Qingyun; Collischonn, Walter; Verkade, Jan; Voisin, Nathalie; Wetterhall, Fredrik; Vuillaume, Jean-Francois Emmanuel; Lucatero Villasenor, Diana; Cloke, Hannah L.; Schaake, John; van Andel, Schalk-Jan

    2014-05-01

    HEPEX is an international initiative bringing together hydrologists, meteorologists, researchers and end-users to develop advanced probabilistic hydrological forecast techniques for improved flood, drought and water management. HEPEX was launched in 2004 as an independent, cooperative international scientific activity. During the first meeting, the overarching goal was defined as: "to develop and test procedures to produce reliable hydrological ensemble forecasts, and to demonstrate their utility in decision making related to the water, environmental and emergency management sectors." The applications of hydrological ensemble predictions span large spatio-temporal scales, ranging from short-term and localized predictions to global climate change and regional modeling. Within the HEPEX community, information is shared through its blog (www.hepex.org), meetings, testbeds and intercomparison experiments, as well as project reports. Key questions of HEPEX are: * What adaptations are required for meteorological ensemble systems to be coupled with hydrological ensemble systems? * How should the existing hydrological ensemble prediction systems be modified to account for all sources of uncertainty within a forecast? * What is the best way for the user community to take advantage of ensemble forecasts and to make better decisions based on them? This year HEPEX celebrates its 10th anniversary, and this poster will present a review of the main operational and research achievements and challenges prepared by HEPEX contributors on data assimilation, post-processing of hydrologic predictions, forecast verification, communication and use of probabilistic forecasts in decision-making. Additionally, we will present the most recent activities implemented by HEPEX and illustrate how everyone can join the community and participate in the development of new approaches in hydrologic ensemble prediction.

  11. Ensemble flare forecasting: using numerical weather prediction techniques to improve space weather operations

    NASA Astrophysics Data System (ADS)

    Murray, S.; Guerra, J. A.

    2017-12-01

    One essential component of operational space weather forecasting is the prediction of solar flares. Early flare forecasting work focused on statistical methods based on historical flaring rates, but more complex machine learning methods have been developed in recent years. A multitude of flare forecasting methods are now available; however, it is still unclear which of these methods performs best, and none are substantially better than climatological forecasts. Current operational space weather centres cannot rely on automated methods, and generally use statistical forecasts with some human intervention. Space weather researchers are increasingly looking towards methods used in terrestrial weather to improve current forecasting techniques. Ensemble forecasting has been used in numerical weather prediction for many years as a way to combine different predictions in order to obtain a more accurate result. It has proved useful in areas such as magnetospheric modelling and coronal mass ejection arrival analysis, but has not yet been implemented in operational flare forecasting. Here we construct ensemble forecasts for major solar flares by linearly combining the full-disk probabilistic forecasts from a group of operational forecasting methods (ASSA, ASAP, MAG4, MOSWOC, NOAA, and Solar Monitor). Forecasts from each method are weighted by a factor that accounts for the method's ability to predict previous events, and several performance metrics (both probabilistic and categorical) are considered. The results provide space weather forecasters with a set of parameters (combination weights, thresholds) that allow them to select the most appropriate values for constructing the 'best' ensemble forecast probability value, according to the performance metric of their choice. In this way different forecasts can be made to fit different end-user needs.
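    The linear combination described above can be sketched in a few lines. The inverse-Brier-score weighting below is one plausible choice of performance-based weight, and the probabilities and historical scores are invented, not actual forecasts from the named methods:

```python
# Performance-weighted linear ensemble of flare probabilities:
# each member's full-disk probability is weighted by a skill factor
# derived from its past performance (here 1 / historical Brier score,
# a hypothetical choice), and the weights are normalized to sum to one.

def combine(probabilities, past_brier_scores):
    """Inverse-Brier-score weighted average of member probabilities."""
    weights = [1.0 / bs for bs in past_brier_scores]
    total = sum(weights)
    return sum(w * p for w, p in zip(weights, probabilities)) / total

member_probs = [0.40, 0.25, 0.60]    # e.g. three methods' M-class forecasts
member_brier = [0.10, 0.20, 0.25]    # historical Brier scores (lower = better)
print(round(combine(member_probs, member_brier), 3))
```

    The better-performing first method dominates the weighted average, pulling the ensemble probability toward its forecast; tuning the weighting scheme against a chosen metric is exactly the parameter selection the abstract describes.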

  12. A new method for evaluating impacts of data assimilation with respect to tropical cyclone intensity forecast problem

    NASA Astrophysics Data System (ADS)

    Vukicevic, T.; Uhlhorn, E.; Reasor, P.; Klotz, B.

    2012-12-01

    A significant potential for improving numerical model forecast skill of tropical cyclone (TC) intensity by assimilation of airborne inner core observations in high resolution models has been demonstrated in recent studies. Although encouraging, the results so far have not provided clear guidance on the critical information added by the inner core data assimilation with respect to the intensity forecast skill. Better understanding of the relationship between the intensity forecast and the value added by the assimilation is required to further the progress, including the assimilation of satellite observations. One of the major difficulties in evaluating such a relationship is the forecast verification metric of TC intensity: the maximum one-minute sustained wind speed at 10 m above the surface. The difficulty results from two issues: 1) the metric refers to a practically unobservable quantity, since it is an extreme value in a highly turbulent and spatially extensive wind field; and 2) model- and observation-based estimates of this measure are not compatible in terms of spatial and temporal scales, even in high-resolution models. Although the need for predicting the extreme value of near-surface wind is well justified, and the observation-based estimates that are used in practice are well thought of, a revised metric for the intensity is proposed for the purpose of numerical forecast evaluation and the impacts on the forecast. The metric should enable a robust, observation- and model-resolvable, and phenomenologically-based evaluation of the impacts. It is shown that the maximum intensity can be represented in terms of a decomposition into deterministic and stochastic components of the wind field. Using the vortex-centric cylindrical reference frame, the deterministic component is defined as the sum of amplitudes of azimuthal wave numbers 0 and 1 at the radius of maximum wind, whereas the stochastic component is represented by a non-Gaussian PDF.
    This decomposition is exact and fully independent of individual TC properties. The decomposition of the maximum wind intensity was first evaluated using several sources of data, including Step Frequency Microwave Radiometer surface wind speeds from NOAA and Air Force reconnaissance flights, NOAA P-3 Tail Doppler Radar measurements, and best track maximum intensity estimates, as well as the simulations from Hurricane WRF Ensemble Data Assimilation System (HEDAS) experiments for 83 real data cases. The results confirmed the validity of the method: the stochastic component of the maximum exhibited a non-Gaussian PDF with small mean amplitude and variance that was comparable to the known best track error estimates. The results of the decomposition were then used to evaluate the impact of the improved initial conditions on the forecast. It was shown that the errors in the deterministic component of the intensity had the dominant effect on the forecast skill for the studied cases. This result suggests that the data assimilation of the inner core observations could focus primarily on improving the analysis of wave number 0 and 1 initial structure and on the mechanisms responsible for forcing the evolution of this low-wavenumber structure. For the latter analysis, the assimilation of airborne and satellite remote sensing observations could play a significant role.
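    The deterministic component of the proposed metric, the sum of the azimuthal wavenumber-0 and wavenumber-1 amplitudes at the radius of maximum wind, can be extracted with a discrete Fourier sum around the ring. A minimal sketch on a synthetic wind ring (values are invented, not HEDAS or reconnaissance data):

```python
import cmath
import math

# Azimuthal wavenumber amplitudes from N equally spaced wind samples on
# a ring at the radius of maximum wind. Wavenumber 0 is the symmetric
# (mean) wind; wavenumber 1 is the leading asymmetry. Their sum stands
# in for the deterministic intensity component described above.

def wavenumber_amplitude(ring, k):
    """One-sided amplitude of azimuthal wavenumber k."""
    n = len(ring)
    coeff = sum(v * cmath.exp(-2j * math.pi * k * i / n)
                for i, v in enumerate(ring)) / n
    return abs(coeff) if k == 0 else 2 * abs(coeff)

# Synthetic ring: 50 m/s symmetric wind plus a 5 m/s wavenumber-1 wobble.
ring = [50 + 5 * math.cos(2 * math.pi * i / 16) for i in range(16)]
wn0 = wavenumber_amplitude(ring, 0)
wn1 = wavenumber_amplitude(ring, 1)
print(round(wn0 + wn1, 6))   # deterministic intensity estimate
```

    Subtracting the wavenumber 0 and 1 reconstruction from the full ring leaves the residual field whose distribution plays the role of the stochastic component.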

  13. Evaluation of Enhanced High Resolution MODIS/AMSR-E SSTs and the Impact on Regional Weather Forecast

    NASA Technical Reports Server (NTRS)

    Schiferl, Luke D.; Fuell, Kevin K.; Case, Jonathan L.; Jedlovec, Gary J.

    2010-01-01

    Over the last few years, the NASA Short-term Prediction Research and Transition (SPoRT) Center has been generating a 1-km sea surface temperature (SST) composite derived from retrievals of the Moderate Resolution Imaging Spectroradiometer (MODIS) for use in operational diagnostics and regional model initialization. With the assumption that the day-to-day variation in the SST is nominal, individual MODIS passes aboard the Earth Observing System (EOS) Aqua and Terra satellites are used to create and update four composite SST products each day at 0400, 0700, 1600, and 1900 UTC, valid over the western Atlantic and Caribbean waters. A six month study from February to August 2007 over the marine areas surrounding southern Florida was conducted to compare the use of the MODIS SST composite versus the Real-Time Global SST analysis to initialize the Weather Research and Forecasting (WRF) model. Substantial changes in the forecast heat fluxes were seen at times in the marine boundary layer, but relatively little overall improvement was measured in the sensible weather elements. The limited improvement in the WRF model forecasts could be attributed to the diurnal changes in SST seen in the MODIS SST composites but not accounted for by the model. Furthermore, cloud contamination caused extended periods when individual passes of MODIS were unable to update the SSTs, leading to substantial SST latency and a cool bias during the early summer months. In order to alleviate the latency problems, the SPoRT Center recently enhanced its MODIS SST composite by incorporating information from the Advanced Microwave Scanning Radiometer-EOS (AMSR-E) instruments as well as the Operational Sea Surface Temperature and Sea Ice Analysis. These enhancements substantially decreased the latency due to cloud cover and improved the bias and correlation of the composites at available marine point observations. 
While these enhancements improved upon the modeled cold bias using the original MODIS SSTs, the discernible impacts on the WRF model were still somewhat limited. This paper explores several factors that may have contributed to this result. First, the original methodology to initialize the model used the most recent SST composite available in a hypothetical real-time configuration, often matching the forecast initial time with an SST field that was 5-8 hours offset. To minimize the differences that result from the diurnal variations in SST, the previous day's SST composite is incorporated at a time closest to the model initialization hour (e.g. 1600 UTC composite at 1500 UTC model initialization). Second, the diurnal change seen in the MODIS SST composites was not represented by the WRF model in previous simulations, since the SSTs were held constant throughout the model integration. To address this issue, we explore the use of a water skin-temperature diurnal cycle prediction capability within v3.1 of the WRF model to better represent fluctuations in marine surface forcing. Finally, the verification of the WRF model is limited to very few over-water sites, many of which are located near the coastlines. In order to measure the open ocean improvements from the AMSR-E, we could use an independent 2-dimensional, satellite-derived data set to validate the forecast model by applying an object-based verification method. Such a validation technique could aid in better understanding the benefits of the mesoscale SST spatial structure to regional model applications.

  14. Refined Source Terms in WAVEWATCH III with Wave Breaking and Sea Spray Forecasts

    DTIC Science & Technology

    2015-09-30

    young wind seas reported by Schwendeman et al. (2014) and for the open ocean cases reported by Sutherland and Melville (2015). These verifications...modeled Λ(c) distributions shown in Figure 3 follow a very similar dependence to the Sutherland and Melville observations to about 1-2 m/s. The...and 11) as well as Sutherland and Melville (2015) which show beff ~ O(10-3). Figure 4. Modeled behavior of spectrally-integrated breaking

  15. Do quantitative decadal forecasts from GCMs provide decision relevant skill?

    NASA Astrophysics Data System (ADS)

    Suckling, E. B.; Smith, L. A.

    2012-04-01

It is widely held that only physics-based simulation models can capture the dynamics required to provide decision-relevant probabilistic climate predictions. This in itself provides no evidence that predictions from today's GCMs are fit for purpose. Empirical (data-based) models are employed to make probability forecasts on decadal timescales, where it is argued that these 'physics free' forecasts provide a quantitative 'zero skill' target for the evaluation of forecasts based on more complicated models. It is demonstrated that these zero skill models are competitive with GCMs on decadal scales for probability forecasts evaluated over the last 50 years. Complications of statistical interpretation due to the 'hindcast' nature of this experiment are discussed, as is the likely relevance of arguments that the lack of hindcast skill is irrelevant because the signal will soon 'come out of the noise'. A lack of decision-relevant quantitative skill does not bring the science-based insights of anthropogenic warming into doubt, but it does call for a clear quantification of limits, as a function of lead time, for spatial and temporal scales on which decisions based on such model output are expected to prove maladaptive. Failing to do so may risk the credibility of science in support of policy in the long term. The performance amongst a collection of simulation models is evaluated, having transformed ensembles of point forecasts into probability distributions through the kernel dressing procedure [1], according to a selection of proper skill scores [2], and contrasted with purely data-based empirical models. Data-based models are unlikely to yield realistic forecasts for future climate change if the Earth system moves away from the conditions observed in the past, upon which the models are constructed; in this sense the empirical model defines zero skill. When should a decision-relevant simulation model be expected to significantly outperform such empirical models?
Probability forecasts up to ten years ahead (decadal forecasts) are considered, both on global and regional spatial scales for surface air temperature. Such decadal forecasts are not only important in terms of providing information on the impacts of near-term climate change, but also from the perspective of climate model validation, as hindcast experiments and a sufficient database of historical observations allow standard forecast verification methods to be used. Simulation models from the ENSEMBLES hindcast experiment [3] are evaluated and contrasted with static forecasts of the observed climatology, persistence forecasts and against simple statistical models, called dynamic climatology (DC). It is argued that DC is a more appropriate benchmark in the case of a non-stationary climate. It is found that the ENSEMBLES models do not demonstrate a significant increase in skill relative to the empirical models even at global scales over any lead time up to a decade ahead. It is suggested that the construction of, and co-evaluation against, data-based models become a regular component of the reporting of large simulation model forecasts. The methodology presented may easily be adapted to other forecasting experiments and is expected to influence the design of future experiments. The inclusion of comparisons with dynamic climatology and other data-based approaches provides important information to both scientists and decision makers on which aspects of state-of-the-art simulation forecasts are likely to be fit for purpose. [1] J. Bröcker and L. A. Smith. From ensemble forecasts to predictive distributions, Tellus A, 60(4), 663-678 (2007). [2] J. Bröcker and L. A. Smith. Scoring probabilistic forecasts: The importance of being proper, Weather and Forecasting, 22, 382-388 (2006). [3] F. J. Doblas-Reyes, A. Weisheimer, T. N. Palmer, J. M. Murphy and D. Smith. Forecast quality assessment of the ENSEMBLES seasonal-to-decadal stream 2 hindcasts, ECMWF Technical Memorandum, 621 (2010).
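The kernel dressing procedure [1] referenced above can be illustrated with a minimal sketch: each ensemble member is replaced by a Gaussian kernel, and the predictive density is the equal-weight mixture of those kernels. The member values and the fixed kernel width below are invented for illustration; the actual procedure selects the kernel width from data rather than fixing it a priori.

```python
import numpy as np

def kernel_dress(ensemble, sigma):
    """Predictive density from an ensemble of point forecasts: an
    equal-weight mixture of Gaussian kernels, one per member."""
    members = np.asarray(ensemble, dtype=float)

    def pdf(x):
        # Evaluate every member's kernel at every x, then average.
        z = (np.asarray(x) - members[:, None]) / sigma
        kernels = np.exp(-0.5 * z ** 2) / (sigma * np.sqrt(2.0 * np.pi))
        return kernels.mean(axis=0)

    return pdf

# Dress a toy 5-member temperature-anomaly ensemble (values invented).
pdf = kernel_dress([0.1, 0.3, 0.2, 0.5, 0.4], sigma=0.15)
x = np.linspace(-1.0, 1.5, 1001)
density = pdf(x)
area = float(density.sum() * (x[1] - x[0]))  # should be close to 1
print(area)
```

In Bröcker and Smith's formulation the kernel width is itself tuned by optimising a proper score over a training archive; here it is simply fixed to keep the sketch short.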

  16. Integrated Medical Model (IMM) Project Verification, Validation, and Credibility (VV&C)

    NASA Technical Reports Server (NTRS)

    Walton, M.; Boley, L.; Keenan, L.; Kerstman, E.; Shah, R.; Young, M.; Saile, L.; Garcia, Y.; Meyers, J.; Reyes, D.

    2015-01-01

    The Integrated Medical Model (IMM) Project supports end user requests by employing the Integrated Medical Evidence Database (iMED) and IMM tools as well as subject matter expertise within the Project. The iMED houses data used by the IMM. The IMM is designed to forecast relative changes for a specified set of crew health and mission success risk metrics by using a probabilistic model based on historical data, cohort data, and subject matter expert opinion. A stochastic approach is taken because deterministic results would not appropriately reflect the uncertainty in the IMM inputs. Once the IMM was conceptualized, a plan was needed to rigorously assess input information, framework and code, and output results of the IMM, and ensure that end user requests and requirements were considered during all stages of model development and implementation, as well as lay the foundation for external review and application. METHODS: In 2008, the Project team developed a comprehensive verification and validation (VV) plan, which specified internal and external review criteria encompassing 1) verification of data and IMM structure to ensure proper implementation of the IMM, 2) several validation techniques to confirm that the simulation capability of the IMM appropriately represents occurrences and consequences of medical conditions during space missions, and 3) credibility processes to develop user confidence in the information derived from the IMM. When the NASA-STD-7009 (7009) [1] was published, the Project team updated their verification, validation, and credibility (VVC) project plan to meet 7009 requirements and include 7009 tools in reporting VVC status of the IMM. Construction of these tools included meeting documentation and evidence requirements sufficient to meet external review success criteria. RESULTS: IMM Project VVC updates are compiled recurrently and include updates to the 7009 Compliance and Credibility matrices. 
Reporting tools have evolved over the lifetime of the IMM Project to better communicate VVC status. This has included refining original 7009 methodology with augmentation from the HRP NASA-STD-7009 Guidance Document working group and the NASA-HDBK-7009 [2]. End user requests and requirements are being satisfied as evidenced by ISS Program acceptance of IMM risk forecasts, transition to an operational model and simulation tool, and completion of service requests from a broad end user consortium including operations, science and technology planning, and exploration planning. IMM v4.0 is slated for operational release in FY15, and current VVC assessments illustrate the expected VVC status prior to the completion of customer-led external review efforts. CONCLUSIONS: The VVC approach established by the IMM Project of incorporating Project-specific recommended practices and guidelines for implementing the 7009 requirements is comprehensive and includes the involvement of end users at every stage in IMM evolution. Methods and techniques used to quantify the VVC status of the IMM Project represented a critical communication tool in providing clear and concise suitability assessments to IMM customers. These processes have not only received approval from the local NASA community but have also garnered recognition by other federal agencies seeking to develop similar guidelines in the medical modeling community.

  17. An ensemble Kalman filter with a high-resolution atmosphere-ocean coupled model for tropical cyclone forecasts

    NASA Astrophysics Data System (ADS)

    Kunii, M.; Ito, K.; Wada, A.

    2015-12-01

    An ensemble Kalman filter (EnKF) using a regional mesoscale atmosphere-ocean coupled model was developed to represent the uncertainties of sea surface temperature (SST) in ensemble data assimilation strategies. The system was evaluated through data assimilation cycle experiments over a one-month period from July to August 2014, during which a tropical cyclone as well as severe rainfall events occurred. The results showed that the data assimilation cycle with the coupled model could reproduce SST distributions realistically even without updating SST and salinity during the data assimilation cycle. Therefore, atmospheric variables and radiation applied as a forcing to ocean models can control oceanic variables to some extent in the current data assimilation configuration. However, investigations of the forecast error covariance estimated in EnKF revealed that the correlation between atmospheric and oceanic variables could possibly lead to less flow-dependent error covariance for atmospheric variables owing to the difference in the time scales between atmospheric and oceanic variables. A verification of the analyses showed positive impacts of applying the ocean model to EnKF on precipitation forecasts. The use of EnKF with the coupled model system captured intensity changes of a tropical cyclone better than it did with an uncoupled atmosphere model, even though the impact on the track forecast was negligibly small.
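As a rough illustration of the analysis step in an EnKF of this kind, the following sketch performs a stochastic-EnKF update of a toy two-variable state from a single scalar observation, using the flow-dependent sample covariances the abstract discusses. The state values, observation, and error variance are invented, and this is a generic textbook formulation, not the authors' coupled atmosphere-ocean system.

```python
import numpy as np

def enkf_update(ensemble, obs, obs_var, h_index, rng):
    """Stochastic EnKF analysis step for one scalar observation of
    state component h_index. ensemble has shape (n_members, n_state)."""
    x = np.asarray(ensemble, dtype=float)
    hx = x[:, h_index]                      # forecast mapped to obs space
    x_anom = x - x.mean(axis=0)             # ensemble anomalies
    hx_anom = hx - hx.mean()
    n = len(x)
    cov_xy = x_anom.T @ hx_anom / (n - 1)   # flow-dependent cross covariance
    var_y = hx_anom @ hx_anom / (n - 1)
    gain = cov_xy / (var_y + obs_var)       # Kalman gain (vector over state)
    # Perturb the observation for each member (stochastic EnKF variant).
    obs_pert = obs + rng.normal(0.0, np.sqrt(obs_var), n)
    return x + np.outer(obs_pert - hx, gain)

rng = np.random.default_rng(0)
# Toy two-variable "atmosphere-ocean" state; only variable 0 is observed.
prior = rng.normal([1.0, 0.5], [0.5, 0.2], size=(200, 2))
posterior = enkf_update(prior, obs=2.0, obs_var=0.1, h_index=0, rng=rng)
print(prior[:, 0].mean(), posterior[:, 0].mean())
```

The unobserved variable is also updated through the sampled cross covariance, which is exactly the mechanism by which atmosphere-ocean correlations enter the analysis in a coupled EnKF.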

  18. Monitoring and Predicting the African Climate for Food Security

    NASA Astrophysics Data System (ADS)

    Thiaw, W. M.

    2015-12-01

Drought is one of the greatest challenges in Africa due to its impact on access to sanitary water and food. In response to this challenge, the international community has mobilized to develop famine early warning systems (FEWS) to bring safe food and water to populations in need. Over the past several decades, much attention has focused on advance risk planning in agriculture and water. This requires frequent updates of weather and climate outlooks. This paper describes the active role of NOAA's African Desk in FEWS. Emphasis is on the operational products from short and medium range weather forecasts to subseasonal and seasonal outlooks in support of humanitarian relief programs. Tools to provide access to real-time weather and climate information to the public are described. These include the downscaling of the U.S. National Multi-model Ensemble (NMME) to improve seasonal forecasts in support of Regional Climate Outlook Forums (RCOFs). The subseasonal time scale has emerged as extremely important to many socio-economic sectors. Drawing from advances in numerical models that can now provide a better representation of the MJO, operational subseasonal forecasts are included in the African Desk product suite. These, along with forecast skill assessment and verification, are discussed. The presentation will also highlight the regional hazards outlooks that form the basis for FEWSNET food security outlooks.

  19. Lossy compression for Animated Web Visualisation

    NASA Astrophysics Data System (ADS)

    Prudden, R.; Tomlinson, J.; Robinson, N.; Arribas, A.

    2017-12-01

This talk will discuss a technique for lossy data compression specialised for web animation. We set ourselves the challenge of visualising a full forecast weather field as an animated 3D web page visualisation. This data is richly spatiotemporal; however, it is routinely communicated to the public as a 2D map, and scientists are largely limited to visualising data via static 2D maps or 1D scatter plots. We wanted to present Met Office weather forecasts in a way that represents all the generated data. Our approach was to repurpose the technology used to stream high definition videos. This enabled us to achieve high rates of compression, while being compatible with both web browsers and GPU processing. Since lossy compression necessarily involves discarding information, evaluating the results is an important and difficult problem. This is essentially a problem of forecast verification. The difficulty lies in deciding what it means for two weather fields to be "similar", as simple definitions such as mean squared error often lead to undesirable results. In the second part of the talk, I will briefly discuss some ideas for alternative measures of similarity.
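The weakness of mean squared error that the abstract alludes to is the classic "double penalty": a forecast that places a sharp feature in slightly the wrong location scores worse than a forecast of no feature at all. A small sketch with invented one-dimensional transects makes this concrete.

```python
import numpy as np

# Truth: a single sharp rain feature on a 1-D transect (values invented).
truth = np.zeros(100)
truth[40:45] = 10.0

# Forecast A: the right feature, displaced by 10 grid points.
shifted = np.roll(truth, 10)
# Forecast B: no rain anywhere.
flat = np.zeros(100)

def mse(forecast):
    return float(np.mean((forecast - truth) ** 2))

# The displaced feature is penalised twice (miss + false alarm),
# so it scores worse than forecasting nothing at all.
print(mse(shifted), mse(flat))
```

This is why field-aware similarity measures (object-based, neighbourhood, or perceptual metrics) are often preferred over pointwise error for spatial verification.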

  20. Near-surface wind speed statistical distribution: comparison between ECMWF System 4 and ERA-Interim

    NASA Astrophysics Data System (ADS)

    Marcos, Raül; Gonzalez-Reviriego, Nube; Torralba, Verónica; Cortesi, Nicola; Young, Doo; Doblas-Reyes, Francisco J.

    2017-04-01

In the framework of seasonal forecast verification, knowing whether the characteristics of the climatological wind speed distribution, simulated by the forecasting systems, are similar to the observed ones is essential to guide the subsequent process of bias adjustment. To shed some light on this topic, this work assesses the properties of the statistical distributions of 10m wind speed from both the ERA-Interim reanalysis and seasonal forecasts of ECMWF System 4. The 10m wind speed distribution has been characterized in terms of the four main moments of the probability distribution (mean, standard deviation, skewness and kurtosis) together with the coefficient of variation and the Shapiro-Wilk goodness-of-fit test, allowing the identification of regions with higher wind variability and non-Gaussian behaviour at monthly time-scales. Also, the comparison of the predicted and observed 10m wind speed distributions has been measured considering both inter-annual and intra-seasonal variability. Such a comparison is important in both the climate research and climate services communities because it provides useful climate information for decision-making processes and wind industry applications.
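The distribution characterisation described above can be sketched as follows: compute the four moments plus the coefficient of variation for a wind-speed sample. The Weibull-distributed sample is invented for illustration (the Weibull is a common statistical model for 10 m wind speed), not ERA-Interim or System 4 data.

```python
import numpy as np

def moments(sample):
    """Mean, standard deviation, skewness, excess kurtosis, and
    coefficient of variation of a wind-speed sample."""
    x = np.asarray(sample, dtype=float)
    mu = float(x.mean())
    sd = float(x.std())
    z = (x - mu) / sd  # standardised values
    return {
        "mean": mu,
        "std": sd,
        "skewness": float(np.mean(z ** 3)),
        "kurtosis": float(np.mean(z ** 4) - 3.0),  # 0 for a Gaussian
        "cv": sd / mu,
    }

# Right-skewed synthetic sample, as is typical of near-surface wind speed.
rng = np.random.default_rng(0)
speeds = rng.weibull(2.0, size=10_000) * 6.0
m = moments(speeds)
print(m["skewness"])  # positive: long right tail, non-Gaussian
```

A clearly nonzero skewness like this is the kind of signal that flags a region as non-Gaussian and complicates simple mean/variance bias adjustment.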

  1. Draft Forecasts from Real-Time Runs of Physics-Based Models - A Road to the Future

    NASA Technical Reports Server (NTRS)

    Hesse, Michael; Rastatter, Lutz; MacNeice, Peter; Kuznetsova, Masha

    2008-01-01

    The Community Coordinated Modeling Center (CCMC) is a US inter-agency activity aiming at research in support of the generation of advanced space weather models. As one of its main functions, the CCMC provides to researchers the use of space science models, even if they are not model owners themselves. The second focus of CCMC activities is on validation and verification of space weather models, and on the transition of appropriate models to space weather forecast centers. As part of the latter activity, the CCMC develops real-time simulation systems that stress models through routine execution. A by-product of these real-time calculations is the ability to derive model products, which may be useful for space weather operators. After consultations with NOAA/SEC and with AFWA, CCMC has developed a set of tools as a first step to make real-time model output useful to forecast centers. In this presentation, we will discuss the motivation for this activity, the actions taken so far, and options for future tools from model output.

  2. Impact of MODIS High-Resolution Sea-Surface Temperatures on WRF Forecasts at NWS Miami, FL

    NASA Technical Reports Server (NTRS)

    Case, Jonathan L.; LaCasse, Katherine M.; Dembek, Scott R.; Santos, Pablo; Lapenta, William M.

    2007-01-01

Over the past few years, studies at the Short-term Prediction Research and Transition (SPoRT) Center have suggested that the use of Moderate Resolution Imaging Spectroradiometer (MODIS) composite sea-surface temperature (SST) products in regional weather forecast models can have a significant positive impact on short-term numerical weather prediction in coastal regions. The recent paper by LaCasse et al. (2007, Monthly Weather Review) highlights lower atmospheric differences in regional numerical simulations over the Florida offshore waters using 2-km SST composites derived from the MODIS instrument aboard the polar-orbiting Aqua and Terra Earth Observing System satellites. To help quantify the value of this impact on NWS Weather Forecast Offices (WFOs), the SPoRT Center and the NWS WFO at Miami, FL (MIA) are collaborating on a project to investigate the impact of using the high-resolution MODIS SST fields within the Weather Research and Forecasting (WRF) prediction system. The scientific hypothesis being tested is: More accurate specification of the lower-boundary forcing within WRF will result in improved land/sea fluxes and hence, more accurate evolution of coastal mesoscale circulations and the associated sensible weather elements. The NWS MIA is currently running the WRF system in real-time to support daily forecast operations, using the National Centers for Environmental Prediction Nonhydrostatic Mesoscale Model dynamical core within the NWS Science and Training Resource Center's Environmental Modeling System (EMS) software. The EMS is a standalone modeling system capable of downloading the necessary daily datasets, and initializing, running and displaying WRF forecasts in the NWS Advanced Weather Interactive Processing System (AWIPS) with little intervention required by forecasters.
Twenty-seven-hour forecasts are run daily with start times of 0300, 0900, 1500, and 2100 UTC on a domain with 4-km grid spacing covering the southern half of Florida and the far western portions of the Bahamas, the Florida Keys, the Straits of Florida, and adjacent waters of the Gulf of Mexico and Atlantic Ocean. Each model run is initialized using the Local Analysis and Prediction System (LAPS) analyses available in AWIPS, invoking the diabatic "hot-start" capability. In this WRF model "hot-start", the LAPS-analyzed cloud and precipitation features are converted into model microphysics fields with enhanced vertical velocity profiles, effectively reducing the model spin-up time required to predict precipitation systems. The SSTs are initialized with the NCEP Real-Time Global (RTG) analyses at 1/12-degree resolution (approx. 9 km); however, the RTG product does not exhibit fine-scale details consistent with its grid resolution. SPoRT is conducting parallel WRF EMS runs identical to the operational runs at NWS MIA in every respect except for the use of MODIS SST composites in place of the RTG product as the initial and boundary conditions over water. The MODIS SST composites for initializing the SPoRT WRF runs are generated on a 2-km grid four times daily at 0400, 0700, 1600, and 1900 UTC, based on the times of the overhead passes of the Aqua and Terra satellites. The incorporation of the MODIS SST composites into the SPoRT WRF runs is staggered such that the 0400 UTC composite initializes the 0900 UTC WRF, the 0700 UTC composite initializes the 1500 UTC WRF, the 1600 UTC composite initializes the 2100 UTC WRF, and the 1900 UTC composite initializes the 0300 UTC WRF. A comparison of the SPoRT and Miami forecasts is underway in 2007, and includes quantitative verification of near-surface temperature, dewpoint, and wind forecasts at surface observation locations.
In addition, particular days of interest are being analyzed to determine the impact of the MODIS SST data on the development and evolution of predicted sea/land-breeze circulations, clouds, and precipitation. This paper will present verification results comparing the NWS MIA forecasts with the SPoRT experimental WRF forecasts, and highlight any substantial differences noted in the predicted mesoscale phenomena.

  3. Forecasting Japanese encephalitis incidence from historical morbidity patterns: Statistical analysis with 27 years of observation in Assam, India.

    PubMed

    Handique, Bijoy K; Khan, Siraj A; Mahanta, J; Sudhakar, S

    2014-09-01

Japanese encephalitis (JE) is one of the dreaded mosquito-borne viral diseases mostly prevalent in south Asian countries including India. Early warning of the disease in terms of disease intensity is crucial for taking adequate and appropriate intervention measures. The present study was carried out in Dibrugarh district in the state of Assam located in the northeastern region of India to assess the accuracy of selected forecasting methods based on historical morbidity patterns of JE incidence during the past 22 years (1985-2006). Four selected forecasting methods, viz. seasonal average (SA), seasonal adjustment with last three observations (SAT), modified method adjusting long-term and cyclic trend (MSAT), and autoregressive integrated moving average (ARIMA), have been employed to assess the accuracy of each of the forecasting methods. The forecasting methods were validated over the consecutive years 2007-2012 and the accuracy of each method has been assessed. The forecasting method utilising seasonal adjustment with long-term and cyclic trend emerged as the best among the four selected forecasting methods and outperformed even the statistically more advanced ARIMA method. The peak of the disease incidence could effectively be predicted with all the methods, but there are significant variations in the magnitude of forecast errors among the selected methods. As expected, variation in forecasts at primary health centre (PHC) level is wide compared to that of district-level forecasts. The study showed that the adopted forecasting techniques could reasonably forecast the intensity of JE cases at PHC level without considering external variables.
The results indicate that the understanding of long-term and cyclic trend of the disease intensity will improve the accuracy of the forecasts, but there is a need for making the forecast models more robust to explain sudden variation in the disease intensity with detail analysis of parasite and host population dynamics.
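The simplest of the four methods compared above, the seasonal average (SA), forecasts each calendar month as the mean of that month across the historical years. A minimal sketch with invented monthly case counts:

```python
# Hypothetical monthly JE case counts for three historical years
# (12 values per year, January through December; numbers invented).
history = [
    [2, 1, 0, 1, 3, 8, 20, 35, 18, 6, 2, 1],
    [1, 2, 1, 0, 4, 10, 25, 30, 15, 5, 1, 0],
    [3, 1, 1, 2, 5, 9, 22, 40, 20, 7, 3, 1],
]

# Seasonal-average forecast: mean of each calendar month over the years.
sa_forecast = [sum(year[m] for year in history) / len(history)
               for m in range(12)]
print(sa_forecast[7])  # forecast for the August peak
```

The SAT and MSAT variants described in the abstract then adjust this baseline using the most recent observations and the long-term/cyclic trend, respectively, which is what the study found most improved accuracy.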

  4. An Assessment of the Subseasonal Predictability of Severe Thunderstorm Environments and Activity using the Climate Forecast System Version 2

    NASA Astrophysics Data System (ADS)

    Stepanek, Adam J.

The prospect for skillful long-term predictions of atmospheric conditions known to directly contribute to the onset and maintenance of severe convective storms remains unclear. A thorough assessment of the capability of a global climate model such as the Climate Forecast System Version 2 (CFSv2) to skillfully represent parameters related to severe weather has the potential to significantly improve medium- to long-range outlooks vital to risk managers. Environmental convective available potential energy (CAPE) and deep-layer vertical wind shear (DLS) can be used to distinguish an atmosphere conducive to severe storms from one supportive of primarily non-severe 'ordinary' convection. As such, this research concentrates on the predictability of CAPE, DLS, and a product of the two parameters (CAPEDLS) by the CFSv2 with a specific focus on the subseasonal timescale. Individual month-long verification periods from the Climate Forecast System reanalysis (CFSR) dataset are measured against a climatological standard using cumulative distribution function (CDF) and area-under-the-CDF (AUCDF) techniques designed to mitigate inherent model biases while concurrently assessing the entire distribution of a given parameter in lieu of a threshold-based approach. Similar methods imposed upon the CFS reforecast (CFSRef) and operational CFSv2 allow for comparisons elucidating both spatial and temporal trends in skill using correlation coefficients, proportion correct metrics, the Heidke skill score (HSS), and root-mean-square-error (RMSE) statistics. Key results show the CFSv2-based output often demonstrates skill beyond a climatologically-based threshold when the forecast is notably anomalous from the 29-year (1982-2010) mean CFSRef prediction (exceeding one standard deviation at grid point level). CFSRef analysis indicates enhanced skill during the months of April and June (relative to May) and for predictions of DLS.
Furthermore, years exhibiting skill in terms of RMSE are shown to possess certain correlations with El Niño-Southern Oscillation conditions from the preceding winter and concurrent Madden-Julian Oscillation activity. Applying results gleaned from the CFSRef analysis to the operational CFSv2 (2011-16) indicates predictive skill can be increased by isolating forecasts meeting multiple parameter-based relationships.
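One of the verification statistics named above, the Heidke skill score, measures the fraction of correct categorical forecasts beyond what random chance would achieve, computed from a 2x2 contingency table. The counts below are invented for illustration.

```python
def heidke_skill_score(hits, misses, false_alarms, correct_negatives):
    """Heidke skill score for a 2x2 contingency table: 1 is a perfect
    forecast, 0 matches random chance, negative is worse than chance."""
    a, c, b, d = hits, misses, false_alarms, correct_negatives
    n = a + b + c + d
    # Number of correct forecasts expected by chance, from the marginals.
    expected = ((a + b) * (a + c) + (c + d) * (b + d)) / n
    return (a + d - expected) / (n - expected)

# Illustrative table: 30 hits, 10 misses, 20 false alarms, 40 correct negatives.
print(heidke_skill_score(30, 10, 20, 40))  # -> 0.4
```

A perfect forecast (all hits and correct negatives) returns exactly 1, which is a quick sanity check on any HSS implementation.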

  5. Lightning-generated whistler waves observed by probes on the Communication/Navigation Outage Forecast System satellite at low latitudes

    NASA Astrophysics Data System (ADS)

    Holzworth, R. H.; McCarthy, M. P.; Pfaff, R. F.; Jacobson, A. R.; Willcockson, W. L.; Rowland, D. E.

    2011-06-01

    Direct evidence is presented for a causal relationship between lightning and strong electric field transients inside equatorial ionospheric density depletions. In fact, these whistler mode plasma waves may be the dominant electric field signal within such depletions. Optical lightning data from the Communication/Navigation Outage Forecast System (C/NOFS) satellite and global lightning location information from the World Wide Lightning Location Network are presented as independent verification that these electric field transients are caused by lightning. The electric field instrument on C/NOFS routinely measures lightning-related electric field wave packets or sferics, associated with simultaneous measurements of optical flashes at all altitudes encountered by the satellite (401-867 km). Lightning-generated whistler waves have abundant access to the topside ionosphere, even close to the magnetic equator.

  6. Lightning-Generated Whistler Waves Observed by Probes On The Communication/Navigation Outage Forecast System Satellite at Low Latitudes

    NASA Technical Reports Server (NTRS)

    Holzworth, R. H.; McCarthy, M. P.; Pfaff, R. F.; Jacobson, A. R.; Willcockson, W. L.; Rowland, D. E.

    2011-01-01

Direct evidence is presented for a causal relationship between lightning and strong electric field transients inside equatorial ionospheric density depletions. In fact, these whistler mode plasma waves may be the dominant electric field signal within such depletions. Optical lightning data from the Communication/Navigation Outage Forecast System (C/NOFS) satellite and global lightning location information from the World Wide Lightning Location Network are presented as independent verification that these electric field transients are caused by lightning. The electric field instrument on C/NOFS routinely measures lightning-related electric field wave packets or sferics, associated with simultaneous measurements of optical flashes at all altitudes encountered by the satellite (401-867 km). Lightning-generated whistler waves have abundant access to the topside ionosphere, even close to the magnetic equator.

  7. Applications systems verification and transfer project. Volume 3: Operational applications of satellite snow cover observations in California

    NASA Technical Reports Server (NTRS)

    Brown, A. J.; Hannaford, J. F.

    1981-01-01

Five southern Sierra snowmelt basins and two northern Sierra-Southern Cascade snowmelt basins were used to evaluate the effect of satellite imagery on operational water supply forecasting. Manual photointerpretation techniques were used to obtain snow-covered area (SCA) and equivalent snow line for the years 1973 to 1979 for the seven test basins using LANDSAT imagery and GOES imagery. The use of SCA was tested operationally in 1977-79. Results indicate that the addition of SCA improves water supply forecasts during the snowmelt phase for basins where there may be an unusual distribution of snowpack throughout the basin, or where the amount of real-time data available is limited. A high correlation to runoff was obtained when SCA was combined with snow water content data obtained from reporting snow sensors.

  8. Hydrologic ensembles based on convection-permitting precipitation nowcasts for flash flood warnings

    NASA Astrophysics Data System (ADS)

    Demargne, Julie; Javelle, Pierre; Organde, Didier; de Saint Aubin, Céline; Ramos, Maria-Helena

    2017-04-01

In order to better anticipate flash flood events and provide timely warnings to communities at risk, the French national service in charge of flood forecasting (SCHAPI) is implementing a national flash flood warning system for small-to-medium ungauged basins. Based on a discharge-threshold flood warning method called AIGA (Javelle et al. 2014), the current version of the system runs a simplified hourly distributed hydrologic model with operational radar-gauge QPE grids from Météo-France at a 1-km2 resolution every 15 minutes. This produces real-time peak discharge estimates along the river network, which are subsequently compared to regionalized flood frequency estimates to provide warnings according to the AIGA-estimated return period of the ongoing event. To further extend the effective warning lead time while accounting for hydrometeorological uncertainties, the flash flood warning system is being enhanced to include Météo-France's AROME-NWC high-resolution precipitation nowcasts as time-lagged ensembles and multiple sets of hydrological regionalized parameters. The operational deterministic precipitation forecasts, from the nowcasting version of the AROME convection-permitting model (Auger et al. 2015), were provided at a 2.5-km resolution for a 6-hr forecast horizon for 9 significant rain events from September 2014 to June 2016. The time-lagged approach is a practical way of accounting for the atmospheric forecast uncertainty when no extensive forecast archive is available for statistical modelling. The evaluation on 781 French basins showed significant improvements in terms of flash flood event detection and effective warning lead-time, compared to warnings from the current AIGA setup (without any future precipitation). We also discuss how to effectively communicate verification information to help determine decision-relevant warning thresholds for flood magnitude and probability. Javelle, P., Demargne, J., Defrance, D., Arnaud, P., 2014.
Evaluating flash flood warnings at ungauged locations using post-event surveys: a case study with the AIGA warning system. Hydrological Sciences Journal, doi: 10.1080/02626667.2014.923970 Auger, L., Dupont, O., Hagelin, S., Brousseau, P., Brovelli, P., 2015. AROME-NWC: a new nowcasting tool based on an operational mesoscale forecasting system. Quarterly Journal of the Royal Meteorological Society, 141: 1603-1611, doi:10.1002/qj.2463
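The AIGA warning logic compares real-time peak-discharge estimates against regionalized flood-frequency quantiles to assign a warning according to the estimated return period. A minimal sketch of that threshold step, assuming a hypothetical three-colour scale tied to 2-, 10- and 50-year return-period discharges (the thresholds and colour names are illustrative, not the operational SCHAPI configuration):

```python
def warning_level(peak_discharge, q2, q10, q50):
    """Map a peak-discharge estimate (m3/s) to a warning colour by
    comparing it to regionalized flood-frequency quantiles q2 < q10 < q50
    (2-, 10- and 50-year return-period discharges; hypothetical inputs)."""
    if peak_discharge >= q50:
        return "red"       # rarer than a 50-year flood
    if peak_discharge >= q10:
        return "orange"    # between the 10- and 50-year floods
    if peak_discharge >= q2:
        return "yellow"    # between the 2- and 10-year floods
    return "green"         # below the 2-year flood
```

The same comparison is repeated at every river-network pixel each time new peak-discharge estimates are produced.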

  9. Systematic Model-in-the-Loop Test of Embedded Control Systems

    NASA Astrophysics Data System (ADS)

    Krupp, Alexander; Müller, Wolfgang

Current model-based development processes offer new opportunities for verification automation, e.g., in automotive development. The duty of functional verification is the detection of design flaws. Current functional verification approaches exhibit a major gap between requirement definition and formal property definition, especially when analog signals are involved. Besides the lack of methodical support for natural-language formalization, no standardized and accepted means exists for formal property definition as a target for verification planning. This article addresses several shortcomings of embedded system verification. An Enhanced Classification Tree Method is developed, based on the established Classification Tree Method for Embedded Systems (CTM/ES), which applies a hardware verification language to define a verification environment.

  10. Location-Based Rainfall Nowcasting Service for Public

    NASA Astrophysics Data System (ADS)

    Woo, Wang-chun

    2013-04-01

The Hong Kong Observatory has developed the "Short-range Warning of Intense Rainstorms in Localized Systems (SWIRLS)", a radar-based rainfall nowcasting system originally built to support forecasters in rainstorm warning and severe weather forecasting, such as hail, lightning and strong wind gusts, in Hong Kong. The system has since been extended to provide a rainfall nowcast service directly to the public. Following the launch of the "Rainfall Nowcast for the Pearl River Delta Region" service provided via a Geographical Information System (GIS) platform in 2008, a location-based rainfall nowcast service served through "MyObservatory", a smartphone app for iOS and Android developed by the Observatory, debuted in September 2012. The new service takes advantage of the capability of smartphones to detect their own locations and utilizes the quantitative precipitation forecast (QPF) from SWIRLS to provide location-based rainfall nowcasts to the public. The conversion of radar reflectivity data (at 2 or 3 km above ground) to rainfall in SWIRLS is based on the Z-R relationship (Z = aR^b), with dynamical calibration of the coefficients a and b using real-time rain gauge data. Adopting the "Multi-scale Optical-flow by Variational Analysis (MOVA)" scheme to track the movement of radar echoes and a Semi-Lagrangian Advection (SLA) scheme to extrapolate their movement, the system is capable of producing QPF for the next six hours on a 480 x 480 grid covering a domain of 256 km x 256 km once every 6 minutes. Referencing the closest point in a resampled 2-km grid over the territory of Hong Kong, predictions of whether rainfall exceeding 0.5 mm will occur in each 30-minute interval over the next two hours at the user's own or designated locations are made available in both textual and graphical formats.
For those users who have opted to receive notifications, a message pops up on the user's phone, in a user-configurable manner, whenever rain is predicted in the next two hours. Verification indicates that the service achieves a detection rate of 76% and a false alarm rate of 26% for the first 30-minute forecast. The skill decreases as the forecast range extends, with the detection rate lowered to 40% and the false alarm rate increased to 63% for the two-hour forecast. A number of factors affect the accuracy of the forecast, notably anomalous propagation, the sensitivity and vertical coverage of the radar, and the growth and decay of the rain echoes. The service has been gaining popularity rapidly since launch, and has already registered over 12,000 users who have opted for notifications. The successful launch of the location-based rainfall nowcast service in Hong Kong and the favourable verification results reveal the high practicality of such services.
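The Z-R conversion at the core of SWIRLS can be sketched by inverting Z = aR^b. The coefficients below (a = 200, b = 1.6) are illustrative Marshall-Palmer-type defaults; the operational system calibrates a and b dynamically against real-time rain-gauge data:

```python
def reflectivity_to_rain_rate(dbz, a=200.0, b=1.6):
    """Invert the Z-R relationship Z = a * R**b.

    dbz     : radar reflectivity in dBZ, so Z (mm^6 m^-3) = 10**(dbz/10)
    a, b    : illustrative Marshall-Palmer-type coefficients (assumed here)
    returns : rain rate R in mm/h
    """
    z_linear = 10.0 ** (dbz / 10.0)   # dBZ -> linear reflectivity
    return (z_linear / a) ** (1.0 / b)
```

With these coefficients, a 40 dBZ echo maps to roughly 11.5 mm/h; the dynamically calibrated operational values would give somewhat different rates.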

  11. Case studies of NOAA 6/TIROS N data impact on numerical weather forecasts

    NASA Technical Reports Server (NTRS)

    Druyan, L. M.; Alperson, Z.; Ben-Amram, T.

    1984-01-01

The impact of satellite temperatures from systems which predate the launching of the third generation of vertical sounding instruments aboard TIROS N (13 Oct 1978) and NOAA 6 (27 June 1979) is reported. The first evaluation of soundings from TIROS N found that oceanic, cloudy retrievals over NH mid-latitudes show a cold bias in winter; this is confirmed for both satellite systems using a larger database. It is shown that RMS differences between retrievals and colocated radiosonde observations within the swath 30-60N during the 1979-80 winter were generally 2-3 K in clear air and higher for cloudy columns. A positive impact of TIROS N temperatures on the analysis of synoptic weather systems is shown. Analyses prepared from only satellite temperatures seemed to give a better definition of weather systems' thermal structure than that provided by corresponding NMC analyses without satellite data. The results of a set of 14 numerical forecast experiments performed with the PE model of the Israel Meteorological Service (IMS) are summarized; these were designed to test the impact of TIROS N and NOAA 6 temperatures within the IMS analysis and forecast cycle. The satellite data coverage over the NH, the mean area/period S1 and RMS verification scores, and the spatial distribution of SAT versus NO SAT forecast differences are discussed, and it is concluded that positive forecast impact occurs over ocean areas where the extra data improve the specification that is otherwise available from conventional observations. The forecast impact for three cases from the same set of experiments was examined, and it is found that satellite temperatures observed over the Atlantic Ocean contribute to better forecasts over Iceland and central Europe, although a worse result was verified over Spain. It is also shown that the better scores of a forecast based also on satellite data and verified over North America actually represent a mixed impact on the forecast synoptic patterns.
A superior 48 hr 500 mb forecast over the western US due to the better initial specification afforded by satellite observed temperatures over the North Pacific Ocean is shown.

  12. A Bayesian modelling method for post-processing daily sub-seasonal to seasonal rainfall forecasts from global climate models and evaluation for 12 Australian catchments

    NASA Astrophysics Data System (ADS)

    Schepen, Andrew; Zhao, Tongtiegang; Wang, Quan J.; Robertson, David E.

    2018-03-01

    Rainfall forecasts are an integral part of hydrological forecasting systems at sub-seasonal to seasonal timescales. In seasonal forecasting, global climate models (GCMs) are now the go-to source for rainfall forecasts. For hydrological applications, however, GCM forecasts are often biased and unreliable in uncertainty spread, and calibration is therefore required before use. There are sophisticated statistical techniques for calibrating monthly and seasonal aggregations of the forecasts. However, calibration of seasonal forecasts at the daily time step typically uses very simple statistical methods or climate analogue methods. These methods generally lack the sophistication to achieve unbiased, reliable and coherent forecasts of daily amounts and seasonal accumulated totals. In this study, we propose and evaluate a Rainfall Post-Processing method for Seasonal forecasts (RPP-S), which is based on the Bayesian joint probability modelling approach for calibrating daily forecasts and the Schaake Shuffle for connecting the daily ensemble members of different lead times. We apply the method to post-process ACCESS-S forecasts for 12 perennial and ephemeral catchments across Australia and for 12 initialisation dates. RPP-S significantly reduces bias in raw forecasts and improves both skill and reliability. RPP-S forecasts are also more skilful and reliable than forecasts derived from ACCESS-S forecasts that have been post-processed using quantile mapping, especially for monthly and seasonal accumulations. Several opportunities to improve the robustness and skill of RPP-S are identified. The new RPP-S post-processed forecasts will be used in ensemble sub-seasonal to seasonal streamflow applications.
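The quantile-mapping benchmark that RPP-S is compared against can be illustrated with a minimal empirical version: each raw GCM rainfall value is replaced by the observed-climatology value at the same empirical quantile. This is a simplified sketch; operational schemes interpolate between ranks and treat the zero-rainfall probability mass explicitly.

```python
import numpy as np

def quantile_map(raw_value, model_climatology, obs_climatology):
    """Empirical quantile mapping for a single raw forecast value."""
    model_sorted = np.sort(model_climatology)
    # empirical quantile of the raw value within the model climatology
    q = np.searchsorted(model_sorted, raw_value) / len(model_sorted)
    q = min(max(q, 0.0), 1.0)
    # observed-climatology value at the same quantile
    return float(np.quantile(np.sort(obs_climatology), q))
```

For example, with a model climatology that is uniformly 10 units wetter than observations, a raw value of 60 maps back to roughly 50.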

  13. Improvement of Storm Forecasts Using Gridded Bayesian Linear Regression for Northeast United States

    NASA Astrophysics Data System (ADS)

    Yang, J.; Astitha, M.; Schwartz, C. S.

    2017-12-01

    Bayesian linear regression (BLR) is a post-processing technique in which regression coefficients are derived and used to correct raw forecasts based on pairs of observation-model values. This study presents the development and application of gridded Bayesian linear regression (GBLR) as a new post-processing technique to improve numerical weather prediction (NWP) rain and wind storm forecasts over the northeast United States. Ten controlled variables produced from ten ensemble members of the National Center for Atmospheric Research (NCAR) real-time prediction system are used for the GBLR model. In the GBLR framework, leave-one-storm-out cross-validation is utilized to study the performance of the post-processing technique on a database of 92 storms. To estimate the regression coefficients of the GBLR, optimization procedures that minimize the systematic and random error of predicted atmospheric variables (wind speed, precipitation, etc.) are implemented for the modeled-observed pairs of training storms. The regression coefficients calculated for meteorological stations of the National Weather Service are interpolated back to the model domain. An analysis of forecast improvements based on error reductions during the storms will demonstrate the value of the GBLR approach. This presentation will also illustrate how the variances are optimized for the training partition in GBLR and discuss the verification strategy for grid points where no observations are available. The new post-processing technique is successful in improving wind speed and precipitation storm forecasts using past event-based data and has the potential to be implemented in real-time.
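The leave-one-storm-out cross-validation at the heart of the GBLR evaluation can be sketched with an ordinary least-squares stand-in for the Bayesian regression (the Bayesian machinery, station-to-grid interpolation, and variance optimization are omitted; the storm IDs and the simple y = b0 + b1·x form are illustrative):

```python
import numpy as np

def leave_one_storm_out(model_vals, obs_vals):
    """model_vals, obs_vals: dicts mapping storm id -> sequence of paired
    model/observed values. For each held-out storm, fit y = b0 + b1*x on
    all remaining storms and return corrected forecasts for that storm."""
    corrected = {}
    for held_out in model_vals:
        # pool training pairs from every storm except the held-out one
        x = np.concatenate([np.asarray(v, float)
                            for k, v in model_vals.items() if k != held_out])
        y = np.concatenate([np.asarray(v, float)
                            for k, v in obs_vals.items() if k != held_out])
        b1, b0 = np.polyfit(x, y, 1)  # slope, intercept from training storms
        corrected[held_out] = b0 + b1 * np.asarray(model_vals[held_out], float)
    return corrected
```

Each storm is thus corrected only with coefficients estimated from the other storms, mimicking how an unseen event would be post-processed.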

  14. WOD - Weather On Demand forecasting system

    NASA Astrophysics Data System (ADS)

    Rognvaldsson, Olafur; Ragnarsson, Logi; Stanislawska, Karolina

    2017-04-01

    The backbone of the Belgingur forecasting system (called WOD - Weather On Demand) is the WRF-Chem atmospheric model, with a number of in-house customisations. Initial and boundary data are taken from the Global Forecast System, operated by the National Oceanic and Atmospheric Administration (NOAA). Operational forecasts use cycling of a number of parameters, mainly deep-soil and surface fields. This is done to minimise spin-up effects and to ensure proper book-keeping of hydrological fields such as snow accumulation and runoff, as well as of various chemical constituents. The WOD system can be used to create conventional short- to medium-range weather forecasts for any location on the globe. It can also be used for air quality purposes (e.g. dispersion forecasts from volcanic eruptions) and as a tool to provide input to other modelling systems, such as hydrological models. A wide variety of post-processing options are also available, making WOD an ideal tool for creating highly customised output tailored to the specific needs of individual end-users. The most recent addition to the WOD system is an integrated verification system in which forecasts can be compared to surface observations from chosen locations. Forecast visualisation, such as weather charts, meteograms, weather icons and tables, is done via a number of web components that can be configured to serve the varying needs of different end-users. The WOD system itself can be installed automatically on hardware running a range of Linux-based operating systems. System upgrades can also be done in a semi-automatic fashion, i.e. upgrades and/or bug fixes can be pushed to the end-user hardware without system downtime. Importantly, the WOD system requires only rudimentary knowledge of WRF modelling and Linux operating systems on the part of the end-user, making it an ideal NWP tool in locations with limited IT infrastructure.

  15. Using total precipitable water anomaly as a forecast aid for heavy precipitation events

    NASA Astrophysics Data System (ADS)

    VandenBoogart, Lance M.

    Heavy precipitation events are of interest to weather forecasters, local government officials, and the Department of Defense. These events can cause flooding which endangers lives and property. Military concerns include decreased trafficability for military vehicles, which hinders both war- and peace-time missions. Even in data-rich areas such as the United States, it is difficult to determine when and where a heavy precipitation event will occur. The challenges are compounded in data-denied regions. The hypothesis that total precipitable water anomaly (TPWA) will be positive and increasing preceding heavy precipitation events is tested in order to establish an understanding of TPWA evolution. Results are then used to create a precipitation forecast aid. The operational, 16 km-gridded, 6-hourly TPWA product developed at the Cooperative Institute for Research in the Atmosphere (CIRA) compares a blended TPW product with a TPW climatology to give a percent of normal TPWA value. TPWA evolution is examined for 84 heavy precipitation events which occurred between August 2010 and November 2011. An algorithm which uses various TPWA thresholds derived from the 84 events is then developed and tested using dichotomous contingency table verification statistics to determine the extent to which satellite-based TPWA might be used to aid in forecasting precipitation over mesoscale domains. The hypothesis of positive and increasing TPWA preceding heavy precipitation events is supported by the analysis. Event-average TPWA rises for 36 hours and peaks at 154% of normal at the event time. The average precipitation event detected by the forecast algorithm is not of sufficient magnitude to be termed a "heavy" precipitation event; however, the algorithm adds skill to a climatological precipitation forecast. Probability of detection is low and false alarm ratios are large, thus qualifying the algorithm's current use as an aid rather than a deterministic forecast tool. 
The algorithm's ability to be easily modified and quickly run gives it potential for future use in precipitation forecasting.
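The dichotomous contingency-table verification used to test the TPWA forecast aid can be computed directly from the four cell counts. A minimal sketch (the measure set follows standard verification practice; the counts in the usage example are hypothetical, not the thesis's results):

```python
def contingency_scores(hits, false_alarms, misses, correct_negatives):
    """Dichotomous verification measures from a 2x2 contingency table."""
    pod = hits / (hits + misses)                     # probability of detection
    far = false_alarms / (hits + false_alarms)       # false alarm ratio
    bias = (hits + false_alarms) / (hits + misses)   # frequency bias
    csi = hits / (hits + misses + false_alarms)      # critical success index
    return {"POD": pod, "FAR": far, "bias": bias, "CSI": csi}
```

For instance, 30 hits, 10 false alarms, 10 misses and 50 correct negatives give POD = 0.75 and FAR = 0.25; a low POD with a high FAR is exactly the regime described above, where the algorithm adds skill as an aid rather than as a deterministic tool.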

  16. An Event-Based Verification Scheme for the Real-Time Flare Detection System at Kanzelhöhe Observatory

    NASA Astrophysics Data System (ADS)

    Pötzi, W.; Veronig, A. M.; Temmer, M.

    2018-06-01

    In the framework of the Space Situational Awareness program of the European Space Agency (ESA/SSA), an automatic flare detection system was developed at Kanzelhöhe Observatory (KSO). The system has been in operation since mid-2013. The event detection algorithm was upgraded in September 2017, and all data back to 2014 were reprocessed using the new algorithm. In order to evaluate both algorithms, we apply verification measures that are commonly used for forecast validation. To overcome the problem of rare events, which biases the verification measures, we introduce a new event-based method: we divide the timeline of the Hα observations into positive events (flaring periods) and negative events (quiet periods), independent of the length of each event. In total, 329 positive and negative events were detected between 2014 and 2016. The new algorithm reached a hit rate of 96% (just five events were missed) and a false-alarm ratio of 17%. This is a significant improvement, as the original system had a hit rate of 85% and a false-alarm ratio of 33%. The true skill score and the Heidke skill score both reach values of 0.8 for the new algorithm; originally, they were at 0.5. The mean flare positions are accurate to within ±1 heliographic degree for both algorithms, and the peak times improve from a mean difference of 1.7 ± 2.9 minutes to 1.3 ± 2.3 minutes. The flare start times, which had been systematically late by about 3 minutes with the original algorithm, now match the visual inspection within -0.47 ± 4.10 minutes.
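The true skill statistic and Heidke skill score quoted for the KSO detection algorithms follow from the same 2×2 event table as the hit rate and false-alarm ratio. A sketch with the standard formulas (the counts in the test are hypothetical, not the KSO event statistics):

```python
def skill_scores(hits, false_alarms, misses, correct_negatives):
    """True skill statistic (Hanssen-Kuipers) and Heidke skill score
    from a 2x2 contingency table."""
    h, f, m, cn = hits, false_alarms, misses, correct_negatives
    tss = h / (h + m) - f / (f + cn)
    hss = 2 * (h * cn - m * f) / ((h + m) * (m + cn) + (h + f) * (f + cn))
    return tss, hss
```

Both scores are 1 for a perfect detector and 0 for one with no skill, which is why the jump from about 0.5 to 0.8 reported above marks a substantial improvement.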

  17. Water demand forecasting: review of soft computing methods.

    PubMed

    Ghalehkhondabi, Iman; Ardjmand, Ehsan; Young, William A; Weckman, Gary R

    2017-07-01

    Demand forecasting plays a vital role in resource management for governments and private companies. Considering the scarcity of water and its inherent constraints, demand management and forecasting in this domain are critically important. Several soft computing techniques have been developed over the last few decades for water demand forecasting. This study focuses on soft computing methods of water consumption forecasting published between 2005 and 2015. These methods include artificial neural networks (ANNs), fuzzy and neuro-fuzzy models, support vector machines, metaheuristics, and system dynamics. Furthermore, while ANNs have been superior in many short-term forecasting cases, it is still very difficult to pick a single method as the overall best. According to the literature, various methods and their hybrids are applied to water demand forecasting. However, it seems soft computing has a lot more to contribute to water demand forecasting. These contribution areas include, but are not limited to, various ANN architectures, unsupervised methods, deep learning, various metaheuristics, and ensemble methods. Moreover, it is found that soft computing methods are mainly used for short-term demand forecasting.

  18. Use of the LANDSAT-2 Data Collection System in the Colorado River Basin Weather Modification Program. [San Juan Mountains, Colorado

    NASA Technical Reports Server (NTRS)

    Kahan, A. M. (Principal Investigator)

    1975-01-01

    The author has identified the following significant results. The LANDSAT data collection system has proven itself to be a valuable tool for control of cloud seeding operations and for verification of weather forecasts. These platforms have proven to be reliable weather resistant units suitable for the collection of hydrometeorological data from remote severe weather environments. The detailed design of the wind speed and direction system and the wire-wrapping of the logic boards were completed.

  19. On the Mitigation of Solar Index Variability for High Precision Orbit Determination in Low Earth Orbit

    DTIC Science & Technology

    2016-09-16

    Astrodynamics Specialist Conference, No. AAS 15-752, American Astronautical Society, 2015. 3Center, N. S. W. P., “Estimated Ap Forecast Verification,” http...atmospheric density modeling,” AIAA/AAS astrodynamics specialist conference and exhibit , 2008, pp. 18–21. 6Marcos, F. A., “Accuracy of atmospheric... Specialist Conference and Exhibit, Honolulu, Hawaii , 2008. 17Tobiska, W. K., Bowman, B. R., and Bouwer, S. D., “Solar and Geomagnetic Indices for

  20. Prediction of a service demand using combined forecasting approach

    NASA Astrophysics Data System (ADS)

    Zhou, Ling

    2017-08-01

    Forecasting facilitates cutting operational and management costs while ensuring service levels for a logistics service provider. Our case study investigates how to forecast short-term logistics demand for a less-than-truckload (LTL) carrier. A combined approach draws on several forecasting methods simultaneously instead of a single method; it can offset the weakness of one method with the strength of another, improving prediction accuracy. The main issues in combined forecast modelling are how to select the methods to combine and how to determine the weight coefficients among them. The selection principles are that each method should suit the forecasting problem itself, and that the methods should differ from one another in their categorical features as much as possible. Based on these principles, exponential smoothing, ARIMA and a neural network are chosen to form the combined approach. A least-squares technique is then employed to settle the optimal weight coefficients among the forecasting methods. Simulation results show the advantage of the combined approach over the three single methods. The work helps managers select prediction methods in practice.
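The least-squares step that settles the weight coefficients can be sketched as an unconstrained linear least-squares fit of the observed series on the individual forecasts. This is a simplification: the paper's formulation may additionally constrain the weights (e.g. to sum to one), and the three method columns here are only placeholders for the exponential smoothing, ARIMA and neural network outputs.

```python
import numpy as np

def combination_weights(forecasts, observed):
    """forecasts: (n_samples, n_methods) array, one column per method.
    observed: (n_samples,) actual demand.
    Returns weights minimizing the squared error of the weighted sum."""
    w, *_ = np.linalg.lstsq(np.asarray(forecasts, float),
                            np.asarray(observed, float), rcond=None)
    return w

def combined_forecast(forecasts, weights):
    """Weighted combination of the individual method forecasts."""
    return np.asarray(forecasts, float) @ weights
```

Given a history of paired forecasts and observations, the recovered weights are then applied to future forecasts from the individual methods.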

  1. Research on key technology of the verification system of steel rule based on vision measurement

    NASA Astrophysics Data System (ADS)

    Jia, Siyuan; Wang, Zhong; Liu, Changjie; Fu, Luhua; Li, Yiming; Lu, Ruijun

    2018-01-01

    The steel rule plays an important role in quantity transmission. However, the traditional verification method for steel rules, based on manual operation and reading, suffers from low precision and low efficiency. A machine-vision-based verification system for steel rules is designed with reference to JJG1-1999, Verification Regulation of Steel Rule [1]. What differentiates this system is that it uses a new calibration method for pixel equivalent and decontaminates the surface of the steel rule. Experiments show that these two methods fully meet the requirements of the verification system. Measuring results strongly prove that these methods not only meet the precision requirements of the verification regulation, but also improve the reliability and efficiency of the verification process.

  2. An overview of health forecasting.

    PubMed

    Soyiri, Ireneous N; Reidpath, Daniel D

    2013-01-01

    Health forecasting is a novel area of forecasting, and a valuable tool for predicting future health events or situations such as demands for health services and healthcare needs. It facilitates preventive medicine and health care intervention strategies, by pre-informing health service providers to take appropriate mitigating actions to minimize risks and manage demand. Health forecasting requires reliable data, information and appropriate analytical tools for the prediction of specific health conditions or situations. There is no single approach to health forecasting, and so various methods have often been adopted to forecast aggregate or specific health conditions. Meanwhile, there are no defined health forecasting horizons (time frames) to match the choices of health forecasting methods/approaches that are often applied. The key principles of health forecasting have also not been adequately described to guide the process. This paper provides a brief introduction and theoretical analysis of health forecasting. It describes the key issues that are important for health forecasting, including: definitions, principles of health forecasting, and the properties of health data, which influence the choices of health forecasting methods. Other matters related to the value of health forecasting, and the general challenges associated with developing and using health forecasting services, are discussed. This overview is a stimulus for further discussions on standardizing health forecasting approaches and methods that will facilitate health care and health services delivery.

  3. A framework for probabilistic pluvial flood nowcasting for urban areas

    NASA Astrophysics Data System (ADS)

    Ntegeka, Victor; Murla, Damian; Wang, Lipen; Foresti, Loris; Reyniers, Maarten; Delobbe, Laurent; Van Herk, Kristine; Van Ootegem, Luc; Willems, Patrick

    2016-04-01

    Pluvial flood nowcasting is gaining ground not least because of the advancements in rainfall forecasting schemes. Short-term forecasts and applications have benefited from the availability of such forecasts with high resolution in space (~1 km) and time (~5 min). In this regard, it is vital to evaluate the potential of nowcasting products for urban inundation applications. One of the most advanced Quantitative Precipitation Forecasting (QPF) techniques is the Short-Term Ensemble Prediction System (STEPS), which was originally co-developed by the UK Met Office and the Australian Bureau of Meteorology. The scheme was further tuned to better estimate extreme and moderate events for the Belgian area (STEPS-BE). Against this backdrop, a probabilistic framework has been developed that consists of: (1) rainfall nowcasts; (2) a sewer hydraulic model; (3) flood damage estimation; and (4) urban inundation risk mapping. STEPS-BE forecasts are provided at high resolution (1 km/5 min) with 20 ensemble members and a lead time of up to 2 hours, using a composite of four C-band radars as input. Forecast verification was performed over the cities of Leuven and Ghent, and biases were found to be small. The hydraulic model consists of the 1D sewer network and an innovative 'nested' 2D surface model to simulate urban surface inundation at high resolution. The surface components are categorized into three groups, and each group is modelled using triangular meshes at different resolutions; these include streets (3.75 - 15 m²), high flood hazard areas (12.5 - 50 m²) and low flood hazard areas (75 - 300 m²). Functions describing urban flood damage and social consequences were empirically derived from questionnaires to people in the region who were recently affected by sewer floods. Probabilistic urban flood risk maps were prepared based on spatial interpolation techniques of flood inundation.
The method has been implemented and tested for the villages Oostakker and Sint-Amandsberg, which are part of the larger city of Ghent, Belgium. After each of the above-mentioned components was evaluated, they were combined and tested for recent historical flood events. The rainfall nowcasting, hydraulic sewer and 2D inundation modelling, and socio-economic flood risk results could each be partly evaluated: the rainfall nowcasting results against radar data and rain gauges; the hydraulic sewer model results against water level and discharge data at pumping stations; the 2D inundation modelling results against limited data on some recent flood locations and inundation depths; and the results for the socio-economic flood consequences of the most extreme events against claims in the database of the national disaster agency. Different methods for visualization of the probabilistic inundation results are proposed and tested.

  4. Use of Air Quality Observations by the National Air Quality Forecast Capability

    NASA Astrophysics Data System (ADS)

    Stajner, I.; McQueen, J.; Lee, P.; Stein, A. F.; Kondragunta, S.; Ruminski, M.; Tong, D.; Pan, L.; Huang, J. P.; Shafran, P.; Huang, H. C.; Dickerson, P.; Upadhayay, S.

    2015-12-01

    The National Air Quality Forecast Capability (NAQFC) operational predictions of ozone and wildfire smoke for the United States (U.S.) and predictions of airborne dust for continental U.S. are available at http://airquality.weather.gov/. NOAA National Centers for Environmental Prediction (NCEP) operational North American Mesoscale (NAM) weather predictions are combined with the Community Multiscale Air Quality (CMAQ) model to produce the ozone predictions and test fine particulate matter (PM2.5) predictions. The Hybrid Single Particle Lagrangian Integrated Trajectory (HYSPLIT) model provides smoke and dust predictions. Air quality observations constrain emissions used by NAQFC predictions. NAQFC NOx emissions from mobile sources were updated using National Emissions Inventory (NEI) projections for year 2012. These updates were evaluated over large U.S. cities by comparing observed changes in OMI NO2 observations and NOx measured by surface monitors. The rate of decrease in NOx emission projections from year 2005 to year 2012 is in good agreement with the observed changes over the same period. Smoke emissions rely on the fire locations detected from satellite observations obtained from NESDIS Hazard Mapping System (HMS). Dust emissions rely on a climatology of areas with a potential for dust emissions based on MODIS Deep Blue aerosol retrievals. Verification of NAQFC predictions uses AIRNow compilation of surface measurements for ozone and PM2.5. Retrievals of smoke from GOES satellites are used for verification of smoke predictions. Retrievals of dust from MODIS are used for verification of dust predictions. In summary, observations are the basis for the emissions inputs for NAQFC, they are critical for evaluation of performance of NAQFC predictions, and furthermore they are used in real-time testing of bias correction of PM2.5 predictions, as we continue to work on improving modeling and emissions important for representation of PM2.5.

  5. Bayesian Processor of Output for Probabilistic Quantitative Precipitation Forecasting

    NASA Astrophysics Data System (ADS)

    Krzysztofowicz, R.; Maranzano, C. J.

    2006-05-01

    The Bayesian Processor of Output (BPO) is a new, theoretically based technique for probabilistic forecasting of weather variates. It processes output from a numerical weather prediction (NWP) model and optimally fuses it with climatic data in order to quantify uncertainty about a predictand. The BPO is being tested by producing Probabilistic Quantitative Precipitation Forecasts (PQPFs) for a set of climatically diverse stations in the contiguous U.S. For each station, the PQPFs are produced for the same 6-h, 12-h, and 24-h periods up to 84 h ahead for which operational forecasts are produced by the AVN-MOS (the Model Output Statistics technique applied to output fields from the Global Spectral Model run under the code name AVN). The inputs into the BPO are estimated as follows. The prior distribution is estimated from a (relatively long) climatic sample of the predictand; this sample is retrieved from the archives of the National Climatic Data Center. The family of likelihood functions is estimated from a (relatively short) joint sample of the predictor vector and the predictand; this sample is retrieved from the same archive that the Meteorological Development Laboratory of the National Weather Service utilized to develop the AVN-MOS system. This talk gives a tutorial introduction to the principles and procedures behind the BPO, and highlights some results from the testing: a numerical example of the estimation of the BPO, and a comparative verification of the BPO forecasts and the MOS forecasts. It concludes with a list of demonstrated attributes of the BPO (vis-à-vis the MOS): more parsimonious definitions of predictors, more efficient extraction of predictive information, better representation of the distribution function of the predictand, and equal or better performance (in terms of calibration and informativeness).
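The fusion of a climatic prior with NWP model output that underlies the BPO can be illustrated in the simplest conjugate case: a normal prior updated by a normally distributed model-output predictor. This toy normal-normal case only shows the direction of the update; the operational BPO works with meta-Gaussian likelihood models fitted to the MOS development sample, not with this formula.

```python
def bayes_fuse_normal(prior_mean, prior_var, nwp_value, nwp_error_var):
    """Normal-normal Bayesian update: climatic prior N(prior_mean, prior_var)
    combined with an NWP-based predictor of error variance nwp_error_var.
    Returns the posterior mean and variance of the predictand."""
    w = prior_var / (prior_var + nwp_error_var)   # weight on the NWP value
    post_mean = prior_mean + w * (nwp_value - prior_mean)
    post_var = prior_var * nwp_error_var / (prior_var + nwp_error_var)
    return post_mean, post_var
```

A sharp predictor (small nwp_error_var) pulls the posterior toward the model output, while a vague one leaves the climatology nearly unchanged; in either case the posterior variance is smaller than the prior variance, reflecting the information gained.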

  6. Application of the Haines Index in the fire warning system

    NASA Astrophysics Data System (ADS)

    Kalin, Lovro; Marija, Mokoric; Tomislav, Kozaric

    2016-04-01

    Croatia, like all Mediterranean countries, is strongly affected by large wildfires, particularly in the coastal region. In the last two decades the number and intensity of fires have increased significantly, which is widely attributed to climate change and global warming. More extreme fires are observed, and the fire-fighting season has expanded into June and September. Meteorological support for fire protection and planning is therefore even more important. At the Meteorological and Hydrological Service of Croatia, a comprehensive monitoring and warning system has been established. It includes standard components, such as short-term forecasts of the Fire Weather Index (FWI), as well as long-range forecasts. However, due to more frequent hot and dry seasons, the FWI often provides no additional information about extremely high fire danger, since it regularly takes the highest values for long periods. Therefore, additional tools have been investigated. One widely used meteorological product is the Haines Index (HI). It provides information on potential fire growth, taking into account only the vertical instability of the atmosphere and not the state of the fuel. Several analyses and studies carried out at the Service confirmed the correlation of high HI values with large and extreme fires. The Haines Index forecast has been used at the Service for several years, employing the European Centre for Medium-Range Weather Forecasts (ECMWF) global prediction model as well as the limited-area Aladin model. Verification results show that these forecasts are reliable when compared to radiosonde measurements. These results motivated the introduction of additional fire warnings, which are issued by the Service's Forecast Department.

  7. The impact of water vapor assimilation on quantitative precipitation forecast over the Washington, DC metropolitan area

    NASA Astrophysics Data System (ADS)

    Walford, Segayle Cereta

    Forecasting subtle, small-scale convective cases in both winter and summer is an ongoing challenge in weather forecasting. Recent studies have shown that better characterization of moisture structure within the boundary layer is crucial for improving forecasting skill, particularly quantitative precipitation forecasting (QPF). Lidars, which take high-temporal-resolution observations of moisture, are able to capture very detailed structures, especially within the boundary layer where convection often begins. This study first investigates the extent to which an aerosol lidar and a water vapor lidar are able to capture key boundary layer processes necessary for the development of convection. The results of this preliminary study show that the water vapor lidar is best able to capture the small-scale water vapor variability that is necessary for the development of convection. These results are then used to investigate the impacts of assimilating moisture from the Howard University Raman Lidar (HURL) for one mesoscale convective case, July 27-28, 2006. The data for this case are from the Water Vapor Validation Experiment-Satellite and Sondes (WAVES) field campaign located at the Howard University Beltsville Site (HUBS) in Beltsville, MD. Specifically, lidar-based water vapor mixing ratio profiles are assimilated into the Weather Research and Forecasting (WRF) regional model at 4 km grid resolution over Washington, DC. Model verification is conducted using the Meteorological Evaluation Tool (MET), and the results from the lidar run are compared to a control (no assimilation) run. The findings indicate that quantitative conclusions cannot be drawn from this one case study. However, qualitatively, the assimilation of the lidar observations improved the equivalent potential temperature and water vapor distribution of the region. This difference changed the location, strength, and spatial coverage of the convective system over the HUBS region.

  8. A hybrid approach EMD-HW for short-term forecasting of daily stock market time series data

    NASA Astrophysics Data System (ADS)

    Awajan, Ahmad Mohd; Ismail, Mohd Tahir

    2017-08-01

    Recently, forecasting time series has attracted considerable attention in the field of financial time series analysis, specifically for stock market indices. Stock market forecasting is a challenging area of financial time-series forecasting. In this study, a hybrid methodology combining Empirical Mode Decomposition with the Holt-Winters method (EMD-HW) is used to improve forecasting performance for financial time series. The strength of EMD-HW lies in its ability to forecast non-stationary and non-linear time series without the need for any transformation method. Moreover, EMD-HW has relatively high accuracy and offers a new forecasting method for time series. Daily stock market time series data from 11 countries are used to demonstrate the forecasting performance of the proposed EMD-HW. Based on three forecast accuracy measures, the results indicate that the forecasting performance of EMD-HW is superior to that of the traditional Holt-Winters method.
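
The hybrid idea is to decompose the series into components, forecast each component separately, and sum the component forecasts. A minimal sketch in that spirit: Holt's linear method (Holt-Winters without the seasonal term) applied to a two-component split, where a moving average stands in for the EMD step (in practice a package such as PyEMD would supply the intrinsic mode functions); the smoothing parameters are illustrative:

```python
def holt(series, alpha=0.5, beta=0.3, h=1):
    """Holt's linear method: level + trend smoothing, h-step-ahead forecast.
    Assumes at least two observations."""
    level, trend = series[0], series[1] - series[0]
    for x in series[1:]:
        prev = level
        level = alpha * x + (1 - alpha) * (level + trend)
        trend = beta * (level - prev) + (1 - beta) * trend
    return level + h * trend

def emd_hw_sketch(series, window=5):
    """Hybrid sketch: split the series into a smooth component (moving
    average, standing in for EMD) and a residual, forecast each with Holt,
    and recombine. An illustration of the idea, not the paper's algorithm."""
    smooth = []
    for i in range(len(series)):
        chunk = series[max(0, i - window + 1):i + 1]
        smooth.append(sum(chunk) / len(chunk))
    resid = [x - s for x, s in zip(series, smooth)]
    return holt(smooth) + holt(resid)

f = emd_hw_sketch([float(i) for i in range(1, 21)])
```

On an exactly linear series, Holt's method reproduces the trend and extrapolates it without error, which is why it is a sensible component-level forecaster here.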

  9. Operational value of ensemble streamflow forecasts for hydropower production: A Canadian case study

    NASA Astrophysics Data System (ADS)

    Boucher, Marie-Amélie; Tremblay, Denis; Luc, Perreault; François, Anctil

    2010-05-01

    Ensemble and probabilistic forecasts have many advantages over deterministic ones, both in meteorology and hydrology (e.g. Krzysztofowicz, 2001). Mainly, they inform the user about the uncertainty attached to the forecast. It has been argued that such additional information could lead to improved decision making (e.g. Wilks and Hamill, 1995; Mylne, 2002; Roulin, 2007), but very few studies concentrate on operational situations involving the use of such forecasts. In addition, many authors have demonstrated that ensemble forecasts outperform deterministic forecasts (e.g. Jaun et al., 2008; Velazquez et al., 2009; Laio and Tamea, 2007). However, such performance is mostly assessed with numerical scoring rules, which compare the forecasts to the observations, and seldom in terms of management gains. The proposed case study adopts an operational point of view, on the basis that a novel forecasting system has value only if it leads to increased monetary and societal gains (e.g. Murphy, 1994; Laio and Tamea, 2007). More specifically, Environment Canada operational ensemble precipitation forecasts are used to drive the HYDROTEL distributed hydrological model (Fortin et al., 1995), calibrated on the Gatineau watershed located in Québec, Canada. The resulting hydrological ensemble forecasts are then incorporated into Hydro-Québec's SOHO stochastic management optimization tool, which automatically searches for optimal operating decisions for all reservoirs and hydropower plants located on the basin. The timeline of the study is the fall of 2003. This period is especially relevant because heavy precipitation nearly caused a major spill and forced the preventive evacuation of a portion of the population located near one of the dams. 
We show that the use of the ensemble forecasts would have reduced the occurrence of spills and flooding, which is of particular importance for dams located in populated areas, and increased hydropower production. The ensemble precipitation forecasts extend from March 1st of 2002 to December 31st of 2003. They were obtained using two atmospheric models, SEF (8 members plus the control deterministic forecast) and GEM (8 members). The corresponding deterministic precipitation forecast issued by the SEF model is also used within HYDROTEL in order to compare ensemble streamflow forecasts with their deterministic counterparts. Although this study does not incorporate all the sources of uncertainty, precipitation is certainly the most important input for hydrological modeling and conveys a great portion of the total uncertainty. References: Fortin, J.P., Moussa, R., Bocquillon, C. and Villeneuve, J.P. 1995: HYDROTEL, un modèle hydrologique distribué pouvant bénéficier des données fournies par la télédétection et les systèmes d'information géographique, Revue des Sciences de l'Eau, 8(1), 94-124. Jaun, S., Ahrens, B., Walser, A., Ewen, T. and Schaer, C. 2008: A probabilistic view on the August 2005 floods in the upper Rhine catchment, Natural Hazards and Earth System Sciences, 8 (2), 281-291. Krzysztofowicz, R. 2001: The case for probabilistic forecasting in hydrology, Journal of Hydrology, 249, 2-9. Murphy, A.H. 1994: Assessing the economic value of weather forecasts: An overview of methods, results and issues, Meteorological Applications, 1, 69-73. Mylne, K.R. 2002: Decision-Making from probability forecasts based on forecast value, Meteorological Applications, 9, 307-315. Laio, F. and Tamea, S. 2007: Verification tools for probabilistic forecasts of continuous hydrological variables, Hydrology and Earth System Sciences, 11, 1267-1277. Roulin, E. 
2007: Skill and relative economic value of medium-range hydrological ensemble predictions, Hydrology and Earth System Sciences, 11, 725-737. Velazquez, J.-A., Petit, T., Lavoie, A., Boucher, M.-A., Turcotte, R., Fortin, V. and Anctil, F. 2009: An evaluation of the Canadian global meteorological ensemble prediction system for short-term hydrological forecasting, Hydrology and Earth System Sciences, 13(11), 2221-2231. Wilks, D.S. and Hamill, T.M. 1995: Potential economic value of ensemble-based surface weather forecasts, Monthly Weather Review, 123(12), 3565-3575.
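
The numerical scoring rules mentioned above can be made concrete with the continuous ranked probability score (CRPS), a standard measure for comparing ensemble forecasts against observations (verification tools of this kind are reviewed by Laio and Tamea, 2007). A minimal empirical-CRPS sketch for a single ensemble-observation pair:

```python
def crps_ensemble(members, obs):
    """Empirical continuous ranked probability score for one forecast case:
    CRPS = E|X - y| - 0.5 * E|X - X'|, expectations taken over members."""
    n = len(members)
    mae = sum(abs(x - obs) for x in members) / n
    spread = sum(abs(a - b) for a in members for b in members) / (n * n)
    return mae - 0.5 * spread
```

For a single-member "ensemble" the score reduces to the absolute error, so ensemble streamflow forecasts and their deterministic counterparts can be compared on a common scale.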

  10. Comparison of Adaline and Multiple Linear Regression Methods for Rainfall Forecasting

    NASA Astrophysics Data System (ADS)

    Sutawinaya, IP; Astawa, INGA; Hariyanti, NKD

    2018-01-01

    Heavy rainfall can cause disasters, so forecasts of rainfall intensity are needed. The main cause of flooding is high rainfall intensity that pushes a river beyond its capacity, flooding the surrounding area. Rainfall is a dynamic factor, which makes it very interesting to study. To support rainfall forecasting, methods ranging from Artificial Intelligence (AI) to statistics can be used. In this research, we used Adaline as the AI method and regression as the statistical method. The more accurate the forecast results, the better the method is for rainfall forecasting. Through these methods, we aim to determine which is the best method for rainfall forecasting in this setting.
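
Adaline (adaptive linear neuron) fits the same linear model as regression, but iteratively, with the Widrow-Hoff least-mean-squares rule; multiple linear regression solves for the coefficients in closed form. A minimal Adaline sketch (learning rate, epoch count, and the synthetic data are illustrative):

```python
def train_adaline(X, y, lr=0.1, epochs=2000):
    """Adaline: a linear unit trained with the LMS (Widrow-Hoff) rule.
    Weights are updated after each sample from the raw (pre-threshold) error."""
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            out = sum(wj * xj for wj, xj in zip(w, xi)) + b
            err = yi - out
            w = [wj + lr * err * xj for wj, xj in zip(w, xi)]
            b += lr * err
    return w, b

# Synthetic example: recover y = 2x + 1 from five points
X = [[0.0], [0.25], [0.5], [0.75], [1.0]]
y = [2.0 * row[0] + 1.0 for row in X]
w, b = train_adaline(X, y)
```

With rainfall data, X would hold the (scaled) predictor variables and y the observed rainfall intensity; the comparison with regression then comes down to forecast accuracy on held-out data.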

  11. Influence of the Redundant Verification and the Non-Redundant Verification on the Hydraulic Tomography

    NASA Astrophysics Data System (ADS)

    Wei, T. B.; Chen, Y. L.; Lin, H. R.; Huang, S. Y.; Yeh, T. C. J.; Wen, J. C.

    2016-12-01

    In groundwater studies, hydraulic tomography (HT) based on field-site pumping tests is widely used to estimate the heterogeneous spatial distribution of hydraulic properties, and such inverse analyses have shown that most field-site aquifers have heterogeneous parameter fields. Huang et al. [2011] used a non-redundant verification analysis, in which the pumping-well locations were changed and the observation wells were fixed, in both the inverse and forward analyses to demonstrate the feasibility of estimating the heterogeneous spatial distribution of hydraulic properties of a field-site aquifer with a steady-state model. The literature to date, however, covers only the steady-state, non-redundant case of changed pumping-well locations and fixed observation wells; the various combinations of fixed or changed pumping wells with fixed observation wells (redundant verification) or changed observation wells (non-redundant verification), and their influence on the hydraulic tomography method, have not yet been explored. In this study, we carried out both the redundant and the non-redundant verification methods in forward analyses to examine their influence on hydraulic tomography in the transient case. We applied the methods to an actual case at the NYUST campus site to demonstrate the effectiveness of the hydraulic tomography method and to confirm the feasibility of the inverse and forward analyses. Keywords: Hydraulic Tomography, Redundant Verification, Heterogeneous, Inverse, Forward

  12. Survey of Verification and Validation Techniques for Small Satellite Software Development

    NASA Technical Reports Server (NTRS)

    Jacklin, Stephen A.

    2015-01-01

    The purpose of this paper is to provide an overview of the current trends and practices in small-satellite software verification and validation. This document is not intended to promote a specific software assurance method. Rather, it seeks to present an unbiased survey of software assurance methods used to verify and validate small satellite software and to make mention of the benefits and value of each approach. These methods include simulation and testing, verification and validation with model-based design, formal methods, and fault-tolerant software design with run-time monitoring. Although the literature reveals that simulation and testing has by far the longest legacy, model-based design methods are proving to be useful for software verification and validation. Some work in formal methods, though not widely used for any satellites, may offer new ways to improve small satellite software verification and validation. These methods need to be further advanced to deal with the state explosion problem and to make them more usable by small-satellite software engineers to be regularly applied to software verification. Last, it is explained how run-time monitoring, combined with fault-tolerant software design methods, provides an important means to detect and correct software errors that escape the verification process or those errors that are produced after launch through the effects of ionizing radiation.

  13. Evaluating the Contribution of NASA Remotely-Sensed Data Sets on a Convection-Allowing Forecast Model

    NASA Technical Reports Server (NTRS)

    Zavodsky, Bradley T.; Case, Jonathan L.; Molthan, Andrew L.

    2012-01-01

    The Short-term Prediction Research and Transition (SPoRT) Center is a collaborative partnership between NASA and operational forecasting partners, including a number of National Weather Service forecast offices. SPoRT provides real-time NASA products and capabilities to help its partners address specific operational forecast challenges. One challenge that forecasters face is using guidance from local and regional deterministic numerical models configured at convection-allowing resolution to help assess a variety of mesoscale/convective-scale phenomena such as sea-breezes, local wind circulations, and mesoscale convective weather potential on a given day. While guidance from convection-allowing models has proven valuable in many circumstances, the potential exists for model improvements by incorporating more representative land-water surface datasets, and by assimilating retrieved temperature and moisture profiles from hyper-spectral sounders. In order to help increase the accuracy of deterministic convection-allowing models, SPoRT produces real-time, 4-km CONUS forecasts using a configuration of the Weather Research and Forecasting (WRF) model (hereafter SPoRT-WRF) that includes unique NASA products and capabilities including 4-km resolution soil initialization data from the Land Information System (LIS), 2-km resolution SPoRT SST composites over oceans and large water bodies, high-resolution real-time Green Vegetation Fraction (GVF) composites derived from the Moderate-resolution Imaging Spectroradiometer (MODIS) instrument, and retrieved temperature and moisture profiles from the Atmospheric Infrared Sounder (AIRS) and Infrared Atmospheric Sounding Interferometer (IASI). NCAR's Model Evaluation Tools (MET) verification package is used to generate statistics of model performance compared to in situ observations and rainfall analyses for three months during the summer of 2012 (June-August). 
Detailed analyses of specific severe weather outbreaks during the summer will be presented to assess the potential added value of the SPoRT datasets and data assimilation methodology compared to a WRF configuration without the unique datasets and data assimilation.
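
Verification packages such as MET report, among other things, the standard 2x2 contingency-table measures for dichotomous events (hits, misses, false alarms, correct negatives). A minimal sketch of the common scores, with invented counts:

```python
def dichotomous_scores(hits, misses, false_alarms, correct_negatives):
    """Standard 2x2 contingency-table verification measures."""
    n = hits + misses + false_alarms + correct_negatives
    return {
        "bias": (hits + false_alarms) / (hits + misses),   # frequency bias
        "pod": hits / (hits + misses),                     # prob. of detection
        "far": false_alarms / (hits + false_alarms),       # false alarm ratio
        "csi": hits / (hits + misses + false_alarms),      # critical success index
        "pc": (hits + correct_negatives) / n,              # proportion correct
    }

scores = dichotomous_scores(8, 2, 4, 86)   # illustrative counts only
```

Applied to, say, a rainfall-exceedance threshold on the 4-km SPoRT-WRF grid versus a rainfall analysis, these scores summarize how over- or under-forecast (bias) and how accurate (POD, FAR, CSI) the configuration is.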

  14. The Impact of Atmospheric InfraRed Sounder (AIRS) Profiles on Short-term Weather Forecasts

    NASA Technical Reports Server (NTRS)

    Chou, Shih-Hung; Zavodsky, Brad; Jedlovec, Gary J.; Lapenta, William

    2007-01-01

    The Atmospheric Infrared Sounder (AIRS), together with the Advanced Microwave Sounding Unit (AMSU), represents one of the most advanced space-based atmospheric sounding systems. The combined AIRS/AMSU system provides radiance measurements used to retrieve temperature profiles with an accuracy of 1 K over 1-km layers under both clear and partly cloudy conditions, while the accuracy of the derived humidity profiles is 15% in 2-km layers. Critical to the successful use of AIRS profiles for weather and climate studies is the use of the profile quality indicators and error estimates provided with each profile. Aside from monitoring changes in Earth's climate, one of the objectives of AIRS is to provide sounding information of sufficient accuracy that the assimilation of the new observations, especially in data-sparse regions, will lead to an improvement in weather forecasts. The purpose of this paper is to describe a procedure to optimally assimilate high-resolution AIRS profile data in a regional analysis/forecast model. The paper will focus on the impact of AIRS profiles on a rapidly developing east coast storm and will also discuss preliminary results for a 30-day forecast period, simulating a quasi-operational environment. Temperature and moisture profiles were obtained from the prototype version 5.0 EOS science team retrieval algorithm, which includes explicit error information for each profile. The error profile information was used to select the highest-quality temperature and moisture data for every profile location and pressure level for assimilation into the ARPS Data Analysis System (ADAS). The AIRS-enhanced analyses were used as initial fields for the Weather Research and Forecasting (WRF) system used by the SPoRT project for regional weather forecast studies. The ADAS-WRF system will be run on a CONUS domain with an emphasis on the east coast. 
The preliminary assessment of the impact of the AIRS profiles will focus on quality control issues associated with AIRS, intelligent use of the quality indicators, and forecast verification.

  15. Evaluation of Regional Extended-Range Prediction for Tropical Waves Using COAMPS®

    NASA Astrophysics Data System (ADS)

    Hong, X.; Reynolds, C. A.; Doyle, J. D.; May, P. W.; Chen, S.; Flatau, M. K.; O'Neill, L. W.

    2014-12-01

    The Navy's Coupled Ocean/Atmosphere Mesoscale Prediction System (COAMPS1) in a two-way coupled mode is used for two-month regional extended-range predictions of the Madden-Julian Oscillation (MJO) and Tropical Cyclone 05 (TC05), which occurred during the DYNAMO period from November to December 2011. Verification and statistics from two experiments with 45-km and 27-km horizontal resolutions indicate that the 27-km run provides a better representation of the three MJO events that occurred during this two-month period, including the two convectively coupled Kelvin waves associated with the second MJO event, as observed. The 27-km run also significantly reduces forecast error after 15 days, reaching a maximum bias reduction of 89% in the third 15-day period due to the well-represented MJO propagation over the Maritime Continent. Correlations between the model forecasts and observations or ECMWF analyses show that the MJO-suppressed period is more difficult to predict than the active period. In addition, correlation coefficients for cloud liquid water path (CLWP) and precipitation are relatively low in both cases compared to other variables. The study suggests that good simulations of TC05 and of the Kelvin waves and westerly wind bursts are linked. Further research is needed to investigate the capability of regional extended-range forecasts when the lateral boundary conditions are provided by a long-term global forecast, to allow an assessment of potential operational forecast skill. _____________________________________________________ 1COAMPS is a registered trademark of U.S. Naval Research Laboratory

  16. The implementation of reverse Kessler warm rain scheme for radar reflectivity assimilation using a nudging approach in New Zealand

    NASA Astrophysics Data System (ADS)

    Zhang, Sijin; Austin, Geoff; Sutherland-Stacey, Luke

    2014-05-01

    Reverse Kessler warm rain processes were implemented within the Weather Research and Forecasting (WRF) model and coupled with a Newtonian relaxation, or nudging, technique designed to improve quantitative precipitation forecasting (QPF) in New Zealand by making use of observed radar reflectivity and modest computing facilities. One reason for developing such a scheme, rather than using 4D-Var for example, is that variational radar assimilation schemes in general, and 4D-Var in particular, require computational resources beyond the capability of most university groups and indeed some national forecasting centres of small countries like New Zealand. The new scheme adjusts the model water vapor mixing ratio profiles based on observed reflectivity at each time step within an assimilation time window. The scheme can be divided into the following steps: (i) the radar reflectivity is first converted to rain water; (ii) the rain water is then used to derive the cloud water content according to the reverse Kessler scheme; (iii) the water vapor mixing ratio associated with the cloud water content is then calculated based on the saturation adjustment processes; (iv) finally, the adjusted water vapor is nudged into the model and the model background is updated. Thirteen rainfall cases from the summer of 2011/2012 in New Zealand were used to evaluate the new scheme; the forecast scores showed that the new scheme was able to improve precipitation forecasts on average up to around 7 hours ahead, depending on the verification threshold.
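
Step (i) above is a standard Z-R inversion. A minimal sketch using the Marshall-Palmer power law (Z = 200 R^1.6), chosen here for illustration since the paper's exact coefficients are not stated:

```python
def dbz_to_rainrate(dbz, a=200.0, b=1.6):
    """Invert a Z-R power law Z = a * R**b to get rain rate R (mm/h).
    Marshall-Palmer coefficients shown; operational values vary by radar
    and rain regime."""
    z = 10.0 ** (dbz / 10.0)        # dBZ -> linear reflectivity factor (mm^6 m^-3)
    return (z / a) ** (1.0 / b)
```

The rain rate (or, with an additional relation, the rain water mixing ratio) then feeds the reverse Kessler step (ii), and saturation adjustment in step (iii) converts the implied cloud water into the water vapor increment that is nudged into the model.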

  17. Development and verification of a new wind speed forecasting system using an ensemble Kalman filter data assimilation technique in a fully coupled hydrologic and atmospheric model

    NASA Astrophysics Data System (ADS)

    Williams, John L.; Maxwell, Reed M.; Monache, Luca Delle

    2013-12-01

    Wind power is rapidly gaining prominence as a major source of renewable energy. Harnessing this promising energy source is challenging because of the chaotic and inherently intermittent nature of wind. Accurate forecasting tools are critical to support the integration of wind energy into power grids and to maximize its impact on renewable energy portfolios. We have adapted the Data Assimilation Research Testbed (DART), a community software facility that includes the ensemble Kalman filter (EnKF) algorithm, to expand our capability to use observational data to improve forecasts produced with a fully coupled hydrologic and atmospheric modeling system: the ParFlow (PF) hydrologic model and the Weather Research and Forecasting (WRF) mesoscale atmospheric model, coupled via mass and energy fluxes across the land surface and resulting in the PF.WRF model. Numerous studies have shown that soil moisture distribution and land surface vegetative processes profoundly influence atmospheric boundary layer development and weather processes on local and regional scales. We have used the PF.WRF model to explore the connections between the land surface and the atmosphere in terms of land surface energy flux partitioning and coupled variable fields including hydraulic conductivity, soil moisture, and wind speed, and demonstrated that reductions in uncertainty in these coupled fields realized through assimilation of soil moisture observations propagate through the hydrologic and atmospheric system. The sensitivities found in this study will enable further studies to optimize observation strategies to maximize the utility of the PF.WRF-DART forecasting system.
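
The EnKF analysis step at the heart of DART can be illustrated for a scalar, directly observed state: the ensemble variance sets the Kalman gain, and each member is nudged toward a perturbed copy of the observation. A minimal sketch of the stochastic, perturbed-observation variant (DART also offers deterministic square-root filters; the numbers below are synthetic):

```python
import random

def enkf_update(ensemble, obs, obs_err_var):
    """Stochastic EnKF analysis step for a scalar, directly observed state."""
    n = len(ensemble)
    mean = sum(ensemble) / n
    var = sum((x - mean) ** 2 for x in ensemble) / (n - 1)
    gain = var / (var + obs_err_var)            # Kalman gain K = P / (P + R)
    sd = obs_err_var ** 0.5
    # Each member assimilates its own perturbed observation
    return [x + gain * ((obs + random.gauss(0.0, sd)) - x) for x in ensemble]

random.seed(1)
prior = [random.gauss(5.0, 2.0) for _ in range(500)]    # synthetic forecast ensemble
posterior = enkf_update(prior, 10.0, 1.0)               # assimilate obs = 10, R = 1
```

The update pulls the ensemble mean toward the observation and shrinks the spread, which is the mechanism by which assimilated soil moisture observations reduce uncertainty in the coupled PF.WRF fields.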

  18. Using Time-Series Regression to Predict Academic Library Circulations.

    ERIC Educational Resources Information Center

    Brooks, Terrence A.

    1984-01-01

    Four methods were used to forecast monthly circulation totals in 15 midwestern academic libraries: dummy time-series regression, lagged time-series regression, simple average (straight-line forecasting), monthly average (naive forecasting). In tests of forecasting accuracy, dummy regression method and monthly mean method exhibited smallest average…

  19. Bayesian flood forecasting methods: A review

    NASA Astrophysics Data System (ADS)

    Han, Shasha; Coulibaly, Paulin

    2017-08-01

    Over the past few decades, floods have been among the most common and most widely distributed natural disasters in the world. If floods could be accurately forecast in advance, their negative impacts could be greatly reduced. It is widely recognized that quantifying and reducing the uncertainty associated with hydrologic forecasts is of great importance for flood estimation and rational decision making. The Bayesian forecasting system (BFS) offers an ideal theoretical framework for uncertainty quantification that can be developed for probabilistic flood forecasting via any deterministic hydrologic model. It provides a suitable theoretical structure, empirically validated models, and reasonable analytic-numerical computation methods, and can be developed into various Bayesian forecasting approaches. This paper presents a comprehensive review of Bayesian forecasting approaches applied to flood forecasting from 1999 to the present. The review starts with an overview of the fundamentals of the BFS and recent advances in it, followed by BFS applications in river stage forecasting and real-time flood forecasting; it then moves to a critical analysis evaluating the advantages and limitations of Bayesian forecasting methods and other predictive uncertainty assessment approaches in flood forecasting, and finally discusses future research directions in Bayesian flood forecasting. Results show that the Bayesian flood forecasting approach is an effective and advanced way to perform flood estimation: it considers all sources of uncertainty and produces a predictive distribution of the river stage, river discharge, or runoff, thus giving more accurate and reliable flood forecasts. Some emerging Bayesian forecasting methods (e.g. the ensemble Bayesian forecasting system and Bayesian multi-model combination) were shown to overcome the limitations of a single model or fixed model weights and to effectively reduce predictive uncertainty. 
In recent years, various Bayesian flood forecasting approaches have been developed and widely applied, but there is still room for improvements. Future research in the context of Bayesian flood forecasting should be on assimilation of various sources of newly available information and improvement of predictive performance assessment methods.

  20. Opportunities and challenges for extended-range predictions of tropical cyclone impacts on hydrological predictions

    NASA Astrophysics Data System (ADS)

    Tsai, Hsiao-Chung; Elsberry, Russell L.

    2013-12-01

    An opportunity exists to extend support to the decision-making processes of water resource management and hydrological operations by providing extended-range tropical cyclone (TC) formation and track forecasts in the western North Pacific from the 51-member ECMWF 32-day ensemble. A new objective verification technique demonstrates that the ECMWF ensemble can predict most of the formations and tracks of the TCs during July 2009 to December 2010, even for most of the tropical depressions. Because the relatively large number of false-alarm TCs in the ECMWF ensemble forecasts would cause problems for the support of hydrological operations, the characteristics of these false alarms are discussed. Special attention is given to the ability of the ECMWF ensemble to predict periods with no TCs in the Taiwan area, since water resource management decisions also depend on the absence of typhoon-related rainfall. A three-tier approach is proposed to provide support for hydrological operations via extended-range forecasts twice weekly on the 30-day timescale, twice daily on the 15-day timescale, and up to four times a day with a consensus of high-resolution deterministic models.

  1. A study of applications scribe frame data verifications using design rule check

    NASA Astrophysics Data System (ADS)

    Saito, Shoko; Miyazaki, Masaru; Sakurai, Mitsuo; Itoh, Takahisa; Doi, Kazumasa; Sakurai, Norioko; Okada, Tomoyuki

    2013-06-01

    In semiconductor manufacturing, scribe frame data is generally generated for each LSI product according to its specific process design. Scribe frame data is designed based on definition tables for scanner alignment, wafer inspection, and customer-specified marks, and we check at the end that the scribe frame design conforms to the specifications of the alignment and inspection marks. Recently, in COT (customer-owned tooling) business and new technology development, there has been no effective verification method for scribe frame data, and verification takes a lot of time. We therefore tried to establish a new verification method for scribe frame data by applying pattern matching and DRC (Design Rule Check), which are used in device verification. We present the scheme of this DRC-based scribe frame data verification. First, verification rules are created based on the specifications of the scanner, inspection, and other requirements, and a mark library is created for pattern matching. Next, DRC verification, which includes pattern matching against the mark library, is performed on the scribe frame data. Our experiments demonstrated that, by using pattern matching and DRC verification, the new method yields speed improvements of more than 12 percent compared to conventional mark checks by visual inspection, and the inspection time can be reduced to less than 5 percent if multi-CPU processing is used. Our method delivers both short processing time and excellent accuracy when checking many marks. It is easy to maintain and provides an easy way for COT customers to use original marks. We believe that our new DRC verification method for scribe frame data is indispensable and mutually beneficial.

  2. Time-series-based hybrid mathematical modelling method adapted to forecast automotive and medical waste generation: Case study of Lithuania.

    PubMed

    Karpušenkaitė, Aistė; Ruzgas, Tomas; Denafas, Gintaras

    2018-05-01

    The aim of the study was to create a hybrid forecasting method that could produce more accurate forecasts than the previously used 'pure' time series methods. Those methods had already been tested on total automotive waste, hazardous automotive waste, and total medical waste generation, but demonstrated at least a 6% error rate in various cases, and efforts were made to decrease it further. The newly developed hybrid models used a random start generation method to incorporate the advantages of different time-series methods, which helped increase the accuracy of forecasts by 3%-4% in the hazardous automotive waste and total medical waste generation cases; the new model did not increase the accuracy of total automotive waste generation forecasts. The developed models' abilities to produce short- and mid-term forecasts were tested using prediction horizons.

  3. A Time-Series Water Level Forecasting Model Based on Imputation and Variable Selection Method.

    PubMed

    Yang, Jun-He; Cheng, Ching-Hsue; Chan, Chia-Pan

    2017-01-01

    Reservoirs are important for households and impact the national economy. This paper proposes a time-series forecasting model that estimates missing values and then applies variable selection to forecast a reservoir's water level. The study collected data from the Taiwan Shimen Reservoir as well as daily atmospheric data from 2008 to 2015; the two datasets were merged by date into an integrated research dataset. The proposed time-series forecasting model has three main foci. First, it uses five imputation methods to treat missing values instead of deleting them outright. Second, it identifies the key variables via factor analysis and then deletes the unimportant variables sequentially via the variable selection method. Finally, it uses a Random Forest to build the forecasting model of the reservoir's water level, which is compared with the listed methods in terms of forecasting error. The experimental results indicate that the Random Forest forecasting model, when combined with variable selection over the full set of variables, has better forecasting performance than the listed models. In addition, the experiments show that the proposed variable selection can help the five forecast methods used here improve their forecasting capability.
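
The imputation and variable-selection stages can be illustrated with minimal stand-ins: a mean-fill imputation (one of many possible choices; the paper compares five) and a Pearson-correlation ranking of candidate predictors (a simpler proxy for the paper's factor analysis). The Random Forest stage would then be fit on the retained variables, e.g. with scikit-learn's RandomForestRegressor:

```python
def impute_mean(column):
    """Fill missing entries (None) with the column mean - a stand-in for
    the paper's imputation methods."""
    observed = [v for v in column if v is not None]
    mean = sum(observed) / len(observed)
    return [mean if v is None else v for v in column]

def pearson_corr(x, y):
    """Pearson correlation, used here to rank predictors by |r| with the
    water level (the paper uses factor analysis instead)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# Tiny synthetic example: rainfall tracks the water level, pressure does not
level = [10.0, 11.0, 12.0, 13.0]
rainfall = impute_mean([2.0, None, 4.0, 5.0])
pressure = [5.0, 5.1, 4.9, 5.0]
candidates = {"rainfall": rainfall, "pressure": pressure}
ranked = sorted(candidates,
                key=lambda name: -abs(pearson_corr(candidates[name], level)))
```

The top-ranked variables in `ranked` would be kept as inputs to the forecasting stage, with the weakest predictors dropped sequentially.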

  4. Forecasting daily patient volumes in the emergency department.

    PubMed

    Jones, Spencer S; Thomas, Alun; Evans, R Scott; Welch, Shari J; Haug, Peter J; Snow, Gregory L

    2008-02-01

    Shifts in the supply of and demand for emergency department (ED) resources make the efficient allocation of ED resources increasingly important. Forecasting is a vital activity that guides decision-making in many areas of economic, industrial, and scientific planning, but has gained little traction in the health care industry. There are few studies that explore the use of forecasting methods to predict patient volumes in the ED. The goals of this study are to explore and evaluate the use of several statistical forecasting methods to predict daily ED patient volumes at three diverse hospital EDs and to compare the accuracy of these methods to the accuracy of a previously proposed forecasting method. Daily patient arrivals at three hospital EDs were collected for the period January 1, 2005, through March 31, 2007. The authors evaluated the use of seasonal autoregressive integrated moving average, time series regression, exponential smoothing, and artificial neural network models to forecast daily patient volumes at each facility. Forecasts were made for horizons ranging from 1 to 30 days in advance. The forecast accuracy achieved by the various forecasting methods was compared to the forecast accuracy achieved when using a benchmark forecasting method already available in the emergency medicine literature. All time series methods considered in this analysis provided improved in-sample model goodness of fit. However, post-sample analysis revealed that time series regression models that augment linear regression models by accounting for serial autocorrelation offered only small improvements in terms of post-sample forecast accuracy, relative to multiple linear regression models, while seasonal autoregressive integrated moving average, exponential smoothing, and artificial neural network forecasting models did not provide consistently accurate forecasts of daily ED volumes. 
This study confirms the widely held belief that daily demand for ED services is characterized by seasonal and weekly patterns. The authors compared several time series forecasting methods to a benchmark multiple linear regression model. The results suggest that the existing methodology proposed in the literature, multiple linear regression based on calendar variables, is a reasonable approach to forecasting daily patient volumes in the ED. However, the authors conclude that regression-based models that incorporate calendar variables, account for site-specific special-day effects, and allow for residual autocorrelation provide a more appropriate, informative, and consistently accurate approach to forecasting daily ED patient volumes.
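The calendar-variable approach the authors recommend can be illustrated in its simplest form: day-of-week effects dominate daily ED demand, so even averaging historical volumes by weekday captures much of the signal. A minimal sketch with invented data (the paper's actual models add site-specific special-day effects and residual autocorrelation):

```python
from datetime import date, timedelta
from collections import defaultdict

def fit_day_of_week_means(start: date, volumes: list[float]) -> dict[int, float]:
    """Average historical volume for each weekday (0=Mon .. 6=Sun)."""
    sums, counts = defaultdict(float), defaultdict(int)
    for i, v in enumerate(volumes):
        dow = (start + timedelta(days=i)).weekday()
        sums[dow] += v
        counts[dow] += 1
    return {d: sums[d] / counts[d] for d in sums}

def forecast(model: dict[int, float], day: date) -> float:
    return model[day.weekday()]

# Hypothetical history: weekdays ~100 visits/day, weekends ~140.
history = [100, 100, 100, 100, 100, 140, 140] * 8   # 8 weeks from a Monday
model = fit_day_of_week_means(date(2007, 1, 1), history)  # 2007-01-01 is a Monday
print(forecast(model, date(2007, 3, 3)))  # a Saturday -> 140.0
```

A full regression model would replace the weekday means with calendar dummy variables plus autocorrelated residual terms, but the lookup structure of the forecast is the same.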

  5. Verification and Validation Studies for the LAVA CFD Solver

    NASA Technical Reports Server (NTRS)

    Moini-Yekta, Shayan; Barad, Michael F; Sozer, Emre; Brehm, Christoph; Housman, Jeffrey A.; Kiris, Cetin C.

    2013-01-01

    The verification and validation of the Launch Ascent and Vehicle Aerodynamics (LAVA) computational fluid dynamics (CFD) solver is presented. A modern strategy for verification and validation is described incorporating verification tests, validation benchmarks, continuous integration and version control methods for automated testing in a collaborative development environment. The purpose of the approach is to integrate the verification and validation process into the development of the solver and improve productivity. This paper uses the Method of Manufactured Solutions (MMS) for the verification of 2D Euler equations, 3D Navier-Stokes equations as well as turbulence models. A method for systematic refinement of unstructured grids is also presented. Verification using inviscid vortex propagation and flow over a flat plate is highlighted. Simulation results using laminar and turbulent flow past a NACA 0012 airfoil and ONERA M6 wing are validated against experimental and numerical data.
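The core check in MMS-based verification is that discretization error shrinks at the scheme's theoretical rate as the grid is refined. A short sketch of the observed-order-of-accuracy computation, using made-up error norms (the LAVA study's actual norms and refinement ratios are not reproduced here):

```python
import math

def observed_order(err_coarse: float, err_fine: float, refinement: float) -> float:
    """Observed order of accuracy p, assuming error ~ C * h^p on grids
    whose spacings differ by the given refinement ratio."""
    return math.log(err_coarse / err_fine) / math.log(refinement)

# Hypothetical L2 error norms from an MMS run, halving the grid spacing:
p = observed_order(4.0e-3, 1.0e-3, 2.0)
print(p)  # 2.0, consistent with a second-order scheme
```

Verification passes when the observed order approaches the scheme's design order in the asymptotic refinement range.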

  6. Global Positioning System (GPS) Precipitable Water in Forecasting Lightning at Spaceport Canaveral

    NASA Technical Reports Server (NTRS)

    Kehrer, Kristen C.; Graf, Brian; Roeder, William

    2006-01-01

    This paper evaluates the use of precipitable water (PW) from the Global Positioning System (GPS) in lightning prediction. Additional independent verification of an earlier model is performed. This earlier model used binary logistic regression with the following four predictor variables, optimally selected from a list of 23 candidate predictors: the current precipitable water value for a given time of day, the change in GPS-PW over the past 9 hours, the K-Index, and the electric field mill value. This earlier model was not optimized for any specific forecast interval, but showed promise for 6 hour and 1.5 hour forecasts. Two new models were developed and verified, each optimized for an operationally significant forecast interval. The first model was optimized for the 0.5 hour lightning advisories issued by the 45th Weather Squadron. An additional 1.5 hours was allowed for sensor dwell, communication, calculation, analysis, and advisory decision by the forecaster. The 0.5 hour advisory model therefore became a 2 hour forecast model for lightning within the 45th Weather Squadron advisory areas. The second model was optimized for major ground processing operations supported by the 45th Weather Squadron, which can require lightning forecasts with a lead-time of up to 7.5 hours. Using the same 1.5 hour lag as in the other new model, this became a 9 hour forecast model for lightning within 37 km (20 NM) of the 45th Weather Squadron advisory areas. The two new models were built using binary logistic regression from a list of 26 candidate predictor variables: the current GPS-PW value, the change of GPS-PW over 0.5 hour increments from 0.5 to 12 hours, and the K-Index. The new 2 hour model found the following four predictors to be statistically significant, listed in decreasing order of contribution to the forecast: the 0.5 hour change in GPS-PW, the 7.5 hour change in GPS-PW, the current GPS-PW value, and the K-Index. 
The new 9 hour forecast model found the following five independent variables to be statistically significant, listed in decreasing order of contribution to the forecast: the current GPS-PW value, the 8.5 hour change in GPS-PW, the 3.5 hour change in GPS-PW, the 12 hour change in GPS-PW, and the K-Index. In both models, the GPS-PW parameters had better correlation with the lightning forecast than the K-Index, a widely used thunderstorm index. Possible future improvements to this study are discussed.
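Both models are binary logistic regressions, so once fitted they reduce to a single probability formula. A sketch of how such a model turns predictor values into a lightning probability; the coefficient values below are invented for illustration, not the paper's fitted values:

```python
import math

def lightning_probability(coeffs: dict[str, float],
                          predictors: dict[str, float]) -> float:
    """Logistic model: p = 1 / (1 + exp(-(b0 + sum_i b_i * x_i)))."""
    z = coeffs["intercept"] + sum(coeffs[name] * x for name, x in predictors.items())
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical coefficients for the 2 hour model's four predictors:
coeffs = {"intercept": -3.0, "dPW_0.5h": 1.2, "dPW_7.5h": 0.6,
          "PW_now": 0.05, "K_index": 0.04}
p = lightning_probability(coeffs, {"dPW_0.5h": 1.5, "dPW_7.5h": 2.0,
                                   "PW_now": 45.0, "K_index": 30.0})
print(round(p, 3))
```

Issuing an advisory then amounts to comparing `p` against a decision threshold chosen for the desired balance of detection rate and false alarms.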

  7. Application and verification of ECMWF seasonal forecast for wind energy

    NASA Astrophysics Data System (ADS)

    Žagar, Mark; Marić, Tomislav; Qvist, Martin; Gulstad, Line

    2015-04-01

    A good understanding of long-term annual energy production (AEP) is crucial when assessing the business case of investing in green energy like wind power. The art of wind-resource assessment has evolved into a scientific discipline of its own, which has advanced at a high pace over the last decade. This has resulted in continuous improvement of AEP accuracy and, therefore, an increase in business-case certainty. Harvesting the full potential output of a wind farm or a portfolio of wind farms depends heavily on optimizing the operation and management strategy. The necessary information for short-term planning (up to 14 days) is provided by standard weather and power forecasting services, and long-term plans are based on climatology. However, the wind-power industry lacks quality information on intermediate scales: the expected variability in seasonal and intra-annual variations and their geographical distribution. The seasonal power forecast presented here is designed to bridge this gap. The seasonal power production forecast is based on the ECMWF seasonal weather forecast and Vestas' high-resolution, mesoscale weather library. The seasonal weather forecast is enriched through a layer of statistical post-processing added to relate large-scale wind speed anomalies to mesoscale climatology. The resulting predicted energy production anomalies thus include mesoscale effects not captured by the global forecasting systems. The turbine power output is non-linearly related to the wind speed, which has important implications for the wind power forecast. In theory, the wind power is proportional to the cube of the wind speed. However, due to the nature of turbine design, this exponent is close to 3 only at low wind speeds, becomes smaller as the wind speed increases, and above 11-13 m/s the power output remains constant at what is known as the rated power. 
The non-linear relationship between wind speed and power output generally increases the sensitivity of the forecasted power to wind speed anomalies. On the other hand, in some cases and areas where turbines operate close to, or above, the rated power, the sensitivity of the power forecast is reduced. Thus, the seasonal power forecasting system requires good knowledge of the changes in frequency of events with sufficient wind speeds to have acceptable skill. The scientific background for the Vestas seasonal power forecasting system is described, and the relationship between predicted monthly wind speed anomalies and observed wind energy production is investigated for a number of operating wind farms in different climate zones. Current challenges will be discussed and some future research and development areas identified.
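The power-curve behavior described above can be sketched as an idealized piecewise function: cubic growth between cut-in and rated speed, flat at rated power above it. All parameter values here are invented round numbers; real turbines use manufacturer-certified power curves:

```python
def power_output(v: float, rated_power: float = 2000.0,
                 cut_in: float = 3.0, rated_speed: float = 12.0,
                 cut_out: float = 25.0) -> float:
    """Idealized turbine power curve (kW): zero outside the operating range,
    roughly cubic below rated speed, flat at rated power above it."""
    if v < cut_in or v >= cut_out:
        return 0.0
    if v >= rated_speed:
        return rated_power
    # Cubic ramp between cut-in and rated speed.
    frac = (v**3 - cut_in**3) / (rated_speed**3 - cut_in**3)
    return rated_power * frac

print(power_output(6.0))   # partial load, steep sensitivity to wind speed
print(power_output(14.0))  # at rated power: 2000.0
print(power_output(26.0))  # beyond cut-out: 0.0
```

The flat segment above rated speed is exactly why power-forecast sensitivity drops when turbines operate near rated power: a wind speed anomaly there changes the output by nothing at all.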

  8. A Solar Time-Based Analog Ensemble Method for Regional Solar Power Forecasting

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hodge, Brian S; Zhang, Xinmin; Li, Yuan

    This paper presents a new analog ensemble method for day-ahead regional photovoltaic (PV) power forecasting with hourly resolution. By utilizing open weather forecast and power measurement data, this prediction method is processed within a set of historical data with similar meteorological data (temperature and irradiance) and astronomical date (solar time and earth declination angle). Further, clustering and blending strategies are applied to improve its accuracy in regional PV forecasting. The robustness of the proposed method is demonstrated with three different numerical weather prediction models, the North American Mesoscale Forecast System, the Global Forecast System, and the Short-Range Ensemble Forecast, for both region-level and single-site-level PV forecasts. Using real measured data, the new forecasting approach is applied to the load zone in Southeastern Massachusetts as a case study. The normalized root mean square error (NRMSE) has been reduced by 13.80%-61.21% when compared with three tested baselines.
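The analog-ensemble idea can be sketched in a few lines: find the historical days whose forecast weather most resembles the target day's forecast, and average their measured PV output. This stripped-down sketch uses only temperature and irradiance with invented data and an ad hoc distance scaling; the paper's method also matches on solar time and declination angle and adds clustering and blending:

```python
def analog_ensemble_forecast(history: list[tuple[float, float, float]],
                             forecast_temp: float, forecast_irr: float,
                             k: int = 3) -> float:
    """Average the PV output of the k past days whose (temperature,
    irradiance) were closest to the target day's forecast values."""
    def distance(rec: tuple[float, float, float]) -> float:
        temp, irr, _power = rec
        # Scale factors are arbitrary illustrative choices.
        return ((temp - forecast_temp) / 10.0) ** 2 + ((irr - forecast_irr) / 100.0) ** 2
    analogs = sorted(history, key=distance)[:k]
    return sum(power for _, _, power in analogs) / k

# Hypothetical records: (temperature degC, irradiance W/m2, PV output MW)
history = [(20.0, 800.0, 45.0), (21.0, 820.0, 47.0), (19.0, 790.0, 44.0),
           (10.0, 300.0, 12.0), (12.0, 350.0, 15.0)]
print(analog_ensemble_forecast(history, 20.5, 810.0))
```

With the target forecast (20.5 degC, 810 W/m2), the three sunny analogs are selected and the two overcast days are ignored, which is the essential behavior of the method.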

  9. How is the weather? Forecasting inpatient glycemic control

    PubMed Central

    Saulnier, George E; Castro, Janna C; Cook, Curtiss B; Thompson, Bithika M

    2017-01-01

    Aim: Apply methods of damped trend analysis to forecast inpatient glycemic control. Method: Observed and calculated point-of-care blood glucose data trends were determined over 62 weeks. Mean absolute percent error was used to calculate differences between observed and forecasted values. Comparisons were drawn between model results and linear regression forecasting. Results: The forecasted mean glucose trends observed during the first 24 and 48 weeks of projections compared favorably to the results provided by linear regression forecasting. However, in some scenarios, the damped trend method changed inferences compared with linear regression. In all scenarios, mean absolute percent error values remained below the 10% accepted by demand industries. Conclusion: Results indicate that forecasting methods historically applied within demand industries can project future inpatient glycemic control. Additional study is needed to determine if forecasting is useful in the analyses of other glucometric parameters and, if so, how to apply the techniques to quality improvement. PMID:29134125
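Damped trend analysis, as used above, is Holt's linear exponential smoothing with a damping parameter that flattens the trend at long horizons. A sketch with invented weekly glucose values and arbitrary smoothing parameters (the paper's data and fitted parameters are not reproduced):

```python
def damped_trend_forecast(y: list[float], alpha: float = 0.3, beta: float = 0.1,
                          phi: float = 0.9, horizon: int = 4) -> list[float]:
    """Holt's method with a damped trend: the h-step forecast is
    level + (phi + phi^2 + ... + phi^h) * trend."""
    level, trend = y[0], y[1] - y[0]
    for obs in y[1:]:
        prev_level = level
        level = alpha * obs + (1 - alpha) * (level + phi * trend)
        trend = beta * (level - prev_level) + (1 - beta) * phi * trend
    damp, forecasts = 0.0, []
    for h in range(1, horizon + 1):
        damp += phi ** h
        forecasts.append(level + damp * trend)
    return forecasts

def mape(actual: list[float], forecast: list[float]) -> float:
    """Mean absolute percent error, the accuracy measure used in the study."""
    return 100.0 * sum(abs(a - f) / a for a, f in zip(actual, forecast)) / len(actual)

# Hypothetical weekly mean point-of-care glucose values (mg/dL):
glucose = [160.0, 158.0, 157.0, 155.0, 154.0, 152.0, 151.0, 150.0]
print(damped_trend_forecast(glucose))
```

Because `phi < 1`, the projected decline levels off instead of extrapolating linearly forever, which is what distinguishes the method's inferences from linear regression at longer horizons.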

  10. Requirement Assurance: A Verification Process

    NASA Technical Reports Server (NTRS)

    Alexander, Michael G.

    2011-01-01

    Requirement Assurance is an act of requirement verification which assures the stakeholder or customer that a product requirement has produced its "as realized product" and has been verified with conclusive evidence. Product requirement verification answers the question, "did the product meet the stated specification, performance, or design documentation?". In order to ensure the system was built correctly, the practicing system engineer must verify each product requirement using verification methods of inspection, analysis, demonstration, or test. The products of these methods are the "verification artifacts" or "closure artifacts" which are the objective evidence needed to prove the product requirements meet the verification success criteria. Institutional direction is given to the System Engineer in NPR 7123.1A NASA Systems Engineering Processes and Requirements with regards to the requirement verification process. In response, the verification methodology offered in this report meets both the institutional process and requirement verification best practices.

  11. Design and Realization of Controllable Ultrasonic Fault Detector Automatic Verification System

    NASA Astrophysics Data System (ADS)

    Sun, Jing-Feng; Liu, Hui-Ying; Guo, Hui-Juan; Shu, Rong; Wei, Kai-Li

    Ultrasonic flaw detection equipment with a remote control interface is studied and an automatic verification system is developed. By using Extensible Markup Language (XML) to build the protocol instruction set and the data-analysis method database, the system software achieves a configurable design and accommodates the diversity of undisclosed device interfaces and protocols. By cascading a signal generator with a fixed attenuator, a dynamic error compensation method is proposed that performs the role of the fixed attenuator in traditional verification and improves the accuracy of the verification results. Operation of the automatic verification system confirms the feasibility of the hardware and software architecture design and the correctness of the analysis method, while eliminating the cumbersome operations of the traditional verification process and reducing the labor intensity for test personnel.

  12. A Multi-Season Study of the Effects of MODIS Sea-Surface Temperatures on Operational WRF Forecasts at NWS Miami, FL

    NASA Technical Reports Server (NTRS)

    Case, Jonathan L.; Santos, Pablo; Lazarus, Steven M.; Splitt, Michael E.; Haines, Stephanie L.; Dembek, Scott R.; Lapenta, William M.

    2008-01-01

    Studies at the Short-term Prediction Research and Transition (SPORT) Center have suggested that the use of Moderate Resolution Imaging Spectroradiometer (MODIS) sea-surface temperature (SST) composites in regional weather forecast models can have a significant positive impact on short-term numerical weather prediction in coastal regions. Recent work by LaCasse et al. (2007, Monthly Weather Review) highlights lower atmospheric differences in regional numerical simulations over the Florida offshore waters using 2-km SST composites derived from the MODIS instrument aboard the polar-orbiting Aqua and Terra Earth Observing System satellites. To help quantify the value of this impact on NWS Weather Forecast Offices (WFOs), the SPORT Center and the NWS WFO at Miami, FL (MIA) are collaborating on a project to investigate the impact of using the high-resolution MODIS SST fields within the Weather Research and Forecasting (WRF) prediction system. The project's goal is to determine whether more accurate specification of the lower-boundary forcing within WRF will result in improved land/sea fluxes and hence, more accurate evolution of coastal mesoscale circulations and the associated sensible weather elements. The NWS MIA is currently running WRF in real-time to support daily forecast operations, using the National Centers for Environmental Prediction Nonhydrostatic Mesoscale Model dynamical core within the NWS Science and Training Resource Center's Environmental Modeling System (EMS) software. Twenty-seven hour forecasts are run daily, initialized at 0300, 0900, 1500, and 2100 UTC on a domain with 4-km grid spacing covering the southern half of Florida and adjacent waters of the Gulf of Mexico and Atlantic Ocean. Each model run is initialized using the Local Analysis and Prediction System (LAPS) analyses available in AWIPS. 
The SSTs are initialized with the NCEP Real-Time Global (RTG) analyses at 1/12deg resolution (approx. 9 km); however, the RTG product does not exhibit fine-scale details consistent with its grid resolution. SPORT is conducting parallel WRF EMS runs identical to the operational runs at NWS MIA except for the use of MODIS SST composites in place of the RTG product as the initial and boundary conditions over water. The MODIS SST composites for initializing the SPORT WRF runs are generated on a 2-km grid four times daily at 0400, 0700, 1600, and 1900 UTC, based on the times of the overhead passes of the Aqua and Terra satellites. The incorporation of the MODIS SST data into the SPORT WRF runs is staggered such that SSTs are updated with a new composite every six hours in each of the WRF runs. From mid-February to July 2007, over 500 parallel WRF simulations were collected for analysis and verification. This paper will present verification results comparing the NWS MIA operational WRF runs to the SPORT experimental runs, and highlight any substantial differences noted in the predicted mesoscale phenomena for specific cases.

  13. Simulation verification techniques study

    NASA Technical Reports Server (NTRS)

    Schoonmaker, P. B.; Wenglinski, T. H.

    1975-01-01

    Results are summarized of the simulation verification techniques study which consisted of two tasks: to develop techniques for simulator hardware checkout and to develop techniques for simulation performance verification (validation). The hardware verification task involved definition of simulation hardware (hardware units and integrated simulator configurations), survey of current hardware self-test techniques, and definition of hardware and software techniques for checkout of simulator subsystems. The performance verification task included definition of simulation performance parameters (and critical performance parameters), definition of methods for establishing standards of performance (sources of reference data or validation), and definition of methods for validating performance. Both major tasks included definition of verification software and assessment of verification data base impact. An annotated bibliography of all documents generated during this study is provided.

  14. Forecasting peaks of seasonal influenza epidemics.

    PubMed

    Nsoesie, Elaine; Marathe, Madhav; Brownstein, John

    2013-06-21

    We present a framework for near real-time forecast of influenza epidemics using a simulation optimization approach. The method combines an individual-based model and a simple root finding optimization method for parameter estimation and forecasting. In this study, retrospective forecasts were generated for seasonal influenza epidemics using web-based estimates of influenza activity from Google Flu Trends for 2004-2005, 2007-2008 and 2012-2013 flu seasons. In some cases, the peak could be forecasted 5-6 weeks ahead. This study adds to existing resources for influenza forecasting and the proposed method can be used in conjunction with other approaches in an ensemble framework.

  15. High Predictive Skill of Global Surface Temperature a Year Ahead

    NASA Astrophysics Data System (ADS)

    Folland, C. K.; Colman, A.; Kennedy, J. J.; Knight, J.; Parker, D. E.; Stott, P.; Smith, D. M.; Boucher, O.

    2011-12-01

    We discuss the high skill of real-time forecasts of global surface temperature a year ahead issued by the UK Met Office, and their scientific background. Although this is a forecasting and not a formal attribution study, we show that the main instrumental global annual surface temperature data sets since 1891 are structured consistently with a set of five physical forcing factors, except during and just after the Second World War. Reconstructions use a multiple application of cross-validated linear regression to minimise artificial skill, allowing time-varying uncertainties in the contribution of each forcing factor to global temperature to be assessed. Mean cross-validated reconstructions for the data sets have total correlations in the range 0.93-0.95, interannual correlations in the range 0.72-0.75 and root mean squared errors near 0.06 °C, consistent with observational uncertainties. Three transient runs of the HadCM3 coupled model for 1888-2002 demonstrate quite similar reconstruction skill from similar forcing factors defined appropriately for the model, showing that skilful use of our technique is not confined to observations. The observed reconstructions show that the Atlantic Multidecadal Oscillation (AMO) likely contributed to the re-commencement of global warming between 1976 and 2010 and to the global cooling observed immediately beforehand in 1965-1976. The slowing of global warming in the last decade is likely to be largely due to a phase-delayed response to the downturn in the solar cycle since 2001-02, with no net ENSO contribution. The much reduced trend in 2001-10 is similar in size to other weak decadal temperature trends observed since global warming resumed in the 1970s. The causes of variations in decadal trends can be mostly explained by variations in the strength of the forcing factors. 
Eleven real-time forecasts of global mean surface temperature for the year ahead for 2000-2010, based on broadly similar methods, provide an independent test of the ideas of this study. Compared to observations, they achieved a correlation of 0.74 and a root mean squared error of 0.07 °C. Pseudo-forecasts for the same period, reconstructed from the somewhat improved forcing data used for this study, had a slightly better correlation of 0.80 and a root mean squared error of 0.05 °C. Finally we compare the statistical forecasts with dynamical hindcasts and forecasts of global surface temperature a year ahead made by the Met Office DePreSys coupled model. The statistical and dynamical forecasts of global surface temperature for 2011 will be compared with preliminary verification data.

  16. Hybrid Intrusion Forecasting Framework for Early Warning System

    NASA Astrophysics Data System (ADS)

    Kim, Sehun; Shin, Seong-Jun; Kim, Hyunwoo; Kwon, Ki Hoon; Han, Younggoo

    Recently, cyber attacks have become a serious hindrance to the stability of the Internet. These attacks exploit the interconnectivity of networks, propagate in an instant, and have become more sophisticated and evolutionary. Traditional Internet security systems such as firewalls, IDS and IPS are limited in their ability to detect recent cyber attacks in advance, as these systems respond to Internet attacks only after the attacks have inflicted serious damage. In this paper, we propose a hybrid intrusion forecasting system framework for an early warning system. The proposed system utilizes three types of forecasting methods: time-series analysis, probabilistic modeling, and data mining. By combining these methods, it is possible to take advantage of each technique's strengths while overcoming its drawbacks. Experimental results show that the hybrid intrusion forecasting method outperforms each of the three individual forecasting methods.

  17. Probabilistic Wind Power Ramp Forecasting Based on a Scenario Generation Method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Qin; Florita, Anthony R; Krishnan, Venkat K

    Wind power ramps (WPRs) are particularly important in the management and dispatch of wind power and currently drawing the attention of balancing authorities. With the aim to reduce the impact of WPRs for power system operations, this paper develops a probabilistic ramp forecasting method based on a large number of simulated scenarios. An ensemble machine learning technique is first adopted to forecast the basic wind power forecasting scenario and calculate the historical forecasting errors. A continuous Gaussian mixture model (GMM) is used to fit the probability distribution function (PDF) of forecasting errors. The cumulative distribution function (CDF) is analytically deduced. The inverse transform method based on Monte Carlo sampling and the CDF is used to generate a massive number of forecasting error scenarios. An optimized swinging door algorithm is adopted to extract all the WPRs from the complete set of wind power forecasting scenarios. The probabilistic forecasting results of ramp duration and start-time are generated based on all scenarios. Numerical simulations on publicly available wind power data show that within a predefined tolerance level, the developed probabilistic wind power ramp forecasting method is able to predict WPRs with a high level of sharpness and accuracy.
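The scenario-generation step rests on inverse transform sampling: draw a uniform random number and map it through the CDF of historical forecasting errors. For brevity this sketch samples from a linearly interpolated empirical CDF rather than fitting a GMM as the paper does, and the error values are invented:

```python
import random

def sample_error_scenarios(errors: list[float], n_scenarios: int,
                           seed: int = 42) -> list[float]:
    """Inverse transform sampling: draw u ~ U(0,1) and map it through the
    (linearly interpolated) empirical CDF of historical forecast errors."""
    rng = random.Random(seed)          # fixed seed for reproducibility
    xs = sorted(errors)
    scenarios = []
    for _ in range(n_scenarios):
        u = rng.random()
        pos = u * (len(xs) - 1)        # fractional index into the sorted errors
        lo = int(pos)
        hi = min(lo + 1, len(xs) - 1)
        scenarios.append(xs[lo] + (pos - lo) * (xs[hi] - xs[lo]))
    return scenarios

# Hypothetical historical wind power forecast errors (MW):
errors = [-4.0, -1.5, -0.5, 0.0, 0.3, 1.0, 2.5, 5.0]
scen = sample_error_scenarios(errors, 1000)
print(min(scen) >= -4.0 and max(scen) <= 5.0)  # True
```

Adding each sampled error to the base power forecast yields one forecasting scenario; the ramp-extraction algorithm is then run over the whole scenario set.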

  18. Probabilistic Wind Power Ramp Forecasting Based on a Scenario Generation Method: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Qin; Florita, Anthony R; Krishnan, Venkat K

    2017-08-31

    Wind power ramps (WPRs) are particularly important in the management and dispatch of wind power, and they are currently drawing the attention of balancing authorities. With the aim to reduce the impact of WPRs for power system operations, this paper develops a probabilistic ramp forecasting method based on a large number of simulated scenarios. An ensemble machine learning technique is first adopted to forecast the basic wind power forecasting scenario and calculate the historical forecasting errors. A continuous Gaussian mixture model (GMM) is used to fit the probability distribution function (PDF) of forecasting errors. The cumulative distribution function (CDF) is analytically deduced. The inverse transform method based on Monte Carlo sampling and the CDF is used to generate a massive number of forecasting error scenarios. An optimized swinging door algorithm is adopted to extract all the WPRs from the complete set of wind power forecasting scenarios. The probabilistic forecasting results of ramp duration and start time are generated based on all scenarios. Numerical simulations on publicly available wind power data show that within a predefined tolerance level, the developed probabilistic wind power ramp forecasting method is able to predict WPRs with a high level of sharpness and accuracy.

  19. Intermittent Demand Forecasting in a Tertiary Pediatric Intensive Care Unit.

    PubMed

    Cheng, Chen-Yang; Chiang, Kuo-Liang; Chen, Meng-Yin

    2016-10-01

    Forecasts of the demand for medical supplies both directly and indirectly affect the operating costs and the quality of the care provided by health care institutions. Specifically, overestimating demand induces an inventory surplus, whereas underestimating demand possibly compromises patient safety. Uncertainty in forecasting the consumption of medical supplies generates intermittent demand events. The intermittent demand patterns for medical supplies are generally classified as lumpy, erratic, smooth, and slow-moving demand. This study was conducted with the purpose of advancing a tertiary pediatric intensive care unit's efforts to achieve a high level of accuracy in its forecasting of the demand for medical supplies. On this point, several demand forecasting methods were compared in terms of the forecast accuracy of each. The results confirm that applying Croston's method combined with a single exponential smoothing method yields the most accurate results for forecasting lumpy, erratic, and slow-moving demand, whereas the Simple Moving Average (SMA) method is the most suitable for forecasting smooth demand. In addition, when the classification of demand consumption patterns were combined with the demand forecasting models, the forecasting errors were minimized, indicating that this classification framework can play a role in improving patient safety and reducing inventory management costs in health care institutions.
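Croston's method, recommended above for lumpy, erratic, and slow-moving demand, exponentially smooths two separate series: the sizes of nonzero demands and the intervals between them. A minimal sketch with an invented intermittent usage series and an arbitrary smoothing constant:

```python
def croston_forecast(demand: list[float], alpha: float = 0.1) -> float:
    """Croston's method: smooth nonzero demand sizes and inter-demand
    intervals separately; per-period forecast = size / interval."""
    size = interval = None
    periods_since = 0
    for d in demand:
        periods_since += 1
        if d > 0:
            if size is None:                       # initialize on first demand
                size, interval = d, float(periods_since)
            else:
                size += alpha * (d - size)
                interval += alpha * (periods_since - interval)
            periods_since = 0
    if size is None:                               # no demand ever observed
        return 0.0
    return size / interval

# Hypothetical intermittent daily usage of one medical supply item:
usage = [0, 0, 4, 0, 0, 0, 5, 0, 0, 3, 0, 0]
print(round(croston_forecast(usage), 3))  # ~1.291 units per day
```

Updating only on demand occurrences is what keeps the estimate stable through long runs of zeros, where ordinary exponential smoothing would decay toward zero and then overshoot.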

  20. A robust method using propensity score stratification for correcting verification bias for binary tests

    PubMed Central

    He, Hua; McDermott, Michael P.

    2012-01-01

    Sensitivity and specificity are common measures of the accuracy of a diagnostic test. The usual estimators of these quantities are unbiased if data on the diagnostic test result and the true disease status are obtained from all subjects in an appropriately selected sample. In some studies, verification of the true disease status is performed only for a subset of subjects, possibly depending on the result of the diagnostic test and other characteristics of the subjects. Estimators of sensitivity and specificity based on this subset of subjects are typically biased; this is known as verification bias. Methods have been proposed to correct verification bias under the assumption that the missing data on disease status are missing at random (MAR), that is, the probability of missingness depends on the true (missing) disease status only through the test result and observed covariate information. When some of the covariates are continuous, or the number of covariates is relatively large, the existing methods require parametric models for the probability of disease or the probability of verification (given the test result and covariates), and hence are subject to model misspecification. We propose a new method for correcting verification bias based on the propensity score, defined as the predicted probability of verification given the test result and observed covariates. This is estimated separately for those with positive and negative test results. The new method classifies the verified sample into several subsamples that have homogeneous propensity scores and allows correction for verification bias. Simulation studies demonstrate that the new estimators are more robust to model misspecification than existing methods, but still perform well when the models for the probability of disease and probability of verification are correctly specified. PMID:21856650
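The correction idea can be sketched numerically: weight each verified subject by the inverse of the verification rate in its propensity cell, so the verified sample stands in for the full sample. This toy sketch uses a single covariate stratum and invented counts (the paper estimates propensity scores from covariates and forms several homogeneous strata per test result):

```python
from collections import defaultdict

def corrected_sensitivity(records) -> float:
    """records: (test_positive, stratum, verified, diseased_or_None).
    Weight each verified subject by the inverse of the verification rate
    in its (test result, stratum) cell, then compute sensitivity."""
    totals, verified = defaultdict(int), defaultdict(int)
    for t, s, v, _ in records:
        totals[(t, s)] += 1
        verified[(t, s)] += v
    tp = pos = 0.0
    for t, s, v, d in records:
        if not v:
            continue
        w = totals[(t, s)] / verified[(t, s)]   # inverse-propensity weight
        if d:
            pos += w
            tp += w * t
    return tp / pos

records = ([(1, 0, 1, True)] * 15 + [(1, 0, 1, False)] * 5     # all T+ verified
           + [(0, 0, 1, True)] * 2 + [(0, 0, 0, None)] * 3     # T+ diseased... T- diseased: 2 verified, 3 not
           + [(0, 0, 1, False)] * 6 + [(0, 0, 0, None)] * 9)   # T- non-diseased: 6 verified, 9 not
print(corrected_sensitivity(records))  # 0.75 (naive estimate would be 15/17 ~ 0.88)
```

Because test-negatives are verified at only 40% here, the naive estimate over-counts true positives; reweighting recovers the population sensitivity of 15/20 = 0.75.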

  1. Model-free aftershock forecasts constructed from similar sequences in the past

    NASA Astrophysics Data System (ADS)

    van der Elst, N.; Page, M. T.

    2017-12-01

    The basic premise behind aftershock forecasting is that sequences in the future will be similar to those in the past. Forecast models typically use empirically tuned parametric distributions to approximate past sequences, and project those distributions into the future to make a forecast. While parametric models do a good job of describing average outcomes, they are not explicitly designed to capture the full range of variability between sequences, and can suffer from over-tuning of the parameters. In particular, parametric forecasts may produce a high rate of "surprises" - sequences that land outside the forecast range. Here we present a non-parametric forecast method that cuts out the parametric "middleman" between training data and forecast. The method is based on finding past sequences that are similar to the target sequence, and evaluating their outcomes. We quantify similarity as the Poisson probability that the observed event count in a past sequence reflects the same underlying intensity as the observed event count in the target sequence. Event counts are defined in terms of differential magnitude relative to the mainshock. The forecast is then constructed from the distribution of past sequence outcomes, weighted by their similarity. We compare the similarity forecast with the Reasenberg and Jones (RJ95) method, for a set of 2807 global aftershock sequences of M≥6 mainshocks. We implement a sequence-specific RJ95 forecast using a global average prior and Bayesian updating, but do not propagate epistemic uncertainty. The RJ95 forecast is somewhat more precise than the similarity forecast: 90% of observed sequences fall within a factor of two of the median RJ95 forecast value, whereas the fraction is 85% for the similarity forecast. However, the surprise rate is much higher for the RJ95 forecast; 10% of observed sequences fall in the upper 2.5% of the (Poissonian) forecast range. The surprise rate is less than 3% for the similarity forecast. 
The similarity forecast may be useful to emergency managers and non-specialists when confidence or expertise in parametric forecasting may be lacking. The method makes over-tuning impossible, and minimizes the rate of surprises. At the least, this forecast constitutes a useful benchmark for more precisely tuned parametric forecasts.
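
The similarity-weighting idea can be sketched in a few lines. The shared-intensity estimate (the average of the two counts) and the product-of-likelihoods weight below are illustrative assumptions, not the paper's exact formulation:

```python
import math

def poisson_pmf(k, lam):
    """Poisson probability of observing k events given intensity lam."""
    return math.exp(-lam) * lam ** k / math.factorial(k)

def similarity(n_target, n_past):
    """Probability-based weight: how plausibly the two observed counts
    share one Poisson intensity (estimated here as their average).
    Illustrative; the paper's exact similarity definition may differ."""
    lam = 0.5 * (n_target + n_past)
    return poisson_pmf(n_target, lam) * poisson_pmf(n_past, lam)

def similarity_forecast(n_target, past_sequences):
    """past_sequences: (early_count, outcome_count) pairs from a training
    catalog. Returns the similarity-weighted mean outcome."""
    weights = [similarity(n_target, n) for n, _ in past_sequences]
    total = sum(weights)
    return sum(w * out for w, (_, out) in zip(weights, past_sequences)) / total
```

A target sequence with an early count of 6 draws its forecast almost entirely from past sequences with similar counts, so a hypothetical catalog of one quiet and one prolific sequence yields a forecast near the quiet sequence's outcome.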

  2. Application Study of Comprehensive Forecasting Model Based on Entropy Weighting Method on Trend of PM2.5 Concentration in Guangzhou, China

    PubMed Central

    Liu, Dong-jun; Li, Li

    2015-01-01

    PM2.5 is the main factor in haze-fog pollution in China. In this study, the trend of PM2.5 concentration was analyzed qualitatively using mathematical models and simulation. A comprehensive forecasting model (CFM) was developed based on combination-forecasting ideas. The Autoregressive Integrated Moving Average (ARIMA) model, Artificial Neural Networks (ANNs), and the Exponential Smoothing Method (ESM) were used to predict the time series data of PM2.5 concentration. The results of the three methods were combined using weights from the Entropy Weighting Method to produce the comprehensive forecast. The trend of PM2.5 concentration in Guangzhou, China was quantitatively forecasted with the CFM, the results were compared with those of the three single models, and PM2.5 concentrations for the next ten days were predicted. The comprehensive forecasting model balanced the deviations of the individual prediction methods and had better applicability, offering a new prediction approach for the air quality forecasting field. PMID:26110332
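
A common recipe for the Entropy Weighting Method assigns each candidate model a weight derived from the information content of its error series; the construction below is a standard textbook version and may differ in detail from the one used in the paper:

```python
import math

def entropy_weights(errors_by_model):
    """Weights for combining forecasts, computed from each model's
    per-period absolute errors. Error series with lower entropy (more
    structure) carry more information and receive larger weight.
    A generic Entropy Weighting Method sketch, not the paper's code."""
    m = len(errors_by_model[0])
    k = 1.0 / math.log(m)                     # normalizing constant
    divergences = []
    for errs in errors_by_model:
        total = sum(errs)
        p = [e / total for e in errs]         # normalized error shares
        entropy = -k * sum(pi * math.log(pi) for pi in p if pi > 0)
        divergences.append(1.0 - entropy)     # degree of divergence
    s = sum(divergences)
    return [d / s for d in divergences]

def combine(forecasts, weights):
    """Weighted combination forecast for one period."""
    return sum(w * f for w, f in zip(forecasts, weights))
```

A model whose errors are perfectly uniform has maximum entropy and contributes nothing beyond the others, so its weight collapses toward zero.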

  4. An impact analysis of forecasting methods and forecasting parameters on bullwhip effect

    NASA Astrophysics Data System (ADS)

    Silitonga, R. Y. H.; Jelly, N.

    2018-04-01

    The bullwhip effect is the amplification of demand variance from the downstream to the upstream end of a supply chain. Forecasting methods and forecasting parameters are recognized as factors that affect the bullwhip phenomenon, and simulations can be developed to study them. Previous studies have simulated the bullwhip effect in several ways, including mathematical equation modelling, information control modelling, and computer programs. In this study a spreadsheet program named Bullwhip Explorer was used to simulate the bullwhip effect. Several scenarios were developed to show how the bullwhip effect ratio changes with different forecasting methods and forecasting parameters. The forecasting methods used were mean demand, moving average, exponential smoothing, demand signalling, and minimum expected mean squared error. The forecasting parameters were the moving-average period, smoothing parameter, signalling factor, and safety stock factor. The results showed that decreasing the moving-average period, increasing the smoothing parameter, or increasing the signalling factor produced a larger bullwhip effect ratio, whereas the safety stock factor had no impact on the bullwhip effect.
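
The kind of experiment described above can be reproduced in a few lines. The single-stage order-up-to policy and Gaussian demand below are simplifying assumptions for illustration, not the Bullwhip Explorer implementation:

```python
import random
import statistics

def bullwhip_ratio(ma_period, lead_time=2, n=5000, seed=1):
    """Single-stage bullwhip ratio Var(orders)/Var(demand) under an
    order-up-to policy with a moving-average demand forecast.
    A simplified sketch, not the Bullwhip Explorer spreadsheet itself."""
    rng = random.Random(seed)
    demand = [rng.gauss(100, 10) for _ in range(n)]
    orders, prev_level = [], None
    for t in range(ma_period, n):
        forecast = sum(demand[t - ma_period:t]) / ma_period
        level = forecast * lead_time          # order-up-to target level
        if prev_level is not None:
            # order covers current demand plus the change in target level
            orders.append(demand[t] + level - prev_level)
        prev_level = level
    return statistics.variance(orders) / statistics.variance(demand)
```

Under this simplified policy with i.i.d. demand, the theoretical ratio is 1 + 2L²/p² for a moving average of length p and lead time L, so shortening the averaging window amplifies order variance, consistent with the study's finding.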

  5. The development of rainfall forecasting using the Kalman filter

    NASA Astrophysics Data System (ADS)

    Zulfi, Mohammad; Hasan, Moh.; Dwidja Purnomo, Kosala

    2018-04-01

    Rainfall forecasting is of great interest for agricultural planning, since rainfall information supports decisions about when to plant certain commodities. In this study, rainfall is forecasted with ARIMA and Kalman filter methods. The Kalman filter method expresses a time series model in linear state-space form to determine future forecasts, using a recursive solution to minimize error. The rainfall data in this research were clustered by K-means clustering, and the Kalman filter method was implemented to model and forecast rainfall in each cluster. We used ARIMA (p,d,q) to construct a state space for the Kalman filter model, so we have four groups of data and one model for each group. In conclusion, the Kalman filter method outperforms the ARIMA model for rainfall forecasting in each group, as shown by its smaller forecast error.
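
The recursive predict-update cycle at the heart of the Kalman filter can be illustrated with a scalar local-level model, a deliberate simplification of the ARIMA-based state space used in the study:

```python
def kalman_local_level(obs, q=1.0, r=4.0):
    """Scalar Kalman filter for a local-level state-space model.
    q is the state-noise variance and r the observation-noise variance
    (both arbitrary here). Returns the one-step-ahead forecasts and the
    final filtered state."""
    x, p = obs[0], 1.0                     # initial state estimate, variance
    forecasts = []
    for z in obs[1:]:
        x_pred, p_pred = x, p + q          # predict step
        forecasts.append(x_pred)
        gain = p_pred / (p_pred + r)       # Kalman gain
        x = x_pred + gain * (z - x_pred)   # update with the innovation
        p = (1 - gain) * p_pred
    return forecasts, x
```

The recursion needs only the previous state and its variance, which is what makes the method attractive for on-line forecasting: no refitting over the whole history is required when a new observation arrives.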

  6. Forecasting electricity usage using univariate time series models

    NASA Astrophysics Data System (ADS)

    Hock-Eam, Lim; Chee-Yin, Yip

    2014-12-01

    Electricity is an important energy source. A sufficient supply of electricity is vital to support a country's development and growth. Due to changing socio-economic characteristics, increasing competition, and deregulation of the electricity supply industry, electricity demand forecasting is even more important than before. It is imperative to evaluate and compare the predictive performance of various forecasting methods, as this provides further insight into the weaknesses and strengths of each method. In the literature, there is mixed evidence on the best forecasting method for electricity demand. This paper compares the predictive performance of univariate time series models for forecasting electricity demand using monthly data on maximum electricity load in Malaysia from January 2003 to December 2013. Results reveal that the Box-Jenkins method produces the best out-of-sample predictive performance. On the other hand, the Holt-Winters exponential smoothing method is a good choice for in-sample predictive performance.
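
The in-sample versus out-of-sample distinction drawn above can be made concrete with a small evaluation harness. The naive benchmark method and the 12-month holdout below are hypothetical choices for illustration, not the paper's exact protocol:

```python
import math

def rmse(actual, predicted):
    """Root mean square error over paired values."""
    return math.sqrt(sum((a - p) ** 2 for a, p in zip(actual, predicted))
                     / len(actual))

def evaluate(series, fit_and_forecast, holdout=12):
    """Score a forecasting method both in-sample and out-of-sample.
    fit_and_forecast(train, h) must return (one_step_fitted_values,
    h_step_forecasts)."""
    train, test = series[:-holdout], series[-holdout:]
    fitted, fcst = fit_and_forecast(train, len(test))
    return rmse(train[1:], fitted), rmse(test, fcst)

def naive(train, h):
    """Naive benchmark: each value is forecast by its predecessor,
    and the horizon is filled with the last observed value."""
    return train[:-1], [train[-1]] * h
```

A method can fit the training period closely yet extrapolate poorly; comparing the two scores is what separates the in-sample winner from the out-of-sample winner in studies like this one.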

  7. Compensated Box-Jenkins transfer function for short term load forecast

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Breipohl, A.; Yu, Z.; Lee, F.N.

    In past years, the Box-Jenkins ARIMA method and the Box-Jenkins transfer function (BJTF) method have been among the most commonly used methods for short-term electrical load forecasting. But when there is a sudden change in temperature, both methods tend to exhibit larger forecast errors. This paper demonstrates that the load forecasting errors resulting from either the BJ ARIMA model or the BJTF model are not simply white noise, but rather well-patterned noise, and the patterns in the noise can be used to improve the forecasts. Thus a compensated Box-Jenkins transfer function method (CBJTF) is proposed to improve the accuracy of the load prediction. Case studies show about a 14-33% reduction in the root mean square (RMS) errors of the forecasts, depending on the compensation time period as well as the compensation method used.

  8. Adjusting for partial verification or workup bias in meta-analyses of diagnostic accuracy studies.

    PubMed

    de Groot, Joris A H; Dendukuri, Nandini; Janssen, Kristel J M; Reitsma, Johannes B; Brophy, James; Joseph, Lawrence; Bossuyt, Patrick M M; Moons, Karel G M

    2012-04-15

    A key requirement in the design of diagnostic accuracy studies is that all study participants receive both the test under evaluation and the reference standard test. For a variety of practical and ethical reasons, sometimes only a proportion of patients receive the reference standard, which can bias the accuracy estimates. Numerous methods have been described for correcting this partial verification bias or workup bias in individual studies. In this article, the authors describe a Bayesian method for obtaining adjusted results from a diagnostic meta-analysis when partial verification or workup bias is present in a subset of the primary studies. The method corrects for verification bias without having to exclude primary studies with verification bias, thus preserving the main advantages of a meta-analysis: increased precision and better generalizability. The results of this method are compared with the existing methods for dealing with verification bias in diagnostic meta-analyses. For illustration, the authors use empirical data from a systematic review of studies of the accuracy of the immunohistochemistry test for diagnosis of human epidermal growth factor receptor 2 status in breast cancer patients.

  9. Forecasting next season's Ixodes ricinus nymphal density: the example of southern Germany 2018.

    PubMed

    Brugger, Katharina; Walter, Melanie; Chitimia-Dobler, Lidia; Dobler, Gerhard; Rubel, Franz

    2018-05-30

    The castor bean tick, Ixodes ricinus (L.) (Ixodida: Ixodidae), is the principal vector of pathogens causing tick-borne encephalitis or Lyme borreliosis in Europe. It is therefore of general interest to make an estimate of the density of I. ricinus for the whole year at the beginning of the tick season. There are two necessary conditions for making a successful prediction: a long homogeneous time series of observed tick density and a clear biological relationship between environmental predictors and tick density. A 9-year time series covering the period 2009-2017 of nymphal I. ricinus flagged at monthly intervals in southern Germany has been used. With the hypothesis that I. ricinus density is triggered by the fructification of the European beech 2 years before, the mean annual temperature of the previous year, and the current mean winter temperature (December-February), a forecast of the annual nymphal tick density has been made. Therefore, a Poisson regression model was generated resulting in an explained variance of 93.4% and an error of [Formula: see text] ticks per [Formula: see text] (annual [Formula: see text] collected ticks/[Formula: see text]). An independent verification of the forecast for the year 2017 resulted in 187 predicted versus 180 observed nymphs per [Formula: see text]. For the year 2018 a relatively high number of 443 questing I. ricinus nymphs per [Formula: see text] is forecasted, i.e., a "good" tick year.
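
The model class used here, a Poisson regression with a log link, can be sketched for a single predictor. The Newton-Raphson fit below is a generic implementation of that model class, not the authors' code or coefficients:

```python
import math

def fit_poisson_1d(x, y, iters=25):
    """Fit y ~ Poisson(exp(b0 + b1 * x)) by Newton-Raphson on the
    log-likelihood. A generic single-predictor sketch; the paper uses
    three predictors (beech fructification and two temperatures)."""
    b0, b1 = math.log(sum(y) / len(y)), 0.0   # start at the log mean rate
    for _ in range(iters):
        mu = [math.exp(b0 + b1 * xi) for xi in x]
        # score (gradient of the log-likelihood)
        g0 = sum(yi - mi for yi, mi in zip(y, mu))
        g1 = sum((yi - mi) * xi for yi, mi, xi in zip(y, mu, x))
        # observed information (negative Hessian), a 2x2 matrix
        h00 = sum(mu)
        h01 = sum(mi * xi for mi, xi in zip(mu, x))
        h11 = sum(mi * xi * xi for mi, xi in zip(mu, x))
        det = h00 * h11 - h01 * h01
        b0 += (h11 * g0 - h01 * g1) / det
        b1 += (h00 * g1 - h01 * g0) / det
    return b0, b1
```

Because the link is exponential, each fitted coefficient acts multiplicatively on the predicted density, which is what lets a strong beech mast year two seasons earlier translate into a "good" tick year.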

  10. Mesoscale Predictability and Error Growth in Short Range Ensemble Forecasts

    NASA Astrophysics Data System (ADS)

    Gingrich, Mark

    Although it was originally suggested that small-scale, unresolved errors corrupt forecasts at all scales through an inverse error cascade, some authors have proposed that mesoscale circulations resulting from stationary forcing on the larger scale may inherit the predictability of the large-scale motions. Further, the relative contributions of large- and small-scale uncertainties in producing error growth at the mesoscales remain largely unknown. Here, 100-member ensemble forecasts are initialized from an ensemble Kalman filter (EnKF) to simulate two winter storms impacting the East Coast of the United States in 2010. Four verification metrics are considered: the local snow water equivalent, total liquid water, and 850 hPa temperatures representing mesoscale features; and the sea level pressure field representing a synoptic feature. It is found that while the predictability of the mesoscale features can be tied to the synoptic forecast, significant uncertainty existed on the synoptic scale at lead times as short as 18 hours. Therefore, mesoscale details remained uncertain in both storms due to uncertainties at the large scale. Additionally, the ensemble perturbation kinetic energy did not show an appreciable upscale propagation of error for either case. Instead, the initial condition perturbations from the cycling EnKF were maximized at large scales and immediately amplified at all scales without requiring initial upscale propagation. This suggests that relatively small errors in the synoptic-scale initialization may have more importance in limiting predictability than errors in the unresolved, small-scale initial conditions.

  11. SOLAR FLARE PREDICTION USING SDO/HMI VECTOR MAGNETIC FIELD DATA WITH A MACHINE-LEARNING ALGORITHM

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bobra, M. G.; Couvidat, S., E-mail: couvidat@stanford.edu

    2015-01-10

    We attempt to forecast M- and X-class solar flares using a machine-learning algorithm, called support vector machine (SVM), and four years of data from the Solar Dynamics Observatory's Helioseismic and Magnetic Imager, the first instrument to continuously map the full-disk photospheric vector magnetic field from space. Most flare forecasting efforts described in the literature use either line-of-sight magnetograms or a relatively small number of ground-based vector magnetograms. This is the first time a large data set of vector magnetograms has been used to forecast solar flares. We build a catalog of flaring and non-flaring active regions sampled from a database of 2071 active regions, comprised of 1.5 million active region patches of vector magnetic field data, and characterize each active region by 25 parameters. We then train and test the machine-learning algorithm and estimate its performance using forecast verification metrics with an emphasis on the true skill statistic (TSS). We obtain relatively high TSS scores and overall predictive abilities. We surmise that this is partly due to fine-tuning the SVM for this purpose and also to an advantageous set of features that can only be calculated from vector magnetic field data. We also apply a feature selection algorithm to determine which of our 25 features are useful for discriminating between flaring and non-flaring active regions and conclude that only a handful are needed for good predictive abilities.
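
The true skill statistic emphasized above is straightforward to compute from a 2x2 contingency table of forecasts versus observed flares:

```python
def true_skill_statistic(tp, fn, fp, tn):
    """TSS = POD - POFD from a 2x2 contingency table (hits, misses,
    false alarms, correct rejections). It ranges from -1 to 1 and is
    insensitive to the event/non-event class imbalance, which is why it
    suits rare-event flare forecasting."""
    pod = tp / (tp + fn)     # probability of detection
    pofd = fp / (fp + tn)    # probability of false detection
    return pod - pofd
```

A perfect forecast scores 1, a forecast with no discrimination (equal hit and false-alarm rates) scores 0, regardless of how rare flaring regions are in the sample.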

  12. A Study of Rapidly Developing Low Cloud Ceilings in a Stable Atmosphere at the Florida Spaceport

    NASA Technical Reports Server (NTRS)

    Wheeler, Mark M.; Case, Jonathan L.; Baggett, G. Wayne

    2006-01-01

    Forecasters at the Space Meteorology Group (SMG) issue 30 to 90 minute forecasts for low cloud ceilings at the Shuttle Landing Facility (KTTS) at Kennedy Space Center, FL for all Space Shuttle missions. Mission verification statistics have shown cloud ceilings to be the biggest forecast challenge. SMG forecasters are especially concerned with rapidly developing cloud ceilings below 8000 ft in a stable, capped thermodynamic environment because ceilings below 8000 ft restrict Shuttle landing operations and are the most challenging to predict accurately. This project involves the development of a database of these cases over east-central Florida in order to identify the onset, location, and if possible, dissipation times of rapidly developing low cloud ceilings. Another goal is to document the atmospheric regimes favoring this type of cloud development to improve forecast skill for such events during Space Shuttle launch and landing operations. A 10-year database of stable, rapid low cloud development days during daylight hours was compiled for the Florida cool-season months by examining the Cape Canaveral Air Force Station sounding data and identifying days that had high boundary layer relative humidity associated with a thermally-capped environment below 8000 ft. Archived hourly surface observations from KTTS and Melbourne, Orlando, Sanford, and Ocala, FL were then examined for the onset of cloud ceilings below 8000 ft between 1100 and 2000 UTC. Once the database was supplemented with the hourly surface cloud observations, visible satellite imagery was examined in 30-minute intervals to confirm event occurrences. This paper will present results from some of the rapidly developing cloud ceiling cases and the prevailing meteorological conditions associated with these events, focusing on potential precursor information that may help improve their prediction.

  13. Verification of the skill of numerical weather prediction models in forecasting rainfall from U.S. landfalling tropical cyclones

    NASA Astrophysics Data System (ADS)

    Luitel, Beda; Villarini, Gabriele; Vecchi, Gabriel A.

    2018-01-01

    The goal of this study is the evaluation of the skill of five state-of-the-art numerical weather prediction (NWP) systems [European Centre for Medium-Range Weather Forecasts (ECMWF), UK Met Office (UKMO), National Centers for Environmental Prediction (NCEP), China Meteorological Administration (CMA), and Canadian Meteorological Center (CMC)] in forecasting rainfall from North Atlantic tropical cyclones (TCs). Analyses focus on 15 North Atlantic TCs that made landfall along the U.S. coast over the 2007-2012 period. As reference data we use gridded rainfall provided by the Climate Prediction Center (CPC). We consider forecast lead-times up to five days. To benchmark the skill of these models, we consider rainfall estimates from one radar-based (Stage IV) and four satellite-based [Tropical Rainfall Measuring Mission - Multi-satellite Precipitation Analysis (TMPA, both real-time and research version); Precipitation Estimation from Remotely Sensed Information using Artificial Neural Networks (PERSIANN); the CPC MORPHing Technique (CMORPH)] rainfall products. Daily and storm total rainfall fields from each of these remote sensing products are compared to the reference data to obtain information about the range of errors we can expect from "observational data." The skill of the NWP models is quantified: (1) by visual examination of the distribution of the errors in storm total rainfall for the different lead-times, and numerical examination of the first three moments of the error distribution; (2) relative to climatology at the daily scale. Considering these skill metrics, we conclude that the NWP models can provide skillful forecasts of TC rainfall with lead-times up to 48 h, without a consistently best or worst NWP model.
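
The first three moments of an error distribution, used above to summarize storm-total rainfall errors, can be computed as:

```python
import math

def error_moments(forecast, observed):
    """Mean, standard deviation, and skewness of the forecast-error
    distribution: the three summary statistics used to characterize
    storm-total rainfall errors. Population (1/n) moments are used here."""
    errors = [f - o for f, o in zip(forecast, observed)]
    n = len(errors)
    mean = sum(errors) / n                        # bias
    var = sum((e - mean) ** 2 for e in errors) / n
    std = math.sqrt(var)                          # spread
    skew = (sum((e - mean) ** 3 for e in errors) / (n * std ** 3)
            if std > 0 else 0.0)                  # asymmetry
    return mean, std, skew
```

The mean captures systematic over- or under-prediction, the standard deviation the typical error size, and the skewness whether large errors fall preferentially on the wet or dry side.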

  14. Impact of Moist Physics Complexity on Tropical Cyclone Simulations from the Hurricane Weather Research and Forecast System

    NASA Astrophysics Data System (ADS)

    Kalina, E. A.; Biswas, M.; Newman, K.; Grell, E. D.; Bernardet, L.; Frimel, J.; Carson, L.

    2017-12-01

    The parameterization of moist physics in numerical weather prediction models plays an important role in modulating tropical cyclone structure, intensity, and evolution. The Hurricane Weather Research and Forecast system (HWRF), the National Oceanic and Atmospheric Administration's operational model for tropical cyclone prediction, uses the Scale-Aware Simplified Arakawa-Schubert (SASAS) cumulus scheme and a modified version of the Ferrier-Aligo (FA) microphysics scheme to parameterize moist physics. The FA scheme contains a number of simplifications that allow it to run efficiently in an operational setting, which includes prescribing values for hydrometeor number concentrations (i.e., single-moment microphysics) and advecting the total condensate rather than the individual hydrometeor species. To investigate the impact of these simplifying assumptions on the HWRF forecast, the FA scheme was replaced with the more complex double-moment Thompson microphysics scheme, which individually advects cloud ice, cloud water, rain, snow, and graupel. Retrospective HWRF forecasts of tropical cyclones that occurred in the Atlantic and eastern Pacific ocean basins from 2015-2017 were then simulated and compared to those produced by the operational HWRF configuration. Both traditional model verification metrics (i.e., tropical cyclone track and intensity) and process-oriented metrics (e.g., storm size, precipitation structure, and heating rates from the microphysics scheme) will be presented and compared. The sensitivity of these results to the cumulus scheme used (i.e., the operational SASAS versus the Grell-Freitas scheme) also will be examined. Finally, the merits of replacing the moist physics schemes that are used operationally with the alternatives tested here will be discussed from a standpoint of forecast accuracy versus computational resources.

  15. Preliminary Results of a U.S. Deep South Warm Season Deep Convective Initiation Modeling Experiment using NASA SPoRT Initialization Datasets for Operational National Weather Service Local Model Runs

    NASA Technical Reports Server (NTRS)

    Medlin, Jeffrey M.; Wood, Lance; Zavodsky, Brad; Case, Jon; Molthan, Andrew

    2012-01-01

    The initiation of deep convection during the warm season is a forecast challenge in the relatively high instability and low wind shear environment of the U.S. Deep South. Despite improved knowledge of the character of well-known mesoscale features such as local sea-, bay- and land-breezes, observations show that the evolution of these features falls well short of fully describing the location of first initiates. A joint collaborative modeling effort among the NWS offices in Mobile, AL, and Houston, TX, and NASA's Short-term Prediction Research and Transition (SPoRT) Center was undertaken during the 2012 warm season to examine the impact of certain NASA-produced products on the Weather Research and Forecasting Environmental Modeling System. The NASA products were: 4-km Land Information System data, a 1-km sea surface temperature analysis, and a 4-km greenness vegetation fraction analysis. Similar domains were established over the southeast Texas and Alabama coastlines, each with a 9 km outer grid spacing and a 3 km inner nest spacing. The model was run at each NWS office once per day out to 24 hours from 0600 UTC, using the NCEP Global Forecast System for initial and boundary conditions. Control runs without the NASA products were made at the NASA SPoRT Center. The NCAR Model Evaluation Tools verification package was used to evaluate both the forecast timing and location of the first initiates, with a focus on the impacts of the NASA products on the model forecasts. Select case studies will be presented to highlight the influence of the products.

  16. Naive vs. Sophisticated Methods of Forecasting Public Library Circulations.

    ERIC Educational Resources Information Center

    Brooks, Terrence A.

    1984-01-01

    Two sophisticated--autoregressive integrated moving average (ARIMA), straight-line regression--and two naive--simple average, monthly average--forecasting techniques were used to forecast monthly circulation totals of 34 public libraries. Comparisons of forecasts and actual totals revealed that ARIMA and monthly average methods had smallest mean…

  17. Remote and Local Influences in Forecasting Pacific SST: a Linear Inverse Model and a Multimodel Ensemble Study

    NASA Astrophysics Data System (ADS)

    Faggiani Dias, D.; Subramanian, A. C.; Zanna, L.; Miller, A. J.

    2017-12-01

    Sea surface temperature (SST) in the Pacific sector is well known to vary on time scales from seasonal to decadal, and the ability to predict these SST fluctuations has many societal and economic benefits. Therefore, we use a suite of statistical linear inverse models (LIMs) to understand the remote and local SST variability that influences SST predictions over the North Pacific region and to further improve our understanding of how the long observed SST record can help better guide multi-model ensemble forecasts. Observed monthly SST anomalies in the Pacific sector (between 15°S and 60°N) are used to construct different regional LIMs for seasonal to decadal prediction. The forecast skills of the LIMs are compared to those from two operational forecast systems in the North American Multi-Model Ensemble (NMME), revealing that the LIM has better skill in the Northeastern Pacific than the NMME models. The LIM is also found to have forecast skill for SST in the Tropical Pacific comparable to the NMME models. This skill, however, is highly dependent on the initialization month, with forecasts initialized during the summer having better skill than those initialized during the winter. The forecast skill of the LIM is also influenced by the verification period utilized to make the predictions, likely due to the changing character of El Niño in the 20th century. The North Pacific seems to be a source of predictability for the Tropics on seasonal to interannual time scales, while the Tropics act to worsen the skill of the forecast in the North Pacific. The data were also bandpassed into seasonal, interannual and decadal time scales to identify the relationships between time scales using the structure of the propagator matrix. For the decadal component, this coupling occurs the other way around: the Tropics seem to be a source of predictability for the Extratropics, but the Extratropics do not improve the predictability for the Tropics.
These results indicate the importance of temporal scale interactions in improving predictability on decadal timescales. Hence, we show that LIMs are not only useful as benchmarks for estimates of statistical skill, but also to isolate contributions to the forecast skills from different timescales, spatial scales or even model components.
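
A linear inverse model in its simplest, one-dimensional form estimates a damped-persistence propagator from lag covariances. This scalar caricature (the study itself uses multivariate LIMs with a propagator matrix) shows the mechanics; the AR(1) generator is only a toy series to exercise the fit:

```python
import random

def simulate_ar1(n, coef=0.8, seed=0):
    """Toy AR(1) anomaly series used to exercise the fit below."""
    rng = random.Random(seed)
    x, out = 0.0, []
    for _ in range(n):
        x = coef * x + rng.gauss(0, 1)
        out.append(x)
    return out

def fit_lim_1d(series, lag=1):
    """Scalar linear inverse model: estimate the propagator
    g = C(lag)/C(0) from lag covariances, then forecast by damped
    persistence of the anomaly toward the mean."""
    n = len(series)
    mean = sum(series) / n
    anom = [s - mean for s in series]
    c0 = sum(a * a for a in anom[:n - lag]) / (n - lag)
    clag = sum(anom[i] * anom[i + lag] for i in range(n - lag)) / (n - lag)
    g = clag / c0

    def forecast(x, steps=1):
        return (x - mean) * g ** steps + mean

    return g, forecast
```

In the multivariate case the scalar g becomes the propagator matrix whose structure the authors examine to diagnose which regions and time scales feed predictability into which others.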

  18. An overview of 2016 WISE Urban Summer Observation Campaign (WUSOC 2016) in the Seoul metropolitan area of South Korea

    NASA Astrophysics Data System (ADS)

    Jung, Jae-Won; Kim, Sang-Woo; Shim, Jae-Kwan; Kwak, Kyung-Hwan

    2017-04-01

    The Weather Information Service Engine (WISE), a project launched by the Korea Meteorological Administration (KMA), aims to operate an urban meteorological observation network from 2012 to 2019 and to test and operate application weather services (e.g., flash flood, road weather, city ecology, city microclimate, dispersion of hazardous substances) in 2019 through the development of the Advanced Storm-scale Analysis Prediction System (ASAPS) for storm-scale hazardous weather monitoring and prediction. The WISE institute has completed constructing the urban meteorological observation network across 31 cities in the Seoul metropolitan area and has built a real-time test operation and verification system by improving ASAPS, which produces 1 km, 6 hour forecast information based on the 5 km forecast information of KMA. Field measurements of the 2016 WISE Urban Summer Observation Campaign (WUSOC 2016) were conducted in the Seoul metropolitan area of South Korea from August 22 to October 14, 2016. Involving over 70 researchers from more than 12 environmental and atmospheric science research groups in South Korea, WUSOC 2016 focused on special observations around the Seoul metropolitan area, including severe rain storm observations using a mobile observation car and radiosondes, and wind profile observations using wind Doppler lidar and radiosondes. WUSOC 2016 aimed at data quality control, accuracy verification, usability checks, and quality improvement of ASAPS at the observation stations constructed in WISE. In addition, we intend to contribute to the activation of urban fusion weather research and hazardous weather research through joint observation and data sharing.

  19. EURATOM safeguards efforts in the development of spent fuel verification methods by non-destructive assay

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Matloch, L.; Vaccaro, S.; Couland, M.

    The back end of the nuclear fuel cycle continues to develop. The European Commission, particularly the Nuclear Safeguards Directorate of the Directorate General for Energy, implements Euratom safeguards and needs to adapt to this situation. The verification methods for spent nuclear fuel, which EURATOM inspectors can use, require continuous improvement. Whereas the Euratom on-site laboratories provide accurate verification results for fuel undergoing reprocessing, the situation is different for spent fuel which is destined for final storage. In particular, new needs arise from the increasing number of cask loadings for interim dry storage and the advanced plans for the construction of encapsulation plants and geological repositories. Various scenarios present verification challenges. In this context, EURATOM Safeguards, often in cooperation with other stakeholders, is committed to further improvement of NDA methods for spent fuel verification. In this effort EURATOM plays various roles, ranging from definition of inspection needs to direct participation in development of measurement systems, including support of research in the framework of international agreements and via the EC Support Program to the IAEA. This paper presents recent progress in selected NDA methods. These methods have been conceived to satisfy different spent fuel verification needs, ranging from attribute testing to pin-level partial defect verification. (authors)

  20. Using SysML for verification and validation planning on the Large Synoptic Survey Telescope (LSST)

    NASA Astrophysics Data System (ADS)

    Selvy, Brian M.; Claver, Charles; Angeli, George

    2014-08-01

    This paper provides an overview of the tool, language, and methodology used for Verification and Validation Planning on the Large Synoptic Survey Telescope (LSST) Project. LSST has implemented a Model Based Systems Engineering (MBSE) approach as a means of defining all systems engineering planning and definition activities that have historically been captured in paper documents. Specifically, LSST has adopted the Systems Modeling Language (SysML) standard and is utilizing a software tool called Enterprise Architect, developed by Sparx Systems. Much of the historical use of SysML has focused on the early phases of the project life cycle. Our approach is to extend the advantages of MBSE into later stages of the construction project. This paper details the methodology employed to use the tool to document the verification planning phases, including the extension of the language to accommodate the project's needs. The process includes defining the Verification Plan for each requirement, which in turn consists of a Verification Requirement, Success Criteria, Verification Method(s), Verification Level, and Verification Owner. Each Verification Method for each Requirement is defined as a Verification Activity and mapped into Verification Events, which are collections of activities that can be executed concurrently in an efficient and complementary way. Verification Event dependency and sequences are modeled using Activity Diagrams. The methodology employed also ties in to the Project Management Control System (PMCS), which utilizes Primavera P6 software, mapping each Verification Activity as a step in a planned activity. This approach leads to full traceability from initial Requirement to scheduled, costed, and resource loaded PMCS task-based activities, ensuring all requirements will be verified.

  1. Forecasting Non-Stationary Diarrhea, Acute Respiratory Infection, and Malaria Time-Series in Niono, Mali

    PubMed Central

    Medina, Daniel C.; Findley, Sally E.; Guindo, Boubacar; Doumbia, Seydou

    2007-01-01

    Background Much of the developing world, particularly sub-Saharan Africa, exhibits high levels of morbidity and mortality associated with diarrhea, acute respiratory infection, and malaria. With the increasing awareness that the aforementioned infectious diseases impose an enormous burden on developing countries, public health programs therein could benefit from parsimonious general-purpose forecasting methods to enhance infectious disease intervention. Unfortunately, these disease time-series often i) suffer from non-stationarity; ii) exhibit large inter-annual plus seasonal fluctuations; and, iii) require disease-specific tailoring of forecasting methods. Methodology/Principal Findings In this longitudinal retrospective (01/1996–06/2004) investigation, diarrhea, acute respiratory infection of the lower tract, and malaria consultation time-series are fitted with a general-purpose econometric method, namely the multiplicative Holt-Winters, to produce contemporaneous on-line forecasts for the district of Niono, Mali. This method accommodates seasonal, as well as inter-annual, fluctuations and produces reasonably accurate median 2- and 3-month horizon forecasts for these non-stationary time-series, i.e., 92% of the 24 time-series forecasts generated (2 forecast horizons, 3 diseases, and 4 age categories = 24 time-series forecasts) have mean absolute percentage errors circa 25%. 
Conclusions/Significance The multiplicative Holt-Winters forecasting method: i) performs well across diseases with dramatically distinct transmission modes and hence it is a strong general-purpose forecasting method candidate for non-stationary epidemiological time-series; ii) obliquely captures prior non-linear interactions between climate and the aforementioned disease dynamics thus, obviating the need for more complex disease-specific climate-based parametric forecasting methods in the district of Niono; furthermore, iii) readily decomposes time-series into seasonal components thereby potentially assisting with programming of public health interventions, as well as monitoring of disease dynamics modification. Therefore, these forecasts could improve infectious diseases management in the district of Niono, Mali, and elsewhere in the Sahel. PMID:18030322
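
The core of the multiplicative Holt-Winters method used in the study can be sketched in a few lines of plain Python. The smoothing constants and the toy series below are illustrative assumptions, not the study's fitted values or data.

```python
def holt_winters_multiplicative(y, m, alpha=0.5, beta=0.1, gamma=0.5, h=3):
    """Forecast h steps ahead for series y with seasonal period m."""
    # Initialize level, trend, and seasonal factors from the first two seasons.
    season1, season2 = y[:m], y[m:2 * m]
    level = sum(season1) / m
    trend = (sum(season2) - sum(season1)) / (m * m)
    seasonals = [v / level for v in season1]

    for t in range(m, len(y)):
        s = seasonals[t % m]
        last_level = level
        level = alpha * (y[t] / s) + (1 - alpha) * (level + trend)
        trend = beta * (level - last_level) + (1 - beta) * trend
        seasonals[t % m] = gamma * (y[t] / level) + (1 - gamma) * s

    return [(level + (k + 1) * trend) * seasonals[(len(y) + k) % m]
            for k in range(h)]

def mape(actual, forecast):
    """Mean absolute percentage error, the accuracy measure quoted above."""
    return 100.0 * sum(abs((a - f) / a)
                       for a, f in zip(actual, forecast)) / len(actual)

# Toy monthly series with trend and multiplicative seasonality (period 12).
y = [(100 + 2 * t) * (1.3 if t % 12 < 6 else 1.0) for t in range(48)]
fc = holt_winters_multiplicative(y[:45], m=12, h=3)
err = mape(y[45:], fc)
```

The 2- and 3-month horizon forecasts in the paper correspond to evaluating such h-step-ahead predictions against held-out consultations and summarizing with MAPE.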

  2. Forecasting non-stationary diarrhea, acute respiratory infection, and malaria time-series in Niono, Mali.

    PubMed

    Medina, Daniel C; Findley, Sally E; Guindo, Boubacar; Doumbia, Seydou

    2007-11-21

    Much of the developing world, particularly sub-Saharan Africa, exhibits high levels of morbidity and mortality associated with diarrhea, acute respiratory infection, and malaria. With the increasing awareness that the aforementioned infectious diseases impose an enormous burden on developing countries, public health programs therein could benefit from parsimonious general-purpose forecasting methods to enhance infectious disease intervention. Unfortunately, these disease time-series often i) suffer from non-stationarity; ii) exhibit large inter-annual plus seasonal fluctuations; and, iii) require disease-specific tailoring of forecasting methods. In this longitudinal retrospective (01/1996-06/2004) investigation, diarrhea, acute respiratory infection of the lower tract, and malaria consultation time-series are fitted with a general-purpose econometric method, namely the multiplicative Holt-Winters, to produce contemporaneous on-line forecasts for the district of Niono, Mali. This method accommodates seasonal, as well as inter-annual, fluctuations and produces reasonably accurate median 2- and 3-month horizon forecasts for these non-stationary time-series, i.e., 92% of the 24 time-series forecasts generated (2 forecast horizons, 3 diseases, and 4 age categories = 24 time-series forecasts) have mean absolute percentage errors circa 25%. 
The multiplicative Holt-Winters forecasting method: i) performs well across diseases with dramatically distinct transmission modes and hence it is a strong general-purpose forecasting method candidate for non-stationary epidemiological time-series; ii) obliquely captures prior non-linear interactions between climate and the aforementioned disease dynamics thus, obviating the need for more complex disease-specific climate-based parametric forecasting methods in the district of Niono; furthermore, iii) readily decomposes time-series into seasonal components thereby potentially assisting with programming of public health interventions, as well as monitoring of disease dynamics modification. Therefore, these forecasts could improve infectious diseases management in the district of Niono, Mali, and elsewhere in the Sahel.

  3. Study on verifying the angle measurement performance of the rotary-laser system

    NASA Astrophysics Data System (ADS)

    Zhao, Jin; Ren, Yongjie; Lin, Jiarui; Yin, Shibin; Zhu, Jigui

    2018-04-01

An angle verification method was developed to verify the angle measurement performance of the rotary-laser system, since angle measurement performance has a great impact on measuring accuracy. Although there is some previous research on verifying the angle measurement uncertainty of the rotary-laser system, it still has some limitations. High-precision reference angles are used in the proposed method, and an integrated verification platform is set up to evaluate the performance of the system. This paper also examines the error that has the biggest influence on the verification system. Some errors of the verification system are avoided via the experimental method, and some are compensated through the computational formula and curve fitting. Experimental results show that the angle measurement performance meets the requirements for coordinate measurement, and that the verification platform can efficiently evaluate the angle measurement uncertainty of the rotary-laser system.

  4. RTL validation methodology on high complexity wireless microcontroller using OVM technique for fast time to market

    NASA Astrophysics Data System (ADS)

    Zhafirah Muhammad, Nurul; Harun, A.; Hambali, N. A. M. A.; Murad, S. A. Z.; Mohyar, S. N.; Isa, M. N.; Jambek, AB

    2017-11-01

Increased demand for Internet of Things (IoT) applications has inadvertently forced a move toward higher-complexity integrated circuits supporting SoC designs. This increase in complexity poses complicated validation challenges, and researchers have responded with various methodologies, notably dynamic verification, formal verification, and hybrid techniques. Moreover, it is very important to discover bugs early in the SoC verification process in order to reduce verification time and achieve fast time to market. This paper therefore focuses on a verification methodology that can be applied at the Register Transfer Level (RTL) of an SoC based on the AMBA bus design. In addition, the Open Verification Methodology (OVM) offers an easier approach to RTL validation, not as a replacement for the traditional method but as a means of achieving fast time to market. Thus, OVM is proposed in this paper as the verification method for larger designs, to avert bottlenecks in the validation platform.

  5. Implementation of Automatic Clustering Algorithm and Fuzzy Time Series in Motorcycle Sales Forecasting

    NASA Astrophysics Data System (ADS)

    Rasim; Junaeti, E.; Wirantika, R.

    2018-01-01

Accurate forecasting of product sales depends on the forecasting method used. The purpose of this research is to build a motorcycle sales forecasting application using the Fuzzy Time Series method combined with interval determination using an automatic clustering algorithm. Forecasting is done using motorcycle sales data from the last ten years, and the forecast error rate is measured using the Mean Percentage Error (MPE) and Mean Absolute Percentage Error (MAPE). The one-year forecasts obtained in this study show good accuracy.
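
The two error measures named in the abstract are easy to state directly; the sample data below are illustrative, not the study's.

```python
def mpe(actual, forecast):
    """Mean Percentage Error: signed, so it reveals systematic bias."""
    return 100.0 * sum((a - f) / a for a, f in zip(actual, forecast)) / len(actual)

def mape(actual, forecast):
    """Mean Absolute Percentage Error: error magnitude, ignoring sign."""
    return 100.0 * sum(abs(a - f) / a for a, f in zip(actual, forecast)) / len(actual)

actual   = [120, 130, 125, 140]   # illustrative monthly sales
forecast = [110, 135, 120, 150]

print(round(mpe(actual, forecast), 2), round(mape(actual, forecast), 2))  # → 0.34 5.83
```

Note that over- and under-forecasts cancel in MPE (0.34% here) while MAPE (5.83%) reports their average magnitude, which is why both are usually quoted together.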

  6. A Hybrid On-line Verification Method of Relay Setting

    NASA Astrophysics Data System (ADS)

    Gao, Wangyuan; Chen, Qing; Si, Ji; Huang, Xin

    2017-05-01

Along with the rapid development of the power industry, grid structures are becoming more sophisticated, and the validity and rationality of protective relaying are vital to the security of power systems. To increase that security, it is essential to verify relay setting values online. Traditional verification methods mainly include comparison of the protection range and comparison of the calculated setting value. For on-line verification, verifying speed is the key: comparing the protection range gives accurate results, but the computational burden is heavy and verification is slow, whereas comparing the calculated setting value is much faster but conservative and inaccurate. Taking overcurrent protection as an example, this paper analyses the advantages and disadvantages of the two traditional methods and proposes a hybrid on-line verification method that synthesizes their advantages and meets the requirements of accurate on-line verification.

  7. School District Enrollment Projections: A Comparison of Three Methods.

    ERIC Educational Resources Information Center

    Pettibone, Timothy J.; Bushan, Latha

This study assesses three methods of forecasting school enrollments: the cohort-survival method (grade progression), the statistical forecasting procedure developed by the Statistical Analysis System (SAS) Institute, and a simple ratio computation. The three methods were used to forecast school enrollments for kindergarten through grade 12 in a…

  8. FORMED: Bringing Formal Methods to the Engineering Desktop

    DTIC Science & Technology

    2016-02-01

integrates formal verification into software design and development by precisely defining semantics for a restricted subset of the Unified Modeling...input-output contract satisfaction and absence of null pointer dereferences. Subject terms: Formal Methods, Software Verification, Model-Based...Domain specific languages (DSLs) drive both implementation and formal verification

  9. Detection of mesoscale zones of atmospheric instabilities using remote sensing and weather forecasting model data

    NASA Astrophysics Data System (ADS)

    Winnicki, I.; Jasinski, J.; Kroszczynski, K.; Pietrek, S.

    2009-04-01

The paper presents elements of research conducted in the Faculty of Civil Engineering and Geodesy of the Military University of Technology, Warsaw, Poland, concerning the application of mesoscale models and remote sensing data to determining meteorological conditions of aircraft flight directly related to atmospheric instabilities. The quality of meteorological support of aviation depends on prompt and effective forecasting of changes in weather conditions. The paper presents a computer module for detecting and monitoring zones of cloud cover, precipitation and turbulence along the aircraft flight route. It consists of programs and scripts for managing, processing and visualizing meteorological and remote sensing databases. The application was developed in Matlab® for Windows®. The module uses products of the COAMPS (Coupled Ocean/Atmosphere Mesoscale Prediction System) mesoscale non-hydrostatic model of the atmosphere developed by the US Naval Research Laboratory, a satellite image acquisition system for MSG-2 (Meteosat Second Generation) of the European Organization for the Exploitation of Meteorological Satellites (EUMETSAT), and meteorological radar data acquired from the Institute of Meteorology and Water Management (IMGW), Warsaw, Poland. The satellite image acquisition system and the COAMPS model are run operationally in the Faculty of Civil Engineering and Geodesy. The mesoscale model is run on an IA64 Feniks multiprocessor 64-bit computer cluster. The basic task of the module is to enable a complex analysis of data sets with a miscellaneous information structure and to verify COAMPS results using satellite and radar data. The research is conducted using a uniform cartographic projection for all elements of the database: satellite and radar images are transformed into the Lambert Conformal projection of COAMPS. This facilitates simultaneous interpretation and supports the decision-making process for safe execution of flights.
Forecasts are based on horizontal distributions and vertical profiles of meteorological parameters produced by the module. Verification of the forecasts includes research on spatial and temporal correlations between structures generated by the model, e.g. cloudiness and meteorological phenomena (fogs, precipitation, turbulence), and structures identified on current satellite images. The developed module determines meteorological parameter fields for vertical profiles of the atmosphere. Interpolation procedures run at user-selected standard (pressure) or height levels of the model make it possible to determine weather conditions along any aircraft route. Basic parameters determined by the procedures for, e.g., flight safety include cloud base, visibility, cloud cover, turbulence coefficient, icing and precipitation intensity. Determination of icing and turbulence characteristics is based on standard methods and on new methods taken from other mesoscale models. The research also includes investigating new-generation mesoscale models, especially remote sensing data assimilation, driven by the need to develop and introduce objective methods of forecasting weather conditions. Current research in the Faculty of Civil Engineering and Geodesy concerns validation of the mesoscale module performance.

  10. Daily Peak Load Forecasting of Next Day using Weather Distribution and Comparison Value of Each Nearby Date Data

    NASA Astrophysics Data System (ADS)

    Ito, Shigenobu; Yukita, Kazuto; Goto, Yasuyuki; Ichiyanagi, Katsuhiro; Nakano, Hiroyuki

With the development of industry in recent years, dependence on electric energy has grown year by year, so a reliable electric power supply is needed. However, storing a huge amount of electric energy is very difficult, and supply must be kept in balance with demand, which changes hour by hour. Consequently, to supply high-quality, highly dependable electric power economically and efficiently, the movement of electric power demand must be forecast carefully in advance, and that forecast used as the basis for the supply and demand management plan. Load forecasting is therefore an important task in the demand management of electric power companies. To date, forecasting methods using fuzzy logic, neural networks, and regression models have been proposed to improve forecasting accuracy, and their accuracy is already high. But to commit electric power more economically and with higher accuracy, a new forecasting method with still higher accuracy is needed. In this paper, to improve on the accuracy of the former methods, a daily peak load forecasting method is proposed that uses the weather distribution of the highest and lowest temperatures together with comparison values from nearby dates.

  11. Adaptive spline autoregression threshold method in forecasting Mitsubishi car sales volume at PT Srikandi Diamond Motors

    NASA Astrophysics Data System (ADS)

    Susanti, D.; Hartini, E.; Permana, A.

    2017-01-01

Growing sales competition among companies in Indonesia means that every company needs proper planning in order to win the competition with other companies. One way to support such planning is to forecast car sales for the next few periods, so that the inventory of cars to be sold is proportional to the number of cars needed. To obtain correct forecasts, one of the methods that can be used is Adaptive Spline Threshold Autoregression (ASTAR). Therefore, this discussion focuses on the use of the ASTAR method in forecasting the volume of car sales at PT Srikandi Diamond Motors using time-series data. In this research, forecasting with the ASTAR method produced approximately correct values.

  12. [Improved euler algorithm for trend forecast model and its application to oil spectrum analysis].

    PubMed

    Zheng, Chang-song; Ma, Biao

    2009-04-01

Oil atomic spectrometric analysis is one of the most important methods for fault diagnosis and condition monitoring of large machinery, and the grey method is well suited to trend forecasting. Using oil atomic spectrometric analysis results and grey forecast theory, the present paper establishes a grey forecast model of the Fe/Cu concentration trend in a power-shift steering transmission. To address the shortcomings of the grey method in trend forecasting, the improved Euler algorithm is put forward for the first time to resolve the problem that the forecast value of the old grey model depends on the first test value, which limits its precision. As shown in the example, the new method makes the forecast value more precise. Combined with the threshold value from oil atomic spectrometric analysis, the new method was applied to Fe/Cu concentration forecasting, and early warning of fault information was obtained. Steps can thus be taken to prevent faults, and the algorithm can be extended to condition monitoring in industry.
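
The classical GM(1,1) grey forecast model that the paper sets out to improve can be sketched as follows. The wear-metal series below is illustrative, not the paper's data, and the improved Euler discretization itself is not reproduced here.

```python
import math

def gm11(x0, h=1):
    """Fit a classical GM(1,1) grey model to series x0 and forecast h steps ahead."""
    n = len(x0)
    x1 = [sum(x0[:k + 1]) for k in range(n)]              # 1-AGO accumulation
    z = [0.5 * (x1[k] + x1[k - 1]) for k in range(1, n)]  # background values
    y = x0[1:]
    # Least-squares fit of y[k] = -a*z[k] + b via the 2x2 normal equations.
    m = len(z)
    sz, sy = sum(z), sum(y)
    szz = sum(v * v for v in z)
    szy = sum(v * w for v, w in zip(z, y))
    det = m * szz - sz * sz
    a = -(m * szy - sz * sy) / det
    b = (szz * sy - sz * szy) / det
    # Time-response function of the whitening equation, then de-accumulate.
    x1_hat = lambda k: (x0[0] - b / a) * math.exp(-a * k) + b / a
    return [x1_hat(n + j) - x1_hat(n + j - 1) for j in range(h)]

# Fe concentration trend (ppm) from successive oil samples (illustrative data).
fe = [10.2, 11.0, 11.9, 12.9, 14.0]
fc = gm11(fe, h=2)
```

The paper's contribution concerns the last step: discretizing the whitening differential equation with the improved (Heun) Euler scheme instead of anchoring the solution solely to the first observation.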

  13. Applying different independent component analysis algorithms and support vector regression for IT chain store sales forecasting.

    PubMed

    Dai, Wensheng; Wu, Jui-Yu; Lu, Chi-Jie

    2014-01-01

Sales forecasting is one of the most important issues in managing information technology (IT) chain store sales, since an IT chain store has many branches. Integrating a feature extraction method with a prediction tool such as support vector regression (SVR) is a useful approach for constructing an effective sales forecasting scheme. Independent component analysis (ICA) is a novel feature extraction technique that has been widely applied to various forecasting problems, but, up to now, only the basic ICA method (i.e., the temporal ICA model) has been applied to the sales forecasting problem. In this paper, we utilize three different ICA methods, namely spatial ICA (sICA), temporal ICA (tICA), and spatiotemporal ICA (stICA), to extract features from the sales data and compare their performance in sales forecasting for an IT chain store. Experimental results from real sales data show that the sales forecasting scheme integrating stICA and SVR outperforms the comparison models in terms of forecasting error. The stICA is a promising tool for extracting effective features from branch sales data, and the extracted features can improve the prediction performance of SVR for sales forecasting.
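
The feature-extraction step can be illustrated with a bare-bones two-component FastICA on synthetic data. This is a generic sketch, not the authors' code; the mixing matrix and "branch sales" sources are invented, and the downstream SVR stage is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

def fastica_2(X, iters=200):
    """Recover two independent components from X (2 x n_samples)."""
    X = X - X.mean(axis=1, keepdims=True)
    # Whiten via eigendecomposition of the 2x2 covariance matrix.
    d, E = np.linalg.eigh(np.cov(X))
    Xw = (E / np.sqrt(d)).T @ X
    # Symmetric FastICA iteration with the tanh nonlinearity.
    W = rng.standard_normal((2, 2))
    for _ in range(iters):
        WX = W @ Xw
        G, Gp = np.tanh(WX), 1 - np.tanh(WX) ** 2
        W = (G @ Xw.T) / Xw.shape[1] - Gp.mean(axis=1, keepdims=True) * W
        U, _, Vt = np.linalg.svd(W)   # re-orthogonalize to keep components distinct
        W = U @ Vt
    return W @ Xw

# Two synthetic "branch sales" series mixing a smooth seasonal source and a
# switching (promotion-like) source.
t = np.arange(200)
sources = np.vstack([np.sin(0.3 * t), np.sign(np.sin(0.11 * t + 0.5))])
X = np.array([[1.0, 0.6], [0.4, 1.0]]) @ sources
S = fastica_2(X)
```

In the paper's scheme, components such as the rows of `S` (extracted temporally, spatially, or spatiotemporally) become the input features for SVR.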

  14. Applying Different Independent Component Analysis Algorithms and Support Vector Regression for IT Chain Store Sales Forecasting

    PubMed Central

    Dai, Wensheng

    2014-01-01

Sales forecasting is one of the most important issues in managing information technology (IT) chain store sales, since an IT chain store has many branches. Integrating a feature extraction method with a prediction tool such as support vector regression (SVR) is a useful approach for constructing an effective sales forecasting scheme. Independent component analysis (ICA) is a novel feature extraction technique that has been widely applied to various forecasting problems, but, up to now, only the basic ICA method (i.e., the temporal ICA model) has been applied to the sales forecasting problem. In this paper, we utilize three different ICA methods, namely spatial ICA (sICA), temporal ICA (tICA), and spatiotemporal ICA (stICA), to extract features from the sales data and compare their performance in sales forecasting for an IT chain store. Experimental results from real sales data show that the sales forecasting scheme integrating stICA and SVR outperforms the comparison models in terms of forecasting error. The stICA is a promising tool for extracting effective features from branch sales data, and the extracted features can improve the prediction performance of SVR for sales forecasting. PMID:25165740

  15. A scoping review of malaria forecasting: past work and future directions

    PubMed Central

    Zinszer, Kate; Verma, Aman D; Charland, Katia; Brewer, Timothy F; Brownstein, John S; Sun, Zhuoyu; Buckeridge, David L

    2012-01-01

    Objectives There is a growing body of literature on malaria forecasting methods and the objective of our review is to identify and assess methods, including predictors, used to forecast malaria. Design Scoping review. Two independent reviewers searched information sources, assessed studies for inclusion and extracted data from each study. Information sources Search strategies were developed and the following databases were searched: CAB Abstracts, EMBASE, Global Health, MEDLINE, ProQuest Dissertations & Theses and Web of Science. Key journals and websites were also manually searched. Eligibility criteria for included studies We included studies that forecasted incidence, prevalence or epidemics of malaria over time. A description of the forecasting model and an assessment of the forecast accuracy of the model were requirements for inclusion. Studies were restricted to human populations and to autochthonous transmission settings. Results We identified 29 different studies that met our inclusion criteria for this review. The forecasting approaches included statistical modelling, mathematical modelling and machine learning methods. Climate-related predictors were used consistently in forecasting models, with the most common predictors being rainfall, relative humidity, temperature and the normalised difference vegetation index. Model evaluation was typically based on a reserved portion of data and accuracy was measured in a variety of ways including mean-squared error and correlation coefficients. We could not compare the forecast accuracy of models from the different studies as the evaluation measures differed across the studies. 
Conclusions Applying different forecasting methods to the same data, exploring the predictive ability of non-environmental variables, including transmission reducing interventions and using common forecast accuracy measures will allow malaria researchers to compare and improve models and methods, which should improve the quality of malaria forecasting. PMID:23180505

  16. Combination of synoptical-analogous and dynamical methods to increase skill score of monthly air temperature forecasts over Northern Eurasia

    NASA Astrophysics Data System (ADS)

    Khan, Valentina; Tscepelev, Valery; Vilfand, Roman; Kulikova, Irina; Kruglova, Ekaterina; Tischenko, Vladimir

    2016-04-01

Long-range forecasts at monthly-to-seasonal time scales are in great demand from socio-economic sectors for managing climate-related risks and opportunities. At the same time, the quality of long-range forecasts does not fully meet users' needs. Different approaches, including combinations of different prognostic models, are used in forecast centres to increase prediction skill for specific regions and globally. The present study considers two forecasting methods used in the operational practice of the Hydrometeorological Centre of Russia. One is a synoptic-analogue method for forecasting surface air temperature at the monthly scale. The other is a dynamical system based on the global semi-Lagrangian model SL-AV, developed jointly by the Institute of Numerical Mathematics and the Hydrometeorological Centre of Russia; the seasonal version of this model is used to issue global and regional forecasts at monthly-to-seasonal time scales. This study presents an evaluation of surface air temperature forecasts generated with the above-mentioned synoptic-statistical and dynamical models, and with their combination, which can potentially increase skill scores over Northern Eurasia. The test sample of operational forecasts covers the period from 2010 through 2015. The seasonal and interannual variability of the skill scores of these methods is discussed. The quality of all the forecasts depends strongly on the inertia of macro-circulation processes: skill scores decrease during significant alterations of synoptic fields for both the dynamical and empirical schemes. In some cases, the procedure of combining forecasts from the different methods demonstrated its effectiveness. This study was supported by Russian Science Foundation Grant No. 14-37-00053.
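
One simple, widely used way to combine two forecasts, offered here only as an illustration since the abstract does not detail the combination procedure, is to weight each method inversely to its recent mean squared error.

```python
def inverse_mse_weights(errors_a, errors_b):
    """Combination weights proportional to 1/MSE of each method's past errors."""
    mse_a = sum(e * e for e in errors_a) / len(errors_a)
    mse_b = sum(e * e for e in errors_b) / len(errors_b)
    wa = (1 / mse_a) / (1 / mse_a + 1 / mse_b)
    return wa, 1 - wa

# Past temperature-anomaly errors (deg C) of a synoptic-analogue method and a
# dynamical model (illustrative numbers), then the combined new-month forecast.
wa, wb = inverse_mse_weights([1.2, -0.8, 0.5], [2.0, 1.5, -1.9])
combined = wa * 1.0 + wb * 1.8   # this month's two raw forecasts: 1.0 and 1.8
print(round(wa, 2), round(combined, 2))  # → 0.81 1.15
```

The more accurate method (smaller MSE) dominates the combination, which is the usual rationale for such schemes when the component forecasts have stable relative skill.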

  17. Statistical models of temperature in the Sacramento-San Joaquin delta under climate-change scenarios and ecological implications

    USGS Publications Warehouse

    Wagner, R.W.; Stacey, M.; Brown, L.R.; Dettinger, M.

    2011-01-01

    Changes in water temperatures caused by climate change in California's Sacramento-San Joaquin Delta will affect the ecosystem through physiological rates of fishes and invertebrates. This study presents statistical models that can be used to forecast water temperature within the Delta as a response to atmospheric conditions. The daily average model performed well (R2 values greater than 0.93 during verification periods) for all stations within the Delta and San Francisco Bay provided there was at least 1 year of calibration data. To provide long-term projections of Delta water temperature, we forced the model with downscaled data from climate scenarios. Based on these projections, the ecological implications for the delta smelt, a key species, were assessed based on temperature thresholds. The model forecasts increases in the number of days above temperatures causing high mortality (especially along the Sacramento River) and a shift in thermal conditions for spawning to earlier in the year. ?? 2011 The Author(s).

  18. Prediction of Winter Storm Tracks and Intensities Using the GFDL fvGFS Model

    NASA Astrophysics Data System (ADS)

    Rees, S.; Boaggio, K.; Marchok, T.; Morin, M.; Lin, S. J.

    2017-12-01

    The GFDL Finite-Volume Cubed-Sphere Dynamical core (FV3) is coupled to a modified version of the Global Forecast System (GFS) physics and initial conditions, to form the fvGFS model. This model is similar to the one being implemented as the next-generation operational weather model for the NWS, which is also FV3-powered. Much work has been done to verify fvGFS tropical cyclone prediction, but little has been done to verify winter storm prediction. These costly and dangerous storms impact parts of the U.S. every year. To verify winter storms we ran the NCEP operational cyclone tracker, developed at GFDL, on semi-real-time 13 km horizontal resolution fvGFS forecasts. We have found that fvGFS compares well to the operational GFS in storm track and intensity, though often predicts slightly higher intensities. This presentation will show the track and intensity verification from the past two winter seasons and explore possible reasons for bias.

  19. A stochastic post-processing method for solar irradiance forecasts derived from NWPs models

    NASA Astrophysics Data System (ADS)

    Lara-Fanego, V.; Pozo-Vazquez, D.; Ruiz-Arias, J. A.; Santos-Alamillos, F. J.; Tovar-Pescador, J.

    2010-09-01

Solar irradiance forecasting is an important area of research for the future of solar-based renewable energy systems. Numerical Weather Prediction (NWP) models have proved to be a valuable tool for solar irradiance forecasting with lead times up to a few days. Nevertheless, these models show low skill in forecasting solar irradiance under cloudy conditions. Additionally, climatological (seasonally averaged) aerosol loadings are usually used in these models, leading to considerable errors in Direct Normal Irradiance (DNI) forecasts under high aerosol load conditions. In this work we propose a post-processing method for Global Horizontal Irradiance (GHI) and DNI forecasts derived from NWP models. In particular, the method is based on Autoregressive Moving Average with External Explanatory Variables (ARMAX) stochastic models. These models are applied to the residuals of the NWP forecasts and use as external variables the measured cloud fraction and aerosol loading of the day prior to the forecast. The method is evaluated on a one-month set of three-day-ahead GHI and DNI forecasts, obtained with the WRF mesoscale atmospheric model, for several locations in Andalusia (Southern Spain). The cloud fraction is derived from MSG satellite estimates and the aerosol loading from MODIS platform estimates; both sources of information are readily available at the time of the forecast. Results showed a considerable improvement in the forecasting skill of the WRF model using the proposed post-processing method. In particular, the relative improvement (in terms of RMSE) for DNI during summer is about 20%, and a similar value is obtained for GHI during winter.
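
The residual post-processing idea can be sketched with a plain autoregressive-plus-exogenous regression on synthetic data. This is a simplified stand-in for the paper's full ARMAX models (no moving-average term, least squares instead of maximum likelihood), and all numbers below are invented.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 300
cloud = rng.uniform(0.0, 1.0, n)     # prior-day cloud fraction (exogenous input)
resid = np.zeros(n)                  # synthetic NWP irradiance forecast error
for t in range(1, n):
    resid[t] = 0.6 * resid[t - 1] + 50.0 * cloud[t] + rng.normal(0.0, 5.0)

# Regress resid[t] on [1, resid[t-1], cloud[t]].
A = np.column_stack([np.ones(n - 1), resid[:-1], cloud[1:]])
(c, phi, beta), *_ = np.linalg.lstsq(A, resid[1:], rcond=None)

# Correct tomorrow's raw NWP GHI forecast by the predicted residual.
raw_forecast = 550.0                                # W/m^2, illustrative
pred_resid = c + phi * resid[-1] + beta * 0.4       # tomorrow's exog cloud = 0.4
corrected = raw_forecast - pred_resid
```

Because the cloud and aerosol observations are available at forecast time, such a correction can be applied operationally on top of the raw NWP output.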

  20. Toward Improved Land Surface Initialization in Support of Regional WRF Forecasts at the Kenya Meteorological Service (KMS)

    NASA Technical Reports Server (NTRS)

    Case, Johnathan L.; Mungai, John; Sakwa, Vincent; Kabuchanga, Eric; Zavodsky, Bradley T.; Limaye, Ashutosh S.

    2014-01-01

Flooding and drought are two key forecasting challenges for the Kenya Meteorological Service (KMS). Atmospheric processes leading to excessive precipitation and/or prolonged drought can be quite sensitive to the state of the land surface, which interacts with the planetary boundary layer (PBL) of the atmosphere, providing a source of heat and moisture. The development and evolution of precipitation systems are affected by heat and moisture fluxes from the land surface, particularly within weakly-sheared environments such as in the tropics and sub-tropics. These daytime heat and moisture fluxes can be strongly influenced by land cover, vegetation, and soil moisture content. Therefore, it is important to represent the land surface state as accurately as possible in land surface and numerical weather prediction (NWP) models. Enhanced regional modeling capabilities have the potential to improve forecast guidance in support of daily operations and high-impact weather over eastern Africa. KMS currently runs a configuration of the Weather Research and Forecasting (WRF) NWP model in real time to support its daily forecasting operations, making use of the NOAA/National Weather Service (NWS) Science and Training Resource Center's Environmental Modeling System (EMS) to manage and produce the KMS-WRF runs on a regional grid over eastern Africa. Two organizations at the NASA Marshall Space Flight Center in Huntsville, AL, SERVIR and the Short-term Prediction Research and Transition (SPoRT) Center, have established a working partnership with KMS for enhancing its regional modeling capabilities through new datasets and tools. To accomplish this goal, SPoRT and SERVIR are providing enhanced, experimental land surface initialization datasets and model verification capabilities to KMS as part of this collaboration.
To produce a land-surface initialization more consistent with the resolution of the KMS-WRF runs, the NASA Land Information System (LIS) is run at a comparable resolution to provide real-time, daily soil initialization data in place of data interpolated from the National Centers for Environmental Prediction (NCEP) Global Forecast System (GFS) model soil moisture and temperature fields. Additionally, real-time green vegetation fraction (GVF) data from the Visible Infrared Imaging Radiometer Suite (VIIRS) on the Suomi National Polar-orbiting Partnership (Suomi-NPP) satellite will be incorporated into the KMS-WRF runs, once they become publicly available from the National Environmental Satellite Data and Information Service (NESDIS). Finally, model verification capabilities will be transitioned to KMS using the Model Evaluation Tools (MET; Brown et al. 2009) package in conjunction with a dynamic scripting package developed by SPoRT (Zavodsky et al. 2014), to help quantify possible improvements in simulated temperature, moisture and precipitation resulting from the experimental land surface initialization. Furthermore, the transition of these MET tools will enable KMS to monitor model forecast accuracy in near real time. This paper presents preliminary efforts to improve land surface model initialization over eastern Africa in support of operations at KMS. The remainder of this extended abstract is organized as follows: the collaborating organizations involved in the project are described in Section 2; background information on LIS and its configuration for eastern Africa is presented in Section 3; the WRF configuration used in this modeling experiment is described in Section 4; sample experimental WRF output with and without LIS initialization data is given in Section 5; and a summary is given in Section 6, followed by acknowledgements and references.

  1. The Evaluation of the Regional Atmospheric Modeling System in the Eastern Range Dispersion Assessment System

    NASA Technical Reports Server (NTRS)

    Case, Jonathan

    2001-01-01

    The Applied Meteorology Unit (AMU) evaluated the Regional Atmospheric Modeling System (RAMS) contained within the Eastern Range Dispersion Assessment System (ERDAS). ERDAS provides emergency response guidance for Cape Canaveral Air Force Station and Kennedy Space Center operations in the event of an accidental hazardous material release or aborted vehicle launch. The RAMS prognostic data are available to ERDAS for display and are used to initialize the 45th Space Wing/Range Safety dispersion model. Thus, the accuracy of the dispersion predictions is dependent upon the accuracy of RAMS forecasts. The RAMS evaluation consisted of an objective and subjective component for the 1999 and 2000 Florida warm seasons, and the 1999-2000 cool season. In the objective evaluation, the AMU generated model error statistics at surface and upper-level observational sites, compared RAMS errors to a coarser RAMS grid configuration, and benchmarked RAMS against the nationally-used Eta model. In the subjective evaluation, the AMU compared forecast cold fronts, low-level temperature inversions, and precipitation to observations during the 1999-2000 cool season, verified the development of the RAMS forecast east coast sea breeze during both warm seasons, and examined the RAMS daily thunderstorm initiation and precipitation patterns during the 2000 warm season. This report summarizes the objective and subjective verification for all three seasons.

  2. Flash flood warnings for ungauged basins based on high-resolution precipitation forecasts

    NASA Astrophysics Data System (ADS)

    Demargne, Julie; Javelle, Pierre; Organde, Didier; de Saint Aubin, Céline; Janet, Bruno

    2016-04-01

    Early detection of flash floods, which are typically triggered by severe rainfall events, is still challenging due to large meteorological and hydrologic uncertainties at the spatial and temporal scales of interest. Also, the rapid rise of waters necessarily limits the lead time of warnings to alert communities and activate effective emergency procedures. To better anticipate such events and mitigate their impacts, the French national service in charge of flood forecasting (SCHAPI) is implementing a national flash flood warning system for small-to-medium (up to 1000 km²) ungauged basins based on a discharge-threshold flood warning method called AIGA (Javelle et al. 2014). The current deterministic AIGA system has been run in real time in the South of France since 2005 and has been tested in the RHYTMME project (rhytmme.irstea.fr/). It ingests the operational radar-gauge QPE grids from Météo-France to run a simplified hourly distributed hydrologic model at a 1-km² resolution every 15 minutes. This produces real-time peak discharge estimates along the river network, which are subsequently compared to regionalized flood frequency estimates to provide warnings according to the AIGA-estimated return period of the ongoing event. The calibration and regionalization of the hydrologic model have recently been enhanced for implementing the national flash flood warning system for the entire French territory by 2016. To further extend the effective warning lead time, the flash flood warning system is being enhanced to ingest Météo-France's AROME-NWC high-resolution precipitation nowcasts. The AROME-NWC system combines the most recent available observations with forecasts from the nowcasting version of the AROME convection-permitting model (Auger et al. 2015). 
AROME-NWC pre-operational deterministic precipitation forecasts, produced every hour at a 2.5-km resolution for a 6-hr forecast horizon, were provided for 3 significant rain events in September and November 2014 and ingested as time-lagged ensembles. The time-lagged approach is a practical way of accounting for the atmospheric forecast uncertainty when no extensive forecast archive is available for statistical modelling. The evaluation on 185 basins in the South of France showed significant improvements in terms of flash flood event detection and effective warning lead time, compared to warnings from the current AIGA setup (without any future precipitation). Various verification metrics (e.g., Relative Mean Error, Brier Skill Score) show the skill of ensemble precipitation and flow forecasts compared to single-valued persistence benchmarks. Planned enhancements include integrating additional probabilistic NWP products (e.g., AROME precipitation ensembles over a longer forecast horizon), accounting for and reducing hydrologic uncertainties from the model parameters and initial conditions via data assimilation, and developing a comprehensive observational and post-event damage database to determine decision-relevant warning thresholds for flood magnitude and probability. Javelle, P., Demargne, J., Defrance, D., Arnaud, P., 2014. Evaluating flash flood warnings at ungauged locations using post-event surveys: a case study with the AIGA warning system. Hydrological Sciences Journal, doi: 10.1080/02626667.2014.923970. Auger, L., Dupont, O., Hagelin, S., Brousseau, P., Brovelli, P., 2015. AROME-NWC: a new nowcasting tool based on an operational mesoscale forecasting system. Quarterly Journal of the Royal Meteorological Society, 141: 1603-1611, doi: 10.1002/qj.2463.
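
The Brier skill score used above to benchmark the ensemble warnings against single-valued persistence can be sketched directly from its definition. A minimal illustration with toy exceedance probabilities and outcomes, not the study's verification code:

```python
def brier_score(probs, outcomes):
    """Mean squared difference between forecast probabilities and binary outcomes."""
    return sum((p - o) ** 2 for p, o in zip(probs, outcomes)) / len(probs)

def brier_skill_score(probs, ref_probs, outcomes):
    """Skill relative to a reference forecast (e.g. persistence); 1 is perfect, <= 0 is no skill."""
    return 1.0 - brier_score(probs, outcomes) / brier_score(ref_probs, outcomes)

# Toy example: time-lagged-ensemble exceedance probabilities vs. a persistence benchmark
ens_probs = [0.9, 0.2, 0.7, 0.1]   # P(discharge exceeds the warning threshold)
persist = [1.0, 0.0, 0.0, 0.0]     # persistence repeats the last observed state
observed = [1, 0, 1, 0]            # 1 if the threshold was actually exceeded
```

With these toy numbers the ensemble attains a Brier skill score of 0.85 over persistence; in practice the score is computed per event definition and lead time.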

  3. New Models for Forecasting Enrollments: Fuzzy Time Series and Neural Network Approaches.

    ERIC Educational Resources Information Center

    Song, Qiang; Chissom, Brad S.

    Since university enrollment forecasting is very important, many different methods and models have been proposed by researchers. Two new methods for enrollment forecasting are introduced: (1) the fuzzy time series model; and (2) the artificial neural networks model. Fuzzy time series has been proposed to deal with forecasting problems within a…

  4. Post-processing method for wind speed ensemble forecast using wind speed and direction

    NASA Astrophysics Data System (ADS)

    Sofie Eide, Siri; Bjørnar Bremnes, John; Steinsland, Ingelin

    2017-04-01

    Statistical methods are widely applied to enhance the quality of both deterministic and ensemble NWP forecasts. In many situations, like wind speed forecasting, most of the predictive information is contained in one variable in the NWP models. However, in statistical calibration of deterministic forecasts it is often seen that including more variables can further improve forecast skill. For ensembles this is rarely taken advantage of, mainly because it is generally not straightforward to include multiple variables. In this study, it is demonstrated how multiple variables can be included in Bayesian model averaging (BMA) by using a flexible regression method for estimating the conditional means. The method is applied to wind speed forecasting at 204 Norwegian stations based on wind speed and direction forecasts from the ECMWF ensemble system. At about 85% of the sites the ensemble forecasts were improved in terms of CRPS by adding wind direction as a predictor compared to only using wind speed. On average the improvements were about 5%, but mainly for moderate to strong wind situations. For weak wind speeds, adding wind direction had a more or less neutral impact.
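
The continuous ranked probability score (CRPS) used above to compare the calibrated ensembles can be computed for a single case from the standard empirical estimator; a minimal sketch, not the study's own code:

```python
def crps_ensemble(members, obs):
    """Empirical CRPS for one forecast case: mean |member - obs| minus half
    the mean absolute difference between all member pairs (lower is better)."""
    n = len(members)
    accuracy_term = sum(abs(m - obs) for m in members) / n
    spread_term = sum(abs(a - b) for a in members for b in members) / (2.0 * n * n)
    return accuracy_term - spread_term
```

For a deterministic (one-member) forecast the CRPS reduces to the absolute error, which makes it a convenient common yardstick for deterministic and ensemble systems; in verification practice it is averaged over many cases.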

  5. Runoff measurements and hydrological modelling for the estimation of rainfall volumes in an Alpine basin

    NASA Astrophysics Data System (ADS)

    Ranzi, R.; Bacchi, B.; Grossi, G.

    2003-01-01

    Streamflow data and water levels in reservoirs have been collected at 30 recording sites in the Toce river basin and its surroundings, upstream of Lago Maggiore, one of the target areas of the Mesoscale Alpine Programme (MAP) experiment. These data have been used for two purposes: firstly, the verification of a hydrological model, forced by rain-gauge data and the output of a mesoscale meteorological model, for flood simulation and forecasting; secondly, to solve an inverse problem: estimating rainfall volumes from the runoff data in mountain areas where the influence of orography and the limits of actual monitoring systems prevent accurate measurement of precipitation. The methods are illustrated for 19-20 September 1999, MAP Intensive Observing Period 2b, an event with a 4-year return period for the Toce river basin. Uncertainties in the estimates of the areal rainfall volumes based on rain-gauge data and via the inverse solution are assessed.

  6. Applications systems verification and transfer project. Volume 5: Operational applications of satellite snow-cover observations, northwest United States

    NASA Technical Reports Server (NTRS)

    Dillard, J. P.

    1981-01-01

    The study objective was to develop or modify methods in an operational framework that would allow incorporation of satellite-derived snow cover observations for prediction of snowmelt-derived runoff. Data were reviewed and verified for five basins in the Pacific Northwest. The data were analyzed for up to a 6-year period ending July 1978, and in all cases cover a low, average, and high snow cover/runoff year. Cloud cover is a major problem in these springtime runoff analyses and has hampered data collection for periods of up to 52 days. Tree cover and terrain are sufficiently dense and rugged to have caused problems as well. The interpretation of snowlines from satellite data was compared with conventional ground truth data and tested in operational streamflow forecasting models. When the satellite snow-covered area (SCA) data are incorporated in the SSARR (Streamflow Synthesis and Reservoir Regulation) model, there is a definite but minor improvement.

  7. Developing empirical lightning cessation forecast guidance for the Kennedy Space Center

    NASA Astrophysics Data System (ADS)

    Stano, Geoffrey T.

    The Kennedy Space Center in east-central Florida is one of the few locations in the country that issues lightning advisories. These forecasts are vital to the daily operations of the Space Center and take on even greater significance during launch operations. The U.S. Air Force's 45th Weather Squadron (45WS), which provides forecasts for the Space Center, has a good record of forecasting the initiation of lightning near its locations of special concern. However, the remaining problem is knowing when to cancel a lightning advisory. Without specific scientific guidelines detailing cessation activity, the Weather Squadron must keep advisories in place longer than necessary to ensure the safety of personnel and equipment. This unnecessary advisory time costs the Space Center millions of dollars in lost manpower each year. This research presents storm and environmental characteristics associated with lightning cessation that are then used to create lightning cessation guidelines for isolated thunderstorms for use by the 45WS during the warm season months of May through September. The research uses data from the Lightning Detection and Ranging (LDAR) network at the Kennedy Space Center, which can observe intra-cloud and portions of cloud-to-ground lightning strikes. Supporting data from the Cloud-to-Ground Lightning Surveillance System (CGLSS), radar observations from the Melbourne WSR-88D, and Cape Canaveral morning radiosonde launches also are included. Characteristics of the 116 thunderstorms comprising our dataset are presented. Most of these characteristics are based on LDAR-derived spark and flash data and have not been described previously. In particular, the first lightning activity is quantified as either cloud-to-ground (CG) or intra-cloud (IC). Only 10% of the storms in this research are found to initiate with a CG strike. Conversely, only 16% of the storms end with a CG strike. 
Another characteristic is the average horizontal extent of all the flashes comprising a storm. Our average is 12-14 km, while the greatest flash extends 26 km. Comparisons between the starting altitudes of the median and last flashes of a storm are analyzed, with only 37% of the storms having a higher last-flash initiating altitude. Additional observations are made of the total lightning flash rate, the percentage of CG to IC lightning, trends of individual flash initiation altitudes versus the average initiation altitude, the average inter-flash time distribution, and time series of inter-flash times. Five schemes to forecast lightning cessation are developed and evaluated. One hundred of the 116 storms were randomly selected as the dependent sample, while the remaining 16 storms were used for verification. The schemes included a correlation and regression tree analysis, multiple linear regression, trends of storm duration, the trend of the altitude of the greatest reflectivity relative to the time of the final flash, and a percentile scheme. Surprisingly, the percentile method was found to be both the most effective technique and the simplest. The inclusion of real-time storm parameters is found to have little effect on the results, suggesting that different forecast predictors, such as microphysical data from polarimetric radar, will be necessary to produce improved skill. When the percentile method used a confidence level of 99.5%, it successfully maintained lightning advisories for all 16 independent storms on which the schemes were tested. Since the computed wait time was 25 min, compared to the 45WS's most conservative and accurate wait time of 30 min, the percentile method saves 5 min for each advisory. This 5 min of savings safely shortens the Weather Squadron's advisories and saves money. Additionally, these results are the first to evaluate the commonly used 30/30 rule. 
The success of the percentile method is surprising since it outperforms more complex procedures involving correlation and regression tree analysis and regression schemes. These more sophisticated statistical analyses were expected to perform better since they include more predictors in the forecasts. However, with the predictors available to us, this was not the case. While not the expected result, the percentile method succeeds in creating a safe and expedited forecast.
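
The percentile scheme described above amounts to waiting, after the most recent flash, for a high percentile of the observed inter-flash times before cancelling an advisory. A minimal sketch of that idea with hypothetical inter-flash data, not the author's actual implementation:

```python
def percentile(values, q):
    """Percentile with linear interpolation between order statistics, 0 <= q <= 100."""
    s = sorted(values)
    pos = (len(s) - 1) * q / 100.0
    lo = int(pos)
    hi = min(lo + 1, len(s) - 1)
    return s[lo] + (s[hi] - s[lo]) * (pos - lo)

def cessation_wait_time(interflash_minutes, confidence=99.5):
    """Wait time after the last observed flash before cancelling an advisory:
    a high percentile of inter-flash times pooled from the dependent storm sample."""
    return percentile(interflash_minutes, confidence)
```

At the 99.5% confidence level, a new flash arriving sooner than this wait time would have been expected in all but 0.5% of the pooled inter-flash intervals, which is why the advisory can be cancelled once the wait time elapses without lightning.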

  8. A Comparison of Forecast Error Generators for Modeling Wind and Load Uncertainty

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lu, Ning; Diao, Ruisheng; Hafen, Ryan P.

    2013-07-25

    This paper presents four algorithms to generate random forecast error time series and compares their performance. The error time series are used to create real-time (RT), hour-ahead (HA), and day-ahead (DA) wind and load forecast time series that statistically match historically observed forecasting data sets used in power grid operation, to study the net load balancing need in variable generation integration studies. The four algorithms are truncated-normal distribution models, state-space based Markov models, seasonal autoregressive moving average (ARMA) models, and a stochastic-optimization based approach. The comparison is made using historical DA load forecasts and actual load values to generate new sets of DA forecasts with similar statistical forecast-error characteristics (i.e., mean, standard deviation, autocorrelation, and cross-correlation). The results show that all methods generate satisfactory results. One method may preserve one or two required statistical characteristics better than the other methods, but may not preserve other statistical characteristics as well. Because the wind and load forecast error generators are used in wind integration studies to produce wind and load forecast time series for stochastic planning processes, it is sometimes critical to use multiple methods to generate the error time series to obtain a statistically robust result. Therefore, this paper discusses and compares the capabilities of each algorithm to preserve the characteristics of the historical forecast data sets.
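
The autoregressive family of generators can be illustrated with a first-order model; the sketch below is an AR(1) simplification of the seasonal ARMA models described above, with assumed target statistics, not the paper's implementation:

```python
import random

def generate_ar1_errors(n, mean, std, lag1_corr, seed=0):
    """AR(1) forecast-error series whose stationary distribution matches a
    target mean, standard deviation, and lag-1 autocorrelation."""
    rng = random.Random(seed)
    phi = lag1_corr
    # Innovation scale chosen so the stationary standard deviation equals `std`
    innov_std = std * (1.0 - phi * phi) ** 0.5
    e, out = 0.0, []
    for _ in range(n):
        e = phi * e + rng.gauss(0.0, innov_std)
        out.append(mean + e)
    return out
```

Synthetic DA forecasts are then obtained by adding such error series to actual load or wind time series; seasonal ARMA variants extend the same recursion with additional autoregressive and moving-average terms.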

  9. Sufficient Forecasting Using Factor Models

    PubMed Central

    Fan, Jianqing; Xue, Lingzhou; Yao, Jiawei

    2017-01-01

    We consider forecasting a single time series when there is a large number of predictors and a possible nonlinear effect. The dimensionality was first reduced via a high-dimensional (approximate) factor model implemented by principal component analysis. Using the extracted factors, we develop a novel forecasting method called the sufficient forecasting, which provides a set of sufficient predictive indices, inferred from high-dimensional predictors, to deliver additional predictive power. The projected principal component analysis will be employed to enhance the accuracy of inferred factors when a semi-parametric (approximate) factor model is assumed. Our method is also applicable to cross-sectional sufficient regression using extracted factors. The connection between the sufficient forecasting and the deep learning architecture is explicitly stated. The sufficient forecasting correctly estimates projection indices of the underlying factors even in the presence of a nonparametric forecasting function. The proposed method extends the sufficient dimension reduction to high-dimensional regimes by condensing the cross-sectional information through factor models. We derive asymptotic properties for the estimate of the central subspace spanned by these projection directions as well as the estimates of the sufficient predictive indices. We further show that the natural method of running multiple regression of the target on estimated factors yields a linear estimate that actually falls into this central subspace. Our method and theory allow the number of predictors to be larger than the number of observations. We finally demonstrate that the sufficient forecasting improves upon the linear forecasting in both simulation studies and an empirical study of forecasting macroeconomic variables. PMID:29731537
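
The linear core of this pipeline, extracting factors by principal components and regressing the target on them, can be sketched as follows; this is a simplified illustration of the linear special case, not the authors' estimator for the nonparametric setting:

```python
import numpy as np

def extract_factors(X, k):
    """Estimate k latent factors from high-dimensional predictors X (T x p)
    via principal components of the centered data matrix."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T                       # T x k factor estimates

def factor_forecast(X, y, k):
    """Regress the target on extracted factors: the linear multiple-regression
    step whose estimate falls into the central subspace."""
    F = extract_factors(X, k)
    A = np.column_stack([np.ones(len(F)), F])  # intercept plus factors
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    return A @ beta                            # in-sample fitted forecasts
```

The sufficient forecasting generalizes this step by seeking multiple projection directions of the factors that suffice for a possibly nonlinear forecasting function.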

  10. Quantifying Uncertainty of Wind Power Production Through an Analog Ensemble

    NASA Astrophysics Data System (ADS)

    Shahriari, M.; Cervone, G.

    2016-12-01

    The Analog Ensemble (AnEn) method is used to generate probabilistic weather forecasts that quantify the uncertainty in power estimates at hypothetical wind farm locations. The data are from the NREL Eastern Wind Dataset, which includes more than 1,300 modeled wind farms. The AnEn model uses a two-dimensional grid to estimate the probability distribution of wind speed (the predictand) given the values of predictor variables such as temperature, pressure, geopotential height, and the U- and V-components of wind. The meteorological data are taken from the NCEP GFS, which is available at a 0.25-degree grid resolution. The methodology first divides the data into two classes: a training period and a verification period. The AnEn selects a point in the verification period and searches for the best matching estimates (analogs) in the training period. The predictand values at those analogs form the ensemble prediction for the point in the verification period. The model provides a grid of wind speed values and the uncertainty (probability index) associated with each estimate. Each wind farm is associated with a probability index which quantifies the degree of difficulty in estimating wind power. Further, the uncertainty in estimation is related to other factors such as topography, land cover, and wind resources. This is achieved by using a GIS system to compute the correlation between the probability index and geographical characteristics. This study has significant applications for investors in the renewable energy sector, especially wind farm developers. A lower level of uncertainty facilitates the process of submitting bids into day-ahead and real-time electricity markets. Thus, building wind farms in regions with lower levels of uncertainty will reduce the real-time operational risks and create a hedge against volatile real-time prices. 
Further, the links between wind-estimate uncertainty and factors such as topography and wind resources provide wind farm developers with valuable information regarding wind farm siting.
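
The analog search at the heart of AnEn can be sketched in a few lines; this toy version uses plain Euclidean distance with equal predictor weights, whereas operational AnEn implementations normalize each predictor and weight it by relevance:

```python
def analog_ensemble(train_preds, train_obs, target_pred, k=3):
    """Return the k historical predictand values whose predictor vectors
    best match the current predictors; these form the ensemble prediction."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    ranked = sorted(range(len(train_preds)),
                    key=lambda i: dist(train_preds[i], target_pred))
    return [train_obs[i] for i in ranked[:k]]
```

The spread of the returned analog values is what yields the probability distribution, and hence the probability index, at each grid point.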

  11. Evolving forecasting classifications and applications in health forecasting

    PubMed Central

    Soyiri, Ireneous N; Reidpath, Daniel D

    2012-01-01

    Health forecasting forewarns the health community about future health situations and disease episodes so that health systems can better allocate resources and manage demand. The tools used for developing and measuring the accuracy and validity of health forecasts are commonly not defined, although they are usually adapted forms of statistical procedures. This review identifies previous typologies used in classifying the forecasting methods commonly used in forecasting health conditions or situations. It then discusses the strengths and weaknesses of these methods and presents the choices available for measuring the accuracy of health-forecasting models, including a note on the discrepancies in the modes of validation. PMID:22615533

  12. HEPS4Power - Extended-range Hydrometeorological Ensemble Predictions for Improved Hydropower Operations and Revenues

    NASA Astrophysics Data System (ADS)

    Bogner, Konrad; Monhart, Samuel; Liniger, Mark; Spririg, Christoph; Jordan, Fred; Zappa, Massimiliano

    2015-04-01

    In recent years, considerable progress has been achieved in the operational prediction of floods and hydrological drought with up to ten days lead time. Both the public and the private sectors are currently using probabilistic runoff forecasts in order to monitor water resources and take action when critical conditions are to be expected. The use of extended-range predictions with lead times exceeding 10 days is not yet established. The hydropower sector in particular might benefit greatly from using hydrometeorological forecasts for the next 15 to 60 days in order to optimize the operations and the revenues from their watersheds, dams, intakes, turbines and pumps. The new Swiss Competence Centers in Energy Research (SCCER) aim at boosting research related to energy issues in Switzerland. The objective of HEPS4POWER is to demonstrate that operational extended-range hydrometeorological forecasts have the potential to become very valuable tools for fine-tuning the production of energy from hydropower systems. The project team covers a specific system-oriented value chain, starting from the collection and forecast of meteorological data (MeteoSwiss), leading to the operational application of state-of-the-art hydrological models (WSL) and terminating with the experience in data presentation and power production forecasts for end-users (e-dric.ch). The first task of HEPS4POWER will be the downscaling and post-processing of ensemble extended-range meteorological forecasts (EPS). The goal is to provide well-tailored probabilistic forecasts that are statistically reliable and localized at catchment or even station level. The hydrology-related task will consist of feeding the post-processed meteorological forecasts into a HEPS using a multi-model approach, by implementing models of different complexity. 
Post-processing techniques also need to be tested for the hydrological ensemble predictions, in order to improve the quality of the forecasts against observed discharge. Analysis should be specifically oriented to the maximisation of hydroelectricity production. Thus, verification metrics should include economic measures such as cost-loss approaches. The final step will include the transfer of the HEPS system to several hydropower systems, the connection with energy market prices, and the development of probabilistic multi-reservoir production and management optimization guidelines. The baseline model chain yielding three-day forecasts, established for a hydropower system in southern Switzerland, will be presented alongside the work plan to achieve seasonal ensemble predictions.
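
The cost-loss verification idea mentioned above compares the expense of acting on forecasts with that of simpler strategies; a minimal sketch with hypothetical costs and thresholds, not the project's metric code:

```python
def expected_expense(probs, outcomes, cost, loss, threshold):
    """Mean expense per case when protective action (at `cost`) is taken
    whenever the forecast probability exceeds `threshold`, and an
    unprotected event incurs `loss`."""
    total = 0.0
    for p, occurred in zip(probs, outcomes):
        if p > threshold:
            total += cost       # protection cost, whether or not the event occurs
        elif occurred:
            total += loss       # unprotected event incurs the full loss
    return total / len(probs)
```

For a rational user the acting threshold is the cost-loss ratio cost/loss; comparing the resulting expense with that of always or never acting yields the relative economic value of the forecasts.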

  13. Role of the Internet in Anticipating and Mitigating Earthquake Catastrophes, and the Emergence of Personal Risk Management (Invited)

    NASA Astrophysics Data System (ADS)

    Rundle, J. B.; Holliday, J. R.; Donnellan, A.; Graves, W.; Tiampo, K. F.; Klein, W.

    2009-12-01

    Risks from natural and financial catastrophes are currently managed by a combination of large public and private institutions. Public institutions usually comprise government agencies that conduct studies, formulate policies and guidelines, enforce regulations, and make “official” forecasts. Private institutions include insurance and reinsurance companies, and financial service companies that underwrite catastrophe (“cat”) bonds and make private forecasts. Although decisions about allocating resources and developing solutions are made by large institutions, the costs of dealing with catastrophes generally fall on businesses and the general public. Information on potential risks is generally available to the public for some hazards but not others. For example, in the case of weather, private forecast services are provided by www.weather.com and www.wunderground.com. For earthquakes in California (only), the official forecast is the WGCEP-USGS forecast, but it is provided in a format that is difficult for the public to use. Other privately made forecasts are currently available, for example by the JPL QuakeSim and Russian groups, but these efforts are limited. As more of the world’s population moves into major seismic zones, new strategies are needed to allow individuals to manage their personal risk from large and damaging earthquakes. Examples include individual mitigation measures such as retrofitting, and microinsurance in both developing and developed countries, as well as other financial strategies. We argue that the “long tail” of the internet offers an ideal, and greatly underutilized, mechanism to reach out to consumers and to provide them with the information and tools they need to confront and manage seismic hazard and risk on an individual, personalized basis. Information of this type includes not only global hazard forecasts, which are now possible, but also global risk estimation. 
Additionally, social networking tools are available that will allow self-organizing, disaster-resilient communities to arise as emergent structures from the underlying nonlinear social dynamics. In this talk, we argue that the current style of risk management is not making adequate use of modern internet technology, and that significantly more can be done. We suggest several avenues to proceed, in particular making use of the internet for earthquake forecast and information delivery, as well as tracking forecast validation and verification on a real-time basis. We also show examples of forecasts delivered over the internet, and describe how these are made.

  14. The new Met Office strategy for seasonal forecasts

    NASA Astrophysics Data System (ADS)

    Hewson, T. D.

    2012-04-01

    In October 2011 the Met Office began issuing a new-format UK seasonal forecast, called "The 3-month Outlook". Government interest in a UK-relevant product had been heightened by infrastructure issues arising during the severe cold of previous winters. At the same time there was evidence that the Met Office's "GLOSEA4" long-range forecasting system exhibited some hindcast skill for the UK that was comparable to its hindcast skill for the larger (and therefore less useful) 'northern Europe' region. Also, the NAO and AO signals prevailing in the previous two winters had been highlighted by the GLOSEA4 model well in advance. This presentation will initially give a brief overview of GLOSEA4, describing key features such as evolving sea ice, a well-resolved stratosphere, and the perturbation strategy. Skill measures will be shown, along with forecasts for the last 3 winters. The new-format 3-month outlook will then be described and presented. Previously, our seasonal forecasts had been based on a tercile approach. The new-format outlook aims to substantially improve upon this by illustrating graphically, and with text, the full range of possible outcomes, and by placing those outcomes in the context of climatology. In one key component, the forecast pdfs (probability density functions) are displayed alongside climatological pdfs. To generate the forecast pdf we take the bias-corrected GLOSEA4 output (42 members), and then incorporate, via an expert team, all other relevant information. Firstly, model forecasts from other centres are examined. Then external 'forcing factors', such as solar activity and the state of the land-ocean-ice system, are referenced, assessing how well the models represent their influence, and bringing in statistical relationships where appropriate. The expert team thereby decides upon any changes to the GLOSEA4 data, employing an interactive tool to shift, expand or contract the forecast pdfs accordingly. 
The full modification process will be illustrated during the presentation. Another key component of the 3-month outlook is the focus it places on potential hazards and impacts. To date specific references have been made to snow and ice disruption, to replenishment expectation for regions suffering water supply shortages, and to windstorm frequency. This aspect will be discussed, showing also some subjective verification. In future we hope to extend the 3-month outlook framework to other parts of the world, notably Africa, a region where the Met Office, with DfID support, is working collaboratively to improve real-time long range forecasts. Brief reference will also be made to such activities.

  15. Automated time series forecasting for biosurveillance.

    PubMed

    Burkom, Howard S; Murphy, Sean Patrick; Shmueli, Galit

    2007-09-30

    For robust detection performance, traditional control chart monitoring for biosurveillance is based on input data free of trends, day-of-week effects, and other systematic behaviour. Time series forecasting methods may be used to remove this behaviour by subtracting forecasts from observations to form residuals for algorithmic input. We describe three forecast methods and compare their predictive accuracy on each of 16 authentic syndromic data streams. The methods are (1) a non-adaptive regression model using a long historical baseline, (2) an adaptive regression model with a shorter, sliding baseline, and (3) the Holt-Winters method for generalized exponential smoothing. Criteria for comparing the forecasts were the root-mean-square error, the median absolute per cent error (MedAPE), and the median absolute deviation. The median-based criteria showed best overall performance for the Holt-Winters method. The MedAPE measures over the 16 test series averaged 16.5, 11.6, and 9.7 for the non-adaptive regression, adaptive regression, and Holt-Winters methods, respectively. The non-adaptive regression forecasts were degraded by changes in the data behaviour in the fixed baseline period used to compute model coefficients. The mean-based criterion was less conclusive because of the effects of poor forecasts on a small number of calendar holidays. The Holt-Winters method was also most effective at removing serial autocorrelation, with most 1-day-lag autocorrelation coefficients below 0.15. The forecast methods were compared without tuning them to the behaviour of individual series. We achieved improved predictions with such tuning of the Holt-Winters method, but practical use of such improvements for routine surveillance will require reliable data classification methods.
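
The residual-forming idea above, subtracting one-step-ahead forecasts from observations before algorithmic monitoring, can be sketched with a minimal additive Holt-Winters implementation; parameter values here are illustrative, and the paper's generalized exponential smoothing may differ in detail:

```python
def holt_winters_residuals(series, period=7, alpha=0.4, beta=0.0, gamma=0.15):
    """One-step-ahead residuals from additive Holt-Winters smoothing with a
    weekly cycle. The first `period` points initialize level and seasonal terms."""
    level = sum(series[:period]) / period
    trend = 0.0
    season = [x - level for x in series[:period]]
    residuals = []
    for t in range(period, len(series)):
        forecast = level + trend + season[t % period]
        residuals.append(series[t] - forecast)
        # Standard additive updates of level, trend, and the seasonal term
        last_level = level
        level = alpha * (series[t] - season[t % period]) + (1 - alpha) * (level + trend)
        trend = beta * (level - last_level) + (1 - beta) * trend
        season[t % period] = gamma * (series[t] - level) + (1 - gamma) * season[t % period]
    return residuals
```

The residuals, largely free of trend and day-of-week structure, are then suitable input for control-chart detection algorithms.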

  16. A Machine Learning Framework to Forecast Wave Conditions

    NASA Astrophysics Data System (ADS)

    Zhang, Y.; James, S. C.; O'Donncha, F.

    2017-12-01

    Recently, significant effort has been undertaken to quantify and extract wave energy because it is renewable, environmentally friendly, abundant, and often close to population centers. However, a major challenge is the ability to accurately and quickly predict energy production, especially across a 48-hour cycle. Accurate forecasting of wave conditions is a challenging undertaking that typically involves solving the spectral action-balance equation on a discretized grid with high spatial resolution. The nature of the computations typically demands high-performance computing infrastructure. Using a case-study site at Monterey Bay, California, a machine learning framework was trained to replicate numerically simulated wave conditions at a fraction of the typical computational cost. Specifically, the physics-based Simulating WAves Nearshore (SWAN) model, driven by measured wave conditions, nowcast ocean currents, and wind data, was used to generate training data for machine learning algorithms. The model was run between April 1, 2013 and May 31, 2017, generating forecasts at three-hour intervals and yielding 11,078 distinct model outputs. SWAN-generated fields of 3,104 wave heights and a characteristic period could be replicated through simple matrix multiplications using the mapping matrices from machine learning algorithms. In fact, wave-height RMSEs from the machine learning algorithms (9 cm) were less than those for the SWAN model-verification exercise, where those simulations were compared to buoy wave data within the model domain (>40 cm). The validated machine learning approach, which acts as an accurate surrogate for the SWAN model, can now be used to perform real-time forecasts of wave conditions for the next 48 hours using available forecasted boundary wave conditions, ocean currents, and winds. 
This solution has obvious applications to wave-energy generation as accurate wave conditions can be forecasted with over a three-order-of-magnitude reduction in computational expense. The low computational cost (and by association low computer-power requirement) means that the machine learning algorithms could be installed on a wave-energy converter as a form of "edge computing" where a device could forecast its own 48-hour energy production.
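
The surrogate idea above ("simple matrix multiplications" applied to mapping matrices) can be sketched with ridge regression standing in for the unspecified machine learning algorithm; every dimension and value below is synthetic, not the Monterey Bay data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins: rows are forecast snapshots, columns of X are
# boundary wave / wind / current forcings, columns of Y are wave heights
# on a 3,104-point grid as in the SWAN setup (values invented).
n_samples, n_forcings, n_grid = 500, 12, 3104
X = rng.normal(size=(n_samples, n_forcings))
true_map = rng.normal(size=(n_forcings, n_grid))
Y = X @ true_map + 0.01 * rng.normal(size=(n_samples, n_grid))

# Ridge regression: learn one mapping matrix W such that Y ~= X @ W.
lam = 1e-3
W = np.linalg.solve(X.T @ X + lam * np.eye(n_forcings), X.T @ Y)

# The surrogate "forecast" is then a single matrix multiplication.
Y_hat = X @ W
rmse = np.sqrt(np.mean((Y_hat - Y) ** 2))
print(f"surrogate RMSE: {rmse:.3f}")
```

Once a mapping matrix is trained against model output, each new forecast costs one matrix multiply, which is where the large reduction in computational expense comes from.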

  17. Severe rainfall prediction systems for civil protection purposes

    NASA Astrophysics Data System (ADS)

    Comellas, A.; Llasat, M. C.; Molini, L.; Parodi, A.; Siccardi, F.

    2010-09-01

    One of the most common natural hazards affecting Mediterranean regions is the occurrence of severe weather structures able to produce heavy rainfall. Floods have killed about 1000 people across Europe in the last 10 years. With the aim of mitigating this kind of risk, quantitative precipitation forecasts (QPF) and rain probability forecasts are two tools now available to national meteorological services and institutions responsible for weather forecasting to predict rainfall, using either a deterministic or a probabilistic approach. This study provides insight into the different approaches used by the Italian (DPC) and Catalonian (SMC) Civil Protection agencies and the results they have achieved with their respective early-warning systems. For the former, the analysis considers the period 2006-2009, in which the predictive ability of the forecasting system, based on the numerical weather prediction model COSMO-I7, was compared with ground-based observations (more than 2000 rain gauge stations; Molini et al., 2009). The Italian system focuses mainly on regional-scale warnings, providing forecasts for periods never shorter than 18 hours and most often with a 36-hour maximum duration. The information contained in severe weather bulletins is not quantitative and usually refers to specific meteorological phenomena (thunderstorms, wind gales, etc.). Updates and refinements are usually issued every 24 hours. SMC operates within the Catalonian boundaries and uses a warning system that mixes quantitative and probabilistic information. For each of the administrative regions ("comarcas") into which Catalonia is divided, forecasters give an approximate value of the average predicted rainfall and the probability of exceeding that threshold. Usually warnings are re-issued every 6 hours and their duration depends on the predicted time extent of the storm. 
In order to provide a comprehensive QPF verification, the rainfall predicted by Mesoscale Model 5 (MM5), the SMC operational forecast model, is compared with the local rain gauge network for the year 2008 (Comellas et al., 2010). This study presents the benefits and drawbacks of both the Italian and Catalonian systems. Moreover, particular attention is paid to the link between each system's predictive ability and the predicted severe weather type as a function of its space-time development.
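
The kind of dichotomous QPF verification both services rely on can be illustrated with a standard contingency table; the threshold and rainfall values below are invented:

```python
import numpy as np

# Yes/no verification of rainfall exceeding a warning threshold via the
# standard contingency table (hits, misses, false alarms).
threshold = 20.0  # mm / 24 h, illustrative
forecast = np.array([25., 5., 30., 0., 18., 40., 2., 22.])
observed = np.array([30., 2., 10., 0., 25., 35., 1., 28.])

f = forecast >= threshold
o = observed >= threshold

hits = int(np.sum(f & o))
misses = int(np.sum(~f & o))
false_alarms = int(np.sum(f & ~o))

pod = hits / (hits + misses)                 # probability of detection
far = false_alarms / (hits + false_alarms)   # false alarm ratio
csi = hits / (hits + misses + false_alarms)  # critical success index
print(pod, far, csi)
```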

  18. Impact assessment of GPS radio occultation data on Antarctic analysis and forecast using WRF 3DVAR

    NASA Astrophysics Data System (ADS)

    Zhang, H.; Wee, T. K.; Liu, Z.; Lin, H. C.; Kuo, Y. H.

    2016-12-01

    This study assesses the impact of Global Positioning System (GPS) Radio Occultation (RO) refractivity data on the analysis and forecast in the Antarctic region. The RO data are continuously assimilated into the Weather Research and Forecasting (WRF) Model using the WRF 3DVAR along with other observations that were operationally available to the National Center for Environmental Prediction (NCEP) during a one-month period (October 2010), including the Advanced Microwave Sounding Unit (AMSU) radiance data. For the month-long data assimilation experiments, three RO datasets are used: 1) the actual operational dataset, which was produced by the near real-time RO processing at that time and provided to weather forecasting centers; 2) a post-processed dataset with posterior clock and orbit estimates, and with improved RO processing algorithms; and 3) another post-processed dataset, produced with a variational RO processing. The data impact is evaluated by comparing the forecasts and analyses to independent driftsonde observations made available through the Concordiasi field campaign, in addition to other traditional means of verification. A denial of RO data (while keeping all other observations) resulted in a marked degradation of analysis and forecast quality, indicating the high value of RO data over the Antarctic area. The post-processed RO data showed a significantly larger positive impact compared to the near real-time data, due to extra RO data from the TerraSAR-X satellite (unavailable at the time of the near real-time processing) as well as the improved data quality resulting from the post-processing. This strongly suggests that the future polar constellation of COSMIC-2 is vital. The variational RO processing further reduced the systematic and random errors in both analyses and forecasts, for instance leading to a smaller background departure of AMSU radiance. 
This indicates that the variational RO processing provides an improved reference for the bias correction of satellite radiance, making the bias correction more effective. This study finds that advanced RO data processing algorithms may further enhance the high quality of RO data in high Southern latitudes.

  19. Real-time eruption forecasting using the material Failure Forecast Method with a Bayesian approach

    NASA Astrophysics Data System (ADS)

    Boué, A.; Lesage, P.; Cortés, G.; Valette, B.; Reyes-Dávila, G.

    2015-04-01

    Many attempts at deterministic forecasting of eruptions and landslides have been made using the material Failure Forecast Method (FFM). This method consists of fitting an empirical power law to precursory patterns of seismicity or deformation. Until now, most studies have presented hindsight forecasts based on complete time series of precursors and have not evaluated the ability of the method to carry out real-time forecasting with partial precursory sequences. In this study, we present a rigorous approach to the FFM designed for real-time applications on volcano-seismic precursors. We use a Bayesian approach based on the FFM theory and an automatic classification of seismic events. The probability distributions of the data deduced from the performance of this classification are used as input. As output, it provides the probability distribution of the forecast time at each observation time before the eruption. The spread of the a posteriori probability density function of the prediction time and its stability with respect to the observation time are used as criteria to evaluate the reliability of the forecast. We test the method on precursory accelerations of long-period seismicity prior to vulcanian explosions at Volcán de Colima (Mexico). For explosions preceded by a single phase of seismic acceleration, we obtain accurate and reliable forecasts using approximately 80% of the whole precursory sequence. It is, however, more difficult to apply the method to multiple acceleration patterns.
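
The deterministic core of the FFM can be sketched as follows: for the commonly assumed power-law exponent of 2, the inverse precursor rate decays linearly, and extrapolating the fitted line to zero estimates the failure (eruption) time. The data below are synthetic, not the Colima observations, and the paper's Bayesian treatment of classification uncertainty is not reproduced:

```python
import numpy as np

# Synthetic accelerating precursor: rate(t) = C / (t_f - t), the FFM
# solution for exponent alpha = 2, so 1/rate falls linearly to zero at t_f.
t_f, C = 100.0, 50.0
t = np.linspace(0.0, 80.0, 60)   # observation window before failure
rate = C / (t_f - t)
inv_rate = 1.0 / rate

# Least-squares line through the inverse rate; its zero crossing is the
# forecast failure time.
slope, intercept = np.polyfit(t, inv_rate, 1)
t_f_est = -intercept / slope
print(round(t_f_est, 2))
```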

  20. Numerical simulation and analysis of impact of non-orographic gravity waves drag of middle atmosphere in framework of a general circulation model

    NASA Astrophysics Data System (ADS)

    Zhao, J.; Wang, S.

    2017-12-01

    Gravity wave drag (GWD) is among the drivers of meridional overturning in the middle atmosphere, also known as the Brewer-Dobson Circulation, and of the quasi-biennial oscillation (QBO). The small spatial scales and the complications due to wave breaking require the effects of gravity waves to be parameterized. GWD parameterizations are usually divided into two parts, orographic and non-orographic. The basic dynamical and physical processes of the middle atmosphere and the mechanism of the interactions between the troposphere and the middle atmosphere were studied in the framework of a general circulation model. The tropospheric model was expanded to a global model including the middle atmosphere, capable of describing the basic processes in the middle atmosphere and the troposphere-middle atmosphere interactions. Currently, it is too costly to include full non-hydrostatic and rotational wave dynamics in an operational parameterization. Hydrostatic non-rotational wave dynamics, however, allow an efficient implementation that is suitably fast for operational use. The simplified parameterization of non-orographic GWD follows the WM96 scheme, in which a framework is developed using conservative propagation of gravity waves, critical level filtering, and non-linear dissipation. In order to simulate and analyze the influence of non-orographic GWD on the stratospheric wind and temperature fields, experiments based on the Stratospheric Sudden Warming (SSW) event that occurred in January 2013 were carried out, and the results of objective weather forecast verification over the two-month period were compared in detail. The verification of the monthly mean forecast anomaly correlation (ACC) and root mean square (RMS) errors shows a consistently positive impact of non-orographic GWD on forecast skill scores at three to eight days, in both the stratosphere and the troposphere, and a visible positive impact on prediction of the stratospheric wind and temperature fields. 
Numerical simulation during the SSW event demonstrates that the influence on the temperature of the middle stratosphere is mainly positive, and that there were larger departures in both the wind and temperature fields when the non-orographic GWD was included during the warming process.

  1. HDL to verification logic translator

    NASA Technical Reports Server (NTRS)

    Gambles, J. W.; Windley, P. J.

    1992-01-01

    The increasingly higher number of transistors possible in VLSI circuits compounds the difficulty of ensuring correct designs. As the number of test cases required to exhaustively simulate a circuit design explodes, a better method is required to confirm the absence of design faults. Formal verification methods provide a way to prove, using logic, that a circuit structure correctly implements its specification. Before verification is accepted by VLSI design engineers, the stand-alone verification tools in use in the research community must be integrated with the CAD tools used by the designers. One problem facing the acceptance of formal verification into circuit design methodology is that the structural circuit descriptions used by the designers are not appropriate for verification work, and those required for verification lack some of the features needed for design. We offer a solution to this dilemma: an automatic translation from the designers' HDL models into definitions for the higher-order logic (HOL) verification system. The translated definitions become the low-level basis of circuit verification, which in turn increases the designer's confidence in the correctness of higher-level behavioral models.

  2. Verification of VLSI designs

    NASA Technical Reports Server (NTRS)

    Windley, P. J.

    1991-01-01

    In this paper we explore the specification and verification of VLSI designs. The paper focuses on abstract specification and verification of functionality using mathematical logic, as opposed to low-level boolean equivalence verification such as that done using BDDs and model checking. Specification and verification, sometimes called formal methods, is one tool for increasing computer dependability in the face of an exponentially increasing testing effort.

  3. Forecasting the short-term passenger flow on high-speed railway with neural networks.

    PubMed

    Xie, Mei-Quan; Li, Xia-Miao; Zhou, Wen-Liang; Fu, Yan-Bing

    2014-01-01

    Short-term passenger flow forecasting is an important component of transportation systems. The forecasting result can be applied to support transportation system operation and management, such as operation planning and revenue management. In this paper, a divide-and-conquer method based on neural networks and origin-destination (OD) matrix estimation is developed to forecast the short-term passenger flow in a high-speed railway system. There are three steps in the forecasting method. Firstly, the numbers of passengers who arrive at or depart from each station are obtained from historical passenger flow data, which are OD matrices in this paper. Secondly, short-term forecasting of the numbers of passengers who arrive at or depart from each station is carried out using neural networks. Finally, the short-term OD matrices are obtained with an OD matrix estimation method. The experimental results indicate that the proposed divide-and-conquer method performs well in forecasting the short-term passenger flow on high-speed railway.
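
The third step (recovering an OD matrix from forecast station totals) can be sketched with iterative proportional fitting, a common choice for this kind of estimation, though the paper's exact estimator may differ; all figures below are invented:

```python
import numpy as np

# A historical OD matrix (the seed) is alternately rescaled until its row
# sums match forecast departures and its column sums match forecast
# arrivals. Zero diagonal = no station-to-itself trips.
seed = np.array([[0., 120., 80.],
                 [100., 0., 60.],
                 [90., 70., 0.]])
departures = np.array([210., 170., 150.])  # forecast row totals
arrivals = np.array([200., 180., 150.])    # forecast column totals

od = seed.copy()
for _ in range(100):
    od *= (departures / od.sum(axis=1))[:, None]  # match row sums
    od *= arrivals / od.sum(axis=0)               # match column sums

print(np.round(od, 1))
```

Note that the forecast departures and arrivals must have equal grand totals (530 here) for the margins to be mutually consistent.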

  4. Validation and Verification of Operational Land Analysis Activities at the Air Force Weather Agency

    NASA Technical Reports Server (NTRS)

    Shaw, Michael; Kumar, Sujay V.; Peters-Lidard, Christa D.; Cetola, Jeffrey

    2011-01-01

    The NASA developed Land Information System (LIS) is the Air Force Weather Agency's (AFWA) operational Land Data Assimilation System (LDAS), combining real-time precipitation observations and analyses, global forecast model data, vegetation, terrain, and soil parameters with the community Noah land surface model, along with other hydrology module options, to generate profile analyses of global soil moisture, soil temperature, and other important land surface characteristics. Key characteristics: (1) a range of satellite data products and surface observations used to generate the land analysis products; (2) global 1/4-degree spatial resolution; (3) model analyses generated at 3-hour intervals.

  5. How well can we test probabilistic seismic hazard maps?

    NASA Astrophysics Data System (ADS)

    Vanneste, Kris; Stein, Seth; Camelbeeck, Thierry; Vleminckx, Bart

    2017-04-01

    Recent large earthquakes that gave rise to shaking much stronger than shown in probabilistic seismic hazard (PSH) maps have stimulated discussion about how well these maps forecast future shaking. These discussions have brought home the fact that although the maps are designed to achieve certain goals, we know little about how well they actually perform. As for any other forecast, this question involves verification and validation. Verification involves assessing how well the algorithm used to produce hazard maps implements the conceptual PSH model ("have we built the model right?"). Validation asks how well the model forecasts the shaking that actually occurs ("have we built the right model?"). We explore the verification issue by simulating shaking histories for an area with assumed uniform distribution of earthquakes, Gutenberg-Richter magnitude-frequency relation, Poisson temporal occurrence model, and ground-motion prediction equation (GMPE). We compare the maximum simulated shaking at many sites over time with that predicted by a hazard map generated for the same set of parameters. The Poisson model predicts that the fraction of sites at which shaking will exceed that of the hazard map is p = 1 - exp(-t/T), where t is the duration of observations and T is the map's return period. Exceedance is typically associated with infrequent large earthquakes, as observed in real cases. The ensemble of simulated earthquake histories yields distributions of fractional exceedance with mean equal to the predicted value. Hence, the PSH algorithm appears to be internally consistent and can be regarded as verified for this set of simulations. However, simulated fractional exceedances show a large scatter about the mean value that decreases with increasing t/T, increasing observation time and increasing Gutenberg-Richter a-value (combining intrinsic activity rate and surface area), but is independent of GMPE uncertainty. 
This scatter is due to the variability of earthquake recurrence, and so decreases as the largest earthquakes occur in more simulations. Our results are important for evaluating the performance of a hazard map based on misfits in fractional exceedance, and for assessing whether such misfit arises by chance or reflects a bias in the map. More specifically, we determined for a broad range of Gutenberg-Richter a-values theoretical confidence intervals on allowed misfits in fractional exceedance and on the percentage of hazard-map bias that can thus be detected by comparison with observed shaking histories. Given that in the real world we only have one shaking history for an area, these results indicate that even if a hazard map does not fit the observations, it is very difficult to assess its veracity, especially for low-to-moderate-seismicity regions. Because our model is a simplified version of reality, any additional uncertainty or complexity will tend to widen these confidence intervals.
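
The Poisson prediction p = 1 - exp(-t/T) quoted above is straightforward to check by simulation; the setup below is a toy version of the ensemble experiment:

```python
import numpy as np

# At each site, exceedances of the hazard-map shaking level arrive as a
# Poisson process with rate 1/T. The fraction of sites with at least one
# exceedance during time t should approach p = 1 - exp(-t/T).
rng = np.random.default_rng(42)
T, t = 475.0, 50.0       # map return period and observation window (years)
n_sites = 200_000

exceed = rng.poisson(t / T, size=n_sites) > 0
frac = exceed.mean()
p = 1.0 - np.exp(-t / T)
print(round(frac, 3), round(p, 3))
```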

  6. Sensitivity of monthly streamflow forecasts to the quality of rainfall forcing: When do dynamical climate forecasts outperform the Ensemble Streamflow Prediction (ESP) method?

    NASA Astrophysics Data System (ADS)

    Tanguy, M.; Prudhomme, C.; Harrigan, S.; Smith, K. A.; Parry, S.

    2017-12-01

    Forecasting hydrological extremes is challenging, especially at lead times over 1 month for catchments with limited hydrological memory and variable climates. One simple way to derive monthly or seasonal hydrological forecasts is to use historical climate data to drive hydrological models using the Ensemble Streamflow Prediction (ESP) method. This gives a range of possible future streamflow given known initial hydrologic conditions alone. The degree of skill of ESP depends highly on the forecast initialisation month and catchment type. Using dynamic rainfall forecasts as driving data instead of historical data could potentially improve streamflow predictions. A lot of effort is being invested within the meteorological community to improve these forecasts. However, while recent progress shows promise (e.g. NAO in winter), the skill of these forecasts at monthly to seasonal timescales is generally still limited, and the extent to which they might lead to improved hydrological forecasts is an area of active research. Additionally, these meteorological forecasts are currently being produced at 1 month or seasonal time-steps in the UK, whereas hydrological models require forcings at daily or sub-daily time-steps. Keeping in mind these limitations of available rainfall forecasts, the objectives of this study are to find out (i) how accurate monthly dynamical rainfall forecasts need to be to outperform ESP, and (ii) how the method used to disaggregate monthly rainfall forecasts into daily rainfall time series affects results. For the first objective, synthetic rainfall time series were created by increasingly degrading observed data (a proxy for a 'perfect forecast') from 0% to ±50% error. For the second objective, three different methods were used to disaggregate monthly rainfall data into daily time series. 
These were used to force a simple lumped hydrological model (GR4J) to generate streamflow predictions at a one-month lead time for over 300 catchments representative of the range of UK's hydro-climatic conditions. These forecasts were then benchmarked against the traditional ESP method. It is hoped that the results of this work will help the meteorological community to identify where to focus their efforts in order to increase the usefulness of their forecasts within hydrological forecasting systems.
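
Objective (i), degrading an observed series to emulate imperfect forecasts, can be sketched as below; a toy linear-reservoir model stands in for GR4J, and all data are synthetic:

```python
import numpy as np

rng = np.random.default_rng(1)
rain = rng.gamma(shape=0.5, scale=8.0, size=365)  # synthetic daily rainfall

def runoff(p, k=0.2):
    """Toy linear reservoir: a fraction k of storage drains each day."""
    s, q = 0.0, []
    for x in p:
        s += x
        q.append(k * s)
        s *= 1.0 - k
    return np.array(q)

q_obs = runoff(rain)

# Degrade the 'perfect forecast' by growing multiplicative error and
# watch the streamflow skill (correlation here) fall.
rs = []
for err in (0.0, 0.25, 0.5):
    degraded = rain * (1.0 + rng.uniform(-err, err, size=rain.size))
    rs.append(np.corrcoef(runoff(degraded), q_obs)[0, 1])
print([round(r, 3) for r in rs])
```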

  7. Forecasting Consumer Adoption of Information Technology and Services--Lessons from Home Video Forecasting.

    ERIC Educational Resources Information Center

    Klopfenstein, Bruce C.

    1989-01-01

    Describes research that examined the strengths and weaknesses of technological forecasting methods by analyzing forecasting studies made for home video players. The discussion covers assessments and explications of correct and incorrect forecasting assumptions, and their implications for forecasting the adoption of home information technologies…

  8. Real-time emergency forecasting technique for situation management systems

    NASA Astrophysics Data System (ADS)

    Kopytov, V. V.; Kharechkin, P. V.; Naumenko, V. V.; Tretyak, R. S.; Tebueva, F. B.

    2018-05-01

    The article describes a real-time emergency forecasting technique that increases the accuracy and reliability of the forecasting results of any emergency computational model applied for decision making in situation management systems. The computational models are improved by the improved Brown's method, which applies fractal dimension to forecast the short time series data received from sensors and control systems. The reliability of the emergency forecasting results is ensured by filtering invalid sensor data using correlation analysis.
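
Brown's double exponential smoothing, the base method that the article improves with a fractal-dimension correction (the correction itself is not reproduced here), can be sketched as:

```python
# Brown's method keeps two exponentially smoothed levels; their
# combination yields a level and trend for an h-step-ahead forecast.
def brown_forecast(series, alpha=0.4, h=1):
    s1 = s2 = series[0]
    for x in series[1:]:
        s1 = alpha * x + (1 - alpha) * s1   # first smoothing
        s2 = alpha * s1 + (1 - alpha) * s2  # second smoothing
    level = 2 * s1 - s2
    trend = alpha / (1 - alpha) * (s1 - s2)
    return level + h * trend

data = [10, 12, 13, 15, 16, 18, 19, 21]   # short sensor series (invented)
print(round(brown_forecast(data), 2))
```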

  9. A Delphi forecast of technology in education

    NASA Technical Reports Server (NTRS)

    Robinson, B. E.

    1973-01-01

    The results are reported of a Delphi forecast of the utilization and social impacts of large-scale educational telecommunications technology. The focus is on both forecasting methodology and educational technology. The various methods of forecasting used by futurists are analyzed from the perspective of the most appropriate method for a prognosticator of educational technology, and review and critical analysis are presented of previous forecasts and studies. Graphic responses, summarized comments, and a scenario of education in 1990 are presented.

  10. Applications of the gambling score in evaluating earthquake predictions and forecasts

    NASA Astrophysics Data System (ADS)

    Zhuang, Jiancang; Zechar, Jeremy D.; Jiang, Changsheng; Console, Rodolfo; Murru, Maura; Falcone, Giuseppe

    2010-05-01

    This study presents a new method, namely the gambling score, for scoring the performance of earthquake forecasts or predictions. Unlike most other scoring procedures, which require a regular forecasting scheme and treat each earthquake equally regardless of its magnitude, this new scoring method compensates for the risk that the forecaster has taken. Starting with a certain number of reputation points, once a forecaster makes a prediction or forecast, he is assumed to have bet some points of his reputation. The reference model, which plays the role of the house, determines how many reputation points the forecaster can gain if he succeeds, according to a fair rule, and also takes away the reputation points bet by the forecaster if he loses. This method is also extended to the continuous case of point-process models, where the reputation points bet by the forecaster become a continuous mass on the space-time-magnitude range of interest. For discrete predictions, we apply this method to evaluate the performance of Shebalin's predictions made using the Reverse Tracing of Precursors (RTP) algorithm and of the predictions from the Annual Consultation Meeting on Earthquake Tendency held by the China Earthquake Administration. For the continuous case, we use it to compare the probability forecasts of seismicity in the Abruzzo region before and after the L'Aquila earthquake based on the ETAS model and the PPE model.
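
One natural reading of the fair rule above: if the reference model assigns probability p0 to the predicted event, a bet of r points returns r(1 - p0)/p0 on success and -r on failure, so the expected gain is zero when the reference model is correct. A minimal sketch with invented outcomes (this illustrates the betting mechanism, not the paper's full scoring formulas):

```python
# Fair payout against a reference model: rare events (small p0) pay more
# when predicted successfully, compensating for the risk taken.
def gambling_gain(bet, p0, occurred):
    return bet * (1.0 - p0) / p0 if occurred else -bet

predictions = [True, False, True, True]  # did each predicted event occur?
score = 100.0                            # starting reputation points
for occurred in predictions:
    score += gambling_gain(1.0, 0.1, occurred)  # bet 1 point, p0 = 0.1
print(round(score, 6))
```

Each success against the p0 = 0.1 reference pays 9 points; each failure costs the 1-point stake.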

  11. Gambling scores for earthquake predictions and forecasts

    NASA Astrophysics Data System (ADS)

    Zhuang, Jiancang

    2010-04-01

    This paper presents a new method, namely the gambling score, for scoring the performance of earthquake forecasts or predictions. Unlike most other scoring procedures, which require a regular forecasting scheme and treat each earthquake equally regardless of its magnitude, this new scoring method compensates for the risk that the forecaster has taken. Starting with a certain number of reputation points, once a forecaster makes a prediction or forecast, he is assumed to have bet some points of his reputation. The reference model, which plays the role of the house, determines how many reputation points the forecaster can gain if he succeeds, according to a fair rule, and also takes away the reputation points bet by the forecaster if he loses. This method is also extended to the continuous case of point-process models, where the reputation points bet by the forecaster become a continuous mass on the space-time-magnitude range of interest. We also calculate the upper bound of the gambling score when the true model is a renewal process, the stress release model, or the ETAS model and when the reference model is the Poisson model.

  12. Verification of a non-hydrostatic dynamical core using horizontally spectral element vertically finite difference method: 2-D aspects

    NASA Astrophysics Data System (ADS)

    Choi, S.-J.; Giraldo, F. X.; Kim, J.; Shin, S.

    2014-06-01

    The non-hydrostatic (NH) compressible Euler equations of dry atmosphere are solved in a simplified two dimensional (2-D) slice framework employing a spectral element method (SEM) for the horizontal discretization and a finite difference method (FDM) for the vertical discretization. The SEM uses high-order nodal basis functions associated with Lagrange polynomials based on Gauss-Lobatto-Legendre (GLL) quadrature points. The FDM employs a third-order upwind biased scheme for the vertical flux terms and a centered finite difference scheme for the vertical derivative terms and quadrature. The Euler equations used here are in a flux form based on the hydrostatic pressure vertical coordinate, which are the same as those used in the Weather Research and Forecasting (WRF) model, but a hybrid sigma-pressure vertical coordinate is implemented in this model. We verified the model by conducting widely used standard benchmark tests: the inertia-gravity wave, rising thermal bubble, density current wave, and linear hydrostatic mountain wave. The results from those tests demonstrate that the horizontally spectral element vertically finite difference model is accurate and robust. By using the 2-D slice model, we effectively show that the combined spatial discretization method of the spectral element and finite difference method in the horizontal and vertical directions, respectively, offers a viable method for the development of a NH dynamical core.
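
The GLL quadrature points underlying the SEM's nodal basis are the endpoints ±1 together with the roots of the derivative of the degree-N Legendre polynomial; a short sketch using NumPy's Legendre-series utilities:

```python
import numpy as np
from numpy.polynomial import legendre

def gll_points(n):
    """Return the n+1 Gauss-Lobatto-Legendre nodes on [-1, 1] for
    polynomial degree n: the roots of P'_n plus the endpoints."""
    coeffs = np.zeros(n + 1)
    coeffs[-1] = 1.0                      # Legendre-series coeffs of P_n
    interior = legendre.legroots(legendre.legder(coeffs))
    return np.concatenate(([-1.0], interior, [1.0]))

print(gll_points(4))   # degree 4: endpoints, 0, and +/- sqrt(3/7)
```

Lagrange basis polynomials built on these nodes give the high-order nodal basis functions the abstract refers to.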

  13. Load Forecasting of Central Urban Area Power Grid Based on Saturated Load Density Index

    NASA Astrophysics Data System (ADS)

    Huping, Yang; Chengyi, Tang; Meng, Yu

    2018-03-01

    Coordination between urban power grid development and city development has become increasingly important. Electricity saturation load forecasting plays an important role in the planning and development of power grids. It is a new concept put forward in China in recent years in the field of grid planning. Urban saturation load forecasting differs from traditional load forecasting for specific years: its time span is often relatively large, and it involves a wide range of aspects. Taking a county in eastern Jiangxi as an example, this paper applies a variety of load forecasting methods to near-term load forecasting for the central urban area. It also uses the load density index method to make a long-term forecast of the saturation load of the central urban area through 2030. Further analysis shows the general spatial distribution of the urban saturation load.
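
In its simplest form, the load density index method sums area × saturated load density over land-use zones and applies a simultaneity (coincidence) factor; the zones and figures below are purely illustrative, not the Jiangxi data:

```python
# Saturation-load estimate by zone: load = simultaneity * sum(area * density).
zones = [
    # (name, area in km^2, saturated load density in MW/km^2) -- invented
    ("residential", 12.0, 8.0),
    ("commercial", 4.0, 25.0),
    ("industrial", 6.0, 15.0),
]
simultaneity = 0.8   # zone peaks do not all coincide

saturated_load = simultaneity * sum(area * dens for _, area, dens in zones)
print(f"{saturated_load:.1f} MW")
```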

  14. Wildland Fire Forecasting: Predicting Wildfire Behavior, Growth, and Feedbacks on Weather

    NASA Astrophysics Data System (ADS)

    Coen, J. L.

    2005-12-01

    Recent developments in wildland fire research models have represented more complex fire behavior, at the cost of increased computational requirements. When operational constraints are included, such as the need to produce forecasts faster than real time, the challenge becomes balancing how much complexity (with corresponding gains in realism) and accuracy can be achieved in producing the quantities of interest while meeting the specified operational constraints. Current field tools are calculator- or Palm-Pilot-based algorithms such as BEHAVE and BEHAVE Plus that produce timely estimates of instantaneous fire spread rates, flame length, and fire intensity at a point using readily estimated inputs of fuel model, terrain slope, and atmospheric wind speed at a point. At the cost of requiring a PC and slower calculation, FARSITE represents two-dimensional fire spread and adds capabilities including a parameterized representation of crown fire ignition. This work describes how a coupled atmosphere-fire model previously used as a research tool has been adapted for production of real-time forecasts of fire growth and its interactions with weather over a domain focusing on Colorado during summer 2004. The coupled atmosphere-wildland fire-environment (CAWFE) model is composed of a 3-dimensional atmospheric prediction model that has been two-way coupled with an empirical fire spread model. The models are connected in that atmospheric conditions (and fuel conditions influenced by the atmosphere) affect the rate and direction of fire propagation, which releases sensible and latent heat (i.e., thermal and water vapor fluxes) to the atmosphere that in turn alter the winds and atmospheric structure around the fire. 
Thus, it can represent temporally and spatially varying weather and the fire's feedbacks on the atmosphere, which are at the heart of sudden changes in fire behavior and of extreme fire behavior such as blow-ups, which are not predictable with current tools. Although this work shows that it is possible to perform more detailed simulations in real time, fire behavior forecasting remains a challenging problem. This is due to challenges in weather prediction, particularly at the fine spatial and temporal scales considered "nowcasting" (0-6 hrs); uncertainties in fire behavior even with known meteorological conditions; limitations in quantitative datasets on fuel properties such as fuel loading; and verification. This work describes efforts to advance these capabilities with input from remote sensing data on fuel characteristics, dynamic steering, and object-based verification with remotely sensed fire perimeters.

  15. Long- Range Forecasting Of The Onset Of Southwest Monsoon Winds And Waves Near The Horn Of Africa

    DTIC Science & Technology

    2017-12-01

    SUMMARY OF CLIMATE ANALYSIS AND LONG-RANGE FORECAST METHODOLOGY: Prior theses from Heidt (2006) and Lemke (2010) used methods similar to ours. [Table of contents: II. Data and Methods; Analysis and Forecast Methods; Predictand Selection.]

  16. Predicting Academic Library Circulations: A Forecasting Methods Competition.

    ERIC Educational Resources Information Center

    Brooks, Terrence A.; Forys, John W., Jr.

    Based on sample data representing five years of monthly circulation totals from 50 academic libraries in Illinois, Iowa, Michigan, Minnesota, Missouri, and Ohio, a study was conducted to determine the most efficient smoothing forecasting methods for academic libraries. Smoothing forecasting methods were chosen because they have been characterized…

  17. Formal Methods for Life-Critical Software

    NASA Technical Reports Server (NTRS)

    Butler, Ricky W.; Johnson, Sally C.

    1993-01-01

    The use of computer software in life-critical applications, such as for civil air transports, demands the use of rigorous formal mathematical verification procedures. This paper demonstrates how to apply formal methods to the development and verification of software by leading the reader step-by-step through requirements analysis, design, implementation, and verification of an electronic phone book application. The current maturity and limitations of formal methods tools and techniques are then discussed, and a number of examples of the successful use of formal methods by industry are cited.

  18. Statistical Short-Range Forecast Guidance for Cloud Ceilings Over the Shuttle Landing Facility

    NASA Technical Reports Server (NTRS)

    Lambert, Winifred C.

    2001-01-01

    This report describes the results of the AMU's Short-Range Statistical Forecasting task. The cloud ceiling forecast over the Shuttle Landing Facility (SLF) is a critical element in determining whether a Shuttle should land. Spaceflight Meteorology Group (SMG) forecasters find that ceilings at the SLF are challenging to forecast. The AMU was tasked to develop ceiling forecast equations to minimize the challenge. Studies in the literature that showed success in improving short-term forecasts of ceiling provided the basis for the AMU task. A 20-year record of cool-season hourly surface observations from stations in east-central Florida was used for the equation development. Two methods were used: an observations-based (OBS) method that incorporated data from all stations, and a persistence climatology (PCL) method used as the benchmark. Equations were developed for 1-, 2-, and 3-hour lead times at each hour of the day. A comparison between the two methods indicated that the OBS equations performed well and produced an improvement over the PCL equations. Therefore, the conclusion of the AMU study is that OBS equations produced more accurate forecasts than the PCL equations, and can be used in operations. They provide another tool with which to make the ceiling forecasts that are critical to safe Shuttle landings at KSC.

  19. Improving semi-text-independent method of writer verification using difference vector

    NASA Astrophysics Data System (ADS)

    Li, Xin; Ding, Xiaoqing

    2009-01-01

    The semi-text-independent method of writer verification based on the linear framework can use all characters of two handwritings to discriminate between writers when the text contents are known. The handwritings may share only a few characters, or even none at all. This fills the gap between the classical text-dependent and text-independent methods of writer verification. Moreover, in this paper the content information, namely the identity of each character, is exploited by the semi-text-independent method. Two types of standard templates, generated from many writer-unknown handwritten samples and printed samples of each character, are introduced to represent the content information of each character. The difference vectors of the character samples are obtained by subtracting the standard templates from the original feature vectors, and they replace the original vectors in the writer verification process. By removing a large amount of content information while retaining the style information, the verification accuracy of the semi-text-independent method is improved. On a handwriting database of 30 writers, when the query handwriting and the reference handwriting each consist of 30 distinct characters, the average equal error rate (EER) of writer verification reaches 9.96%. When the handwritings contain 50 characters, the average EER falls to 6.34%, which is 23.9% lower than the EER obtained without the difference vectors.
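
    As a rough illustration of the two ingredients at play here (not the paper's implementation), the sketch below forms a difference vector by subtracting a standard template from a character's feature vector, and estimates an equal error rate from verification scores; all vectors and scores are invented.

```python
def difference_vector(features, template):
    """Subtract the per-character standard template so that mostly
    writer-style information remains."""
    return [f - t for f, t in zip(features, template)]

def equal_error_rate(genuine, impostor):
    """Approximate the EER by sweeping a threshold over the pooled scores
    (higher score = more likely the same writer)."""
    best_gap, eer = None, None
    for thr in sorted(genuine + impostor):
        frr = sum(g < thr for g in genuine) / len(genuine)     # false rejects
        far = sum(i >= thr for i in impostor) / len(impostor)  # false accepts
        if best_gap is None or abs(far - frr) < best_gap:
            best_gap, eer = abs(far - frr), (far + frr) / 2
    return eer

style_vector = difference_vector([0.82, 0.10, 0.55], [0.80, 0.12, 0.50])
eer = equal_error_rate(genuine=[0.9, 0.6, 0.4], impostor=[0.7, 0.3, 0.2])
```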

  20. A framework for improving a seasonal hydrological forecasting system using sensitivity analysis

    NASA Astrophysics Data System (ADS)

    Arnal, Louise; Pappenberger, Florian; Smith, Paul; Cloke, Hannah

    2017-04-01

    Seasonal streamflow forecasts are of great value for the socio-economic sector, for applications such as navigation, flood and drought mitigation, and reservoir management for hydropower generation and the allocation of water to agriculture and drinking supply. At present, however, the performance of dynamical seasonal hydrological forecasting systems (systems that run seasonal meteorological forecasts through a hydrological model to produce seasonal hydrological forecasts) is still limited in space and time. In this context, Ensemble Streamflow Prediction (ESP) remains an attractive method for seasonal streamflow forecasting, as it forces a hydrological model (starting from the latest observed or simulated initial hydrological conditions) with historical meteorological observations. This makes it cheaper to run than a standard dynamical seasonal hydrological forecasting system, for which the seasonal meteorological forecasts must first be produced, while still producing skilful forecasts. There is thus a need to focus resources and time on those improvements to dynamical seasonal hydrological forecasting systems that will eventually lead to significant gains in the skill of the streamflow forecasts. Sensitivity analyses are a powerful tool for disentangling the relative contributions of the two main sources of error in seasonal streamflow forecasts: the initial hydrological conditions (IHC; e.g., soil moisture, snow cover, and initial streamflow, among others) and the meteorological forcing (MF; i.e., seasonal meteorological forecasts of precipitation and temperature, input to the hydrological model). Sensitivity analyses are, however, most useful if they inform and change current operational practices. To this end, we propose a method to improve the design of a seasonal hydrological forecasting system.
This method is based on sensitivity analyses, informing the forecasters as to which element of the forecasting chain (i.e., IHC or MF) could potentially lead to the highest increase in seasonal hydrological forecasting performance, after each forecast update.
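
    The ESP principle described above can be illustrated with a toy model: start from the latest initial storage (the IHC) and force a one-parameter linear-reservoir "hydrological model" with each historical year's precipitation trace (the MF) to build a streamflow ensemble. The model, parameter, and traces are all invented for illustration.

```python
def simulate(storage, precip_trace, k=0.5):
    """Toy linear reservoir: each step's outflow is a fixed fraction of storage."""
    flows = []
    for p in precip_trace:
        storage += p          # add this step's precipitation
        q = k * storage       # linear-reservoir outflow
        storage -= q
        flows.append(q)
    return flows

# Each historical year's trace yields one ensemble member, all sharing the same IHC
historical_precip = {2013: [10, 0, 5], 2014: [2, 8, 1], 2015: [0, 0, 12]}
initial_storage = 20.0
ensemble = {year: simulate(initial_storage, trace)
            for year, trace in historical_precip.items()}
```

    The spread of the ensemble then reflects meteorological uncertainty, while the shared initial storage carries the IHC signal, which is exactly the separation a sensitivity analysis exploits.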

  1. Verification test report on a solar heating and hot water system

    NASA Technical Reports Server (NTRS)

    1978-01-01

    Information is provided on the development, qualification, and acceptance verification of commercial solar heating and hot water systems and components. The verification covers performance, efficiencies, and the various methods used, such as similarity, analysis, inspection, and test, that are applicable to satisfying the verification requirements.

  2. A simplified real time method to forecast semi-enclosed basins storm surge

    NASA Astrophysics Data System (ADS)

    Pasquali, D.; Di Risio, M.; De Girolamo, P.

    2015-11-01

    Semi-enclosed basins are often prone to storm surge events. Indeed, their meteorological exposure, the presence of a large continental shelf, and their shape can lead to strong sea-level set-up. A real-time system aimed at forecasting storm surge can be of great help in protecting human activities (i.e., forecasting flooding due to storm surge events), managing ports, and safeguarding coastal safety. This paper illustrates a simple method able to forecast storm surge events in semi-enclosed basins in real time. The method is based on a mixed approach in which the results of a simplified physics-based model with low computational cost are corrected by means of statistical techniques. The proposed method is applied to a point of interest located in the northern part of the Adriatic Sea. The comparison of forecast levels against observed values shows the satisfactory reliability of the forecasts.
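
    As one hedged illustration of such a statistical correction (the abstract does not specify the authors' particular technique), a least-squares linear fit of observed levels to model levels can be learned from past events and applied to new model output:

```python
def fit_linear_correction(model_levels, observed_levels):
    """Fit observed = a + b * model by least squares; return the correction."""
    n = len(model_levels)
    mx = sum(model_levels) / n
    my = sum(observed_levels) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(model_levels, observed_levels))
    var = sum((x - mx) ** 2 for x in model_levels)
    b = cov / var
    a = my - b * mx
    return lambda model_value: a + b * model_value

# Hypothetical past surge levels (metres): simplified-model output vs. tide gauge
correct = fit_linear_correction([0.4, 0.7, 1.1], [0.5, 0.9, 1.4])
corrected_forecast = correct(0.9)
```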

  3. Development of a short-term irradiance prediction system using post-processing tools on WRF-ARW meteorological forecasts in Spain

    NASA Astrophysics Data System (ADS)

    Rincón, A.; Jorba, O.; Baldasano, J. M.

    2010-09-01

    The increased contribution of solar energy among power generation sources requires an accurate estimation of surface solar irradiance, which is conditioned by geographical, temporal, and meteorological factors. Knowledge of the variability of these factors is essential to estimate the expected energy production, and therefore helps stabilize the electricity grid and increase the reliability of available solar energy. The use of numerical meteorological models in combination with statistical post-processing tools has the potential to satisfy the requirements for short-term forecasting of solar irradiance up to several days ahead and for its application to solar devices. In this contribution, we present an assessment of a short-term irradiance prediction system based on the WRF-ARW mesoscale meteorological model (Skamarock et al., 2005) and several post-processing tools, aimed at improving the overall skill of the system in an annual simulation of the year 2004 in Spain. The WRF-ARW model is applied with 4 km x 4 km horizontal resolution and 38 vertical layers over the Iberian Peninsula. The hourly model irradiance is evaluated against more than 90 surface stations. The stations are used to assess the temporal and spatial fluctuations and trends of the system, evaluating three different post-processes: the Model Output Statistics technique (MOS; Glahn and Lowry, 1972), a recursive statistical method (REC; Boi, 2004) and a Kalman Filter Predictor (KFP; Bozic, 1994; Roeger et al., 2003). A first evaluation of the system without post-processing tools shows an overestimation of the surface irradiance, due to attenuation by atmospheric absorbers other than clouds that are not included in the meteorological model. This produces an annual BIAS of 16 W m-2 h-1, an annual RMSE of 106 W m-2 h-1 and an annual NMAE of 42%. The largest errors are observed in spring and summer, reaching an RMSE of 350 W m-2 h-1.
Results using Kalman Filter Predictor show a reduction of 8% of RMSE, 83% of BIAS, and NMAE decreases down to 32%. The REC method shows a reduction of 6% of RMSE, 79% of BIAS, and NMAE decreases down to 28%. When comparing stations at different altitudes, the overestimation is enhanced at coastal stations (less than 200m) up to 900 W m-2 h-1. The results allow us to analyze strengths and drawbacks of the irradiance prediction system and its application in the estimation of energy production from photovoltaic system cells. References Boi, P.: A statistical method for forecasting extreme daily temperatures using ECMWF 2-m temperatures and ground station measurements, Meteorol. Appl., 11, 245-251, 2004. Bozic, S.: Digital and Kalman filtering, John Wiley, Hoboken, New Jersey, 2nd edn., 1994. Glahn, H. and Lowry, D.: The use of Model Output Statistics (MOS) in Objective Weather Forecasting, Applied Meteorology, 11, 1203-1211, 1972. Roeger, C., Stull, R., McClung, D., Hacker, J., Deng, X., and Modzelewski, H.: Verification of Mesoscale Numerical Weather Forecasts in Mountainous Terrain for Application to Avalanche Prediction, Weather and forecasting, 18, 1140-1160, 2003. Skamarock, W., Klemp, J., Dudhia, J., Gill, D., Barker, D. M., Wang, W., and Powers, J. G.: A Description of the Advanced Research WRF Version 2, Tech. Rep. NCAR/TN-468+STR, NCAR Technical note, 2005.
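
    The Kalman Filter Predictor used for post-processing can be sketched as a scalar filter that tracks the slowly varying forecast bias and subtracts the current estimate from the next forecast. The noise variances and error sequence below are illustrative, not the study's settings.

```python
def kalman_bias(errors, q=0.01, r=1.0):
    """Track forecast bias from a sequence of errors (model minus observation).
    q is the process-noise variance, r the observation-noise variance."""
    x, p = 0.0, 1.0               # initial bias estimate and its variance
    estimates = []
    for e in errors:
        p += q                    # predict: bias assumed nearly persistent
        k = p / (p + r)           # Kalman gain
        x += k * (e - x)          # update with the latest observed error
        p *= 1 - k
        estimates.append(x)
    return estimates

# A persistent +16 W m-2 h-1 error, mimicking the annual BIAS reported above
bias_track = kalman_bias([16.0] * 5)
```

    The estimate converges toward the true bias as more errors arrive, which is why the filter removes systematic error (BIAS) far more effectively than random error (RMSE).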

  4. Gas demand forecasting by a new artificial intelligent algorithm

    NASA Astrophysics Data System (ADS)

    Khatibi. B, Vahid; Khatibi, Elham

    2012-01-01

    Energy demand forecasting is a key issue for consumers and generators in all energy markets in the world. This paper presents a new forecasting algorithm for daily gas demand prediction. The algorithm combines a wavelet transform with forecasting models such as the multi-layer perceptron (MLP), linear regression, and GARCH. The proposed method is applied to real data from the UK gas market to evaluate its performance. The results show that forecasting accuracy is improved significantly by using the proposed method.

  5. Statistical evaluation of forecasts

    NASA Astrophysics Data System (ADS)

    Mader, Malenka; Mader, Wolfgang; Gluckman, Bruce J.; Timmer, Jens; Schelter, Björn

    2014-08-01

    Reliable forecasts of extreme but rare events, such as earthquakes, financial crashes, and epileptic seizures, would render interventions and precautions possible. Therefore, forecasting methods have been developed which intend to raise an alarm if an extreme event is about to occur. In order to statistically validate the performance of a prediction system, it must be compared to the performance of a random predictor, which raises alarms independent of the events. Such a random predictor can be obtained by bootstrapping or analytically. We propose an analytic statistical framework which, in contrast to conventional methods, allows for validating independently the sensitivity and specificity of a forecasting method. Moreover, our method accounts for the periods during which an event has to remain absent or occur after a respective forecast.
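
    For intuition about the random-predictor baseline (a simplified textbook formulation, not the authors' full framework): if alarms are raised independently in each time bin with probability p, and an event counts as predicted when at least one alarm falls in the w bins preceding it, the expected sensitivity is 1 - (1 - p)^w.

```python
def random_predictor_sensitivity(p, w):
    """Expected fraction of events preceded by at least one random alarm."""
    return 1 - (1 - p) ** w

# A method raising alarms 10% of the time with a 5-bin prediction window
baseline = random_predictor_sensitivity(p=0.1, w=5)
# Any genuine forecasting method must exceed this chance level to claim skill.
```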

  6. Development of a flood early warning system and communication with end-users: the Vipava/Vipacco case study in the KULTURisk FP7 project

    NASA Astrophysics Data System (ADS)

    Grossi, Giovanna; Caronna, Paolo; Ranzi, Roberto

    2014-05-01

    Within the framework of risk communication, the goal of an early warning system is to support the interaction between technicians and authorities (and subsequently the population) as a prevention measure. The methodology proposed in the KULTURisk FP7 project aimed to build a closer collaboration between these actors, with the aim of promoting pro-active actions to mitigate the effects of flood hazards. The transnational (Slovenia/Italy) Soča/Isonzo case study focused on this concept of cooperation between stakeholders and hydrological forecasters. The DIMOSHONG_VIP hydrological model was calibrated for the Vipava/Vipacco River (650 km2), a tributary of the Soča/Isonzo River, on the basis of flood events that occurred between 1998 and 2012. The European Centre for Medium-Range Weather Forecasts (ECMWF) provided the past meteorological forecasts, both deterministic (1 forecast) and probabilistic (51 ensemble members). The resolution of the ECMWF grid is currently about 15 km (Deterministic, DET) and 30 km (Ensemble Prediction System, EPS). A verification was conducted to validate the flood-forecast outputs of the DIMOSHONG_VIP+ECMWF early warning system. Basic descriptive statistics, such as event probability, probability of a forecast occurrence, and frequency bias, were determined. Performance measures were calculated, such as the hit rate (probability of detection) and the false alarm rate (probability of false detection). Relative Operating Characteristic (ROC) curves were generated for both deterministic and probabilistic forecasts. These analyses showed a good performance of the early warning system, given the small size of the sample. Particular attention was paid to the design of flood-forecasting output charts, involving and surveying stakeholders (the Alto Adriatico River Basin Authority), hydrology specialists in the field, and laypeople. Graph types for both forecast precipitation and discharge were defined. 
Three different risk thresholds were identified ("attention", "pre-alarm" or "alert", "alarm"), with an "icon-style" representation suitable for communication to civil protection stakeholders or the public. To show probabilistic information in a user-friendly way, we opted to visualize the single deterministic forecast hydrograph together with the 5%, 25%, 50%, 75% and 95% percentile bands of the Hydrological Ensemble Prediction System (HEPS). HEPS is generally used for 3-5 day hydrological forecasts, for which the error due to incorrect initial data is comparable to the error due to its lower resolution with respect to the deterministic forecast. In short-term forecasting (12-48 hours) the HEPS members naturally show a similar tendency; in this case, given its higher resolution, the deterministic forecast is expected to be more effective. Plotting the different forecasts in the same chart allows model outputs to be used from 4-5 days to a few hours before a potential flood event. This framework was built to help a stakeholder, such as a mayor or a civil protection authority, in flood control and management operations, and was designed to be included in a wider decision support system.
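
    The verification measures named above all derive from the 2x2 contingency table of forecast/observed event pairs; the counts below are invented for illustration.

```python
def verification_scores(hits, false_alarms, misses, correct_negatives):
    """Dichotomous verification measures from a 2x2 contingency table."""
    total = hits + false_alarms + misses + correct_negatives
    observed = hits + misses
    return {
        "event_probability": observed / total,
        "frequency_bias": (hits + false_alarms) / observed,
        "hit_rate": hits / observed,                        # probability of detection
        "false_alarm_rate": false_alarms / (false_alarms + correct_negatives),
    }

scores = verification_scores(hits=12, false_alarms=3, misses=4, correct_negatives=81)
```

    Plotting hit rate against false alarm rate for a range of probability thresholds is exactly how the ROC curves mentioned above are built.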

  7. An assessment of two methods for identifying undocumented levees using remotely sensed data

    USGS Publications Warehouse

    Czuba, Christiana R.; Williams, Byron K.; Westman, Jack; LeClaire, Keith

    2015-01-01

    Many undocumented and commonly unmaintained levees exist in the landscape complicating flood forecasting, risk management, and emergency response. This report describes a pilot study completed by the U.S. Geological Survey in cooperation with the U.S. Army Corps of Engineers to assess two methods to identify undocumented levees by using remotely sensed, high-resolution topographic data. For the first method, the U.S. Army Corps of Engineers examined hillshades computed from a digital elevation model that was derived from light detection and ranging (lidar) to visually identify potential levees and then used detailed site visits to assess the validity of the identifications. For the second method, the U.S. Geological Survey applied a wavelet transform to a lidar-derived digital elevation model to identify potential levees. The hillshade method was applied to Delano, Minnesota, and the wavelet-transform method was applied to Delano and Springfield, Minnesota. Both methods were successful in identifying levees but also identified other features that required interpretation to differentiate from levees such as constructed barriers, high banks, and bluffs. Both methods are complementary to each other, and a potential conjunctive method for testing in the future includes (1) use of the wavelet-transform method to rapidly identify slope-break features in high-resolution topographic data, (2) further examination of topographic data using hillshades and aerial photographs to classify features and map potential levees, and (3) a verification check of each identified potential levee with local officials and field visits.
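
    As a toy illustration of why a wavelet-style transform highlights levee-like features (this is not the study's implementation), first-level Haar detail coefficients along a 1-D elevation transect peak at abrupt slope breaks; the elevations are invented.

```python
def haar_details(elevations):
    """First-level Haar wavelet detail coefficients between adjacent samples."""
    return [abs(elevations[i + 1] - elevations[i]) / 2
            for i in range(len(elevations) - 1)]

# Hypothetical lidar transect (metres) crossing a levee crest at 12.5 m
transect = [10.0, 10.1, 10.0, 12.5, 10.2, 10.1]
details = haar_details(transect)
peak_index = details.index(max(details))   # flags the abrupt rise onto the crest
```

    As the report notes, high banks and bluffs produce similar slope-break responses, which is why interpretation and field verification remain necessary.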

  8. Predicting global thunderstorm activity for sprite observations from the International Space Station

    NASA Astrophysics Data System (ADS)

    Yair, Y.; Mezuman, K.; Ziv, B.; Priente, M.; Glickman, M.; Takahashi, Y.; Inoue, T.

    2012-04-01

    The global rate of sprites occurring above thunderstorms, estimated from ISUAL satellite data, is ~0.5 per minute (Chen et al., 2008). During the summer of 2011, in the framework of the "Cosmic Shore" project, we conducted a concentrated attempt to image sprites from the ISS. The methodology for target selection was based on that developed for the space shuttle MEIDEX sprite campaign (Ziv et al., 2004). Several types of convective systems generate thunderstorms, and they differ in their effectiveness for sprite production (Lyons et al., 2009), so we had to evaluate the ability of the predicted storms to produce sprites. We used the Aviation Weather Center (http://aviationweather.gov) daily significant weather forecast maps (SIGWX) to select regions with a high probability of convective storms and lightning such that they were within the camera field-of-view, as deduced from the ISS trajectory and distance to the limb. To enhance the chance of success, only storms with predicted "Frequent Cb" and cloud tops above 45 Kft (~14 km) were selected. Additionally, we targeted tropical storms and hurricanes over the oceans. The accuracy of the forecast method enabled us to obtain the first-ever color images of sprites from space. We will report the observations, showing various types of sprites in many different geographical locations, together with the properties of the parent lightning derived from ELF measurements and from global and local lightning location networks. Chen, A. B., et al. (2008), Global distributions and occurrence rates of transient luminous events, J. Geophys. Res., 113, A08306, doi:10.1029/2008JA013101. Lyons, W. A., et al. (2009), The meteorological and electrical structure of TLE-producing convective storms, in Betz et al. (eds.), Lightning: Principles, Instruments and Applications, Springer Science + Business Media B.V. Ziv, B., Y. Yair, K. Pressman and M. Fullekrug (2004), Verification of the Aviation Center global forecasts of Mesoscale Convective Systems, J. Appl. Meteor., 43, 720-726.

  9. Seasonal scale water deficit forecasting in Africa and the Middle East using NASA's Land Information System (LIS)

    NASA Astrophysics Data System (ADS)

    Shukla, Shraddhanand; Arsenault, Kristi R.; Getirana, Augusto; Kumar, Sujay V.; Roningen, Jeanne; Zaitchik, Ben; McNally, Amy; Koster, Randal D.; Peters-Lidard, Christa

    2017-04-01

    Drought and water scarcity are among the important issues facing several regions within Africa and the Middle East. A seamless and effective monitoring and early warning system is needed by regional and national stakeholders. Such a system should support a proactive drought management approach and mitigate socio-economic losses to the extent possible. In this presentation, we report on the ongoing development and validation of a seasonal-scale water deficit forecasting system based on NASA's Land Information System (LIS) and seasonal climate forecasts. First, our presentation will focus on the implementation and validation of the LIS models used for drought and water availability monitoring in the region. The second part will focus on evaluating drought and water availability forecasts. Finally, details will be provided of our ongoing collaboration with end-user partners in the region (e.g., USAID's Famine Early Warning Systems Network, FEWS NET) on formulating meaningful early warning indicators and on effective communication and seamless dissemination of the monitoring and forecasting products through NASA's web services. The water deficit forecasting system thus far incorporates NOAA's Noah land surface model (LSM), version 3.3, the Variable Infiltration Capacity (VIC) model, version 4.12, NASA GMAO's Catchment LSM, and the Noah Multi-Physics (MP) LSM (the latter two incorporate prognostic water table schemes). In addition, the LSMs' surface and subsurface runoff are routed through the Hydrological Modeling and Analysis Platform (HyMAP) to simulate surface water dynamics. The LSMs are driven by NASA/GMAO's Modern-Era Retrospective analysis for Research and Applications, Version 2 (MERRA-2), and the USGS and UCSB Climate Hazards Group InfraRed Precipitation with Station (CHIRPS) daily rainfall dataset. The LIS software framework integrates these forcing datasets and drives the four LSMs and HyMAP.
The Land Verification Toolkit (LVT) is used for the evaluation of the LSMs, as it provides model ensemble metrics and the ability to compare against a variety of remotely sensed measurements, like different evapotranspiration (ET) and soil moisture products, and other reanalysis datasets that are available for this region. Comparison of the models' energy and hydrological budgets will be shown for this region (and sub-basin level, e.g., Blue Nile River) and time period (1981-2015), along with evaluating ET, streamflow, groundwater storage and soil moisture, using evaluation metrics (e.g., anomaly correlation, RMSE, etc.). The system uses seasonal climate forecasts from NASA's GMAO (the Goddard Earth Observing System Model, version 5) and NCEP's Climate Forecast System, version 2, and it produces forecasts of soil moisture, ET and streamflow out to 6 months in the future. Forecasts of those variables are formulated in terms of indicators to provide forecasts of drought and water availability in the region.

  10. Forest Fire Danger Rating (FFDR) Prediction over the Korean Peninsula

    NASA Astrophysics Data System (ADS)

    Song, B.; Won, M.; Jang, K.; Yoon, S.; Lim, J.

    2016-12-01

    Approximately five hundred forest fires occur each year in Korea during the spring and autumn fire seasons, inflicting losses of both life and property. Thus, accurate prediction of forest fires is essential for effective forest fire prevention. Meteorology is one of the important factors for predicting and understanding fire occurrence as well as fire behavior and spread. In this study, we present the Forest Fire Danger Rating Systems (FFDRS) for the Korean Peninsula based on the Daily Weather Index (DWI), which represents the meteorological characteristics related to forest fire. Thematic maps of temperature, humidity, and wind speed produced by the Korea Meteorological Administration (KMA) were applied to a forest fire occurrence probability model, based on logistic regression, to analyze the DWI over the Korean Peninsula. The regional data assimilation and prediction system (RDAPS) and the improved digital forecast model were used to verify the sensitivity of the DWI. The verification test revealed that the improved digital forecast model dataset showed better agreement with the real-time weather data. The forest fire danger rating index (FFDRI) calculated from the improved digital forecast model dataset agreed well with the real-time weather dataset over the 233 administrative districts (R2=0.854). In addition, FFDRI was compared with observation-based FFDRI at 76 national weather stations; the mean difference was 0.5 at the site level. These results indicate that the improved digital forecast model dataset can be used to successfully predict the FFDRI over the Korean Peninsula.

  11. Forecasting in foodservice: model development, testing, and evaluation.

    PubMed

    Miller, J L; Thompson, P A; Orabella, M M

    1991-05-01

    This study was designed to develop, test, and evaluate mathematical models appropriate for forecasting menu-item production demand in foodservice. Data were collected from residence and dining hall foodservices at Ohio State University. Objectives of the study were to collect, code, and analyze the data; develop and test models using actual operation data; and compare forecasting results with current methods in use. Customer count was forecast using deseasonalized simple exponential smoothing. Menu-item demand was forecast by multiplying the count forecast by a predicted preference statistic. Forecasting models were evaluated using mean squared error, mean absolute deviation, and mean absolute percentage error techniques. All models were more accurate than current methods. A broad spectrum of forecasting techniques could be used by foodservice managers with access to a personal computer and spread-sheet and database-management software. The findings indicate that mathematical forecasting techniques may be effective in foodservice operations to control costs, increase productivity, and maximize profits.
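
    The three evaluation measures used in the study are standard and easy to state; the demand figures below are invented, not the Ohio State data.

```python
def mse(actual, forecast):
    """Mean squared error."""
    return sum((a - f) ** 2 for a, f in zip(actual, forecast)) / len(actual)

def mad(actual, forecast):
    """Mean absolute deviation."""
    return sum(abs(a - f) for a, f in zip(actual, forecast)) / len(actual)

def mape(actual, forecast):
    """Mean absolute percentage error."""
    return 100 * sum(abs(a - f) / a for a, f in zip(actual, forecast)) / len(actual)

# Hypothetical menu-item demand vs. forecast over four meal periods
actual, forecast = [100, 120, 90, 110], [95, 125, 92, 105]
```

    All three fit comfortably in spreadsheet formulas, consistent with the study's point that foodservice managers can apply them with ordinary desktop software.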

  12. The value of forecasting key-decision variables for rain-fed farming

    NASA Astrophysics Data System (ADS)

    Winsemius, Hessel; Werner, Micha

    2013-04-01

    Rain-fed farmers are highly vulnerable to variability in rainfall. Timely knowledge of the onset of the rainy season, the expected amount of rainfall, and the occurrence of dry spells can help rain-fed farmers plan the cropping season. Seasonal probabilistic weather forecasts may provide such information to farmers, but need to provide reliable forecasts of the key variables with which farmers make decisions. In this contribution, we present a new method to evaluate the value of meteorological forecasts in predicting these key variables. The method measures skill by assessing whether a forecast was useful to the decision at hand, taking into account the accuracy in timing required for the decision to be useful. It then progresses from forecast skill to forecast value by weighing that required accuracy against the cost/loss ratio of the possible decisions. The method is applied over the Limpopo region in Southern Africa, using the example of temporary water harvesting techniques. Such techniques require time to construct and must be ready long enough before the occurrence of a dry spell to be effective. The value of the forecasts for this example decision is shown to be highly sensitive to the accuracy in the timing of forecast dry spells and to the decision's tolerance of timing error. The skill with which dry spells can be predicted is higher in some parts of the basin, indicating that the forecasts have greater value for the decision in those parts than in others. By assessing the skill of forecasting the farmers' key decision variables, we show that it becomes easier to understand whether the forecasts have value in reducing risk, or whether other adaptation strategies should be implemented.
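
    The cost/loss reasoning can be sketched as follows (a standard simplification, not the paper's full method): the farmer pays cost C to build water harvesting when a dry spell is forecast, or suffers loss L if an unprotected dry spell occurs, so protecting whenever the forecast probability exceeds C/L minimizes the expected expense.

```python
def expected_expense(probs, events, C, L):
    """Total expense when protecting whenever forecast probability > C/L."""
    total = 0.0
    for p, occurred in zip(probs, events):
        if p > C / L:
            total += C            # protective action taken
        elif occurred:
            total += L            # unprotected dry spell: full loss
    return total

# Hypothetical dry-spell probabilities and outcomes over four decision windows
probs, events = [0.9, 0.2, 0.7, 0.1], [True, False, True, False]
with_forecast = expected_expense(probs, events, C=1.0, L=5.0)
never_protect = expected_expense([0.0] * 4, events, C=1.0, L=5.0)
```

    Comparing the expense with the forecast against the always-protect and never-protect baselines gives the relative value of the forecast for that cost/loss ratio.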

  13. Added value of non-calibrated and BMA calibrated AEMET-SREPS probabilistic forecasts: the 24 January 2009 extreme wind event over Catalonia

    NASA Astrophysics Data System (ADS)

    Escriba, P. A.; Callado, A.; Santos, D.; Santos, C.; Simarro, J.; García-Moya, J. A.

    2009-09-01

    At 00 UTC on 24 January 2009, an explosive cyclogenesis that originated over the Atlantic Ocean reached its maximum intensity, with observed surface pressures below 970 hPa at its centre, located over the Bay of Biscay. During its path across southern France, this low caused strong westerly and north-westerly winds over the Iberian Peninsula, exceeding 150 km/h in some places. These extreme winds left 10 casualties in Spain, 8 of them in Catalonia. The aim of this work is to show whether there is added value in the short-range prediction of the 24 January 2009 strong winds when using the Short Range Ensemble Prediction System (SREPS) of the Spanish Meteorological Agency (AEMET), with respect to the operational forecasting tools. This study emphasizes two aspects of probabilistic forecasting: the ability of a 3-day forecast to warn of an extreme wind event, and the ability to quantify the predictability of the event, thereby giving value to the deterministic forecast. Two types of probabilistic wind forecasts are carried out: a non-calibrated one and one calibrated using Bayesian Model Averaging (BMA). AEMET runs SREPS experimentally twice a day (00 and 12 UTC). The system consists of 20 members, constructed by integrating 5 limited-area models, COSMO (COSMO), HIRLAM (HIRLAM Consortium), HRM (DWD), MM5 (NOAA) and UM (UKMO), at 25 km horizontal resolution. Each model uses 4 different initial and boundary conditions, from the global models GFS (NCEP), GME (DWD), IFS (ECMWF) and UM. In this way a probabilistic forecast is obtained that accounts for initial-condition, boundary-condition, and model errors. BMA is a statistical tool for combining predictive probability functions from different sources. The BMA predictive probability density function (PDF) is a weighted average of PDFs centred on the individual bias-corrected forecasts. 
The weights are equal to the posterior probabilities of the models generating the forecasts and reflect the skill of the ensemble members. Here, BMA is applied to provide probabilistic forecasts of wind speed. In this work, forecasts at several ranges (H+72, H+48 and H+24) of 10-metre wind speed over Catalonia are verified subjectively at one of the instants of maximum intensity, 12 UTC 24 January 2009. On the one hand, three probabilistic forecasts are compared: ECMWF EPS, non-calibrated SREPS, and calibrated SREPS. On the other hand, the relationship between predictability and the skill of the deterministic forecast is studied by examining HIRLAM 0.16 deterministic forecasts of the event. Verification focuses on the location and intensity of 10-metre wind speed, using 10-minute measurements from AEMET automatic ground stations as observations. The results indicate that SREPS is able to forecast, three days ahead, mean winds higher than 36 km/h, and that it localizes them correctly with a significant probability of occurrence over the affected area. The probability is higher after BMA calibration of the ensemble. The high probability of strong winds allows us to state that the predictability of the event is also high and, as a consequence, that deterministic forecasts are more reliable. This is confirmed when verifying the HIRLAM deterministic forecasts against observed values.
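
    The BMA predictive PDF described above can be sketched as a weighted mixture of Gaussian kernels centred on the bias-corrected member forecasts; the weights, centres, and spread below are invented for illustration (in practice they are fitted, typically by maximum likelihood via EM).

```python
import math

def bma_pdf(x, centres, weights, sigma=2.0):
    """Weighted mixture of normal kernels centred on member forecasts."""
    norm = sigma * math.sqrt(2 * math.pi)
    return sum(w * math.exp(-((x - c) ** 2) / (2 * sigma ** 2)) / norm
               for w, c in zip(weights, centres))

# Hypothetical bias-corrected member forecasts of 10 m wind speed (km/h)
centres = [34.0, 38.0, 41.0]
weights = [0.5, 0.3, 0.2]   # posterior model probabilities, summing to 1
density_at_36 = bma_pdf(36.0, centres, weights)
```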

  14. Analysis of forecasting and inventory control of raw material supplies in PT INDAC INT’L

    NASA Astrophysics Data System (ADS)

    Lesmana, E.; Subartini, B.; Riaman; Jabar, D. A.

    2018-03-01

    This study discusses forecasting of carbon electrode sales data at PT INDAC INT'L using the Winters and double moving average methods, while the Economic Order Quantity (EOQ) model is used to predict the amount of inventory and the cost required to order carbon electrode raw material for the next period. The error analysis of MAE, MSE and MAPE shows that the Winters method is the better forecasting method for carbon electrode product sales. PT INDAC INT'L is therefore advised to stock products in line with the sales amounts forecast by the Winters method.
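
    The EOQ model mentioned above has a closed form, Q* = sqrt(2DS/H), where D is demand per period, S the cost per order, and H the holding cost per unit per period. A minimal sketch with hypothetical figures, not taken from the study:

```python
import math

def eoq(demand, order_cost, holding_cost):
    """Economic Order Quantity: the order size that minimizes the sum of
    per-period ordering and holding costs, Q* = sqrt(2*D*S/H)."""
    return math.sqrt(2.0 * demand * order_cost / holding_cost)

# Hypothetical figures: 1200 units demanded per period, 100 per order
# placed, holding cost of 6 per unit per period
q = eoq(demand=1200, order_cost=100, holding_cost=6)   # 200 units per order
orders_per_period = 1200 / q                           # 6 orders per period
```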

  15. Forecasting the Short-Term Passenger Flow on High-Speed Railway with Neural Networks

    PubMed Central

    Xie, Mei-Quan; Li, Xia-Miao; Zhou, Wen-Liang; Fu, Yan-Bing

    2014-01-01

    Short-term passenger flow forecasting is an important component of transportation systems. The forecasting results can be applied to support transportation system operation and management, such as operation planning and revenue management. In this paper, a divide-and-conquer method based on neural networks and origin-destination (OD) matrix estimation is developed to forecast short-term passenger flow in a high-speed railway system. The forecasting method has three steps. First, the numbers of passengers who arrive at or depart from each station are obtained from historical passenger flow data, which take the form of OD matrices in this paper. Second, short-term forecasts of the numbers of passengers arriving at or departing from each station are produced with a neural network. Finally, the short-term OD matrices are obtained with an OD matrix estimation method. The experimental results indicate that the proposed divide-and-conquer method performs well in forecasting short-term passenger flow on high-speed railways. PMID:25544838
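
    The final step, recovering an OD matrix consistent with forecast station-level departures and arrivals, is commonly done with iterative proportional fitting; the paper's exact estimation method may differ in detail, and the matrices below are hypothetical:

```python
import numpy as np

def ipf_od(seed, departures, arrivals, iters=200, tol=1e-9):
    """Iterative proportional fitting: rescale a historical OD matrix
    until its row sums match the forecast departures and its column sums
    match the forecast arrivals. A standard OD-estimation technique,
    shown here as one possible realization of the estimation step."""
    od = seed.astype(float).copy()
    for _ in range(iters):
        od *= (departures / od.sum(axis=1))[:, None]   # match departures
        od *= (arrivals / od.sum(axis=0))[None, :]     # match arrivals
        if np.allclose(od.sum(axis=1), departures, atol=tol):
            break
    return od

# Hypothetical 3-station network; both total vectors must sum to 140
seed = np.array([[0.0, 30.0, 20.0],
                 [25.0, 0.0, 15.0],
                 [10.0, 20.0, 0.0]])    # historical OD pattern
departures = np.array([60.0, 45.0, 35.0])   # forecast row totals
arrivals = np.array([40.0, 55.0, 45.0])     # forecast column totals
od = ipf_od(seed, departures, arrivals)
```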

  16. Spatial forecast of landslides in three gorges based on spatial data mining.

    PubMed

    Wang, Xianmin; Niu, Ruiqing

    2009-01-01

    The Three Gorges is a region with a very high landslide density and a concentrated population. Landslide disasters occur frequently in the Three Gorges, and the potential risk of landslides is tremendous. In this paper, focusing on the Three Gorges, which has a complicated landform, spatial forecasting of landslides is studied by establishing 20 forecast factors (spectra, texture, vegetation coverage, reservoir water level, slope structure, engineering rock group, elevation, slope, aspect, etc.). China-Brazil Earth Resources Satellite (Cbers) images were adopted, and a C4.5 decision tree was used to mine spatial landslide forecast criteria for Guojiaba Town (Zhigui County) in the Three Gorges; based on this knowledge, intelligent spatial landslide forecasts were performed for Guojiaba Town. All landslides lie in the regions forecast as dangerous and unstable, so the forecast result is good. The method proposed in the paper is compared with seven other methods: IsoData, K-Means, Mahalanobis Distance, Maximum Likelihood, Minimum Distance, Parallelepiped and the Information Content Model. The experimental results show that the proposed method has a forecast precision noticeably higher than that of the other seven methods.
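
    The criterion C4.5 uses to mine such forecast rules is the information gain ratio, which ranks candidate forecast factors by how much splitting on them reduces label entropy, normalized by the split information. A minimal sketch with hypothetical slope-class data, not from the study:

```python
import numpy as np
from collections import Counter

def entropy(labels):
    counts = np.array(list(Counter(labels).values()), dtype=float)
    p = counts / counts.sum()
    return float(-np.sum(p * np.log2(p)))

def gain_ratio(feature, labels):
    """Information gain ratio, the splitting criterion of C4.5."""
    n = len(labels)
    cond, split_info = 0.0, 0.0
    for v in set(feature):
        sub = [labels[i] for i in range(n) if feature[i] == v]
        w = len(sub) / n
        cond += w * entropy(sub)        # entropy remaining after the split
        split_info -= w * np.log2(w)    # penalizes many-valued factors
    gain = entropy(labels) - cond
    return gain / split_info if split_info > 0 else 0.0

# Hypothetical training cells: 1 = landslide, 0 = stable
labels = [1, 1, 0, 0]
slope = ['steep', 'steep', 'gentle', 'gentle']   # separates the classes
aspect = ['n', 's', 'n', 's']                    # carries no information
```

    A factor that separates landslide from stable cells perfectly scores 1, while an uninformative one scores 0, so ranking factors by gain ratio drives the tree-building.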

  17. Spatial Forecast of Landslides in Three Gorges Based On Spatial Data Mining

    PubMed Central

    Wang, Xianmin; Niu, Ruiqing

    2009-01-01

    The Three Gorges is a region with a very high landslide density and a concentrated population. Landslide disasters occur frequently in the Three Gorges, and the potential risk of landslides is tremendous. In this paper, focusing on the Three Gorges, which has a complicated landform, spatial forecasting of landslides is studied by establishing 20 forecast factors (spectra, texture, vegetation coverage, reservoir water level, slope structure, engineering rock group, elevation, slope, aspect, etc.). China-Brazil Earth Resources Satellite (Cbers) images were adopted, and a C4.5 decision tree was used to mine spatial landslide forecast criteria for Guojiaba Town (Zhigui County) in the Three Gorges; based on this knowledge, intelligent spatial landslide forecasts were performed for Guojiaba Town. All landslides lie in the regions forecast as dangerous and unstable, so the forecast result is good. The method proposed in the paper is compared with seven other methods: IsoData, K-Means, Mahalanobis Distance, Maximum Likelihood, Minimum Distance, Parallelepiped and the Information Content Model. The experimental results show that the proposed method has a forecast precision noticeably higher than that of the other seven methods. PMID:22573999

  18. Tropospheric Airborne Meteorological Data Reporting (TAMDAR) Sensor Validation and Verification on National Oceanographic and Atmospheric Administration (NOAA) Lockheed WP-3D Aircraft

    NASA Technical Reports Server (NTRS)

    Tsoucalas, George; Daniels, Taumi S.; Zysko, Jan; Anderson, Mark V.; Mulally, Daniel J.

    2010-01-01

    As part of the National Aeronautics and Space Administration's Aviation Safety and Security Program, the Tropospheric Airborne Meteorological Data Reporting (TAMDAR) project developed a low-cost sensor for aircraft flying in the lower troposphere. This activity was a joint effort with support from the Federal Aviation Administration, the National Oceanic and Atmospheric Administration, and industry. This paper reports the TAMDAR sensor performance validation and verification, as flown on board NOAA Lockheed WP-3D aircraft. These flight tests were conducted to assess the performance of the TAMDAR sensor for measurements of temperature, relative humidity, and wind parameters. The ultimate goal was to develop a small low-cost sensor, collect useful meteorological data, downlink the data in near real time, and use the data to improve weather forecasts. The envisioned system will initially be used on regional and package carrier aircraft. The ultimate users of the data are National Centers for Environmental Prediction forecast modelers. Other users include air traffic controllers, flight service stations, and airline weather centers. NASA worked with an industry partner to develop the sensor. Prototype sensors were subjected to numerous tests in ground and flight facilities. As a result of these earlier tests, many design improvements were made to the sensor. The results of tests on a final version of the sensor are the subject of this report. The sensor is capable of measuring temperature, relative humidity, pressure, and icing. It can compute pressure altitude, indicated air speed, true air speed, ice presence, wind speed and direction, and eddy dissipation rate. Summary results from the flight tests are presented along with corroborative data from aircraft instruments.

  19. Calibration and combination of dynamical seasonal forecasts to enhance the value of predicted probabilities for managing risk

    NASA Astrophysics Data System (ADS)

    Dutton, John A.; James, Richard P.; Ross, Jeremy D.

    2013-06-01

    Seasonal probability forecasts produced with numerical dynamics on supercomputers offer great potential value in managing the risk and opportunity created by seasonal variability. The skill and reliability of contemporary forecast systems can be increased by calibration methods that use the historical performance of the forecast system to improve the ongoing real-time forecasts. Two calibration methods are applied to seasonal surface temperature forecasts of the US National Weather Service and the European Centre for Medium-Range Weather Forecasts, and to a World Climate Service multi-model ensemble created by combining those two forecasts with Bayesian methods. As expected, the multi-model ensemble is somewhat more skillful and more reliable than the original models taken alone. The potential value of the multi-model ensemble in decision making is illustrated with the profits achieved in simulated trading of a weather derivative. In addition to examining the seasonal models, the article demonstrates that calibrated probability forecasts of weekly average temperatures at leads of 2-4 weeks are also skillful and reliable. The conversion of ensemble forecasts into probability distributions of impact variables is illustrated with degree days derived from the temperature forecasts. Some issues related to loss of stationarity owing to long-term warming are considered. The main conclusion of the article is that properly calibrated probabilistic forecasts possess sufficient skill and reliability to contribute to effective decisions in government and business activities that are sensitive to intraseasonal and seasonal climate variability.

  20. Why preferring parametric forecasting to nonparametric methods?

    PubMed

    Jabot, Franck

    2015-05-07

    A recent series of papers by Charles T. Perretti and collaborators has shown that nonparametric forecasting methods can outperform parametric methods in noisy nonlinear systems. Such a situation can arise for two main reasons: the instability of parametric inference procedures in chaotic systems, which can lead to biased parameter estimates, and the discrepancy between the real system dynamics and the modeled one, a problem that Perretti and collaborators call "the true model myth". Should ecologists go on using the demanding parametric machinery when trying to forecast the dynamics of complex ecosystems? Or should they rely on the elegant nonparametric approach that appears so promising? It is argued here that ecological forecasting based on parametric models presents two key comparative advantages over nonparametric approaches. First, the likelihood of parametric forecasting failure can be diagnosed with simple Bayesian model-checking procedures. Second, when parametric forecasting is diagnosed to be reliable, forecasting uncertainty can be estimated on virtual data generated with the parametric model fitted to the data. In contrast, nonparametric techniques provide forecasts of unknown reliability. This argument is illustrated with the simple theta-logistic model that was previously used by Perretti and collaborators to make their point. It should convince ecologists to stick to standard parametric approaches until methods have been developed to assess the reliability of nonparametric forecasting. Copyright © 2015 Elsevier Ltd. All rights reserved.
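
    The parametric route advocated here, simulating virtual data from the fitted model to quantify forecast uncertainty, can be sketched with the theta-logistic model discussed in the article. The parameter values below are assumed for illustration, not taken from the paper, and the fitting step is taken as already done:

```python
import numpy as np

def theta_logistic(n0, r, K, theta, sigma, steps, rng):
    """Theta-logistic dynamics with lognormal process noise:
    N_{t+1} = N_t * exp(r * (1 - (N_t / K)**theta) + eps)."""
    n = [float(n0)]
    for _ in range(steps):
        eps = rng.normal(0.0, sigma)
        n.append(n[-1] * np.exp(r * (1.0 - (n[-1] / K) ** theta) + eps))
    return np.array(n)

# Forecast uncertainty read off the quantiles of many virtual
# trajectories simulated from the (assumed already fitted) model
rng = np.random.default_rng(0)
sims = np.array([theta_logistic(50.0, 0.5, 100.0, 1.0, 0.1, 10, rng)[-1]
                 for _ in range(2000)])
lo, hi = np.quantile(sims, [0.05, 0.95])   # 90% forecast interval
```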

  1. Application and evaluation of forecasting methods for municipal solid waste generation in an Eastern-European city.

    PubMed

    Rimaityte, Ingrida; Ruzgas, Tomas; Denafas, Gintaras; Racys, Viktoras; Martuzevicius, Dainius

    2012-01-01

    Forecasting the generation of municipal solid waste (MSW) in developing countries is often a challenging task because of the lack of data and the difficulty of selecting a suitable forecasting method. This article aimed to select and evaluate several methods for MSW forecasting in a medium-sized Eastern European city (Kaunas, Lithuania) with a rapidly developing economy, with respect to affluence-related and seasonal impacts. MSW generation was forecast with respect to the economic activity of the city (regression modelling) and using time series analysis. The modelling based on socio-economic indicators (regression implemented in the LCA-IWM model) showed particular sensitivity (deviation from actual data in the range from 2.2 to 20.6%) to external factors, such as the synergetic effects of affluence parameters or changes in the MSW collection system. For the time series analysis, the combination of autoregressive integrated moving average (ARIMA) and seasonal exponential smoothing (SES) techniques was found to be the most accurate (mean absolute percentage error of 6.5%). The time series analysis method was very valuable for forecasting the weekly variation of waste generation data (r² > 0.87), but the forecast yearly increase should be verified against the data obtained by regression modelling. The methods and findings of this study may assist experts, decision-makers and scientists performing forecasts of MSW generation, especially in developing countries.
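
    The accuracy measure used above, the mean absolute percentage error (MAPE), and the related mean absolute error have standard definitions; a minimal sketch with hypothetical waste amounts:

```python
import numpy as np

def mae(actual, forecast):
    """Mean absolute error, in the units of the series."""
    a, f = np.asarray(actual, float), np.asarray(forecast, float)
    return float(np.mean(np.abs(a - f)))

def mape(actual, forecast):
    """Mean absolute percentage error, in percent.
    Actual values must be nonzero."""
    a, f = np.asarray(actual, float), np.asarray(forecast, float)
    return float(np.mean(np.abs((a - f) / a)) * 100.0)

# Hypothetical weekly waste amounts (tonnes) vs. a forecast
actual = [100.0, 200.0, 150.0, 120.0]
forecast = [110.0, 180.0, 150.0, 126.0]
```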

  2. Low Streamflow Forecasting using Minimum Relative Entropy

    NASA Astrophysics Data System (ADS)

    Cui, H.; Singh, V. P.

    2013-12-01

    Minimum relative entropy spectral analysis is derived in this study and applied to forecast streamflow time series. The proposed method extends the autocorrelation function in such a way that the relative entropy of the underlying process is minimized, so that the time series can be forecast. Different prior assumptions, such as uniform, exponential and Gaussian, are used to estimate the spectral density, depending on the autocorrelation structure. Seasonal and nonseasonal low streamflow series obtained from the Colorado River (Texas) under drought conditions are successfully forecast using the proposed method. Minimum relative entropy determines the spectrum of low streamflow series with higher resolution than conventional methods. The forecast streamflow is compared with predictions using Burg's maximum entropy spectral analysis (MESA) and configurational entropy, and the advantages and disadvantages of each method in forecasting low streamflow are discussed.

  3. Short Term Load Forecasting with Fuzzy Logic Systems for power system planning and reliability-A Review

    NASA Astrophysics Data System (ADS)

    Holmukhe, R. M.; Dhumale, Mrs. Sunita; Chaudhari, Mr. P. S.; Kulkarni, Mr. P. P.

    2010-10-01

    Load forecasting is essential to the operation of electricity companies. It enhances the energy-efficient and reliable operation of a power system. Forecasting of load demand data forms an important component in planning generation schedules in a power system. The purpose of this paper is to identify issues and better methods for load forecasting. In this paper we focus on fuzzy-logic-based short-term load forecasting. It serves as an overview of the state of the art in the intelligent techniques employed for load forecasting in power system planning and reliability. A literature review has been conducted, and the fuzzy logic method has been summarized to highlight the advantages and disadvantages of this technique. The proposed technique for implementing fuzzy-logic-based forecasting is to identify the specific day, use the maximum and minimum temperature for that day, and finally list the maximum temperature and peak load for that day. The results show that load forecasting under considerable changes in the temperature parameter is better handled by the fuzzy logic method than by other short-term forecasting techniques.
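
    The core fuzzy-logic ingredient, mapping a crisp temperature into fuzzy set memberships, is typically done with triangular membership functions; the set boundaries below are hypothetical, not from the review:

```python
def tri_membership(x, a, b, c):
    """Triangular fuzzy membership: rises from 0 at a to 1 at b,
    then falls back to 0 at c."""
    if x <= a or x >= c:
        return 0.0
    if x <= b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)

# Hypothetical temperature sets (degrees C): 'warm' and 'hot'
temp = 28.0
mu_warm = tri_membership(temp, 20.0, 27.5, 35.0)
mu_hot = tri_membership(temp, 30.0, 40.0, 50.0)
```

    Fuzzy rules such as "if the day's maximum temperature is hot, then peak load is high" then fire in proportion to these memberships, and the rule outputs are defuzzified into a load forecast.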

  4. Development and Verification of the Charring Ablating Thermal Protection Implicit System Solver

    NASA Technical Reports Server (NTRS)

    Amar, Adam J.; Calvert, Nathan D.; Kirk, Benjamin S.

    2010-01-01

    The development and verification of the Charring Ablating Thermal Protection Implicit System Solver is presented. This work concentrates on the derivation and verification of the stationary grid terms in the equations that govern three-dimensional heat and mass transfer for charring thermal protection systems including pyrolysis gas flow through the porous char layer. The governing equations are discretized according to the Galerkin finite element method with first and second order implicit time integrators. The governing equations are fully coupled and are solved in parallel via Newton's method, while the fully implicit linear system is solved with the Generalized Minimal Residual method. Verification results from exact solutions and the Method of Manufactured Solutions are presented to show spatial and temporal orders of accuracy as well as nonlinear convergence rates.

  5. Development and Verification of the Charring, Ablating Thermal Protection Implicit System Simulator

    NASA Technical Reports Server (NTRS)

    Amar, Adam J.; Calvert, Nathan; Kirk, Benjamin S.

    2011-01-01

    The development and verification of the Charring Ablating Thermal Protection Implicit System Solver (CATPISS) is presented. This work concentrates on the derivation and verification of the stationary grid terms in the equations that govern three-dimensional heat and mass transfer for charring thermal protection systems, including pyrolysis gas flow through the porous char layer. The governing equations are discretized according to the Galerkin finite element method (FEM) with first and second order fully implicit time integrators. The governing equations are fully coupled and are solved in parallel via Newton's method, while the linear system is solved via the Generalized Minimal Residual method (GMRES). Verification results from exact solutions and the Method of Manufactured Solutions (MMS) are presented to show spatial and temporal orders of accuracy as well as nonlinear convergence rates.

  6. Real-Time Verification of a High-Dose-Rate Iridium 192 Source Position Using a Modified C-Arm Fluoroscope

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nose, Takayuki, E-mail: nose-takayuki@nms.ac.jp; Chatani, Masashi; Otani, Yuki

    Purpose: High-dose-rate (HDR) brachytherapy misdeliveries can occur at any institution, and they can cause disastrous results; even a patient's death has been reported. Misdeliveries could be avoided with real-time verification methods. In 1996, we developed a modified C-arm fluoroscopic verification of the HDR Iridium 192 source position to prevent these misdeliveries. This method provided image quality sufficient to detect errors, and it has been in clinical use at our institutions for 20 years. The purpose of the current study is to introduce the mechanisms and validity of our straightforward C-arm fluoroscopic verification method. Methods and Materials: Conventional X-ray fluoroscopic images are degraded by spurious signals and quantum noise from Iridium 192 photons, which make source verification impractical. To improve image quality, we quadrupled the C-arm fluoroscopic X-ray dose per pulse. The pulse rate was reduced by a factor of 4 to keep the average exposure compliant with Japanese medical regulations. The images were then displayed at quarter frame rates. Results: Sufficient quality was obtained to enable observation of the source position relative to both the applicators and the anatomy. With this method, 2 errors were detected among 2031 treatment sessions for 370 patients within a 6-year period. Conclusions: With the use of a modified C-arm fluoroscopic verification method, treatment errors that would otherwise have been overlooked were detected in real time. This method should be given consideration for widespread use.

  7. Nonmechanistic forecasts of seasonal influenza with iterative one-week-ahead distributions.

    PubMed

    Brooks, Logan C; Farrow, David C; Hyun, Sangwon; Tibshirani, Ryan J; Rosenfeld, Roni

    2018-06-15

    Accurate and reliable forecasts of seasonal epidemics of infectious disease can assist in the design of countermeasures and increase public awareness and preparedness. This article describes two main contributions we made recently toward this goal: a novel approach to probabilistic modeling of surveillance time series based on "delta densities", and an optimization scheme for combining output from multiple forecasting methods into an adaptively weighted ensemble. Delta densities describe the probability distribution of the change between one observation and the next, conditioned on available data; chaining together nonparametric estimates of these distributions yields a model for an entire trajectory. Corresponding distributional forecasts cover more observed events than alternatives that treat the whole season as a unit, and improve upon multiple evaluation metrics when extracting key targets of interest to public health officials. Adaptively weighted ensembles integrate the results of multiple forecasting methods, such as delta density, using weights that can change from situation to situation. We treat selection of optimal weightings across forecasting methods as a separate estimation task, and describe an estimation procedure based on optimizing cross-validation performance. We consider some details of the data generation process, including data revisions and holiday effects, both in the construction of these forecasting methods and when performing retrospective evaluation. The delta density method and an adaptively weighted ensemble of other forecasting methods each improve significantly on the next best ensemble component when applied separately, and achieve even better cross-validated performance when used in conjunction. We submitted real-time forecasts based on these contributions as part of CDC's 2015/2016 FluSight Collaborative Comparison. Among the fourteen submissions that season, this system was ranked by CDC as the most accurate.
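
    One standard way to estimate adaptive mixture weights of the kind described, given each component forecaster's predictive density evaluated at the observed outcomes, is an EM-style update; this is a generic sketch with hypothetical densities, not the authors' cross-validation-based optimization scheme:

```python
import numpy as np

def em_ensemble_weights(dens, iters=200):
    """EM updates for the mixture weights of a forecast ensemble.
    dens[k, t] is the predictive density that component k assigned to
    the outcome actually observed at time t."""
    K, T = dens.shape
    w = np.full(K, 1.0 / K)
    for _ in range(iters):
        r = w[:, None] * dens
        r /= r.sum(axis=0, keepdims=True)   # responsibilities per time step
        w = r.mean(axis=1)                  # re-estimated weights
    return w

# Hypothetical densities: component 0 fits the observations better
dens = np.array([[0.40, 0.50, 0.45],
                 [0.10, 0.20, 0.15]])
w = em_ensemble_weights(dens)   # weight concentrates on component 0
```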

  8. Performance of time-series methods in forecasting the demand for red blood cell transfusion.

    PubMed

    Pereira, Arturo

    2004-05-01

    Planning future blood collection efforts must be based on adequate forecasts of transfusion demand. In this study, univariate time-series methods were investigated for their performance in forecasting the monthly demand for RBCs at one tertiary-care, university hospital. Three time-series methods were investigated: autoregressive integrated moving average (ARIMA), the Holt-Winters family of exponential smoothing models, and one neural-network-based method. The time series consisted of the monthly demand for RBCs from January 1988 to December 2002 and was divided into two segments: the older one was used to fit or train the models, and the more recent one to test the accuracy of predictions. Performance was compared across forecasting methods by calculating goodness-of-fit statistics, the percentage of months in which forecast-based supply would have met the RBC demand (coverage rate), and the outdate rate. The RBC transfusion series was best fitted by a seasonal ARIMA(0,1,1)(0,1,1)12 model. Over 1-year time horizons, forecasts generated by ARIMA or exponential smoothing lay within the +/- 10 percent interval of the real RBC demand in 79 percent of months (62% in the case of neural networks). The coverage rates for the three methods were 89, 91, and 86 percent, respectively. Over 2-year time horizons, exponential smoothing largely outperformed the other methods. Predictions by exponential smoothing lay within the +/- 10 percent interval of real values in 75 percent of the 24 forecasted months, and the coverage rate was 87 percent. Over 1-year time horizons, predictions of RBC demand generated by ARIMA or exponential smoothing are accurate enough to be of help in planning blood collection efforts. For longer time horizons, exponential smoothing outperforms the other forecasting methods.
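
    The Holt-Winters family evaluated in the study can be sketched in its additive-seasonal form. The smoothing constants below are illustrative; in practice they are optimized on the fitting segment, and the test series is synthetic rather than transfusion data:

```python
import numpy as np

def holt_winters_additive(y, m, alpha=0.3, beta=0.05, gamma=0.2, horizon=None):
    """Additive-seasonal Holt-Winters smoothing with cycle length m.
    Forecasts up to m steps ahead."""
    y = np.asarray(y, dtype=float)
    horizon = m if horizon is None else horizon
    level = y[:m].mean()
    trend = (y[m:2 * m].mean() - y[:m].mean()) / m
    season = list(y[:m] - level)
    for t in range(m, len(y)):
        s = season[t - m]
        new_level = alpha * (y[t] - s) + (1 - alpha) * (level + trend)
        trend = beta * (new_level - level) + (1 - beta) * trend
        season.append(gamma * (y[t] - new_level) + (1 - gamma) * s)
        level = new_level
    return np.array([level + (h + 1) * trend + season[len(y) - m + h]
                     for h in range(horizon)])

# Synthetic demand: linear trend plus a 4-period seasonal pattern
t = np.arange(40)
y = 20.0 + 0.5 * t + np.array([3.0, -1.0, -4.0, 2.0])[t % 4]
fc = holt_winters_additive(y, m=4)
```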

  9. Exploration of Uncertainty in Glacier Modelling

    NASA Technical Reports Server (NTRS)

    Thompson, David E.

    1999-01-01

    There are procedures and methods for verification of coding algebra and for validations of models and calculations that are in use in the aerospace computational fluid dynamics (CFD) community. These methods would be efficacious if used by the glacier dynamics modelling community. This paper is a presentation of some of those methods, and how they might be applied to uncertainty management supporting code verification and model validation for glacier dynamics. The similarities and differences between their use in CFD analysis and the proposed application of these methods to glacier modelling are discussed. After establishing sources of uncertainty and methods for code verification, the paper looks at a representative sampling of verification and validation efforts that are underway in the glacier modelling community, and establishes a context for these within overall solution quality assessment. Finally, an information architecture and interactive interface is introduced and advocated. This Integrated Cryospheric Exploration (ICE) Environment is proposed for exploring and managing sources of uncertainty in glacier modelling codes and methods, and for supporting scientific numerical exploration and verification. The details and functionality of this Environment are described based on modifications of a system already developed for CFD modelling and analysis.

  10. Forecasting volcanic eruptions and other material failure phenomena: An evaluation of the failure forecast method

    NASA Astrophysics Data System (ADS)

    Bell, Andrew F.; Naylor, Mark; Heap, Michael J.; Main, Ian G.

    2011-08-01

    Power-law accelerations in the mean rate of strain, earthquakes and other precursors have been widely reported prior to material failure phenomena, including volcanic eruptions, landslides and laboratory deformation experiments, as predicted by several theoretical models. The Failure Forecast Method (FFM), which linearizes the power-law trend, has been routinely used to forecast the failure time in retrospective analyses; however, its performance has never been formally evaluated. Here we use synthetic and real data, recorded in laboratory brittle creep experiments and at volcanoes, to show that the assumptions of the FFM are inconsistent with the error structure of the data, leading to biased and imprecise forecasts. We show that a Generalized Linear Model method provides higher-quality forecasts that converge more accurately to the eventual failure time, accounting for the appropriate error distributions. This approach should be employed in place of the FFM to provide reliable quantitative forecasts and estimate their associated uncertainties.
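
    The linearization at the heart of the classic FFM: for the common exponent alpha = 2, the inverse precursor rate decays linearly and crosses zero at the failure time t_f, so a straight-line fit to 1/rate extrapolates to a forecast of t_f. A sketch on noise-free synthetic data; the paper's point is precisely that on real data, with its actual error structure, this least-squares step becomes biased and imprecise:

```python
import numpy as np

# Synthetic precursor rate with true failure time t_f = 100
t = np.linspace(0.0, 90.0, 50)
rate = 1.0 / (100.0 - t)            # rate ~ (t_f - t)**-1  (alpha = 2 case)
inv_rate = 1.0 / rate               # decays linearly toward zero at t_f
slope, intercept = np.polyfit(t, inv_rate, 1)
t_f_est = -intercept / slope        # zero crossing of the fitted line
```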

  11. Characterizing Time Series Data Diversity for Wind Forecasting: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hodge, Brian S; Chartan, Erol Kevin; Feng, Cong

    Wind forecasting plays an important role in integrating variable and uncertain wind power into the power grid. Various forecasting models have been developed to improve forecasting accuracy. However, it is challenging to compare the true forecasting performance of different methods and forecasters accurately because of the lack of diversity in forecasting test datasets. This paper proposes a time series characteristic analysis approach to visualize and quantify wind time series diversity. The developed method first calculates six time series characteristic indices from various perspectives. Then principal component analysis is performed to reduce the data dimension while preserving the important information. The diversity of the time series dataset is visualized by the geometric distribution of the newly constructed principal component space. The volume of the 3-dimensional (3D) convex polytope (or the length of the 1D number axis, or the area of the 2D convex polygon) is used to quantify the time series data diversity. The method is tested on five datasets with various degrees of diversity.
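
    The 2D variant of the diversity metric, projecting the characteristic indices onto the first two principal components and taking the area of the convex polygon they span, can be sketched as follows. The six characteristic indices themselves are not specified here, so the feature matrix is a random placeholder:

```python
import numpy as np

def pca_2d(X):
    """Project rows of a feature matrix onto the first two principal
    components (via SVD of the centered data)."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:2].T

def convex_hull(points):
    """Andrew's monotone chain; returns hull vertices counter-clockwise."""
    pts = sorted(map(tuple, points))
    def cross(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])
    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

def polygon_area(verts):
    """Shoelace formula; the hull area is the diversity score."""
    x = np.array([v[0] for v in verts])
    y = np.array([v[1] for v in verts])
    return 0.5 * abs(np.dot(x, np.roll(y, 1)) - np.dot(y, np.roll(x, 1)))

# Placeholder: one row per wind time series, one column per index
rng = np.random.default_rng(2)
X = rng.normal(size=(30, 6))
scores = pca_2d(X)
diversity = polygon_area(convex_hull(scores))
```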

  12. Forecasting Occurrences of Activities.

    PubMed

    Minor, Bryan; Cook, Diane J

    2017-07-01

    While activity recognition has been shown to be valuable for pervasive computing applications, less work has focused on techniques for forecasting the future occurrence of activities. We present an activity forecasting method to predict the time that will elapse until a target activity occurs. This method generates an activity forecast using a regression tree classifier and offers an advantage over sequence prediction methods in that it can predict expected time until an activity occurs. We evaluate this algorithm on real-world smart home datasets and provide evidence that our proposed approach is most effective at predicting activity timings.

  13. Forecasting Jakarta composite index (IHSG) based on chen fuzzy time series and firefly clustering algorithm

    NASA Astrophysics Data System (ADS)

    Ningrum, R. W.; Surarso, B.; Farikhin; Safarudin, Y. M.

    2018-03-01

    This paper proposes the combination of the Firefly Algorithm (FA) and Chen fuzzy time series forecasting. Most existing fuzzy forecasting methods based on fuzzy time series use a static interval length. Therefore, we apply an artificial intelligence technique, the Firefly Algorithm (FA), to set a non-stationary interval length for each cluster in Chen's method. The method is evaluated by application to the Jakarta Composite Index (IHSG) and compared with classical Chen fuzzy time series forecasting. Its performance is verified through simulation using Matlab.
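
    The classical Chen method that serves as the baseline uses static equal-length intervals; the paper's contribution is to replace that uniform partition with FA-tuned interval lengths. A minimal one-step-ahead sketch of the baseline on a toy series (not IHSG data):

```python
import numpy as np

def chen_forecast(series, n_intervals=7):
    """Classical Chen fuzzy time series, one-step-ahead: fuzzify onto
    equal-length intervals, collect fuzzy logical relationship groups,
    and defuzzify as the mean midpoint of the successor states."""
    lo, hi = min(series), max(series)
    edges = np.linspace(lo, hi, n_intervals + 1)
    mids = (edges[:-1] + edges[1:]) / 2.0
    def fuzzify(x):
        return min(int(np.searchsorted(edges, x, side='right')) - 1,
                   n_intervals - 1)
    states = [fuzzify(x) for x in series]
    flrg = {}                       # state -> set of observed successors
    for a, b in zip(states[:-1], states[1:]):
        flrg.setdefault(a, set()).add(b)
    successors = flrg.get(states[-1])
    if not successors:
        return float(mids[states[-1]])
    return float(np.mean([mids[j] for j in successors]))

series = [10.0, 20.0, 30.0, 20.0, 10.0, 20.0, 30.0]
fc = chen_forecast(series, n_intervals=2)
```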

  14. Verification of Triple Modular Redundancy (TMR) Insertion for Reliable and Trusted Systems

    NASA Technical Reports Server (NTRS)

    Berg, Melanie; LaBel, Kenneth A.

    2016-01-01

    We propose a method for TMR insertion verification that satisfies the process for reliable and trusted systems. If a system is expected to be protected using TMR, improper insertion can jeopardize the reliability and security of the system. Due to the complexity of the verification process, there are currently no available techniques that can provide complete and reliable confirmation of TMR insertion. This manuscript addresses the challenge of confirming that TMR has been inserted without corruption of functionality and with correct application of the expected TMR topology. The proposed verification method combines the usage of existing formal analysis tools with a novel search-detect-and-verify tool. Keywords: field programmable gate array (FPGA), triple modular redundancy (TMR), verification, trust, reliability.
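
    The logic whose insertion is being verified is the majority voter itself, which masks any single corrupted replica; a bitwise sketch of the voter (the paper's contribution is netlist-level verification of correct insertion, not the voter logic):

```python
def tmr_vote(a, b, c):
    """Bitwise 2-of-3 majority: each output bit agrees with at least two
    of the three replicas, so one corrupted replica is masked."""
    return (a & b) | (a & c) | (b & c)

# One replica corrupted by an upset; the voter still returns the true word
true_word = 0b1011
voted = tmr_vote(true_word, true_word, 0b0011)
```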

  15. Voltage verification unit

    DOEpatents

    Martin, Edward J [Virginia Beach, VA]

    2008-01-15

    A voltage verification unit and method for determining the absence of potentially dangerous potentials within a power supply enclosure without Mode 2 work is disclosed. With this device and method, a qualified worker follows a relatively simple protocol that involves a function test (hot, cold, hot) of the voltage verification unit before Lock Out/Tag Out and, once the Lock Out/Tag Out is completed, testing or "trying" by simply reading a display on the voltage verification unit, without exposure of the operator to the interior of the voltage supply enclosure. According to a preferred embodiment, the voltage verification unit includes test leads to allow diagnostics with other meters, without the necessity of accessing potentially dangerous bus bars or the like.

  16. The Research of Regression Method for Forecasting Monthly Electricity Sales Considering Coupled Multi-factor

    NASA Astrophysics Data System (ADS)

    Wang, Jiangbo; Liu, Junhui; Li, Tiantian; Yin, Shuo; He, Xinhui

    2018-01-01

    Monthly electricity sales forecasting is basic work for ensuring the safe operation of the power system. This paper presents a monthly electricity sales forecasting method that comprehensively considers the coupled multiple factors of temperature, economic growth, electric power replacement and business expansion. The mathematical model is constructed using a regression method. The simulation results show that the proposed method is accurate and effective.
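
    A multi-factor regression of the kind described can be sketched with ordinary least squares; the factor series below are synthetic placeholders, since the paper's data and exact factor definitions are not given here:

```python
import numpy as np

# Synthetic monthly drivers: temperature, economic index, business expansion
rng = np.random.default_rng(1)
n = 36
temp = rng.uniform(0.0, 35.0, n)
econ = np.linspace(100.0, 130.0, n)
expansion = rng.uniform(0.0, 5.0, n)
# Synthetic sales with known coefficients plus noise
sales = 50.0 + 1.2 * temp + 0.8 * econ + 2.0 * expansion + rng.normal(0.0, 0.5, n)

X = np.column_stack([np.ones(n), temp, econ, expansion])   # design matrix
coef, *_ = np.linalg.lstsq(X, sales, rcond=None)           # fitted coefficients
fitted = X @ coef                                          # in-sample fit
```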

  17. Near real time wind energy forecasting incorporating wind tunnel modeling

    NASA Astrophysics Data System (ADS)

    Lubitz, William David

    A series of experiments and investigations were carried out to inform the development of a day-ahead wind power forecasting system. An experimental near-real time wind power forecasting system was designed and constructed that operates on a desktop PC and forecasts 12--48 hours in advance. The system uses model output of the Eta regional scale forecast (RSF) to forecast the power production of a wind farm in the Altamont Pass, California, USA from 12 to 48 hours in advance. It is of modular construction and designed to also allow diagnostic forecasting using archived RSF data, thereby allowing different methods of completing each forecasting step to be tested and compared using the same input data. Wind-tunnel investigations of the effect of wind direction and hill geometry on wind speed-up above a hill were conducted. Field data from an Altamont Pass, California site was used to evaluate several speed-up prediction algorithms, both with and without wind direction adjustment. These algorithms were found to be of limited usefulness for the complex terrain case evaluated. Wind-tunnel and numerical simulation-based methods were developed for determining a wind farm power curve (the relation between meteorological conditions at a point in the wind farm and the power production of the wind farm). Both methods, as well as two methods based on fits to historical data, ultimately showed similar levels of accuracy: mean absolute errors predicting power production of 5 to 7 percent of the wind farm power capacity. The downscaling of RSF forecast data to the wind farm was found to be complicated by the presence of complex terrain. Poor results using the geostrophic drag law and regression methods motivated the development of a database search method that is capable of forecasting not only wind speeds but also power production with accuracy better than persistence.

  18. Translating Forest Change to Carbon Emissions and Removals By Linking Disturbance Products, Biomass Maps, and Carbon Cycle Modeling in a Comprehensive Carbon Monitoring Framework for the Conterminous US Forests

    NASA Astrophysics Data System (ADS)

    Williams, C. A.; Gu, H.

    2016-12-01

    Protecting forest carbon stores and uptake is central to national and international policies aimed at mitigating climate change. The success of such policies relies on high quality, accurate reporting (Tier 3) that earns the greatest financial value of carbon credits and hence incentivizes forest conservation and protection. Methods for Tier 3 Measuring, Reporting, and Verification (MRV) are still in development, generally involving some combination of direct remote sensing, ground-based inventorying, and computer modeling, but have tended to emphasize assessments of live aboveground carbon stocks with a less clear connection to the real target of MRV, which is carbon emissions and removals. Most existing methods are also ambiguous as to the mechanisms that underlie carbon accumulation, and many have limited capacity for forecasting carbon dynamics over time. This paper reports on the design and implementation of a new method for Tier 3 MRV, decision support, and forecasting that is being applied to assess forest carbon dynamics across the conterminous US. The method involves parameterization of a carbon cycle model (CASA) to match yield data from the US forest inventory (FIA). A range of disturbance types and severities are imposed in the model to estimate resulting carbon emissions, carbon uptake, and carbon stock changes post-disturbance. Resulting trajectories are then applied to landscapes at the 30-m pixel level based on two remote-sensing-based data products. One documents the year, type, and severity of disturbance in recent decades. The second documents aboveground biomass, which is used to estimate time since disturbance and associated carbon fluxes and stocks. Results will highlight high-resolution (30 m) annual carbon stocks and fluxes from 1990 to 2010 for select regions of interest across the US. Spatial analyses reveal regional patterns in US forest carbon stocks and fluxes as they respond to forest types, climate, and disturbances. Temporal analyses document effects of recent disturbance trends and demonstrate the method's capacity for quantifying changes in forest carbon over time as needed for UNFCCC reporting.

  19. Verification of chemistry reference ranges using a simple method in sub-Saharan Africa

    PubMed Central

    Taylor, Douglas; Mandala, Justin; Nanda, Kavita; Van Campenhout, Christel; Agingu, Walter; Madurai, Lorna; Barsch, Eva-Maria; Deese, Jennifer; Van Damme, Lut; Crucitti, Tania

    2016-01-01

    Background Chemistry safety assessments are interpreted by using chemistry reference ranges (CRRs). Verification of CRRs is time consuming and often requires a statistical background. Objectives We report on an easy and cost-saving method to verify CRRs. Methods Using a former method introduced by Sigma Diagnostics, three study sites in sub-Saharan Africa, Bondo, Kenya, and Pretoria and Bloemfontein, South Africa, verified the CRRs for hepatic and renal biochemistry assays performed during a clinical trial of HIV antiretroviral pre-exposure prophylaxis. The aspartate aminotransferase/alanine aminotransferase, creatinine and phosphorus results from 10 clinically-healthy participants at the screening visit were used. In the event the CRRs did not pass the verification, new CRRs had to be calculated based on 40 clinically-healthy participants. Results Within a few weeks, the study sites accomplished verification of the CRRs without additional costs. The aspartate aminotransferase reference ranges for the Bondo, Kenya site and the alanine aminotransferase reference ranges for the Pretoria, South Africa site required adjustment. The phosphorus CRR passed verification and the creatinine CRR required adjustment at every site. The newly-established CRR intervals were narrower than the CRRs used previously at these study sites due to decreases in the upper limits of the reference ranges. As a result, more toxicities were detected. Conclusion To ensure the safety of clinical trial participants, verification of CRRs should be standard practice in clinical trials conducted in settings where the CRR has not been validated for the local population. This verification method is simple, inexpensive, and can be performed by any medical laboratory. PMID:28879112
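The verification check described above reduces to a simple count. The abstract does not give the exact acceptance criterion, so the "at most 1 of 10 results outside the range" threshold below is an assumption, as are the analyte values and limits:

```python
# Hedged sketch of a CRR verification check of the kind described. The exact
# acceptance criterion used by the sites is not given in the abstract, so the
# "at most 1 of 10 results outside the range" threshold is an assumption, as
# are the analyte values and limits below.

def verify_crr(results, lower, upper, max_outside=1):
    """True if the reference range passes verification on healthy-subject data."""
    outside = sum(1 for x in results if not (lower <= x <= upper))
    return outside <= max_outside

# Hypothetical ALT results (U/L) from 10 clinically healthy participants:
alt = [12, 18, 25, 30, 22, 41, 15, 28, 33, 19]
passes = verify_crr(alt, lower=7, upper=35)   # only 41 falls outside -> passes
# Per the procedure above, a failed verification would trigger recalculation
# of the CRR from 40 clinically healthy participants.
```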

  20. Formal methods for dependable real-time systems

    NASA Technical Reports Server (NTRS)

    Rushby, John

    1993-01-01

    The motivation for using formal methods to specify and reason about real-time properties is outlined, and approaches that have been proposed and used are sketched. The formal verifications of clock synchronization algorithms are taken to show that mechanically supported reasoning about complex real-time behavior is feasible. However, there has been a significant increase in the effectiveness of verification systems since those verifications were performed, and it is to be expected that verifications of comparable difficulty will become fairly routine. The current challenge lies in developing perspicuous and economical approaches to the formalization and specification of real-time properties.

  1. Cross-correlation redshift calibration without spectroscopic calibration samples in DES Science Verification Data

    NASA Astrophysics Data System (ADS)

    Davis, C.; Rozo, E.; Roodman, A.; Alarcon, A.; Cawthon, R.; Gatti, M.; Lin, H.; Miquel, R.; Rykoff, E. S.; Troxel, M. A.; Vielzeuf, P.; Abbott, T. M. C.; Abdalla, F. B.; Allam, S.; Annis, J.; Bechtol, K.; Benoit-Lévy, A.; Bertin, E.; Brooks, D.; Buckley-Geer, E.; Burke, D. L.; Carnero Rosell, A.; Carrasco Kind, M.; Carretero, J.; Castander, F. J.; Crocce, M.; Cunha, C. E.; D'Andrea, C. B.; da Costa, L. N.; Desai, S.; Diehl, H. T.; Doel, P.; Drlica-Wagner, A.; Fausti Neto, A.; Flaugher, B.; Fosalba, P.; Frieman, J.; García-Bellido, J.; Gaztanaga, E.; Gerdes, D. W.; Giannantonio, T.; Gruen, D.; Gruendl, R. A.; Gutierrez, G.; Honscheid, K.; Jain, B.; James, D. J.; Jeltema, T.; Krause, E.; Kuehn, K.; Kuhlmann, S.; Kuropatkin, N.; Lahav, O.; Li, T. S.; Lima, M.; March, M.; Marshall, J. L.; Martini, P.; Melchior, P.; Ogando, R. L. C.; Plazas, A. A.; Romer, A. K.; Sanchez, E.; Scarpine, V.; Schindler, R.; Schubnell, M.; Sevilla-Noarbe, I.; Smith, M.; Soares-Santos, M.; Sobreira, F.; Suchyta, E.; Swanson, M. E. C.; Tarle, G.; Thomas, D.; Vikram, V.; Walker, A. R.; Wechsler, R. H.

    2018-06-01

    Galaxy cross-correlations with high-fidelity redshift samples hold the potential to precisely calibrate systematic photometric redshift uncertainties arising from the unavailability of complete and representative training and validation samples of galaxies. However, application of this technique in the Dark Energy Survey (DES) is hampered by the relatively low number density, small area, and modest redshift overlap between photometric and spectroscopic samples. We propose instead using photometric catalogues with reliable photometric redshifts for photo-z calibration via cross-correlations. We verify the viability of our proposal using redMaPPer clusters from the Sloan Digital Sky Survey (SDSS) to successfully recover the redshift distribution of SDSS spectroscopic galaxies. We demonstrate how to combine photo-z with cross-correlation data to calibrate photometric redshift biases while marginalizing over possible clustering bias evolution in either the calibration or unknown photometric samples. We apply our method to DES Science Verification (DES SV) data in order to constrain the photometric redshift distribution of a galaxy sample selected for weak lensing studies, constraining the mean of the tomographic redshift distributions to a statistical uncertainty of Δz ˜ ±0.01. We forecast that our proposal can, in principle, control photometric redshift uncertainties in DES weak lensing experiments at a level near the intrinsic statistical noise of the experiment over the range of redshifts where redMaPPer clusters are available. Our results provide strong motivation to launch a programme to fully characterize the systematic errors from bias evolution and photo-z shapes in our calibration procedure.
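The core of the clustering-redshift technique can be sketched in toy form. This is not the DES pipeline; it only illustrates that the cross-correlation amplitude with reference samples binned in redshift scales as b_u * b_ref * n_u(z), so a known reference bias lets one recover the unknown sample's normalized redshift distribution. A constant unknown-sample bias is assumed here, which is precisely why the paper marginalizes over bias evolution:

```python
# Toy sketch of the clustering-redshift idea, not the DES pipeline: the
# cross-correlation amplitude of the unknown sample with a reference sample
# binned in redshift scales as b_u * b_ref * n_u(z). Dividing out the known
# reference bias and normalizing recovers n_u(z), assuming (as here) a
# constant unknown-sample bias; real analyses marginalize over its evolution.

def recover_nz(w_cross, b_ref):
    """Normalized redshift distribution of the unknown sample."""
    raw = [w / b for w, b in zip(w_cross, b_ref)]
    total = sum(raw)
    return [r / total for r in raw]

# Hypothetical true n(z), reference biases per bin, and unknown-sample bias:
true_nz = [0.1, 0.3, 0.4, 0.2]
b_ref = [1.5, 1.6, 1.8, 2.0]
b_unknown = 1.2
w_cross = [b_unknown * b * n for b, n in zip(b_ref, true_nz)]

est_nz = recover_nz(w_cross, b_ref)   # matches true_nz up to normalization
```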

  2. Cross-correlation redshift calibration without spectroscopic calibration samples in DES Science Verification Data

    DOE PAGES

    Davis, C.; Rozo, E.; Roodman, A.; ...

    2018-03-26

    Galaxy cross-correlations with high-fidelity redshift samples hold the potential to precisely calibrate systematic photometric redshift uncertainties arising from the unavailability of complete and representative training and validation samples of galaxies. However, application of this technique in the Dark Energy Survey (DES) is hampered by the relatively low number density, small area, and modest redshift overlap between photometric and spectroscopic samples. We propose instead using photometric catalogs with reliable photometric redshifts for photo-z calibration via cross-correlations. We verify the viability of our proposal using redMaPPer clusters from the Sloan Digital Sky Survey (SDSS) to successfully recover the redshift distribution of SDSS spectroscopic galaxies. We demonstrate how to combine photo-z with cross-correlation data to calibrate photometric redshift biases while marginalizing over possible clustering bias evolution in either the calibration or unknown photometric samples. We apply our method to DES Science Verification (DES SV) data in order to constrain the photometric redshift distribution of a galaxy sample selected for weak lensing studies, constraining the mean of the tomographic redshift distributions to a statistical uncertainty of Δz ˜ ±0.01. We forecast that our proposal can in principle control photometric redshift uncertainties in DES weak lensing experiments at a level near the intrinsic statistical noise of the experiment over the range of redshifts where redMaPPer clusters are available. Here, our results provide strong motivation to launch a program to fully characterize the systematic errors from bias evolution and photo-z shapes in our calibration procedure.

  3. Cross-correlation redshift calibration without spectroscopic calibration samples in DES Science Verification Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Davis, C.; Rozo, E.; Roodman, A.

    Galaxy cross-correlations with high-fidelity redshift samples hold the potential to precisely calibrate systematic photometric redshift uncertainties arising from the unavailability of complete and representative training and validation samples of galaxies. However, application of this technique in the Dark Energy Survey (DES) is hampered by the relatively low number density, small area, and modest redshift overlap between photometric and spectroscopic samples. We propose instead using photometric catalogs with reliable photometric redshifts for photo-z calibration via cross-correlations. We verify the viability of our proposal using redMaPPer clusters from the Sloan Digital Sky Survey (SDSS) to successfully recover the redshift distribution of SDSS spectroscopic galaxies. We demonstrate how to combine photo-z with cross-correlation data to calibrate photometric redshift biases while marginalizing over possible clustering bias evolution in either the calibration or unknown photometric samples. We apply our method to DES Science Verification (DES SV) data in order to constrain the photometric redshift distribution of a galaxy sample selected for weak lensing studies, constraining the mean of the tomographic redshift distributions to a statistical uncertainty of Δz ˜ ±0.01. We forecast that our proposal can in principle control photometric redshift uncertainties in DES weak lensing experiments at a level near the intrinsic statistical noise of the experiment over the range of redshifts where redMaPPer clusters are available. Here, our results provide strong motivation to launch a program to fully characterize the systematic errors from bias evolution and photo-z shapes in our calibration procedure.

  4. Using EFSO/PQC to Improve the Quality of Observations and Analyses

    NASA Astrophysics Data System (ADS)

    Chen, T. C.; Kalnay, E.

    2017-12-01

    Massive amounts of observations are assimilated every day into modern Numerical Weather Prediction (NWP) systems. This makes it difficult to estimate the impact of a new observing system using the current approach (Observing System Experiments, OSEs) because there is already so much information provided by existing observations. In addition, the large volume of data also prevents monitoring the impact of each assimilated observation with OSEs. We demonstrate here how using Ensemble Forecast Sensitivity to Observations (EFSO) allows monitoring and improving the impact of observations on the analyses and forecasts in the Hybrid GSI/LETKF system. In addition, we show how EFSO can identify flow-dependent detrimental observations from observing systems that, on average, are beneficial. For example, using EFSO we find that positive zonal wind innovations in MODIS polar winds are generally detrimental, whereas no such bias is present in other satellite wind systems. We also show how EFSO can be used to identify and reject very detrimental observations (Proactive Quality Control, PQC). The withdrawal of these detrimental observations leads to improved analyses and 5-day forecasts, which also serves as a verification of EFSO. The operational implementation of EFSO/PQC is computationally very feasible, and will provide detailed QC monitoring of every observing system. Finally, we provide a theoretical justification of PQC and its connection to dynamical instabilities with a simple Lorenz 96 model.
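Schematically, the PQC step described above reduces to thresholding per-observation impact estimates. A minimal sketch, not the actual GSI/LETKF implementation, assuming the sign convention that negative impact means the observation reduced forecast error:

```python
# Minimal sketch of the Proactive QC step (not the actual GSI/LETKF code):
# given per-observation EFSO impact estimates, withdraw the detrimental ones.
# Sign convention assumed here: negative impact = the observation reduced
# forecast error (beneficial); positive = it increased error (detrimental).

def proactive_qc(impacts, threshold=0.0):
    """Indices of observations whose estimated impact exceeds the threshold."""
    return [i for i, dj in enumerate(impacts) if dj > threshold]

# Hypothetical EFSO impacts for six observations:
impacts = [-0.8, -0.1, 0.5, -0.3, 1.2, -0.05]
rejected = proactive_qc(impacts)   # observations 2 and 4 are withdrawn
# The analysis and forecast would then be rerun without these observations.
```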

  5. Forecasting space weather: Can new econometric methods improve accuracy?

    NASA Astrophysics Data System (ADS)

    Reikard, Gordon

    2011-06-01

    Space weather forecasts are currently used in areas ranging from navigation and communication to electric power system operations. The relevant forecast horizons can range from as little as 24 h to several days. This paper analyzes the predictability of two major space weather measures using new time series methods, many of them derived from econometrics. The data sets are the Ap geomagnetic index and the solar radio flux at 10.7 cm. The methods tested include nonlinear regressions, neural networks, frequency domain algorithms, GARCH models (which utilize the residual variance), state transition models, and models that combine elements of several techniques. While combined models are complex, they can be programmed using modern statistical software. The data frequency is daily, and forecasting experiments are run over horizons ranging from 1 to 7 days. Two major conclusions stand out. First, the frequency domain method forecasts the Ap index more accurately than any time domain model, including both regressions and neural networks. This finding is very robust, and holds for all forecast horizons. Combining the frequency domain method with other techniques yields a further small improvement in accuracy. Second, the neural network forecasts the solar flux more accurately than any other method, although at short horizons (2 days or less) the regression and net yield similar results. The neural net does best when it includes measures of the long-term component in the data.
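A hedged sketch of the simplest kind of time-domain baseline the paper compares against: an AR(1) model fitted by least squares and iterated h steps ahead. The paper's actual models (GARCH, frequency-domain, neural nets) are far richer, and the index values below are hypothetical:

```python
# Hedged sketch of a simple time-domain baseline of the sort the paper
# compares against: an AR(1) model fitted by least squares and iterated h
# steps ahead. The paper's actual models (GARCH, frequency-domain, neural
# nets) are far richer; the index values below are hypothetical.

def fit_ar1(series):
    """Least-squares AR(1): x_t - m = phi * (x_{t-1} - m) + noise."""
    m = sum(series) / len(series)
    dev = [x - m for x in series]
    num = sum(dev[t] * dev[t - 1] for t in range(1, len(dev)))
    den = sum(d * d for d in dev[:-1])
    return m, num / den

def forecast_ar1(series, horizon):
    """h-step forecast: the deviation from the mean decays as phi**h."""
    m, phi = fit_ar1(series)
    return m + (phi ** horizon) * (series[-1] - m)

ap = [12, 15, 22, 30, 25, 18, 14, 13, 17, 24]   # hypothetical daily index
one_day = forecast_ar1(ap, 1)
seven_day = forecast_ar1(ap, 7)   # closer to the mean than the 1-day forecast
```

The geometric decay of phi**h is one reason pure autoregressions lose skill at the longer horizons the paper studies.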

  6. System load forecasts for an electric utility. [Hourly loads using Box-Jenkins method]

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Uri, N.D.

    This paper discusses forecasting hourly system load for an electric utility using Box-Jenkins time-series analysis. The results indicate that a model based on the method of Box and Jenkins, given its simplicity, gives excellent results over the forecast horizon.
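A hedged sketch of the kind of Box-Jenkins model suited to hourly load: a 24-hour seasonal difference removes the daily cycle, and an AR(1) term damps the most recent deviation. A full Box-Jenkins workflow would identify and estimate the model from ACF/PACF diagnostics; here the coefficient is fixed rather than estimated, and the loads are hypothetical:

```python
# Hedged sketch of the kind of Box-Jenkins model suited to hourly load: a
# 24-hour seasonal difference removes the daily cycle, and an AR(1) term on
# the differenced series (coefficient phi, fixed here rather than estimated)
# damps the most recent deviation. A full Box-Jenkins workflow (ACF/PACF
# identification, MA terms, diagnostics) is omitted.

def forecast_next_hour(load, phi=0.5):
    """One-step forecast: yesterday's same hour plus a damped recent deviation."""
    diffed = [load[t] - load[t - 24] for t in range(24, len(load))]
    return load[-24] + phi * diffed[-1]

# Two days of hypothetical hourly loads (MW) with a flat daily shape:
day = [60.0 if 8 <= h <= 20 else 50.0 for h in range(24)]
load = day + [x + 2.0 for x in day]      # the second day runs 2 MW hotter
next_hour = forecast_next_hour(load)     # carries part of the +2 MW forward
```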

  7. An application of a multi model approach for solar energy prediction in Southern Italy

    NASA Astrophysics Data System (ADS)

    Avolio, Elenio; Lo Feudo, Teresa; Calidonna, Claudia Roberta; Contini, Daniele; Torcasio, Rosa Claudia; Tiriolo, Luca; Montesanti, Stefania; Transerici, Claudio; Federico, Stefano

    2015-04-01

    The accuracy of the short and medium range forecast of solar irradiance is very important for solar energy integration into the grid. This issue is particularly important for Southern Italy, where a significant availability of solar energy is associated with a poor development of the grid. In this work we analyse the performance of two deterministic models for the prediction of surface temperature and short-wavelength radiance for two sites in southern Italy. Both parameters are needed to forecast the power production from solar power plants, so the performance of the forecast for these meteorological parameters is of paramount importance. The models considered in this work are RAMS (Regional Atmospheric Modeling System) and WRF (Weather Research and Forecasting Model), and they were run for the summer of 2013 at 4 km horizontal resolution over Italy. The forecast lasts three days. Initial and dynamic boundary conditions are given by the 12 UTC deterministic forecast of the ECMWF-IFS (European Centre for Medium-Range Weather Forecasts - Integrated Forecasting System) model, and were available every 6 hours. Verification is given against two surface stations located in Southern Italy, Lamezia Terme and Lecce, and is based on hourly output of the models' forecasts. Results for the whole period for temperature show a positive bias for the RAMS model and a negative bias for the WRF model. RMSE is between 1 and 2 °C for both models. Results for the whole period for the short-wavelength radiance show a positive bias for both models (about 30 W/m2) and a RMSE of 100 W/m2. To reduce the model errors, a statistical post-processing technique, i.e. the multi-model, is adopted. In this approach the two models' outputs are weighted with an adequate set of weights computed for a training period. In general, the performance is improved by the application of the technique, and the RMSE is reduced by a sizeable fraction (i.e. larger than 10% of the initial RMSE) depending on the forecasting time and parameter. The performance of the multi-model is discussed as a function of the length of the training period and is compared with the performance of the MOS (Model Output Statistics) approach. ACKNOWLEDGMENTS This work is partially supported by projects PON04a2E Sinergreen-ResNovae - "Smart Energy Master for the energetic government of the territory" and PONa3_00363 "High Technology Infrastructure for Climate and Environment Monitoring" (I-AMICA) funded by the Italian Ministry of University and Research (MIUR) PON 2007-2013. The ECMWF and CNMCA (Centro Nazionale di Meteorologia e Climatologia Aeronautica) are acknowledged for the use of MARS (Meteorological Archive and Retrieval System).
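The weighted multi-model combination described above can be sketched with two members. Inverse-MSE weights computed over the training period are one simple choice; the abstract does not state the exact weighting scheme, so treat this as illustrative, with hypothetical numbers:

```python
# Hedged sketch of a two-member multi-model: each model's output is weighted,
# with weights computed over a training period. Inverse-MSE weights are one
# simple choice; the abstract does not state the exact weighting scheme, so
# treat this as illustrative. All numbers are hypothetical.

def inverse_mse_weights(pred_a, pred_b, obs):
    """Normalized weights proportional to each member's inverse training MSE."""
    mse_a = sum((p - o) ** 2 for p, o in zip(pred_a, obs)) / len(obs)
    mse_b = sum((p - o) ** 2 for p, o in zip(pred_b, obs)) / len(obs)
    wa, wb = 1.0 / mse_a, 1.0 / mse_b
    return wa / (wa + wb), wb / (wa + wb)

obs  = [500.0, 590.0, 460.0, 680.0]   # observed radiance, W/m2
rams = [520.0, 610.0, 480.0, 700.0]   # member A: biased +20 W/m2
wrf  = [470.0, 560.0, 430.0, 650.0]   # member B: biased -30 W/m2

wa, wb = inverse_mse_weights(rams, wrf, obs)   # the less-biased member wins
blended = wa * rams[0] + wb * wrf[0]           # closer to 500 than either member
```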

  8. Forecasting and analyzing high O3 time series in educational area through an improved chaotic approach

    NASA Astrophysics Data System (ADS)

    Hamid, Nor Zila Abd; Adenan, Nur Hamiza; Noorani, Mohd Salmi Md

    2017-08-01

    Forecasting and analyzing the ozone (O3) concentration time series is important because the pollutant is harmful to health. This study is a pilot study for forecasting and analyzing the O3 time series in a Malaysian educational area, namely Shah Alam, using a chaotic approach. Through this approach, the observed hourly scalar time series is reconstructed into a multi-dimensional phase space, which is then used to forecast the future time series through the local linear approximation method. The main purpose is to forecast the high O3 concentrations. The original method performed poorly, but the improved method addressed the weakness, thereby enabling the high concentrations to be successfully forecast. The correlation coefficient between the observed and forecasted time series through the improved method is 0.9159, and both the mean absolute error and root mean squared error are low. Thus, the improved method is advantageous. The time series analysis by means of the phase space plot and the Cao method identified the presence of low-dimensional chaotic dynamics in the observed O3 time series. Results showed that at least seven factors affect the studied O3 time series, which is consistent with the listed factors from the diurnal variations investigation and the sensitivity analysis from past studies. In conclusion, the chaotic approach has successfully forecast and analyzed the O3 time series in the educational area of Shah Alam. These findings are expected to help stakeholders such as the Ministry of Education and the Department of Environment achieve better air pollution management.
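The phase-space reconstruction step can be sketched with delay embedding plus a nearest-neighbor forecast, a zeroth-order cousin of the paper's local linear approximation method. The O3 values are hypothetical:

```python
# Hedged sketch of the chaotic approach: delay embedding reconstructs the
# phase space, and the forecast is taken from the successor of the nearest
# past neighbor. This zeroth-order scheme is a simpler cousin of the paper's
# local linear approximation method; the O3 values are hypothetical.

def delay_embed(series, dim, tau=1):
    """Phase-space vectors (x_t, x_{t+tau}, ..., x_{t+(dim-1)*tau})."""
    n = len(series) - (dim - 1) * tau
    return [tuple(series[i + j * tau] for j in range(dim)) for i in range(n)]

def nn_forecast(series, dim=3, tau=1):
    """Predict the next value from the successor of the nearest past vector."""
    vectors = delay_embed(series, dim, tau)
    query = vectors[-1]
    best = min(
        range(len(vectors) - 1),   # exclude the query vector itself
        key=lambda i: sum((a - b) ** 2 for a, b in zip(vectors[i], query)),
    )
    return series[best + (dim - 1) * tau + 1]   # value after the matched vector

# Hypothetical hourly O3 concentrations (ppb) with a repeating diurnal shape:
o3 = [20, 35, 60, 45, 25, 20, 35, 60, 45, 25, 20, 35, 60]
prediction = nn_forecast(o3)   # the repeating pattern suggests 45 next
```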

  9. A case study of the sensitivity of forecast skill to data and data analysis techniques

    NASA Technical Reports Server (NTRS)

    Baker, W. E.; Atlas, R.; Halem, M.; Susskind, J.

    1983-01-01

    A series of experiments have been conducted to examine the sensitivity of forecast skill to various data and data analysis techniques for the 0000 GMT case of January 21, 1979. These include the individual components of the FGGE observing system, the temperatures obtained with different satellite retrieval methods, and the method of vertical interpolation between the mandatory pressure analysis levels and the model sigma levels. It is found that NESS TIROS-N infrared retrievals seriously degrade a rawinsonde-only analysis over land, resulting in a poorer forecast over North America. Less degradation in the 72-hr forecast skill at sea level and some improvement at 500 mb is noted, relative to the control with TIROS-N retrievals produced with a physical inversion method which utilizes a 6-hr forecast first guess. NESS VTPR oceanic retrievals lead to an improved forecast over North America when added to the control.

  10. Predicting and downscaling ENSO impacts on intraseasonal precipitation statistics in California: The 1997/98 event

    USGS Publications Warehouse

    Gershunov, A.; Barnett, T.P.; Cayan, D.R.; Tubbs, T.; Goddard, L.

    2000-01-01

    Three long-range forecasting methods have been evaluated for prediction and downscaling of seasonal and intraseasonal precipitation statistics in California. Full-statistical, hybrid-dynamical-statistical, and full-dynamical approaches have been used to forecast El Niño-Southern Oscillation (ENSO)-related total precipitation, daily precipitation frequency, and average intensity anomalies during the January-March season. For El Niño winters, the hybrid approach emerges as the best performer, while La Niña forecasting skill is poor. The full-statistical forecasting method features reasonable forecasting skill for both La Niña and El Niño winters. The performance of the full-dynamical approach could not be evaluated as rigorously as that of the other two forecasting schemes. Although the full-dynamical forecasting approach is expected to outperform simpler forecasting schemes in the long run, evidence is presented to conclude that, at present, the full-dynamical forecasting approach is the least viable of the three, at least in California. The authors suggest that operational forecasting of any intraseasonal temperature, precipitation, or streamflow statistic derivable from the available records is possible now for ENSO-extreme years.

  11. The Second NWRA Flare-Forecasting Comparison Workshop: Methods Compared and Methodology

    NASA Astrophysics Data System (ADS)

    Leka, K. D.; Barnes, G.; the Flare Forecasting Comparison Group

    2013-07-01

    The Second NWRA Workshop to compare methods of solar flare forecasting was held 2-4 April 2013 in Boulder, CO. This is a follow-on to the First NWRA Workshop on Flare Forecasting Comparison, also known as the ``All-Clear Forecasting Workshop'', held in 2009 jointly with NASA/SRAG and NOAA/SWPC. For this most recent workshop, many researchers who are active in the field participated, and diverse methods were represented in terms of both the characterization of the Sun and the statistical approaches used to create a forecast. A standard dataset was created for this investigation, using data from the Solar Dynamics Observatory/ Helioseismic and Magnetic Imager (SDO/HMI) vector magnetic field HARP series. For each HARP on each day, 6 hours of data were used, allowing for nominal time-series analysis to be included in the forecasts. We present here a summary of the forecasting methods that participated and the standardized dataset that was used. Funding for the workshop and the data analysis was provided by NASA/Living with a Star contract NNH09CE72C and NASA/Guest Investigator contract NNH12CG10C.

  12. Real-Time Verification of a High-Dose-Rate Iridium 192 Source Position Using a Modified C-Arm Fluoroscope.

    PubMed

    Nose, Takayuki; Chatani, Masashi; Otani, Yuki; Teshima, Teruki; Kumita, Shinichirou

    2017-03-15

    High-dose-rate (HDR) brachytherapy misdeliveries can occur at any institution, and they can cause disastrous results. Even a patient's death has been reported. Misdeliveries could be avoided with real-time verification methods. In 1996, we developed a modified C-arm fluoroscopic verification of an HDR Iridium 192 source position to prevent these misdeliveries. This method provided excellent image quality sufficient to detect errors, and it has been in clinical use at our institutions for 20 years. The purpose of the current study is to introduce the mechanisms and validity of our straightforward C-arm fluoroscopic verification method. Conventional X-ray fluoroscopic images are degraded by spurious signals and quantum noise from Iridium 192 photons, which make source verification impractical. To improve image quality, we quadrupled the C-arm fluoroscopic X-ray dose per pulse. The pulse rate was reduced by a factor of 4 to keep the average exposure compliant with Japanese medical regulations. The images were then displayed at quarter frame rates. Sufficient quality was obtained to enable observation of the source position relative to both the applicators and the anatomy. With this method, 2 errors were detected among 2031 treatment sessions for 370 patients within a 6-year period. With the use of a modified C-arm fluoroscopic verification method, treatment errors that were otherwise overlooked were detected in real time. This method should be given consideration for widespread use. Copyright © 2016 Elsevier Inc. All rights reserved.

  13. Applications of Principled Search Methods in Climate Influences and Mechanisms

    NASA Technical Reports Server (NTRS)

    Glymour, Clark

    2005-01-01

    Forest and grass fires cause economic losses in the billions of dollars in the U.S. alone. In addition, boreal forests constitute a large carbon store; it has been estimated that, were no burning to occur, an additional 7 gigatons of carbon would be sequestered in boreal soils each century. Effective wildfire suppression requires anticipation of locales and times for which wildfire is most probable, preferably with a two to four week forecast, so that limited resources can be efficiently deployed. The United States Forest Service (USFS) and other experts and agencies have developed several measures of fire risk combining physical principles and expert judgment, and have used them in automated procedures for forecasting fire risk. Forecasting accuracies for some fire risk indices in combination with climate and other variables have been estimated for specific locations, with the value of fire risk index variables assessed by their statistical significance in regressions. In other cases, for example the MAPSS forecasts [23, 24], forecasting accuracy has been estimated only with simulated data. We describe alternative forecasting methods that predict fire probability by locale and time using statistical or machine learning procedures trained on historical data, and we give comparative assessments of their forecasting accuracy for one fire season year, April-October 2003, for all U.S. Forest Service lands. Aside from providing an accuracy baseline for other forecasting methods, the results illustrate the interdependence between the statistical significance of prediction variables and the forecasting method used.

  14. A Load-Based Temperature Prediction Model for Anomaly Detection

    NASA Astrophysics Data System (ADS)

    Sobhani, Masoud

    Electric load forecasting, as a basic requirement for decision-making in power utilities, has been improved in various aspects in the past decades. Many factors may affect the accuracy of the load forecasts, such as data quality, goodness of the underlying model, and load composition. Due to the strong correlation between the input variables (e.g., weather and calendar variables) and the load, the quality of input data plays a vital role in forecasting practices. Even if the forecasting model were able to capture most of the salient features of the load, low quality input data may result in inaccurate forecasts. Most of the data cleansing efforts in the load forecasting literature have been devoted to the load data. Few studies have focused on weather data cleansing for load forecasting. This research proposes an anomaly detection method for temperature data. The method consists of two components: a load-based temperature prediction model and a detection technique. The effectiveness of the proposed method is demonstrated through two case studies: one based on the data from the Global Energy Forecasting Competition 2014, and the other based on the data published by ISO New England. The results show that by removing the detected observations from the original input data, the final load forecast accuracy is enhanced.
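The two components described, a load-based temperature prediction model and a detection technique, can be sketched with a linear fit of temperature on load plus a residual threshold. The thesis's actual model is more elaborate; the data and the 15-degree threshold below are hypothetical:

```python
# Hedged sketch of the two components described: a load-based temperature
# prediction model (here just a linear fit of temperature on load) and a
# residual-threshold detector. The thesis's actual model is more elaborate;
# the data and the 15-degree threshold below are hypothetical.

def fit_line(x, y):
    """Ordinary least squares for y = a + b * x; returns (a, b)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    return my - b * mx, b

def detect_anomalies(load, temp, threshold):
    """Indices where observed temperature strays too far from the prediction."""
    a, b = fit_line(load, temp)
    return [i for i, (l, t) in enumerate(zip(load, temp))
            if abs(t - (a + b * l)) > threshold]

temp = [25, 27, 30, 33, 90, 29]        # index 4 is a corrupted sensor reading
load = [100, 108, 120, 132, 116, 115]  # load tracks the plausible temperatures
flagged = detect_anomalies(load, temp, threshold=15)   # flags index 4 only
```

Removing the flagged readings before refitting is the cleansing step whose benefit the two case studies quantify.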

  15. Offline signature verification using convolution Siamese network

    NASA Astrophysics Data System (ADS)

    Xing, Zi-Jian; Yin, Fei; Wu, Yi-Chao; Liu, Cheng-Lin

    2018-04-01

    This paper presents an offline signature verification approach using a convolutional Siamese neural network. Unlike the existing methods which consider feature extraction and metric learning as two independent stages, we adopt a deep-learning based framework which combines the two stages together and can be trained end-to-end. The experimental results on two offline public databases (GPDSsynthetic and CEDAR) demonstrate the superiority of our method on the offline signature verification problem.

  16. A framework of multitemplate ensemble for fingerprint verification

    NASA Astrophysics Data System (ADS)

    Yin, Yilong; Ning, Yanbin; Ren, Chunxiao; Liu, Li

    2012-12-01

    How to improve the performance of an automatic fingerprint verification system (AFVS) is always a big challenge in the biometric verification field. Recently, it has become popular to improve the performance of an AFVS using ensemble learning approaches to fuse related information from fingerprints. In this article, we propose a novel framework of fingerprint verification which is based on the multitemplate ensemble method. This framework consists of three stages. In the first stage, the enrollment stage, we adopt an effective template selection method to select those fingerprints which best represent a finger; then, a polyhedron is created from the matching results of multiple template fingerprints and a virtual centroid of the polyhedron is given. In the second stage, the verification stage, we measure the distance between the centroid of the polyhedron and a query image. In the final stage, a fusion rule is used to choose a proper distance from a distance set. The experimental results on the FVC2004 database prove the improvement in effectiveness of the new framework in fingerprint verification. With a minutiae-based matching method, the average EER over four databases in FVC2004 drops from 10.85 to 0.88, and with a ridge-based matching method, the average EER of these four databases also decreases from 14.58 to 2.51.
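The centroid-and-distance idea in the enrollment and verification stages can be sketched in a few lines. The score vectors and the acceptance comparison below are hypothetical:

```python
# Hedged sketch of the multitemplate idea: enrolled templates become points
# in a match-score space, the finger is summarized by the centroid of those
# points, and a query is judged by its distance to the centroid. The score
# vectors and the acceptance comparison below are hypothetical.

def centroid(points):
    """Virtual centroid of the polyhedron spanned by the template points."""
    dim = len(points[0])
    return tuple(sum(p[d] for p in points) / len(points) for d in range(dim))

def distance(p, q):
    """Euclidean distance between two score vectors."""
    return sum((a - b) ** 2 for a, b in zip(p, q)) ** 0.5

# Match-score vectors for three enrolled template fingerprints:
templates = [(0.90, 0.85, 0.88), (0.86, 0.91, 0.84), (0.88, 0.87, 0.92)]
c = centroid(templates)

genuine_query = (0.89, 0.88, 0.87)
impostor_query = (0.40, 0.35, 0.45)
# The genuine query lies much closer to the centroid than the impostor:
accept = distance(c, genuine_query) < distance(c, impostor_query)
```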

  17. Conditional Monthly Weather Resampling Procedure for Operational Seasonal Water Resources Forecasting

    NASA Astrophysics Data System (ADS)

    Beckers, J.; Weerts, A.; Tijdeman, E.; Welles, E.; McManamon, A.

    2013-12-01

    To provide reliable and accurate seasonal streamflow forecasts for water resources management, several operational hydrologic agencies and hydropower companies around the world use the Extended Streamflow Prediction (ESP) procedure. In its original implementation, the ESP does not accommodate any additional information the forecaster may have about expected deviations from climatology in the near future. Several attempts have been made to improve the skill of the ESP forecast, especially for areas affected by teleconnections (e.g., ENSO, PDO), via selection (Hamlet and Lettenmaier, 1999) or weighting schemes (Werner et al., 2004; Wood and Lettenmaier, 2006; Najafi et al., 2012). A disadvantage of such schemes is that they reduce the signal-to-noise ratio of the probabilistic forecast. To overcome this, we propose a resampling method, conditional on climate indices, to generate the meteorological time series used in the ESP. The method can generate a large number of meteorological ensemble members and thereby improve the statistical properties of the ensemble. Its effectiveness was demonstrated in a real-time operational hydrologic seasonal forecast system for the Columbia River basin operated by the Bonneville Power Administration. The forecast skill of the k-nn resampler was tested against the original ESP for three basins at the long-range seasonal time scale, with the BSS and CRPSS used to compare the results. Positive forecast skill scores were found for the resampler method, conditioned on different indices, for the prediction of spring peak flows in the Dworshak and Hungry Horse basins; for the Libby Dam basin, however, no improvement in skill was found. The proposed resampling method is a promising practical approach that can add skill to ESP forecasts at the seasonal time scale. Further improvement is possible by fine-tuning the method and selecting the most informative climate indices for the region of interest.
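
    The core of a k-nn conditional resampler can be sketched in a few lines: pick the k historical years whose climate-index value is closest to the current one, then draw ensemble member years from them with replacement. The index values and years below are hypothetical, and the operational scheme described above has more structure (monthly conditioning, multiple indices):

```python
import numpy as np

rng = np.random.default_rng(0)

def knn_resample(index_history, current_index, k, n_members):
    """Draw ensemble member years conditioned on a climate index.

    Selects the k historical years whose index value (e.g. an ENSO or
    PDO index) is closest to the current value, then samples years from
    that neighbourhood with replacement.
    """
    years = np.array(sorted(index_history))
    values = np.array([index_history[int(y)] for y in years])
    nearest = years[np.argsort(np.abs(values - current_index))[:k]]
    return rng.choice(nearest, size=n_members, replace=True)

# Hypothetical index values by year
history = {1998: 2.2, 1999: -1.4, 2000: -0.7, 2010: 1.5, 2011: -1.0}
members = knn_resample(history, current_index=1.8, k=2, n_members=5)
```

    The meteorological traces of the sampled years then drive the hydrologic model, so the ensemble size is no longer limited to the number of historical years, unlike selection schemes that shrink the sample.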

  18. Replacement Beef Cow Valuation under Data Availability Constraints

    PubMed Central

    Hagerman, Amy D.; Thompson, Jada M.; Ham, Charlotte; Johnson, Kamina K.

    2017-01-01

    Economists are often tasked with estimating the benefits or costs associated with livestock production losses; however, lack of available data or absence of consistent reporting can reduce the accuracy of these valuations. This work examines three estimation techniques for valuing replacement beef cows under varying levels of market data availability and discusses the potential margin of error for each. Oklahoma bred replacement cows are valued using hedonic pricing based on Oklahoma bred cow data (a best-case scenario), vector error correction modeling (VECM) based on national cow sales data, and cost of production (COP) based on only a representative enterprise budget and very limited sales data. Each method was then used to produce a within-sample forecast for January through December 2016, and the forecasts were compared with the 2016 monthly observed market prices in Oklahoma using the mean absolute percent error (MAPE). Hedonic pricing tended to overvalue cows in within-sample forecasting but, as measured by MAPE, performed best for high-quality cows. The VECM tended to undervalue cows but performed best for younger animals. COP performed well compared with the more data-intensive methods. Across eight representative replacement beef female types, the VECM forecast produced a MAPE under 10% in 33% of forecasted months for average-quality beef females, followed by hedonic pricing at 24% and COP at 14% of forecasted months. For high-quality females, hedonic pricing worked best, producing a MAPE under 10% in 36% of forecasted months, followed by COP at 21% and the VECM at 14%. These results suggest that livestock valuation method selection is not one-size-fits-all and may need to vary based not only on the data available but also on the characteristics (e.g., quality or age) of the livestock being valued. PMID:29164141
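
    The MAPE used to score all three methods is a standard error measure. A short sketch with hypothetical monthly prices (not the study's data):

```python
def mape(observed, forecast):
    """Mean absolute percent error between observed and forecast series."""
    pairs = list(zip(observed, forecast))
    return 100.0 * sum(abs((o - f) / o) for o, f in pairs) / len(pairs)

obs = [1500.0, 1450.0, 1600.0]   # hypothetical monthly prices ($/head)
fcst = [1425.0, 1508.0, 1560.0]
print(round(mape(obs, fcst), 2))  # 3.83
```

    Because MAPE scales errors by the observed price, it lets forecasts for differently priced animal classes (young vs. aged, average vs. high quality) be compared on one footing.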

  19. Forecasting in Complex Systems

    NASA Astrophysics Data System (ADS)

    Rundle, J. B.; Holliday, J. R.; Graves, W. R.; Turcotte, D. L.; Donnellan, A.

    2014-12-01

    Complex nonlinear systems are typically characterized by many degrees of freedom and by interactions between the elements. Interesting examples can be found in earthquakes and finance. In both systems, fat tails play an important role in the statistical dynamics: earthquake magnitudes follow the Gutenberg-Richter magnitude-frequency relation, while daily returns of securities in the financial markets are known to be characterized by leptokurtic statistics with power-law tails. Very large fluctuations are present in both systems. In earthquake systems, one has the example of great earthquakes such as the M9.1 Tohoku event of March 11, 2011. In financial systems, one has the example of the market crash of October 19, 1987. Both were largely unexpected events that had severe systemic impacts on the earth and financial systems, respectively. Other examples include the M9.3 Andaman earthquake of December 26, 2004, and the Great Recession, which began with the fall of the Lehman Brothers investment bank on September 15, 2008. Forecasting the occurrence of such damaging events has great societal importance. In recent years, national funding agencies in a variety of countries have emphasized societal relevance in research and, in particular, the goal of improved forecasting technology. Previous work has shown that both earthquakes and financial crashes can be described by a common Landau-Ginzburg-type free energy model. These metastable systems are characterized by fat-tail statistics near the classical spinodal. Correlations in these systems can grow and recede, but do not imply causation, a common source of misunderstanding. In both systems, a common set of techniques can be used to compute the probabilities of future earthquakes or crashes. In this talk, we describe the basic phenomenology of these systems and emphasize their similarities and differences. We also consider the problem of forecast validation and verification, and show that in both systems small event counts (the natural time domain) are an important component of a forecast system.
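
    The Gutenberg-Richter relation mentioned above, log10 N(M) = a - bM, is often fitted from small-event counts via the Aki maximum-likelihood estimator b = log10(e) / (mean(M) - Mmin). A sketch on a hypothetical magnitude catalogue (the magnitudes and cutoff are illustrative, not data from the talk):

```python
import math

def aki_b_value(mags, m_min):
    """Aki maximum-likelihood estimate of the Gutenberg-Richter b-value
    from magnitudes at or above the completeness cutoff m_min."""
    sample = [m for m in mags if m >= m_min]
    mean_m = sum(sample) / len(sample)
    return math.log10(math.e) / (mean_m - m_min)

# Hypothetical small-event catalogue above a completeness magnitude of 3.0
mags = [3.1, 3.4, 3.0, 4.2, 3.7, 3.2, 5.0, 3.3, 3.6, 3.1]
print(round(aki_b_value(mags, m_min=3.0), 2))
```

    Because the small events are far more numerous than the large ones, their counts constrain the tail of the distribution, which is one reason small-event statistics are central to forecasting the rare extremes.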

  20. Seasonal forecasts for the agricultural sector in Peru through user-tailored indices

    NASA Astrophysics Data System (ADS)

    Sedlmeier, Katrin; Gubler, Stefanie; Spierig, Christoph; Quevedo, Karim; Escajadillo, Yury; Avalos, Griña; Liniger, Mark A.; Schwierz, Cornelia

    2017-04-01

    In the agricultural sector, demand for seasonal forecast information is high, since agriculture depends strongly on climatic conditions during the growing season. Unfavorable weather and climate events, such as droughts or frost, can lead to crop losses and thereby to large economic damages or, in the case of subsistence farming, life-threatening conditions. The commonly used presentation form, tercile probabilities of seasonally averaged meteorological quantities, is not specific enough for end users; more user-tailored seasonal information is necessary. For example, warmer-than-average temperatures might be favorable for a crop as long as they remain below a plant-specific critical threshold. If, on the other hand, too many days show temperatures above this threshold, a mitigation action such as changing the crop type would be required. In the framework of the CLIMANDES project (a pilot project of the Global Framework for Climate Services led by WMO [http://www.wmo.int/gfcs/climandes]), user-tailored seasonal forecast products are being developed for the agricultural sector in the Peruvian Andes. These products include indices such as frost risk, the occurrence of long dry periods, and the start of the rainy season, which is crucial for scheduling sowing. Furthermore, more specific indices derived from crop requirement studies are elaborated, such as the number of days exceeding or falling below plant-specific temperature thresholds during given phenological stages. The applicability of these products depends strongly on forecast skill. In this study, the potential predictability and the skill of selected indicators are presented using seasonal hindcast data of the ECMWF System 4 for Peru over the period 1981-2010, and the influence of ENSO on prediction skill is investigated. Reanalysis data, ground measurements, and a gridded precipitation dataset are used for verification. The results indicate that temperature-based indicators show sizeable skill in the Peruvian highlands, while precipitation-based forecasts are much more challenging.
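
    A user-tailored index of the kind described, such as frost risk, reduces a daily temperature series to a single decision-relevant number. A minimal sketch (the daily values below are hypothetical):

```python
def frost_days(tmin_series, threshold=0.0):
    """Number of days whose minimum temperature falls below a
    plant-specific threshold -- a simple frost-risk index."""
    return sum(1 for t in tmin_series if t < threshold)

season_tmin = [2.1, -0.5, 1.0, -1.2, 0.0, 3.4]  # hypothetical daily Tmin (degrees C)
print(frost_days(season_tmin))  # 2
```

    Applying the same reduction to every ensemble member of a seasonal hindcast yields a probabilistic forecast of the index itself, which can then be verified against station observations or gridded datasets.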
