Sample records for probability forecast tool

  1. Developing a Peak Wind Probability Forecast Tool for Kennedy Space Center and Cape Canaveral Air Force Station

    NASA Technical Reports Server (NTRS)

    Lambert, Winifred; Roeder, William

    2007-01-01

    This conference presentation describes the development of a peak wind forecast tool to assist forecasters in determining the probability of violating launch commit criteria (LCC) at Kennedy Space Center (KSC) and Cape Canaveral Air Force Station (CCAFS) in east-central Florida. The peak winds are an important forecast element for both the Space Shuttle and Expendable Launch Vehicle (ELV) programs. The LCC define specific peak wind thresholds for each launch operation that cannot be exceeded in order to ensure the safety of the vehicle. The 45th Weather Squadron (45 WS) has found that peak winds are a challenging parameter to forecast, particularly in the cool season months of October through April. Based on the importance of forecasting peak winds, the 45 WS tasked the Applied Meteorology Unit (AMU) to develop a short-range peak-wind forecast tool to assist in forecasting LCC violations. The tool will include climatologies of the 5-minute mean and peak winds by month, hour, and direction, and probability distributions of the peak winds as a function of the 5-minute mean wind speeds.

  2. A Peak Wind Probability Forecast Tool for Kennedy Space Center and Cape Canaveral Air Force Station

    NASA Technical Reports Server (NTRS)

    Crawford, Winifred; Roeder, William

    2008-01-01

    This conference abstract describes the development of a peak wind forecast tool to assist forecasters in determining the probability of violating launch commit criteria (LCC) at Kennedy Space Center (KSC) and Cape Canaveral Air Force Station (CCAFS) in east-central Florida. The peak winds are an important forecast element for both the Space Shuttle and Expendable Launch Vehicle (ELV) programs. The LCC define specific peak wind thresholds for each launch operation that cannot be exceeded in order to ensure the safety of the vehicle. The 45th Weather Squadron (45 WS) has found that peak winds are a challenging parameter to forecast, particularly in the cool season months of October through April. Based on the importance of forecasting peak winds, the 45 WS tasked the Applied Meteorology Unit (AMU) to develop a short-range peak-wind forecast tool to assist in forecasting LCC violations. The tool will include climatologies of the 5-minute mean and peak winds by month, hour, and direction, and probability distributions of the peak winds as a function of the 5-minute mean wind speeds.

  3. Objective Lightning Forecasting at Kennedy Space Center and Cape Canaveral Air Force Station using Cloud-to-Ground Lightning Surveillance System Data

    NASA Technical Reports Server (NTRS)

    Lambert, Winfred; Wheeler, Mark; Roeder, William

    2005-01-01

    The 45th Weather Squadron (45 WS) at Cape Canaveral Air Force Station (CCAFS) in Florida issues a probability of lightning occurrence in their daily 24-hour and weekly planning forecasts. This information is used for general planning of operations at CCAFS and Kennedy Space Center (KSC). These facilities are located in east-central Florida at the east end of a corridor known as 'Lightning Alley', an indication that lightning has a large impact on space-lift operations. Much of the current lightning probability forecast is based on a subjective analysis of model and observational data and an objective forecast tool developed over 30 years ago. The 45 WS requested that a new lightning probability forecast tool based on statistical analysis of more recent historical warm season (May-September) data be developed in order to increase the objectivity of the daily thunderstorm probability forecast. The resulting tool is a set of statistical lightning forecast equations, one for each month of the warm season, that provide a lightning occurrence probability for the day by 1100 UTC (0700 EDT) during the warm season.

  4. Objective Lightning Probability Forecasting for Kennedy Space Center and Cape Canaveral Air Force Station

    NASA Technical Reports Server (NTRS)

    Lambert, Winifred; Wheeler, Mark

    2005-01-01

    Five logistic regression equations were created that predict the probability of cloud-to-ground lightning occurrence for the day in the KSC/CCAFS area for each month in the warm season. These equations integrated the results from several studies over recent years to improve thunderstorm forecasting at KSC/CCAFS. All of the equations outperform persistence, which is known to outperform NPTI, the current objective tool used in 45 WS lightning forecasting operations. The equations also performed well in other tests. As a result, the new equations will be added to the current set of tools used by the 45 WS to determine the probability of lightning for their daily planning forecast. The results from these equations are meant to be used as first-guess guidance when developing the lightning probability forecast for the day. They provide an objective base from which forecasters can use other observations, model data, consultation with other forecasters, and their own experience to create the final lightning probability for the 1100 UTC briefing.

  5. Objective Lightning Probability Forecasts for East-Central Florida Airports

    NASA Technical Reports Server (NTRS)

    Crawford, Winfred C.

    2013-01-01

    The forecasters at the National Weather Service in Melbourne, FL, (NWS MLB) identified a need to make more accurate lightning forecasts to help alleviate delays due to thunderstorms in the vicinity of several commercial airports in central Florida at which they are responsible for issuing terminal aerodrome forecasts. Such forecasts would also provide safer ground operations around terminals, and would be of value to Center Weather Service Units serving air traffic controllers in Florida. To improve the forecast, the AMU was tasked to develop an objective lightning probability forecast tool for the airports using data from the National Lightning Detection Network (NLDN). The resulting forecast tool is similar to that developed by the AMU to support space launch operations at Kennedy Space Center (KSC) and Cape Canaveral Air Force Station (CCAFS) for use by the 45th Weather Squadron (45 WS) in previous tasks (Lambert and Wheeler 2005, Lambert 2007). The lightning probability forecasts are valid for the time periods and areas needed by the NWS MLB forecasters in the warm season months, defined in this task as May-September.

  6. Modifications to the Objective Lightning Probability Forecast Tool at Kennedy Space Center/Cape Canaveral Air Force Station, Florida

    NASA Technical Reports Server (NTRS)

    Crawford, Winifred; Roeder, William

    2010-01-01

    The 45th Weather Squadron (45 WS) at Cape Canaveral Air Force Station (CCAFS) includes the probability of lightning occurrence in their 24-Hour and Weekly Planning Forecasts, briefed at 0700 EDT for daily operations planning on Kennedy Space Center (KSC) and CCAFS. This forecast is based on subjective analyses of model and observational data and output from an objective tool developed by the Applied Meteorology Unit (AMU). This tool was developed over two phases (Lambert and Wheeler 2005, Lambert 2007). It consists of five equations, one for each warm season month (May-Sep), that calculate the probability of lightning occurrence for the day and a graphical user interface (GUI) to display the output. The Phase I and II equations outperformed previous operational tools by a total of 56%. Based on this success, the 45 WS tasked the AMU with Phase III to improve the tool further.

  7. Tool for Forecasting Cool-Season Peak Winds Across Kennedy Space Center and Cape Canaveral Air Force Station

    NASA Technical Reports Server (NTRS)

    Barrett, Joe H., III; Roeder, William P.

    2010-01-01

    The expected peak wind speed for the day is an important element in the daily morning forecast for ground and space launch operations at Kennedy Space Center (KSC) and Cape Canaveral Air Force Station (CCAFS). The 45th Weather Squadron (45 WS) must issue forecast advisories for KSC/CCAFS when they expect peak gusts to meet or exceed the 25, 35, and 50 kt thresholds at any level from the surface to 300 ft. In Phase I of this task, the 45 WS tasked the Applied Meteorology Unit (AMU) to develop a cool-season (October - April) tool to help forecast the non-convective peak wind from the surface to 300 ft at KSC/CCAFS. During the warm season, these wind speeds are rarely exceeded except during convective winds or under the influence of tropical cyclones, for which other techniques are already in use. The tool used single and multiple linear regression equations to predict the peak wind from the morning sounding. The forecaster manually entered several observed sounding parameters into a Microsoft Excel graphical user interface (GUI), and then the tool displayed the forecast peak wind speed, the average wind speed at the time of the peak wind, the timing of the peak wind, and the probability the peak wind will meet or exceed 35, 50, and 60 kt. The 45 WS customers later dropped the requirement for >= 60 kt wind warnings. During Phase II of this task, the AMU expanded the period of record (POR) by six years to increase the number of observations used to create the forecast equations. A large number of possible predictors were evaluated from archived soundings, including inversion depth and strength, low-level wind shear, mixing height, temperature lapse rate, and winds from the surface to 3000 ft. Each day in the POR was stratified in a number of ways, such as by low-level wind direction, synoptic weather pattern, precipitation, and Bulk Richardson number. The most accurate Phase II equations were then selected for an independent verification.
The Phase I and II forecast methods were compared using an independent verification data set. The two methods were compared to climatology, wind warnings and advisories issued by the 45 WS, and North American Mesoscale (NAM) model (MesoNAM) forecast winds. The performance of the Phase I and II methods was similar with respect to mean absolute error. Since the Phase I data were not stratified by precipitation, this method's peak wind forecasts had a large negative bias on days with precipitation and a small positive bias on days with no precipitation. Overall, the climatology methods performed the worst while the MesoNAM performed the best. Since the MesoNAM winds were the most accurate in the comparison, the final version of the tool was based on the MesoNAM winds. The probability the peak wind will meet or exceed the warning thresholds was based on the one standard deviation error bars from the linear regression. For example, the linear regression might forecast the most likely peak speed to be 35 kt, and the error bars were used to calculate that the probability of >= 25 kt = 76%, the probability of >= 35 kt = 50%, and the probability of >= 50 kt = 19%. The authors have not seen this application of linear regression error bars in any other meteorological applications. Although probability forecast tools should usually be developed with logistic regression, this technique could be easily generalized to any linear regression forecast tool to estimate the probability of exceeding any desired threshold. This could be useful for previously developed linear regression forecast tools or new forecast applications where statistical analysis software to perform logistic regression is not available. The tool was delivered in two formats - a Microsoft Excel GUI and a Tool Command Language/Tool Kit (Tcl/Tk) GUI in the Meteorological Interactive Data Display System (MIDDS).
The Microsoft Excel GUI reads a MesoNAM text file containing hourly forecasts from 0 to 84 hours, from one model run (00 or 12 UTC). The GUI then displays the peak wind speed, average wind speed, and the probability the peak wind will meet or exceed the 25-, 35- and 50-kt thresholds. The user can display the Day-1 through Day-3 peak wind forecasts, and separate forecasts are made for precipitation and non-precipitation days. The MIDDS GUI uses data from the NAM and Global Forecast System (GFS), instead of the MesoNAM. It can display Day-1 and Day-2 forecasts using NAM data, and Day-1 through Day-5 forecasts using GFS data. The timing of the peak wind is not displayed, since the independent verification showed that none of the forecast methods performed significantly better than climatology. The forecaster should use the climatological timing of the peak wind (2248 UTC) as a first guess and then adjust it based on the movement of weather features.
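
    The error-bar technique described above can be sketched as follows, assuming the regression errors are normally distributed around the point forecast. The standard deviation value below is illustrative only; it is not taken from the report:

```python
from math import erf, sqrt

def exceedance_probability(forecast_kt, sigma_kt, threshold_kt):
    """P(peak wind >= threshold), assuming normally distributed
    regression errors with standard deviation sigma_kt around the
    point forecast from the linear regression."""
    z = (threshold_kt - forecast_kt) / sigma_kt
    phi = 0.5 * (1.0 + erf(z / sqrt(2.0)))  # standard normal CDF
    return 1.0 - phi

# Point forecast of 35 kt; a sigma of 12 kt is an assumed stand-in
# for the one-standard-deviation error bar of the regression.
for thr in (25.0, 35.0, 50.0):
    print(thr, round(exceedance_probability(35.0, 12.0, thr), 2))
```

    By construction the probability of meeting or exceeding the point forecast itself is 50%, matching the report's example, while thresholds below and above the forecast map to higher and lower probabilities respectively.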

  8. Statistical Short-Range Guidance for Peak Wind Forecasts on Kennedy Space Center/Cape Canaveral Air Force Station, Phase III

    NASA Technical Reports Server (NTRS)

    Crawford, Winifred

    2010-01-01

    This final report describes the development of a peak wind forecast tool to assist forecasters in determining the probability of violating launch commit criteria (LCC) at Kennedy Space Center (KSC) and Cape Canaveral Air Force Station (CCAFS). The peak winds are an important forecast element for both the Space Shuttle and Expendable Launch Vehicle (ELV) programs. The LCC define specific peak wind thresholds for each launch operation that cannot be exceeded in order to ensure the safety of the vehicle. The 45th Weather Squadron (45 WS) has found that peak winds are a challenging parameter to forecast, particularly in the cool season months of October through April. Based on the importance of forecasting peak winds, the 45 WS tasked the Applied Meteorology Unit (AMU) to develop a short-range peak-wind forecast tool to assist in forecasting LCC violations. The tool includes climatologies of the 5-minute mean and peak winds by month, hour, and direction, and probability distributions of the peak winds as a function of the 5-minute mean wind speeds.

  9. Peak Wind Forecasts for the Launch-Critical Wind Towers on Kennedy Space Center/Cape Canaveral Air Force Station, Phase IV

    NASA Technical Reports Server (NTRS)

    Crawford, Winifred

    2011-01-01

    This final report describes the development of a peak wind forecast tool to assist forecasters in determining the probability of violating launch commit criteria (LCC) at Kennedy Space Center (KSC) and Cape Canaveral Air Force Station (CCAFS). The peak winds are an important forecast element for both the Space Shuttle and Expendable Launch Vehicle (ELV) programs. The LCC define specific peak wind thresholds for each launch operation that cannot be exceeded in order to ensure the safety of the vehicle. The 45th Weather Squadron (45 WS) has found that peak winds are a challenging parameter to forecast, particularly in the cool season months of October through April. Based on the importance of forecasting peak winds, the 45 WS tasked the Applied Meteorology Unit (AMU) to update the statistics in the current peak-wind forecast tool to assist in forecasting LCC violations. The tool includes onshore and offshore flow climatologies of the 5-minute mean and peak winds and probability distributions of the peak winds as a function of the 5-minute mean wind speeds.

  10. Statistical Short-Range Guidance for Peak Wind Speed Forecasts on Kennedy Space Center/Cape Canaveral Air Force Station: Phase I Results

    NASA Technical Reports Server (NTRS)

    Lambert, Winifred C.; Merceret, Francis J. (Technical Monitor)

    2002-01-01

    This report describes the results of the AMU's (Applied Meteorology Unit) Short-Range Statistical Forecasting task for peak winds. The peak wind speeds are an important forecast element for the Space Shuttle and Expendable Launch Vehicle programs. The 45th Weather Squadron and the Spaceflight Meteorology Group indicate that peak winds are challenging to forecast. The Applied Meteorology Unit was tasked to develop tools that aid in short-range forecasts of peak winds at tower sites of operational interest. A seven-year record of wind tower data was used in the analysis. Hourly and directional climatologies by tower and month were developed to determine the seasonal behavior of the average and peak winds. In all climatologies, the average and peak wind speeds were highly variable in time. This indicated that the development of a peak wind forecasting tool would be difficult. Probability density functions (PDF) of peak wind speed were calculated to determine the distribution of peak speed with average speed. These provide forecasters with a means of determining the probability of meeting or exceeding a certain peak wind given an observed or forecast average speed. The climatologies and PDFs provide tools with which to make peak wind forecasts that are critical to safe operations.
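
    The conditional-distribution approach described above can be sketched by binning historical observations by mean wind speed and computing the empirical probability that the peak meets or exceeds a threshold within each bin. The data below are synthetic stand-ins for the tower record, generated with an assumed gust-factor range, purely for illustration:

```python
import random

random.seed(1)

# Synthetic stand-in for the multi-year tower record: (mean_kt, peak_kt)
# pairs, with peaks generated as an assumed gust factor times the mean.
records = []
for _ in range(5000):
    mean_kt = random.uniform(2.0, 30.0)
    peak_kt = mean_kt * random.uniform(1.2, 1.9)
    records.append((mean_kt, peak_kt))

def prob_peak_exceeds(records, mean_lo, mean_hi, threshold_kt):
    """Empirical P(peak >= threshold | mean speed in [mean_lo, mean_hi))."""
    peaks_in_bin = [p for m, p in records if mean_lo <= m < mean_hi]
    if not peaks_in_bin:
        return None
    return sum(p >= threshold_kt for p in peaks_in_bin) / len(peaks_in_bin)

# Probability of a 25 kt peak given an observed mean of 15-20 kt.
print(prob_peak_exceeds(records, 15, 20, 25))
```

    In the operational tool the same lookup would be stratified further by month, hour, and wind direction, as the abstract describes.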

  11. Forecasting Lightning at Kennedy Space Center/Cape Canaveral Air Force Station, Florida

    NASA Technical Reports Server (NTRS)

    Lambert, Winfred; Wheeler, Mark; Roeder, William

    2005-01-01

    The Applied Meteorology Unit (AMU) developed a set of statistical forecast equations that provide a probability of lightning occurrence on Kennedy Space Center (KSC)/Cape Canaveral Air Force Station (CCAFS) for the day during the warm season (May - September). The 45th Weather Squadron (45 WS) forecasters at CCAFS in Florida include a probability of lightning occurrence in their daily 24-hour and weekly planning forecasts, which are briefed at 1100 UTC (0700 EDT). This information is used for general scheduling of operations at CCAFS and KSC. Forecasters at the Spaceflight Meteorology Group also make thunderstorm forecasts for the KSC/CCAFS area during Shuttle flight operations. Much of the current lightning probability forecast at both groups is based on a subjective analysis of model and observational data. The objective tool currently available is the Neumann-Pfeffer Thunderstorm Index (NPTI, Neumann 1971), developed specifically for the KSC/CCAFS area over 30 years ago. However, recent studies have shown that 1-day persistence provides a better forecast than the NPTI, indicating that the NPTI needed to be upgraded or replaced. Because they require a tool that provides a reliable estimate of the daily thunderstorm probability forecast, the 45 WS forecasters requested that the AMU develop a new lightning probability forecast tool using recent data and more sophisticated techniques now possible through more computing power than that available over 30 years ago. The equation development incorporated results from two research projects that investigated causes of lightning occurrence near KSC/CCAFS and over the Florida peninsula. One proved that logistic regression outperformed the linear regression method used in NPTI, even when the same predictors were used. The other study found relationships between large-scale flow regimes and spatial lightning distributions over Florida.
Lightning probabilities based on these flow regimes were used as candidate predictors in the equation development. Fifteen years (1989-2003) of warm season data were used to develop the forecast equations. The data sources included a local network of cloud-to-ground lightning sensors called the Cloud-to-Ground Lightning Surveillance System (CGLSS), 1200 UTC Florida synoptic soundings, and the 1000 UTC CCAFS sounding. Data from CGLSS were used to determine lightning occurrence for each day. The 1200 UTC soundings were used to calculate the synoptic-scale flow regimes and the 1000 UTC soundings were used to calculate local stability parameters, which were used as candidate predictors of lightning occurrence. Five logistic regression forecast equations were created through careful selection and elimination of the candidate predictors. The resulting equations contain five to six predictors each. Results from four performance tests indicated that the equations showed an increase in skill over several standard forecasting methods, good reliability, an ability to distinguish between non-lightning and lightning days, and good accuracy measures and skill scores. Given the overall good performance, the 45 WS requested that the equations be transitioned to operations and added to the current set of tools used to determine the daily lightning probability of occurrence.
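
    As a sketch of how a monthly logistic regression equation of this kind maps predictor values to a lightning probability: the coefficients, intercept, and predictor values below are invented for illustration, not the AMU's actual equations, which use five to six predictors selected from sounding and flow-regime data:

```python
from math import exp

def lightning_probability(predictors, coefficients, intercept):
    """Logistic regression: P = 1 / (1 + exp(-(b0 + sum(bi * xi))))."""
    linear = intercept + sum(b * x for b, x in zip(coefficients, predictors))
    return 1.0 / (1.0 + exp(-linear))

# Hypothetical equation for one warm-season month: predictors might be a
# flow-regime lightning climatology, a layer relative humidity (%), and a
# stability index. All values below are illustrative assumptions.
coeffs = [2.1, 0.03, -0.4]
intercept = -2.0
predictors = [0.55, 65.0, 1.2]
p = lightning_probability(predictors, coeffs, intercept)
print(round(p, 3))
```

    The logistic form guarantees the output lies strictly between 0 and 1, which is one reason it outperformed the linear regression used in the NPTI even with the same predictors.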

  12. Applied Meteorology Unit (AMU)

    NASA Technical Reports Server (NTRS)

    Bauman, William; Lambert, Winifred; Wheeler, Mark; Barrett, Joe; Watson, Leela

    2007-01-01

    This report summarizes the Applied Meteorology Unit (AMU) activities for the second quarter of Fiscal Year 2007 (January - March 2007). Tasks reported on are: Objective Lightning Probability Tool, Peak Wind Tool for General Forecasting, Situational Lightning Climatologies for Central Florida, Anvil Threat Corridor Forecast Tool in AWIPS, Volume Averaged Height Integrated Radar Reflectivity (VAHIRR), Tower Data Skew-T Tool, and Weather Research and Forecasting (WRF) Model Sensitivity Study.

  13. Tool for Forecasting Cool-Season Peak Winds Across Kennedy Space Center and Cape Canaveral Air Force Station (CCAFS)

    NASA Technical Reports Server (NTRS)

    Barrett, Joe H., III; Roeder, William P.

    2010-01-01

    Peak wind speed is an important element in the 24-Hour and Weekly Planning Forecasts issued by the 45th Weather Squadron (45 WS). These forecasts are issued for planning operations at KSC/CCAFS. 45 WS wind advisories are issued for wind gusts greater than or equal to 25 kt, 35 kt, and 50 kt from the surface to 300 ft. The AMU developed a cool-season (Oct - Apr) tool to help the 45 WS forecast: the daily peak wind speed, the 5-minute average speed at the time of the peak wind, and the probability the peak speed is greater than or equal to 25 kt, 35 kt, and 50 kt. The AMU tool also forecasts the daily average wind speed from 30 ft to 60 ft. The Phase I and II tools were delivered as a Microsoft Excel graphical user interface (GUI). The Phase II tool was also delivered as a Meteorological Interactive Data Display System (MIDDS) GUI. The Phase I and II forecast methods were compared to climatology, 45 WS wind advisories, and North American Mesoscale model (MesoNAM) forecasts in a verification data set.

  14. A Bayesian Assessment of Seismic Semi-Periodicity Forecasts

    NASA Astrophysics Data System (ADS)

    Nava, F.; Quinteros, C.; Glowacka, E.; Frez, J.

    2016-01-01

    Among the schemes for earthquake forecasting, the search for semi-periodicity during large earthquakes in a given seismogenic region plays an important role. When considering earthquake forecasts based on semi-periodic sequence identification, the Bayesian formalism is a useful tool for: (1) assessing how well a given earthquake satisfies a previously made forecast; (2) re-evaluating the semi-periodic sequence probability; and (3) testing other prior estimations of the sequence probability. A comparison of Bayesian estimates with updated estimates of semi-periodic sequences that incorporate new data not used in the original estimates shows extremely good agreement, indicating that: (1) the probability that a semi-periodic sequence is not due to chance is an appropriate estimate for the prior sequence probability estimate; and (2) the Bayesian formalism does a very good job of estimating corrected semi-periodicity probabilities, using slightly less data than that used for updated estimates. The Bayesian approach is exemplified explicitly by its application to the Parkfield semi-periodic forecast, and results are given for its application to other forecasts in Japan and Venezuela.
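
    The Bayesian re-evaluation step described above can be sketched as a simple posterior update of the probability that a semi-periodic sequence is real rather than due to chance. The prior and likelihood values below are illustrative placeholders, not the values used for the Parkfield forecast:

```python
def update_sequence_probability(prior, p_event_if_real, p_event_if_chance):
    """Posterior probability that a semi-periodic sequence is real,
    given the observed forecast outcome, via Bayes' rule:
    P(real | outcome) = P(outcome | real) P(real) / P(outcome)."""
    evidence = prior * p_event_if_real + (1.0 - prior) * p_event_if_chance
    return prior * p_event_if_real / evidence

# Assumed prior from the original semi-periodicity analysis.
prior = 0.7
# The forecast window captured the earthquake: assumed more likely under
# the "real sequence" model than under the "chance" model.
posterior = update_sequence_probability(prior, 0.8, 0.3)
print(round(posterior, 3))
```

    A satisfied forecast raises the sequence probability; a missed forecast (swapping in the complementary likelihoods) would lower it, which is the re-evaluation role the abstract describes.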

  15. Objective Lightning Forecasting at Kennedy Space Center/Cape Canaveral Air Force Station using Cloud-to-Ground Lightning Surveillance System Data

    NASA Technical Reports Server (NTRS)

    Lambert, Winifred; Wheeler, Mark

    2004-01-01

    The 45th Weather Squadron (45 WS) forecasters at Cape Canaveral Air Force Station (CCAFS) in Florida include a probability of thunderstorm occurrence in their daily morning briefings. This information is used by personnel involved in determining the possibility of violating Launch Commit Criteria, evaluating Flight Rules for the Space Shuttle, and daily planning for ground operation activities on Kennedy Space Center (KSC)/CCAFS. Much of the current lightning probability forecast is based on a subjective analysis of model and observational data. The forecasters requested that a lightning probability forecast tool based on statistical analysis of historical warm-season (May - September) data be developed in order to increase the objectivity of the daily thunderstorm probability forecast. The tool is a set of statistical lightning forecast equations that provide a lightning occurrence probability for the day by 1100 UTC (0700 EDT) during the warm season. This study used 15 years (1989-2003) of warm season data to develop the objective forecast equations. The local CCAFS 1000 UTC sounding was used to calculate stability parameters for equation predictors. The Cloud-to-Ground Lightning Surveillance System (CGLSS) data were used to determine lightning occurrence for each day. The CGLSS data have been found to be more reliable indicators of lightning in the area than surface observations through local informal analyses. This work was based on the results from two earlier research projects. Everitt (1999) used surface observations and rawinsonde data to develop logistic regression equations that forecast the daily thunderstorm probability at CCAFS. The Everitt (1999) equations showed an improvement in skill over the Neumann-Pfeffer thunderstorm index (Neumann 1971), which uses multiple linear regression, and also persistence and climatology forecasts. Lericos et al. (2002) developed lightning distributions over the Florida peninsula based on specific flow regimes. 
The flow regimes were inferred from the average wind direction in the 1000-700 mb layer at Miami (MIA), Tampa (TBW), and Jacksonville (JAX), Florida, and the lightning data were from the National Lightning Detection Network. The results suggested that the daily flow regime may be an important predictor of lightning occurrence on KSC/CCAFS.

  16. Extended Statistical Short-Range Guidance for Peak Wind Speed Analyses at the Shuttle Landing Facility: Phase II Results

    NASA Technical Reports Server (NTRS)

    Lambert, Winifred C.

    2003-01-01

    This report describes the results from Phase II of the AMU's Short-Range Statistical Forecasting task for peak winds at the Shuttle Landing Facility (SLF). The peak wind speeds are an important forecast element for the Space Shuttle and Expendable Launch Vehicle programs. The 45th Weather Squadron and the Spaceflight Meteorology Group indicate that peak winds are challenging to forecast. The Applied Meteorology Unit was tasked to develop tools that aid in short-range forecasts of peak winds at tower sites of operational interest. A seven year record of wind tower data was used in the analysis. Hourly and directional climatologies by tower and month were developed to determine the seasonal behavior of the average and peak winds. Probability density functions (PDF) of peak wind speed were calculated to determine the distribution of peak speed with average speed. These provide forecasters with a means of determining the probability of meeting or exceeding a certain peak wind given an observed or forecast average speed. A PC-based Graphical User Interface (GUI) tool was created to display the data quickly.

  17. Anvil Forecast Tool in the Advanced Weather Interactive Processing System, Phase II

    NASA Technical Reports Server (NTRS)

    Barrett, Joe H., III

    2008-01-01

    Meteorologists from the 45th Weather Squadron (45 WS) and Spaceflight Meteorology Group have identified anvil forecasting as one of their most challenging tasks when predicting the probability of violations of the Lightning Launch Commit Criteria and Space Flight Rules. As a result, the Applied Meteorology Unit (AMU) created a graphical overlay tool for the Meteorological Interactive Data Display System (MIDDS) to indicate the threat of thunderstorm anvil clouds, using either observed or model forecast winds as input.

  18. An online tool for Operational Probabilistic Drought Forecasting System (OPDFS): a Statistical-Dynamical Framework

    NASA Astrophysics Data System (ADS)

    Zarekarizi, M.; Moradkhani, H.; Yan, H.

    2017-12-01

    The Operational Probabilistic Drought Forecasting System (OPDFS) is an online tool recently developed at Portland State University for operational agricultural drought forecasting. This is an integrated statistical-dynamical framework issuing probabilistic drought forecasts monthly for lead times of 1, 2, and 3 months. The statistical drought forecasting method utilizes copula functions in order to condition the future soil moisture values on the antecedent states. Due to the stochastic nature of land surface properties, the antecedent soil moisture states are uncertain; therefore, a data assimilation system based on Particle Filtering (PF) is employed to quantify the uncertainties associated with the initial condition of the land state, i.e. soil moisture. PF assimilates the satellite soil moisture data into the Variable Infiltration Capacity (VIC) land surface model and ultimately updates the simulated soil moisture. The OPDFS builds on the NOAA seasonal drought outlook by offering drought probabilities instead of qualitative ordinal categories and provides the user with the probability maps associated with a particular drought category. A retrospective assessment of the OPDFS showed that forecasting of the 2012 Great Plains and 2014 California droughts was possible at least one month in advance. The OPDFS offers timely assistance to water managers, stakeholders, and decision-makers to develop resilience against uncertain upcoming droughts.
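
    A minimal sketch of copula-based conditioning of future soil moisture on the antecedent state, using a Gaussian copula for tractability (the OPDFS may use other copula families; the correlation value and percentile thresholds below are assumptions of this sketch):

```python
from math import sqrt
from statistics import NormalDist

nd = NormalDist()

def drought_probability(antecedent_percentile, rho, drought_threshold_percentile):
    """P(future soil moisture <= drought threshold | antecedent state),
    under a Gaussian copula with correlation rho between the normal
    scores of antecedent and future soil moisture percentiles."""
    z0 = nd.inv_cdf(antecedent_percentile)
    z_thr = nd.inv_cdf(drought_threshold_percentile)
    # Conditional normal score: mean rho*z0, std sqrt(1 - rho^2).
    return nd.cdf((z_thr - rho * z0) / sqrt(1.0 - rho ** 2))

# Antecedent soil moisture at the 10th percentile; assumed copula
# correlation of 0.6 at a 1-month lead; drought = below 20th percentile.
print(round(drought_probability(0.10, 0.6, 0.20), 3))
```

    With a dry antecedent state the conditional drought probability exceeds the unconditional 20%, which is the kind of sharpening over a climatological outlook that the abstract describes.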

  19. Regional Earthquake Likelihood Models: A realm on shaky grounds?

    NASA Astrophysics Data System (ADS)

    Kossobokov, V.

    2005-12-01

    Seismology is juvenile and its appropriate statistical tools to-date may have a "medieval flavor" for those who hurry to apply a fuzzy language of a highly developed probability theory. To become "quantitatively probabilistic", earthquake forecasts/predictions must be defined with scientific accuracy. Following the most popular objectivists' viewpoint on probability, we cannot claim "probabilities" adequate without a long series of "yes/no" forecast/prediction outcomes. Without the "antiquated binary language" of "yes/no" certainty we cannot judge an outcome ("success/failure"), and, therefore, cannot quantify objectively a forecast/prediction method's performance. Likelihood scoring is one of the delicate tools of Statistics, which could be worthless or even misleading when inappropriate probability models are used. This is a basic loophole for a misuse of likelihood, as well as other statistical methods, in practice. The flaw could be avoided by an accurate verification of generic probability models on the empirical data. It is not an easy task in the frames of the Regional Earthquake Likelihood Models (RELM) methodology, which neither defines the forecast precision nor allows a means to judge the ultimate success or failure in specific cases. Hopefully, the RELM group realizes the problem and its members do their best to close the hole with an adequate, data-supported choice. Regretfully, this is not the case with the erroneous choice of Gerstenberger et al., who started the public web site with forecasts of expected ground shaking for `tomorrow' (Nature 435, 19 May 2005). Gerstenberger et al. have inverted the critical evidence of their study, i.e., the 15 years of recent seismic record accumulated in just one figure, which suggests rejecting with confidence above 97% "the generic California clustering model" used in automatic calculations.
As a result, since the date of publication in Nature, the United States Geological Survey website delivers to the public, emergency planners, and the media a forecast product that is based on wrong assumptions that violate the best-documented earthquake statistics in California, whose accuracy was not investigated, and whose forecasts were not tested in a rigorous way.

  20. A New Tool for Forecasting Solar Drivers of Severe Space Weather

    NASA Technical Reports Server (NTRS)

    Adams, J. H.; Falconer, D.; Barghouty, A. F.; Khazanov, I.; Moore, R.

    2010-01-01

    This poster describes a tool designed to forecast solar drivers of severe space weather. Most severe space weather is driven by solar flares and coronal mass ejections (CMEs). The strongest of these originate in active regions and are driven by the release of coronal free magnetic energy, and there is a positive correlation between an active region's free magnetic energy and the likelihood of flare and CME production. We use this positive correlation as the basis of our empirical space weather forecasting tool. The new tool takes a full-disk Michelson Doppler Imager (MDI) magnetogram, identifies strong magnetic field areas, associates them with NOAA active regions, and measures a free-magnetic-energy proxy. It uses an empirically derived forecasting function to convert the free-magnetic-energy proxy to an expected event rate. It adds up the expected event rates from all active regions on the disk to forecast the expected rate and probability of each class of events -- X-class flares, X&M class flares, CMEs, fast CMEs, and solar particle events (SPEs).
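
    The step from a whole-disk expected event rate to an event probability can be sketched as below. This is an illustrative reading of the abstract, not the paper's actual forecasting function: it assumes events in the forecast window arrive as independent Poisson processes, so per-region rates add and the probability of at least one event follows from the total rate. The numeric rates are hypothetical.

```python
import math

def event_probability(active_region_rates):
    """Combine per-active-region expected event rates (events per
    forecast window) into a whole-disk rate and an event probability.
    Assumes a Poisson arrival process (an illustrative assumption,
    not taken from the paper itself)."""
    total_rate = sum(active_region_rates)
    # P(at least one event) = 1 - exp(-rate) for a Poisson process
    probability = 1.0 - math.exp(-total_rate)
    return total_rate, probability

# Hypothetical X-class flare rates for three active regions
rate, prob = event_probability([0.05, 0.20, 0.01])
```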

  1. Objective Lightning Probability Forecast Tool Phase II

    NASA Technical Reports Server (NTRS)

    Lambert, Winnie

    2007-01-01

    This presentation describes the improvement of a set of lightning probability forecast equations that are used by the 45th Weather Squadron forecasters for their daily 1100 UTC (0700 EDT) weather briefing during the warm season months of May-September. This information is used for general scheduling of operations at Cape Canaveral Air Force Station and Kennedy Space Center. Forecasters at the Spaceflight Meteorology Group also make thunderstorm forecasts during Shuttle flight operations. Five modifications were made by the Applied Meteorology Unit: increased the period of record from 15 to 17 years, changed the method of calculating the flow regime of the day, calculated a new optimal layer relative humidity, used a new smoothing technique for the daily climatology, and used a new valid area. The test results indicated that the modified equations showed an increase in skill over the current equations, good reliability, and an ability to distinguish between lightning and non-lightning days.

  2. Communicating the Threat of a Tropical Cyclone to the Eastern Range

    NASA Technical Reports Server (NTRS)

    Winters, Katherine A.; Roeder, William P.; McAleenan, Mike; Belson, Brian L.; Shafer, Jaclyn A.

    2012-01-01

    The 45th Weather Squadron (45 WS) has developed a tool to help visualize the Wind Speed Probability product from the National Hurricane Center (NHC) and to help communicate that information to space launch customers and decision makers at the 45th Space Wing (45 SW) and Kennedy Space Center (KSC) located in east central Florida. This paper reviews previous work and presents the new visualization tool, including initial feedback as well as the pros and cons. The NHC began issuing their Wind Speed Probability product for tropical cyclones publicly in 2006. The 45 WS uses this product to provide a threat assessment to 45 SW and KSC leadership for risk evaluations with an approaching tropical cyclone. Although the wind speed probabilities convey the uncertainty of a tropical cyclone well, communicating this information to customers is a challenge. The 45 WS continually strives to provide the wind speed probability information to customers in a context which clearly communicates the threat of a tropical cyclone. First, an intern from the Florida Institute of Technology (FIT) Atmospheric Sciences department, sponsored by Scitor Corporation, independently evaluated the NHC wind speed probability product. This work was later extended into a M.S. thesis at FIT, partially funded by Scitor Corporation and KSC. A second thesis at FIT further extended the evaluation partially funded by KSC. Using this analysis, the 45 WS categorized the probabilities into five probability interpretation categories: Very Low, Low, Moderate, High, and Very High. These probability interpretation categories convert the forecast probability and forecast interval into easily understood categories that are consistent across all ranges of probabilities and forecast intervals. As a follow-on project, KSC funded a summer intern to evaluate the human factors of the probability interpretation categories, which ultimately refined some of the thresholds. 
The 45 WS created a visualization tool to express the timing and risk for multiple locations in a single graphic. Preliminary results of an ongoing FIT project will also be included in this paper. This project is developing a new method of assigning the probability interpretation categories and updating the evaluation of the performance of the NHC wind speed probability analysis.
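
    The category mapping described above can be sketched as a simple threshold lookup. The five category names come from the abstract; the numeric cut points below are hypothetical placeholders, since the actual thresholds (which depend on the forecast interval) are not given here.

```python
def interpretation_category(probability, thresholds=(0.05, 0.15, 0.30, 0.50)):
    """Map an NHC wind speed probability (0-1) to one of the five
    45 WS probability interpretation categories. The thresholds are
    illustrative assumptions, not the operational values."""
    labels = ("Very Low", "Low", "Moderate", "High", "Very High")
    for label, cut in zip(labels, thresholds):
        if probability < cut:
            return label
    return labels[-1]  # at or above the highest cut point
```

In practice the operational categories also fold in the forecast interval, so a real implementation would select a threshold set per interval rather than use one fixed tuple.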

  3. Added value of non-calibrated and BMA calibrated AEMET-SREPS probabilistic forecasts: the 24 January 2009 extreme wind event over Catalonia

    NASA Astrophysics Data System (ADS)

    Escriba, P. A.; Callado, A.; Santos, D.; Santos, C.; Simarro, J.; García-Moya, J. A.

    2009-09-01

    At 00 UTC on 24 January 2009, an explosive cyclogenesis that originated over the Atlantic Ocean reached its maximum intensity, with observed surface pressures below 970 hPa at its center, then located over the Bay of Biscay. During its path through southern France, this low caused strong westerly and north-westerly winds over the Iberian Peninsula, exceeding 150 km/h in some places. These extreme winds left 10 casualties in Spain, 8 of them in Catalonia. The aim of this work is to show whether there is added value in the short-range prediction of the 24 January 2009 strong winds when using the Short Range Ensemble Prediction System (SREPS) of the Spanish Meteorological Agency (AEMET), with respect to the operational forecasting tools. This study emphasizes two aspects of probabilistic forecasting: the ability of a 3-day forecast to warn of an extreme wind event, and the ability to quantify the predictability of the event, thereby adding value to the deterministic forecast. Two types of probabilistic wind forecasts are carried out: a non-calibrated one and one calibrated using Bayesian Model Averaging (BMA). AEMET runs the experimental SREPS twice a day (00 and 12 UTC). This system consists of 20 members constructed by integrating 5 limited-area models -- COSMO (COSMO), HIRLAM (HIRLAM Consortium), HRM (DWD), MM5 (NOAA) and UM (UKMO) -- at 25 km horizontal resolution. Each model uses 4 different initial and boundary conditions from the global models GFS (NCEP), GME (DWD), IFS (ECMWF) and UM. In this way a probabilistic forecast is obtained that takes into account initial condition, boundary condition, and model errors. BMA is a statistical tool for combining predictive probability functions from different sources. The BMA predictive probability density function (PDF) is a weighted average of PDFs centered on the individual bias-corrected forecasts.
The weights are equal to the posterior probabilities of the models generating the forecasts and reflect the skill of the ensemble members. Here BMA is applied to provide probabilistic forecasts of wind speed. In this work, several forecasts at different time ranges (H+72, H+48 and H+24) of 10-meter wind speed over Catalonia are verified subjectively at one of the instants of maximum intensity, 12 UTC 24 January 2009. On one hand, three probabilistic forecasts are compared: ECMWF EPS, non-calibrated SREPS, and calibrated SREPS. On the other hand, the relationship between predictability and the skill of the deterministic forecast is studied by looking at HIRLAM 0.16 deterministic forecasts of the event. Verification is focused on the location and intensity of the 10-meter wind speed, and 10-minute measurements from AEMET automatic ground stations are used as observations. The results indicate that SREPS is able to forecast, three days ahead, mean winds higher than 36 km/h and correctly localizes them with a significant probability of occurrence in the affected area. The probability is higher after BMA calibration of the ensemble. The fact that the probability of strong winds is high allows us to state that the predictability of the event is also high and, as a consequence, that deterministic forecasts are more reliable. This is confirmed when verifying HIRLAM deterministic forecasts against observed values.
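
    The BMA construction described above (a weighted average of member PDFs centered on bias-corrected forecasts) can be sketched as follows. This is a minimal illustration assuming Gaussian member kernels with a shared spread; a wind-speed application such as the one in this work would use a distribution appropriate for a non-negative variable, and the weights would be fitted over a training period rather than supplied directly.

```python
import math

def normal_pdf(x, mu, sigma):
    """Gaussian density with mean mu and standard deviation sigma."""
    z = (x - mu) / sigma
    return math.exp(-0.5 * z * z) / (sigma * math.sqrt(2.0 * math.pi))

def bma_pdf(x, member_means, weights, sigma):
    """BMA predictive density: a weighted average of member PDFs
    centered on the (bias-corrected) member forecasts. The weights
    should sum to 1 and reflect each member's skill."""
    return sum(w * normal_pdf(x, mu, sigma)
               for w, mu in zip(weights, member_means))
```

With equal weights and two members forecasting 10 and 12 m/s, `bma_pdf(11.0, [10.0, 12.0], [0.5, 0.5], 2.0)` is simply the average of the two component densities at 11 m/s.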

  4. Fine-temporal forecasting of outbreak probability and severity: Ross River virus in Western Australia.

    PubMed

    Koolhof, I S; Bettiol, S; Carver, S

    2017-10-01

    Health warnings of mosquito-borne disease risk require forecasts that are accurate at fine-temporal resolutions (weekly scales); however, most forecasting is coarse (monthly). We use environmental and Ross River virus (RRV) surveillance to predict weekly outbreak probabilities and incidence spanning tropical, semi-arid, and Mediterranean regions of Western Australia (1991-2014). Hurdle and linear models were used to predict outbreak probabilities and incidence respectively, using time-lagged environmental variables. Forecast accuracy was assessed by model fit and cross-validation. Residual RRV notification data were also examined against mitigation expenditure for one site, Mandurah 2007-2014. Models were predictive of RRV activity, except at one site (Capel). Minimum temperature was an important predictor of RRV outbreaks and incidence at all predicted sites. Precipitation was more likely to cause outbreaks and greater incidence among tropical and semi-arid sites. While variable, mitigation expenditure coincided positively with increased RRV incidence (r² = 0.21). Our research demonstrates capacity to accurately predict mosquito-borne disease outbreaks and incidence at fine-temporal resolutions. We apply our findings, developing a user-friendly tool enabling managers to easily adopt this research to forecast region-specific RRV outbreaks and incidence. Approaches here may be of value to fine-scale forecasting of RRV in other areas of Australia, and other mosquito-borne diseases.

  5. Validation of the CME Geomagnetic forecast alerts under COMESEP alert system

    NASA Astrophysics Data System (ADS)

    Dumbovic, Mateja; Srivastava, Nandita; Khodia, Yamini; Vršnak, Bojan; Devos, Andy; Rodriguez, Luciano

    2017-04-01

    An automated space weather alert system has been developed under the EU FP7 project COMESEP (COronal Mass Ejections and Solar Energetic Particles: http://comesep.aeronomy.be) to forecast solar energetic particle (SEP) and coronal mass ejection (CME) risk levels at Earth. The COMESEP alert system uses the automated detection tool CACTus to detect potentially threatening CMEs, a drag-based model (DBM) to predict their arrival, and a CME geo-effectiveness tool (CGFT) to predict their geomagnetic impact. Whenever CACTus detects a halo or partial halo CME and issues an alert, the DBM calculates its arrival time at Earth and the CGFT calculates its geomagnetic risk level. The geomagnetic risk level is calculated based on an estimation of the CME arrival probability and its likely geo-effectiveness, as well as an estimate of the geomagnetic-storm duration. We present the evaluation of the CME risk level forecast with the COMESEP alert system based on a study of geo-effective CMEs observed during 2014. The validation of the forecast tool is done by comparing the forecasts with observations. In addition, we test the success rate of the automatic forecasts (without human intervention) against the forecasts with human intervention using advanced versions of the DBM and CGFT (stand-alone tools available at the Hvar Observatory website: http://oh.geof.unizg.hr). The results indicate that the success rate of the forecast is higher with human intervention and using the more advanced tools. This work has received funding from the European Commission FP7 Project COMESEP (263252). We acknowledge the support of the Croatian Science Foundation under the project 6212 "Solar and Stellar Variability".

  6. ENSURF: multi-model sea level forecast - implementation and validation results for the IBIROOS and Western Mediterranean regions

    NASA Astrophysics Data System (ADS)

    Pérez, B.; Brower, R.; Beckers, J.; Paradis, D.; Balseiro, C.; Lyons, K.; Cure, M.; Sotillo, M. G.; Hacket, B.; Verlaan, M.; Alvarez Fanjul, E.

    2011-04-01

    ENSURF (Ensemble SURge Forecast) is a multi-model application for sea level forecasting that makes use of existing storm surge or circulation models operational today in Europe, as well as near-real-time tide gauge data in the region, with the following main goals: providing easy access to the existing forecasts, as well as to their performance and model validation, by means of an adequate visualization tool; and generating better sea level forecasts, including confidence intervals, by means of the Bayesian Model Averaging (BMA) technique. The system was developed and implemented within the ECOOP (C.No. 036355) European Project for the NOOS and IBIROOS regions, based on the MATROOS visualization tool developed by Deltares. Both systems are operational today at Deltares and Puertos del Estado respectively. The BMA technique generates an overall forecast probability density function (PDF) by making a weighted average of the individual forecast PDFs; the weights represent the probability that a model will give the correct forecast PDF and are determined and updated operationally based on the performance of the models during a recent training period. This implies the technique needs the availability of sea level data from tide gauges in near-real time. Results of the validation of the different models and of the BMA implementation for the main harbours will be presented for the IBIROOS and Western Mediterranean regions, where this kind of activity is performed for the first time. The work has proved useful to detect problems in some of the circulation models not previously well calibrated with sea level data, to identify the differences between baroclinic and barotropic models for sea level applications, and to confirm the general improvement of the BMA forecasts.

  7. Update to the Objective Lightning Probability Forecast Tool in Use at Cape Canaveral Air Force Station, Florida

    NASA Technical Reports Server (NTRS)

    Lambert, Winifred; Roeder, William

    2008-01-01

    This conference presentation describes the improvement of a set of lightning probability forecast equations that are used by the 45th Weather Squadron forecasters for their daily 1100 UTC (0700 EDT) weather briefing during the warm season months of May-September. This information is used for general scheduling of operations at Cape Canaveral Air Force Station and Kennedy Space Center. Forecasters at the Spaceflight Meteorology Group also make thunderstorm forecasts during Shuttle flight operations. Five modifications were made by the Applied Meteorology Unit: increased the period of record from 15 to 17 years, changed the method of calculating the flow regime of the day, calculated a new optimal layer relative humidity, used a new smoothing technique for the daily climatology, and used a new valid area. The test results indicated that the modified equations showed an increase in skill over the current equations, good reliability, and an ability to distinguish between lightning and non-lightning days.

  8. Update to the Objective Lightning Probability Forecast Tool in use at Cape Canaveral Air Force Station, Florida

    NASA Technical Reports Server (NTRS)

    Lambert, Winifred; Roeder, William

    2013-01-01

    This conference poster describes the improvement of a set of lightning probability forecast equations that are used by the 45th Weather Squadron forecasters for their daily 1100 UTC (0700 EDT) weather briefing during the warm season months of May-September. This information is used for general scheduling of operations at Cape Canaveral Air Force Station and Kennedy Space Center. Forecasters at the Spaceflight Meteorology Group also make thunderstorm forecasts during Shuttle flight operations. Five modifications were made by the Applied Meteorology Unit: increased the period of record from 15 to 17 years, changed the method of calculating the flow regime of the day, calculated a new optimal layer relative humidity, used a new smoothing technique for the daily climatology, and used a new valid area. The test results indicated that the modified equations showed an increase in skill over the current equations, good reliability, and an ability to distinguish between lightning and non-lightning days.

  9. Anvil Forecast Tool in the Advanced Weather Interactive Processing System (AWIPS)

    NASA Technical Reports Server (NTRS)

    Barrett, Joe H., III; Hood, Doris

    2009-01-01

    Launch Weather Officers (LWOs) from the 45th Weather Squadron (45 WS) and forecasters from the National Weather Service (NWS) Spaceflight Meteorology Group (SMG) have identified anvil forecasting as one of their most challenging tasks when predicting the probability of violating the Lightning Launch Commit Criteria (LLCC) (Krider et al. 2006; Space Shuttle Flight Rules (FR), NASA/JSC 2004). As a result, the Applied Meteorology Unit (AMU) developed a tool that creates an anvil threat corridor graphic that can be overlaid on satellite imagery using the Meteorological Interactive Data Display System (MIDDS; Short and Wheeler, 2002). The tool helps forecasters estimate the locations of thunderstorm anvils at one, two, and three hours into the future. It has been used extensively in launch and landing operations by both the 45 WS and SMG. The Advanced Weather Interactive Processing System (AWIPS) is now used along with MIDDS for weather analysis and display at SMG. In Phase I of this task, SMG tasked the AMU to transition the tool from MIDDS to AWIPS (Barrett et al., 2007). For Phase II, SMG requested the AMU make the Anvil Forecast Tool in AWIPS more configurable by creating the capability to read model gridded data from user-defined model files instead of hard-coded files. An NWS local AWIPS application called AGRID was used to accomplish this. In addition, SMG needed to be able to define the pressure levels for the model data, instead of hard-coding the bottom level as 300 mb and the top level as 150 mb. This paper describes the initial development of the Anvil Forecast Tool for MIDDS, followed by the migration of the tool to AWIPS in Phase I. It then gives a detailed presentation of the Phase II improvements to the AWIPS tool.

  10. Validation of the CME Geomagnetic Forecast Alerts Under the COMESEP Alert System

    NASA Astrophysics Data System (ADS)

    Dumbović, Mateja; Srivastava, Nandita; Rao, Yamini K.; Vršnak, Bojan; Devos, Andy; Rodriguez, Luciano

    2017-08-01

    Under the European Union 7th Framework Programme (EU FP7) project Coronal Mass Ejections and Solar Energetic Particles (COMESEP, http://comesep.aeronomy.be), an automated space weather alert system has been developed to forecast solar energetic particles (SEP) and coronal mass ejection (CME) risk levels at Earth. The COMESEP alert system uses the automated detection tool called Computer Aided CME Tracking (CACTus) to detect potentially threatening CMEs, a drag-based model (DBM) to predict their arrival, and a CME geoeffectiveness tool (CGFT) to predict their geomagnetic impact. Whenever CACTus detects a halo or partial halo CME and issues an alert, the DBM calculates its arrival time at Earth and the CGFT calculates its geomagnetic risk level. The geomagnetic risk level is calculated based on an estimation of the CME arrival probability and its likely geoeffectiveness, as well as an estimate of the geomagnetic storm duration. We present the evaluation of the CME risk level forecast with the COMESEP alert system based on a study of geoeffective CMEs observed during 2014. The validation of the forecast tool is made by comparing the forecasts with observations. In addition, we test the success rate of the automatic forecasts (without human intervention) against the forecasts with human intervention using advanced versions of the DBM and CGFT (independent tools available at the Hvar Observatory website, http://oh.geof.unizg.hr). The results indicate that the success rate of the forecast in its current form is unacceptably low for a realistic operation system. Human intervention improves the forecast, but the false-alarm rate remains unacceptably high. We discuss these results and their implications for possible improvement of the COMESEP alert system.

  11. Mixture EMOS model for calibrating ensemble forecasts of wind speed.

    PubMed

    Baran, S; Lerch, S

    2016-03-01

    Ensemble model output statistics (EMOS) is a statistical tool for post-processing forecast ensembles of weather variables obtained from multiple runs of numerical weather prediction models in order to produce calibrated predictive probability density functions. The EMOS predictive probability density function is given by a parametric distribution with parameters depending on the ensemble forecasts. We propose an EMOS model for calibrating wind speed forecasts based on weighted mixtures of truncated normal (TN) and log-normal (LN) distributions where model parameters and component weights are estimated by optimizing the values of proper scoring rules over a rolling training period. The new model is tested on wind speed forecasts of the 50-member European Centre for Medium-Range Weather Forecasts ensemble, the 11-member Aire Limitée Adaptation dynamique Développement International-Hungary Ensemble Prediction System ensemble of the Hungarian Meteorological Service, and the eight-member University of Washington mesoscale ensemble, and its predictive performance is compared with that of various benchmark EMOS models based on single parametric families and combinations thereof. The results indicate improved calibration of probabilistic forecasts and improved accuracy of point forecasts in comparison with the raw ensemble and climatological forecasts. The mixture EMOS model significantly outperforms the TN and LN EMOS methods; moreover, it provides better calibrated forecasts than the TN-LN combination model and offers increased flexibility while avoiding covariate selection problems. © 2016 The Authors. Environmetrics Published by John Wiley & Sons Ltd.
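
    The mixture predictive density described above can be sketched as follows. This is a minimal illustration of the TN/LN mixture form only: in the paper the distribution parameters are linked to the ensemble forecasts and estimated by optimizing proper scoring rules over a training period, whereas here they are plain inputs.

```python
import math

def _phi(z):
    """Standard normal density."""
    return math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)

def _Phi(z):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def tn_pdf(x, mu, sigma):
    """Normal density truncated to non-negative wind speeds (x >= 0)."""
    if x < 0:
        return 0.0
    return _phi((x - mu) / sigma) / (sigma * (1.0 - _Phi(-mu / sigma)))

def ln_pdf(x, m, s):
    """Log-normal density with log-scale parameters m, s."""
    if x <= 0:
        return 0.0
    return _phi((math.log(x) - m) / s) / (x * s)

def mixture_emos_pdf(x, w, tn_params, ln_params):
    """Weighted TN/LN mixture density with TN weight w in [0, 1]."""
    return w * tn_pdf(x, *tn_params) + (1.0 - w) * ln_pdf(x, *ln_params)
```

Setting `w = 1` recovers the pure TN EMOS density and `w = 0` the pure LN density, which is why the mixture nests both benchmark models.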

  12. Global Positioning System (GPS) Precipitable Water in Forecasting Lightning at Spaceport Canaveral

    NASA Technical Reports Server (NTRS)

    Kehrer, Kristen; Graf, Brian G.; Roeder, William

    2005-01-01

    Using meteorological data, focusing on precipitable water (PW), obtained during the 2000-2003 thunderstorm seasons in Central Florida, this paper will (1) assess the skill and accuracy measurements of the current Mazany forecasting tool and (2) provide additional forecasting tools that can be used in predicting lightning. Kennedy Space Center (KSC) and Cape Canaveral Air Force Station (CCAFS) are located in east Central Florida. KSC and CCAFS process and launch manned (NASA Space Shuttle) and unmanned (NASA and Air Force Expendable Launch Vehicle) space vehicles. One of the biggest cost impacts is unplanned launch scrubs due to inclement weather conditions such as thunderstorms. Each launch delay/scrub costs over a quarter million dollars, and the need to land the Shuttle at another landing site and return it to KSC costs approximately $1M. Given the amount of time lost and costs incurred, the ability to accurately forecast (predict) when lightning will occur can result in significant cost and time savings. All lightning prediction models were developed using binary logistic regression. Lightning is the dependent variable and is binary. The independent variables are the Precipitable Water (PW) value for a given time of the day, the change in PW over up to 12 hours, the electric field mill value, and the K-index value. In comparing the Mazany model results for the 1999 period B against actual observations for the 2000-2003 thunderstorm seasons, differences were found in the False Alarm Rate (FAR), Probability of Detection (POD) and Hit Rate (H). On average, the False Alarm Rate (FAR) increased by 58%, the Probability of Detection (POD) decreased by 31% and the Hit Rate decreased by 20%. In comparing the performance of the 6 hour forecast period to the performance of the 1.5 hour forecast period for the Mazany model, the FAR was lower by 15% and the Hit Rate was higher by 7%.
However, the POD for the 6 hour forecast period was lower by 16% as compared to the POD of the 1.5 hour forecast period. Neither forecast period performed at the accuracy measures expected. A 2-Hr Forecasting Tool was developed to support a Phase I Lightning Advisory, which requires a 30-minute lead time for predicting lightning.
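
    The binary logistic regression form described above can be sketched as follows. The four predictors (PW, its change over up to 12 hours, the electric field mill value, and the K-index) are the ones named in the abstract, but the coefficients below are hypothetical placeholders, not the fitted Mazany values.

```python
import math

def lightning_probability(pw, delta_pw, field_mill, k_index, beta):
    """Binary logistic regression for lightning occurrence.
    beta = (intercept, b_pw, b_delta_pw, b_field_mill, b_k_index);
    the coefficient values are assumptions for illustration only."""
    z = (beta[0] + beta[1] * pw + beta[2] * delta_pw
         + beta[3] * field_mill + beta[4] * k_index)
    return 1.0 / (1.0 + math.exp(-z))  # probability of lightning

# Hypothetical coefficients: more PW and a higher K-index raise the odds
beta = (-6.0, 0.08, 0.05, 0.002, 0.12)
p = lightning_probability(45.0, 5.0, 500.0, 30.0, beta)
```

An operational version would compare `p` against a decision threshold tuned to the desired balance between the probability of detection and the false alarm rate.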

  13. Statistical Short-Range Guidance for Peak Wind Speed Forecasts at Edwards Air Force Base, CA

    NASA Technical Reports Server (NTRS)

    Dreher, Joseph; Crawford, Winifred; Lafosse, Richard; Hoeth, Brian; Burns, Kerry

    2008-01-01

    The peak winds near the surface are an important forecast element for Space Shuttle landings. As defined in the Shuttle Flight Rules (FRs), there are peak wind thresholds that cannot be exceeded in order to ensure the safety of the shuttle during landing operations. The National Weather Service Spaceflight Meteorology Group (SMG) is responsible for weather forecasts for all shuttle landings. They indicate peak winds are a challenging parameter to forecast. To alleviate the difficulty in making such wind forecasts, the Applied Meteorology Unit (AMU) developed a personal computer based graphical user interface (GUI) for displaying peak wind climatology and probabilities of exceeding peak-wind thresholds for the Shuttle Landing Facility (SLF) at Kennedy Space Center. However, the shuttle must land at Edwards Air Force Base (EAFB) in southern California when weather conditions at Kennedy Space Center in Florida are not acceptable, so SMG forecasters requested that a similar tool be developed for EAFB. Marshall Space Flight Center (MSFC) personnel archived and performed quality control of 2-minute average and 10-minute peak wind speeds at each tower adjacent to the main runway at EAFB from 1997-2004. They calculated wind climatologies and probabilities of peak wind occurrence based on the average speed. The climatologies were calculated for each tower and month, and were stratified by hour, direction, and direction/hour. For the probabilities of peak wind occurrence, MSFC calculated empirical and modeled probabilities of meeting or exceeding specific 10-minute peak wind speeds using probability density functions. The AMU obtained and reformatted the data into Microsoft Excel PivotTables, which allow users to display different values with point-click-drag techniques. The GUI was then created from the PivotTables using Visual Basic for Applications code.
The GUI is run through a macro within Microsoft Excel and allows forecasters to quickly display and interpret peak wind climatology and likelihoods in a fast-paced operational environment. A summary of how the peak wind climatologies and probabilities were created and an overview of the GUI will be presented.
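
    The empirical side of the exceedance probabilities described above can be sketched as a simple frequency count over the archived peak wind sample. This is an illustrative reading of the abstract only; the operational tool also fits parametric probability density functions, and the real climatologies are stratified by tower, month, hour, and direction.

```python
def exceedance_probability(peak_winds, threshold):
    """Empirical probability that the 10-minute peak wind meets or
    exceeds `threshold`, estimated from a sample of archived peak
    speeds (same units as the threshold)."""
    if not peak_winds:
        raise ValueError("need at least one observation")
    hits = sum(1 for w in peak_winds if w >= threshold)
    return hits / len(peak_winds)

# Hypothetical sample of 10-minute peak speeds (knots)
p = exceedance_probability([12, 18, 22, 25, 31, 28, 15, 20], 25)
```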

  14. Flow Regime Based Climatologies of Lightning Probabilities for Spaceports and Airports

    NASA Technical Reports Server (NTRS)

    Bauman, William H., III; Sharp, David; Spratt, Scott; Lafosse, Richard A.

    2008-01-01

    The objective of this work was to provide forecasters with a tool to indicate the warm season climatological probability of one or more lightning strikes within a circle at a site within a specified time interval. This paper described the AMU work conducted in developing flow regime based climatologies of lightning probabilities for the SLF and seven airports in the NWS MLB CWA in east-central Florida. The paper also described the GUI developed by the AMU that is used to display the data for the operational forecasters. There were challenges working with gridded lightning data as well as the code that accompanied the gridded data. The AMU modified the provided code to be able to produce the climatologies of lightning probabilities based on eight flow regimes for 5-, 10-, 20-, and 30-n mi circles centered on eight sites in 1-, 3-, and 6-hour increments.

  15. Spatial organization of foreshocks as a tool to forecast large earthquakes.

    PubMed

    Lippiello, E; Marzocchi, W; de Arcangelis, L; Godano, C

    2012-01-01

    An increase in the number of smaller magnitude events, retrospectively named foreshocks, is often observed before large earthquakes. We show that the linear density probability of earthquakes occurring before and after small or intermediate mainshocks displays a symmetrical behavior, indicating that the size of the area fractured during the mainshock is encoded in the foreshock spatial organization. This observation can be used to discriminate spatial clustering due to foreshocks from that induced by aftershocks and is implemented in an alarm-based model to forecast m > 6 earthquakes. A retrospective study of the last 19 years of the Southern California catalog shows that the daily occurrence probability presents isolated peaks closely located in time and space to the epicenters of five of the six m > 6 earthquakes. We find daily probabilities as high as 25% (in cells of size 0.04 × 0.04 deg²), with significant probability gains with respect to standard models.

  16. Spatial organization of foreshocks as a tool to forecast large earthquakes

    PubMed Central

    Lippiello, E.; Marzocchi, W.; de Arcangelis, L.; Godano, C.

    2012-01-01

    An increase in the number of smaller magnitude events, retrospectively named foreshocks, is often observed before large earthquakes. We show that the linear density probability of earthquakes occurring before and after small or intermediate mainshocks displays a symmetrical behavior, indicating that the size of the area fractured during the mainshock is encoded in the foreshock spatial organization. This observation can be used to discriminate spatial clustering due to foreshocks from that induced by aftershocks and is implemented in an alarm-based model to forecast m > 6 earthquakes. A retrospective study of the last 19 years of the Southern California catalog shows that the daily occurrence probability presents isolated peaks closely located in time and space to the epicenters of five of the six m > 6 earthquakes. We find daily probabilities as high as 25% (in cells of size 0.04 × 0.04 deg²), with significant probability gains with respect to standard models. PMID:23152938

  17. Communicating Uncertainty in Volcanic Ash Forecasts: Decision-Making and Information Preferences

    NASA Astrophysics Data System (ADS)

    Mulder, Kelsey; Black, Alison; Charlton-Perez, Andrew; McCloy, Rachel; Lickiss, Matthew

    2016-04-01

    The Robust Assessment and Communication of Environmental Risk (RACER) consortium, an interdisciplinary research team focusing on the communication of uncertainty with respect to natural hazards, hosted a Volcanic Ash Workshop to discuss issues related to volcanic ash forecasting, especially forecast uncertainty. Part of the workshop was a decision game in which participants, including forecasters, academics, and members of the aviation industry, were given hypothetical volcanic ash concentration forecasts and asked whether they would approve a given flight path. The uncertainty information was presented in different formats, including hazard maps, line graphs, and percent probabilities. Results from the decision game will be presented with a focus on information preferences, understanding of the forecasts, and whether different formats of the same volcanic ash forecast resulted in different flight decisions. The implications of this research will help the design and presentation of volcanic ash plume decision tools and can also help advise the design of other natural hazard information.

  18. Anvil Forecast Tool in the Advanced Weather Interactive Processing System

    NASA Technical Reports Server (NTRS)

    Barrett, Joe H., III; Hood, Doris

    2009-01-01

    Meteorologists from the 45th Weather Squadron (45 WS) and National Weather Service Spaceflight Meteorology Group (SMG) have identified anvil forecasting as one of their most challenging tasks when predicting the probability of violations of the Lightning Launch Commit Criteria and Space Shuttle Flight Rules. As a result, the Applied Meteorology Unit (AMU) was tasked to create a graphical overlay tool for the Meteorological Interactive Data Display System (MIDDS) that indicates the threat of thunderstorm anvil clouds, using either observed or model forecast winds as input. The tool creates a graphic depicting the potential location of thunderstorm anvils one, two, and three hours into the future. The locations are based on the average of the upper level observed or forecasted winds. The graphic includes 10 and 20 n mi standoff circles centered at the location of interest, as well as one-, two-, and three-hour arcs in the upwind direction. The arcs extend outward across a 30° sector width based on a previous AMU study that determined thunderstorm anvils move in a direction plus or minus 15° of the upper-level wind direction. The AMU was then tasked to transition the tool to the Advanced Weather Interactive Processing System (AWIPS). SMG later requested the tool be updated to provide more flexibility and quicker access to model data. This presentation describes the work performed by the AMU to transition the tool into AWIPS, as well as the subsequent improvements made to the tool.
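
    The arc geometry the abstract describes, with distances proportional to upper-level wind speed and swept across a 30° sector centered on the upwind direction, can be sketched as below. This is an illustrative flat-earth approximation only, not the AMU's MIDDS/AWIPS implementation; the coordinates and winds in the usage are hypothetical.

```python
import math

def anvil_arc_points(lat, lon, wind_dir_deg, wind_speed_kt, hours,
                     half_sector_deg=15.0, n_points=7):
    """Approximate points along the upwind anvil-threat arc.

    wind_dir_deg is the direction the upper-level wind blows FROM
    (meteorological convention), so the threat arc lies upwind of the
    site.  Distances use a flat-earth approximation (1 n mi = 1/60
    degree of latitude), which is adequate at these scales.
    """
    dist_nm = wind_speed_kt * hours  # distance an anvil travels in `hours`
    points = []
    for i in range(n_points):
        # sweep the 30-degree sector centered on the upwind bearing
        bearing = (wind_dir_deg - half_sector_deg
                   + i * (2.0 * half_sector_deg / (n_points - 1)))
        rad = math.radians(bearing)
        dlat = dist_nm / 60.0 * math.cos(rad)
        dlon = dist_nm / 60.0 * math.sin(rad) / math.cos(math.radians(lat))
        points.append((lat + dlat, lon + dlon))
    return points

# Hypothetical example: a site near CCAFS, 30 kt upper-level wind from the west
two_hour_arc = anvil_arc_points(28.5, -80.6, 270.0, 30.0, 2)
```

With a west wind, every point on the arc falls west of the site, as expected for an upwind threat region.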

  19. Probability fire weather forecasts ... show promise in 3-year trial

    Treesearch

    Paul G. Scowcroft

    1970-01-01

    Probability fire weather forecasts were compared with categorical and climatological forecasts in a trial in southern California during the 1965-1967 fire seasons. Equations were developed to express the reliability of forecasts and degree of skill shown by the forecaster. Evaluation of 336 daily reports suggests that probability forecasts were more reliable. For...

  20. Economic assessment of flood forecasts for a risk-averse decision-maker

    NASA Astrophysics Data System (ADS)

    Matte, Simon; Boucher, Marie-Amélie; Boucher, Vincent; Fortier-Filion, Thomas-Charles

    2017-04-01

    A large effort has been made over the past 10 years to promote the operational use of probabilistic or ensemble streamflow forecasts. It has also been suggested in past studies that ensemble forecasts might possess a greater economic value than deterministic forecasts. However, the vast majority of recent hydro-economic literature is based on the cost-loss ratio framework, which might be appealing for its simplicity and intuitiveness. One important drawback of the cost-loss ratio is that it implicitly assumes a risk-neutral decision maker. By definition, a risk-neutral individual is indifferent to forecasts' sharpness: as long as forecasts agree with observations on average, the risk-neutral individual is satisfied. A risk-averse individual, however, is sensitive to the level of precision (sharpness) of forecasts. This person is willing to pay to increase his or her certainty about future events. In fact, this is how insurance companies operate: the probability of seeing one's house burn down is relatively low, so the expected cost related to such an event is also low. However, people are willing to buy insurance to avoid the risk, however small, of losing everything. Similarly, in a context where people's safety and property are at stake, the typical decision maker is more risk-averse than risk-neutral. Consequently, the cost-loss ratio is not the most appropriate tool to assess the economic value of flood forecasts. This presentation describes a more realistic framework for assessing the economic value of such forecasts for flood mitigation purposes. Borrowing from economics, the Constant Absolute Risk Aversion (CARA) utility function is the central tool of this new framework. Utility functions allow explicitly accounting for the level of risk aversion of the decision maker and fully exploiting the information related to ensemble forecasts' uncertainty.
Three concurrent ensemble streamflow forecasting systems are compared in terms of quality (comparison with observed values) and in terms of their economic value. This assessment is performed for lead times of one to five days. The three systems are: (1) simple statistically dressed deterministic forecasts, (2) forecasts based on meteorological ensembles and (3) a variant of the latter that also includes an estimation of state-variable uncertainty. The comparison takes place on the Montmorency River, a small flood-prone watershed in south central Quebec, Canada. The results show that forecast quality as assessed by well-known tools such as the Continuous Ranked Probability Score or the reliability diagram does not necessarily translate directly into economic value, especially if the decision maker is not risk-neutral. In addition, results show that the economic value of forecasts for a risk-averse decision maker is very much influenced by the most extreme members of ensemble forecasts (upper tail of the predictive distributions). This study provides a new basis for further improvement of our comprehension of the complex interactions between forecast uncertainty, risk aversion and decision-making.
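
    The CARA utility function at the center of this framework can be illustrated with a short sketch. The risk-aversion coefficient and outcomes below are hypothetical, and the certainty equivalent simply treats each ensemble member as an equiprobable outcome; this is a minimal reading of the approach, not the authors' full assessment procedure.

```python
import math

def cara_utility(wealth, risk_aversion):
    """CARA utility u(w) = -exp(-a*w)/a, with risk aversion a > 0."""
    return -math.exp(-risk_aversion * wealth) / risk_aversion

def certainty_equivalent(outcomes, risk_aversion):
    """Sure amount valued equally to the risky prospect.

    Each ensemble member is an equiprobable outcome; the certainty
    equivalent is u^-1 of the mean utility.  For a risk-averse agent
    it lies below the expected value of the outcomes, and the gap
    grows with the spread of the ensemble.
    """
    mean_u = sum(cara_utility(w, risk_aversion) for w in outcomes) / len(outcomes)
    return -math.log(-risk_aversion * mean_u) / risk_aversion

# Hypothetical gamble: a 50/50 prospect of 0 or 100 monetary units
ce = certainty_equivalent([0.0, 100.0], risk_aversion=0.01)
```

The certainty equivalent of the [0, 100] gamble falls below its expected value of 50, which is precisely why a risk-averse decision maker values sharp ensemble forecasts more than the risk-neutral cost-loss framework suggests.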

  1. More Intense Experiences, Less Intense Forecasts: Why People Overweight Probability Specifications in Affective Forecasts

    PubMed Central

    Buechel, Eva C.; Zhang, Jiao; Morewedge, Carey K.; Vosgerau, Joachim

    2014-01-01

    We propose that affective forecasters overestimate the extent to which experienced hedonic responses to an outcome are influenced by the probability of its occurrence. The experience of an outcome (e.g., winning a gamble) is typically more affectively intense than the simulation of that outcome (e.g., imagining winning a gamble) upon which the affective forecast for it is based. We suggest that, as a result, experiencers allocate a larger share of their attention toward the outcome (e.g., winning the gamble) and less to its probability specifications than do affective forecasters. Consequently, hedonic responses to an outcome are less sensitive to its probability specifications than are affective forecasts for that outcome. The results of 6 experiments provide support for our theory. Affective forecasters overestimated how sensitive experiencers would be to the probability of positive and negative outcomes (Experiments 1 and 2). Consistent with our attentional account, differences in sensitivity to probability specifications disappeared when the attention of forecasters was diverted from probability specifications (Experiment 3) or when the attention of experiencers was drawn toward probability specifications (Experiment 4). Finally, differences in sensitivity to probability specifications between forecasters and experiencers were diminished when the forecasted outcome was more affectively intense (Experiments 5 and 6). PMID:24128184

  2. More intense experiences, less intense forecasts: why people overweight probability specifications in affective forecasts.

    PubMed

    Buechel, Eva C; Zhang, Jiao; Morewedge, Carey K; Vosgerau, Joachim

    2014-01-01

    We propose that affective forecasters overestimate the extent to which experienced hedonic responses to an outcome are influenced by the probability of its occurrence. The experience of an outcome (e.g., winning a gamble) is typically more affectively intense than the simulation of that outcome (e.g., imagining winning a gamble) upon which the affective forecast for it is based. We suggest that, as a result, experiencers allocate a larger share of their attention toward the outcome (e.g., winning the gamble) and less to its probability specifications than do affective forecasters. Consequently, hedonic responses to an outcome are less sensitive to its probability specifications than are affective forecasts for that outcome. The results of 6 experiments provide support for our theory. Affective forecasters overestimated how sensitive experiencers would be to the probability of positive and negative outcomes (Experiments 1 and 2). Consistent with our attentional account, differences in sensitivity to probability specifications disappeared when the attention of forecasters was diverted from probability specifications (Experiment 3) or when the attention of experiencers was drawn toward probability specifications (Experiment 4). Finally, differences in sensitivity to probability specifications between forecasters and experiencers were diminished when the forecasted outcome was more affectively intense (Experiments 5 and 6).

  3. Intrinsic Property Forecast Index (iPFI) as a Rule of Thumb for Medicinal Chemists to Remove a Phototoxicity Liability.

    PubMed

    Fournier, Jean-François; Bouix-Peter, Claire; Duvert, Denis; Luzy, Anne-Pascale; Ouvry, Gilles

    2018-04-12

    Phototoxicity occurs when UV irradiation causes otherwise benign compounds to become irritant, sensitizers, or even genotoxic. This toxicity is particularly a concern after topical application and in dermatological programs where skin irritation can be incompatible with the desired therapeutic outcome. This brief article establishes that the intrinsic property forecast index (iPFI) can be used to evaluate the probability of a compound being phototoxic and gives medicinal chemists a practical tool to handle this liability.

  4. Completion of the Edward Air Force Base Statistical Guidance Wind Tool

    NASA Technical Reports Server (NTRS)

    Dreher, Joseph G.

    2008-01-01

    The goal of this task was to develop a GUI using EAFB wind tower data similar to the KSC SLF peak wind tool that is already in operations at SMG. In 2004, MSFC personnel began work to replicate the KSC SLF tool using several wind towers at EAFB. They completed the analysis and QC of the data, but due to higher priority work did not start development of the GUI. MSFC personnel calculated wind climatologies and probabilities of 10-minute peak wind occurrence based on the 2-minute average wind speed for several EAFB wind towers. Once the data were QC'ed and analyzed, the climatologies were calculated following the methodology outlined in Lambert (2003). The climatologies were calculated for each tower and month, and then were stratified by hour, direction (10° sectors), and direction (45° sectors)/hour. For all climatologies, MSFC calculated the mean, standard deviation and observation counts of the 2-minute average and 10-minute peak wind speeds. MSFC personnel also calculated empirical and modeled probabilities of meeting or exceeding specific 10-minute peak wind speeds using PDFs. The empirical PDFs were asymmetrical and bounded on the left by the 2-minute average wind speed. They calculated the parametric PDFs by fitting the GEV distribution to the empirical distributions. Parametric PDFs were calculated in order to smooth and interpolate over variations in the observed values due to possible under-sampling of certain peak winds, and to estimate probabilities associated with average winds outside the observed range. MSFC calculated the individual probabilities of meeting or exceeding specific 10-minute peak wind speeds by integrating the area under each curve. The probabilities assist SMG forecasters in assessing the shuttle Flight Rules (FR) for various 2-minute average wind speeds. The AMU obtained the processed EAFB data from Dr. Lee Burns of MSFC and reformatted them for input to Excel PivotTables, which allow users to display different values with point-click-drag techniques. The GUI was created from the PivotTables using VBA code. It is run through a macro within Excel and allows forecasters to quickly display and interpret peak wind climatology and probabilities in a fast-paced operational environment. The GUI was designed to look and operate exactly the same as the KSC SLF tool since SMG forecasters were already familiar with that product. SMG feedback was continually incorporated into the GUI, ensuring the end product met their needs. The final version of the GUI, along with all climatologies, PDFs, and probabilities, has been delivered to SMG and will be put into operational use.
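
    The exceedance probabilities described above, obtained by fitting a GEV distribution and integrating its tail, can be sketched as follows. The GEV parameters here are hypothetical placeholders, not the fitted EAFB values; in practice the distribution would be fitted separately for each tower, month, and 2-minute average wind bin.

```python
import math

def gev_cdf(x, loc, scale, shape):
    """CDF of the Generalized Extreme Value distribution.

    Convention: F(x) = exp(-t(x)), with
      t(x) = (1 + shape*(x - loc)/scale)**(-1/shape)  for shape != 0,
      t(x) = exp(-(x - loc)/scale)                    in the Gumbel limit.
    """
    if abs(shape) < 1e-12:
        t = math.exp(-(x - loc) / scale)
    else:
        z = 1.0 + shape * (x - loc) / scale
        if z <= 0.0:
            # outside the support: below the lower bound (shape > 0)
            # or above the upper bound (shape < 0)
            return 0.0 if shape > 0 else 1.0
        t = z ** (-1.0 / shape)
    return math.exp(-t)

def peak_exceedance_probability(threshold_kt, loc, scale, shape):
    """P(10-minute peak wind >= threshold) under a fitted GEV."""
    return 1.0 - gev_cdf(threshold_kt, loc, scale, shape)

# Hypothetical fit for one tower/month/average-wind bin
p_25kt = peak_exceedance_probability(25.0, loc=18.0, scale=4.0, shape=0.1)
p_35kt = peak_exceedance_probability(35.0, loc=18.0, scale=4.0, shape=0.1)
```

As expected, the exceedance probability decreases monotonically with the threshold, and the parametric form lets the tool extrapolate beyond the observed peak winds.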

  5. Statistical Short-Range Guidance for Peak Wind Speed Forecasts at Edwards Air Force Base, CA

    NASA Technical Reports Server (NTRS)

    Dreher, Joseph G.; Crawford, Winifred; Lafosse, Richard; Hoeth, Brian; Burns, Kerry

    2009-01-01

    The peak winds near the surface are an important forecast element for space shuttle landings. As defined in the Flight Rules (FR), there are peak wind thresholds that cannot be exceeded in order to ensure the safety of the shuttle during landing operations. The National Weather Service Spaceflight Meteorology Group (SMG) is responsible for weather forecasts for all shuttle landings, and is required to issue surface average and 10-minute peak wind speed forecasts. They indicate peak winds are a challenging parameter to forecast. To alleviate the difficulty in making such wind forecasts, the Applied Meteorology Unit (AMU) developed a PC-based graphical user interface (GUI) for displaying peak wind climatology and probabilities of exceeding peak wind thresholds for the Shuttle Landing Facility (SLF) at Kennedy Space Center (KSC; Lambert 2003). However, the shuttle occasionally may land at Edwards Air Force Base (EAFB) in southern California when weather conditions at KSC in Florida are not acceptable, so SMG forecasters requested a similar tool be developed for EAFB.

  6. Cardiac catheterization laboratory inpatient forecast tool: a prospective evaluation

    PubMed Central

    Flanagan, Eleni; Siddiqui, Sauleh; Appelbaum, Jeff; Kasper, Edward K; Levin, Scott

    2016-01-01

    Objective To develop and prospectively evaluate a web-based tool that forecasts the daily bed need for admissions from the cardiac catheterization laboratory using routinely available clinical data within electronic medical records (EMRs). Methods The forecast model was derived using a 13-month retrospective cohort of 6384 catheterization patients. Predictor variables such as demographics, scheduled procedures, and clinical indicators mined from free-text notes were input to a multivariable logistic regression model that predicted the probability of inpatient admission. The model was embedded into a web-based application connected to the local EMR system and used to support bed management decisions. After implementation, the tool was prospectively evaluated for accuracy on a 13-month test cohort of 7029 catheterization patients. Results The forecast model predicted admission with an area under the receiver operating characteristic curve of 0.722. Daily aggregate forecasts were accurate to within one bed for 70.3% of days and within three beds for 97.5% of days during the prospective evaluation period. The web-based application housing the forecast model was used by cardiology providers in practice to estimate daily admissions from the catheterization laboratory. Discussion The forecast model identified older age, male gender, invasive procedures, coronary artery bypass grafts, and a history of congestive heart failure as qualities indicating a patient was at increased risk for admission. Diagnostic procedures and less acute clinical indicators decreased patients’ risk of admission. Despite the site-specific limitations of the model, these findings were supported by the literature. Conclusion Data-driven predictive analytics may be used to accurately forecast daily demand for inpatient beds for cardiac catheterization patients. Connecting these analytics to EMR data sources has the potential to provide advanced operational decision support. PMID:26342217
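
    The forecast mechanics the abstract describes, a per-patient logistic model whose probabilities are aggregated into a daily bed-need estimate, can be sketched as below. The coefficients and features are hypothetical stand-ins; the actual model was fitted to 13 months of catheterization-laboratory EMR data with many more predictors.

```python
import math

def admission_probability(features, coefficients, intercept):
    """Logistic model: p = 1 / (1 + exp(-(b0 + b . x)))."""
    score = intercept + sum(c * x for c, x in zip(coefficients, features))
    return 1.0 / (1.0 + math.exp(-score))

def expected_bed_need(patients, coefficients, intercept):
    """Daily aggregate forecast: sum of per-patient admission probabilities."""
    return sum(admission_probability(p, coefficients, intercept)
               for p in patients)

# Hypothetical two-feature model: [invasive_procedure, diagnostic_only]
coefs, intercept = [1.2, -0.5], -1.0
todays_patients = [[1.0, 0.0],   # scheduled invasive procedure
                   [0.0, 1.0]]   # diagnostic-only case
beds = expected_bed_need(todays_patients, coefs, intercept)
```

Summing probabilities rather than thresholding each patient is what lets the daily forecast land within one bed of the realized count most days, as reported in the evaluation.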

  7. Peak Wind Tool for General Forecasting

    NASA Technical Reports Server (NTRS)

    Barrett, Joe H., III

    2010-01-01

    The expected peak wind speed of the day is an important forecast element in the 45th Weather Squadron's (45 WS) daily 24-Hour and Weekly Planning Forecasts. The forecasts are used for ground and space launch operations at the Kennedy Space Center (KSC) and Cape Canaveral Air Force Station (CCAFS). The 45 WS also issues wind advisories for KSC/CCAFS when they expect wind gusts to meet or exceed 25 kt, 35 kt and 50 kt thresholds at any level from the surface to 300 ft. The 45 WS forecasters have indicated peak wind speeds are challenging to forecast, particularly in the cool season months of October through April. In Phase I of this task, the Applied Meteorology Unit (AMU) developed a tool to help the 45 WS forecast non-convective winds at KSC/CCAFS for the 24-hour period of 0800 to 0800 local time. The tool was delivered as a Microsoft Excel graphical user interface (GUI). The GUI displayed the forecast of peak wind speed, 5-minute average wind speed at the time of the peak wind, timing of the peak wind and probability the peak speed would meet or exceed 25 kt, 35 kt and 50 kt. For the current task (Phase II), the 45 WS requested additional observations be used for the creation of the forecast equations by expanding the period of record (POR). Additional parameters were evaluated as predictors, including wind speeds between 500 ft and 3000 ft, static stability classification, Bulk Richardson Number, mixing depth, vertical wind shear, temperature inversion strength and depth, and wind direction. Using a verification data set, the AMU compared the performance of the Phase I and II prediction methods. Just as in Phase I, the tool was delivered as a Microsoft Excel GUI. The 45 WS requested the tool also be available in the Meteorological Interactive Data Display System (MIDDS). The AMU first expanded the POR by two years by adding tower observations, surface observations and CCAFS (XMR) soundings for the cool season months of March 2007 to April 2009.
The POR was expanded again by six years, from October 1996 to April 2002, by interpolating 1000-ft sounding data to 100-ft increments. The Phase II developmental data set included observations for the cool season months of October 1996 to February 2007. The AMU calculated 68 candidate predictors from the XMR soundings, including 19 stability parameters, 48 wind speed parameters and one wind shear parameter. Each day in the data set was stratified by synoptic weather pattern, low-level wind direction, precipitation and Richardson Number, for a total of 60 stratification methods. Linear regression equations, using the 68 predictors and 60 stratification methods, were created for the tool's three forecast parameters: the highest peak wind speed of the day (PWSD), the 5-minute average speed at the same time (AWSD), and the timing of the PWSD. For PWSD and AWSD, 30 Phase II methods were selected for evaluation in the verification data set. For timing of the PWSD, 12 Phase II methods were selected for evaluation. The verification data set contained observations for the cool season months of March 2007 to April 2009. The data set was used to compare the Phase I and II forecast methods to climatology, model forecast winds and wind advisories issued by the 45 WS. The model forecast winds were derived from the 0000 and 1200 UTC runs of the 12-km North American Mesoscale (MesoNAM) model. The forecast methods that performed best in the verification data set were selected for the Phase II version of the tool. For PWSD and AWSD, linear regression equations based on MesoNAM forecasts performed significantly better than the Phase I and II methods. For timing of the PWSD, none of the methods performed significantly better than climatology. The AMU then developed the Microsoft Excel and MIDDS GUIs. The GUIs display the forecasts for PWSD, AWSD and the probability the PWSD will meet or exceed 25 kt, 35 kt and 50 kt.
Since none of the prediction methods for timing of the PWSD performed significantly better than climatology, the tool no longer displays this predictand. The Excel and MIDDS GUIs display forecasts for Day-1 to Day-3 and Day-1 to Day-5, respectively. The Excel GUI uses MesoNAM forecasts as input, while the MIDDS GUI uses input from the MesoNAM and Global Forecast System models. Based on feedback from the 45 WS, the AMU added the daily average wind speed from 30 ft to 60 ft to the tool, which is one of the parameters in the 24-Hour and Weekly Planning Forecasts issued by the 45 WS. In addition, the AMU expanded the MIDDS GUI to include forecasts out to Day-7.

  8. Portals for Real-Time Earthquake Data and Forecasting: Challenge and Promise (Invited)

    NASA Astrophysics Data System (ADS)

    Rundle, J. B.; Holliday, J. R.; Graves, W. R.; Feltstykket, R.; Donnellan, A.; Glasscoe, M. T.

    2013-12-01

    Earthquake forecasts have been computed by a variety of countries world-wide for over two decades. For the most part, forecasts have been computed for insurance, reinsurance and underwriters of catastrophe bonds. However, recent events clearly demonstrate that mitigating personal risk is becoming the responsibility of individual members of the public. Open access to a variety of web-based forecasts, tools, utilities and information is therefore required. Portals for data and forecasts present particular challenges, and require the development of both apps and the client/server architecture to deliver the basic information in real time. The basic forecast model we consider is the Natural Time Weibull (NTW) method (JBR et al., Phys. Rev. E, 86, 021106, 2012). This model uses small earthquakes ('seismicity-based models') to forecast the occurrence of large earthquakes, via data-mining algorithms combined with the ANSS earthquake catalog. This method computes large earthquake probabilities using the number of small earthquakes that have occurred in a region since the last large earthquake. Localizing these forecasts in space so that global forecasts can be computed in real time presents special algorithmic challenges, which we describe in this talk. Using 25 years of data from the ANSS California-Nevada catalog of earthquakes, we compute real-time global forecasts at a grid scale of 0.1°. We analyze and monitor the performance of these models using the standard tests, which include the Reliability/Attributes and Receiver Operating Characteristic (ROC) tests. It is clear from much of the analysis that data quality is a major limitation on the accurate computation of earthquake probabilities. We discuss the challenges of serving up these datasets over the web on web-based platforms such as those at www.quakesim.org, www.e-decider.org, and www.openhazards.com.

  9. An operational hydrological ensemble prediction system for the city of Zurich (Switzerland): skill, case studies and scenarios

    NASA Astrophysics Data System (ADS)

    Addor, N.; Jaun, S.; Zappa, M.

    2011-01-01

    The Sihl River flows through Zurich, Switzerland's most populated city, for which it represents the largest flood threat. To anticipate extreme discharge events and provide decision support in case of flood risk, a hydrometeorological ensemble prediction system (HEPS) was launched operationally in 2008. This model chain relies on limited-area atmospheric forecasts provided by the deterministic model COSMO-7 and the probabilistic model COSMO-LEPS. These atmospheric forecasts are used to force a semi-distributed hydrological model (PREVAH), coupled to a hydraulic model (FLORIS). The resulting hydrological forecasts are eventually communicated to the stakeholders involved in the Sihl discharge management. This fully operational setting provides a real framework to compare the potential of deterministic and probabilistic discharge forecasts for flood mitigation. To study the suitability of HEPS for small-scale basins and to quantify the added value conveyed by the probability information, a reforecast was made for the period June 2007 to December 2009 for the Sihl catchment (336 km2). Several metrics support the conclusion that the performance gain can be of up to 2 days lead time for the catchment considered. Brier skill scores show that COSMO-LEPS-based hydrological forecasts overall outperform their COSMO-7-based counterparts for all the lead times and event intensities considered. The small size of the Sihl catchment does not prevent skillful discharge forecasts, but makes them particularly dependent on correct precipitation forecasts, as shown by comparisons with a reference run driven by observed meteorological parameters. Our evaluation stresses that the capacity of the model to provide confident and reliable mid-term probability forecasts for high discharges is limited. The two most intense events of the study period are investigated utilising a novel graphical representation of probability forecasts and used to generate high discharge scenarios.
They highlight challenges for making decisions on the basis of hydrological predictions, and indicate the need for a tool to be used in addition to forecasts to compare the different mitigation actions possible in the Sihl catchment.

  10. An operational hydrological ensemble prediction system for the city of Zurich (Switzerland): assessing the added value of probabilistic forecasts

    NASA Astrophysics Data System (ADS)

    Addor, N.; Jaun, S.; Fundel, F.; Zappa, M.

    2012-04-01

    The Sihl River flows through Zurich, Switzerland's most populated city, for which it represents the largest flood threat. To anticipate extreme discharge events and provide decision support in case of flood risk, a hydrometeorological ensemble prediction system (HEPS) was launched operationally in 2008. This model chain relies on deterministic (COSMO-7) and probabilistic (COSMO-LEPS) atmospheric forecasts, which are used to force a semi-distributed hydrological model (PREVAH) coupled to a hydraulic model (FLORIS). The resulting hydrological forecasts are eventually communicated to the stakeholders involved in the Sihl discharge management. This fully operational setting provides a real framework with which we assessed the potential of deterministic and probabilistic discharge forecasts for flood mitigation. To study the suitability of HEPS for small-scale basins and to quantify the added value conveyed by the probability information, a 31-month reforecast was produced for the Sihl catchment (336 km2). Several metrics support the conclusion that the performance gain is of up to 2 days lead time for the catchment considered. Brier skill scores show that probabilistic hydrological forecasts outperform their deterministic counterparts for all the lead times and event intensities considered. The small size of the Sihl catchment does not prevent skillful discharge forecasts, but makes them particularly dependent on correct precipitation forecasts. Our evaluation stresses that the capacity of the model to provide confident and reliable mid-term probability forecasts for high discharges is limited. We finally highlight challenges for making decisions on the basis of hydrological predictions, and discuss the need for a tool to be used in addition to forecasts to compare the different mitigation actions possible in the Sihl catchment.
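
    The Brier skill scores used here to compare probabilistic and deterministic forecasts can be sketched in a few lines. The forecast values below are hypothetical; in the study the reference would be the deterministic (COSMO-7-based) system or climatology, and the outcomes are threshold exceedances of observed discharge.

```python
def brier_score(probabilities, outcomes):
    """Mean squared error of probability forecasts against 0/1 outcomes."""
    n = len(probabilities)
    return sum((p - o) ** 2 for p, o in zip(probabilities, outcomes)) / n

def brier_skill_score(forecast_probs, reference_probs, outcomes):
    """BSS = 1 - BS_forecast / BS_reference.

    Positive values mean the forecast has skill relative to the
    reference; 1 is a perfect forecast, 0 matches the reference.
    """
    bs_f = brier_score(forecast_probs, outcomes)
    bs_r = brier_score(reference_probs, outcomes)
    return 1.0 - bs_f / bs_r

# Hypothetical: sharp ensemble probabilities vs. an uninformative reference
ensemble = [0.9, 0.1, 0.8, 0.2]
reference = [0.5, 0.5, 0.5, 0.5]
events = [1, 0, 1, 0]
bss = brier_skill_score(ensemble, reference, events)
```

Here the sharp, well-calibrated probabilities score far better than the uninformative 0.5 reference, mirroring how the probabilistic chain outperformed its deterministic counterpart in the study.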

  11. Objective Lightning Probability Forecasting for Kennedy Space Center and Cape Canaveral Air Force Station, Phase II

    NASA Technical Reports Server (NTRS)

    Lambert, Winifred; Wheeler, Mark

    2007-01-01

    This report describes the work done by the Applied Meteorology Unit (AMU) to update the lightning probability forecast equations developed in Phase I. In the time since the Phase I equations were developed, new ideas regarding certain predictors were formulated and a desire to make the tool more automated was expressed by 45 WS forecasters. Five modifications were made to the data: 1) increased the period of record from 15 to 17 years, 2) modified the valid area to match the lightning warning areas, 3) added the 1000 UTC CCAFS sounding to the other soundings in determining the flow regime, 4) used a different smoothing function for the daily climatology, and 5) determined the optimal relative humidity (RH) layer to use as a predictor. The new equations outperformed the Phase I equations in several tests, and improved the skill of the forecast over the Phase I equations by 8%. A graphical user interface (GUI) was created in the Meteorological Interactive Data Display System (MIDDS) that gathers the predictor values for the equations automatically. The GUI was transitioned to operations in May 2007 for the 2007 warm season.

  12. Web-Based Real Time Earthquake Forecasting and Personal Risk Management

    NASA Astrophysics Data System (ADS)

    Rundle, J. B.; Holliday, J. R.; Graves, W. R.; Turcotte, D. L.; Donnellan, A.

    2012-12-01

    Earthquake forecasts have been computed by a variety of countries and economies world-wide for over two decades. For the most part, forecasts have been computed for insurance, reinsurance and underwriters of catastrophe bonds. One example is the Working Group on California Earthquake Probabilities that has been responsible for the official California earthquake forecast since 1988. However, in a time of increasingly severe global financial constraints, we are now moving inexorably towards personal risk management, wherein mitigating risk is becoming the responsibility of individual members of the public. Under these circumstances, open access to a variety of web-based tools, utilities and information is a necessity. Here we describe a web-based system that has been operational since 2009 at www.openhazards.com and www.quakesim.org. Models for earthquake physics and forecasting require input data, along with model parameters. The models we consider are the Natural Time Weibull (NTW) model for regional earthquake forecasting, together with models for activation and quiescence. These models use small earthquakes ('seismicity-based models') to forecast the occurrence of large earthquakes, either through varying rates of small earthquake activity, or via an accumulation of this activity over time. These approaches use data-mining algorithms combined with the ANSS earthquake catalog. The basic idea is to compute large earthquake probabilities using the number of small earthquakes that have occurred in a region since the last large earthquake. Each of these approaches has computational challenges associated with computing forecast information in real time. Using 25 years of data from the ANSS California-Nevada catalog of earthquakes, we show that real-time forecasting is possible at a grid scale of 0.1°. We have analyzed the performance of these models using Reliability/Attributes and standard Receiver Operating Characteristic (ROC) tests.
We show how the Reliability and ROC tests allow us to judge data completeness and estimate error. It is clear from much of the analysis that data quality is a major limitation on the accurate computation of earthquake probabilities. We discuss the challenges and pitfalls in serving up these datasets over the web.
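
    The core idea of the Natural Time Weibull approach, a large-earthquake probability driven by the count of small earthquakes since the last large event, can be sketched with a Weibull distribution in "natural time" (event count rather than clock time). This is an illustrative reading, not the published NTW implementation, and the characteristic count tau and shape beta below are hypothetical.

```python
import math

def weibull_cdf(n, tau, beta):
    """Weibull CDF in natural time: n = count of small earthquakes."""
    return 1.0 - math.exp(-((n / tau) ** beta))

def conditional_probability(n_since_last, delta_n, tau, beta):
    """P(large event within the next delta_n small events | none yet).

    Computed from the Weibull survival function S(n) = 1 - F(n):
    the probability of failure in (n, n + delta_n] given survival to n.
    With beta > 1 the hazard increases, so the same delta_n window
    carries a higher probability later in the cycle.
    """
    s_now = 1.0 - weibull_cdf(n_since_last, tau, beta)
    s_later = 1.0 - weibull_cdf(n_since_last + delta_n, tau, beta)
    return 1.0 - s_later / s_now

# Hypothetical parameters for one 0.1-degree grid cell
p_early = conditional_probability(100.0, 50.0, tau=500.0, beta=1.5)
p_late = conditional_probability(400.0, 50.0, tau=500.0, beta=1.5)
```

Updating this probability each time the catalog registers new small events is what makes a real-time, web-served forecast feasible at grid scale.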

  13. ENSURF: multi-model sea level forecast - implementation and validation results for the IBIROOS and Western Mediterranean regions

    NASA Astrophysics Data System (ADS)

    Pérez, B.; Brouwer, R.; Beckers, J.; Paradis, D.; Balseiro, C.; Lyons, K.; Cure, M.; Sotillo, M. G.; Hackett, B.; Verlaan, M.; Fanjul, E. A.

    2012-03-01

    ENSURF (Ensemble SURge Forecast) is a multi-model application for sea level forecast that makes use of several storm surge or circulation models and near-real time tide gauge data in the region, with the following main goals: 1. providing easy access to existing forecasts, as well as to their performance and model validation, by means of an adequate visualization tool; 2. generating better forecasts of sea level, including confidence intervals, by means of the Bayesian Model Average (BMA) technique. The Bayesian Model Average technique generates an overall forecast probability density function (PDF) by making a weighted average of the individual forecast PDFs; the weights represent the Bayesian likelihood that a model will give the correct forecast and are continuously updated based on the performance of the models during a recent training period. This implies the technique needs the availability of sea level data from tide gauges in near-real time. The system was implemented for the European Atlantic facade (IBIROOS region) and Western Mediterranean coast based on the MATROOS visualization tool developed by Deltares. Results of validation of the different models and BMA implementation for the main harbours are presented for these regions, where this kind of activity is performed for the first time. The system is currently operational at Puertos del Estado and has proved to be useful in the detection of calibration problems in some of the circulation models, in the identification of the systematic differences between baroclinic and barotropic models for sea level forecasts, and to demonstrate the feasibility of providing an overall probabilistic forecast based on the BMA method.
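
    The BMA combination step described above, a weighted average of the member forecast PDFs, can be sketched as below, assuming Gaussian member PDFs. The means, spreads, and weights are hypothetical; in the operational system the weights are posterior model probabilities re-estimated continuously from recent tide-gauge verification, typically via an EM algorithm.

```python
import math

def normal_pdf(x, mean, std):
    """Density of a Gaussian member forecast."""
    return math.exp(-0.5 * ((x - mean) / std) ** 2) / (std * math.sqrt(2.0 * math.pi))

def bma_pdf(x, means, stds, weights):
    """BMA forecast density: weighted mixture of member forecast PDFs.

    The weights are the (Bayesian) posterior probabilities that each
    model is the best one; they must sum to 1, so the mixture is
    itself a proper probability density.
    """
    return sum(w * normal_pdf(x, m, s)
               for w, m, s in zip(weights, means, stds))

# Hypothetical two-model combination for one harbour (sea level in m)
means, stds, weights = [1.0, 2.0], [0.5, 0.5], [0.7, 0.3]
density_at_1m = bma_pdf(1.0, means, stds, weights)
```

Because the mixture keeps each member's spread, the combined PDF yields confidence intervals directly, which is what ENSURF communicates alongside the best-guess sea level.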

  14. An operational hydrological ensemble prediction system for the city of Zurich (Switzerland): skill, case studies and scenarios

    NASA Astrophysics Data System (ADS)

    Addor, N.; Jaun, S.; Fundel, F.; Zappa, M.

    2011-07-01

    The Sihl River flows through Zurich, Switzerland's most populated city, for which it represents the largest flood threat. To anticipate extreme discharge events and provide decision support in case of flood risk, a hydrometeorological ensemble prediction system (HEPS) was launched operationally in 2008. This model chain relies on limited-area atmospheric forecasts provided by the deterministic model COSMO-7 and the probabilistic model COSMO-LEPS. These atmospheric forecasts are used to force a semi-distributed hydrological model (PREVAH), coupled to a hydraulic model (FLORIS). The resulting hydrological forecasts are eventually communicated to the stakeholders involved in the Sihl discharge management. This fully operational setting provides a real framework with which to compare the potential of deterministic and probabilistic discharge forecasts for flood mitigation. To study the suitability of HEPS for small-scale basins and to quantify the added value conveyed by the probability information, a reforecast was made for the period June 2007 to December 2009 for the Sihl catchment (336 km2). Several metrics support the conclusion that the performance gain can be up to 2 days of lead time for the catchment considered. Brier skill scores show that, overall, COSMO-LEPS-based hydrological forecasts outperform their COSMO-7-based counterparts for all the lead times and event intensities considered. The small size of the Sihl catchment does not prevent skillful discharge forecasts, but makes them particularly dependent on correct precipitation forecasts, as shown by comparisons with a reference run driven by observed meteorological parameters. Our evaluation stresses that the capacity of the model to provide confident and reliable mid-term probability forecasts for high discharges is limited. 
The two most intense events of the study period are investigated utilising a novel graphical representation of probability forecasts, and are used to generate high discharge scenarios. They highlight challenges for making decisions on the basis of hydrological predictions, and indicate the need for a tool to be used in addition to forecasts to compare the different mitigation actions possible in the Sihl catchment. No definitive conclusion on the capacity of the model chain to forecast flooding events endangering the city of Zurich could be drawn, because of the under-sampling of extreme events. Further research on the form of the reforecasts needed to draw inferences about floods with return periods of several decades or centuries is encouraged.

  15. Scientific assessment of accuracy, skill and reliability of ocean probabilistic forecast products.

    NASA Astrophysics Data System (ADS)

    Wei, M.; Rowley, C. D.; Barron, C. N.; Hogan, P. J.

    2016-02-01

    As ocean operational centers increasingly adopt and generate probabilistic forecast products that provide their customers with valuable forecast uncertainties, assessing and measuring these complex probabilistic products objectively is challenging. The first challenge is how to deal with the huge amount of data from the ensemble forecasts. The second is how to describe the scientific quality of probabilistic products. In fact, probabilistic forecast accuracy, skill, reliability, and resolution are different attributes of a forecast system. We briefly introduce some of the fundamental metrics, such as the Reliability Diagram, Reliability, Resolution, Brier Score (BS), Brier Skill Score (BSS), Ranked Probability Score (RPS), Ranked Probability Skill Score (RPSS), Continuous Ranked Probability Score (CRPS), and Continuous Ranked Probability Skill Score (CRPSS). The values and significance of these metrics are demonstrated for forecasts from the US Navy's regional ensemble system with different numbers of ensemble members. The advantages and differences of these metrics are studied and clarified.
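For the simplest of these metrics, the Brier score and its skill score can be computed directly from paired forecast probabilities and binary outcomes. The sketch below uses invented numbers and a climatological base-rate reference forecast, which is one common (but not the only) choice of reference.

```python
def brier_score(probs, outcomes):
    """Mean squared difference between forecast probability and the
    binary outcome (1 if the event occurred, else 0). Lower is better."""
    return sum((p - o) ** 2 for p, o in zip(probs, outcomes)) / len(probs)

def brier_skill_score(probs, outcomes):
    """Skill relative to a climatological reference that always issues
    the observed base rate. 1 is perfect; <= 0 means no skill."""
    base_rate = sum(outcomes) / len(outcomes)
    bs_ref = brier_score([base_rate] * len(outcomes), outcomes)
    return 1.0 - brier_score(probs, outcomes) / bs_ref

# Hypothetical ensemble-derived event probabilities and observed outcomes.
probs = [0.9, 0.7, 0.2, 0.1, 0.6]
outcomes = [1, 1, 0, 0, 1]
bs = brier_score(probs, outcomes)
```

The same pattern extends to RPS and CRPS by summing squared differences of cumulative probabilities over ordered categories or thresholds.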

  16. A Tool for Empirical Forecasting of Major Flares, Coronal Mass Ejections, and Solar Particle Events from a Proxy of Active-Region Free Magnetic Energy

    NASA Technical Reports Server (NTRS)

    Barghouty, A. F.; Falconer, D. A.; Adams, J. H., Jr.

    2010-01-01

    This presentation describes a new forecasting tool developed for, and currently being tested by, NASA's Space Radiation Analysis Group (SRAG) at JSC, which is responsible for monitoring and forecasting the radiation exposure levels of astronauts. The new software tool is designed for the empirical forecasting of M- and X-class flares, coronal mass ejections, and solar energetic particle events. Its algorithm is based on an empirical relationship between the rates of the various event types and a proxy of the active region's free magnetic energy, determined from a data set of approx. 40,000 active-region magnetograms from approx. 1,300 active regions observed by SOHO/MDI that have known histories of flare, coronal mass ejection, and solar energetic particle event production. The new tool automatically extracts each strong-field magnetic area from an MDI full-disk magnetogram, identifies each as a NOAA active region, and measures a proxy of the active region's free magnetic energy from the extracted magnetogram. For each active region, the empirical relationship is then used to convert the free-magnetic-energy proxy into an expected event rate. The expected event rate in turn can be readily converted into the probability that the active region will produce such an event in a given forward time window. Descriptions of the datasets, algorithm, and software, in addition to sample applications and a validation test, are presented. Further development and transition of the new tool in anticipation of SDO/HMI is briefly discussed.

  17. Operational foreshock forecasting: Fifteen years after

    NASA Astrophysics Data System (ADS)

    Ogata, Y.

    2010-12-01

    We are concerned with operational forecasting of the probability that events are foreshocks of a forthcoming earthquake that is significantly larger (the mainshock). Specifically, we define foreshocks as preshocks substantially smaller than the mainshock, by a magnitude gap of 0.5 or larger. The probability gain of a foreshock forecast is extremely high compared to long-term forecasts by renewal processes or various alarm-based intermediate-term forecasts, because of a large event's low occurrence rate in a short period and a narrow target region. Thus, it is desirable to establish operational foreshock probability forecasting, as seismologists have done for aftershocks. When a series of earthquakes occurs in a region, we attempt to discriminate foreshocks from a swarm or a mainshock-aftershock sequence. Namely, after real-time identification of an earthquake cluster using methods such as the single-link algorithm, the probability is calculated by applying statistical features that discriminate foreshocks from other types of clusters, considering the events' stronger proximity in time and space and their tendency towards chronologically increasing magnitudes. These features were modeled for probability forecasting, and the coefficients of the model were estimated in Ogata et al. (1996) for the JMA hypocenter data (M≧4, 1926-1993). Fifteen years have now passed since the publication of the above work, so we are able to present the performance and validation of the forecasts (1994-2009) using the same model. Taking isolated events into consideration, the probability of the first event in a potential cluster being a foreshock varies between 0+% and 10+% depending on location. This conditional forecasting performs significantly better than the unconditional (average) foreshock probability of 3.7% throughout the Japan region. 
Furthermore, when there are additional events in a cluster, the forecast probabilities range more widely, from nearly 0% to about 40%, depending on the discrimination features among the events in the cluster. This conditional forecasting again performs significantly better than the unconditional foreshock probability of 7.3%, the average probability for plural events in earthquake clusters. Indeed, the frequency ratios of the actual foreshocks are consistent with the forecast probabilities. Reference: Ogata, Y., Utsu, T. and Katsura, K. (1996). Statistical discrimination of foreshocks from other earthquake clusters, Geophys. J. Int. 127, 17-30.

  18. Applied Meteorology Unit (AMU)

    NASA Technical Reports Server (NTRS)

    Bauman, William; Crawford, Winifred; Barrett, Joe; Watson, Leela; Wheeler, Mark

    2010-01-01

    This report summarizes the Applied Meteorology Unit (AMU) activities for the first quarter of Fiscal Year 2010 (October - December 2009). A detailed project schedule is included in the Appendix. Included tasks are: (1) Peak Wind Tool for User Launch Commit Criteria (LCC), (2) Objective Lightning Probability Tool, Phase III, (3) Peak Wind Tool for General Forecasting, Phase II, (4) Upgrade Summer Severe Weather Tool in Meteorological Interactive Data Display System (MIDDS), (5) Advanced Regional Prediction System (ARPS) Data Analysis System (ADAS) Update and Maintainability, (6) Verify 12-km resolution North American Model (MesoNAM) Performance, and (7) Hybrid Single-Particle Lagrangian Integrated Trajectory (HYSPLIT) Graphical User Interface.

  19. Applied Meteorology Unit (AMU) Quarterly Report - Fourth Quarter FY-09

    NASA Technical Reports Server (NTRS)

    Bauman, William; Crawford, Winifred; Barrett, Joe; Watson, Leela; Wheeler, Mark

    2009-01-01

    This report summarizes the Applied Meteorology Unit (AMU) activities for the fourth quarter of Fiscal Year 2009 (July - September 2009). Task reports include: (1) Peak Wind Tool for User Launch Commit Criteria (LCC), (2) Objective Lightning Probability Tool, Phase III, (3) Peak Wind Tool for General Forecasting, Phase II, (4) Update and Maintain Advanced Regional Prediction System (ARPS) Data Analysis System (ADAS), (5) Verify MesoNAM Performance, and (6) Develop a Graphical User Interface to update selected parameters for the Hybrid Single-Particle Lagrangian Integrated Trajectory (HYSPLIT).

  20. The Use of the Integrated Medical Model for Forecasting and Mitigating Medical Risks for a Near-Earth Asteroid Mission

    NASA Technical Reports Server (NTRS)

    Kerstman, Eric; Saile, Lynn; Freire de Carvalho, Mary; Myers, Jerry; Walton, Marlei; Butler, Douglas; Lopez, Vilma

    2011-01-01

    Introduction: The Integrated Medical Model (IMM) is a decision support tool useful to space flight mission managers and medical system designers in assessing risks and optimizing medical systems. The IMM employs an evidence-based, probabilistic risk assessment (PRA) approach within the operational constraints of space flight. Methods: Stochastic computational methods are used to forecast probability distributions of medical events, crew health metrics, medical resource utilization, and probability estimates of medical evacuation and loss of crew life. The IMM can also optimize medical kits within the constraints of mass and volume for specified missions. The IMM was used to forecast medical evacuation and loss-of-crew-life probabilities, as well as crew health metrics, for a near-Earth asteroid (NEA) mission. An optimized medical kit for this mission was proposed based on the IMM simulation. Discussion: The IMM can provide information to the space program regarding medical risks, including crew medical impairment, medical evacuation, and loss of crew life. This information is valuable to mission managers and the space medicine community in assessing risk and developing mitigation strategies. Exploration missions such as NEA missions will have significant mass and volume constraints applied to the medical system. Appropriate allocation of medical resources will be critical to mission success. The IMM capability of optimizing medical systems based on specific crew and mission profiles will be advantageous to medical system designers. Conclusion: The IMM is a decision support tool that can provide estimates of the impact of medical events on human space flight missions, such as crew impairment, evacuation, and loss of crew life. It can be used to support the development of mitigation strategies and to propose optimized medical systems for specified space flight missions. 
Learning Objectives: The audience will learn how an evidence-based decision support tool can be used to help assess risk, develop mitigation strategies, and optimize medical systems for exploration space flight missions.
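The stochastic forecasting approach described above can be illustrated with a minimal Monte Carlo sketch. The event names and probabilities below are hypothetical stand-ins, not IMM data or outputs.

```python
import random

def simulate_mission(event_probs, evac_events, n_trials=100000, seed=42):
    """Monte Carlo sketch: draw independent medical events per simulated
    mission and estimate the probability that at least one event in the
    evacuation-triggering set occurs."""
    rng = random.Random(seed)
    evac = 0
    for _ in range(n_trials):
        occurred = {e for e, p in event_probs.items() if rng.random() < p}
        if occurred & evac_events:
            evac += 1
    return evac / n_trials

# Hypothetical per-mission medical event probabilities (illustrative only).
event_probs = {"renal_stone": 0.01, "appendicitis": 0.002, "back_pain": 0.15}
p_evac = simulate_mission(event_probs, evac_events={"renal_stone", "appendicitis"})
```

The same simulation loop can tally resource consumption per trial, which is the basis for trading kit contents against mass and volume limits.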

  1. Diagnosing Geospatial Uncertainty Visualization Challenges in Seasonal Temperature and Precipitation Forecasts

    NASA Astrophysics Data System (ADS)

    Speciale, A.; Kenney, M. A.; Gerst, M.; Baer, A. E.; DeWitt, D.; Gottschalk, J.; Handel, S.

    2017-12-01

    The uncertainty of future weather and climate conditions is important for many decisions made in communities and economic sectors. One tool that decision-makers use in gauging this uncertainty is forecasts, especially maps (or visualizations) of probabilistic forecast results. However, visualizing geospatial uncertainty is challenging, because including probability introduces an extra variable to represent, and probability is often poorly understood by users. Using focus group and survey methods, this study seeks to understand the barriers to using probabilistic temperature and precipitation visualizations for specific decisions in the agriculture, energy, emergency management, and water resource sectors. Preliminary results shown here focus on findings about emergency manager needs. Our experimental design uses the National Oceanic and Atmospheric Administration's (NOAA) Climate Prediction Center (CPC) climate outlooks, which produce probabilistic temperature and precipitation forecast visualizations at the 6-10 day, 8-14 day, 3-4 week, and 1 and 3 month timeframes. Users were asked to complete questions related to how they use weather information, how uncertainty is represented, and the design elements (e.g., color, contour lines) of the visualizations. Preliminary results from the emergency management sector indicate significant confusion about how "normal" weather is defined, the boundaries between probability ranges, and the meaning of the contour lines. After a complete understandability diagnosis is made using results from all sectors, we will collaborate with CPC to suggest modifications to the climate outlook visualizations. These modifications will then be retested in similar focus groups and web-based surveys to confirm they better meet the needs of users.

  2. Performance assessment of a Bayesian Forecasting System (BFS) for real-time flood forecasting

    NASA Astrophysics Data System (ADS)

    Biondi, D.; De Luca, D. L.

    2013-02-01

    Summary: The paper evaluates, for a number of flood events, the performance of a Bayesian Forecasting System (BFS), with the aim of evaluating total uncertainty in real-time flood forecasting. The predictive uncertainty of future streamflow is estimated through the Bayesian integration of two separate processors. The former evaluates the propagation of input uncertainty on simulated river discharge, the latter computes the hydrological uncertainty of actual river discharge associated with all other possible sources of error. A stochastic model and a distributed rainfall-runoff model were assumed, respectively, for rainfall and hydrological response simulations. A case study was carried out for a small basin in the Calabria region (southern Italy). The performance assessment of the BFS was performed with adequate verification tools suited for probabilistic forecasts of continuous variables such as streamflow. Graphical tools and scalar metrics were used to evaluate several attributes of the forecast quality of the entire time-varying predictive distributions: calibration, sharpness, accuracy, and continuous ranked probability score (CRPS). Besides the overall system, which incorporates both sources of uncertainty, other hypotheses resulting from the BFS properties were examined, corresponding to (i) a perfect hydrological model; (ii) a non-informative rainfall forecast for predicting streamflow; and (iii) a perfect input forecast. The results emphasize the importance of using different diagnostic approaches to perform comprehensive analyses of predictive distributions, to arrive at a multifaceted view of the attributes of the prediction. 
For the case study, the selected criteria revealed the interaction of the different sources of error, in particular the crucial role of the hydrological uncertainty processor when compensating, at the cost of wider forecast intervals, for the unreliable and biased predictive distribution resulting from the Precipitation Uncertainty Processor.

  3. Application of new methods based on ECMWF ensemble model for predicting severe convective weather situations

    NASA Astrophysics Data System (ADS)

    Lazar, Dora; Ihasz, Istvan

    2013-04-01

    The short- and medium-range operational forecasting, warning, and alarm of severe weather are among the most important activities of the Hungarian Meteorological Service. Our study provides a comprehensive summary of newly developed methods, based on ECMWF ensemble forecasts, that assist the successful prediction of convective weather situations. In the first part of the study, a brief overview is given of the components of atmospheric convection: the atmospheric lifting force, convergence, and vertical wind shear. Atmospheric instability is often characterized by so-called instability indexes; one of the most popular and frequently used is the convective available potential energy. Heavy convective events, like intensive storms, supercells, and tornadoes, require vertical instability, adequate moisture, and vertical wind shear. As a first step, various statistical analyses of these three parameters were performed on a nine-year time series of the 51-member ensemble forecast model for the summer convective period. The relationship between the ratio of convective to total precipitation and the above three parameters was studied by different statistical methods. Four visualization methods were applied to support successful forecasts of severe weather. Two of the four, the ensemble meteogram and the ensemble vertical profiles, had been available at the beginning of our work; both show the probability of meteorological parameters for a selected location. Two new methods have additionally been developed. The first provides a probability map of an event exceeding predefined values, so the spatial uncertainty of the event is well defined. Convective weather events often occur sporadically in space, so if the event area can be selected appropriately, the ensemble forecasts give very good support. 
The other new visualization tool shows the time evolution of multiple predefined thresholds in graphical form for any selected location. By applying this tool, the severity of dangerous weather conditions can be estimated well, and intensive convective periods are clearly marked during the forecast period. The developments were done with the MAGICS++ software under the UNIX operating system. In the third part of the study, the usefulness of these tools is demonstrated in three interesting case studies from last summer.

  4. United States Geological Survey fire science: fire danger monitoring and forecasting

    USGS Publications Warehouse

    Eidenshink, Jeff C.; Howard, Stephen M.

    2012-01-01

    Each day, the U.S. Geological Survey produces 7-day forecasts for all Federal lands of the distributions of the number of ignitions, the number of fires above a given size, and the conditional probabilities of fires growing larger than a specified size. The large fire probability map is an estimate of the likelihood that ignitions will become large fires. The large fire forecast map is a probability estimate of the number of fires on Federal lands exceeding 100 acres in the forthcoming week. The ignition forecast map is a probability estimate of the number of fires on Federal land greater than 1 acre in the forthcoming week. The extreme event forecast is the probability estimate of the number of fires on Federal land that may exceed 5,000 acres in the forthcoming week.

  5. A Bayesian cross-validation approach to evaluate genetic baselines and forecast the necessary number of informative single nucleotide polymorphisms

    USDA-ARS?s Scientific Manuscript database

    Mixed stock analysis (MSA) is a powerful tool used in the management and conservation of numerous species. Its function is to estimate the sources of contributions in a mixture of populations of a species, as well as to estimate the probabilities that individuals originated at a source. Considerable...

  6. Hailstorm forecast from stability indexes in Southwestern France

    NASA Astrophysics Data System (ADS)

    Melcón, Pablo; Merino, Andrés; Sánchez, José Luis; Dessens, Jean; Gascón, Estíbaliz; Berthet, Claude; López, Laura; García-Ortega, Eduardo

    2016-04-01

    Forecasting hailstorms is a difficult task because of their small spatial and temporal scales. Over recent decades, stability indexes have been commonly used in operational forecasting to provide a simplified representation of different thermodynamic characteristics of the atmosphere with regard to the onset of convective events. However, they are estimated from vertical profiles obtained by radiosondes, which are usually available only twice a day and have limited spatial representativeness. Numerical model predictions can be used to overcome these drawbacks, providing vertical profiles with higher spatiotemporal resolution. The main objective of this study is to create a tool for hail prediction in the southwest of France, one of the European regions where hailstorms have the highest incidence. The Association Nationale d'Etude et de Lutte contre les Fléaux Atmosphériques (ANELFA) maintains a dense hailpad network there in continuous operation, which has created an extensive database of hail events, used in this study as ground truth. The new technique aims to classify the spatial distribution of different stability indexes on hail days. These indexes were calculated from vertical profiles at 1200 UTC provided by the WRF numerical model, validated with radiosonde data from Bordeaux. Binary logistic regression is used to select those indexes that best represent the thermodynamic conditions related to the occurrence of hail in the zone. They are then combined in a single algorithm that surpasses the predictive power each has when used independently. Regression equation results on hail days are used in cluster analysis to identify the different spatial patterns given by the probability algorithm. This new tool can be used in operational forecasting, in combination with synoptic and mesoscale techniques, to properly define hail probability and distribution. 
Acknowledgements: The authors would like to thank the CEPA González Díez Foundation and the University of Leon for their financial support.
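The logistic-regression step described above can be sketched with a toy example. This is a minimal stand-in for the statistical package the authors would use: the fit is done by plain gradient descent, and the standardized index values and labels below are invented, not ANELFA or WRF data.

```python
import math

def sigmoid(z):
    """Logistic function mapping a linear score to a probability."""
    return 1.0 / (1.0 + math.exp(-z))

def fit_logistic(X, y, lr=0.1, epochs=2000):
    """Fit P(hail) = sigmoid(b0 + b1*index1 + b2*index2 + ...) by
    stochastic gradient descent on the log-loss."""
    w = [0.0] * (len(X[0]) + 1)  # intercept plus one weight per index
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            p = sigmoid(w[0] + sum(wj * xj for wj, xj in zip(w[1:], xi)))
            err = p - yi
            w[0] -= lr * err
            for j, xj in enumerate(xi):
                w[j + 1] -= lr * err * xj
    return w

# Hypothetical standardized stability indexes (e.g. CAPE, shear) per day.
X = [[1.2, 0.8], [0.9, 1.1], [1.5, 0.4], [-0.7, -0.9], [-1.1, -0.2], [-0.4, -1.3]]
y = [1, 1, 1, 0, 0, 0]  # 1 = hail day observed by the hailpad network
w = fit_logistic(X, y)
p_hail = sigmoid(w[0] + w[1] * 1.0 + w[2] * 0.5)  # probability for a new day
```

Index selection then amounts to comparing fits with and without each candidate index and keeping those that improve the regression.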

  7. A Decision Support System for effective use of probability forecasts

    NASA Astrophysics Data System (ADS)

    De Kleermaeker, Simone; Verkade, Jan

    2013-04-01

    Often, water management decisions are based on hydrological forecasts. These forecasts, however, are affected by inherent uncertainties. It is increasingly common for forecasting agencies to make explicit estimates of these uncertainties and thus produce probabilistic forecasts. Associated benefits include decision makers' increased awareness of forecasting uncertainties and the potential for risk-based decision-making. A stricter separation of responsibilities between forecasters and decision makers can also be made. However, simply having probabilistic forecasts available is not sufficient to realise the associated benefits. Additional effort is required in areas such as forecast visualisation and communication, decision making under uncertainty, and forecast verification. Also, a revised separation of responsibilities requires a shift in institutional arrangements. A recent study identified a number of additional issues related to the effective use of probability forecasts. When moving from deterministic to probability forecasting, a dimension is added to an already multi-dimensional problem; this makes it increasingly difficult for forecast users to extract relevant information from a forecast. A second issue is that while probability forecasts provide a necessary ingredient for risk-based decision making, other ingredients may not be present. For example, in many cases no estimates of flood damage, of the costs of management measures, or of damage reduction are available. This paper presents the results of the study, including some suggestions for resolving these issues and the integration of those solutions in a prototype decision support system (DSS). A pathway for further development of the DSS is outlined.

  8. Signature-forecasting and early outbreak detection system

    PubMed Central

    Naumova, Elena N.; MacNeill, Ian B.

    2008-01-01

    Summary: Daily disease monitoring via a public health surveillance system provides valuable information on population risks. Efficient statistical tools for early detection of rapid changes in disease incidence are a must for modern surveillance. The need for statistical tools for early detection of outbreaks that are not based on historical information is apparent. A system is discussed for monitoring cases of infections with a view to early detection of outbreaks and to forecasting the extent of detected outbreaks. We propose a set of adaptive algorithms for early outbreak detection that does not rely on extensive historical recording. We also include knowledge of infectious disease epidemiology in the forecasts. To demonstrate this system, we use data from the largest water-borne outbreak of cryptosporidiosis, which occurred in Milwaukee in 1993. Historical data are smoothed using a loess-type smoother. Upon receipt of a new datum, the smoothing is updated and estimates are made of the first two derivatives of the smooth curve, and these are used for near-term forecasting. Recent data and the near-term forecasts are used to compute a color-coded warning index, which quantifies the level of concern. The algorithms for computing the warning index have been designed to balance Type I errors (false prediction of an epidemic) and Type II errors (failure to correctly predict an epidemic). If the warning index signals a sufficiently high probability of an epidemic, then a forecast of the possible size of the outbreak is made. This longer-term forecast is made by fitting a 'signature' curve to the available data. The effectiveness of the forecast depends upon the extent to which the signature curve captures the shape of outbreaks of the infection under consideration. PMID:18716671
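The smoothing-and-derivatives step can be illustrated with a toy stand-in: a centred moving average in place of the loess smoother, finite differences for the first two derivatives, and a second-order extrapolation for the near-term forecast. The daily case counts below are invented, not the Milwaukee data.

```python
def smooth(series, window=3):
    """Centred moving average, a simple stand-in for a loess-type smoother."""
    half = window // 2
    out = []
    for i in range(len(series)):
        lo, hi = max(0, i - half), min(len(series), i + half + 1)
        out.append(sum(series[lo:hi]) / (hi - lo))
    return out

def near_term_forecast(series, steps=1):
    """Estimate first and second derivatives of the smoothed curve at its
    last point and extrapolate with a second-order Taylor step."""
    s = smooth(series)
    d1 = s[-1] - s[-2]              # first derivative (cases per day)
    d2 = s[-1] - 2 * s[-2] + s[-3]  # second derivative
    return s[-1] + d1 * steps + 0.5 * d2 * steps ** 2

# Hypothetical daily case counts during a growing outbreak.
cases = [2, 3, 3, 5, 8, 13, 21]
forecast = near_term_forecast(cases)
```

In the system described above, the updated derivatives also feed the warning index, since a rising first derivative with positive curvature is exactly the early signature of an outbreak.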

  9. Future WGCEP Models and the Need for Earthquake Simulators

    NASA Astrophysics Data System (ADS)

    Field, E. H.

    2008-12-01

    The 2008 Working Group on California Earthquake Probabilities (WGCEP) recently released the Uniform California Earthquake Rupture Forecast version 2 (UCERF 2), developed jointly by the USGS, CGS, and SCEC with significant support from the California Earthquake Authority. Although this model embodies several significant improvements over previous WGCEPs, the following are some of the significant shortcomings that we hope to resolve in a future UCERF3: 1) assumptions of fault segmentation and the lack of fault-to-fault ruptures; 2) the lack of an internally consistent methodology for computing time-dependent, elastic-rebound-motivated renewal probabilities; 3) the lack of earthquake clustering/triggering effects; and 4) unwarranted model complexity. It is believed by some that physics-based earthquake simulators will be key to resolving these issues, either as exploratory tools to help guide the present statistical approaches, or as a means to forecast earthquakes directly (although significant challenges remain with respect to the latter).

  10. Visualizing Uncertainty for Probabilistic Weather Forecasting based on Reforecast Analogs

    NASA Astrophysics Data System (ADS)

    Pelorosso, Leandro; Diehl, Alexandra; Matković, Krešimir; Delrieux, Claudio; Ruiz, Juan; Gröeller, M. Eduard; Bruckner, Stefan

    2016-04-01

    Numerical weather forecasts are prone to uncertainty coming from inaccuracies in the initial and boundary conditions and lack of precision in numerical models. Ensembles of forecasts partially address these problems by considering several runs of the numerical model. Each forecast is generated with different initial and boundary conditions and different model configurations [GR05]. The ensembles can be expressed as probabilistic forecasts, which have proven to be very effective in decision-making processes [DE06]. An ensemble of forecasts represents only some of the possible future atmospheric states, usually underestimating the degree of uncertainty in the predictions [KAL03, PH06]. Hamill and Whitaker [HW06] introduced the "Reforecast Analog Regression" (RAR) technique to overcome the limitations of ensemble forecasting. This technique produces probabilistic predictions based on the analysis of historical forecasts and observations. Visual analytics provides tools for processing, visualizing, and exploring data to get new insights and discover hidden information patterns in an interactive exchange between the user and the application [KMS08]. In this work, we introduce Albero, a visual analytics solution for probabilistic weather forecasting based on the RAR technique. Albero targets at least two different types of users: "forecasters", who are meteorologists working in operational weather forecasting, and "researchers", who work on the construction of numerical prediction models. Albero is an efficient tool for analyzing precipitation forecasts, allowing forecasters to make and communicate quick decisions. Our solution facilitates the analysis of a set of probabilistic forecasts, associated statistical data, observations, and uncertainty. A dashboard with small multiples of probabilistic forecasts allows the forecasters to analyze at a glance the distribution of probabilities as a function of time, space, and magnitude. 
It provides the user with a more accurate measure of forecast uncertainty that could result in better decision-making. It offers different levels of abstraction to help with the recalibration of the RAR method. It also has an inspection tool that displays the selected analogs, their observations, and statistical data. It gives the users access to inner parts of the method, unveiling hidden information. References: [GR05] GNEITING T., RAFTERY A. E.: Weather forecasting with ensemble methods. Science 310, 5746, 248-249, 2005. [KAL03] KALNAY E.: Atmospheric modeling, data assimilation and predictability. Cambridge University Press, 2003. [PH06] PALMER T., HAGEDORN R.: Predictability of weather and climate. Cambridge University Press, 2006. [HW06] HAMILL T. M., WHITAKER J. S.: Probabilistic quantitative precipitation forecasts based on reforecast analogs: Theory and application. Monthly Weather Review 134, 11, 3209-3229, 2006. [DE06] DEITRICK S., EDSALL R.: The influence of uncertainty visualization on decision making: An empirical evaluation. Springer, 2006. [KMS08] KEIM D. A., MANSMANN F., SCHNEIDEWIND J., THOMAS J., ZIEGLER H.: Visual analytics: Scope and challenges. Springer, 2008.

  11. Flow Regime Based Climatologies of Lightning Probabilities for Spaceports and Airports

    NASA Technical Reports Server (NTRS)

    Bauman, William H., III; Volmer, Matthew; Sharp, David; Spratt, Scott; Lafosse, Richard A.

    2007-01-01

    Objective: provide forecasters with a "first guess" climatological lightning probability tool. (1) Focus on Space Shuttle landings and NWS TAFs; (2) four circles around sites: 5-, 10-, 20- and 30-n mi radii; (3) three time intervals: hourly, every 3 hr and every 6 hr. It is based on: (1) NLDN gridded data; (2) flow regime; (3) warm-season months of May-September for the years 1989-2004. Gridded data and available code yield squares, not circles. Over 850 spreadsheets were converted into a manageable, user-friendly web-based GUI.

  12. A novel visualisation tool for climate services: a case study of temperature extremes and human mortality in Europe

    NASA Astrophysics Data System (ADS)

    Lowe, R.; Ballester, J.; Robine, J.; Herrmann, F. R.; Jupp, T. E.; Stephenson, D.; Rodó, X.

    2013-12-01

    Users of climate information often require probabilistic information on which to base their decisions. However, communicating information contained within a probabilistic forecast presents a challenge. In this paper we demonstrate a novel visualisation technique to display ternary probabilistic forecasts on a map in order to inform decision making. In this method, ternary probabilistic forecasts, which assign probabilities to a set of three outcomes (e.g. low, medium, and high risk), are considered as a point in a triangle of barycentric coordinates. This allows a unique colour to be assigned to each forecast from a continuum of colours defined on the triangle. Colour saturation increases with information gain relative to the reference forecast (i.e. the long-term average). This provides additional information to decision makers compared with conventional methods used in seasonal climate forecasting, where one colour is used to represent one forecast category on a forecast map (e.g. red = 'dry'). We use the tool to present climate-related mortality projections across Europe. Temperature and humidity are related to human mortality via location-specific transfer functions, calculated using historical data. Daily mortality data at the NUTS2 level for 16 countries in Europe were obtained for the period 1998-2005. Transfer functions were calculated for 54 aggregations in Europe, defined using criteria related to population and climatological similarities. Aggregations are restricted to fall within political boundaries to avoid problems related to varying adaptation policies between countries. A statistical model is fit to the cold and warm tails to estimate future mortality using forecast temperatures, in a Bayesian probabilistic framework. Using predefined categories of temperature-related mortality risk, we present maps of probabilistic projections for human mortality at seasonal to decadal time scales.
We demonstrate the information gained from using this technique compared to more traditional methods to display ternary probabilistic forecasts. This technique allows decision makers to identify areas where the model predicts with certainty area-specific heat waves or cold snaps, in order to effectively target resources to those areas most at risk, for a given season or year. It is hoped that this visualisation tool will facilitate the interpretation of the probabilistic forecasts not only for public health decision makers but also within a multi-sectoral climate service framework.
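    The barycentric colour assignment described above can be sketched as follows. The corner colours, the grey reference, and the saturation rule are illustrative assumptions for this sketch, not the paper's exact colour scheme:

```python
def ternary_colour(p_low, p_med, p_high,
                   corners=((0, 0, 255), (255, 255, 0), (255, 0, 0))):
    """Assign an RGB colour to a ternary forecast (p_low, p_med, p_high)
    by treating the probabilities as barycentric weights of three corner
    colours, fading to grey as the forecast approaches the uniform
    reference (1/3, 1/3, 1/3), i.e. zero information gain."""
    assert abs(p_low + p_med + p_high - 1.0) < 1e-9
    # Barycentric mix of the corner colours, channel by channel.
    mixed = [p_low * c0 + p_med * c1 + p_high * c2
             for c0, c1, c2 in zip(*corners)]
    # Saturation proxy: distance from the uniform reference, normalised so
    # a fully certain forecast such as (1, 0, 0) has full saturation.
    ref = 1.0 / 3.0
    sat = max(abs(p_low - ref), abs(p_med - ref), abs(p_high - ref)) / (1.0 - ref)
    grey = 128.0
    return tuple(round(grey + sat * (m - grey)) for m in mixed)
```

A certain forecast maps to a pure corner colour, while the climatological reference maps to neutral grey, which is the property that lets saturation encode information gain.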

  13. Skill of Ensemble Seasonal Probability Forecasts

    NASA Astrophysics Data System (ADS)

    Smith, Leonard A.; Binter, Roman; Du, Hailiang; Niehoerster, Falk

    2010-05-01

    In operational forecasting, the computational complexity of large simulation models is, ideally, justified by enhanced performance over simpler models. We will consider probability forecasts and contrast the skill of ENSEMBLES-based seasonal probability forecasts of interest to the finance sector (specifically temperature forecasts for Nino 3.4 and the Atlantic Main Development Region (MDR)). The ENSEMBLES model simulations will be contrasted against forecasts from statistical models based on the observations (climatological distributions) and empirical dynamics based on the observations but conditioned on the current state (dynamical climatology). For some start dates, individual ENSEMBLES models yield significant skill even at a lead-time of 14 months. The nature of this skill is discussed, and opportunities for application are noted. Questions surrounding the interpretation of probability forecasts based on these multi-model ensemble simulations are then considered; the distributions considered are formed by kernel dressing the ensemble and blending with the climatology. The sources of apparent (RMS) skill in distributions based on multi-model simulations are discussed, and it is demonstrated that the inclusion of "zero-skill" models in the long range can improve Root-Mean-Square-Error scores, casting some doubt on the common justification for the claim that all models should be included in forming an operational probability forecast. It is argued that the rational response varies with lead time.
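    Kernel dressing the ensemble and blending with climatology, as mentioned above, can be sketched as follows. Gaussian kernels, a Gaussian climatology, and a fixed blending weight are simplifying assumptions for illustration, not the study's actual configuration:

```python
import math

def kernel_dressed_density(x, members, sigma, clim_mu, clim_sd, alpha):
    """Predictive density at x: a weighted blend (weight alpha) of a
    Gaussian kernel mixture centred on the ensemble members with a
    climatological Gaussian N(clim_mu, clim_sd)."""
    def normal_pdf(v, mu, sd):
        return math.exp(-0.5 * ((v - mu) / sd) ** 2) / (sd * math.sqrt(2 * math.pi))
    # Each member becomes the centre of a kernel of width sigma.
    ens = sum(normal_pdf(x, m, sigma) for m in members) / len(members)
    clim = normal_pdf(x, clim_mu, clim_sd)
    return alpha * ens + (1.0 - alpha) * clim
```

With alpha near 1 the forecast trusts the ensemble; with alpha near 0 it falls back on climatology, which is how "zero-skill" models at long range can still yield reasonable scores.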

  14. USA Nutrient management forecasting via the "Fertilizer Forecaster": linking surface runoff, nutrient application and ecohydrology.

    NASA Astrophysics Data System (ADS)

    Drohan, Patrick; Buda, Anthony; Kleinman, Peter; Miller, Douglas; Lin, Henry; Beegle, Douglas; Knight, Paul

    2017-04-01

    USA and state nutrient management planning offers strategic guidance that strives to educate farmers and those involved in nutrient management to make wise management decisions. A goal of such programs is to manage hotspots of water quality degradation that threaten human and ecosystem health, water and food security. The guidance provided by nutrient management plans does not provide the day-to-day support necessary to make operational decisions, particularly when and where to apply nutrients over the short term. These short-term decisions on when and where to apply nutrients often make the difference between whether the nutrients impact water quality or are efficiently utilized by crops. Infiltrating rainfall events occurring shortly after broadcast nutrient applications are beneficial, since they wash soluble nutrients into the soil where they are used by crops. Rainfall events that generate runoff shortly after nutrients are broadcast may wash off applied nutrients and produce substantial nutrient losses from that site. We are developing a model- and data-based support tool for nutrient management, the Fertilizer Forecaster, which identifies the relative probability of runoff or infiltrating events in Pennsylvania (PA) landscapes in order to improve water quality. This tool will support field-specific decisions by farmers and land managers on when and where to apply fertilizers and manures over 24, 48 and 72 hour periods.
Our objectives are to: (1) monitor agricultural hillslopes in watersheds representing four of the five Physiographic Provinces of the Chesapeake Bay basin; (2) validate a high resolution mapping model that identifies soils prone to runoff; (3) develop an empirically based approach to relate state-of-the-art weather forecast variables to site-specific rainfall infiltration or runoff occurrence; (4) test the empirical forecasting model against alternative approaches to forecasting runoff occurrence; and (5) recruit farmers from the four watersheds to use web-based forecast maps in daily manure and fertilizer application decisions. Data from on-farm trials is being used to assess farmer fertilizer, manure, and tillage management decisions before and after use of the Fertilizer Forecaster. This data will help us understand not only the effectiveness of the tool, but also characteristics of farmers with the greatest potential to benefit from such a tool. Feedback from on-farm trials will be used to refine a final tool for field deployment. We hope that the Fertilizer Forecaster will serve as the basis for state (USA-PA), regional (Chesapeake Bay), and national changes in nutrient management planning. This Fertilizer Forecaster is an innovative management practice that is designed to enhance the services of aquatic ecosystems by improving water quality and enhance the services of terrestrial ecosystems by increasing the efficiency of nutrient use by targeted crops.

  15. Applied Meteorology Unit (AMU)

    NASA Technical Reports Server (NTRS)

    Bauman, William; Crawford, Winifred; Watson, Leela; Wheeler, Mark

    2011-01-01

    The AMU Team began four new tasks in this quarter: (1) began work to improve the AMU-developed tool that provides the launch weather officers information on peak wind speeds that helps them assess their launch commit criteria; (2) began updating lightning climatologies for airfields around central Florida. These climatologies help National Weather Service and Air Force forecasters determine the probability of lightning occurrence at these sites; (3) began a study for the 30th Weather Squadron at Vandenberg Air Force Base in California to determine if precursors can be found in weather observations to help the forecasters determine when they will get strong wind gusts in their northern towers; and (4) began work to update the AMU-developed severe weather tool with more data and possibly improve its performance using a new statistical technique. Included is a section of summaries and detailed reporting on the quarterly tasks: (1) Peak Wind Tool for User Launch Commit Criteria (LCC), Phase IV, (2) Situational Lightning Climatologies for Central Florida, Phase V, (3) Vandenberg AFB North Base Wind Study, and (4) Upgrade Summer Severe Weather Tool in the Meteorological Interactive Data Display System (MIDDS).

  16. Nowcasting of Low-Visibility Procedure States with Ordered Logistic Regression at Vienna International Airport

    NASA Astrophysics Data System (ADS)

    Kneringer, Philipp; Dietz, Sebastian; Mayr, Georg J.; Zeileis, Achim

    2017-04-01

    Low-visibility conditions have a large impact on aviation safety and the economic efficiency of airports and airlines. To support decision makers, we develop a statistical probabilistic nowcasting tool for the occurrence of capacity-reducing operations related to low visibility. The probabilities of four different low-visibility classes are predicted with an ordered logistic regression model based on time series of meteorological point measurements. Potential predictor variables for the statistical models are visibility, humidity, temperature and wind measurements at several measurement sites. A stepwise variable selection method indicates that visibility and humidity measurements are the most important model inputs. The forecasts are tested with a 30-minute forecast interval up to two hours, which is a sufficient time span for tactical planning at Vienna Airport. The ordered logistic regression models outperform persistence and are competitive with human forecasters.
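    The forward pass of an ordered (proportional-odds) logistic model for class probabilities can be sketched as follows. The single predictor, thresholds, and coefficient are illustrative; the actual nowcasting model is fitted to multivariate measurement time series:

```python
import math

def ordered_logit_probs(x, thresholds, beta):
    """Class probabilities from a proportional-odds model with one
    predictor x: P(Y <= k | x) = logistic(theta_k - beta * x), where the
    thresholds theta_k must be strictly increasing. With K-1 thresholds
    the model yields K ordered classes."""
    def logistic(z):
        return 1.0 / (1.0 + math.exp(-z))
    # Cumulative probabilities for the first K-1 classes, then 1 for "all".
    cum = [logistic(t - beta * x) for t in thresholds] + [1.0]
    # Differencing the cumulative curve gives the per-class probabilities.
    return [cum[0]] + [cum[k] - cum[k - 1] for k in range(1, len(cum))]
```

With four visibility classes, three increasing thresholds are estimated; only one slope per predictor is shared across classes, which keeps the model compact for nowcasting.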

  17. Calibration and combination of dynamical seasonal forecasts to enhance the value of predicted probabilities for managing risk

    NASA Astrophysics Data System (ADS)

    Dutton, John A.; James, Richard P.; Ross, Jeremy D.

    2013-06-01

    Seasonal probability forecasts produced with numerical dynamics on supercomputers offer great potential value in managing risk and opportunity created by seasonal variability. The skill and reliability of contemporary forecast systems can be increased by calibration methods that use the historical performance of the forecast system to improve the ongoing real-time forecasts. Two calibration methods are applied to seasonal surface temperature forecasts of the US National Weather Service, the European Centre for Medium Range Weather Forecasts, and to a World Climate Service multi-model ensemble created by combining those two forecasts with Bayesian methods. As expected, the multi-model is somewhat more skillful and more reliable than the original models taken alone. The potential value of the multimodel in decision making is illustrated with the profits achieved in simulated trading of a weather derivative. In addition to examining the seasonal models, the article demonstrates that calibrated probability forecasts of weekly average temperatures for leads of 2-4 weeks are also skillful and reliable. The conversion of ensemble forecasts into probability distributions of impact variables is illustrated with degree days derived from the temperature forecasts. Some issues related to loss of stationarity owing to long-term warming are considered. The main conclusion of the article is that properly calibrated probabilistic forecasts possess sufficient skill and reliability to contribute to effective decisions in government and business activities that are sensitive to intraseasonal and seasonal climate variability.

  18. Using volcanic tremor for eruption forecasting at White Island volcano (Whakaari), New Zealand

    NASA Astrophysics Data System (ADS)

    Chardot, Lauriane; Jolly, Arthur D.; Kennedy, Ben M.; Fournier, Nicolas; Sherburn, Steven

    2015-09-01

    Eruption forecasting is a challenging task because of the inherent complexity of volcanic systems. Despite remarkable efforts to develop complex models in order to explain volcanic processes prior to eruptions, the material Failure Forecast Method (FFM) is one of the very few techniques that can provide a forecast time for an eruption. However, the method requires testing and automation before being used as a real-time eruption forecasting tool at a volcano. We developed an automatic algorithm to issue forecasts from volcanic tremor increase episodes recorded by Real-time Seismic Amplitude Measurement (RSAM) at one station and optimised this algorithm for the period August 2011-January 2014 which comprises the recent unrest period at White Island volcano (Whakaari), New Zealand. A detailed residual analysis was paramount to select the most appropriate model explaining the RSAM time evolutions. In a hindsight simulation, four out of the five small eruptions reported during this period occurred within a failure window forecast by our optimised algorithm and the probability of an eruption on a day within a failure window was 0.21, which is 37 times higher than the probability of having an eruption on any day during the same period (0.0057). Moreover, the forecasts were issued prior to the eruptions by a few hours which is important from an emergency management point of view. Whereas the RSAM time evolutions preceding these four eruptions have a similar goodness-of-fit with the FFM, their spectral characteristics are different. The duration-amplitude distributions of the precursory tremor episodes support the hypothesis that several processes were likely occurring prior to these eruptions. We propose that slow rock failure and fluid flow processes are plausible candidates for the tremor source of these episodes. This hindsight exercise can be useful for future real-time implementation of the FFM at White Island. 
A similar methodology could also be tested at other volcanoes even if only a limited network is available.
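    The FFM in its commonly used linearized form (rate exponent alpha = 2, under which the inverse of the precursory rate decays linearly to zero at the forecast failure time) can be sketched as a least-squares fit to the inverse RSAM. This is the generic textbook form, not the optimised algorithm of the study:

```python
def ffm_failure_time(times, rates):
    """Material Failure Forecast Method, alpha = 2 case: fit a straight
    line to the inverse rate 1/rate and return the time at which the line
    crosses zero, which is the forecast failure (eruption) time."""
    inv = [1.0 / r for r in rates]
    n = len(times)
    tbar = sum(times) / n
    ybar = sum(inv) / n
    # Ordinary least-squares slope and intercept of 1/rate versus time.
    slope = sum((t - tbar) * (y - ybar) for t, y in zip(times, inv)) / \
            sum((t - tbar) ** 2 for t in times)
    intercept = ybar - slope * tbar
    return -intercept / slope  # zero crossing of the fitted line
```

In practice the choice of fitting window and the residual analysis mentioned above dominate forecast quality; the zero crossing itself is only the last step.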

  19. WOVOdat, A Worldwide Volcano Unrest Database, to Improve Eruption Forecasts

    NASA Astrophysics Data System (ADS)

    Widiwijayanti, C.; Costa, F.; Win, N. T. Z.; Tan, K.; Newhall, C. G.; Ratdomopurbo, A.

    2015-12-01

    WOVOdat is the World Organization of Volcano Observatories' Database of Volcanic Unrest: an international effort to develop common standards for compiling and storing data on volcanic unrest in a centralized database, freely web-accessible for reference during volcanic crises, comparative studies, and basic research on pre-eruption processes. WOVOdat will be to volcanology as an epidemiological database is to medicine. Despite the large spectrum of monitoring techniques, interpreting monitoring data throughout the evolution of unrest and making timely forecasts remain the most challenging tasks for volcanologists. The field of eruption forecasting is becoming more quantitative, based on understanding of the pre-eruptive magmatic processes and the dynamic interaction between variables that are at play in a volcanic system. Such forecasts must also acknowledge and express their uncertainties; therefore most current research in this field has focused on the application of event tree analysis to reflect multiple possible scenarios and the probability of each scenario. Such forecasts are critically dependent on comprehensive and authoritative global volcano unrest data sets - the very information currently collected in WOVOdat. As the database becomes more complete, Boolean searches, side-by-side digital (and thus scalable) comparisons of unrest, and pattern recognition will generate reliable results. Statistical distributions obtained from WOVOdat can then be used to estimate the probabilities of each scenario after specific patterns of unrest. We have established the main web interface for data submission and visualizations, and have now incorporated ~20% of worldwide unrest data into the database, covering more than 100 eruptive episodes.
In the upcoming years we will concentrate on acquiring data from volcano observatories, developing a robust data query interface, optimizing data mining, and creating tools by which WOVOdat can be used for probabilistic eruption forecasting. The more data in WOVOdat, the more useful it will be.

  20. Sustainable Odds

    NASA Astrophysics Data System (ADS)

    Smith, L. A.

    2016-12-01

    While probability forecasting has many philosophical and mathematical attractions, it is something of a dishonest nonsense if acting on such forecasts is expected to lead to rapid ruin. Model-based probabilities, when interpreted as actionable, are shown to lead to the rapid ruin of a cooperative entity offering odds interpreting the probability forecasts at face value. Arguably, these odds would not be considered "fair", but inasmuch as some definitions of "fair odds" include this case, this presentation will focus on "sustainable odds": Odds which are not expected to lead to the rapid ruin of the cooperative under the assumption that those placing bets have no information beyond that available to the forecast system. It is argued that sustainable odds will not correspond to probabilities outside the Perfect Model Scenario, that the "implied probabilities" determined from sustainable odds will always sum to more than one, and that the excess of this sum over one reflects the skill of the forecast system, being a quantitative measure of structural model error.
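    The abstract's claim that the implied probabilities of sustainable odds sum to more than one is easy to illustrate numerically; decimal odds are assumed for simplicity:

```python
def implied_probabilities(decimal_odds):
    """Implied probabilities from decimal odds (payout per unit stake).
    For odds a bookmaker can sustain, the sum of implied probabilities
    exceeds one; the abstract interprets that excess as a quantitative
    measure of structural model error in the forecast system."""
    return [1.0 / o for o in decimal_odds]
```

For example, "fair" even-money odds of 2.0 on each of two outcomes imply probabilities summing to exactly one, while shortened odds of 1.8 on each imply a sum of about 1.11; the 0.11 excess is the margin protecting the odds-setter against ruin.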

  1. The probability forecast evaluation of hazard and storm wind over the territories of Russia and Europe

    NASA Astrophysics Data System (ADS)

    Perekhodtseva, E. V.

    2012-04-01

    The results of probability forecast methods for summer storm and hazard winds over the territories of Russia and Europe are presented in this paper. These methods use a hydrodynamic-statistical model of these phenomena. The statistical model was developed for recognition of situations involving these phenomena. For this purpose, samples of the values of atmospheric parameters (n = 40) for the presence and for the absence of storm and hazard winds were accumulated. The predictor space was compressed without loss of information by a special algorithm (k = 7), and forecast probability thresholds (65% and 75%) were related to wind-speed categories (19 m/s, 24 m/s, and 29 m/s or greater, the last including tornadoes and strong squalls). The probability forecasts were evaluated with the Brier score; the evaluation was successful, with B = 0.37 for the European part of Russia. Application of the probability forecast of storm and hazard winds makes it possible to mitigate economic losses when the type I and type II errors of the categorical storm-wind forecast are not small. Many examples of storm wind probability forecasts are presented in this report.
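    The Brier score used for the evaluation above is the mean squared difference between forecast probabilities and binary outcomes, B = mean of (p - o)^2, with 0 perfect and larger values worse. A minimal sketch:

```python
def brier_score(probs, outcomes):
    """Brier score for binary events: mean of (p - o)^2 over
    forecast-outcome pairs, where o is 1 if the event occurred, else 0.
    0 is a perfect score; an always-0.5 forecast scores 0.25."""
    return sum((p - o) ** 2 for p, o in zip(probs, outcomes)) / len(probs)
```

A score of B = 0.37, as reported for the European part of Russia, should be judged against the climatological base rate of the event, since rare events make low Brier scores easy to achieve.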

  2. Convective Weather Forecast Quality Metrics for Air Traffic Management Decision-Making

    NASA Technical Reports Server (NTRS)

    Chatterji, Gano B.; Gyarfas, Brett; Chan, William N.; Meyn, Larry A.

    2006-01-01

    Since numerical weather prediction models are unable to accurately forecast the severity and the location of storm cells several hours into the future when compared with observation data, there has been a growing interest in probabilistic description of convective weather. The classical approach for generating uncertainty bounds consists of integrating the state equations and covariance propagation equations forward in time. This step is readily recognized as the process update step of the Kalman Filter algorithm. The second well-known method, known as the Monte Carlo method, consists of generating output samples by driving the forecast algorithm with input samples selected from distributions. The statistical properties of the distributions of the output samples are then used for defining the uncertainty bounds of the output variables. This method is computationally expensive for a complex model compared to the covariance propagation method. The main advantage of the Monte Carlo method is that a complex non-linear model can be easily handled. Recently, a few different methods for probabilistic forecasting have appeared in the literature. A method for computing probability of convection in a region using forecast data is described in Ref. 5. Probability at a grid location is computed as the fraction of grid points, within a box of specified dimensions around the grid location, with forecast convection precipitation exceeding a specified threshold. The main limitation of this method is that the results are dependent on the chosen dimensions of the box. The examples presented in Ref. 5 show that this process is equivalent to low-pass filtering of the forecast data with a finite-support spatial filter. References 6 and 7 describe the technique for computing percentage coverage within a 92 x 92 square-kilometer box and assigning the value to the center 4 x 4 square-kilometer box. This technique is the same as that described in Ref. 5.
Characterizing the forecast, following the process described in Refs. 5 through 7, in terms of percentage coverage or confidence level is notionally sound compared to characterizing in terms of probabilities, because the probability of the forecast being correct can only be determined using actual observations. References 5 through 7 only use the forecast data and not the observations. The method for computing the probability of detection, false alarm ratio and several forecast quality metrics (Skill Scores) using both the forecast and observation data is given in Ref. 2. This paper extends the statistical verification method in Ref. 2 to determine co-occurrence probabilities. The method consists of computing the probability that a severe weather cell (grid location) is detected in the observation data in the neighborhood of the severe weather cell in the forecast data. Probabilities of occurrence at the grid location and in its neighborhood with higher severity, and with lower severity in the observation data compared to that in the forecast data, are examined. The method proposed in Refs. 5 through 7 is used for computing the probability that a certain number of cells in the neighborhood of severe weather cells in the forecast data are seen as severe weather cells in the observation data. Finally, the probability of existence of gaps in the observation data in the neighborhood of severe weather cells in forecast data is computed. Gaps are defined as openings between severe weather cells through which an aircraft can safely fly to its intended destination. The rest of the paper is organized as follows. Section II summarizes the statistical verification method described in Ref. 2. The extension of this method for computing the co-occurrence probabilities is discussed in Section III. Numerical examples using NCWF forecast data and NCWD observation data are presented in Section III to elucidate the characteristics of the co-occurrence probabilities.
This section also discusses the procedure for computing the probabilities that the severity of convection in the observation data will be higher or lower in the neighborhood of grid locations compared to that indicated at the grid locations in the forecast data. The probability of coverage of neighborhood grid cells is also described via examples in this section. Section IV discusses the gap detection algorithm and presents a numerical example to illustrate the method. The locations of the detected gaps in the observation data are used along with the locations of convective weather cells in the forecast data to determine the probability of existence of gaps in the neighborhood of these cells. Finally, the paper is concluded in Section V.
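    The neighbourhood-coverage computation attributed to Ref. 5 (probability at a grid point taken as the fraction of points in a surrounding box where the forecast exceeds a threshold) can be sketched as follows; the handling of boxes truncated at the domain edge is an assumption of this sketch:

```python
def neighbourhood_probability(grid, threshold, radius):
    """Box-filter a 2-D field into probabilities: at each point, the
    fraction of points in the (2*radius+1)-square box around it whose
    value meets the threshold. Boxes are clipped at the domain edges.
    This is the low-pass-filtering interpretation noted in the text."""
    rows, cols = len(grid), len(grid[0])
    out = [[0.0] * cols for _ in range(rows)]
    for i in range(rows):
        for j in range(cols):
            hits = total = 0
            for di in range(-radius, radius + 1):
                for dj in range(-radius, radius + 1):
                    ii, jj = i + di, j + dj
                    if 0 <= ii < rows and 0 <= jj < cols:
                        total += 1
                        hits += grid[ii][jj] >= threshold
            out[i][j] = hits / total
    return out
```

The dependence of the result on the box size, which the text identifies as the method's main limitation, corresponds here to the choice of `radius`.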

  3. Online probabilistic learning with an ensemble of forecasts

    NASA Astrophysics Data System (ADS)

    Thorey, Jean; Mallet, Vivien; Chaussin, Christophe

    2016-04-01

    Our objective is to produce a calibrated weighted ensemble to forecast a univariate time series. In addition to a meteorological ensemble of forecasts, we rely on observations or analyses of the target variable. The celebrated Continuous Ranked Probability Score (CRPS) is used to evaluate the probabilistic forecasts. However, applying the CRPS to weighted empirical distribution functions (deriving from the weighted ensemble) may introduce a bias, because of which minimizing the CRPS does not produce the optimal weights. Thus we propose an unbiased version of the CRPS which relies on clusters of members and is strictly proper. We adapt online learning methods for the minimization of the CRPS. These methods generate the weights associated to the members in the forecasted empirical distribution function. The weights are updated before each forecast step using only past observations and forecasts. Our learning algorithms provide the theoretical guarantee that, in the long run, the CRPS of the weighted forecasts is at least as good as the CRPS of any weighted ensemble with weights constant in time. In particular, the performance of our forecast is better than that of any subset ensemble with uniform weights. A noteworthy advantage of our algorithm is that it does not require any assumption on the distributions of the observations and forecasts, both for the application and for the theoretical guarantee to hold. As an application example on meteorological forecasts for photovoltaic production integration, we show that our algorithm generates a calibrated probabilistic forecast, with significant performance improvements on probabilistic diagnostic tools (the CRPS, the reliability diagram and the rank histogram).
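    The kernel (energy) form of the ensemble CRPS, together with the unbiased "fair" variant that motivates the bias concern discussed above, can be sketched as follows. This is the generic unweighted estimator, not the authors' cluster-based version:

```python
def crps_ensemble(members, obs, fair=True):
    """CRPS of an ensemble against a scalar observation, kernel form:
    mean |x_i - obs| - 0.5 * mean |x_i - x_j|. With fair=True the pairwise
    term uses the unbiased divisor m*(m-1) (requires at least two members);
    with fair=False it uses m*m, the plain empirical-CDF estimator, which
    is biased in favour of small ensembles."""
    m = len(members)
    term1 = sum(abs(x - obs) for x in members) / m
    pair = sum(abs(x - y) for x in members for y in members)
    denom = m * (m - 1) if fair else m * m
    return term1 - 0.5 * pair / denom
```

Lower is better, and the score collapses to the absolute error when the ensemble has a single member (unbiased correction aside), which is what makes the CRPS a natural generalisation of MAE to probabilistic forecasts.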

  4. You Say "Probable" and I Say "Likely": Improving Interpersonal Communication With Verbal Probability Phrases

    ERIC Educational Resources Information Center

    Karelitz, Tzur M.; Budescu, David V.

    2004-01-01

    When forecasters and decision makers describe uncertain events using verbal probability terms, there is a risk of miscommunication because people use different probability phrases and interpret them in different ways. In an effort to facilitate the communication process, the authors investigated various ways of converting the forecasters' verbal…

  5. A gain-loss framework based on ensemble flow forecasts to switch the urban drainage-wastewater system management towards energy optimization during dry periods

    NASA Astrophysics Data System (ADS)

    Courdent, Vianney; Grum, Morten; Munk-Nielsen, Thomas; Mikkelsen, Peter S.

    2017-05-01

    Precipitation is the cause of major perturbation to the flow in urban drainage and wastewater systems. Flow forecasts, generated by coupling rainfall predictions with a hydrologic runoff model, can potentially be used to optimize the operation of integrated urban drainage-wastewater systems (IUDWSs) during both wet and dry weather periods. Numerical weather prediction (NWP) models have significantly improved in recent years, having increased their spatial and temporal resolution. Finer resolution NWP are suitable for urban-catchment-scale applications, providing longer lead time than radar extrapolation. However, forecasts are inevitably uncertain, and fine resolution is especially challenging for NWP. This uncertainty is commonly addressed in meteorology with ensemble prediction systems (EPSs). Handling uncertainty is challenging for decision makers and hence tools are necessary to provide insight on ensemble forecast usage and to support the rationality of decisions (i.e. forecasts are uncertain and therefore errors will be made; decision makers need tools to justify their choices, demonstrating that these choices are beneficial in the long run). This study presents an economic framework to support the decision-making process by providing information on when acting on the forecast is beneficial and how to handle the EPS. The relative economic value (REV) approach associates economic values with the potential outcomes and determines the preferential use of the EPS forecast. The envelope curve of the REV diagram combines the results from each probability forecast to provide the highest relative economic value for a given gain-loss ratio. This approach is traditionally used at larger scales to assess mitigation measures for adverse events (i.e. the actions are taken when events are forecast). The specificity of this study is to optimize the energy consumption in IUDWS during low-flow periods by exploiting the electrical smart grid market (i.e. 
the actions are taken when no events are forecast). Furthermore, the results demonstrate the benefit of NWP neighbourhood post-processing methods to enhance the forecast skill and increase the range of beneficial uses.
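    The relative economic value (REV) computation in the standard static cost-loss model can be sketched as follows. Loss is normalised to 1 and the variable names are illustrative; the study's gain-loss framework for dry-period energy optimization extends this basic scheme:

```python
def relative_economic_value(hit_rate, false_alarm_rate, base_rate, cost_loss):
    """Relative economic value of a binary forecast system in the static
    cost-loss model: 1 means as valuable as a perfect forecast, 0 means no
    better than always/never protecting on climatology alone.
    cost_loss is the protection cost C as a fraction of the loss L = 1."""
    C, s = cost_loss, base_rate
    e_forecast = (false_alarm_rate * (1 - s) * C   # protect needlessly
                  + hit_rate * s * C               # protect, event occurs
                  + (1 - hit_rate) * s)            # miss: pay the full loss
    e_climate = min(C, s)   # best of "always protect" vs "never protect"
    e_perfect = s * C       # protect exactly when the event occurs
    return (e_climate - e_forecast) / (e_climate - e_perfect)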
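    The relative economic value (REV) computation in the standard static cost-loss model can be sketched as follows. Loss is normalised to 1 and the variable names are illustrative; the study's gain-loss framework for dry-period energy optimization extends this basic scheme:

```python
def relative_economic_value(hit_rate, false_alarm_rate, base_rate, cost_loss):
    """Relative economic value of a binary forecast system in the static
    cost-loss model: 1 means as valuable as a perfect forecast, 0 means no
    better than always/never protecting on climatology alone.
    cost_loss is the protection cost C as a fraction of the loss L = 1."""
    C, s = cost_loss, base_rate
    e_forecast = (false_alarm_rate * (1 - s) * C   # protect needlessly
                  + hit_rate * s * C               # protect, event occurs
                  + (1 - hit_rate) * s)            # miss: pay the full loss
    e_climate = min(C, s)   # best of "always protect" vs "never protect"
    e_perfect = s * C       # protect exactly when the event occurs
    return (e_climate - e_forecast) / (e_climate - e_perfect)
```

Plotting this value against the cost-loss ratio for each probability threshold of the EPS gives the REV diagram; the envelope curve mentioned above takes, at each ratio, the threshold with the highest value.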

  6. A synoptic view of the Third Uniform California Earthquake Rupture Forecast (UCERF3)

    USGS Publications Warehouse

    Field, Edward; Jordan, Thomas H.; Page, Morgan T.; Milner, Kevin R.; Shaw, Bruce E.; Dawson, Timothy E.; Biasi, Glenn; Parsons, Thomas E.; Hardebeck, Jeanne L.; Michael, Andrew J.; Weldon, Ray; Powers, Peter; Johnson, Kaj M.; Zeng, Yuehua; Bird, Peter; Felzer, Karen; van der Elst, Nicholas; Madden, Christopher; Arrowsmith, Ramon; Werner, Maximillan J.; Thatcher, Wayne R.

    2017-01-01

    Probabilistic forecasting of earthquake‐producing fault ruptures informs all major decisions aimed at reducing seismic risk and improving earthquake resilience. Earthquake forecasting models rely on two scales of hazard evolution: long‐term (decades to centuries) probabilities of fault rupture, constrained by stress renewal statistics, and short‐term (hours to years) probabilities of distributed seismicity, constrained by earthquake‐clustering statistics. Comprehensive datasets on both hazard scales have been integrated into the Uniform California Earthquake Rupture Forecast, Version 3 (UCERF3). UCERF3 is the first model to provide self‐consistent rupture probabilities over forecasting intervals from less than an hour to more than a century, and it is the first capable of evaluating the short‐term hazards that result from multievent sequences of complex faulting. This article gives an overview of UCERF3, illustrates the short‐term probabilities with aftershock scenarios, and draws some valuable scientific conclusions from the modeling results. In particular, seismic, geologic, and geodetic data, when combined in the UCERF3 framework, reject two types of fault‐based models: long‐term forecasts constrained to have local Gutenberg–Richter scaling, and short‐term forecasts that lack stress relaxation by elastic rebound.

  7. Two-stage seasonal streamflow forecasts to guide water resources decisions and water rights allocation

    NASA Astrophysics Data System (ADS)

    Block, P. J.; Gonzalez, E.; Bonnafous, L.

    2011-12-01

    Decision-making in water resources is inherently uncertain, producing risks that range from the operational (present) to the planning (season-ahead) to the design/adaptation (decadal) time-scales. These risks stem from both human activity and climate variability/change. Because the risks in designing and operating water systems and allocating available supplies vary systematically in time, prospects for predicting and managing those risks become increasingly attractive. Considerable effort has been undertaken to improve seasonal forecast skill and to advocate for its integration into operations to reduce risk; however, only minimal adoption is evident. The impediments are well defined, yet tailoring forecast products and allowing for flexible adoption can help overcome some of these obstacles. The semi-arid Elqui River basin in Chile is contending with increasing levels of water stress and demand coupled with insufficient investment in infrastructure, taxing its ability to meet agriculture, hydropower, and environmental requirements. The basin is fed by a retreating glacier, with allocation principles founded on a system of water rights and markets. A two-stage seasonal streamflow forecast at leads of one and two seasons prescribes the probability of reductions in the value of each water right, allowing water managers to inform their constituents in advance. A tool linking the streamflow forecast to a simple reservoir decision model also allows water managers to select a level of confidence in the forecast information.

  8. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Morley, Steven

    The PyForecastTools package provides Python routines for calculating metrics for model validation, forecast verification, and model comparison. For continuous predictands, the package provides functions for calculating bias (mean error, mean percentage error, median log accuracy, symmetric signed bias) and accuracy (mean squared error, mean absolute error, mean absolute scaled error, normalized RMSE, median symmetric accuracy). Convenience routines to calculate the component parts of each metric (e.g. forecast error, scaled error) are also provided. To compare models, the package provides a generic skill score and a percent-better statistic. Robust measures of scale, including the median absolute deviation, robust standard deviation, robust coefficient of variation, and the Sn estimator, are all provided by the package. Finally, the package implements Python classes for NxN contingency tables. In the case of a multi-class prediction, accuracy and skill metrics such as proportion correct and the Heidke and Peirce skill scores are provided as object methods. The special case of a 2x2 contingency table inherits from the NxN class and provides many additional metrics for binary classification: probability of detection, probability of false detection, false alarm ratio, threat score, equitable threat score, and bias. Confidence intervals for many of these quantities can be calculated using either the Wald method or Agresti-Coull intervals.
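
    The binary-classification metrics listed above all follow from the four cells of a 2x2 contingency table. As a hedged illustration (plain-Python formulas for the standard definitions, not PyForecastTools' actual API), they can be computed as:

```python
def contingency_metrics(hits, false_alarms, misses, correct_negatives):
    """Standard 2x2 contingency-table verification metrics."""
    a, b, c, d = hits, false_alarms, misses, correct_negatives
    n = a + b + c + d
    pod = a / (a + c)                      # probability of detection
    pofd = b / (b + d)                     # probability of false detection
    far = b / (a + b)                      # false alarm ratio
    ts = a / (a + b + c)                   # threat score (CSI)
    bias = (a + b) / (a + c)               # frequency bias
    # equitable threat score: remove hits expected by chance
    a_random = (a + b) * (a + c) / n
    ets = (a - a_random) / (a + b + c - a_random)
    return {"POD": pod, "POFD": pofd, "FAR": far,
            "TS": ts, "bias": bias, "ETS": ets}

m = contingency_metrics(hits=50, false_alarms=20, misses=10,
                        correct_negatives=120)
```

    The equitable threat score differs from the plain threat score only by the chance-hit correction `a_random`, which is what makes it comparable across regimes with different event frequencies.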

  9. Operational Earthquake Forecasting and Decision-Making in a Low-Probability Environment

    NASA Astrophysics Data System (ADS)

    Jordan, T. H.; the International Commission on Earthquake Forecasting for Civil Protection

    2011-12-01

    Operational earthquake forecasting (OEF) is the dissemination of authoritative information about the time dependence of seismic hazards to help communities prepare for potentially destructive earthquakes. Most previous work on the public utility of OEF has anticipated that forecasts would deliver high probabilities of large earthquakes; i.e., deterministic predictions with low error rates (false alarms and failures-to-predict) would be possible. This expectation has not been realized. An alternative to deterministic prediction is probabilistic forecasting based on empirical statistical models of aftershock triggering and seismic clustering. During periods of high seismic activity, short-term earthquake forecasts can attain prospective probability gains in excess of 100 relative to long-term forecasts. The utility of such information is by no means clear, however, because even with hundredfold increases, the probabilities of large earthquakes typically remain small, rarely exceeding a few percent over forecasting intervals of days or weeks. Civil protection agencies have been understandably cautious in implementing OEF in this sort of "low-probability environment." The need to move more quickly has been underscored by recent seismic crises, such as the 2009 L'Aquila earthquake sequence, in which an anxious public was confused by informal and inaccurate earthquake predictions. After the L'Aquila earthquake, the Italian Department of Civil Protection appointed an International Commission on Earthquake Forecasting (ICEF), which I chaired, to recommend guidelines for OEF utilization. Our report (Ann. Geophys., 54, 4, 2011; doi: 10.4401/ag-5350) concludes: (a) Public sources of information on short-term probabilities should be authoritative, scientific, open, and timely, and need to convey epistemic uncertainties. (b) Earthquake probabilities should be based on operationally qualified, regularly updated forecasting systems. 
(c) All operational models should be evaluated for reliability and skill by retrospective testing, and the models should be under continuous prospective testing against long-term forecasts and alternative time-dependent models. (d) Short-term models used in operational forecasting should be consistent with the long-term forecasts used in probabilistic seismic hazard analysis. (e) Alert procedures should be standardized to facilitate decisions at different levels of government, based in part on objective analysis of costs and benefits. (f) In establishing alert protocols, consideration should also be given to the less tangible aspects of value-of-information, such as gains in psychological preparedness and resilience. Authoritative statements of increased risk, even when the absolute probability is low, can provide a psychological benefit to the public by filling information vacuums that lead to informal predictions and misinformation. Formal OEF procedures based on probabilistic forecasting appropriately separate hazard estimation by scientists from the decision-making role of civil protection authorities. The prosecution of seven Italian scientists on manslaughter charges stemming from their actions before the L'Aquila earthquake makes clear why this separation should be explicit in defining OEF protocols.

  10. Forecasting Cool Season Daily Peak Winds at Kennedy Space Center and Cape Canaveral Air Force Station

    NASA Technical Reports Server (NTRS)

    Barrett, Joe, III; Short, David; Roeder, William

    2008-01-01

    The expected peak wind speed for the day is an important element in the daily 24-Hour and Weekly Planning Forecasts issued by the 45th Weather Squadron (45 WS) for planning operations at Kennedy Space Center (KSC) and Cape Canaveral Air Force Station (CCAFS). The morning outlook for peak speeds also begins the warning decision process for gusts ≥ 35 kt, ≥ 50 kt, and ≥ 60 kt from the surface to 300 ft. The 45 WS forecasters have indicated that peak wind speeds are a challenging parameter to forecast during the cool season (October-April). The 45 WS requested that the Applied Meteorology Unit (AMU) develop a tool to help them forecast the speed and timing of the daily peak and average wind, from the surface to 300 ft on KSC/CCAFS during the cool season. The tool must only use data available by 1200 UTC to support the issue time of the Planning Forecasts. Based on observations from the KSC/CCAFS wind tower network, surface observations from the Shuttle Landing Facility (SLF), and CCAFS upper-air soundings from the cool season months of October 2002 to February 2007, the AMU created multiple linear regression equations to predict the timing and speed of the daily peak wind speed, as well as the background average wind speed. Several possible predictors were evaluated, including persistence, the temperature inversion depth, strength, and wind speed at the top of the inversion, wind gust factor (ratio of peak wind speed to average wind speed), synoptic weather pattern, occurrence of precipitation at the SLF, and strongest wind in the lowest 3000 ft, 4000 ft, or 5000 ft. Six synoptic patterns were identified: 1) surface high near or over FL, 2) surface high north or east of FL, 3) surface high south or west of FL, 4) surface front approaching FL, 5) surface front across central FL, and 6) surface front across south FL. 
    The following six predictors were selected: 1) inversion depth, 2) inversion strength, 3) wind gust factor, 4) synoptic weather pattern, 5) occurrence of precipitation at the SLF, and 6) strongest wind in the lowest 3000 ft. The forecast tool was developed as a graphical user interface with Microsoft Excel to help the forecaster enter the variables, and run the appropriate regression equations. Based on the forecaster's input and regression equations, a forecast of the day's peak and average wind is generated and displayed. The application also outputs the probability that the peak wind speed will be ≥ 35 kt, ≥ 50 kt, and ≥ 60 kt.
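
    The multiple-linear-regression approach described above can be sketched with ordinary least squares. Everything below is a synthetic illustration: the training data, the "true" coefficients, and the example predictor values are invented, not the AMU's fitted equations.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
# predictor columns: inversion depth (ft), inversion strength (deg C),
# wind gust factor, synoptic pattern (1-6), precipitation flag (0/1),
# strongest wind in the lowest 3000 ft (kt)
X = np.column_stack([
    rng.uniform(500, 3000, n),
    rng.uniform(0, 10, n),
    rng.uniform(1.2, 2.0, n),
    rng.integers(1, 7, n),
    rng.integers(0, 2, n),
    rng.uniform(5, 40, n),
])
# synthetic "true" relationship for the daily peak wind speed (kt)
beta_true = np.array([0.002, 0.5, 5.0, 0.3, 2.0, 0.6])
y = 8.0 + X @ beta_true + rng.normal(0, 1.5, n)

# ordinary least squares with an intercept column
A = np.column_stack([np.ones(n), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

def predict_peak_wind(predictors):
    """Peak wind forecast (kt) from the six predictor values."""
    return coef[0] + np.asarray(predictors, dtype=float) @ coef[1:]

peak = predict_peak_wind([1500, 4.0, 1.5, 2, 0, 25])
```

    In the real tool the synoptic pattern is categorical, so separate equations (or dummy variables) per pattern would be more faithful than the single numeric column used here.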

  11. Quantifying probabilities of eruptions at Mount Etna (Sicily, Italy).

    NASA Astrophysics Data System (ADS)

    Brancato, Alfonso

    2010-05-01

    One of the major goals of modern volcanology is to set up sound risk-based decision-making in land-use planning and emergency management. Volcanic hazard must be managed with reliable estimates of quantitative long- and short-term eruption forecasting, but the large number of observables involved in a volcanic process suggests that a probabilistic approach could be a suitable tool in forecasting. The aim of this work is to quantify a probabilistic estimate of the vent location for a suitable lava flow hazard assessment at Mt. Etna volcano, through the application of the code named BET (Marzocchi et al., 2004, 2008). The BET_EF model is based on the event tree philosophy proposed by Newhall and Hoblitt (2002), further developing the concepts of vent location and epistemic uncertainty, and a fuzzy approach for monitoring measurements. A Bayesian event tree is a specialized branching graphical representation of events in which individual branches are alternative steps from a general prior event, evolving into increasingly specific subsequent states. The event tree thus attempts to display graphically all relevant possible outcomes of volcanic unrest in progressively higher levels of detail. The procedure is set to estimate an a priori probability distribution based upon theoretical knowledge, to update it using past data, and to modify it further using current monitoring data. For long-term forecasting, an a priori model dealing with the present tectonic and volcanic structure of Mt. Etna is considered. The model is mainly based on past vent locations and fracture location datasets (the 20th-century eruptive history of the volcano). Considering the variation of this information through time, and its relationship with the structural setting of the volcano, we are also able to define an a posteriori probability map for the next vent opening. 
    For short-term vent opening hazard assessment, monitoring has a leading role, based primarily on seismological and volcanological data, integrated with strain, geochemical, gravimetric and magnetic parameters. In the code, it is necessary to fix an appropriate forecasting time window. On open-conduit volcanoes such as Mt. Etna, a forecast time window of a month (as fixed in other applications worldwide) seems unduly long, because variations of the state of the volcano are expected on shorter time scales (hours, days or weeks): a significant variation of a specific monitoring parameter could occur on a time scale shorter than the forecasting time window. This leads to setting a week as the forecasting time window, consistent with the number of weeks in which an unrest has been experienced. The short-term vent opening hazard assessment will be estimated during an unrest phase; the test case (the July 2001 eruption) will include all the monitoring parameters collected at Mt. Etna during the six months preceding the eruption. The monitoring role has been assessed by eliciting more than 50 parameters, including seismic activity, ground deformation, geochemistry, gravity and magnetism, distributed across the first three nodes of the procedure. The parameter values describe Mt. Etna's activity in increasing detail through the code, particularly in time units. The methodology allows all assumptions and thresholds to be clearly identified and provides a rational means for their revision as new data or information arrive. References Newhall C.G. and Hoblitt R.P.; 2002: Constructing event trees for volcanic crises, Bull. Volcanol., 64, 3-20, doi: 10.1007/s0044500100173. Marzocchi W., Sandri L., Gasparini P., Newhall C. and Boschi E.; 2004: Quantifying probabilities of volcanic events: The example of volcanic hazard at Mount Vesuvius, J. Geophys. Res., 109, B11201, doi:10.1029/2004JB00315U. Marzocchi W., Sandri, L. 
and Selva, J.; 2008: BET_EF: a probabilistic tool for long- and short-term eruption forecasting, Bull. Volcanol., 70, 623 - 632, doi: 10.1007/s00445-007-0157-y.
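
    The core arithmetic of a BET-style event tree is the product of conditional probabilities along one path from the root to an outcome. A minimal sketch, with entirely hypothetical branch probabilities:

```python
def event_tree_probability(branch_probs):
    """Multiply conditional probabilities along one path of the tree."""
    p = 1.0
    for prob in branch_probs:
        p *= prob
    return p

# example path: P(unrest), P(magmatic | unrest),
# P(eruption | magmatic unrest), P(vent opens in sector k | eruption)
p_eruption_sector = event_tree_probability([0.8, 0.6, 0.3, 0.15])
```

    In BET_EF each of these node probabilities is itself a distribution (capturing epistemic uncertainty) updated by past data and monitoring, rather than the single point values used here.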

  12. Operational Earthquake Forecasting: Proposed Guidelines for Implementation (Invited)

    NASA Astrophysics Data System (ADS)

    Jordan, T. H.

    2010-12-01

    The goal of operational earthquake forecasting (OEF) is to provide the public with authoritative information about how seismic hazards are changing with time. During periods of high seismic activity, short-term earthquake forecasts based on empirical statistical models can attain nominal probability gains in excess of 100 relative to the long-term forecasts used in probabilistic seismic hazard analysis (PSHA). Prospective experiments are underway by the Collaboratory for the Study of Earthquake Predictability (CSEP) to evaluate the reliability and skill of these seismicity-based forecasts in a variety of tectonic environments. How such information should be used for civil protection is by no means clear, because even with hundredfold increases, the probabilities of large earthquakes typically remain small, rarely exceeding a few percent over forecasting intervals of days or weeks. Civil protection agencies have been understandably cautious in implementing formal procedures for OEF in this sort of “low-probability environment.” Nevertheless, the need to move more quickly towards OEF has been underscored by recent experiences, such as the 2009 L’Aquila earthquake sequence and other seismic crises in which an anxious public has been confused by informal, inconsistent earthquake forecasts. Whether scientists like it or not, rising public expectations for real-time information, accelerated by the use of social media, will require civil protection agencies to develop sources of authoritative information about the short-term earthquake probabilities. In this presentation, I will discuss guidelines for the implementation of OEF informed by my experience on the California Earthquake Prediction Evaluation Council, convened by CalEMA, and the International Commission on Earthquake Forecasting, convened by the Italian government following the L’Aquila disaster. 
    (a) Public sources of information on short-term probabilities should be authoritative, scientific, open, and timely, and they need to convey the epistemic uncertainties in the operational forecasts. (b) Earthquake probabilities should be based on operationally qualified, regularly updated forecasting systems. All operational procedures should be rigorously reviewed by experts in the creation, delivery, and utility of earthquake forecasts. (c) The quality of all operational models should be evaluated for reliability and skill by retrospective testing, and the models should be under continuous prospective testing in a CSEP-type environment against established long-term forecasts and a wide variety of alternative, time-dependent models. (d) Short-term models used in operational forecasting should be consistent with the long-term forecasts used in PSHA. (e) Alert procedures should be standardized to facilitate decisions at different levels of government and among the public, based in part on objective analysis of costs and benefits. (f) In establishing alert procedures, consideration should also be given to the less tangible aspects of value-of-information, such as gains in psychological preparedness and resilience. Authoritative statements of increased risk, even when the absolute probability is low, can provide a psychological benefit to the public by filling information vacuums that can lead to informal predictions and misinformation.

  13. Assessing probabilistic predictions of ENSO phase and intensity from the North American Multimodel Ensemble

    NASA Astrophysics Data System (ADS)

    Tippett, Michael K.; Ranganathan, Meghana; L'Heureux, Michelle; Barnston, Anthony G.; DelSole, Timothy

    2017-05-01

    Here we examine the skill of three-, five-, and seven-category monthly ENSO probability forecasts (1982-2015) from single-model and multi-model ensemble integrations of the North American Multimodel Ensemble (NMME) project. Three-category forecasts are typical and provide probabilities for the ENSO phase (El Niño, La Niña or neutral). Additional forecast categories indicate the likelihood of ENSO conditions being weak, moderate or strong. The level of skill observed for differing numbers of forecast categories can help to determine the appropriate degree of forecast precision. However, the dependence of the skill score itself on the number of forecast categories must be taken into account. For reliable forecasts of the same quality, the ranked probability skill score (RPSS) is fairly insensitive to the number of categories, while the logarithmic skill score (LSS) is an information measure and increases as categories are added. The ignorance skill score decreases to zero as forecast categories are added, regardless of skill level. For all models, forecast formats and skill scores, the northern spring predictability barrier explains much of the dependence of skill on target month and forecast lead. RPSS values for monthly ENSO forecasts show little dependence on the number of categories. However, the LSS of multimodel ensemble forecasts with five and seven categories shows statistically significant advantages over the three-category forecasts for the targets and leads that are least affected by the spring predictability barrier. These findings indicate that current prediction systems are capable of providing more detailed probabilistic forecasts of ENSO phase and amplitude than are typically provided.
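
    The RPSS discussed above compares the ranked probability score (RPS) of the forecasts against a climatological equal-odds reference. A small sketch with invented three-category forecasts and observations:

```python
import numpy as np

def rps(probs, obs_cat, n_cat):
    """Ranked probability score for one forecast (lower is better):
    squared distance between cumulative forecast and observation."""
    cum_f = np.cumsum(probs)
    obs = np.zeros(n_cat)
    obs[obs_cat] = 1.0
    cum_o = np.cumsum(obs)
    return np.sum((cum_f - cum_o) ** 2)

def rpss(forecasts, observations, n_cat):
    """Skill relative to a climatological (equal-odds) reference."""
    clim = np.full(n_cat, 1.0 / n_cat)
    rps_f = np.mean([rps(f, o, n_cat) for f, o in zip(forecasts, observations)])
    rps_c = np.mean([rps(clim, o, n_cat) for o in observations])
    return 1.0 - rps_f / rps_c

# three-category example: [La Nina, neutral, El Nino]
forecasts = [[0.6, 0.3, 0.1], [0.1, 0.2, 0.7], [0.2, 0.5, 0.3]]
observations = [0, 2, 1]          # observed category indices
skill = rpss(forecasts, observations, n_cat=3)
```

    Because the RPS operates on cumulative probabilities, it respects the ordering of the categories, which is why it is the natural score for ordered ENSO intensity classes.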

  14. Assessing the potential for improving S2S forecast skill through multimodel ensembling

    NASA Astrophysics Data System (ADS)

    Vigaud, N.; Robertson, A. W.; Tippett, M. K.; Wang, L.; Bell, M. J.

    2016-12-01

    Non-linear logistic regression is well suited to probability forecasting and has been successfully applied in the past to ensemble weather and climate predictions, providing access to the full probability distribution without any Gaussian assumption. However, little work has been done at sub-monthly lead times, where relatively small re-forecast ensemble sizes and hindcast lengths present new challenges for which post-processing avenues have yet to be investigated. A promising approach consists in extending the definition of non-linear logistic regression by including the quantile of the forecast distribution as one of the predictors. So-called Extended Logistic Regression (ELR), which yields mutually consistent individual threshold probabilities, is here applied to ECMWF, CFSv2 and CMA re-forecasts from the S2S database in order to produce rainfall probabilities at weekly resolution. The ELR model is trained on seasonally-varying tercile categories computed for lead times of 1 to 4 weeks. It is then tested in a cross-validated manner, i.e. allowing real-time predictability applications, to produce rainfall tercile probabilities from individual weekly hindcasts that are finally combined by equal pooling. Results will be discussed over a broader North American region, where individual and MME forecasts generated out to 4 weeks lead are characterized by good probabilistic reliability but low sharpness, exhibiting systematically more skill in winter than summer.
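
    The ELR idea can be sketched in a few lines: a function of the threshold quantile q enters the logistic link as a predictor, so the cumulative probabilities P(y ≤ q) are automatically monotone in q and therefore mutually consistent. The coefficients and tercile boundaries below are invented for illustration.

```python
import math

def elr_prob(ens_mean, q, a=-2.5, b=0.05, c=0.5):
    """P(rainfall <= q) given the ensemble mean, logistic link.
    The sqrt(q) term makes the cumulative probability increase with q."""
    z = a + b * ens_mean + c * math.sqrt(q)
    return 1.0 / (1.0 + math.exp(-z))

# tercile probabilities from the two cumulative thresholds q1 < q2
q1, q2 = 4.0, 12.0              # hypothetical tercile boundaries (mm/week)
p_below = elr_prob(10.0, q1)
p_middle = elr_prob(10.0, q2) - p_below
p_above = 1.0 - elr_prob(10.0, q2)
```

    With separate logistic regressions per threshold, nothing prevents the q1 curve from crossing the q2 curve (negative category probabilities); the shared coefficients with a monotone function of q are what rule this out.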

  15. Probabilistic Nowcasting of Low-Visibility Procedure States at Vienna International Airport During Cold Season

    NASA Astrophysics Data System (ADS)

    Kneringer, Philipp; Dietz, Sebastian J.; Mayr, Georg J.; Zeileis, Achim

    2018-04-01

    Airport operations are sensitive to visibility conditions. Low-visibility events may lead to capacity reductions, delays and economic losses. Different levels of low-visibility procedures (lvp) are enacted to ensure aviation safety. A nowcast of the probabilities for each of the lvp categories helps decision makers to optimally schedule their operations. An ordered logistic regression (OLR) model is used to forecast these probabilities directly. It is applied to cold season forecasts at Vienna International Airport for lead times of 30 min out to 2 h. Model inputs are standard meteorological measurements. The skill of the forecasts is assessed by the ranked probability score. OLR outperforms persistence, which is a strong contender at the shortest lead times. The ranked probability score of the OLR is even better than that of nowcasts from human forecasters. The OLR-based nowcasting system is computationally fast and can be updated instantaneously when new data become available.
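
    An ordered logistic regression produces category probabilities as differences of cumulative logistic probabilities at fitted thresholds. A minimal sketch of that mechanism; the thresholds, coefficient, and predictor value are invented, not the paper's fitted model:

```python
import math

def olr_category_probs(x, thresholds, beta):
    """P(category = k) for ordered categories, via the cumulative
    logistic link P(y <= k) = logistic(threshold_k - beta * x)."""
    cum = [1.0 / (1.0 + math.exp(-(t - beta * x))) for t in thresholds]
    cum = [0.0] + cum + [1.0]
    return [cum[k + 1] - cum[k] for k in range(len(cum) - 1)]

# x could be a standard measurement such as current visibility (km);
# four ordered lvp states require three thresholds
probs = olr_category_probs(x=2.0, thresholds=[-1.0, 0.5, 2.0], beta=0.8)
```

    A single coefficient shifts all cumulative curves together (the proportional-odds assumption), which is what makes the model parsimonious enough to update instantly as new measurements arrive.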

  16. A Diagnostics Tool to detect ensemble forecast system anomaly and guide operational decisions

    NASA Astrophysics Data System (ADS)

    Park, G. H.; Srivastava, A.; Shrestha, E.; Thiemann, M.; Day, G. N.; Draijer, S.

    2017-12-01

    The hydrologic community is moving toward using ensemble forecasts to take uncertainty into account during the decision-making process. The New York City Department of Environmental Protection (DEP) implements several types of ensemble forecasts in its decision-making process: ensemble products for a statistical model (Hirsch and enhanced Hirsch); the National Weather Service (NWS) Advanced Hydrologic Prediction Service (AHPS) forecasts based on the classical Ensemble Streamflow Prediction (ESP) technique; and the new NWS Hydrologic Ensemble Forecasting Service (HEFS) forecasts. To remove structural error and apply the forecasts to additional forecast points, the DEP post-processes both the AHPS and the HEFS forecasts. These ensemble forecasts produce massive quantities of complex data, and drawing conclusions from them is time-consuming and difficult. The complexity of these forecasts also makes it difficult to identify system failures resulting from poor data, missing forecasts, and server breakdowns. To address these issues, we developed a diagnostic tool that summarizes ensemble forecasts and provides additional information such as historical forecast statistics, forecast skill, and model forcing statistics. This additional information highlights the key information that enables operators to evaluate the forecast in real time, dynamically interact with the data, and review additional statistics, if needed, to make better decisions. We used Bokeh, a Python interactive visualization library, and a multi-database management system to create this interactive tool. The tool compiles and stores data into HTML pages that allow operators to readily analyze the data with built-in user-interaction features. This paper will present a brief description of the ensemble forecasts, forecast verification results, and the intended applications for the diagnostic tool.

  17. On the predictability of outliers in ensemble forecasts

    NASA Astrophysics Data System (ADS)

    Siegert, S.; Bröcker, J.; Kantz, H.

    2012-03-01

    In numerical weather prediction, ensembles are used to retrieve probabilistic forecasts of future weather conditions. We consider events where the verification is smaller than the smallest, or larger than the largest ensemble member of a scalar ensemble forecast. These events are called outliers. In a statistically consistent K-member ensemble, outliers should occur with a base rate of 2/(K+1). In operational ensembles this base rate tends to be higher. We study the predictability of outlier events in terms of the Brier Skill Score and find that forecast probabilities can be calculated which are more skillful than the unconditional base rate. This is shown analytically for statistically consistent ensembles. Using logistic regression, forecast probabilities for outlier events in an operational ensemble are calculated. These probabilities exhibit positive skill which is quantitatively similar to the analytical results. Possible causes of these results as well as their consequences for ensemble interpretation are discussed.
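
    The 2/(K+1) base rate quoted above can be checked directly by Monte Carlo: when ensemble members and verification are drawn from the same distribution, each of the K+1 values is equally likely to be the smallest or the largest. A quick sketch:

```python
import random

random.seed(42)
K, trials = 15, 200_000
outliers = 0
for _ in range(trials):
    # statistically consistent ensemble: members and verification
    # drawn from the same distribution
    ens = [random.gauss(0, 1) for _ in range(K)]
    verif = random.gauss(0, 1)
    if verif < min(ens) or verif > max(ens):
        outliers += 1

base_rate = outliers / trials      # expect about 2/(K+1) = 0.125 for K=15
```

    Operational ensembles tend to exceed this rate, which is the under-dispersion the article exploits when predicting outlier events.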

  18. On the use of Bayesian decision theory for issuing natural hazard warnings

    NASA Astrophysics Data System (ADS)

    Economou, T.; Stephenson, D. B.; Rougier, J. C.; Neal, R. A.; Mylne, K. R.

    2016-10-01

    Warnings for natural hazards improve societal resilience and are a good example of decision-making under uncertainty. A warning system is only useful if well defined and thus understood by stakeholders. However, most operational warning systems are heuristic: not formally or transparently defined. Bayesian decision theory provides a framework for issuing warnings under uncertainty but has not been fully exploited. Here, a decision theoretic framework is proposed for hazard warnings. The framework allows any number of warning levels and future states of nature, and a mathematical model for constructing the necessary loss functions for both generic and specific end-users is described. The approach is illustrated using one-day ahead warnings of daily severe precipitation over the UK, and compared to the current decision tool used by the UK Met Office. A probability model is proposed to predict precipitation, given ensemble forecast information, and loss functions are constructed for two generic stakeholders: an end-user and a forecaster. Results show that the Met Office tool issues fewer high-level warnings compared with our system for the generic end-user, suggesting the former may not be suitable for risk averse end-users. In addition, raw ensemble forecasts are shown to be unreliable and result in higher losses from warnings.
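
    The decision-theoretic rule described above reduces, at its core, to choosing the warning level that minimizes expected loss under the forecast probabilities. A minimal sketch; the states, warning levels, and loss-matrix values are hypothetical, not the paper's fitted loss functions:

```python
def best_warning(probs, loss):
    """probs[j]: probability of hazard state j;
    loss[w][j]: loss of issuing warning level w when state j occurs.
    Returns the expected-loss-minimizing level and all expected losses."""
    expected = [sum(p * l for p, l in zip(probs, row)) for row in loss]
    return min(range(len(loss)), key=expected.__getitem__), expected

# states: [no event, moderate, severe]; warnings: [none, yellow, red]
loss = [[0, 5, 20],     # no warning: costly if an event occurs
        [1, 1, 8],      # yellow: small fixed cost, mitigates losses
        [3, 2, 2]]      # red: larger cost, best protection if severe
level, expected = best_warning([0.6, 0.3, 0.1], loss)
```

    The framework's transparency comes from this structure: disagreements about when to warn become explicit disagreements about loss-matrix entries, which differ between a risk-averse end-user and a forecaster worried about false alarms.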

  19. On the use of Bayesian decision theory for issuing natural hazard warnings.

    PubMed

    Economou, T; Stephenson, D B; Rougier, J C; Neal, R A; Mylne, K R

    2016-10-01

    Warnings for natural hazards improve societal resilience and are a good example of decision-making under uncertainty. A warning system is only useful if well defined and thus understood by stakeholders. However, most operational warning systems are heuristic: not formally or transparently defined. Bayesian decision theory provides a framework for issuing warnings under uncertainty but has not been fully exploited. Here, a decision theoretic framework is proposed for hazard warnings. The framework allows any number of warning levels and future states of nature, and a mathematical model for constructing the necessary loss functions for both generic and specific end-users is described. The approach is illustrated using one-day ahead warnings of daily severe precipitation over the UK, and compared to the current decision tool used by the UK Met Office. A probability model is proposed to predict precipitation, given ensemble forecast information, and loss functions are constructed for two generic stakeholders: an end-user and a forecaster. Results show that the Met Office tool issues fewer high-level warnings compared with our system for the generic end-user, suggesting the former may not be suitable for risk averse end-users. In addition, raw ensemble forecasts are shown to be unreliable and result in higher losses from warnings.

  20. On the use of Bayesian decision theory for issuing natural hazard warnings

    PubMed Central

    Stephenson, D. B.; Rougier, J. C.; Neal, R. A.; Mylne, K. R.

    2016-01-01

    Warnings for natural hazards improve societal resilience and are a good example of decision-making under uncertainty. A warning system is only useful if well defined and thus understood by stakeholders. However, most operational warning systems are heuristic: not formally or transparently defined. Bayesian decision theory provides a framework for issuing warnings under uncertainty but has not been fully exploited. Here, a decision theoretic framework is proposed for hazard warnings. The framework allows any number of warning levels and future states of nature, and a mathematical model for constructing the necessary loss functions for both generic and specific end-users is described. The approach is illustrated using one-day ahead warnings of daily severe precipitation over the UK, and compared to the current decision tool used by the UK Met Office. A probability model is proposed to predict precipitation, given ensemble forecast information, and loss functions are constructed for two generic stakeholders: an end-user and a forecaster. Results show that the Met Office tool issues fewer high-level warnings compared with our system for the generic end-user, suggesting the former may not be suitable for risk averse end-users. In addition, raw ensemble forecasts are shown to be unreliable and result in higher losses from warnings. PMID:27843399

  1. Near Real Time Tools for ISS Plasma Science and Engineering Applications

    NASA Technical Reports Server (NTRS)

    Minow, Joseph I.; Willis, Emily M.; Parker, Linda Neergaard; Shim, Ja Soon; Kuznetsova, Maria M.; Pulkkinen, Antti A.

    2013-01-01

    The International Space Station (ISS) program utilizes a plasma environment forecast for estimating electrical charging hazards for crews during extravehicular activity (EVA). The process uses ionospheric electron density (Ne) and temperature (Te) measurements from the ISS Floating Potential Measurement Unit (FPMU) instrument suite with the assumption that plasma conditions will remain constant for one to fourteen days, with a low probability of a space weather event that would significantly change the environment before an EVA. FPMU data are typically not available during EVAs; therefore, the most recent FPMU data available for characterizing the state of the ionosphere during an EVA are typically from a day or two before the EVA starts or after it has been completed. Three near real time space weather tools under development for ISS applications are described here: (a) Ne from ground-based ionosonde measurements of foF2; (b) Ne from near real time satellite radio occultation measurements of electron density profiles; and (c) Ne and Te from a physics-based ionosphere model. These applications are used to characterize the ISS space plasma environment during EVA periods when FPMU data are not available, to monitor for large changes in ionosphere density that could render the ionosphere forecast and plasma hazard assessment invalid, and to validate the "persistence of conditions" forecast assumption. In addition, the tools are useful for providing space environment input to science payloads on ISS and to anomaly investigations during periods when the FPMU is not operating.

  2. Communicating weather forecast uncertainty: Do individual differences matter?

    PubMed

    Grounds, Margaret A; Joslyn, Susan L

    2018-03-01

    Research suggests that people make better weather-related decisions when they are given numeric probabilities for critical outcomes (Joslyn & Leclerc, 2012, 2013). However, it is unclear whether all users can take advantage of probabilistic forecasts to the same extent. The research reported here assessed key cognitive and demographic factors to determine their relationship to the use of probabilistic forecasts to improve decision quality. In two studies, participants decided between spending resources to prevent icy conditions on roadways or risk a larger penalty when freezing temperatures occurred. Several forecast formats were tested, including a control condition with the night-time low temperature alone and experimental conditions that also included the probability of freezing and advice based on expected value. All but those with extremely low numeracy scores made better decisions with probabilistic forecasts. Importantly, no groups made worse decisions when probabilities were included. Moreover, numeracy was the best predictor of decision quality, regardless of forecast format, suggesting that the advantage may extend beyond understanding the forecast to general decision strategy issues. This research adds to a growing body of evidence that numerical uncertainty estimates may be an effective way to communicate weather danger to general public end users. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
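The "advice based on expected value" condition reduces to a one-line comparison: act when probability times avoided loss exceeds the cost of acting. A sketch with hypothetical costs (not the study's actual payoffs):

```python
def should_protect(p_freeze, cost_protect=250.0, loss_freeze=1000.0):
    """Expected-value advice: treat the roads when the expected loss of
    doing nothing (probability x penalty) exceeds the cost of protecting.
    The dollar figures are illustrative placeholders."""
    return p_freeze * loss_freeze > cost_protect
```

With these numbers the break-even probability is 0.25, so a 30% chance of freezing triggers protection while a 20% chance does not.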

  3. The Forecast Interpretation Tool—a Monte Carlo technique for blending climatic distributions with probabilistic forecasts

    USGS Publications Warehouse

    Husak, Gregory J.; Michaelsen, Joel; Kyriakidis, P.; Verdin, James P.; Funk, Chris; Galu, Gideon

    2011-01-01

Probabilistic forecasts are produced by a variety of outlets to help predict rainfall and other meteorological events for periods of 1 month or more. Such forecasts are expressed as probabilities of a rainfall event, e.g. being in the upper, middle, or lower third of the relevant distribution of rainfall in the region. The impact of these forecasts on the expectation for the event is not always clear or easily conveyed. This article proposes a technique based on Monte Carlo simulation for adjusting existing climatologic statistical parameters to match forecast information, resulting in new parameters defining the probability of events for the forecast interval. The resulting parameters are shown to approximate the forecasts with reasonable accuracy. To show the value of the technique as an application for seasonal rainfall, it is used with the consensus forecast developed for the Greater Horn of Africa for the 2009 March-April-May season. An alternative, analytical approach is also proposed and discussed in comparison with the simulation-based technique.
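One way to realize the Monte Carlo idea described above, under the simplifying assumption that the tercile forecast just reweights the three climatological thirds, is to resample a climatological sample with the forecast tercile probabilities and refit summary parameters. This is a sketch, not the article's exact procedure (which adjusts the full climatological distribution parameters):

```python
import random
import statistics

def blend_climatology(samples_clim, p_below, p_normal, p_above,
                      n=10000, seed=1):
    """Monte Carlo sketch: resample a climatological sample so that the
    lower/middle/upper terciles occur with the forecast probabilities
    (assumed to sum to 1), then refit mean and standard deviation."""
    rng = random.Random(seed)
    s = sorted(samples_clim)
    t1 = s[len(s) // 3]          # lower tercile boundary
    t2 = s[(2 * len(s)) // 3]    # upper tercile boundary
    below = [x for x in s if x <= t1]
    normal = [x for x in s if t1 < x <= t2]
    above = [x for x in s if x > t2]
    out = []
    for _ in range(n):
        u = rng.random()
        if u < p_below:
            out.append(rng.choice(below))
        elif u < p_below + p_normal:
            out.append(rng.choice(normal))
        else:
            out.append(rng.choice(above))
    return statistics.mean(out), statistics.stdev(out)
```

A forecast tilted toward the dry tercile, e.g. (0.5, 0.3, 0.2), pulls the refitted mean below the climatological mean, which is exactly the shift in expectation the technique is meant to convey.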

  4. Avoiding the ensemble decorrelation problem using member-by-member post-processing

    NASA Astrophysics Data System (ADS)

    Van Schaeybroeck, Bert; Vannitsem, Stéphane

    2014-05-01

Forecast calibration or post-processing has become a standard tool in atmospheric and climatological science due to the presence of systematic initial condition and model errors. For ensemble forecasts the most competitive methods derive from the assumption of a fixed ensemble distribution. However, when such 'statistical' methods are applied independently at different locations, lead times, or for multiple variables, the correlation structure of the individual ensemble members is destroyed. Instead of re-establishing the correlation structure as in Schefzik et al. (2013), we propose a calibration method that avoids this problem by correcting each ensemble member individually. Moreover, we analyse the fundamental mechanisms by which the probabilistic ensemble skill can be enhanced. In terms of continuous ranked probability score, our member-by-member approach amounts to a skill gain that extends to lead times far beyond the error doubling time and that is as good as that of the most competitive statistical approach, non-homogeneous Gaussian regression (Gneiting et al. 2005). Besides the conservation of the correlation structure, additional benefits arise, including the fact that higher-order ensemble moments like kurtosis and skewness are inherited from the uncorrected forecasts. Our detailed analysis is performed in the context of the Kuramoto-Sivashinsky equation and different simple models, but the results extend successfully to the ensemble forecast of the European Centre for Medium-Range Weather Forecasts (Van Schaeybroeck and Vannitsem, 2013, 2014). References [1] Gneiting, T., Raftery, A. E., Westveld, A., Goldman, T., 2005: Calibrated probabilistic forecasting using ensemble model output statistics and minimum CRPS estimation. Mon. Weather Rev. 133, 1098-1118. [2] Schefzik, R., T.L. Thorarinsdottir, and T. Gneiting, 2013: Uncertainty Quantification in Complex Simulation Models Using Ensemble Copula Coupling. To appear in Statistical Science 28.
[3] Van Schaeybroeck, B., and S. Vannitsem, 2013: Reliable probabilities through statistical post-processing of ensemble forecasts. Proceedings of the European Conference on Complex Systems 2012, Springer proceedings on complexity, XVI, p. 347-352. [4] Van Schaeybroeck, B., and S. Vannitsem, 2014: Ensemble post-processing using member-by-member approaches: theoretical aspects, under review.
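The member-by-member idea can be illustrated with a linear correction applied to every member around the ensemble mean: because each member receives the same affine map, the members' rank order, and hence their correlation structure across locations and variables, is preserved. The coefficients a, b and the spread factor tau are assumed to have been fitted beforehand (e.g. by minimizing CRPS); the form below is a sketch, not the authors' exact formulation:

```python
def mbm_correct(members, a, b, tau):
    """Member-by-member sketch: shift/scale the ensemble mean with
    pre-fitted coefficients a and b, and rescale each member's deviation
    from the mean by the spread factor tau. Rank order is preserved."""
    m = sum(members) / len(members)
    return [a + b * m + tau * (x - m) for x in members]
```

For members [1, 2, 3] with a = 0, b = 1, tau = 2 the output is [0.0, 2.0, 4.0]: the ensemble mean is untouched while the spread doubles, which is how under-dispersive ensembles are widened without scrambling member identity.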

  5. Severe rainfall prediction systems for civil protection purposes

    NASA Astrophysics Data System (ADS)

    Comellas, A.; Llasat, M. C.; Molini, L.; Parodi, A.; Siccardi, F.

    2010-09-01

One of the most common natural hazards threatening Mediterranean regions is the occurrence of severe weather structures able to produce heavy rainfall. Floods have killed about 1000 people across Europe in the last 10 years. With the aim of mitigating this kind of risk, quantitative precipitation forecasts (QPF) and rain probability forecasts are two tools nowadays available to national meteorological services and institutions responsible for weather forecasting in order to predict rainfall, using either the deterministic or the probabilistic approach. This study provides insight into the different approaches used by the Italian (DPC) and Catalonian (SMC) Civil Protection services and the results they achieved with their respective systems for issuing early warnings. For the former, the analysis considers the period 2006-2009, in which the predictive ability of the forecasting system, based on the numerical weather prediction model COSMO-I7, has been compared with ground-based observations (from more than 2000 raingauge stations; Molini et al., 2009). The Italian system is mainly focused on regional-scale warnings, providing forecasts for periods never shorter than 18 hours and very often with a 36-hour maximum duration. The information contained in severe weather bulletins is not quantitative and usually refers to specific meteorological phenomena (thunderstorms, wind gales, etc.). Updates and refinements have a usual refresh time of 24 hours. SMC operates within the Catalonian boundaries and uses a warning system that mixes both quantitative and probabilistic information. For each administrative region ("comarca") Catalonia is divided into, forecasters give an approximate value of the average predicted rainfall and the probability of exceeding that threshold. Usually warnings are re-issued every 6 hours and their duration depends on the predicted time extent of the storm.
In order to provide a comprehensive QPF verification, the rainfall predicted by Mesoscale Model 5 (MM5), the SMC operational forecast model, is compared with the local rain gauge network for the year 2008 (Comellas et al., 2010). This study presents benefits and drawbacks of both the Italian and Catalonian systems. Moreover, particular attention is paid to the link between each system's predictive ability and the predicted severe weather type as a function of its space-time development.

  6. Situational Lightning Climatologies for Central Florida: Phase IV: Central Florida Flow Regime Based Climatologies of Lightning Probabilities

    NASA Technical Reports Server (NTRS)

    Bauman, William H., III

    2009-01-01

The threat of lightning is a daily concern during the warm season in Florida. Research has revealed distinct spatial and temporal distributions of lightning occurrence that are strongly influenced by large-scale atmospheric flow regimes. Previously, the Applied Meteorology Unit (AMU) calculated gridded lightning climatologies based on seven flow regimes over Florida for 1-, 3-, and 6-hr intervals in 5-, 10-, 20-, and 30-NM diameter range rings around the Shuttle Landing Facility (SLF) and eight other airfields in the National Weather Service in Melbourne (NWS MLB) county warning area (CWA). In this update to the work, the AMU recalculated the lightning climatologies using individual lightning strike data to improve the accuracy of the climatologies. The AMU included all data regardless of flow regime as one of the stratifications, added monthly stratifications, added three years of data to the period of record, and used modified flow regimes based on work from the AMU's Objective Lightning Probability Forecast Tool, Phase II. The AMU made changes so the 5- and 10-NM radius range rings are consistent with the aviation forecast requirements at NWS MLB, while the 20- and 30-NM radius range rings at the SLF assist the Spaceflight Meteorology Group in making forecasts for weather Flight Rule violations during Shuttle landings. The AMU also updated the graphical user interface with the new data.
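Climatologies of this kind are, at bottom, conditional relative frequencies: for each stratification (flow regime, month, hour interval, range ring), the probability is the count of days with lightning divided by the total days in that stratum. A minimal sketch of the flow-regime stratification alone, with hypothetical regime labels:

```python
from collections import defaultdict

def lightning_climatology(days):
    """days: iterable of (flow_regime, lightning_occurred) pairs, one per
    day. Returns the relative-frequency probability of lightning for each
    regime; in practice the same counting runs per ring and interval."""
    hit = defaultdict(int)
    tot = defaultdict(int)
    for regime, occurred in days:
        tot[regime] += 1
        hit[regime] += bool(occurred)
    return {r: hit[r] / tot[r] for r in tot}
```

The "all data regardless of flow regime" stratification mentioned above corresponds to running the same count with a single shared key.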

  7. Anvil Tool in the Advanced Weather Interactive Processing System

    NASA Technical Reports Server (NTRS)

    Barrett, Joe, III; Bauman, William, III; Keen, Jeremy

    2007-01-01

Meteorologists from the 45th Weather Squadron (45 WS) and Spaceflight Meteorology Group (SMG) have identified anvil forecasting as one of their most challenging tasks when predicting the probability of violations of the lightning Launch Commit Criteria and Space Shuttle Flight Rules. As a result, the Applied Meteorology Unit (AMU) created a graphical overlay tool for the Meteorological Interactive Data Display Systems (MIDDS) to indicate the threat of thunderstorm anvil clouds, using either observed or model forecast winds as input. In order for the Anvil Tool to remain available to the meteorologists, the AMU was tasked to transition the tool to the Advanced Weather Interactive Processing System (AWIPS). This report describes the work done by the AMU to develop the Anvil Tool for AWIPS to create a graphical overlay depicting the threat from thunderstorm anvil clouds. The AWIPS Anvil Tool is based on the previously deployed AMU MIDDS Anvil Tool. SMG and 45 WS forecasters have used the MIDDS Anvil Tool during launch and landing operations. SMG's primary weather analysis and display system is now AWIPS, and the 45 WS has plans to replace MIDDS with AWIPS. The Anvil Tool creates a graphic that users can overlay on satellite or radar imagery to depict the potential location of thunderstorm anvils one, two, and three hours into the future. The locations are based on an average of the upper-level observed or forecast winds. The graphic includes 10- and 20-NM standoff circles centered at the location of interest, in addition to one-, two-, and three-hour arcs in the upwind direction. The arcs extend outward across a 30-degree sector width based on a previous AMU study which determined that thunderstorm anvils move in a direction plus or minus 15 degrees of the upper-level (300- to 150-mb) wind direction. This report briefly describes the history of the MIDDS Anvil Tool and then explains how the initial development of the AWIPS Anvil Tool was carried out.
After testing was performed by SMG, 45 WS, and the AMU, a number of needed improvements were identified. A bug report document was created that tracked the status of each bug and desired improvement. This report lists the improvements that were made to increase the accuracy and user-friendliness of the tool. Final testing was carried out and documented, and the final version of the software and User's Guide was then provided to SMG and the 45 WS. Several possible future improvements are identified that would increase the flexibility of the tool. This report contains a brief history of the development of the Anvil Tool in MIDDS and then describes the transition and development of the software for AWIPS.
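The arc geometry described above (upwind distance proportional to lead time, and a 30-degree sector, i.e. plus or minus 15 degrees about the upwind bearing) can be sketched in planar form. The function and its tuple layout are hypothetical, not the AWIPS implementation:

```python
def anvil_arcs(wind_dir_deg, wind_speed_kt, hours=(1, 2, 3), half_width_deg=15):
    """Planar sketch of the anvil graphic: for each lead time, return the
    upwind arc distance in nautical miles and its left/center/right
    bearings. The meteorological 'from' direction of the average
    upper-level wind is itself the upwind bearing, so anvils that could
    reach the site within h hours originate along these arcs."""
    arcs = []
    for h in hours:
        dist = wind_speed_kt * h   # nautical miles covered in h hours
        arcs.append((dist,
                     (wind_dir_deg - half_width_deg) % 360,
                     wind_dir_deg % 360,
                     (wind_dir_deg + half_width_deg) % 360))
    return arcs
```

For a 40-knot westerly (from 270 degrees), the one-hour arc sits 40 NM away spanning bearings 255-285, and the three-hour arc 120 NM away over the same sector.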

  8. The Value, Protocols, and Scientific Ethics of Earthquake Forecasting

    NASA Astrophysics Data System (ADS)

    Jordan, Thomas H.

    2013-04-01

    Earthquakes are different from other common natural hazards because precursory signals diagnostic of the magnitude, location, and time of impending seismic events have not yet been found. Consequently, the short-term, localized prediction of large earthquakes at high probabilities with low error rates (false alarms and failures-to-predict) is not yet feasible. An alternative is short-term probabilistic forecasting based on empirical statistical models of seismic clustering. During periods of high seismic activity, short-term earthquake forecasts can attain prospective probability gains up to 1000 relative to long-term forecasts. The value of such information is by no means clear, however, because even with hundredfold increases, the probabilities of large earthquakes typically remain small, rarely exceeding a few percent over forecasting intervals of days or weeks. Civil protection agencies have been understandably cautious in implementing operational forecasting protocols in this sort of "low-probability environment." This paper will explore the complex interrelations among the valuation of low-probability earthquake forecasting, which must account for social intangibles; the protocols of operational forecasting, which must factor in large uncertainties; and the ethics that guide scientists as participants in the forecasting process, who must honor scientific principles without doing harm. Earthquake forecasts possess no intrinsic societal value; rather, they acquire value through their ability to influence decisions made by users seeking to mitigate seismic risk and improve community resilience to earthquake disasters. According to the recommendations of the International Commission on Earthquake Forecasting (www.annalsofgeophysics.eu/index.php/annals/article/view/5350), operational forecasting systems should appropriately separate the hazard-estimation role of scientists from the decision-making role of civil protection authorities and individuals. 
They should provide public sources of information on short-term probabilities that are authoritative, scientific, open, and timely. Alert procedures should be negotiated with end-users to facilitate decisions at different levels of society, based in part on objective analysis of costs and benefits but also on less tangible aspects of value-of-information, such as gains in psychological preparedness and resilience. Unfortunately, in most countries, operational forecasting systems do not conform to such high standards, and earthquake scientists are often called upon to advise the public in roles that exceed their civic authority, expertise in risk communication, and situational knowledge. Certain ethical principles are well established; e.g., announcing unreliable predictions in public forums should be avoided, because bad information can be dangerous. But what are the professional responsibilities of earthquake scientists during seismic crises, especially when the public information through official channels is thought to be inadequate or incorrect? How much should these responsibilities be discounted in the face of personal liability? How should scientists contend with highly uncertain forecasts? To what degree should the public be involved in controversies about forecasting results? No simple answers to these questions can be offered, but the need for answers can be reduced by improving operational forecasting systems. This will require more substantial, and more trustful, collaborations between scientists, civil authorities, and public stakeholders.
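The scale of the "low-probability environment" is easy to reproduce: even a hundredfold probability gain over a small base rate leaves an absolute probability of only a few percent. The numbers below are illustrative only, not taken from any specific hazard model:

```python
# Illustrative arithmetic: a large probability gain applied to a tiny
# long-term base rate still yields a small absolute probability.
base_rate_weekly = 1e-4   # hypothetical long-term weekly probability
gain = 100                # short-term probability gain from clustering
p_short_term = base_rate_weekly * gain   # 0.01, i.e. 1%
```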

  9. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Carson, K.S.

The presence of overpopulation or unsustainable population growth may place pressure on the food and water supplies of countries in sensitive areas of the world. Severe air or water pollution may place additional pressure on these resources. These pressures may generate both internal and international conflict in these areas as nations struggle to provide for their citizens. Such conflicts may result in United States intervention, either unilaterally or through the United Nations. Therefore, it is in the interests of the United States to identify potential areas of conflict in order to properly train and allocate forces. The purpose of this research is to forecast the probability of conflict in a nation as a function of its environmental conditions. Probit, logit, and ordered probit models are employed to forecast the probability of a given level of conflict. Data from 95 countries are used to estimate the models. Probability forecasts are generated for these 95 nations. Out-of-sample forecasts are generated for an additional 22 nations. These probabilities are then used to rank nations from highest probability of conflict to lowest. The results indicate that the dependence of a nation's economy on agriculture, the rate of deforestation, and the population density are important variables in forecasting the probability and level of conflict. These results indicate that environmental variables do play a role in generating or exacerbating conflict. It is unclear that the United States military has any direct role in mitigating the environmental conditions that may generate conflict. A more important role for the military is to aid in data gathering to generate better forecasts so that troops are adequately prepared when conflict arises.

  10. Relating Tropical Cyclone Track Forecast Error Distributions with Measurements of Forecast Uncertainty

    DTIC Science & Technology

    2016-03-01

…cyclone; THORPEX – The Observing System Research and Predictability Experiment; TIGGE – THORPEX Interactive Grand Global Ensemble; TS – tropical storm. "…forecast possible, but also relay the level of uncertainty unique to a given storm. This will better inform decision makers to help protect all assets at…" "…for any given storm. Thus, the probabilities may increase or decrease (and the probability swath may widen or narrow) to provide a more…"

  11. Development and application of an atmospheric-hydrologic-hydraulic flood forecasting model driven by TIGGE ensemble forecasts

    NASA Astrophysics Data System (ADS)

    Bao, Hongjun; Zhao, Linna

    2012-02-01

A coupled atmospheric-hydrologic-hydraulic ensemble flood forecasting model, driven by The Observing System Research and Predictability Experiment (THORPEX) Interactive Grand Global Ensemble (TIGGE) data, has been developed for flood forecasting over the Huaihe River. The incorporation of numerical weather prediction (NWP) information into flood forecasting systems may increase forecast lead time from a few hours to a few days. A single NWP model forecast from a single forecast center, however, is insufficient as it involves considerable non-predictable uncertainties and leads to a high number of false alarms. The availability of global ensemble NWP systems through TIGGE offers a new opportunity for flood forecasting. The Xinanjiang model used for hydrological rainfall-runoff modeling and the one-dimensional unsteady flow model applied to channel flood routing are coupled with ensemble weather predictions based on the TIGGE data from the Canadian Meteorological Centre (CMC), the European Centre for Medium-Range Weather Forecasts (ECMWF), the UK Met Office (UKMO), and the US National Centers for Environmental Prediction (NCEP). The developed ensemble flood forecasting model is applied to flood forecasting of the 2007 flood season as a test case, chosen over the upper reaches of the Huaihe River above Lutaizi station with flood diversion and retarding areas. The input flood discharge hydrograph from the main channel to the flood diversion area is estimated with a fixed split ratio of the main channel discharge. The flood flow inside the flood retarding area is calculated as a reservoir with the water balance method. The Muskingum method is used for flood routing in the flood diversion area. A probabilistic discharge and flood inundation forecast is provided as the end product to study the potential benefits of using the TIGGE ensemble forecasts.
The results demonstrate satisfactory flood forecasting with clear signals of probability of floods up to a few days in advance, and show that TIGGE ensemble forecast data are a promising tool for forecasting of flood inundation, comparable with that driven by raingauge observations.
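The Muskingum routing used for the flood diversion area follows the classic recurrence O2 = C0*I2 + C1*I1 + C2*O1, with coefficients derived from the storage constant K and weighting factor X. A standard textbook implementation (the parameter values in the example are not from the study):

```python
def muskingum_route(inflow, K, X, dt, O0=None):
    """Classic Muskingum channel routing. K: storage constant [h],
    X: weighting factor (0-0.5), dt: time step [h]. The three
    coefficients sum to 1, so steady inflow passes through unchanged."""
    D = 2 * K * (1 - X) + dt
    c0 = (dt - 2 * K * X) / D
    c1 = (dt + 2 * K * X) / D
    c2 = (2 * K * (1 - X) - dt) / D
    out = [inflow[0] if O0 is None else O0]
    for i in range(1, len(inflow)):
        out.append(c0 * inflow[i] + c1 * inflow[i - 1] + c2 * out[-1])
    return out
```

In the ensemble setting, each TIGGE-driven rainfall-runoff realization supplies its own inflow hydrograph, and routing each one yields the spread from which exceedance probabilities are read off.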

  12. Gaussian and Lognormal Models of Hurricane Gust Factors

    NASA Technical Reports Server (NTRS)

    Merceret, Frank

    2009-01-01

A document describes a tool that predicts the likelihood of land-falling tropical storms and hurricanes exceeding specified peak speeds, given the mean wind speed at heights of up to 500 feet (150 meters) above ground level. Empirical models to calculate the mean and standard deviation of the gust factor as a function of height and mean wind speed were developed in Excel based on data from previous hurricanes. Separate models were developed for Gaussian and offset lognormal distributions of the gust factor. Rather than forecasting a single, specific peak wind speed, this tool provides the probability of exceeding a specified value. This probability is provided as a function of height, allowing it to be applied at a height appropriate for tall structures. The user inputs the mean wind speed, height, and operational threshold. The tool produces the probability from each model that the given threshold will be exceeded. This application does have limits: the models were tested only in tropical storm conditions associated with the periphery of hurricanes. Winds of similar speed produced by non-tropical systems may have different turbulence dynamics and stability, which may change the statistical characteristics of those winds. These models were developed along the Central Florida seacoast, and their results may not accurately extrapolate to inland areas, or even to coastal sites that differ from those used to build the models. Although this tool cannot be generalized for use in different environments, its methodology could be applied to those locations to develop a similar tool tuned to local conditions.
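For the Gaussian variant, the tool's output reduces to a normal tail probability: the peak wind (mean wind times gust factor) exceeds a threshold exactly when the gust factor exceeds threshold divided by mean wind. A sketch, with gf_mean and gf_std standing in for the empirical height- and speed-dependent model:

```python
import math

def prob_peak_exceeds(mean_wind, threshold, gf_mean, gf_std):
    """Gaussian gust-factor sketch: probability that the peak wind
    (mean_wind * gust_factor) exceeds `threshold`, where the gust factor
    is normal with the given mean and standard deviation (which the
    empirical model would supply as functions of height and speed)."""
    z = (threshold / mean_wind - gf_mean) / gf_std
    return 0.5 * math.erfc(z / math.sqrt(2))   # upper tail: 1 - Phi(z)
```

With a 30-knot mean wind and a gust-factor distribution of mean 1.5 and standard deviation 0.1, a 45-knot threshold sits exactly at the median peak (probability 0.5), while a 60-knot threshold is five standard deviations out and essentially never exceeded. The offset lognormal variant only changes the distribution applied to the same ratio.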

  13. Objective Lightning Probability Forecasting for Kennedy Space Center and Cape Canaveral Air Force Station, Phase III

    NASA Technical Reports Server (NTRS)

    Crawford, Winifred C.

    2010-01-01

The AMU created new logistic regression equations in an effort to increase the skill of the Objective Lightning Forecast Tool developed in Phase II (Lambert 2007). One equation was created for each of five sub-seasons based on the daily lightning climatology instead of by month as was done in Phase II. The assumption was that these equations would capture the physical attributes that contribute to thunderstorm formation better than monthly equations. However, the SS values in Section 5.3.2 showed that the Phase III equations had worse skill than the Phase II equations and, therefore, will not be transitioned into operations. The current Objective Lightning Forecast Tool developed in Phase II will continue to be used operationally in MIDDS. Three warm seasons were added to the Phase II dataset to increase the POR from 17 to 20 years (1989-2008), and data for October were included since the daily climatology showed lightning occurrence extending into that month. None of the three methods tested to determine the start of the sub-seasons in each individual year was able to discern the start dates with consistent accuracy. Therefore, the start dates were determined by the daily climatology shown in Figure 10 and were the same in every year. The procedures used to create the predictors and develop the equations were identical to those in Phase II. The equations were made up of one to three predictors. TI and the flow regime probabilities were the top predictors, followed by 1-day persistence, then VT and LI. Each equation outperformed four other forecast methods by 7-57% using the verification dataset, but the new equations were outperformed by the Phase II equations in every sub-season. The reason for the degradation may be that the same sub-season start dates were used in every year.
It is likely there was overlap of sub-season days at the beginning and end of each defined sub-season in each individual year, which could very well affect equation performance.
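Skill scores of the kind quoted (SS) measure the fractional improvement of one probability forecast over a reference such as persistence or climatology. One common choice for yes/no lightning probabilities, assumed here purely for illustration, is the Brier-based form SS = 1 - BS/BS_ref:

```python
def brier_score(probs, outcomes):
    """Mean squared error of probability forecasts against 0/1 outcomes."""
    return sum((p - o) ** 2 for p, o in zip(probs, outcomes)) / len(probs)

def skill_score(probs, ref_probs, outcomes):
    """Skill of `probs` relative to a reference forecast: 1 is perfect,
    0 matches the reference, negative values are worse than it."""
    return 1 - brier_score(probs, outcomes) / brier_score(ref_probs, outcomes)
```

A comparison like "the Phase III equations had worse skill than the Phase II equations" corresponds to a lower value of this score against the same verification outcomes.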

  14. The multi temporal/multi-model approach to predictive uncertainty assessment in real-time flood forecasting

    NASA Astrophysics Data System (ADS)

    Barbetta, Silvia; Coccia, Gabriele; Moramarco, Tommaso; Brocca, Luca; Todini, Ezio

    2017-08-01

This work extends the multi-temporal approach of the Model Conditional Processor (MCP-MT) to the multi-model case and to the four Truncated Normal Distributions (TNDs) approach, demonstrating the improvement over the single-temporal approach. The study is framed in the context of probabilistic Bayesian decision-making, which is appropriate for taking rational decisions on uncertain future outcomes. As opposed to the direct use of deterministic forecasts, the probabilistic forecast identifies a predictive probability density function that represents fundamental knowledge on future occurrences. The added value of MCP-MT is the identification of the probability that a critical situation will happen within the forecast lead time and when, most likely, it will occur. MCP-MT is thoroughly tested for both single-model and multi-model configurations at a gauged site on the Tiber River, central Italy. The stages forecast by two operational deterministic models, STAFOM-RCM and MISDc, are considered for the study. The dataset used for the analysis consists of hourly data from 34 flood events selected from a six-year time series. MCP-MT improves over the original models' forecasts: the peak overestimation and the delayed forecast of the rising limb, characterizing MISDc and STAFOM-RCM respectively, are significantly mitigated, with the mean error on peak stage reduced from 45 to 5 cm and the coefficient of persistence increased from 0.53 up to 0.75. The results show that MCP-MT outperforms the single-temporal approach and is potentially useful for supporting decision-making, because the exceedance probability of hydrometric thresholds within a forecast horizon and the most probable flooding time can be estimated.
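The coefficient of persistence quoted above compares forecast errors against a naive persistence forecast (the last observation carried forward); values approach 1 as the model beats persistence and drop to 0 when it merely matches it. A minimal sketch of the metric, with the lead time expressed in time steps as a simplification:

```python
def coeff_persistence(obs, fc, lead=1):
    """Coefficient of persistence: 1 minus the forecast squared error
    relative to that of a persistence forecast at the same lead."""
    num = sum((o - f) ** 2 for o, f in zip(obs[lead:], fc[lead:]))
    den = sum((obs[i] - obs[i - lead]) ** 2 for i in range(lead, len(obs)))
    return 1 - num / den
```

A jump from 0.53 to 0.75, as reported for MCP-MT, means the residual squared error relative to persistence roughly halves.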

  15. Real-time eruption forecasting using the material Failure Forecast Method with a Bayesian approach

    NASA Astrophysics Data System (ADS)

    Boué, A.; Lesage, P.; Cortés, G.; Valette, B.; Reyes-Dávila, G.

    2015-04-01

    Many attempts for deterministic forecasting of eruptions and landslides have been performed using the material Failure Forecast Method (FFM). This method consists in adjusting an empirical power law on precursory patterns of seismicity or deformation. Until now, most of the studies have presented hindsight forecasts based on complete time series of precursors and do not evaluate the ability of the method for carrying out real-time forecasting with partial precursory sequences. In this study, we present a rigorous approach of the FFM designed for real-time applications on volcano-seismic precursors. We use a Bayesian approach based on the FFM theory and an automatic classification of seismic events. The probability distributions of the data deduced from the performance of this classification are used as input. As output, it provides the probability of the forecast time at each observation time before the eruption. The spread of the a posteriori probability density function of the prediction time and its stability with respect to the observation time are used as criteria to evaluate the reliability of the forecast. We test the method on precursory accelerations of long-period seismicity prior to vulcanian explosions at Volcán de Colima (Mexico). For explosions preceded by a single phase of seismic acceleration, we obtain accurate and reliable forecasts using approximately 80% of the whole precursory sequence. It is, however, more difficult to apply the method to multiple acceleration patterns.
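The deterministic core of the FFM is simple: for the commonly assumed power-law exponent alpha = 2, the inverse of the precursor rate decays linearly in time, and the forecast failure time is where a fitted line through (t, 1/rate) crosses zero. The Bayesian treatment in the abstract wraps this in probability distributions; the sketch below shows only the deterministic least-squares version:

```python
def ffm_forecast(times, rates):
    """FFM sketch for exponent alpha = 2: least-squares line through
    (t, 1/rate); the forecast eruption/failure time is its zero crossing."""
    inv = [1.0 / r for r in rates]
    n = len(times)
    tm = sum(times) / n
    ym = sum(inv) / n
    slope = (sum((t - tm) * (y - ym) for t, y in zip(times, inv))
             / sum((t - tm) ** 2 for t in times))
    intercept = ym - slope * tm
    return -intercept / slope    # time at which 1/rate reaches zero
```

Re-running this fit as each new observation arrives, and tracking how the predicted time moves, is the real-time stability check the abstract uses as a reliability criterion.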

  16. From multi-disciplinary monitoring observation to probabilistic eruption forecasting: a Bayesian view

    NASA Astrophysics Data System (ADS)

    Marzocchi, W.

    2011-12-01

Eruption forecasting estimates the probability of eruption in a specific time-space-magnitude window. The use of probabilities to track the evolution of a phase of unrest is unavoidable for two main reasons: first, eruptions are intrinsically unpredictable in a deterministic sense, and, second, probabilities represent a quantitative tool that can be used rationally by decision-makers (as is usually done in many other fields). The primary information for the probability assessment during a phase of unrest comes from monitoring data of different quantities, such as seismic activity, ground deformation, geochemical signatures, and so on. Nevertheless, probabilistic forecasts based on monitoring data present two main difficulties. First, many high-risk volcanoes do not have pre-eruptive and unrest monitoring databases, making a probabilistic assessment based on the frequency of past observations impossible. The ongoing project WOVOdat (led by Christopher Newhall) is trying to tackle this limitation by creating a sort of worldwide epidemiological database that may cope with the lack of pre-eruptive and unrest monitoring databases for a specific volcano by using observations of 'analog' volcanoes. Second, the quantity and quality of monitoring data are rapidly increasing at many volcanoes, creating strongly inhomogeneous datasets. In these cases, classical statistical analysis can be performed on high-quality monitoring observations only for (usually too) short periods of time, or alternatively using only the few specific monitoring data that are available for longer times (such as the number of earthquakes), therefore neglecting much of the information carried by the most recent kinds of monitoring. Here, we explore a possible strategy to cope with these limitations. In particular, we present a Bayesian strategy that merges different kinds of information.
In this approach, all relevant monitoring observations are embedded into a probabilistic scheme through expert opinion, conceptual models, and, possibly, real past data. After discussing all scientific and philosophical aspects of such approach, we present some applications for Campi Flegrei and Vesuvius.

  17. Fuzzy forecasting based on two-factors second-order fuzzy-trend logical relationship groups and the probabilities of trends of fuzzy logical relationships.

    PubMed

    Chen, Shyi-Ming; Chen, Shen-Wen

    2015-03-01

    In this paper, we present a new method for fuzzy forecasting based on two-factors second-order fuzzy-trend logical relationship groups and the probabilities of trends of fuzzy-trend logical relationships. Firstly, the proposed method fuzzifies the historical training data of the main factor and the secondary factor into fuzzy sets, respectively, to form two-factors second-order fuzzy logical relationships. Then, it groups the obtained two-factors second-order fuzzy logical relationships into two-factors second-order fuzzy-trend logical relationship groups. Then, it calculates the probability of the "down-trend," the probability of the "equal-trend" and the probability of the "up-trend" of the two-factors second-order fuzzy-trend logical relationships in each two-factors second-order fuzzy-trend logical relationship group, respectively. Finally, it performs the forecasting based on the probabilities of the down-trend, the equal-trend, and the up-trend of the two-factors second-order fuzzy-trend logical relationships in each two-factors second-order fuzzy-trend logical relationship group. We also apply the proposed method to forecast the Taiwan Stock Exchange Capitalization Weighted Stock Index (TAIEX) and the NTD/USD exchange rates. The experimental results show that the proposed method outperforms the existing methods.
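The trend probabilities at the heart of the method are empirical frequencies of "down", "equal", and "up" transitions within each fuzzy-trend logical relationship group. A minimal sketch of that counting step, with group keys abstracted away from the two-factor fuzzification:

```python
from collections import defaultdict

def trend_probabilities(pairs):
    """pairs: iterable of (group_key, trend) with trend in
    {'down', 'equal', 'up'}; group_key stands in for a two-factors
    second-order fuzzy-trend logical relationship group. Returns the
    per-group relative frequency of each trend."""
    counts = defaultdict(lambda: {'down': 0, 'equal': 0, 'up': 0})
    for group, trend in pairs:
        counts[group][trend] += 1
    probs = {}
    for g, c in counts.items():
        total = sum(c.values())
        probs[g] = {k: v / total for k, v in c.items()}
    return probs
```

Forecasting then weights the candidate defuzzified outputs of each group by these probabilities.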

  18. Comparative Analysis of NOAA REFM and SNB3GEO Tools for the Forecast of the Fluxes of High-Energy Electrons at GEO

    NASA Technical Reports Server (NTRS)

    Balikhin, M. A.; Rodriguez, J. V.; Boynton, R. J.; Walker, S. N.; Aryan, Homayon; Sibeck, D. G.; Billings, S. A.

    2016-01-01

    Reliable forecasts of relativistic electrons at geostationary orbit (GEO) are important for the mitigation of their hazardous effects on spacecraft at GEO. For a number of years the Space Weather Prediction Center at NOAA has provided advanced online forecasts of the fluence of electrons with energy >2 MeV at GEO using the Relativistic Electron Forecast Model (REFM). The REFM forecasts are based on real-time solar wind speed observations at L1. The high reliability of this forecasting tool serves as a benchmark for the assessment of other forecasting tools. Since 2012 the Sheffield SNB3GEO model has been operating online, providing a 24 h ahead forecast of the same fluxes. In addition to solar wind speed, the SNB3GEO forecasts use solar wind density and interplanetary magnetic field B(sub z) observations at L1. The period of joint operation of both of these forecasts has been used to compare their accuracy. Daily averaged measurements of electron fluxes by GOES 13 have been used to estimate the prediction efficiency of both forecasting tools. To assess the reliability of both models to forecast infrequent events of very high fluxes, the Heidke skill score was employed. The results obtained indicate that SNB3GEO provides a more accurate 1 day ahead forecast when compared to REFM. It is shown that the correction methodology utilized by REFM potentially can improve the SNB3GEO forecast.
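    The Heidke skill score used above for assessing rare high-flux events is computed from a 2x2 contingency table of forecast versus observed exceedances; a minimal sketch (the counts below are invented for illustration, not the study's verification data):

```python
def heidke_skill_score(hits, false_alarms, misses, correct_negatives):
    """Heidke skill score: fraction of correct forecasts beyond those
    expected by chance (1 = perfect, 0 = no skill, < 0 = worse than
    chance), from the standard 2x2 contingency-table formula."""
    a, b, c, d = hits, false_alarms, misses, correct_negatives
    return 2.0 * (a * d - b * c) / ((a + c) * (c + d) + (a + b) * (b + d))

# Hypothetical daily "flux exceeds threshold" verification counts:
hss = heidke_skill_score(hits=20, false_alarms=5, misses=10, correct_negatives=65)
print(hss)  # 0.625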

  20. NASA Products to Enhance Energy Utility Load Forecasting

    NASA Technical Reports Server (NTRS)

    Lough, G.; Zell, E.; Engel-Cox, J.; Fungard, Y.; Jedlovec, G.; Stackhouse, P.; Homer, R.; Biley, S.

    2012-01-01

    Existing energy load forecasting tools rely upon historical load and forecasted weather to predict load within energy company service areas. The shortcomings of load forecasts are often the result of weather forecasts that are not at a fine enough spatial or temporal resolution to capture local-scale weather events. This project aims to improve the performance of load forecasting tools through the integration of high-resolution, weather-related NASA Earth science data, such as temperature, relative humidity, and wind speed. Three companies are participating in operational testing: one natural gas company and two electricity providers. Operational results comparing load forecasts with and without NASA weather forecasts have been generated since March 2010. We have worked with end users at the three companies to refine the selection of weather forecast information and optimize load forecast model performance. The project will conclude in 2012 by transitioning the documented improvements from including NASA forecasts into sustained use by energy utilities nationwide in a variety of load forecasting tools. In addition, Battelle has consulted with energy companies nationwide to document their information needs for long-term planning in light of climate change and regulatory impacts.

  1. Emergency preparedness: community-based short-term eruption forecasting at Campi Flegrei

    NASA Astrophysics Data System (ADS)

    Selva, Jacopo; Marzocchi, Warner; Civetta, Lucia; Del Pezzo, Edoardo; Papale, Paolo

    2010-05-01

    A key element in emergency preparedness is to define, in advance, tools to assist decision makers and emergency management groups during crises. Such tools must be prepared ahead of time, accounting for all of the expertise and scientific knowledge accumulated over time. During a pre-eruptive phase, the key to sound short-term eruption forecasting is the analysis of the monitoring signals. This involves the capability (i) to recognize anomalous signals and to relate single or combined anomalies to physical processes, assigning them probability values, and (ii) to quickly provide an answer to the observed phenomena even when they are unexpected. Here we present a more than four-year-long process devoted to defining the pre-eruptive Event Tree (ET) for Campi Flegrei. A community of about 40 experts in volcanology and volcano monitoring, participating in two Italian projects on Campi Flegrei funded by the Italian Civil Protection, was constituted and trained during periodic meetings on the statistical methods and on the model BET_EF (Marzocchi et al., 2008) that forms the statistical package for ET definition. Model calibration was carried out through public elicitation sessions, preceded and followed by dedicated meetings and web forum discussions on the monitoring parameters, their accuracy and relevance, and their potential meanings. The calibrated ET allows anomalies in the monitored parameters to be recognized and interpreted, assigning probability values to each set of data. This process de-personalizes the difficult task of interpreting multi-parametric sets of data during ongoing emergencies, and provides a view of the observed variations that reflects the averaged, weighted opinion of the scientific community. An additional positive outcome of the described ET calibration process is that it provides a picture of the expert community's degree of confidence in the capability of the many different monitored quantities to reveal significant variations in the state of the volcano. This picture is particularly useful because it can guide future upgrades of the monitoring network, as well as research investments aimed at substantially improving the capability to forecast short-term volcanic hazard.

  2. Forecasting Emergency Department Crowding: An External, Multi-Center Evaluation

    PubMed Central

    Hoot, Nathan R.; Epstein, Stephen K.; Allen, Todd L.; Jones, Spencer S.; Baumlin, Kevin M.; Chawla, Neal; Lee, Anna T.; Pines, Jesse M.; Klair, Amandeep K.; Gordon, Bradley D.; Flottemesch, Thomas J.; LeBlanc, Larry J.; Jones, Ian; Levin, Scott R.; Zhou, Chuan; Gadd, Cynthia S.; Aronsky, Dominik

    2009-01-01

    Objective To apply a previously described tool to forecast ED crowding at multiple institutions, and to assess its generalizability for predicting the near-future waiting count, occupancy level, and boarding count. Methods The ForecastED tool was validated using historical data from five institutions external to the development site. A sliding-window design separated the data for parameter estimation and forecast validation. Observations were sampled at consecutive 10-minute intervals during 12 months (n = 52,560) at four sites and 10 months (n = 44,064) at the fifth. Three outcome measures – the waiting count, occupancy level, and boarding count – were forecast 2, 4, 6, and 8 hours beyond each observation, and forecasts were compared to observed data at corresponding times. The reliability and calibration were measured following previously described methods. After linear calibration, the forecasting accuracy was measured using the median absolute error (MAE). Results The tool was successfully used for five different sites. Its forecasts were more reliable, better calibrated, and more accurate at 2 hours than at 8 hours. The reliability and calibration of the tool were similar between the original development site and external sites; the boarding count was an exception, which was less reliable at four out of five sites. Some variability in accuracy existed among institutions; when forecasting 4 hours into the future, the MAE of the waiting count ranged between 0.6 and 3.1 patients, the MAE of the occupancy level ranged between 9.0 and 14.5% of beds, and the MAE of the boarding count ranged between 0.9 and 2.7 patients. Conclusion The ForecastED tool generated potentially useful forecasts of input and throughput measures of ED crowding at five external sites, without modifying the underlying assumptions. Noting the limitation that this was not a real-time validation, ongoing research will focus on integrating the tool with ED information systems. PMID:19716629

  3. MAG4 Versus Alternative Techniques for Forecasting Active-Region Flare Productivity

    NASA Technical Reports Server (NTRS)

    Falconer, David A.; Moore, Ronald L.; Barghouty, Abdulnasser F.; Khazanov, Igor

    2014-01-01

    MAG4 (Magnetogram Forecast), developed originally for NASA's Space Radiation Analysis Group (SRAG), is an automated program that analyzes magnetograms from the HMI (Helioseismic and Magnetic Imager) instrument on NASA's SDO (Solar Dynamics Observatory) and automatically forecasts the rate (or probability) of major flares (M- and X-class), coronal mass ejections (CMEs), and solar energetic particle events. MAG4 does not forecast that a flare will occur at a particular time in the next 24 or 48 hours; rather, it forecasts the probability of one occurring.

  4. Medium range forecasting of Hurricane Harvey flash flooding using ECMWF and social vulnerability data

    NASA Astrophysics Data System (ADS)

    Pillosu, F. M.; Jurlina, T.; Baugh, C.; Tsonevsky, I.; Hewson, T.; Prates, F.; Pappenberger, F.; Prudhomme, C.

    2017-12-01

    During Hurricane Harvey the greater east Texas area was affected by extensive flash flooding. The floods' localised nature meant they were too small for conventional large-scale flood forecasting systems to capture. We are testing the use of two real-time forecast products from the European Centre for Medium-range Weather Forecasts (ECMWF) in combination with local vulnerability information to provide flash flood forecasting tools at the medium range (up to 7 days ahead). The meteorological forecasts are the total precipitation extreme forecast index (EFI), a measure of how the ensemble forecast probability distribution differs from the model-climate distribution for the chosen location, time of year and forecast lead time, and the shift of tails (SOT), which complements the EFI by quantifying how extreme an event could potentially be. Both products give the likelihood of flash-flood-generating precipitation. For Hurricane Harvey, 3-day EFI and SOT products for the period 26th-29th August 2017 were used, generated from the twice-daily, 18 km, 51-ensemble-member ECMWF Integrated Forecast System. After regridding to 1 km resolution, the forecasts were combined with vulnerable-area data to produce a flash flood hazard risk area. The vulnerability data were floodplains (EU Joint Research Centre), road networks (Texas Department of Transport) and urban areas (Census Bureau geographic database), together reflecting the landscape's susceptibility to flash floods. The flash flood hazard risk area forecasts were verified using a traditional approach against observed National Weather Service flash flood reports; a total of 153 flash floods were reported in that period. Forecasts performed best for SOT = 5 (hit ratio = 65%, false alarm ratio = 44%) and EFI = 0.7 (hit ratio = 74%, false alarm ratio = 45%) at 72 h lead time. By including the vulnerable-area data, our verification results improved by 5-15%, demonstrating the value of vulnerability information within natural hazard forecasts. This research shows that flash flooding from Hurricane Harvey was predictable up to 4 days ahead and that filtering the forecasts to vulnerable areas provides more focused guidance to civil protection agencies planning their emergency response.
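    The two verification measures quoted above are simple contingency-table ratios; a minimal sketch with invented counts (the 153 reports are from the abstract, but the split into hits, misses, and false alarms below is hypothetical):

```python
def hit_ratio(hits, misses):
    """Fraction of observed flash floods that fell inside the
    forecast hazard risk area."""
    return hits / (hits + misses)

def false_alarm_ratio(hits, false_alarms):
    """Fraction of forecast hazard areas with no observed flood."""
    return false_alarms / (hits + false_alarms)

# e.g. 113 of the reported floods captured, 40 missed, 92 false alarms:
print(round(hit_ratio(113, 40), 2))           # 0.74
print(round(false_alarm_ratio(113, 92), 2))   # 0.45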

  5. Probability Forecasting Using Monte Carlo Simulation

    NASA Astrophysics Data System (ADS)

    Duncan, M.; Frisbee, J.; Wysack, J.

    2014-09-01

    Space Situational Awareness (SSA) is defined as the knowledge and characterization of all aspects of space. SSA is now a fundamental and critical component of space operations. Increased dependence on our space assets has in turn led to a greater need for accurate, near real-time knowledge of all space activities. With the growth of the orbital debris population, satellite operators are performing collision avoidance maneuvers more frequently. Frequent maneuver execution expends fuel and reduces the operational lifetime of the spacecraft. Thus new, more sophisticated methods of characterizing collision threats must be implemented. The collision probability metric is used operationally to quantify the collision risk. The collision probability is typically calculated days into the future, so that high-risk and potentially high-risk conjunction events are identified early enough to develop an appropriate course of action. As the time horizon to the conjunction event is reduced, the collision probability changes. A significant change in the collision probability will change the satellite mission stakeholder's course of action. Constructing a method for estimating how the collision probability will evolve therefore improves operations by providing satellite operators with a new piece of information, namely an estimate or 'forecast' of how the risk will change as the time to the event decreases. Collision probability forecasting is a predictive process in which the future risk of a conjunction event is estimated. The method utilizes a Monte Carlo simulation that produces a likelihood distribution for a given collision threshold. Using known state and state uncertainty information, the simulation generates a set of possible trajectories for a given space object pair. Each new trajectory produces a unique event geometry at the time of close approach. Given state uncertainty information for both objects, a collision probability value can be computed for every trial. 
This yields a collision probability distribution given known, predicted uncertainty. This paper presents the details of the collision probability forecasting method. We examine various conjunction event scenarios and numerically demonstrate the utility of this approach in typical event scenarios. We explore the utility of a probability-based track scenario simulation that models expected tracking data frequency as the tasking levels are increased. The resulting orbital uncertainty is subsequently used in the forecasting algorithm.
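    The underlying Monte Carlo estimate can be sketched in a minimal 2-D encounter-plane form; the geometry, standard deviations, and hard-body radius below are invented, and the operational method propagates full trajectories and covariances rather than sampling a single plane:

```python
import math
import random

def mc_collision_probability(miss_mean, sigma, hard_body_radius,
                             trials=100_000, seed=42):
    """Monte Carlo collision probability sketch: sample the relative
    position at closest approach from a circular Gaussian centred on
    the nominal miss vector and count samples falling inside the
    combined hard-body radius. All parameters are illustrative."""
    rng = random.Random(seed)
    mx, my = miss_mean
    hits = 0
    for _ in range(trials):
        x = rng.gauss(mx, sigma)
        y = rng.gauss(my, sigma)
        if math.hypot(x, y) <= hard_body_radius:
            hits += 1
    return hits / trials

# Shrinking position uncertainty as the event nears changes the
# probability, which is what the forecast tries to anticipate (metres):
p_far = mc_collision_probability((200.0, 0.0), sigma=2000.0, hard_body_radius=20.0)
p_near = mc_collision_probability((200.0, 0.0), sigma=100.0, hard_body_radius=20.0)
print(p_far, p_near)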

  6. Automated Statistical Forecast Method to 36-48H ahead of Storm Wind and Dangerous Precipitation at the Mediterranean Region

    NASA Astrophysics Data System (ADS)

    Perekhodtseva, E. V.

    2009-09-01

    Development of a successful method for forecasting storm winds, including squalls and tornadoes, and heavy rainfalls, which often result in human and material losses, would allow proper measures to be taken against the destruction of buildings and for the protection of people. A successful forecast well in advance (from 12 to 48 hours) makes it possible to reduce the losses. Prediction of the phenomena involved was a very difficult problem for forecasters until recently. The existing graphical and calculation methods still depend on the subjective decision of an operator. At present there is no hydrodynamic model in Russia for forecasting maximal precipitation and wind velocity V > 25 m/s, so the main tools of objective forecasting are statistical methods that use the dependence of the phenomena involved on a number of atmospheric parameters (predictors). A statistical decision rule for the alternative and probability forecast of these events was obtained in accordance with the concept of "perfect prognosis," using data from objective analysis. For this purpose, separate training samples of cases with and without storm wind and heavy rainfall were automatically assembled, each including the values of forty physically substantiated potential predictors. An empirical statistical method was then applied that involved diagonalization of the mean correlation matrix R of the predictors and extraction of diagonal blocks of strongly correlated predictors. Thus, for these phenomena, the most informative predictors were selected without losing information. The statistical decision rules U(X) for diagnosis and prognosis of the phenomena involved were calculated for the chosen informative vector-predictor. We used the Mahalanobis distance criterion and the Vapnik-Chervonenkis minimum-entropy criterion for predictor selection.
    The successful development of hydrodynamic models for short-term forecasting and the improvement of 36-48 h forecasts of pressure, temperature, and other parameters allowed us to use the prognostic fields of those models to calculate the discriminant functions at the nodes of a 150x150 km grid and the probabilities P of dangerous wind, and thus to obtain fully automated forecasts. To convert to an alternative (yes/no) forecast, the author proposes empirical threshold values specified for this phenomenon and a lead time of 36 hours. According to the Peirce-Obukhov criterion (T), the skill of these automated statistical methods for forecasting squalls and tornadoes 36-48 hours ahead and heavy rainfalls in the warm season over the territory of Italy, Spain, and the Balkan countries is T = 1 - a - b = 0.54-0.78 in the author's experiments. Many examples of very successful forecasts of summer storm wind and heavy rainfalls over Italy and Spain are presented in this report. The same decision rules were also applied to forecasting these phenomena during the cold period of this year, when heavy snowfalls and storm winds were observed very often in Spain and Italy, and these forecasts have also been successful.
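    The Mahalanobis-distance decision rule underlying such discriminant methods can be sketched generically (pooled covariance, two classes; the two-predictor samples below are invented and stand in for the forty real predictors):

```python
import numpy as np

def fit_discriminant(X_event, X_null):
    """Return a classifier assigning a predictor vector x to the
    class (1 = phenomenon present, 0 = absent) whose sample mean is
    closer in Mahalanobis distance under a pooled covariance. A
    generic sketch, not the authors' operational decision rule."""
    mu1, mu0 = X_event.mean(axis=0), X_null.mean(axis=0)
    n1, n0 = len(X_event), len(X_null)
    pooled = ((n1 - 1) * np.cov(X_event.T)
              + (n0 - 1) * np.cov(X_null.T)) / (n1 + n0 - 2)
    inv = np.linalg.inv(pooled)

    def predict(x):
        d1 = (x - mu1) @ inv @ (x - mu1)  # squared Mahalanobis distances
        d0 = (x - mu0) @ inv @ (x - mu0)
        return 1 if d1 < d0 else 0

    return predict

# Invented two-predictor samples for storm / no-storm cases:
storm = np.array([[22.0, 14.0], [25.0, 17.0], [24.0, 15.0], [27.0, 18.0]])
calm = np.array([[8.0, 4.0], [10.0, 6.0], [9.0, 4.5], [11.0, 7.0]])
predict = fit_discriminant(storm, calm)
print(predict(np.array([23.0, 16.0])), predict(np.array([9.5, 5.0])))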

  7. Drought forecasting in Luanhe River basin involving climatic indices

    NASA Astrophysics Data System (ADS)

    Ren, Weinan; Wang, Yixuan; Li, Jianzhu; Feng, Ping; Smith, Ronald J.

    2017-11-01

    Drought is regarded as one of the most severe natural disasters globally. This is especially the case in Tianjin City, Northern China, where drought can affect economic development and people's livelihoods. Drought forecasting, the basis of drought management, is an important mitigation strategy. In this paper, we develop a probabilistic forecasting model, which forecasts transition probabilities from a current Standardized Precipitation Index (SPI) value to a future SPI class based on the conditional distribution of the multivariate normal distribution, allowing two large-scale climatic indices to be incorporated at the same time, and apply the model to 26 rain gauges in the Luanhe River basin in North China. The establishment of the model and the derivation of the SPI rest on the hypothesis that aggregated monthly precipitation is normally distributed. Pearson correlation and Shapiro-Wilk normality tests are used to select an appropriate SPI time scale and large-scale climatic indices. Findings indicated that longer-term aggregated monthly precipitation was, in general, more likely to be normally distributed, and that forecasting models should be applied to each gauge individually rather than to the whole basin. Taking Liying Gauge as an example, we illustrate the impact of the SPI time scale and lead time on transition probabilities. Then, the controlling climatic indices for every gauge are selected by a Pearson correlation test, and the multivariate normality of the SPI, the corresponding climatic indices for the current month, and the SPI 1, 2, and 3 months later is demonstrated using the Shapiro-Wilk normality test. Subsequently, we illustrate the impact of large-scale oceanic-atmospheric circulation patterns on transition probabilities. Finally, we use a score method to evaluate and compare the performance of the three forecasting models and compare them with two traditional models that forecast transition probabilities from a current to a future SPI class. The results show that the three proposed models outperform the two traditional models and that incorporating large-scale climatic indices can improve forecasting accuracy.
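    The conditional-distribution idea can be illustrated in its simplest one-predictor form: since the SPI is standard normal by construction, under a bivariate normal model the future SPI given the current value is again normal, and class probabilities follow from the normal CDF. The lag correlation below is an assumed value, and the paper's full model also conditions on climatic indices:

```python
import math

def phi(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def spi_transition_prob(spi_now, rho, lo, hi):
    """Probability that the SPI k months ahead falls in [lo, hi),
    given the current SPI, under a bivariate normal model with lag-k
    correlation `rho`: SPI_future | SPI_now is normal with mean
    rho * spi_now and variance 1 - rho**2."""
    mu = rho * spi_now
    sd = math.sqrt(1.0 - rho * rho)
    return phi((hi - mu) / sd) - phi((lo - mu) / sd)

# Probability of moderate drought (SPI in [-1.5, -1.0)) next month,
# given SPI = -1.2 now and an assumed lag-1 correlation of 0.8:
print(round(spi_transition_prob(-1.2, 0.8, -1.5, -1.0), 3))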

  8. Probabilistic precipitation nowcasting based on an extrapolation of radar reflectivity and an ensemble approach

    NASA Astrophysics Data System (ADS)

    Sokol, Zbyněk; Mejsnar, Jan; Pop, Lukáš; Bližňák, Vojtěch

    2017-09-01

    A new method for the probabilistic nowcasting of instantaneous rain rates (ENS) based on the ensemble technique and extrapolation along Lagrangian trajectories of current radar reflectivity is presented. Assuming inaccurate forecasts of the trajectories, an ensemble of precipitation forecasts is calculated and used to estimate the probability that rain rates will exceed a given threshold in a given grid point. Although the extrapolation neglects the growth and decay of precipitation, their impact on the probability forecast is taken into account by the calibration of forecasts using the reliability component of the Brier score (BS). ENS forecasts the probability that the rain rates will exceed thresholds of 0.1, 1.0 and 3.0 mm/h in squares of 3 km by 3 km. The lead times were up to 60 min, and the forecast accuracy was measured by the BS. The ENS forecasts were compared with two other methods: the combined method (COM) and the neighbourhood method (NEI). NEI considered the extrapolated values in the square neighbourhood of 5 by 5 grid points of the point of interest as ensemble members, and the COM ensemble comprised the united ensemble members of ENS and NEI. The results showed that the calibration technique significantly reduces the bias of the probability forecasts by including additional uncertainties that correspond to processes neglected during the extrapolation. In addition, the calibration can also be used for finding the limits of maximum lead times for which the forecasting method is useful. We found that ENS is useful for lead times up to 60 min for thresholds of 0.1 and 1 mm/h and approximately 30 to 40 min for a threshold of 3 mm/h. We also found that a reasonable size of the ensemble is 100 members, which provided better scores than ensembles with 10, 25 and 50 members. In terms of the BS, the best results were obtained by ENS and COM, which are comparable. However, ENS is better calibrated and thus preferable.
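    The two core calculations, a member-counting exceedance probability and the Brier score used to measure accuracy, can be sketched as follows (the ensemble rain rates and outcomes below are invented):

```python
def exceedance_probability(member_rates, threshold):
    """Raw (uncalibrated) probability that the rain rate exceeds
    `threshold` at a grid point: the fraction of ensemble members
    above it."""
    return sum(r > threshold for r in member_rates) / len(member_rates)

def brier_score(probs, outcomes):
    """Mean squared difference between forecast probabilities and
    binary outcomes (1 if the threshold was exceeded, else 0)."""
    return sum((p - o) ** 2 for p, o in zip(probs, outcomes)) / len(probs)

members = [0.0, 0.2, 1.4, 3.2, 0.8]        # mm/h at one grid square
p = exceedance_probability(members, 1.0)    # 2 of 5 members exceed 1 mm/h
print(p, brier_score([p, 0.9, 0.1], [0, 1, 0]))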

  9. Exploring What Determines the Use of Forecasts of Varying Time Periods in Guanacaste, Costa Rica

    NASA Astrophysics Data System (ADS)

    Babcock, M.; Wong-Parodi, G.; Grossmann, I.; Small, M. J.

    2016-12-01

    Weather and climate forecasts are promoted as ways to improve water management, especially in the face of changing environmental conditions. However, studies indicate that many stakeholders who might benefit from such information do not use it. This study sought to better understand which personal factors (e.g., trust in forecast sources, perceptions of accuracy) were important determinants of the use of 4-day, 3-month, and 12-month rainfall forecasts by stakeholders in water management-related sectors in the seasonally dry province of Guanacaste, Costa Rica. From August to October 2015, we surveyed 87 stakeholders from a mix of government agencies, local water committees, large farms, tourist businesses, environmental NGOs, and the public. The results of an exploratory factor analysis suggest that trust in "informal" forecast sources (traditional methods, family advice) and in "formal" sources (government, university and private company science) are independent of each other. The results of logistic regression analyses suggest that 1) greater understanding of forecasts is associated with a greater probability of 4-day and 3-month forecast use, but not 12-month forecast use, 2) a greater probability of 3-month forecast use is associated with a lower level of trust in "informal" sources, and 3) feeling less secure about water resources and regularly using many sources of information (specifically formal meetings and reports) are each associated with a greater probability of using 12-month forecasts. While limited by the sample size, and affected by the factoring method and regression model assumptions, these results suggest that while forecasts of all time scales are used to some extent, local decision makers' use of 4-day and 3-month forecasts appears to be more intrinsically motivated (based on their level of understanding and trust), whereas the use of 12-month forecasts seems to be more motivated by a sense of requirement or mandate.

  10. Scenarios for Evolving Seismic Crises: Possible Communication Strategies

    NASA Astrophysics Data System (ADS)

    Steacy, S.

    2015-12-01

    Recent advances in operational earthquake forecasting mean that we are very close to being able to confidently compute changes in earthquake probability as seismic crises develop. For instance, we now have statistical models such as ETAS and STEP, which demonstrate considerable skill in forecasting earthquake rates, and recent advances in Coulomb-based models are also showing much promise. Communicating changes in earthquake probability is likely to be very difficult, however, as the absolute probability of a damaging event is likely to remain quite small despite a significant increase in its relative value. Here, we use a hybrid Coulomb/statistical model to compute probability changes for a series of earthquake scenarios in New Zealand. We discuss the strengths and limitations of the forecasts and suggest a number of possible mechanisms that might be used to communicate results during an actual developing seismic crisis.

  11. Plant calendar pattern based on rainfall forecast and the probability of its success in Deli Serdang regency of Indonesia

    NASA Astrophysics Data System (ADS)

    Darnius, O.; Sitorus, S.

    2018-03-01

    The objective of this study was to determine the planting calendar pattern of three types of crops, namely palawija, rice, and banana, based on rainfall in Deli Serdang Regency. In the first stage, we forecasted rainfall using time series analysis and obtained an appropriate model, ARIMA(1,0,0)(1,1,1)12. Based on the forecast results, we designed a planting calendar pattern for the three types of crops. Furthermore, the probability of success for crops following the planting calendar pattern was calculated using a Markov process, discretizing the continuous rainfall data into three categories, namely Below Normal (BN), Normal (N), and Above Normal (AN), to form the transition probability matrix. Finally, the combination of the rainfall forecasting model and the Markov process was used to determine the planting calendar pattern and the probability of success for the three crops. This research used rainfall data for Deli Serdang Regency taken from the office of BMKG (Meteorology, Climatology and Geophysics Agency), Sampali Medan, Indonesia.
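    The discretize-and-count step of the Markov part can be sketched as follows; the monthly category sequence below is invented, not the BMKG data:

```python
from collections import Counter

CATS = ["BN", "N", "AN"]  # below normal, normal, above normal

def transition_matrix(category_seq):
    """Row-stochastic transition matrix estimated from a sequence of
    monthly rainfall categories, as in a first-order Markov model:
    row a gives P(next month is b | this month is a)."""
    counts = Counter(zip(category_seq, category_seq[1:]))
    matrix = {}
    for a in CATS:
        total = sum(counts[(a, b)] for b in CATS)
        matrix[a] = {b: (counts[(a, b)] / total if total else 0.0)
                     for b in CATS}
    return matrix

# One invented year of monthly rainfall categories:
seq = ["N", "N", "AN", "N", "BN", "N", "AN", "AN", "N", "BN", "BN", "N"]
print(transition_matrix(seq)["N"])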

  12. Forecasting seeing and parameters of long-exposure images by means of ARIMA

    NASA Astrophysics Data System (ADS)

    Kornilov, Matwey V.

    2016-02-01

    Atmospheric turbulence is one of the major limiting factors for ground-based astronomical observations. In this paper, the problem of short-term forecasting of seeing is discussed. Real data obtained from atmospheric optical turbulence (OT) measurements above Mount Shatdzhatmaz in 2007-2013 have been analysed. Linear auto-regressive integrated moving average (ARIMA) models are used for the forecasting. A new procedure for forecasting the image characteristics of direct astronomical observations (central image intensity, full width at half-maximum, radius encircling 80% of the energy) is proposed. The probability density functions of the forecasts of these quantities are 1.5-2 times narrower than the respective unconditional probability density functions. Overall, this study found that the described technique can adequately describe the temporal stochastic variations of OT power.
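    The idea of conditioning a short-term forecast on recent measurements can be illustrated with a toy AR(1) fit; the paper fits full ARIMA models to real OT data, whereas the series below is synthetic and the least-squares fit is only a sketch:

```python
import numpy as np

def ar1_forecast(series, steps):
    """Fit an AR(1) model x_t = c + a * x_{t-1} + e_t by least
    squares and forecast `steps` values ahead from the last
    observation. A toy stand-in for the ARIMA models of the paper."""
    x = np.asarray(series, dtype=float)
    A = np.column_stack([np.ones(len(x) - 1), x[:-1]])
    c, a = np.linalg.lstsq(A, x[1:], rcond=None)[0]
    out, last = [], x[-1]
    for _ in range(steps):
        last = c + a * last
        out.append(last)
    return out

# Synthetic "seeing-like" series generated by x_{t+1} = 1 + 0.5 x_t:
print(ar1_forecast([0.0, 1.0, 1.5, 1.75, 1.875, 1.9375], 2))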

  13. Uncertainty Forecasts Improve Weather-Related Decisions and Attenuate the Effects of Forecast Error

    ERIC Educational Resources Information Center

    Joslyn, Susan L.; LeClerc, Jared E.

    2012-01-01

    Although uncertainty is inherent in weather forecasts, explicit numeric uncertainty estimates are rarely included in public forecasts for fear that they will be misunderstood. Of particular concern are situations in which precautionary action is required at low probabilities, often the case with severe events. At present, a categorical weather…

  14. Intermediate-term forecasting of aftershocks from an early aftershock sequence: Bayesian and ensemble forecasting approaches

    NASA Astrophysics Data System (ADS)

    Omi, Takahiro; Ogata, Yosihiko; Hirata, Yoshito; Aihara, Kazuyuki

    2015-04-01

    Because aftershock occurrences can cause significant seismic risks for a considerable time after the main shock, prospective forecasting of the intermediate-term aftershock activity as soon as possible is important. The epidemic-type aftershock sequence (ETAS) model with the maximum likelihood estimate effectively reproduces general aftershock activity including secondary or higher-order aftershocks and can be employed for the forecasting. However, because we cannot always expect the accurate parameter estimation from incomplete early aftershock data where many events are missing, such forecasting using only a single estimated parameter set (plug-in forecasting) can frequently perform poorly. Therefore, we here propose Bayesian forecasting that combines the forecasts by the ETAS model with various probable parameter sets given the data. By conducting forecasting tests of 1 month period aftershocks based on the first 1 day data after the main shock as an example of the early intermediate-term forecasting, we show that the Bayesian forecasting performs better than the plug-in forecasting on average in terms of the log-likelihood score. Furthermore, to improve forecasting of large aftershocks, we apply a nonparametric (NP) model using magnitude data during the learning period and compare its forecasting performance with that of the Gutenberg-Richter (G-R) formula. We show that the NP forecast performs better than the G-R formula in some cases but worse in other cases. Therefore, robust forecasting can be obtained by employing an ensemble forecast that combines the two complementary forecasts. Our proposed method is useful for a stable unbiased intermediate-term assessment of aftershock probabilities.
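    The contrast between plug-in and Bayesian forecasting reduces to how candidate parameter sets are combined: the plug-in forecast keeps only the best-likelihood set, while the Bayesian forecast averages over all of them with posterior weights. A schematic sketch with invented forecast rates and log-likelihoods, not the authors' ETAS implementation:

```python
import math

def bayesian_average(forecasts, log_likelihoods):
    """Average the forecasts of several candidate parameter sets,
    weighting each by its posterior probability given the early
    data (flat prior, so weights come straight from the
    log-likelihoods). Subtracting the max log-likelihood avoids
    exp() underflow."""
    m = max(log_likelihoods)
    w = [math.exp(ll - m) for ll in log_likelihoods]
    s = sum(w)
    return sum(wi / s * f for wi, f in zip(w, forecasts))

# Three hypothetical ETAS parameter sets predicting 1-month counts:
plug_in = 120.0  # forecast from the best-likelihood set alone
averaged = bayesian_average([120.0, 80.0, 100.0], [-500.0, -501.0, -500.5])
print(plug_in, averaged)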

  15. WPC Maximum Heat Index Forecasts

    Science.gov Websites

    Interactive page presenting WPC maximum heat index and probability forecast maps, including forecasts for the western US (daily map navigation links omitted).

  16. COP21 climate negotiators' responses to climate model forecasts

    NASA Astrophysics Data System (ADS)

    Bosetti, Valentina; Weber, Elke; Berger, Loïc; Budescu, David V.; Liu, Ning; Tavoni, Massimo

    2017-02-01

    Policymakers involved in climate change negotiations are key users of climate science. It is therefore vital to understand how to communicate scientific information most effectively to this group. We tested how a unique sample of policymakers and negotiators at the Paris COP21 conference update their beliefs on year 2100 global mean temperature increases in response to a statistical summary of climate models' forecasts. We randomized the way information was provided across participants using three different formats similar to those used in Intergovernmental Panel on Climate Change reports. In spite of having received all available relevant scientific information, policymakers adopted such information very conservatively, assigning it less weight than their own prior beliefs. However, providing individual model estimates in addition to the statistical range was more effective in mitigating such inertia. The experiment was repeated with a population of European MBA students who, despite starting from similar priors, reported conditional probabilities closer to the provided models' forecasts than policymakers. There was also no effect of presentation format in the MBA sample. These results highlight the importance of testing visualization tools directly on the population of interest.

  17. Towards real-time eruption forecasting in the Auckland Volcanic Field: application of BET_EF during the New Zealand National Disaster Exercise `Ruaumoko'

    NASA Astrophysics Data System (ADS)

    Lindsay, Jan; Marzocchi, Warner; Jolly, Gill; Constantinescu, Robert; Selva, Jacopo; Sandri, Laura

    2010-03-01

    The Auckland Volcanic Field (AVF) is a young basaltic field that lies beneath the urban area of Auckland, New Zealand’s largest city. Over the past 250,000 years the AVF has produced at least 49 basaltic centers; the last eruption was only 600 years ago. In recognition of the high risk associated with a possible future eruption in Auckland, the New Zealand government ran Exercise Ruaumoko in March 2008, a test of New Zealand’s nation-wide preparedness for responding to a major disaster resulting from a volcanic eruption in Auckland City. The exercise scenario was developed in secret, and covered the period of precursory activity up until the eruption. During Exercise Ruaumoko we adapted a recently developed statistical code for eruption forecasting, namely BET_EF (Bayesian Event Tree for Eruption Forecasting), to independently track the unrest evolution and to forecast the most likely onset time, location and style of the initial phase of the simulated eruption. The code was set up before the start of the exercise by entering reliable information on the past history of the AVF as well as the monitoring signals expected in the event of magmatic unrest and an impending eruption. The average probabilities calculated by BET_EF during Exercise Ruaumoko corresponded well to the probabilities subjectively (and independently) estimated by the advising scientists (differences of a few percentage points), and provided a sound forecast of the timing (before the event, the eruption probability reached 90%) and location of the eruption. This application of BET_EF to a volcanic field that has experienced no historical activity and for which otherwise limited prior information is available shows its versatility and potential usefulness as a tool to aid decision-making for a wide range of volcano types.
Our near real-time application of BET_EF during Exercise Ruaumoko highlighted its potential to clarify and possibly optimize decision-making procedures in a future AVF eruption crisis, and as a rational starting point for discussions in a scientific advisory group. It also stimulated valuable scientific discussion around how a future AVF eruption might progress, and highlighted areas of future volcanological research that would reduce epistemic uncertainties through the development of better input models.
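    The event-tree logic behind a code like BET_EF can be illustrated with a toy computation: the probability of the terminal outcome is the product of the conditional probabilities along the branch, optionally distributed over candidate vent locations. The node values and vent weights below are hypothetical, not values from the exercise:

```python
# Illustrative event-tree node probabilities (hypothetical values):
p_unrest = 0.95        # node 1: unrest occurs in the time window
p_magmatic = 0.80      # node 2: unrest is magmatic, given unrest
p_eruption = 0.70      # node 3: eruption occurs, given magmatic unrest

# The event-tree estimate multiplies conditional probabilities along the branch.
p_total = p_unrest * p_magmatic * p_eruption

# Spatial node: distribute the eruption probability over candidate vent
# cells using prior weights that sum to one.
vent_weights = [0.5, 0.3, 0.2]
p_by_vent = [p_total * w for w in vent_weights]
```

    Monitoring data would update the node probabilities as unrest evolves; the multiplication along the branch stays the same.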

  18. First Cloud-to-Ground Lightning Timing Study

    NASA Technical Reports Server (NTRS)

    Huddleston, Lisa L.

    2013-01-01

    NASA's LSP, GSDO and other programs use the probability of cloud-to-ground (CG) lightning occurrence issued by the 45th Weather Squadron (45 WS) in their daily and weekly lightning probability forecasts. These organizations use this information when planning potentially hazardous outdoor activities, such as working with fuels, rolling a vehicle to a launch pad, or whenever personnel will work outside and would be at risk from lightning. These organizations would benefit greatly if the 45 WS could provide more accurate timing of the first CG lightning strike of the day. The Applied Meteorology Unit (AMU) has made significant improvements in forecasting the probability of lightning for the day, but forecasting the time of the first CG lightning with confidence has remained a challenge. To address this issue, the 45 WS requested the AMU to determine if flow regimes, wind speed categories, or a combination of the two could be used to forecast the timing of the first strike of the day in the Kennedy Space Center (KSC)/Cape Canaveral Air Force Station (CCAFS) lightning warning circles. The data were stratified by various sea breeze flow regimes and speed categories in the surface to 5,000-ft layer. The surface to 5,000-ft layer was selected since that is the layer the 45 WS uses to predict the behavior of sea breeze fronts, which are the dominant influence on the occurrence of first lightning in Florida during the warm season. Due to small data sample sizes after stratification, the AMU could not determine a statistical relationship between flow regimes or speed categories and the time of the first CG strike. As expected, although the amount and timing of lightning activity varies by time of day based on the flow regimes and speed categories, there are extended tails of low lightning activity, making it difficult to specify times when the threat of the first lightning flash can be avoided.
However, the AMU developed a graphical user interface with input from the 45 WS that allows forecasters to visualize the climatological frequencies of the timing of the first lightning strike. This tool should contribute directly to the 45 WS goal of improving lightning timing capability for its NASA, US Air Force and commercial customers.

  19. Evaluation of model-based seasonal streamflow and water allocation forecasts for the Elqui Valley, Chile

    NASA Astrophysics Data System (ADS)

    Delorit, Justin; Cristian Gonzalez Ortuya, Edmundo; Block, Paul

    2017-09-01

    In many semi-arid regions, multisectoral demands often stress available water supplies. Such is the case in the Elqui River valley of northern Chile, which draws on a limited-capacity reservoir to allocate 25 000 water rights. Delayed infrastructure investment forces water managers to address demand-based allocation strategies, particularly in dry years, which are realized through reductions in the volume associated with each water right. Skillful season-ahead streamflow forecasts have the potential to inform managers with an indication of future conditions to guide reservoir allocations. This work evaluates season-ahead statistical prediction models of October-January (growing season) streamflow at multiple lead times associated with manager and user decision points, and links predictions with a reservoir allocation tool. Skillful results (streamflow forecasts outperform climatology) are produced for short lead times (1 September: ranked probability skill score (RPSS) of 0.31, categorical hit skill score of 61 %). At longer lead times, climatological skill exceeds forecast skill due to fewer observations of precipitation. However, coupling the 1 September statistical forecast model with a sea surface temperature phase and strength statistical model allows for equally skillful categorical streamflow forecasts to be produced for a 1 May lead, triggered for 60 % of years (1950-2015), suggesting forecasts need not be strictly deterministic to be useful for water rights holders. An early (1 May) categorical indication of expected conditions is reinforced with a deterministic forecast (1 September) as more observations of local variables become available. The reservoir allocation model is skillful at the 1 September lead (categorical hit skill score of 53 %); skill improves to 79 % when categorical allocation prediction certainty exceeds 80 %. This result implies that allocation efficiency may improve when forecasts are integrated into reservoir decision frameworks. 
The methods applied here advance the understanding of the mechanisms and timing responsible for moisture transport to the Elqui Valley and provide a unique application of streamflow forecasting in the prediction of water right allocations.
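    The ranked probability skill score (RPSS) reported above compares the cumulative squared error of a categorical forecast against climatology; values above zero mean the forecast beats climatology. A minimal sketch with hypothetical tercile probabilities (not the paper's data):

```python
def rps(probs, obs_cat):
    # Ranked Probability Score over ordered categories (e.g. dry/normal/wet):
    # sum of squared differences between cumulative forecast and cumulative
    # observation (a step function at the observed category).
    cum_f, cum_o, score = 0.0, 0.0, 0.0
    for k, p in enumerate(probs):
        cum_f += p
        cum_o += 1.0 if k == obs_cat else 0.0
        score += (cum_f - cum_o) ** 2
    return score

# Hypothetical forecast putting 60% on the observed "wet" tercile;
# climatology is uniform over the three terciles.
forecast = [0.1, 0.3, 0.6]
climatology = [1 / 3, 1 / 3, 1 / 3]
obs = 2  # wet tercile observed

rpss = 1.0 - rps(forecast, obs) / rps(climatology, obs)
```

    An RPSS of 0 means no improvement over climatology; 1 is a perfect categorical forecast.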

  20. A probabilistic verification score for contours demonstrated with idealized ice-edge forecasts

    NASA Astrophysics Data System (ADS)

    Goessling, Helge; Jung, Thomas

    2017-04-01

    We introduce a probabilistic verification score for ensemble-based forecasts of contours: the Spatial Probability Score (SPS). Defined as the spatial integral of local (Half) Brier Scores, the SPS can be considered the spatial analog of the Continuous Ranked Probability Score (CRPS). Applying the SPS to idealized seasonal ensemble forecasts of the Arctic sea-ice edge in a global coupled climate model, we demonstrate that the SPS responds properly to ensemble size, bias, and spread. When applied to individual forecasts or ensemble means (or quantiles), the SPS is reduced to the 'volume' of mismatch, in case of the ice edge corresponding to the Integrated Ice Edge Error (IIEE).
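    The SPS definition above can be sketched numerically: area-weighted local (half) Brier scores summed over a grid, reducing to the mismatch 'volume' (the IIEE for the ice edge) when the forecast is deterministic. The tiny grid and values below are hypothetical:

```python
import numpy as np

# Hypothetical grid cells with area weights; p is the ensemble fraction
# forecasting "inside the contour" (e.g. ice-covered) per cell, o is the
# observed 0/1 indicator.
area = np.array([1.0, 1.0, 1.0, 1.0])   # cell areas (arbitrary units)
p = np.array([1.0, 0.8, 0.3, 0.0])      # ensemble probability per cell
o = np.array([1.0, 1.0, 0.0, 0.0])      # observed contour indicator

# Spatial Probability Score: area-weighted sum of local Brier scores.
sps = float(np.sum(area * (p - o) ** 2))

# With a single deterministic forecast (p in {0, 1}) the same formula
# reduces to the mismatch area, i.e. the IIEE for an ice edge: here the
# forecast misses the second ice-covered cell.
p_det = np.array([1.0, 0.0, 0.0, 0.0])
iiee = float(np.sum(area * (p_det - o) ** 2))
```

    On a real sphere the area weights would come from the grid-cell geometry; the reduction to IIEE is exact because the squared error of a 0/1 forecast is 1 exactly on mismatched cells.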

  1. Probability hazard map for future vent opening at Etna volcano (Sicily, Italy).

    NASA Astrophysics Data System (ADS)

    Brancato, Alfonso; Tusa, Giuseppina; Coltelli, Mauro; Proietti, Cristina

    2014-05-01

    Mount Etna is a composite stratovolcano located along the Ionian coast of eastern Sicily. The frequent occurrence of flank eruptions (at intervals of years, mostly concentrated along the NE, S and W rift zones) leads to a high volcanic hazard that, combined with intense urbanization, poses a high volcanic risk. A long-term volcanic hazard assessment, mainly based on the past behaviour of the Etna volcano, is the basic tool for the evaluation of this risk. A reliable forecast of where the next eruption will occur is therefore needed. Computer-assisted analysis and probabilistic evaluations provide the relative map, thus allowing identification of the areas prone to the highest hazard. On these grounds, the use of a code such as BET_EF (Bayesian Event Tree_Eruption Forecasting) showed that a suitable analysis can be performed (Selva et al., 2012). In the analysis we are performing, a total of 6886 point-vents referring to the last 4.0 ka of Etna flank activity, spread over an area of 744 km2 (divided into N=2976 square cells with 500 m sides), allowed us to estimate a probability density function by applying a Gaussian kernel. The probability values represent a complete set of mutually exclusive outcomes whose sum is normalized to one over the investigated area; thus, the basic assumptions of a Dirichlet distribution (the prior distribution set in the BET_EF code (Marzocchi et al., 2004, 2008)) still hold. One fundamental parameter is the equivalent number of data, which depicts our confidence in the best-guess probability. The BET_EF code also works with a likelihood function. This is modelled by a Multinomial distribution, with parameters representing the number of vents in each cell and the total number of past data (i.e. the 6886 point-vents). Given the grid of N cells, the final posterior distribution is evaluated by multiplying the a priori Dirichlet probability distribution with the past data in each cell through the likelihood.
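    The kernel-density step described above can be sketched as follows, with a handful of hypothetical vent coordinates and grid cells standing in for the 6886 point-vents and 2976 cells; the resulting probabilities are normalized to sum to one over the grid, as the Dirichlet prior requires:

```python
import math

# Hypothetical past vent locations and grid-cell centres (km coordinates).
vents = [(0.0, 0.0), (0.5, 0.2), (3.0, 3.0)]
cells = [(0.0, 0.0), (1.0, 0.0), (3.0, 3.0), (5.0, 5.0)]
h = 1.0  # Gaussian kernel bandwidth (km)

def kernel_sum(cx, cy):
    # Sum of Gaussian kernels centred on past vents, evaluated at a cell.
    return sum(math.exp(-((cx - vx) ** 2 + (cy - vy) ** 2) / (2 * h * h))
               for vx, vy in vents)

raw = [kernel_sum(x, y) for x, y in cells]
total = sum(raw)
# Normalise so the cell probabilities sum to one over the grid.
p = [r / total for r in raw]
```

    Cells near clusters of past vents receive higher vent-opening probability; the bandwidth h controls how far that influence spreads.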
    The probability hazard map shows a tendency for probability to concentrate along the NE and S rifts, as well as the Valle del Bove, increasing the difference in probability between these areas and the rest of the volcanic edifice. It is worth noting that a higher significance is still evident along the W rift, although not comparable with those of the above-mentioned areas. References Marzocchi W., Sandri L., Gasparini P., Newhall C. and Boschi E.; 2004: Quantifying probabilities of volcanic events: The example of volcanic hazard at Mount Vesuvius, J. Geophys. Res., 109, B11201, doi:10.1029/2004JB003155. Marzocchi W., Sandri L. and Selva J.; 2008: BET_EF: a probabilistic tool for long- and short-term eruption forecasting, Bull. Volcanol., 70, 623-632, doi:10.1007/s00445-007-0157-y. Selva J., Orsi G., Di Vito M.A., Marzocchi W. and Sandri L.; 2012: Probability hazard map for future vent opening at the Campi Flegrei caldera, Italy, Bull. Volcanol., 74, 497-510, doi:10.1007/s00445-011-0528-2.

  2. The Uniform California Earthquake Rupture Forecast, Version 2 (UCERF 2)

    USGS Publications Warehouse

    ,

    2008-01-01

    California's 35 million people live among some of the most active earthquake faults in the United States. Public safety demands credible assessments of the earthquake hazard to maintain appropriate building codes for safe construction and earthquake insurance for loss protection. Seismic hazard analysis begins with an earthquake rupture forecast: a model of probabilities that earthquakes of specified magnitudes, locations, and faulting types will occur during a specified time interval. This report describes a new earthquake rupture forecast for California developed by the 2007 Working Group on California Earthquake Probabilities (WGCEP 2007).

  3. Retrospective validation of renewal-based, medium-term earthquake forecasts

    NASA Astrophysics Data System (ADS)

    Rotondi, R.

    2013-10-01

    In this paper, some methods for scoring the performances of an earthquake forecasting probability model are applied retrospectively for different goals. The time-dependent occurrence probabilities of a renewal process are tested against earthquakes of Mw ≥ 5.3 recorded in Italy according to decades of the past century. An aim was to check the capability of the model to reproduce the data by which the model was calibrated. The scoring procedures used can be distinguished on the basis of the requirement (or absence) of a reference model and of probability thresholds. Overall, a rank-based score, information gain, gambling scores, indices used in binary predictions and their loss functions are considered. The definition of various probability thresholds as percentages of the hazard functions allows proposals of the values associated with the best forecasting performance as alarm level in procedures for seismic risk mitigation. Some improvements are then made to the input data concerning the completeness of the historical catalogue and the consistency of the composite seismogenic sources with the hypotheses of the probability model. Another purpose of this study was thus to obtain hints on what is the most influential factor and on the suitability of adopting the consequent changes of the data sets. This is achieved by repeating the estimation procedure of the occurrence probabilities and the retrospective validation of the forecasts obtained under the new assumptions. According to the rank-based score, the completeness appears to be the most influential factor, while there are no clear indications of the usefulness of the decomposition of some composite sources, although in some cases, it has led to improvements of the forecast.

  4. Near-term probabilistic forecast of significant wildfire events for the Western United States

    Treesearch

    Haiganoush K. Preisler; Karin L. Riley; Crystal S. Stonesifer; Dave E. Calkin; Matt Jolly

    2016-01-01

    Fire danger and potential for large fires in the United States (US) is currently indicated via several forecasted qualitative indices. However, landscape-level quantitative forecasts of the probability of a large fire are currently lacking. In this study, we present a framework for forecasting large fire occurrence - an extreme value event - and evaluating...

  5. Evaluation of probabilistic forecasts with the scoringRules package

    NASA Astrophysics Data System (ADS)

    Jordan, Alexander; Krüger, Fabian; Lerch, Sebastian

    2017-04-01

    Over the last decades probabilistic forecasts in the form of predictive distributions have become popular in many scientific disciplines. With the proliferation of probabilistic models arises the need for decision-theoretically principled tools to evaluate the appropriateness of models and forecasts in a generalized way in order to better understand sources of prediction errors and to improve the models. Proper scoring rules are functions S(F,y) which evaluate the accuracy of a forecast distribution F, given that an outcome y was observed. In coherence with decision-theoretical principles they allow comparison of alternative models, a crucial ability given the variety of theories, data sources and statistical specifications that are available in many situations. This contribution presents the software package scoringRules for the statistical programming language R, which provides functions to compute popular scoring rules such as the continuous ranked probability score for a variety of distributions F that come up in applied work. For univariate variables, two main classes are parametric distributions like normal, t, or gamma distributions, and distributions that are not known analytically, but are indirectly described through a sample of simulation draws. For example, ensemble weather forecasts take this form. The scoringRules package aims to be a convenient dictionary-like reference for computing scoring rules. We offer state-of-the-art implementations of several known (but not routinely applied) formulas, and implement closed-form expressions that were previously unavailable. Whenever more than one implementation variant exists, we offer statistically principled default choices. Recent developments include the addition of scoring rules to evaluate multivariate forecast distributions. The use of the scoringRules package is illustrated in an example on post-processing ensemble forecasts of temperature.
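    As an illustration of the closed-form expressions such packages provide, the CRPS of a Gaussian forecast has a well-known analytic form; this Python sketch mirrors the kind of computation scoringRules performs in R:

```python
import math

def crps_normal(mu, sigma, y):
    # Closed-form CRPS for a Gaussian forecast N(mu, sigma^2):
    # CRPS = sigma * (z * (2*Phi(z) - 1) + 2*phi(z) - 1/sqrt(pi)),
    # with z = (y - mu) / sigma; lower is better.
    z = (y - mu) / sigma
    pdf = math.exp(-0.5 * z * z) / math.sqrt(2 * math.pi)
    cdf = 0.5 * (1 + math.erf(z / math.sqrt(2)))
    return sigma * (z * (2 * cdf - 1) + 2 * pdf - 1 / math.sqrt(math.pi))

# A sharper forecast centred on the outcome scores lower (better)
# than a wider one with the same centre.
sharp = crps_normal(0.0, 1.0, 0.0)
wide = crps_normal(0.0, 2.0, 0.0)
```

    Because the CRPS is a proper scoring rule, the sharp-beats-wide comparison holds only when both forecasts are centred correctly; a sharp but biased forecast can score worse than a wide honest one.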

  6. THE AGWA – KINEROS2 SUITE OF MODELING TOOLS

    USDA-ARS?s Scientific Manuscript database

    A suite of modeling tools ranging from the event-based KINEROS2 flash-flood forecasting tool to the continuous (K2-O2) KINEROS-OPUS biogeochemistry tool. The KINEROS2 flash-flood forecasting tool being tested with the National Weather Service (NWS) is described. The NWS version assimilates Dig...

  7. Weighing costs and losses: A decision making game using probabilistic forecasts

    NASA Astrophysics Data System (ADS)

    Werner, Micha; Ramos, Maria-Helena; Wetterhall, Frederik; Cranston, Michael; van Andel, Schalk-Jan; Pappenberger, Florian; Verkade, Jan

    2017-04-01

    Probabilistic forecasts are increasingly recognised as an effective and reliable tool to communicate uncertainties. The economic value of probabilistic forecasts has been demonstrated by several authors, showing the benefit of using probabilistic forecasts over deterministic forecasts in several sectors, including flood and drought warning, hydropower, and agriculture. Probabilistic forecasting is also central to the emerging concept of risk-based decision making, and underlies emerging paradigms such as impact-based forecasting. Although the economic value of probabilistic forecasts is easily demonstrated in academic works, its evaluation in practice is more complex. The practical use of probabilistic forecasts requires decision makers to weigh the cost of an appropriate response to a probabilistic warning against the projected loss that would occur if the forecast event becomes reality. In this paper, we present the results of a simple game that aims to explore how decision makers are influenced by the costs required for taking a response and the potential losses they face in case the forecast flood event occurs. Participants play the role of one of three possible different shop owners. Each type of shop has losses of quite different magnitude, should a flood event occur. The shop owners are presented with several forecasts, each with a probability of a flood event occurring, which would inundate their shop and lead to those losses. In response, they have to decide if they want to do nothing, raise temporary defences, or relocate their inventory. Each action comes at a cost, and the different shop owners therefore have quite different cost/loss ratios. The game was played on four occasions. Players were attendees of the ensemble hydro-meteorological forecasting session of the 2016 EGU Assembly, professionals participating at two other conferences related to hydrometeorology, and a group of students.
All audiences were familiar with the principles of forecasting and water-related risks, and one of the audiences comprised a group of experts in probabilistic forecasting. Results show that the different shop owners do take the costs of taking action and the potential losses into account in their decisions. Shop owners with a low cost/loss ratio were found to be more inclined to take actions based on the forecasts, though the absolute value of the losses also increased the willingness to take action. Little differentiation was found between the different groups of players.

  8. Effects of track and threat information on judgments of hurricane strike probability.

    PubMed

    Wu, Hao-Che; Lindell, Michael K; Prater, Carla S; Samuelson, Charles D

    2014-06-01

    Although evacuation is one of the best strategies for protecting citizens from hurricane threat, the ways that local elected officials use hurricane data in deciding whether to issue hurricane evacuation orders is not well understood. To begin to address this problem, we examined the effects of hurricane track and intensity information in a laboratory setting where participants judged the probability that hypothetical hurricanes with a constant bearing (i.e., straight line forecast track) would make landfall in each of eight 45 degree sectors around the Gulf of Mexico. The results from 162 participants in a student sample showed that the judged strike probability distributions over the eight sectors within each scenario were, unsurprisingly, unimodal and centered on the sector toward which the forecast track pointed. More significantly, although strike probability judgments for the sector in the direction of the forecast track were generally higher than the corresponding judgments for the other sectors, the latter were not zero. Most significantly, there were no appreciable differences in the patterns of strike probability judgments for hurricane tracks represented by a forecast track only, an uncertainty cone only, or a forecast track with an uncertainty cone, a result consistent with a recent survey of coastal residents threatened by Hurricane Charley. The study results suggest that people are able to correctly process basic information about hurricane tracks but they do make some errors. More research is needed to understand the sources of these errors and to identify better methods of displaying uncertainty about hurricane parameters. © 2013 Society for Risk Analysis.

  9. Stochastic demographic forecasting.

    PubMed

    Lee, R D

    1992-11-01

    "This paper describes a particular approach to stochastic population forecasting, which is implemented for the U.S.A. through 2065. Statistical time series methods are combined with demographic models to produce plausible long run forecasts of vital rates, with probability distributions. The resulting mortality forecasts imply gains in future life expectancy that are roughly twice as large as those forecast by the Office of the Social Security Actuary.... Resulting stochastic forecasts of the elderly population, elderly dependency ratios, and payroll tax rates for health, education and pensions are presented." excerpt

  10. Monthly streamflow forecasting based on hidden Markov model and Gaussian Mixture Regression

    NASA Astrophysics Data System (ADS)

    Liu, Yongqi; Ye, Lei; Qin, Hui; Hong, Xiaofeng; Ye, Jiajun; Yin, Xingli

    2018-06-01

    Reliable streamflow forecasts can be highly valuable for water resources planning and management. In this study, we combined a hidden Markov model (HMM) and Gaussian Mixture Regression (GMR) for probabilistic monthly streamflow forecasting. The HMM is initialized using a kernelized K-medoids clustering method, and the Baum-Welch algorithm is then executed to learn the model parameters. GMR derives a conditional probability distribution for the predictand given covariate information, including the antecedent flow at a local station and two surrounding stations. The performance of HMM-GMR was verified based on the mean square error and continuous ranked probability score skill scores. The reliability of the forecasts was assessed by examining the uniformity of the probability integral transform values. The results show that HMM-GMR obtained reasonably high skill scores and the uncertainty spread was appropriate. Different HMM states were assumed to be different climate conditions, which would lead to different types of observed values. We demonstrated that the HMM-GMR approach can handle multimodal and heteroscedastic data.
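    The probability integral transform (PIT) reliability check mentioned above evaluates the forecast CDF at each observed outcome; for a calibrated probabilistic forecast the resulting values are uniform on [0, 1]. A minimal sketch with hypothetical Gaussian forecasts standing in for the HMM-GMR predictive distributions:

```python
import math

def pit_normal(mu, sigma, y):
    # Probability integral transform: forecast CDF evaluated at the outcome.
    return 0.5 * (1 + math.erf((y - mu) / (sigma * math.sqrt(2))))

# Hypothetical (mean, std, observed) triples for four forecast months.
pairs = [(0.0, 1.0, 0.1), (1.0, 0.5, 1.2), (2.0, 2.0, -0.5), (0.5, 1.0, 0.4)]
pits = [pit_normal(mu, s, y) for mu, s, y in pairs]
```

    In practice one inspects the histogram of many such PIT values: a U-shape indicates an under-dispersed (overconfident) forecast, a hump shape an over-dispersed one, and a flat histogram an appropriate uncertainty spread.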

  11. Probabilistic forecasting for extreme NO2 pollution episodes.

    PubMed

    Aznarte, José L

    2017-10-01

    In this study, we investigate the convenience of quantile regression to predict extreme concentrations of NO2. Contrary to the usual point-forecasting, where a single value is forecast for each horizon, probabilistic forecasting through quantile regression allows for the prediction of the full probability distribution, which in turn allows building models specifically fit for the tails of this distribution. Using data from the city of Madrid, including NO2 concentrations as well as meteorological measures, we build models that predict extreme NO2 concentrations, outperforming point-forecasting alternatives, and we prove that the predictions are accurate, reliable and sharp. Besides, we study the relative importance of the independent variables involved, and show how the important variables for the median quantile differ from those important for the upper quantiles. Furthermore, we present a method to compute the probability of exceedance of thresholds, which is a simple and comprehensible manner to present probabilistic forecasts maximizing their usefulness. Copyright © 2017 Elsevier Ltd. All rights reserved.
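    One way to turn predicted quantiles into an exceedance probability (not necessarily the paper's exact method) is to interpolate a predictive CDF between the quantile estimates and read off 1 - F(threshold). The quantile values below are hypothetical:

```python
# Hypothetical predicted quantiles: level -> NO2 concentration (ug/m3).
quantiles = {0.5: 60.0, 0.9: 110.0, 0.95: 135.0, 0.99: 180.0}

def prob_exceed(threshold, quantiles):
    # Piecewise-linear interpolation of the predictive CDF between the
    # predicted quantiles, then P(X > t) = 1 - F(t).
    levels = sorted(quantiles)
    xs = [quantiles[q] for q in levels]
    if threshold <= xs[0]:
        return 1.0 - levels[0]
    if threshold >= xs[-1]:
        return 1.0 - levels[-1]
    for (q0, x0), (q1, x1) in zip(zip(levels, xs), zip(levels[1:], xs[1:])):
        if x0 <= threshold <= x1:
            f = q0 + (q1 - q0) * (threshold - x0) / (x1 - x0)
            return 1.0 - f

p_alert = prob_exceed(120.0, quantiles)  # chance of exceeding 120 ug/m3
```

    Reporting "8% chance of exceeding the alert threshold" is often more actionable for air-quality managers than a single point forecast of the concentration.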

  12. Near Real Time Tools for ISS Plasma Science and Engineering Applications

    NASA Astrophysics Data System (ADS)

    Minow, J. I.; Willis, E. M.; Parker, L. N.; Shim, J.; Kuznetsova, M. M.; Pulkkinen, A. A.

    2013-12-01

    The International Space Station (ISS) program utilizes a plasma environment forecast for estimating electrical charging hazards for crews during extravehicular activity (EVA). The process uses ionospheric electron density and temperature measurements from the ISS Floating Potential Measurement Unit (FPMU) instrument suite with the assumption that the plasma conditions will remain constant for one to fourteen days, with a low probability of a space weather event that would significantly change the environment before an EVA. FPMU data is typically not available during EVAs; therefore, the most recent FPMU data available for characterizing the state of the ionosphere during an EVA is typically from a day or two before the start of the EVA or after the EVA has been completed. In addition to EVA support, information on ionospheric plasma densities is often needed to support ISS science payloads and anomaly investigations during periods when the FPMU is not operating. This presentation describes the application of space weather tools developed by MSFC using data from near real time satellite radio occultation and ground based ionosonde measurements of ionospheric electron density and a first-principles ionosphere model providing electron density and temperature run in a real time mode by GSFC. These applications are used to characterize the space environment during EVA periods when FPMU data is not available, monitor for large changes in ionospheric density that could render the ionosphere forecast and plasma hazard assessment invalid, and validate the assumption of 'persistence of conditions' used in deriving the hazard forecast. In addition, the tools are used to provide space environment input to science payloads on ISS and anomaly investigations during periods when the FPMU is not operating.

  13. Quantifying the Usefulness of Ensemble-Based Precipitation Forecasts with Respect to Water Use and Yield during a Field Trial

    NASA Astrophysics Data System (ADS)

    Christ, E.; Webster, P. J.; Collins, G.; Byrd, S.

    2014-12-01

    Recent droughts and the continuing water wars between the states of Georgia, Alabama and Florida have made agricultural producers more aware of the importance of managing their irrigation systems more efficiently. Many southeastern states are beginning to consider laws that will require monitoring and regulation of water used for irrigation. Recently, Georgia suspended issuing irrigation permits in some areas of the southwestern portion of the state to try to limit the amount of water being used in irrigation. However, even in southern Georgia, which receives on average between 23 and 33 inches of rain during the growing season, irrigation can significantly impact crop yields. In fact, studies have shown that when fields do not receive rainfall at the most critical stages in the life of cotton, yields for irrigated fields can be up to twice those of non-irrigated cotton. This leads to the motivation for this study, which is to produce a forecast tool that will enable producers to make more efficient irrigation management decisions. We will use the ECMWF (European Centre for Medium-Range Weather Forecasts) VarEPS (Ensemble Prediction System) model precipitation forecasts for the grid points included in the 1° x 1° lat/lon square surrounding the point of interest. We will then apply q-to-q (quantile-to-quantile) bias corrections to the forecasts. Once we have applied the bias corrections, we will use the checkbook method of irrigation scheduling to determine the probability of receiving the required amount of rainfall for each week of the growing season. These forecasts will be used during a field trial conducted at the CM Stripling Irrigation Research Park in Camilla, Georgia. This research will compare differences in yield and water use among the standard checkbook method of irrigation, which uses no precipitation forecast knowledge, the weather.com forecast, a dry land plot, and the ensemble-based forecasts mentioned above.
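    The q-to-q (quantile-to-quantile) bias correction mentioned above maps each forecast value to the observation-climatology value at the same rank. A nearest-rank sketch with hypothetical weekly rainfall climatologies (the study would use much longer sorted samples):

```python
import bisect

# Hypothetical sorted historical samples of weekly rainfall (mm) at the site.
fcst_clim = [0, 2, 5, 8, 12, 18, 25, 35, 50, 80]   # model climatology
obs_clim  = [0, 1, 4, 9, 15, 22, 30, 42, 60, 95]   # gauge climatology

def q_to_q(x):
    # Quantile mapping: find x's rank in the forecast climatology and
    # return the observed value at the same rank (nearest-rank sketch;
    # a production version would interpolate between ranks).
    i = bisect.bisect_left(fcst_clim, x)
    i = min(i, len(obs_clim) - 1)
    return obs_clim[i]

corrected = q_to_q(18)  # raw forecast of 18 mm mapped onto gauge climatology
```

    Applying the same mapping to every ensemble member preserves the ensemble spread in rank space while removing the model's systematic wet or dry bias.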

  14. Optimising seasonal streamflow forecast lead time for operational decision making in Australia

    NASA Astrophysics Data System (ADS)

    Schepen, Andrew; Zhao, Tongtiegang; Wang, Q. J.; Zhou, Senlin; Feikema, Paul

    2016-10-01

    Statistical seasonal forecasts of 3-month streamflow totals are released in Australia by the Bureau of Meteorology and updated on a monthly basis. The forecasts are often released in the second week of the forecast period, due to the onerous forecast production process. The current service relies on models built using data for complete calendar months, meaning the forecast production process cannot begin until the first day of the forecast period. Somehow, the bureau needs to transition to a service that provides forecasts before the beginning of the forecast period; timelier forecast release will become critical as sub-seasonal (monthly) forecasts are developed. Increasing the forecast lead time to one month ahead is not considered a viable option for Australian catchments that typically lack any predictability associated with snowmelt. The bureau's forecasts are built around Bayesian joint probability models that have antecedent streamflow, rainfall and climate indices as predictors. In this study, we adapt the modelling approach so that forecasts have any number of days of lead time. Daily streamflow and sea surface temperatures are used to develop predictors based on 28-day sliding windows. Forecasts are produced for 23 forecast locations with 0-14- and 21-day lead time. The forecasts are assessed in terms of continuous ranked probability score (CRPS) skill score and reliability metrics. CRPS skill scores, on average, reduce monotonically with increase in days of lead time, although both positive and negative differences are observed. Considering only skilful forecast locations, CRPS skill scores at 7-day lead time are reduced on average by 4 percentage points, with differences largely contained within +5 to -15 percentage points. 
A flexible forecasting system that allows for any number of days of lead time could benefit Australian seasonal streamflow forecast users by allowing more time for forecasts to be disseminated, comprehended and made use of prior to the commencement of a forecast season. The system would allow for forecasts to be updated if necessary.
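    The CRPS skill scores quoted above compare each forecast against a reference; a minimal sketch of the sample-based ensemble CRPS and the percentage-point skill score, with synthetic streamflow values standing in for the bureau's data:

```python
# Illustrative sketch (not the bureau's code): sample-based ensemble CRPS
# and the CRPS skill score in percent, as used to compare lead times.
import numpy as np

def crps_ensemble(members, obs):
    """Sample-based CRPS for one ensemble forecast and a scalar observation."""
    m = np.asarray(members, dtype=float)
    term1 = np.mean(np.abs(m - obs))                        # ensemble vs. observation
    term2 = 0.5 * np.mean(np.abs(m[:, None] - m[None, :]))  # ensemble spread term
    return term1 - term2

def crps_skill_score(fc_crps, ref_crps):
    """Skill relative to a reference forecast (e.g. climatology), in percent."""
    return 100.0 * (1.0 - fc_crps / ref_crps)

rng = np.random.default_rng(0)
obs = 120.0                               # observed 3-month streamflow total
fc = rng.normal(115.0, 20.0, size=200)    # hypothetical forecast ensemble
clim = rng.normal(100.0, 45.0, size=200)  # hypothetical climatological reference
skill = crps_skill_score(crps_ensemble(fc, obs), crps_ensemble(clim, obs))
```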

  15. Hydrodynamic-Statistical Forecast Method To 36-48 h Ahead Of Storm Wind And Tornadoes Over The Territory Of Europe And Siberia

    NASA Astrophysics Data System (ADS)

    Perekhodtseva, Elvira V.

    2010-05-01

    Development of a successful method for forecasting storm winds, including squalls and tornadoes, which often result in human and material losses, would allow proper measures to be taken to protect people and prevent the destruction of buildings. A successful forecast well in advance (from 12 to 48 hours) makes it possible to reduce the losses. Until recently, prediction of these phenomena was a very difficult problem for forecasters. The existing graphical and calculational methods still depend on the subjective decision of an operator. At present there is no hydrodynamic model in Russia for forecasting maximum wind velocities V > 25 m/s, hence the main tools of objective forecasting are statistical methods that use the dependence of the phenomena on a number of atmospheric parameters (predictors). A statistical decision rule for the alternative and probability forecast of these events was obtained in accordance with the concept of "perfect prognosis", using data from objective analysis. For this purpose, separate training samples of cases with and without storm winds and rainfall were assembled automatically, each including the values of forty physically substantiated potential predictors. An empirical statistical method was then applied, involving diagonalization of the mean correlation matrix R of the predictors and extraction of diagonal blocks of strongly correlated predictors. In this way the most informative predictors for these phenomena were selected without losing information. The statistical decision rules U(X) for diagnosis and prognosis of the phenomena were calculated for the chosen informative vector-predictors, using the Mahalanobis distance criterion and the Vapnik-Chervonenkis minimum-entropy criterion for predictor selection.
    Successful development of hydrodynamic models for short-term forecasting, and improvement of 36-48 h forecasts of pressure, temperature and other parameters, allowed us to use the prognostic fields of those models to calculate the discriminant functions at the nodes of a 75x75 km grid and the probabilities P of dangerous wind, and thus to obtain fully automated forecasts. To apply the alternative forecast to the European part of Russia and to Europe, the author proposes empirical threshold values specified for this phenomenon and a lead time of 36 hours. According to the Peirce-Obukhov criterion (T), the skill of this hydrodynamic-statistical method for forecasting storm winds and tornadoes 36-48 hours ahead in the warm season over the European part of Russia and Siberia is T = 1 - a - b = 0.54-0.78, based on independent and author experiments during 2004-2009. Many examples of very successful forecasts for the territory of Europe and Russia are presented in this report. The same decision rules were also applied to forecasting these phenomena during the cold period of 2009-2010. In the first month of 2010, many cases of storm wind with heavy snowfall over the territory of France, Italy and Germany were observed and successfully forecast.

  16. Forecasting eruptions of Mauna Loa Volcano, Hawaii

    NASA Astrophysics Data System (ADS)

    Decker, Robert W.; Klein, Fred W.; Okamura, Arnold T.; Okubo, Paul G.

    Past eruption patterns and various kinds of precursors are the two basic ingredients of eruption forecasts. The 39 historical eruptions of Mauna Loa from 1832 to 1984 have intervals between the beginning of one eruption and the beginning of the next as short as 104 days and as long as 9,165 days. These recurrence times roughly fit a Poisson distribution with a mean recurrence time of 1,459 days, yielding a probability of 22% (P = 0.22) for an eruption of Mauna Loa during any given year. The long recurrence times since 1950, however, suggest that the eruptive process is not purely random, and that the current probability of an eruption during the next year may be as low as 6%. Seismicity beneath Mauna Loa increased for about two years prior to the 1975 and 1984 eruptions. Inflation of the summit area took place between eruptions, with the highest rates occurring for a year or two before and after the 1975 and 1984 eruptions. Volcanic tremor beneath Mauna Loa began 51 minutes prior to the 1975 eruption and 115 minutes prior to the 1984 eruption. Eruption forecasts were published in 1975, 1976, and 1983. The 1975 and 1983 forecasts, though vaguely worded, were qualitatively correct regarding the timing of the next eruption. The 1976 forecast was more quantitative; it was wrong on timing but accurate in forecasting the location of the 1984 eruption. This paper urges that future forecasts be specific so they can be evaluated quantitatively.
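    The quoted probabilities follow directly from the Poisson assumption; a short sketch of the arithmetic (the ~5,900-day repose used for the 6% figure is an illustrative back-calculation, not a value from the paper):

```python
# Illustrative arithmetic behind the quoted probabilities: for a Poisson
# process with mean repose time T days, P(at least one eruption in a
# 365-day window) = 1 - exp(-365/T).
import math

def annual_eruption_probability(mean_recurrence_days, window_days=365.0):
    """Probability of at least one event in the window, Poisson assumption."""
    return 1.0 - math.exp(-window_days / mean_recurrence_days)

p_full_record = annual_eruption_probability(1459.0)  # ~0.22, as in the text
# A ~6% annual probability corresponds to a mean repose of roughly
# 5,900 days -- an illustrative back-calculation, not a value from the paper:
p_recent = annual_eruption_probability(5900.0)       # ~0.06
```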

  17. Generalization of information-based concepts in forecast verification

    NASA Astrophysics Data System (ADS)

    Tödter, J.; Ahrens, B.

    2012-04-01

    This work deals with information-theoretical methods in probabilistic forecast verification. Recent findings concerning the Ignorance Score are briefly reviewed, then the generalization to continuous forecasts is shown. For ensemble forecasts, the presented measures can be calculated exactly. The Brier Score (BS) and its generalizations to the multi-categorical Ranked Probability Score (RPS) and to the Continuous Ranked Probability Score (CRPS) are the prominent verification measures for probabilistic forecasts. In particular, their decompositions into measures quantifying the reliability, resolution and uncertainty of the forecasts are attractive. Information theory sets up the natural framework for forecast verification. Recently, it has been shown that the BS is a second-order approximation of the information-based Ignorance Score (IGN), which also contains easily interpretable components and can likewise be generalized to a ranked version (RIGN). Here, the IGN, its generalizations and decompositions are systematically discussed in analogy to the variants of the BS. Additionally, a Continuous Ranked IGN (CRIGN) is introduced in analogy to the CRPS. The applicability and usefulness of the conceptually appealing CRIGN are illustrated, together with an algorithm to evaluate its reliability, resolution, and uncertainty components for ensemble-generated forecasts. This is also directly applicable to the more traditional CRPS.
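    A minimal illustration of the two base scores discussed, the Brier Score and the Ignorance Score (in bits), for binary probability forecasts; the values are synthetic, not from the study:

```python
# Illustrative sketch of the two base scores for binary probability
# forecasts: the Brier Score and the Ignorance Score (in bits). Values
# are synthetic, not from the study.
import numpy as np

def brier_score(p, o):
    """Mean squared difference between forecast probability and outcome."""
    p, o = np.asarray(p, float), np.asarray(o, float)
    return np.mean((p - o) ** 2)

def ignorance_score(p, o):
    """Mean negative log2 probability assigned to the observed outcome."""
    p, o = np.asarray(p, float), np.asarray(o, float)
    return -np.mean(o * np.log2(p) + (1 - o) * np.log2(1 - p))

p = np.array([0.9, 0.2, 0.7, 0.4])  # forecast probabilities
o = np.array([1, 0, 1, 1])          # observed occurrences (1 = event)
bs, ign = brier_score(p, o), ignorance_score(p, o)
```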

  18. Weather forecasting with open source software

    NASA Astrophysics Data System (ADS)

    Rautenhaus, Marc; Dörnbrack, Andreas

    2013-04-01

    To forecast the weather situation during aircraft-based atmospheric field campaigns, we employ a tool chain of existing and self-developed open source software tools and open standards. Of particular value are the Python programming language with its extension libraries NumPy, SciPy, PyQt4, Matplotlib and the basemap toolkit, the NetCDF standard with the Climate and Forecast (CF) Metadata conventions, and the Open Geospatial Consortium Web Map Service standard. These open source libraries and open standards helped to implement the "Mission Support System", a Web Map Service based tool to support weather forecasting and flight planning during field campaigns. The tool has been implemented in Python and has also been released as open source (Rautenhaus et al., Geosci. Model Dev., 5, 55-71, 2012). In this presentation we discuss the usage of free and open source software for weather forecasting in the context of research flight planning, and highlight how the field campaign work benefits from using open source tools and open standards.

  19. NWS Operational Requirements for Ensemble-Based Hydrologic Forecasts

    NASA Astrophysics Data System (ADS)

    Hartman, R. K.

    2008-12-01

    Ensemble-based hydrologic forecasts have been developed and issued by National Weather Service (NWS) staff at River Forecast Centers (RFCs) for many years. Used principally for long-range water supply forecasts, only the uncertainty associated with weather and climate have been traditionally considered. As technology and societal expectations of resource managers increase, the use and desire for risk-based decision support tools has also increased. These tools require forecast information that includes reliable uncertainty estimates across all time and space domains. The development of reliable uncertainty estimates associated with hydrologic forecasts is being actively pursued within the United States and internationally. This presentation will describe the challenges, components, and requirements for operational hydrologic ensemble-based forecasts from the perspective of a NOAA/NWS River Forecast Center.

  20. Improved Rainfall Estimates and Predictions for 21st Century Drought Early Warning

    NASA Technical Reports Server (NTRS)

    Funk, Chris; Peterson, Pete; Shukla, Shraddhanand; Husak, Gregory; Landsfeld, Marty; Hoell, Andrew; Pedreros, Diego; Roberts, J. B.; Robertson, F. R.; Tadesse, Tsegae

    2015-01-01

    As temperatures increase, the onset and severity of droughts are likely to intensify. Improved tools for understanding, monitoring and predicting droughts will be a key component of 21st century climate adaptation. The best drought monitoring systems will bring together accurate precipitation estimates with skillful climate and weather forecasts. Such systems combine the predictive power inherent in the current land surface state with the predictive power inherent in low frequency ocean-atmosphere dynamics. To this end, researchers at the Climate Hazards Group (CHG), in collaboration with partners at the USGS and NASA, have developed i) a long (1981-present) quasi-global (50degS-50degN, 180degW-180degE) high resolution (0.05deg) homogeneous precipitation data set designed specifically for drought monitoring, ii) tools for understanding and predicting East African boreal spring droughts, and iii) an integrated land surface modeling (LSM) system that combines rainfall observations and predictions to provide effective drought early warning. This talk briefly describes these three components. Component 1: CHIRPS The Climate Hazards group InfraRed Precipitation with Stations (CHIRPS) blends station data with geostationary satellite observations to provide global near real time daily, pentadal and monthly precipitation estimates. We describe the CHIRPS algorithm and compare CHIRPS and other estimates to validation data. The CHIRPS is shown to have high correlation, low systematic errors (bias) and low mean absolute errors. Component 2: Hybrid statistical-dynamic forecast strategies East African droughts have increased in frequency, but become more predictable as Indo-Pacific SST gradients and Walker circulation disruptions intensify. We describe hybrid statistical-dynamic forecast strategies that are far superior to the raw output of coupled forecast models.
    These forecasts can be translated into probabilities that can be used to generate bootstrapped ensembles describing future climate conditions. Component 3: Assimilation using LSMs CHIRPS rainfall observations (component 1) and bootstrapped forecast ensembles (component 2) can be combined using LSMs to predict soil moisture deficits. We evaluate the skill of such a system in East Africa, and demonstrate results for 2013.

  1. Monthly forecasting of agricultural pests in Switzerland

    NASA Astrophysics Data System (ADS)

    Hirschi, M.; Dubrovsky, M.; Spirig, C.; Samietz, J.; Calanca, P.; Weigel, A. P.; Fischer, A. M.; Rotach, M. W.

    2012-04-01

    Given the repercussions of pests and diseases on agricultural production, detailed forecasting tools have been developed to simulate the degree of infestation depending on actual weather conditions. The life cycle of pests is most successfully predicted if the micro-climate of the immediate environment (habitat) of the causative organisms can be simulated. Sub-seasonal pest forecasts therefore require weather information for the relevant habitats and the appropriate time scale. The pest forecasting system SOPRA (www.sopra.info) currently in operation in Switzerland relies on such detailed weather information, using hourly weather observations up to the day the forecast is issued, but only a climatology for the forecasting period. Here, we aim at improving the skill of SOPRA forecasts by transforming the weekly information provided by ECMWF monthly forecasts (MOFCs) into hourly weather series as required for the prediction of upcoming life phases of the codling moth, the major insect pest in apple orchards worldwide. Due to the probabilistic nature of operational monthly forecasts and the limited spatial and temporal resolution, their information needs to be post-processed for use in a pest model. In this study, we developed a statistical downscaling approach for MOFCs that includes the following steps: (i) application of a stochastic weather generator to generate a large pool of daily weather series consistent with the climate at a specific location, (ii) a subsequent re-sampling of weather series from this pool to optimally represent the evolution of the weekly MOFC anomalies, and (iii) a final extension to hourly weather series suitable for the pest forecasting model. Results show a clear improvement in the forecast skill of occurrences of upcoming codling moth life phases when incorporating MOFCs as compared to the operational pest forecasting system. 
    This is true both in terms of root mean squared errors and of the continuous ranked probability scores of the probabilistic forecasts vs. the mean absolute errors of the deterministic system. Also, the application of the climate conserving recalibration (CCR, Weigel et al. 2009) technique allows for successful correction of the under-confidence in the forecasted occurrences of codling moth life phases. Reference: Weigel, A. P.; Liniger, M. A. & Appenzeller, C. (2009). Seasonal Ensemble Forecasts: Are Recalibrated Single Models Better than Multimodels? Mon. Wea. Rev., 137, 1460-1479.
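    Re-sampling step (ii) above can be sketched as a nearest-match search over the weather-generator pool; the pool, anomaly values and RMS distance measure below are assumptions for illustration, not the SOPRA implementation:

```python
# Illustrative sketch of re-sampling step (ii): from a weather-generator
# pool of 28-day daily series, pick the member whose weekly means best
# track the weekly MOFC anomalies.
import numpy as np

def weekly_means(daily):
    """Collapse a daily series (length divisible by 7) to weekly means."""
    return np.asarray(daily, float).reshape(-1, 7).mean(axis=1)

def resample_best_match(pool, mofc_weekly_anom):
    """Return the pool member minimizing RMS distance to the weekly anomalies."""
    d = [np.sqrt(np.mean((weekly_means(s) - mofc_weekly_anom) ** 2)) for s in pool]
    return pool[int(np.argmin(d))]

rng = np.random.default_rng(1)
pool = [rng.normal(0.0, 2.0, 28) for _ in range(500)]  # candidate daily series
target = np.array([1.0, 0.5, -0.5, 0.0])               # four weekly MOFC anomalies
best = resample_best_match(pool, target)
```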

  2. Uncertainty quantification and reliability assessment in operational oil spill forecast modeling system.

    PubMed

    Hou, Xianlong; Hodges, Ben R; Feng, Dongyu; Liu, Qixiao

    2017-03-15

    As oil transport increases in the Texas bays, greater risks of ship collisions, and consequently oil spill accidents, become a challenge. To minimize ecological damage and optimize rapid response, emergency managers need to be informed of how fast and where oil will spread as soon as possible after a spill. State-of-the-art operational oil spill forecast modeling systems have brought oil spill response to a new stage. However, uncertainty in the predicted data inputs often compromises the reliability of the forecast result, leading to misdirection in contingency planning. Understanding forecast uncertainty and reliability thus becomes significant. In this paper, Monte Carlo simulation is implemented to provide parameters to generate forecast probability maps. The oil spill forecast uncertainty is then quantified by comparing the forecast probability map with the associated hindcast simulation. A HyosPy-based simple statistical model is developed to assess the reliability of an oil spill forecast in terms of belief degree. The technologies developed in this study create a prototype for uncertainty and reliability analysis in numerical oil spill forecast modeling systems, helping emergency managers improve the capability of real-time operational oil spill response and impact assessment. Copyright © 2017 Elsevier Ltd. All rights reserved.
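    The Monte Carlo step can be sketched as follows; the drift model, grid and parameter values are invented stand-ins for illustration, not the HyosPy configuration:

```python
# Illustrative Monte Carlo sketch of a forecast probability map: perturb
# uncertain drift inputs, advect particles, and count the fraction of
# runs that oil each grid cell.
import numpy as np

def probability_map(n_runs=200, n_steps=48, grid=20, seed=0):
    rng = np.random.default_rng(seed)
    hits = np.zeros((grid, grid))
    for _ in range(n_runs):
        x = y = grid / 2.0                 # release point (grid centre)
        u = rng.normal(0.10, 0.05)         # uncertain mean drift, cells/step
        v = rng.normal(0.05, 0.05)
        visited = np.zeros((grid, grid), dtype=bool)
        visited[int(x), int(y)] = True     # release cell is always oiled
        for _ in range(n_steps):
            x += u + rng.normal(0.0, 0.1)  # drift + turbulent diffusion
            y += v + rng.normal(0.0, 0.1)
            i = int(np.clip(x, 0, grid - 1))
            j = int(np.clip(y, 0, grid - 1))
            visited[i, j] = True
        hits += visited
    return hits / n_runs                   # per-cell probability of oiling

pmap = probability_map()
```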

  3. Pre-seismic anomalies from optical satellite observations: a review

    NASA Astrophysics Data System (ADS)

    Jiao, Zhong-Hu; Zhao, Jing; Shan, Xinjian

    2018-04-01

    Detecting various anomalies using optical satellite data prior to strong earthquakes is key to understanding and forecasting earthquake activities because of its recognition of thermal-radiation-related phenomena in seismic preparation phases. Data from satellite observations serve as a powerful tool in monitoring earthquake preparation areas at a global scale and in a nearly real-time manner. Over the past several decades, many new different data sources have been utilized in this field, and progressive anomaly detection approaches have been developed. This paper reviews the progress and development of pre-seismic anomaly detection technology in this decade. First, precursor parameters, including parameters from the top of the atmosphere, in the atmosphere, and on the Earth's surface, are stated and discussed. Second, different anomaly detection methods, which are used to extract anomalous signals that probably indicate future seismic events, are presented. Finally, certain critical problems with the current research are highlighted, and new developing trends and perspectives for future work are discussed. The development of Earth observation satellites and anomaly detection algorithms can enrich available information sources, provide advanced tools for multilevel earthquake monitoring, and improve short- and medium-term forecasting, which play a large and growing role in pre-seismic anomaly detection research.

  4. Short-term Forecasting Tools for Agricultural Nutrient Management.

    PubMed

    Easton, Zachary M; Kleinman, Peter J A; Buda, Anthony R; Goering, Dustin; Emberston, Nichole; Reed, Seann; Drohan, Patrick J; Walter, M Todd; Guinan, Pat; Lory, John A; Sommerlot, Andrew R; Sharpley, Andrew

    2017-11-01

    The advent of real-time, short-term farm management tools is motivated by the need to protect water quality above and beyond the general guidance offered by existing nutrient management plans. Advances in high-performance computing and hydrologic or climate modeling have enabled rapid dissemination of real-time information that can assist landowners and conservation personnel with short-term management planning. This paper reviews short-term decision support tools for agriculture that are under various stages of development and implementation in the United States: (i) Wisconsin's Runoff Risk Advisory Forecast (RRAF) System, (ii) New York's Hydrologically Sensitive Area Prediction Tool, (iii) Virginia's Saturated Area Forecast Model, (iv) Pennsylvania's Fertilizer Forecaster, (v) Washington's Application Risk Management (ARM) System, and (vi) Missouri's Design Storm Notification System. Although these decision support tools differ in their underlying model structure, the resolution at which they are applied, and the hydroclimates to which they are relevant, all provide forecasts (range 24-120 h) of runoff risk or soil moisture saturation derived from National Weather Service Forecast models. Although this review highlights the need for further development of robust and well-supported short-term nutrient management tools, their potential for adoption and ultimate utility requires an understanding of the appropriate context of application, the strategic and operational needs of managers, access to weather forecasts, scales of application (e.g., regional vs. field level), data requirements, and outreach communication structure. Copyright © by the American Society of Agronomy, Crop Science Society of America, and Soil Science Society of America, Inc.

  5. A new scoring method for evaluating the performance of earthquake forecasts and predictions

    NASA Astrophysics Data System (ADS)

    Zhuang, J.

    2009-12-01

    This study presents a new method, namely the gambling score, for scoring the performance of earthquake forecasts or predictions. Unlike most other scoring procedures, which require a regular forecast scheme and treat each earthquake equally regardless of magnitude, this new scoring method compensates for the risk that the forecaster has taken. A fair scoring scheme should reward success in a way that is compatible with the risk taken. Suppose that we have a reference model, usually the Poisson model in ordinary cases or the Omori-Utsu formula in the case of forecasting aftershocks, which gives a probability p0 that at least 1 event occurs in a given space-time-magnitude window. The forecaster, like a gambler, starts with a certain number of reputation points and bets 1 reputation point on ``Yes'' or ``No'' according to his forecast, or bets nothing if he makes an NA-prediction. If the forecaster bets 1 reputation point on ``Yes'' and loses, the number of his reputation points is reduced by 1; if his forecast is successful, he is rewarded (1-p0)/p0 reputation points. The quantity (1-p0)/p0 is the return (reward/bet) ratio for bets on ``Yes''. In this way, if the reference model is correct, the expected return that he gains from this bet is 0. This rule also applies to probability forecasts. Suppose that p is the occurrence probability of an earthquake given by the forecaster. We can regard the forecaster as splitting 1 reputation point by betting p on ``Yes'' and 1-p on ``No''. In this way, the forecaster's expected pay-off based on the reference model is still 0. From the viewpoints of both the reference model and the forecaster, the rule for reward and punishment is fair. This method is also extended to the continuous case of point-process models, where the reputation points bet by the forecaster become a continuous mass on the space-time-magnitude range of interest.
We also calculate the upper bound of the gambling score when the true model is a renewal process, the stress release model or the ETAS model and when the reference model is the Poisson model.
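    The betting rules above can be written down directly; a sketch (function and variable names are ours, not the paper's):

```python
# Sketch of the gambling-score bookkeeping described above: a 1-point
# bet on "Yes" returns (1 - p0)/p0 when the event occurs; a probability
# forecast p splits the point, p on "Yes" and 1 - p on "No".
def gambling_return(p0, bet_yes, event_occurred):
    """Net reputation change for a 1-point alternative (Yes/No) bet."""
    if bet_yes:
        return (1.0 - p0) / p0 if event_occurred else -1.0
    return p0 / (1.0 - p0) if not event_occurred else -1.0

def probabilistic_return(p0, p, event_occurred):
    """Net change for a probability forecast: p on 'Yes', 1 - p on 'No'."""
    return (p * gambling_return(p0, True, event_occurred)
            + (1.0 - p) * gambling_return(p0, False, event_occurred))

# If the reference model is correct, the expected return is zero:
p0 = 0.1
expected = (p0 * gambling_return(p0, True, True)
            + (1.0 - p0) * gambling_return(p0, True, False))
```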

  6. UQ for Decision Making: How (at least five) Kinds of Probability Might Come Into Play

    NASA Astrophysics Data System (ADS)

    Smith, L. A.

    2013-12-01

    In 1959, I. J. Good published the discussion "Kinds of Probability" in Science. Good identified (at least) five kinds. The need for (at least) a sixth kind of probability when quantifying uncertainty in the context of climate science is discussed. This discussion brings out the differences between weather-like and climate-like forecasting tasks, with a focus on the effective use both of science and of modelling in support of decision making. Good also introduced the idea of a "Dynamic probability": a probability one expects to change without any additional empirical evidence, the probabilities assigned by a chess-playing program when it is only halfway through its analysis being an example. This case is contrasted with that of "Mature probabilities", where a forecast algorithm (or model) has converged on its asymptotic probabilities, and the question hinges on whether or not those probabilities are expected to change significantly before the event in question occurs, even in the absence of new empirical evidence. If so, then how might one report and deploy such immature probabilities rationally in scientific support of decision-making? Mature probability is suggested as a useful sixth kind; although Good would doubtless argue that we can get by with just one, effective communication with decision makers may be enhanced by speaking as if the others existed. This again highlights the distinction between weather-like and climate-like contexts. In the former one has access to a relevant climatology (a relevant, arguably informative distribution prior to any model simulations); in the latter that information is not available, although one can fall back on the scientific basis upon which the model itself rests and estimate the probability that the model output is in fact misinformative.
    This subjective "probability of a big surprise" is one way to communicate the probability that model-based information holds in practice, that is, the probability that the information on which the model-based probability is conditioned holds. It is argued that no model-based climate-like probability forecast is complete without a quantitative estimate of its own irrelevance, and that the clear identification of model-based probability forecasts as mature or immature is a critical element for maintaining the credibility of science-based decision support, and can shape uncertainty quantification more widely.

  7. Improved Anvil Forecasting

    NASA Technical Reports Server (NTRS)

    Lambert, Winifred C.

    2000-01-01

    This report describes the outcome of Phase 1 of the AMU's Improved Anvil Forecasting task. Forecasters in the 45th Weather Squadron and the Spaceflight Meteorology Group have found that anvil forecasting is a difficult task when predicting LCC and FR violations. The purpose of this task is to determine the technical feasibility of creating an anvil-forecasting tool. Work on this study was separated into three steps: literature search, forecaster discussions, and determination of technical feasibility. The literature search revealed no existing anvil-forecasting techniques. However, there appears to be growing interest in anvils in recent years. If this interest continues to grow, more information will be available to aid in developing a reliable anvil-forecasting tool. The forecaster discussion step revealed an array of methods on how better forecasting techniques could be developed. The forecasters have ideas based on sound meteorological principles and personal experience in forecasting and analyzing anvils. Based on the information gathered in the discussions with the forecasters, the conclusion of this report is that it is technically feasible at this time to develop an anvil forecasting technique that will significantly contribute to the confidence in anvil forecasts.

  8. A robust method to forecast volcanic ash clouds

    USGS Publications Warehouse

    Denlinger, Roger P.; Pavolonis, Mike; Sieglaff, Justin

    2012-01-01

    Ash clouds emanating from volcanic eruption columns often form trails of ash extending thousands of kilometers through the Earth's atmosphere, disrupting air traffic and posing a significant hazard to air travel. To mitigate such hazards, the community charged with reducing flight risk must accurately assess risk of ash ingestion for any flight path and provide robust forecasts of volcanic ash dispersal. In response to this need, a number of different transport models have been developed for this purpose and applied to recent eruptions, providing a means to assess uncertainty in forecasts. Here we provide a framework for optimal forecasts and their uncertainties given any model and any observational data. This involves random sampling of the probability distributions of input (source) parameters to a transport model and iteratively running the model with different inputs, each time assessing the predictions that the model makes about ash dispersal by direct comparison with satellite data. The results of these comparisons are embodied in a likelihood function whose maximum corresponds to the minimum misfit between model output and observations. Bayes theorem is then used to determine a normalized posterior probability distribution and from that a forecast of future uncertainty in ash dispersal. The nature of ash clouds in heterogeneous wind fields creates a strong maximum likelihood estimate in which most of the probability is localized to narrow ranges of model source parameters. This property is used here to accelerate probability assessment, producing a method to rapidly generate a prediction of future ash concentrations and their distribution based upon assimilation of satellite data as well as model and data uncertainties. 
Applying this method to the recent eruption of Eyjafjallajökull in Iceland, we show that the 3 and 6 h forecasts of ash cloud location probability encompassed the location of observed satellite-determined ash cloud loads, providing an efficient means to assess all of the hazards associated with these ash clouds.
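    The sampling-and-weighting loop described above can be sketched with a toy stand-in for the transport model; the one-parameter "model" and all values are illustrative, not the authors' configuration:

```python
# Sketch of the sample-and-weight loop: draw source parameters from a
# prior, score each run by misfit to observed satellite ash loads, and
# normalize via Bayes' theorem. A toy one-parameter "transport model"
# stands in for the real dispersal model.
import numpy as np

def transport_model(source_rate):
    """Toy stand-in: predicted ash load proportional to the source rate."""
    return 2.0 * source_rate

rng = np.random.default_rng(0)
observed_load = 8.0
sigma = 1.0                                  # observational uncertainty

samples = rng.uniform(0.0, 10.0, size=5000)  # prior draws of the source rate
log_like = -0.5 * ((transport_model(samples) - observed_load) / sigma) ** 2
weights = np.exp(log_like - log_like.max())  # subtract max for stability
posterior = weights / weights.sum()          # normalized posterior weights

best = samples[np.argmax(posterior)]         # maximum-likelihood source rate (~4)
```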

  9. Open Source Tools for Seismicity Analysis

    NASA Astrophysics Data System (ADS)

    Powers, P.

    2010-12-01

    The spatio-temporal analysis of seismicity plays an important role in earthquake forecasting and is integral to research on earthquake interactions and triggering. For instance, the third version of the Uniform California Earthquake Rupture Forecast (UCERF), currently under development, will use Epidemic Type Aftershock Sequences (ETAS) as a model for earthquake triggering. UCERF will be a "living" model and therefore requires robust, tested, and well-documented ETAS algorithms to ensure transparency and reproducibility. Likewise, as earthquake aftershock sequences unfold, real-time access to high quality hypocenter data makes it possible to monitor the temporal variability of statistical properties such as the parameters of the Omori law and the Gutenberg-Richter b-value. Such statistical properties are valuable as they provide a measure of how much a particular sequence deviates from expected behavior and can be used when assigning probabilities of aftershock occurrence. To address these demands and provide public access to standard methods employed in statistical seismology, we present well-documented, open-source JavaScript and Java software libraries for the on- and off-line analysis of seismicity. The JavaScript classes facilitate web-based asynchronous access to earthquake catalog data and provide a framework for in-browser display, analysis, and manipulation of catalog statistics; implementations of this framework will be made available on the USGS Earthquake Hazards website. The Java classes, in addition to providing tools for seismicity analysis, provide tools for modeling seismicity and generating synthetic catalogs. These tools are extensible and will be released as part of the open-source OpenSHA Commons library.
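    Two of the statistical properties mentioned, the Gutenberg-Richter b-value and the Omori-law rate, can be computed as follows; a sketch using a synthetic catalog and the standard Aki/Utsu maximum-likelihood estimator, not code from the libraries described:

```python
# Sketch of two statistics named above, computed on a synthetic catalog:
# the Gutenberg-Richter b-value via the Aki/Utsu maximum-likelihood
# estimator, and the modified Omori-law aftershock rate.
import numpy as np

def b_value(mags, m_c, dm=0.0):
    """Aki/Utsu maximum-likelihood b-value above completeness magnitude m_c."""
    m = np.asarray(mags, float)
    m = m[m >= m_c]
    return np.log10(np.e) / (m.mean() - (m_c - dm / 2.0))

def omori_rate(t, K, c, p):
    """Modified Omori law: aftershock rate K / (t + c)**p at time t."""
    return K / (t + c) ** p

rng = np.random.default_rng(0)
# Synthetic Gutenberg-Richter catalog with true b = 1.0 above m_c = 2.0:
mags = 2.0 + rng.exponential(scale=np.log10(np.e), size=20000)
b = b_value(mags, m_c=2.0)
```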

  10. A study on the predictability of the transition day from the dry to the rainy season over South Korea

    NASA Astrophysics Data System (ADS)

    Lee, Sang-Min; Nam, Ji-Eun; Choi, Hee-Wook; Ha, Jong-Chul; Lee, Yong Hee; Kim, Yeon-Hee; Kang, Hyun-Suk; Cho, ChunHo

    2016-08-01

    This study was conducted to evaluate the prediction accuracies of The Observing System Research and Predictability EXperiment (THORPEX) Interactive Grand Global Ensemble (TIGGE) data at six operational forecast centers, using the root-mean-square difference (RMSD) and Brier score (BS), from April to July 2012. It also tested the precipitation predictability of ensemble prediction systems (EPSs) for the onset of the summer rainy season, the day of withdrawal of the spring drought over South Korea on 29 June 2012, with use of the ensemble mean precipitation, ensemble probability precipitation, 10-day lag ensemble forecasts (ensemble mean and probability precipitation), and effective drought index (EDI). The RMSD analysis of atmospheric variables (geopotential height at 500 hPa, temperature at 850 hPa, sea-level pressure and specific humidity at 850 hPa) showed that the prediction accuracies of the EPSs at the Meteorological Service of Canada (CMC) and China Meteorological Administration (CMA) were poor, and those at the European Centre for Medium-Range Weather Forecasts (ECMWF) and Korea Meteorological Administration (KMA) were good. ECMWF and KMA also showed better results than the other EPSs for predicting precipitation in the BS distributions. It was further found that the onset of the summer rainy season could be predicted using ensemble-mean precipitation from a 4-day lead time at all forecast centers. In addition, the spatial distributions of predicted precipitation of the EPSs at KMA and the Met Office of the United Kingdom (UKMO) were similar to those of observed precipitation, showing good predictive performance. The precipitation probability forecasts of the EPSs at CMA, the National Centers for Environmental Prediction (NCEP), and UKMO (ECMWF and KMA) at 1-day lead time produced over-forecasting (under-forecasting) in the reliability diagram, and all of them at 2-4-day lead times showed under-forecasting.
Also, the precipitation on the onset day of the summer rainy season could be predicted from a 4-day lead time up to the initial time by using the 10-day lag ensemble mean and probability forecasts. Additionally, the predictability of the withdrawal day of the spring drought, ended by precipitation on the onset day of the summer rainy season, was evaluated using the effective drought index (EDI) calculated from the ensemble mean precipitation forecasts and spreads of the five EPSs.
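
    The Brier score used in this evaluation is straightforward to compute from ensemble output. The sketch below, with invented member values and an assumed 5 mm/day event threshold (not the study's data), derives event probabilities as member fractions and scores them against 0/1 outcomes:

```python
def ensemble_event_probability(members, threshold):
    """Fraction of ensemble members forecasting precipitation >= threshold."""
    return sum(1 for m in members if m >= threshold) / len(members)

def brier_score(probabilities, outcomes):
    """Mean squared difference between forecast probability and the 0/1 outcome."""
    return sum((p - o) ** 2 for p, o in zip(probabilities, outcomes)) / len(probabilities)

# Illustrative values only: four forecast days, 5 mm/day event threshold.
ensembles = [[2.0, 6.5, 8.1, 5.2], [0.0, 0.4, 1.1, 0.2],
             [7.3, 9.0, 6.6, 5.5], [3.1, 4.9, 6.2, 0.8]]
observed = [1, 0, 1, 1]  # did >= 5 mm/day verify on each day?
probs = [ensemble_event_probability(e, 5.0) for e in ensembles]
print(probs)                          # [0.75, 0.0, 1.0, 0.25]
print(brier_score(probs, observed))   # 0.15625
```

    A perfect deterministic forecast scores 0; the score is bounded above by 1.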

  11. New Aspects of Probabilistic Forecast Verification Using Information Theory

    NASA Astrophysics Data System (ADS)

    Tödter, Julian; Ahrens, Bodo

    2013-04-01

    This work deals with information-theoretical methods in probabilistic forecast verification, particularly for ensemble forecasts. Recent findings concerning the "Ignorance Score" are briefly reviewed, and a consistent generalization to continuous forecasts is motivated. For ensemble-generated forecasts, the presented measures can be calculated exactly. The Brier Score (BS) and its generalizations to the multi-categorical Ranked Probability Score (RPS) and the Continuous Ranked Probability Score (CRPS) are prominent verification measures for probabilistic forecasts. In particular, their decompositions into measures quantifying the reliability, resolution, and uncertainty of the forecasts are attractive. Information theory sets up a natural framework for forecast verification. Recently, it has been shown that the BS is a second-order approximation of the information-based Ignorance Score (IGN), which also contains easily interpretable components and can be generalized to a ranked version (RIGN). Here, the IGN, its generalizations, and its decompositions are systematically discussed in analogy to the variants of the BS. Additionally, a Continuous Ranked IGN (CRIGN) is introduced in analogy to the CRPS. The useful properties of the conceptually appealing CRIGN are illustrated, together with an algorithm to evaluate its reliability, resolution, and uncertainty components for ensemble-generated forecasts. This algorithm can also be used to calculate the decomposition of the more traditional CRPS exactly. The applicability of the "new" measures is demonstrated in a small evaluation study of ensemble-based precipitation forecasts.
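
    For binary events, the Ignorance Score is simply the mean negative log probability assigned to what actually happened. A minimal sketch with invented forecast values (this is the basic score only, not the paper's CRIGN decomposition algorithm):

```python
import math

def ignorance_score(probabilities, outcomes):
    """IGN: mean -log2 of the probability assigned to the observed outcome."""
    return sum(-math.log2(p if o == 1 else 1.0 - p)
               for p, o in zip(probabilities, outcomes)) / len(probabilities)

def brier_score(probabilities, outcomes):
    """BS: per the abstract, a second-order approximation of the IGN."""
    return sum((p - o) ** 2 for p, o in zip(probabilities, outcomes)) / len(probabilities)

probs, obs = [0.5, 0.9, 0.2], [1, 1, 0]
print(ignorance_score(probs, obs), brier_score(probs, obs))
```

    A climatological 50/50 forecast costs exactly one bit per event; sharper correct forecasts cost less.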

  12. Time‐dependent renewal‐model probabilities when date of last earthquake is unknown

    USGS Publications Warehouse

    Field, Edward H.; Jordan, Thomas H.

    2015-01-01

    We derive time-dependent, renewal-model earthquake probabilities for the case in which the date of the last event is completely unknown, and compare these with the time-independent Poisson probabilities that are customarily used as an approximation in this situation. For typical parameter values, the renewal-model probabilities exceed Poisson results by more than 10% when the forecast duration exceeds ~20% of the mean recurrence interval. We also derive probabilities for the case in which the last event is further constrained to have occurred before historical record keeping began (the historic open interval), which can only serve to increase earthquake probabilities for typically applied renewal models. We conclude that accounting for the historic open interval can improve long-term earthquake rupture forecasts for California and elsewhere.
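
    When the date of the last event is unknown, the elapsed time follows the equilibrium (stationary renewal) distribution, so the probability of an event within duration T is (1/mu) * integral from 0 to T of [1 - F(t)] dt, where F is the inter-event CDF and mu its mean. A numerical sketch assuming a lognormal recurrence model with invented parameters (not the paper's), compared against the Poisson approximation:

```python
import math

def lognormal_cdf(t, mu_log, sigma_log):
    return 0.5 * (1.0 + math.erf((math.log(t) - mu_log) / (sigma_log * math.sqrt(2.0))))

def renewal_prob_unknown_last(T, mean, sigma_log, steps=20000):
    """P(event within T) when the date of the last event is unknown:
    integrate the survival function of the inter-event distribution."""
    mu_log = math.log(mean) - 0.5 * sigma_log ** 2  # lognormal with this mean
    dt = T / steps
    total = 0.0
    for i in range(steps):
        t = (i + 0.5) * dt  # midpoint rule, avoids log(0)
        total += (1.0 - lognormal_cdf(t, mu_log, sigma_log)) * dt
    return total / mean

mean_ri = 100.0  # mean recurrence interval, years (illustrative)
T = 30.0         # forecast duration, years (30% of the mean)
poisson = 1.0 - math.exp(-T / mean_ri)
renewal = renewal_prob_unknown_last(T, mean_ri, sigma_log=0.5)
print(poisson, renewal)  # the renewal probability exceeds the Poisson one
```

    Consistent with the abstract, the renewal probability exceeds the Poisson value once the duration is a sizeable fraction of the mean recurrence interval.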

  13. Climate sensitivity estimated from temperature reconstructions of the Last Glacial Maximum

    NASA Astrophysics Data System (ADS)

    Schmittner, A.; Urban, N.; Shakun, J. D.; Mahowald, N. M.; Clark, P. U.; Bartlein, P. J.; Mix, A. C.; Rosell-Melé, A.

    2011-12-01

    In 1959 IJ Good published the discussion "Kinds of Probability" in Science. Good identified (at least) five kinds. The need for (at least) a sixth kind of probability when quantifying uncertainty in the context of climate science is discussed. This discussion brings out the differences between weather-like forecasting tasks and climate-like tasks, with a focus on the effective use both of science and of modelling in support of decision making. Good also introduced the idea of a "dynamic probability": a probability one expects to change without any additional empirical evidence, the probabilities assigned by a chess-playing program when it is only halfway through its analysis being an example. This case is contrasted with the case of "mature probabilities", where a forecast algorithm (or model) has converged on its asymptotic probabilities and the question hinges on whether or not those probabilities are expected to change significantly before the event in question occurs, even in the absence of new empirical evidence. If so, then how might one report and deploy such immature probabilities rationally in scientific support of decision-making? Mature probability is suggested as a useful sixth kind; although Good would doubtless argue that we can get by with just one, effective communication with decision makers may be enhanced by speaking as if the others existed. This again highlights the distinction between weather-like and climate-like contexts. In the former one has access to a relevant climatology (a relevant, arguably informative distribution prior to any model simulations); in the latter that information is not available, although one can fall back on the scientific basis upon which the model itself rests and estimate the probability that the model output is in fact misinformative.
This subjective "probability of a big surprise" is one way to communicate the probability of model-based information holding in practice, that is, the probability that the information on which the model-based probability is conditioned actually holds. It is argued that no model-based climate-like probability forecast is complete without a quantitative estimate of its own irrelevance, and that the clear identification of model-based probability forecasts as mature or immature is a critical element for maintaining the credibility of science-based decision support, and can shape uncertainty quantification more widely.

  14. Determining the Probability of Violating Upper-Level Wind Constraints for the Launch of Minuteman III Ballistic Missiles at Vandenberg Air Force Base

    NASA Technical Reports Server (NTRS)

    Shafer, Jaclyn A.; Brock, Tyler M.

    2012-01-01

    The 30th Operational Support Squadron Weather Flight (30 OSSWF) provides comprehensive weather services to the space program at Vandenberg Air Force Base (VAFB) in California. One of their responsibilities is to monitor upper-level winds to ensure safe launch operations of the Minuteman III ballistic missile. The 30 OSSWF tasked the Applied Meteorology Unit (AMU) to analyze VAFB sounding data with the goal of determining the probability of violating (PoV) their upper-level thresholds for wind speed and shear constraints specific to this launch vehicle, and to develop a tool that will calculate the PoV of each constraint on the day of launch. In order to calculate the probability of exceeding each constraint, the AMU collected and analyzed historical data from VAFB. The historical sounding data were retrieved from the National Oceanic and Atmospheric Administration Earth System Research Laboratory archive for the years 1994-2011 and then stratified into four sub-seasons: January-March, April-June, July-September, and October-December. The AMU determined the theoretical distributions that best fit the maximum wind speed and maximum wind shear datasets and applied this information when calculating the averages and standard deviations needed for the historical and real-time PoV calculations. In addition, the AMU included forecast sounding data from the Rapid Refresh model. This information provides further insight for the launch weather officers (LWOs) when determining if a wind constraint violation will occur over the next few hours on the day of launch. The AMU developed an interactive graphical user interface (GUI) in Microsoft Excel using Visual Basic for Applications. The GUI displays the critical sounding data easily and quickly for LWOs on day of launch.
This tool will replace the existing one used by the 30 OSSWF, assist the LWOs in determining the probability of exceeding specific wind threshold values, and help to improve the overall upper winds forecast for the launch customer. This presentation will describe how the AMU calculated the historical and real-time PoV values for the specific upper-level wind launch constraints and outline the development of the interactive GUI display.
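
    The core PoV computation reduces to evaluating a fitted distribution's exceedance probability at each constraint threshold. A sketch assuming a Gumbel (extreme-value) fit for the period-maximum wind; the report determined the actual best-fit distributions from VAFB data, and the parameters below are invented:

```python
import math

def gumbel_cdf(x, loc, scale):
    """CDF of the Gumbel (type I extreme value) distribution."""
    return math.exp(-math.exp(-(x - loc) / scale))

def prob_of_violation(threshold, loc, scale):
    """Probability the period-maximum wind exceeds the constraint threshold."""
    return 1.0 - gumbel_cdf(threshold, loc, scale)

# Hypothetical Gumbel parameters for a max-wind-speed distribution (knots).
loc, scale = 45.0, 8.0
for threshold in (50.0, 60.0, 70.0):
    print(threshold, round(prob_of_violation(threshold, loc, scale), 4))
```

    The PoV naturally decreases as the constraint threshold rises above the location parameter.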

  15. Verification of Space Weather Forecasts using Terrestrial Weather Approaches

    NASA Astrophysics Data System (ADS)

    Henley, E.; Murray, S.; Pope, E.; Stephenson, D.; Sharpe, M.; Bingham, S.; Jackson, D.

    2015-12-01

    The Met Office Space Weather Operations Centre (MOSWOC) provides a range of 24/7 operational space weather forecasts, alerts, and warnings, which provide valuable information on space weather that can degrade electricity grids, radio communications, and satellite electronics. Forecasts issued include arrival times of coronal mass ejections (CMEs), and probabilistic forecasts for flares, geomagnetic storm indices, and energetic particle fluxes and fluences. These forecasts are produced twice daily using a combination of output from models such as Enlil, near-real-time observations, and forecaster experience. Verification of forecasts is crucial for users, researchers, and forecasters to understand the strengths and limitations of forecasts, and to assess forecaster added value. To this end, the Met Office (in collaboration with Exeter University) has been adapting verification techniques from terrestrial weather, and has been working closely with the International Space Environment Service (ISES) to standardise verification procedures. We will present the results of part of this work, analysing forecast and observed CME arrival times, assessing skill using 2x2 contingency tables. These MOSWOC forecasts can be objectively compared to those produced by the NASA Community Coordinated Modelling Center - a useful benchmark. This approach cannot be taken for the other forecasts, as they are probabilistic and categorical (e.g., geomagnetic storm forecasts give probabilities of exceeding levels from minor to extreme). We will present appropriate verification techniques being developed to address these forecasts, such as the ranked probability skill score, and comparing forecasts against climatology and persistence benchmarks. As part of this, we will outline the use of discrete time Markov chains to assess and improve the performance of our geomagnetic storm forecasts.
We will also discuss work to adapt a terrestrial verification visualisation system to space weather, to help MOSWOC forecasters view verification results in near real-time; plans to objectively assess flare forecasts under the EU Horizon 2020 FLARECAST project; and summarise ISES efforts to achieve consensus on verification.
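
    For ordered-category products such as geomagnetic storm levels, the ranked probability score compares cumulative forecast and observed distributions, and a skill score can be formed against a climatology benchmark. A minimal sketch with invented three-category probabilities (not MOSWOC values):

```python
def rps(forecast_probs, observed_category):
    """Ranked probability score over ordered categories (0 = perfect)."""
    K = len(forecast_probs)
    cum_f = cum_o = 0.0
    score = 0.0
    for k in range(K):
        cum_f += forecast_probs[k]
        cum_o += 1.0 if k == observed_category else 0.0
        score += (cum_f - cum_o) ** 2
    return score / (K - 1)

# Hypothetical 3-category storm forecast (quiet / minor / severe); "severe" verified.
forecast = [0.6, 0.3, 0.1]
climatology = [0.8, 0.15, 0.05]
rps_f = rps(forecast, 2)
rps_c = rps(climatology, 2)
rpss = 1.0 - rps_f / rps_c   # > 0 means the forecast beats climatology
print(rps_f, rps_c, rpss)
```

    Here the forecast gave the rare severe category more weight than climatology did, so its skill score is positive.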

  16. Operational earthquake forecasting can enhance earthquake preparedness

    USGS Publications Warehouse

    Jordan, T.H.; Marzocchi, W.; Michael, A.J.; Gerstenberger, M.C.

    2014-01-01

    We cannot yet predict large earthquakes in the short term with much reliability and skill, but the strong clustering exhibited in seismic sequences tells us that earthquake probabilities are not constant in time; they generally rise and fall over periods of days to years in correlation with nearby seismic activity. Operational earthquake forecasting (OEF) is the dissemination of authoritative information about these time‐dependent probabilities to help communities prepare for potentially destructive earthquakes. The goal of OEF is to inform the decisions that people and organizations must continually make to mitigate seismic risk and prepare for potentially destructive earthquakes on time scales from days to decades. To fulfill this role, OEF must provide a complete description of the seismic hazard—ground‐motion exceedance probabilities as well as short‐term rupture probabilities—in concert with the long‐term forecasts of probabilistic seismic‐hazard analysis (PSHA).

  17. Forecasting of future earthquakes in the northeast region of India considering energy released concept

    NASA Astrophysics Data System (ADS)

    Zarola, Amit; Sil, Arjun

    2018-04-01

    This study presents forecasting of the time and magnitude of the next earthquake in northeast India, using four probability distribution models (Gamma, Lognormal, Weibull and Log-logistic) and an updated earthquake catalog of magnitude Mw ≥ 6.0 events that occurred from 1737 to 2015 in the study area. On the basis of the past seismicity of the region, two types of conditional probabilities have been estimated using the best-fit model and respective model parameters. The first is the probability that the seismic energy (e × 10^20 ergs) expected to be released in the future earthquake exceeds a certain level of seismic energy (E × 10^20 ergs). The second is the probability that the seismic energy (a × 10^20 ergs/year) expected to be released per year exceeds a certain level of seismic energy per year (A × 10^20 ergs/year). The log-likelihood functions (ln L) were also estimated for all four probability distribution models; a higher value of ln L indicates a better model and a lower value a worse one. The time of the future earthquake is forecasted by dividing the total seismic energy expected to be released in the future earthquake by the total seismic energy expected to be released per year. The epicentres of the recent 4 January 2016 Manipur earthquake (M 6.7), 13 April 2016 Myanmar earthquake (M 6.9) and 24 August 2016 Myanmar earthquake (M 6.8) are located in zones Z.12, Z.16 and Z.15, respectively, which are identified seismic source zones in the study area, showing that the proposed techniques and models yield good forecasting accuracy.
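
    Conditional exceedance probabilities of this kind have a simple closed form once a distribution is fitted. A sketch for the Weibull case, with invented shape and scale parameters (energies in units of 10^20 ergs; not the study's fitted values):

```python
import math

def weibull_sf(x, shape, scale):
    """Survival function P(X > x) of a Weibull distribution."""
    return math.exp(-((x / scale) ** shape))

def conditional_exceedance(a, b, shape, scale):
    """P(X > b | X > a) for b >= a: ratio of survival functions."""
    return weibull_sf(b, shape, scale) / weibull_sf(a, shape, scale)

# Hypothetical Weibull fit to released seismic energy (units of 1e20 ergs).
shape, scale = 1.2, 3.0
print(conditional_exceedance(2.0, 5.0, shape, scale))
```

    The same ratio-of-survival-functions construction applies to the other three fitted distributions.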

  18. Using a Software Tool in Forecasting: a Case Study of Sales Forecasting Taking into Account Data Uncertainty

    NASA Astrophysics Data System (ADS)

    Fabianová, Jana; Kačmáry, Peter; Molnár, Vieroslav; Michalik, Peter

    2016-10-01

    Forecasting is one of the logistics activities, and a sales forecast is the starting point for the elaboration of business plans. Forecast accuracy affects business outcomes and ultimately may significantly affect the economic stability of the company. The accuracy of the prediction depends on the suitability of the forecasting methods used, experience, the quality of the input data, the time period and other factors. The input data are usually not deterministic but are often of a random nature, affected by uncertainties of the market environment and many other factors. By taking the input data uncertainty into account, the forecast error can be reduced. This article deals with the use of a software tool for incorporating data uncertainty into forecasting. A forecasting approach is proposed, and the impact of uncertain input parameters on the target forecast value is simulated in a case study model. Statistical analysis and risk analysis of the forecast results are carried out, including sensitivity analysis and variables impact analysis.
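
    The core idea of propagating input uncertainty into a forecast can be sketched with a plain Monte Carlo simulation; all distributions and figures below are invented for illustration and are not the case study's model:

```python
import random
import statistics

random.seed(42)

def simulate_sales_forecast(n_runs=20000):
    """Propagate uncertain inputs through a toy sales model (numbers invented)."""
    revenues = []
    for _ in range(n_runs):
        demand_growth = random.gauss(0.05, 0.02)            # uncertain growth rate
        unit_price = random.triangular(90.0, 110.0, 100.0)  # uncertain price, EUR
        units_sold = 1000.0 * (1.0 + demand_growth)
        revenues.append(units_sold * unit_price)
    return revenues

revenues = simulate_sales_forecast()
mean_rev = statistics.mean(revenues)
cuts = statistics.quantiles(revenues, n=20)
p05, p95 = cuts[0], cuts[-1]   # ~5th and ~95th percentiles
print(round(mean_rev), round(p05), round(p95))
```

    Reporting a forecast interval (here the 5th-95th percentile range) rather than a single point value is what lets the decision maker see the risk, which is the article's central point.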

  19. Applied Meteorology Unit (AMU) Quarterly Report - Fourth Quarter FY-10

    NASA Technical Reports Server (NTRS)

    Bauman, William; Crawford, Winifred; Barrett, Joe; Watson, Leela; Wheeler, Mark

    2010-01-01

    Three AMU tasks were completed this quarter, each resulting in a forecast tool now being used in operations and a final report documenting how the work was done. AMU personnel completed the following tasks: (1) Phase II of the Peak Wind Tool for General Forecasting task, delivering an improved wind forecasting tool to operations and providing training on its use; (2) the ADAS Update and Maintainability task, in which a graphical user interface (GUI) was updated with new scripts and delivered to the Spaceflight Meteorology Group at Johnson Space Center, Texas, and the National Weather Service in Melbourne, Fla.; and (3) the Verify MesoNAM Performance task, in which a GUI was created and delivered that forecasters will use to determine the performance of the operational MesoNAM weather model forecast.

  20. Ecological Forecasting of Vibrio sp. in U.S. Coastal Waters Using an Operational Platform, a Pilot Project of the NOAA Ecological Forecasting Roadmap. Development of Web-based Tools and Forecasts to Help the Public Avoid Exposure to Vibrio vulnificus and Shellfish Harvesters Avoid Dangerous Concentrations of Vibrio parahaemolyticus.

    NASA Astrophysics Data System (ADS)

    Daniels, R. M.; Jacobs, J. M.; Paranjpye, R.; Lanerolle, L. W.

    2016-02-01

    The Pathogens group of the NOAA Ecological Forecasting Roadmap has begun a range of efforts to monitor and predict potential pathogen occurrences in shellfish and in U.S. Coastal waters. NOAA/NCOSS along with NMFS/NWFSC have led the Pathogens group and the development of web based tools and forecasts for both Vibrio vulnificus and Vibrio parahaemolyticus. A strong relationship with FDA has allowed the team to develop forecasts that will serve U.S. shellfish harvesters and consumers. NOAA/NOS/CSDL has provided modeling expertise to help the group use the hydrodynamic models and their forecasts of physical variables that drive the ecological predictions. The NOAA/NWS/Ocean Prediction Center has enabled these ecological forecasting efforts by providing the infrastructure, computing knowledge and experience in an operational culture. Daily forecasts have been demonstrated and are available from the web for the Chesapeake Bay, Delaware Bay, Northern Gulf of Mexico, Tampa Bay, Puget Sound and Long Island Sound. The forecast systems run on a daily basis, fed by NOS model data from the NWS/NCEP supercomputers. New forecast tools, including V. parahaemolyticus post-harvest growth and doubling time at ambient air temperature, will be described.

  1. Using total precipitable water anomaly as a forecast aid for heavy precipitation events

    NASA Astrophysics Data System (ADS)

    VandenBoogart, Lance M.

    Heavy precipitation events are of interest to weather forecasters, local government officials, and the Department of Defense. These events can cause flooding which endangers lives and property. Military concerns include decreased trafficability for military vehicles, which hinders both war- and peace-time missions. Even in data-rich areas such as the United States, it is difficult to determine when and where a heavy precipitation event will occur. The challenges are compounded in data-denied regions. The hypothesis that total precipitable water anomaly (TPWA) will be positive and increasing preceding heavy precipitation events is tested in order to establish an understanding of TPWA evolution. Results are then used to create a precipitation forecast aid. The operational, 16 km-gridded, 6-hourly TPWA product developed at the Cooperative Institute for Research in the Atmosphere (CIRA) compares a blended TPW product with a TPW climatology to give a percent of normal TPWA value. TPWA evolution is examined for 84 heavy precipitation events which occurred between August 2010 and November 2011. An algorithm which uses various TPWA thresholds derived from the 84 events is then developed and tested using dichotomous contingency table verification statistics to determine the extent to which satellite-based TPWA might be used to aid in forecasting precipitation over mesoscale domains. The hypothesis of positive and increasing TPWA preceding heavy precipitation events is supported by the analysis. Event-average TPWA rises for 36 hours and peaks at 154% of normal at the event time. The average precipitation event detected by the forecast algorithm is not of sufficient magnitude to be termed a "heavy" precipitation event; however, the algorithm adds skill to a climatological precipitation forecast. Probability of detection is low and false alarm ratios are large, thus qualifying the algorithm's current use as an aid rather than a deterministic forecast tool. 
The algorithm's ability to be easily modified and quickly run gives it potential for future use in precipitation forecasting.
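
    The dichotomous contingency-table verification mentioned above can be computed directly from hit/miss counts. A minimal sketch with invented counts (not the thesis's verification numbers):

```python
def contingency_scores(hits, misses, false_alarms, correct_negatives):
    """Standard dichotomous verification scores from a 2x2 contingency table."""
    pod = hits / (hits + misses)                    # probability of detection
    far = false_alarms / (hits + false_alarms)      # false alarm ratio
    csi = hits / (hits + misses + false_alarms)     # critical success index
    bias = (hits + false_alarms) / (hits + misses)  # frequency bias
    return {"POD": pod, "FAR": far, "CSI": csi, "bias": bias}

# Invented counts for an event-detection algorithm.
print(contingency_scores(hits=30, misses=20, false_alarms=45, correct_negatives=400))
```

    A low POD with a large FAR, as in this invented example, is exactly the situation the abstract describes: useful as an aid, not as a deterministic forecast.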

  2. A probabilistic neural network based approach for predicting the output power of wind turbines

    NASA Astrophysics Data System (ADS)

    Tabatabaei, Sajad

    2017-03-01

    Authentic predictive tools that account for the uncertainty of wind speed forecasts are strongly required as wind power penetrates power systems. Traditional models that generate point forecasts are no longer sufficiently trustworthy. The present paper therefore utilises the concept of prediction intervals (PIs) to assess the uncertainty of wind power generation in power systems. It uses a recently introduced non-parametric approach called lower upper bound estimation (LUBE) to build the PIs, since the forecasting errors cannot be modelled properly by probability distribution functions. In the proposed LUBE method, a PI combination-based fuzzy framework is used to overcome the performance instability of the neural networks (NNs) used in LUBE. In comparison to other methods, this formulation better satisfies the PI coverage probability and PI normalised average width (PINAW) criteria. Since this non-linear problem is highly complex, a new heuristic-based optimisation algorithm comprising a novel modification is introduced to solve it. Based on data sets taken from a wind farm in Australia, the feasibility and satisfying performance of the suggested method have been investigated.
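
    The two PI quality criteria named here, coverage and normalised average width, are easy to evaluate once interval bounds exist. A sketch with invented intervals and observations (the LUBE network that produces the bounds is not reproduced):

```python
def picp(lowers, uppers, observations):
    """PI coverage probability: fraction of observations inside their interval."""
    inside = sum(1 for lo, up, y in zip(lowers, uppers, observations) if lo <= y <= up)
    return inside / len(observations)

def pinaw(lowers, uppers, observations):
    """PI normalised average width: mean interval width over the observed range."""
    mean_width = sum(up - lo for lo, up in zip(lowers, uppers)) / len(lowers)
    return mean_width / (max(observations) - min(observations))

# Invented wind-power intervals (MW) and matching observations.
lo = [10, 12, 8, 15]
up = [20, 25, 18, 30]
y = [18, 26, 9, 22]
print(picp(lo, up, y), pinaw(lo, up, y))
```

    Good PIs maximise PICP while minimising PINAW; the two objectives pull in opposite directions, which is why PI construction is framed as an optimisation problem.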

  3. The MIT Integrated Global System Model: A facility for Assessing and Communicating Climate Change Uncertainty (Invited)

    NASA Astrophysics Data System (ADS)

    Prinn, R. G.

    2013-12-01

    The world is facing major challenges that create tensions between human development and environmental sustainability. In facing these challenges, computer models are invaluable tools for addressing the need for probabilistic approaches to forecasting. To illustrate this, I use the MIT Integrated Global System Model framework (IGSM; http://globalchange.mit.edu ). The IGSM consists of a set of coupled sub-models of global economic and technological development and resultant emissions, and physical, dynamical and chemical processes in the atmosphere, land, ocean and ecosystems (natural and managed). Some of the sub-models have both complex and simplified versions available, with the choice of which version to use being guided by the questions being addressed. Some sub-models (e.g. urban air pollution) are reduced forms of complex ones created by probabilistic collocation with polynomial chaos bases. Given the significant uncertainties in the model components, it is highly desirable that forecasts be probabilistic. We achieve this by running 400-member ensembles (Latin hypercube sampling) with different choices for key uncertain variables and processes within the human and natural system model components (pdfs of inputs estimated by model-observation comparisons, literature surveys, or expert elicitation). The IGSM has recently been used for probabilistic forecasts of climate, each using 400-member ensembles: one ensemble assumes no explicit climate mitigation policy and others assume increasingly stringent policies involving stabilization of greenhouse gases at various levels. These forecasts indicate clearly that the greatest effect of these policies is to lower the probability of extreme changes. The value of such probability analyses for policy decision-making lies in their ability to compare relative (not just absolute) risks of various policies, which are less affected by the earth system model uncertainties.
Given the uncertainties in forecasts, it is also clear that we need to evaluate policies based on their ability to lower risk, and to re-evaluate decisions over time as new knowledge is gained. Reference: R. G. Prinn, Development and Application of Earth System Models, Proceedings, National Academy of Science, June 15, 2012, http://www.pnas.org/cgi/doi/10.1073/pnas.1107470109.
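
    The 400-member ensembles are built by Latin hypercube sampling of the uncertain inputs. A minimal stdlib sketch of the stratified sampling step on the unit hypercube (mapping each coordinate to an actual input pdf would follow via its inverse CDF):

```python
import random

def latin_hypercube(n_samples, n_dims, rng=None):
    """Latin hypercube sample on [0, 1)^d: one point per stratum in every dimension."""
    rng = rng or random.Random(0)
    columns = []
    for _ in range(n_dims):
        strata = list(range(n_samples))
        rng.shuffle(strata)  # random assignment of strata to sample indices
        columns.append([(s + rng.random()) / n_samples for s in strata])
    return list(zip(*columns))  # n_samples points of dimension n_dims

points = latin_hypercube(10, 3)
print(points[0])
```

    Unlike plain random sampling, every one of the 10 equal-probability bins in every dimension is hit exactly once, which is what lets a modest ensemble size cover the input space.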

  4. Results of the Regional Earthquake Likelihood Models (RELM) test of earthquake forecasts in California.

    PubMed

    Lee, Ya-Ting; Turcotte, Donald L; Holliday, James R; Sachs, Michael K; Rundle, John B; Chen, Chien-Chih; Tiampo, Kristy F

    2011-10-04

    The Regional Earthquake Likelihood Models (RELM) test of earthquake forecasts in California was the first competitive evaluation of forecasts of future earthquake occurrence. Participants submitted expected probabilities of occurrence of M ≥ 4.95 earthquakes in 0.1° × 0.1° cells for the period January 1, 2006, to December 31, 2010. Probabilities were submitted for 7,682 cells in California and adjacent regions. During this period, 31 M ≥ 4.95 earthquakes occurred in the test region. These earthquakes occurred in 22 test cells. This seismic activity was dominated by earthquakes associated with the M = 7.2, April 4, 2010, El Mayor-Cucapah earthquake in northern Mexico. This earthquake occurred in the test region, and 16 of the other 30 earthquakes in the test region could be associated with it. Nine complete forecasts were submitted by six participants. In this paper, we present the forecasts in a way that allows the reader to evaluate which forecast is the most "successful" in terms of the locations of future earthquakes. We conclude that the RELM test was a success and suggest ways in which the results can be used to improve future forecasts.

  5. Results of the Regional Earthquake Likelihood Models (RELM) test of earthquake forecasts in California

    PubMed Central

    Lee, Ya-Ting; Turcotte, Donald L.; Holliday, James R.; Sachs, Michael K.; Rundle, John B.; Chen, Chien-Chih; Tiampo, Kristy F.

    2011-01-01

    The Regional Earthquake Likelihood Models (RELM) test of earthquake forecasts in California was the first competitive evaluation of forecasts of future earthquake occurrence. Participants submitted expected probabilities of occurrence of M≥4.95 earthquakes in 0.1° × 0.1° cells for the period January 1, 2006, to December 31, 2010. Probabilities were submitted for 7,682 cells in California and adjacent regions. During this period, 31 M≥4.95 earthquakes occurred in the test region. These earthquakes occurred in 22 test cells. This seismic activity was dominated by earthquakes associated with the M = 7.2, April 4, 2010, El Mayor–Cucapah earthquake in northern Mexico. This earthquake occurred in the test region, and 16 of the other 30 earthquakes in the test region could be associated with it. Nine complete forecasts were submitted by six participants. In this paper, we present the forecasts in a way that allows the reader to evaluate which forecast is the most “successful” in terms of the locations of future earthquakes. We conclude that the RELM test was a success and suggest ways in which the results can be used to improve future forecasts. PMID:21949355
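
    Gridded rate forecasts of this kind are commonly scored with a cell-wise Poisson log-likelihood, the approach used in CSEP-style evaluations; the numbers below are invented, not RELM submissions:

```python
import math

def poisson_log_likelihood(expected_counts, observed_counts):
    """Joint log-likelihood of observed earthquake counts given a gridded forecast,
    assuming an independent Poisson in each cell (expected rates must be > 0)."""
    ll = 0.0
    for lam, n in zip(expected_counts, observed_counts):
        ll += -lam + n * math.log(lam) - math.lgamma(n + 1)
    return ll

# Invented 4-cell forecast: expected numbers of M >= 4.95 events, and what occurred.
expected = [0.5, 0.1, 1.2, 0.05]
observed = [1, 0, 2, 0]
print(poisson_log_likelihood(expected, observed))
```

    Forecasts that place their expected rate where events actually occur receive a higher (less negative) log-likelihood, which is the basis for ranking competing submissions.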

  6. Maximizing Statistical Power When Verifying Probabilistic Forecasts of Hydrometeorological Events

    NASA Astrophysics Data System (ADS)

    DeChant, C. M.; Moradkhani, H.

    2014-12-01

    Hydrometeorological events (i.e. floods, droughts, precipitation) are increasingly being forecasted probabilistically, owing to the uncertainties in the underlying causes of these phenomena. In these forecasts, the probability of the event, over some lead time, is estimated based on some model simulations or predictive indicators. By issuing probabilistic forecasts, agencies may communicate the uncertainty in the event occurring. Assuming that the assigned probability of the event is correct, which is referred to as a reliable forecast, the end user may perform some risk management based on the potential damages resulting from the event. Alternatively, an unreliable forecast may give false impressions of the actual risk, leading to improper decision making when protecting resources from extreme events. Due to this requisite for reliable forecasts to perform effective risk management, this study takes a renewed look at reliability assessment in event forecasts. Illustrative experiments will be presented, showing deficiencies in the commonly available approaches (Brier Score, Reliability Diagram). Overall, it is shown that the conventional reliability assessment techniques do not maximize the ability to distinguish between a reliable and unreliable forecast. In this regard, a theoretical formulation of the probabilistic event forecast verification framework will be presented. From this analysis, hypothesis testing with the Poisson-Binomial distribution is the most exact model available for the verification framework, and therefore maximizes one's ability to distinguish between a reliable and unreliable forecast. Application of this verification system was also examined within a real forecasting case study, highlighting the additional statistical power provided with the use of the Poisson-Binomial distribution.
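
    The Poisson-Binomial distribution named here has an exact PMF obtainable by sequential convolution, which makes the reliability hypothesis test computable without approximation. A minimal sketch with invented forecast probabilities:

```python
def poisson_binomial_pmf(probs):
    """Exact PMF of the number of successes from independent, non-identical Bernoullis."""
    pmf = [1.0]
    for p in probs:
        nxt = [0.0] * (len(pmf) + 1)
        for k, mass in enumerate(pmf):
            nxt[k] += mass * (1.0 - p)      # this trial fails
            nxt[k + 1] += mass * p          # this trial succeeds
        pmf = nxt
    return pmf

def tail_prob(probs, k_observed):
    """P(K >= k_observed) under the null that the forecast probabilities are reliable."""
    return sum(poisson_binomial_pmf(probs)[k_observed:])

# If these forecast probabilities are reliable, how surprising are 4+ observed events?
forecast_probs = [0.1, 0.2, 0.3, 0.1, 0.4]
print(tail_prob(forecast_probs, 4))
```

    A very small tail probability is evidence against reliability: the events verified far more often than the issued probabilities implied.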

  7. Development of a European Ensemble System for Seasonal Prediction: Application to crop yield

    NASA Astrophysics Data System (ADS)

    Terres, J. M.; Cantelaube, P.

    2003-04-01

    Western European agriculture is highly intensive, and the weather is the main source of uncertainty for crop yield assessment and for crop management. In the current system, at the time when a crop yield forecast is issued, the weather conditions leading up to harvest time are unknown and are therefore a major source of uncertainty. The use of seasonal weather forecasts would bring additional information for the remaining crop season and has valuable benefits for improving the management of agricultural markets and environmentally sustainable farm practices. An innovative method for supplying seasonal forecast information to crop simulation models has been developed in the frame of the EU-funded research project DEMETER. It consists of running a crop model on each individual member of the seasonal hindcasts to derive a probability distribution of crop yield. Preliminary results for the cumulative probability function of wheat yield provide information on both the yield anomaly and the reliability of the forecast. Based on the spread of the probability distribution, the end-user can directly quantify the benefits and risks of taking weather-sensitive decisions.
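
    The cumulative probability function of yield is just the empirical CDF of the per-member crop-model outputs. A sketch with invented ensemble yields (the DEMETER hindcast members would supply the real values):

```python
def empirical_cdf(samples, x):
    """Fraction of ensemble members at or below x."""
    return sum(1 for s in samples if s <= x) / len(samples)

# Invented wheat-yield ensemble (t/ha), one value per seasonal-forecast member.
yields = [6.1, 5.4, 7.0, 6.6, 5.9, 6.3, 5.1, 6.8, 6.0, 5.7]
climatological_mean = 6.2   # assumed reference value
prob_below_normal = empirical_cdf(yields, climatological_mean)
print(prob_below_normal)    # 0.6
```

    The spread of the member yields around this CDF is what the abstract proposes as the user's direct measure of forecast risk.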

  8. MASTER: a model to improve and standardize clinical breakpoints for antimicrobial susceptibility testing using forecast probabilities.

    PubMed

    Blöchliger, Nicolas; Keller, Peter M; Böttger, Erik C; Hombach, Michael

    2017-09-01

    The procedure for setting clinical breakpoints (CBPs) for antimicrobial susceptibility has been poorly standardized with respect to population data, pharmacokinetic parameters and clinical outcome. Tools to standardize CBP setting could result in improved antibiogram forecast probabilities. We propose a model to estimate probabilities for methodological categorization errors and defined zones of methodological uncertainty (ZMUs), i.e. ranges of zone diameters that cannot reliably be classified. The impact of ZMUs on methodological error rates was used for CBP optimization. The model distinguishes theoretical true inhibition zone diameters from observed diameters, which suffer from methodological variation. True diameter distributions are described with a normal mixture model. The model was fitted to observed inhibition zone diameters of clinical Escherichia coli strains. Repeated measurements for a quality control strain were used to quantify methodological variation. For 9 of 13 antibiotics analysed, our model predicted error rates of < 0.1% applying current EUCAST CBPs. Error rates were > 0.1% for ampicillin, cefoxitin, cefuroxime and amoxicillin/clavulanic acid. Increasing the susceptible CBP (cefoxitin) and introducing ZMUs (ampicillin, cefuroxime, amoxicillin/clavulanic acid) decreased error rates to < 0.1%. ZMUs contained low numbers of isolates for ampicillin and cefuroxime (3% and 6%), whereas the ZMU for amoxicillin/clavulanic acid contained 41% of all isolates and was considered not practical. We demonstrate that CBPs can be improved and standardized by minimizing methodological categorization error rates. ZMUs may be introduced if an intermediate zone is not appropriate for pharmacokinetic/pharmacodynamic or drug dosing reasons. Optimized CBPs will provide a standardized antibiotic susceptibility testing interpretation at a defined level of probability. © The Author 2017. 
    Published by Oxford University Press on behalf of the British Society for Antimicrobial Chemotherapy. All rights reserved.
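
    The mixture-plus-measurement-noise idea in this record can be sketched with a minimal Monte Carlo: true inhibition zone diameters come from a normal mixture (resistant and susceptible populations), observed diameters add methodological variation, and a zone of methodological uncertainty (ZMU) around the breakpoint withholds unreliable results. All parameter values below are invented for illustration, not the authors' fitted values.

```python
import random

random.seed(42)

# Hypothetical two-component mixture of "true" zone diameters (mm):
# a resistant population near 10 mm and a susceptible one near 24 mm.
COMPONENTS = [(0.3, 10.0, 1.5), (0.7, 24.0, 2.0)]  # (weight, mean mm, sd mm)
SIGMA_METHOD = 1.2   # methodological sd, as would be estimated from QC replicates
CBP = 17.0           # hypothetical susceptible clinical breakpoint (mm)

def draw_true_diameter():
    r, acc = random.random(), 0.0
    for w, mu, sd in COMPONENTS:
        acc += w
        if r <= acc:
            return random.gauss(mu, sd)
    w, mu, sd = COMPONENTS[-1]
    return random.gauss(mu, sd)

def categorization_error_rate(n=200_000, zmu=None):
    """Fraction of reported isolates whose observed diameter falls on the
    wrong side of the breakpoint relative to the true diameter; observed
    diameters inside the ZMU interval are withheld, not reported."""
    errors = reported = 0
    for _ in range(n):
        true_d = draw_true_diameter()
        obs_d = true_d + random.gauss(0.0, SIGMA_METHOD)
        if zmu and zmu[0] <= obs_d <= zmu[1]:
            continue  # inside the ZMU: result not reported
        reported += 1
        if (true_d >= CBP) != (obs_d >= CBP):
            errors += 1
    return errors / reported

base = categorization_error_rate()
with_zmu = categorization_error_rate(zmu=(CBP - 1.5, CBP + 1.5))
print(f"error rate without ZMU: {base:.4%}, with ZMU: {with_zmu:.4%}")
```

    Introducing the ZMU trades a share of reportable isolates for a lower methodological error rate, which is the optimization the paper formalizes.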

  9. Tracking Expected Improvements of Decadal Prediction in Climate Services

    NASA Astrophysics Data System (ADS)

    Suckling, E.; Thompson, E.; Smith, L. A.

    2013-12-01

    Physics-based simulation models are ultimately expected to provide the best available (decision-relevant) probabilistic climate predictions, as they can capture the dynamics of the Earth System across a range of situations, including situations for which observations for the construction of empirical models are scant if not nonexistent. This fact in itself provides neither evidence that predictions from today's Earth System Models will outperform today's empirical models, nor a guide to the space and time scales on which today's model predictions are adequate for a given purpose. Empirical (data-based) models are employed to make probability forecasts on decadal timescales. The skill of these forecasts is contrasted with that of state-of-the-art climate models, and the challenges faced by each approach are discussed. The focus is on providing decision-relevant probability forecasts for decision support. An empirical model, known as Dynamic Climatology, is shown to be competitive with CMIP5 climate models on decadal-scale probability forecasts. Contrasting the skill of simulation models not only with each other but also with empirical models can reveal the space and time scales on which a generation of simulation models exploits its physical basis effectively. It can also quantify their ability to add information in the formation of operational forecasts. Difficulties (i) of information contamination, (ii) of the interpretation of probabilistic skill, and (iii) of artificial skill complicate each modelling approach, and are discussed. "Physics-free" empirical models provide fixed, quantitative benchmarks for the evaluation of ever more complex climate models, something that is not available from (inter)comparisons restricted to only complex models. At present, empirical models can also provide a background term for blending in the formation of probability forecasts from ensembles of simulation models. 
    In weather forecasting this role is filled by the climatological distribution, which can significantly enhance the value of longer lead-time weather forecasts to those who use them. It is suggested that the direct comparison of simulation models with empirical models become a regular component of large model forecast intercomparison and evaluation. This would clarify the extent to which a given generation of state-of-the-art simulation models provides information beyond that available from simpler empirical models. It would also clarify current limitations in using simulation forecasting for decision support. No model-based probability forecast is complete without a quantitative estimate of its own irrelevance; this estimate is likely to increase as a function of lead time. A lack of decision-relevant quantitative skill would not bring the science-based foundation of anthropogenic warming into doubt. Similar levels of skill with empirical models do, however, suggest a clear quantification of limits, as a function of lead time, on the spatial and temporal scales at which decisions based on such model output are expected to prove maladaptive. Failing to clearly state such weaknesses of a given generation of simulation models, while clearly stating their strengths and their foundation, risks the credibility of science in support of policy in the long term.
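
    The benchmark-and-blend idea in this record can be made concrete with a toy skill comparison. The sketch below scores a climatological Gaussian forecast, a sharper but biased "simulation model" forecast, and an equal-weight blend of the two using the ignorance (negative log density) score; all distributions and numbers are invented, and the ignorance score is one common choice rather than necessarily the authors' metric.

```python
import math
import random

random.seed(7)

def gaussian_logpdf(y, mu, sd):
    return -0.5 * math.log(2 * math.pi * sd * sd) - (y - mu) ** 2 / (2 * sd * sd)

# Synthetic "truth": anomalies drawn from the climatological distribution N(0, 1).
outcomes = [random.gauss(0.0, 1.0) for _ in range(5000)]

def ignorance(mu, sd):
    """Mean negative log predictive density of a fixed Gaussian forecast (lower is better)."""
    return -sum(gaussian_logpdf(y, mu, sd) for y in outcomes) / len(outcomes)

def blended_ignorance(w, mu, sd):
    """Ignorance of a mixture: weight w on the model, 1-w on the climatology N(0, 1)."""
    total = 0.0
    for y in outcomes:
        p = (w * math.exp(gaussian_logpdf(y, mu, sd))
             + (1 - w) * math.exp(gaussian_logpdf(y, 0.0, 1.0)))
        total -= math.log(p)
    return total / len(outcomes)

clim = ignorance(0.0, 1.0)            # fixed climatological benchmark
model = ignorance(0.3, 0.8)           # sharper but biased "simulation model"
blend = blended_ignorance(0.5, 0.3, 0.8)
print(f"climatology {clim:.3f}, model {model:.3f}, blend {blend:.3f}")
```

    Here the climatological background term protects the blend from the biased model, which is the role the abstract assigns to empirical benchmarks.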

  10. On the validity of cosmological Fisher matrix forecasts

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wolz, Laura; Kilbinger, Martin; Weller, Jochen

    2012-09-01

    We present a comparison of Fisher matrix forecasts for cosmological probes with Monte Carlo Markov Chain (MCMC) posterior likelihood estimation methods. We analyse the performance of future Dark Energy Task Force (DETF) stage-III and stage-IV dark-energy surveys using supernovae, baryon acoustic oscillations and weak lensing as probes. We concentrate in particular on the dark-energy equation of state parameters w_0 and w_a. For purely geometrical probes, and especially when marginalising over w_a, we find considerable disagreement between the two methods, since in this case the Fisher matrix cannot reproduce the highly non-elliptical shape of the likelihood function. More quantitatively, the Fisher method underestimates the marginalized errors for purely geometrical probes by 30%-70%. For cases including structure formation, such as weak lensing, we find that the posterior probability contours from the Fisher matrix estimation are in good agreement with the MCMC contours, with the forecasted errors changing only at the 5% level. We then explore non-linear transformations resulting in physically motivated parameters and investigate whether these parameterisations exhibit Gaussian behaviour. We conclude that for the purely geometrical probes and, more generally, in cases where it is not known whether the likelihood is close to Gaussian, the Fisher matrix is not the appropriate tool to produce reliable forecasts.
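
    The comparison in this record can be illustrated on a toy problem: for a model that is non-linear in its parameters, Fisher-matrix errors at the fiducial point need not match the true marginalized posterior width. The model, fiducial values and noise level below are invented, and a brute-force grid posterior stands in for the MCMC used in the paper.

```python
import numpy as np

# Toy observable m(x) = w0 * exp(wa * x), non-linear in wa.
x = np.linspace(0.0, 1.0, 10)
w0_fid, wa_fid, sigma = 1.0, -0.5, 0.3
rng = np.random.default_rng(0)
data = w0_fid * np.exp(wa_fid * x) + rng.normal(0.0, sigma, x.size)

# Fisher matrix from analytic derivatives at the fiducial point.
dm_dw0 = np.exp(wa_fid * x)
dm_dwa = w0_fid * x * np.exp(wa_fid * x)
F = np.array([[dm_dw0 @ dm_dw0, dm_dw0 @ dm_dwa],
              [dm_dwa @ dm_dw0, dm_dwa @ dm_dwa]]) / sigma**2
fisher_err = np.sqrt(np.diag(np.linalg.inv(F)))  # marginalized 1-sigma errors

# Brute-force posterior on a grid (flat priors) as a stand-in for MCMC.
w0_grid = np.linspace(0.2, 2.2, 200)
wa_grid = np.linspace(-2.5, 1.5, 200)
W0, WA = np.meshgrid(w0_grid, wa_grid, indexing="ij")
model = W0[:, :, None] * np.exp(WA[:, :, None] * x[None, None, :])
chi2 = np.sum((data - model) ** 2, axis=-1) / sigma**2
post = np.exp(-0.5 * (chi2 - chi2.min()))
post /= post.sum()
mean_wa = np.sum(post * WA)
std_wa = float(np.sqrt(np.sum(post * (WA - mean_wa) ** 2)))

print(f"Fisher sigma(wa) = {fisher_err[1]:.3f}, grid-posterior sigma(wa) = {std_wa:.3f}")
```

    Comparing the two numbers for a given data realization shows how far the Gaussian (ellipse) approximation of the Fisher method can drift from the full posterior when the likelihood is non-elliptical.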

  11. Probabilistic forecasting of extreme weather events based on extreme value theory

    NASA Astrophysics Data System (ADS)

    Van De Vyver, Hans; Van Schaeybroeck, Bert

    2016-04-01

    Extreme events in weather and climate, such as high wind gusts, heavy precipitation or extreme temperatures, are commonly associated with high impacts on both environment and society. Forecasting extreme weather events is difficult, and very high-resolution models are needed to describe extreme weather phenomena explicitly. A prediction system for such events should therefore preferably be probabilistic in nature. Probabilistic forecasts and state estimations are nowadays common in the numerical weather prediction community. In this work, we develop a new probabilistic framework based on extreme value theory that aims to provide early warnings up to several days in advance. We consider pairs (X,Y) of extreme events, where X represents a deterministic forecast and Y the observation variable (for instance wind speed). More specifically, two problems are addressed: (1) Given a high forecast X = x_0, what is the probability that Y > y? In other words, provide inference on the conditional probability Pr{Y > y | X = x_0}. (2) Given a probabilistic model for Problem 1, what is the impact on the verification analysis of extreme events? These problems can be solved with bivariate extremes (Coles, 2001) and the verification analysis of Ferro (2007). We apply the Ramos and Ledford (2009) parametric model for bivariate tail estimation of the pair (X,Y). The model accommodates different types of extremal dependence and asymmetry within a parsimonious representation. Results are presented using the ensemble reforecast system of the European Centre for Medium-Range Weather Forecasts (Hagedorn, 2008). Coles, S. (2001) An Introduction to Statistical Modelling of Extreme Values. Springer-Verlag. Ferro, C.A.T. (2007) A probability model for verifying deterministic forecasts of extreme events. Wea. Forecasting 22, 1089-1100. Hagedorn, R. (2008) Using the ECMWF reforecast dataset to calibrate EPS forecasts. ECMWF Newsletter 117, 8-13. Ramos, A., Ledford, A. (2009) A new class of models for bivariate joint tails. J. R. Statist. Soc. B 71, 219-241.
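
    The conditional quantity at the heart of this record, Pr(Y > y | X > x), can be estimated empirically from forecast/observation pairs. The sketch below is NOT the Ramos-Ledford parametric estimator; it only illustrates the target quantity on synthetic pairs with an invented dependence structure.

```python
import random

random.seed(1)

# Synthetic forecast/observation pairs with positive dependence.
pairs = []
for _ in range(100_000):
    x = random.gauss(10.0, 3.0)        # deterministic wind-speed forecast (m/s)
    y = x + random.gauss(0.0, 2.0)     # corresponding observation
    pairs.append((x, y))

def cond_exceedance(x_thr, y_thr):
    """Empirical Pr(Y > y_thr | X > x_thr)."""
    base = [(x, y) for x, y in pairs if x > x_thr]
    if not base:
        return float("nan")
    return sum(1 for _, y in base if y > y_thr) / len(base)

p_cond = cond_exceedance(15.0, 15.0)
p_marg = sum(1 for _, y in pairs if y > 15.0) / len(pairs)
print(f"Pr(Y>15 | X>15) = {p_cond:.3f}  vs unconditional Pr(Y>15) = {p_marg:.3f}")
```

    In practice few pairs fall in the joint tail, which is exactly why the paper fits a parametric bivariate tail model instead of relying on raw counts like these.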

  12. Real-Time CME Forecasting Using HMI Active-Region Magnetograms and Flare History

    NASA Technical Reports Server (NTRS)

    Falconer, David; Moore, Ron; Barghouty, Abdulnasser F.; Khazanov, Igor

    2011-01-01

    We have recently developed a method of predicting an active region s probability of producing a CME, an X-class Flare, an M-class Flare, or a Solar Energetic Particle Event from a free-energy proxy measured from SOHO/MDI line-of-sight magnetograms. This year we have added three major improvements to our forecast tool: 1) Transition from MDI magnetogram to SDO/HMI magnetogram allowing us near-real-time forecasts, 2) Automation of acquisition and measurement of HMI magnetograms giving us near-real-time forecasts (no older than 2 hours), and 3) Determination of how to improve forecast by using the active region s previous flare history in combination with its free-energy proxy. HMI was turned on in May 2010 and MDI was turned off in April 2011. Using the overlap period, we have calibrated HMI to yield what MDI would measure. This is important since the value of the free-energy proxy used for our forecast is resolution dependent, and the forecasts are made from results of a 1996-2004 database of MDI observations. With near-real-time magnetograms from HMI, near-real-time forecasts are now possible. We have augmented the code so that it continually acquires and measures new magnetograms as they become available online, and updates the whole-sun forecast from the coming day. The next planned improvement is to use an active region s previous flare history, in conjunction with its free-energy proxy, to forecast the active region s event rate. It has long been known that active regions that have produced flares in the past are likely to produce flares in the future, and that active regions that are nonpotential (have large free-energy) are more likely to produce flares in the future. This year we have determined that persistence of flaring is not just a reflection of an active region s free energy. In other words, after controlling for free energy, we have found that active regions that have flared recently are more likely to flare in the future.

  13. Verification of an ensemble prediction system for storm surge forecast in the Adriatic Sea

    NASA Astrophysics Data System (ADS)

    Mel, Riccardo; Lionello, Piero

    2014-12-01

    In the Adriatic Sea, storm surges present a significant threat to Venice and to the flat coastal areas of the northern coast of the basin. Sea level forecast is of paramount importance for the management of daily activities and for operating the movable barriers that are presently being built for the protection of the city. In this paper, an EPS (ensemble prediction system) for operational forecasting of storm surge in the northern Adriatic Sea is presented and applied to a 3-month-long period (October-December 2010). The sea level EPS is based on the HYPSE (hydrostatic Padua Sea elevation) model, which is a standard single-layer nonlinear shallow water model, whose forcings (mean sea level pressure and surface wind fields) are provided by the ensemble members of the ECMWF (European Center for Medium-Range Weather Forecasts) EPS. Results are verified against observations at five tide gauges located along the Croatian and Italian coasts of the Adriatic Sea. Forecast uncertainty increases with the predicted value of the storm surge and with the forecast lead time. The EMF (ensemble mean forecast) provided by the EPS has a rms (root mean square) error lower than the DF (deterministic forecast), especially for short (up to 3 days) lead times. Uncertainty for short lead times of the forecast and for small storm surges is mainly caused by uncertainty of the initial condition of the hydrodynamical model. Uncertainty for large lead times and large storm surges is mainly caused by uncertainty in the meteorological forcings. The EPS spread increases with the rms error of the forecast. For large lead times the EPS spread and the forecast error substantially coincide. However, the EPS spread in this study, which does not account for uncertainty in the initial condition, underestimates the error during the early part of the forecast and for small storm surge values. On the contrary, it overestimates the rms error for large surge values. 
The PF (probability forecast) of the EPS has a clear skill in predicting the actual probability distribution of sea level, and it outperforms simple "dressed" PF methods. A probability estimate based on the single DF is shown to be inadequate. However, a PF obtained with a prescribed Gaussian distribution and centered on the DF value performs very similarly to the EPS-based PF.
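
    The spread-error relationship discussed in this record can be checked on a toy ensemble: for a statistically consistent ensemble, the mean ensemble spread should match the rms error of the ensemble mean. The sketch below uses 51 members, as in the ECMWF EPS, but all surge numbers and error magnitudes are synthetic.

```python
import math
import random

random.seed(9)

MEMBERS, CASES, SIGMA = 51, 400, 0.25

spread_sq, err_sq = [], []
for _ in range(CASES):
    center = random.gauss(0.0, 1.0)               # model's best surge estimate (m)
    truth = center + random.gauss(0.0, SIGMA)     # reality: one more draw from the forecast pdf
    ens = [center + random.gauss(0.0, SIGMA) for _ in range(MEMBERS)]
    mean = sum(ens) / MEMBERS
    spread_sq.append(sum((e - mean) ** 2 for e in ens) / (MEMBERS - 1))
    err_sq.append((mean - truth) ** 2)

mean_spread = math.sqrt(sum(spread_sq) / CASES)
rmse_mean = math.sqrt(sum(err_sq) / CASES)
print(f"mean spread: {mean_spread:.3f} m, rmse of ensemble mean: {rmse_mean:.3f} m")
```

    The under- and over-estimation reported in the abstract correspond to this equality failing in opposite directions for small and large surges.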

  14. International Aftershock Forecasting: Lessons from the Gorkha Earthquake

    NASA Astrophysics Data System (ADS)

    Michael, A. J.; Blanpied, M. L.; Brady, S. R.; van der Elst, N.; Hardebeck, J.; Mayberry, G. C.; Page, M. T.; Smoczyk, G. M.; Wein, A. M.

    2015-12-01

    Following the M7.8 Gorkha, Nepal, earthquake of April 25, 2015, the USGS issued a series of aftershock forecasts. The initial impetus for these forecasts was a request from the USAID Office of US Foreign Disaster Assistance to support its Disaster Assistance Response Team (DART), which coordinated US Government disaster response, including search and rescue, with the Government of Nepal. Because of the possible utility of the forecasts to people in the region and other response teams, the USGS released these forecasts publicly through the USGS Earthquake Program web site. The initial forecast used the Reasenberg and Jones (Science, 1989) model with generic parameters developed for active deep continental regions based on the Garcia et al. (BSSA, 2012) tectonic regionalization. These were then updated to reflect a lower productivity and higher decay rate based on the observed aftershocks, although relying on teleseismic observations, with a high magnitude of completeness, limited the amount of data. After the 12 May M7.3 aftershock, the forecasts used an Epidemic Type Aftershock Sequence model to better characterize the multiple sources of earthquake clustering. This model provided better estimates of aftershock uncertainty. These forecast messages were crafted based on lessons learned from the Christchurch earthquake, along with input from the U.S. Embassy staff in Kathmandu. Challenges included how to balance simple messaging with forecasts over a variety of time periods (week, month, and year), whether to characterize probabilities with words such as those suggested by the IPCC (IPCC, 2010), how to word the messages in a way that would translate accurately into Nepali and not alarm the public, and how to present the probabilities of unlikely but possible large and potentially damaging aftershocks, such as the M7.3 event, which had an estimated probability of only 1-in-200 for the week in which it occurred.
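
    The Reasenberg and Jones (1989) model used for the initial forecasts gives an aftershock rate of the form rate(t, M >= Mc) = 10^(a + b(Mmain - Mc)) (t + c)^(-p), from which a Poisson probability of at least one qualifying aftershock follows. The parameter values below are illustrative only, not the Garcia et al. (2012) generic values actually used, so the printed probabilities will not reproduce the published 1-in-200 figure.

```python
import math

# Illustrative Reasenberg-Jones parameters (NOT the USGS Nepal values).
a, b, c, p = -1.67, 0.91, 0.05, 1.08
M_MAIN = 7.8

def expected_count(m_min, t1, t2):
    """Expected number of aftershocks with M >= m_min in the window [t1, t2] days."""
    k = 10 ** (a + b * (M_MAIN - m_min))
    if abs(p - 1.0) < 1e-9:
        integral = math.log((t2 + c) / (t1 + c))
    else:
        integral = ((t2 + c) ** (1 - p) - (t1 + c) ** (1 - p)) / (1 - p)
    return k * integral

def prob_at_least_one(m_min, t1, t2):
    """Poisson probability of one or more qualifying aftershocks."""
    return 1.0 - math.exp(-expected_count(m_min, t1, t2))

# One-week windows starting one day after the mainshock.
print(f"P(M>=7.3, days 1-8) = {prob_at_least_one(7.3, 1.0, 8.0):.4f}")
print(f"P(M>=5.0, days 1-8) = {prob_at_least_one(5.0, 1.0, 8.0):.4f}")
```

    Updating a, b, c and p from the observed sequence, as described in the abstract, changes these probabilities directly through the rate integral.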

  15. Space Weather Forecasting at IZMIRAN

    NASA Astrophysics Data System (ADS)

    Gaidash, S. P.; Belov, A. V.; Abunina, M. A.; Abunin, A. A.

    2017-12-01

    Since 1998, the Institute of Terrestrial Magnetism, Ionosphere, and Radio Wave Propagation (IZMIRAN) has had an operating heliogeophysical service—the Center for Space Weather Forecasts. This center transfers the results of basic research in solar-terrestrial physics into daily forecasting of various space weather parameters for various lead times. The forecasts are promptly available to interested consumers. This article describes the center and the main types of forecasts it provides: solar and geomagnetic activity, magnetospheric electron fluxes, and probabilities of proton increases. The challenges associated with the forecasting of effects of coronal mass ejections and coronal holes are discussed. Verification data are provided for the center's forecasts.

  16. Update to the Lightning Probability Forecast Equations at Kennedy Space Center/Cape Canaveral Air Force Station, Florida

    NASA Technical Reports Server (NTRS)

    Lambert, Winifred; Roeder, William

    2007-01-01

    This conference presentation describes the improvement of a set of lightning probability forecast equations that are used by the 45th Weather Squadron forecasters for their daily 1100 UTC (0700 EDT) weather briefing during the warm season months of May-September. This information is used for general scheduling of operations at Cape Canaveral Air Force Station and Kennedy Space Center. Forecasters at the Spaceflight Meteorology Group also make thunderstorm forecasts during Shuttle flight operations. Five modifications were made by the Applied Meteorology Unit: increasing the period of record from 15 to 17 years, changing the method of calculating the flow regime of the day, calculating a new optimal layer relative humidity, using a new smoothing technique for the daily climatology, and using a new valid area. The test results indicated that the modified equations showed an increase in skill over the current equations, good reliability, and an ability to distinguish between lightning and non-lightning days.
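
    The abstract does not give the functional form of the 45 WS equations. A common choice for daily lightning probability from predictors such as flow regime and a layer relative humidity is logistic regression, sketched below with entirely invented coefficients and predictor names; this is a hypothetical illustration, not the operational equations.

```python
import math

# Hypothetical logistic model: P(lightning) from layer RH and a flow-regime flag.
COEF = {"intercept": -2.1, "rh_layer": 0.045, "flow_regime_se": 1.3}

def lightning_probability(rh_layer_pct, southeast_flow):
    """Daily lightning probability from a hypothetical logistic regression."""
    z = (COEF["intercept"]
         + COEF["rh_layer"] * rh_layer_pct
         + COEF["flow_regime_se"] * (1.0 if southeast_flow else 0.0))
    return 1.0 / (1.0 + math.exp(-z))

print(f"moist SE-flow day: {lightning_probability(75, True):.2f}")
print(f"dry non-SE day:    {lightning_probability(40, False):.2f}")
```

    The modifications listed in the abstract (longer period of record, revised flow-regime calculation, new optimal-layer RH) would enter a model like this through refitted coefficients and redefined predictors.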

  17. Update to the Lightning Probability Forecast Equations at Kennedy Space Center/Cape Canaveral Air Force Station, Florida

    NASA Technical Reports Server (NTRS)

    Lambert, Winifred; Roeder, William

    2007-01-01

    This conference presentation describes the improvement of a set of lightning probability forecast equations that are used by the 45th Weather Squadron forecasters for their daily 1100 UTC (0700 EDT) weather briefing during the warm season months of May-September. This information is used for general scheduling of operations at Cape Canaveral Air Force Station and Kennedy Space Center. Forecasters at the Spaceflight Meteorology Group also make thunderstorm forecasts during Shuttle flight operations. Five modifications were made by the Applied Meteorology Unit: increasing the period of record from 15 to 17 years, changing the method of calculating the flow regime of the day, calculating a new optimal layer relative humidity, using a new smoothing technique for the daily climatology, and using a new valid area. The test results indicated that the modified equations showed an increase in skill over the current equations, good reliability, and an ability to distinguish between lightning and non-lightning days.

  18. Forecast Tools for Alaska River Ice Breakup Timing and Severity

    NASA Astrophysics Data System (ADS)

    Moran, E. H.; Lindsey, S.; van Breukelen, C. M.; Thoman, R.

    2016-12-01

    Spring breakup on the large interior rivers of Alaska is a time of nervous anticipation for many residents of the villages alongside those rivers. On the Yukon and Kuskokwim Rivers, the record flood for most villages occurred as a result of ice jams that backed up water and pushed dump-truck-sized ice floes into the villages. Those floods can occur suddenly and can literally wipe out a village. The difficulty is that with a limited observation network (3 automated USGS gages along the 1200 miles of the Yukon River flowing through Alaska) and the inherently transient nature of ice jam formation, prediction of the timing and severity of these events has been a tremendous challenge. Staff at the Alaska-Pacific River Forecast Center, as well as the Alaska Region Climate Program Manager, have been developing more quantitative tools to attempt to provide a longer lead time for villages to prepare for potentially devastating flooding. In the past, a very qualitative assessment of the primary drivers of spring breakup (snowpack, river ice thickness and forecast spring weather) has led to the successful identification of years when flood severity was likely to be elevated or significantly decreased. These qualitative assessments have also allowed the forecasting of the probability of either a thermal or a dynamic breakup. But there has continued to be a need for an objective tool that can handle weather patterns that border on the tails of the climatic distributions, as well as the timing and flood potential from weather patterns that are closer to the median of the distribution. Over the past 8 years there have been a significant number of years with anomalous spring weather patterns, including cold springs followed by rapid warm-ups leading to record flooding from ice jams during spring breakup (2009, 2013), record late breakup (2013), record early breakup (2016), record high snowfall (2012), record snowmelt and aufeis flooding (2015) and record low snowfall (2015). 
The need for improved tools that can handle these events over the full breadth of the distribution has never been greater. This talk will describe efforts to incorporate climate signals into the spring breakup outlook and show results of some temperature based indices as an indicator of breakup timing.
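
    One simple temperature-based index of the kind mentioned at the end of this record is accumulated thawing degree days (ATDD): the running sum of daily mean temperatures above 0 C from a fixed start date, with breakup flagged when the sum crosses a calibrated threshold. The threshold and the linear warm-up curve below are invented for illustration, not calibrated values from the talk.

```python
from datetime import date, timedelta

START = date(2016, 4, 1)
ATDD_THRESHOLD = 100.0   # degree-days C, hypothetical calibration

def daily_mean_temp_c(d):
    """Toy spring warm-up for an interior Alaska site: -5 C on 1 April, rising 0.45 C/day."""
    return -5.0 + 0.45 * (d - START).days

def predicted_breakup(threshold=ATDD_THRESHOLD):
    """First date on which accumulated thawing degree days reach the threshold."""
    acc, d = 0.0, START
    while acc < threshold:
        acc += max(daily_mean_temp_c(d), 0.0)
        d += timedelta(days=1)
    return d

print("predicted breakup date:", predicted_breakup())
```

    A rapid warm-up steepens the temperature curve and pulls the crossing date earlier, which is how such an index distinguishes thermal from dynamic breakup years.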

  19. An overview of health forecasting.

    PubMed

    Soyiri, Ireneous N; Reidpath, Daniel D

    2013-01-01

    Health forecasting is a novel area of forecasting, and a valuable tool for predicting future health events or situations such as demands for health services and healthcare needs. It facilitates preventive medicine and health care intervention strategies by pre-informing health service providers so they can take appropriate mitigating actions to minimize risks and manage demand. Health forecasting requires reliable data, information and appropriate analytical tools for the prediction of specific health conditions or situations. There is no single approach to health forecasting, and various methods have been adopted to forecast aggregate or specific health conditions. Meanwhile, there are no defined health forecasting horizons (time frames) to match the health forecasting methods and approaches that are often applied. The key principles of health forecasting have also not been adequately described to guide the process. This paper provides a brief introduction and theoretical analysis of health forecasting. It describes the key issues that are important for health forecasting, including definitions, principles of health forecasting, and the properties of health data, which influence the choices of health forecasting methods. Other matters related to the value of health forecasting, and the general challenges associated with developing and using health forecasting services, are discussed. This overview is a stimulus for further discussions on standardizing health forecasting approaches and methods that will facilitate health care and health services delivery.

  20. Open-source Software for Demand Forecasting of Clinical Laboratory Test Volumes Using Time-series Analysis.

    PubMed

    Mohammed, Emad A; Naugler, Christopher

    2017-01-01

    Demand forecasting is the area of predictive analytics devoted to predicting future volumes of services or consumables. Fair understanding and estimation of how demand will vary facilitates the optimal utilization of resources. In a medical laboratory, accurate forecasting of future demand, that is, test volumes, can increase efficiency and facilitate long-term laboratory planning. Importantly, in an era of utilization management initiatives, accurately predicted volumes compared to the realized test volumes can form a precise way to evaluate utilization management initiatives. Laboratory test volumes are often highly amenable to forecasting by time-series models; however, the statistical software needed to do this is generally either expensive or highly technical. In this paper, we describe an open-source web-based software tool for time-series forecasting and explain how to use it as a demand forecasting tool in clinical laboratories to estimate test volumes. This tool has three different models, that is, Holt-Winters multiplicative, Holt-Winters additive, and simple linear regression. Moreover, these models are ranked and the best one is highlighted. This tool will allow anyone with historic test volume data to model future demand.
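
    The tool in this record fits three models and ranks them; its exact implementation is not given in the abstract. The sketch below reproduces the simplest of the three (simple linear regression on monthly test volumes) together with a seasonal-naive baseline (not one of the tool's models; the two Holt-Winters variants are omitted), ranked by mean absolute percentage error on a 12-month holdout. All volume numbers are synthetic.

```python
def fit_linear(ts):
    """Ordinary least-squares line through (t, volume) pairs; returns a predictor."""
    n = len(ts)
    xbar, ybar = (n - 1) / 2, sum(ts) / n
    sxx = sum((t - xbar) ** 2 for t in range(n))
    slope = sum((t - xbar) * (y - ybar) for t, y in enumerate(ts)) / sxx
    intercept = ybar - slope * xbar
    return lambda t: intercept + slope * t

def mape(actual, predicted):
    """Mean absolute percentage error."""
    return sum(abs(a - p) / a for a, p in zip(actual, predicted)) / len(actual) * 100

# Synthetic monthly volumes: linear growth plus a winter bump.
history = [1000 + 12 * t + (80 if t % 12 in (0, 1, 11) else 0) for t in range(48)]
train, holdout = history[:36], history[36:]

linear = fit_linear(train)
forecasts = {
    "linear regression": [linear(t) for t in range(36, 48)],
    "seasonal naive": [train[t - 12] for t in range(36, 48)],  # last year's value
}
ranking = sorted((mape(holdout, f), name) for name, f in forecasts.items())
for err, name in ranking:
    print(f"{name}: MAPE = {err:.2f}%")
print("best model:", ranking[0][1])
```

    Ranking candidate models on held-out months, rather than on the fitting period, is what lets a tool like this highlight the best model honestly.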

  1. Open-source Software for Demand Forecasting of Clinical Laboratory Test Volumes Using Time-series Analysis

    PubMed Central

    Mohammed, Emad A.; Naugler, Christopher

    2017-01-01

    Background: Demand forecasting is the area of predictive analytics devoted to predicting future volumes of services or consumables. Fair understanding and estimation of how demand will vary facilitates the optimal utilization of resources. In a medical laboratory, accurate forecasting of future demand, that is, test volumes, can increase efficiency and facilitate long-term laboratory planning. Importantly, in an era of utilization management initiatives, accurately predicted volumes compared to the realized test volumes can form a precise way to evaluate utilization management initiatives. Laboratory test volumes are often highly amenable to forecasting by time-series models; however, the statistical software needed to do this is generally either expensive or highly technical. Method: In this paper, we describe an open-source web-based software tool for time-series forecasting and explain how to use it as a demand forecasting tool in clinical laboratories to estimate test volumes. Results: This tool has three different models, that is, Holt-Winters multiplicative, Holt-Winters additive, and simple linear regression. Moreover, these models are ranked and the best one is highlighted. Conclusion: This tool will allow anyone with historic test volume data to model future demand. PMID:28400996

  2. An investigation into incident duration forecasting for FleetForward

    DOT National Transportation Integrated Search

    2000-08-01

    Traffic condition forecasting is the process of estimating future traffic conditions based on current and archived data. Real-time forecasting is becoming an important tool in Intelligent Transportation Systems (ITS). This type of forecasting allows ...

  3. Testing hypotheses of earthquake occurrence

    NASA Astrophysics Data System (ADS)

    Kagan, Y. Y.; Jackson, D. D.; Schorlemmer, D.; Gerstenberger, M.

    2003-12-01

    We present a relatively straightforward likelihood method for testing those earthquake hypotheses that can be stated as vectors of earthquake rate density in defined bins of area, magnitude, and time. We illustrate the method as it will be applied to the Regional Earthquake Likelihood Models (RELM) project of the Southern California Earthquake Center (SCEC). Several earthquake forecast models are being developed as part of this project, and additional contributed forecasts are welcome. Various models are based on fault geometry and slip rates, seismicity, geodetic strain, and stress interactions. We would test models in pairs, requiring that both forecasts in a pair be defined over the same set of bins. Thus we offer a standard "menu" of bins and ground rules to encourage standardization. One menu category includes five-year forecasts of magnitude 5.0 and larger. Forecasts would be in the form of a vector of yearly earthquake rates on a 0.05 degree grid at the beginning of the test. Focal mechanism forecasts, when available, would also be archived and used in the tests. The five-year forecast category may be appropriate for testing hypotheses of stress shadows from large earthquakes. Interim progress will be evaluated yearly, but final conclusions would be made on the basis of cumulative five-year performance. The second category includes forecasts of earthquakes above magnitude 4.0 on a 0.05 degree grid, evaluated and renewed daily. Final evaluation would be based on cumulative performance over five years. Other types of forecasts with different magnitude, space, and time sampling are welcome and will be tested against other models with shared characteristics. All earthquakes would be counted, and no attempt made to separate foreshocks, main shocks, and aftershocks. Earthquakes would be considered as point sources located at the hypocenter. 
    For each pair of forecasts, we plan to compute alpha, the probability that the first would be wrongly rejected in favor of the second, and beta, the probability that the second would be wrongly rejected in favor of the first. Computing alpha and beta requires knowing the theoretical distribution of likelihood scores under each hypothesis, which we will estimate by simulations. Each forecast is given equal status; there is no "null hypothesis" which would be accepted by default. Forecasts and test results would be archived and posted on the RELM web site. The same methods can be applied to any region with adequate monitoring and sufficient earthquakes. If fewer than ten events are forecasted, the likelihood tests may not give definitive results. The tests do force certain requirements on the forecast models. Because the tests are based on absolute rates, stress models must be explicit about how stress increments affect past seismicity rates. Aftershocks of triggered events must be accounted for. Furthermore, the tests are sensitive to magnitude, so forecast models must specify the magnitude distribution of triggered events. Models should account for probable errors in magnitude and location by appropriate smoothing of the probabilities, as the tests will be "cold-hearted": near misses won't count.
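
    Gridded rate forecasts of this kind are commonly scored with a Poisson log-likelihood, L = sum_i [n_i log(lambda_i) - lambda_i - log(n_i!)], and the simulation-based comparison described in the abstract can be sketched by simulating catalogs and counting how often one forecast out-scores the other. The grid, rates and both toy forecasts below are invented, and this single win-count is a simplification of the full alpha/beta calculation.

```python
import math
import random

random.seed(3)

def poisson_loglik(rates, counts):
    """Poisson log-likelihood of observed bin counts given forecast rates."""
    return sum(n * math.log(lam) - lam - math.lgamma(n + 1)
               for lam, n in zip(rates, counts))

# "True" rates on a small grid, and two competing forecasts.
true_rates = [0.02, 0.10, 0.50, 1.20, 0.30, 0.05]
forecast_a = [0.03, 0.12, 0.45, 1.10, 0.25, 0.05]   # close to the truth
forecast_b = [0.30, 0.30, 0.30, 0.30, 0.30, 0.30]   # uniform, poorly informed

def simulate_catalog(rates):
    """Draw Poisson counts per bin by CDF inversion."""
    counts = []
    for lam in rates:
        n, p = 0, math.exp(-lam)
        acc, u = p, random.random()
        while u > acc:
            n += 1
            p *= lam / n
            acc += p
        counts.append(n)
    return counts

trials, wins_a = 2000, 0
for _ in range(trials):
    obs = simulate_catalog(true_rates)
    if poisson_loglik(forecast_a, obs) > poisson_loglik(forecast_b, obs):
        wins_a += 1
print(f"forecast A preferred in {wins_a}/{trials} simulated catalogs")
```

    The distribution of the likelihood-score difference over such simulations is what yields the alpha and beta error probabilities described in the abstract.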

  4. Development of a flood early warning system and communication with end-users: the Vipava/Vipacco case study in the KULTURisk FP7 project

    NASA Astrophysics Data System (ADS)

    Grossi, Giovanna; Caronna, Paolo; Ranzi, Roberto

    2014-05-01

    Within the framework of risk communication, the goal of an early warning system is to support the interaction between technicians and authorities (and subsequently the population) as a prevention measure. The methodology proposed in the KULTURisk FP7 project aimed to build a closer collaboration between these actors, with a view to promoting pro-active actions to mitigate the effects of flood hazards. The transnational (Slovenia/Italy) Soča/Isonzo case study focused on this concept of cooperation between stakeholders and hydrological forecasters. The DIMOSHONG_VIP hydrological model was calibrated for the Vipava/Vipacco River (650 km2), a tributary of the Soča/Isonzo River, on the basis of flood events that occurred between 1998 and 2012. The European Centre for Medium-Range Weather Forecasts (ECMWF) provided the past meteorological forecasts, both deterministic (1 forecast) and probabilistic (51 ensemble members). The resolution of the ECMWF grid is currently about 15 km (deterministic, DET) and 30 km (Ensemble Prediction System, EPS). A verification was conducted to validate the flood-forecast outputs of the DIMOSHONG_VIP+ECMWF early warning system. Basic descriptive statistics, such as event probability, probability of a forecast occurrence and frequency bias, were determined. Performance measures were calculated, such as hit rate (probability of detection) and false alarm rate (probability of false detection). Relative Operating Characteristic (ROC) curves were generated for both deterministic and probabilistic forecasts. These analyses showed good performance of the early warning system, given the small size of the sample. Particular attention was paid to the design of flood-forecasting output charts, involving and surveying stakeholders (Alto Adriatico River Basin Authority), hydrology specialists in the field, and the general public. Graph types for both forecasted precipitation and discharge were set. 
    Three different risk thresholds were identified ("attention", "pre-alarm" or "alert", and "alarm"), with an "icon-style" representation suitable for communication to civil protection stakeholders or the public. Aiming to show probabilistic representations in a "user-friendly" way, we opted for the visualization of the single deterministic forecasted hydrograph together with the 5%, 25%, 50%, 75% and 95% percentile bands of the Hydrological Ensemble Prediction System (HEPS). HEPS is generally used for 3-5 day hydrological forecasts, for which the error due to incorrect initial data is comparable to the error due to its lower resolution with respect to the deterministic forecast. In short-term forecasting (12-48 hours) the HEPS members obviously show a similar tendency; in this case, considering its higher resolution, the deterministic forecast is expected to be more effective. Plotting the different forecasts in the same chart allows the use of model outputs from 4-5 days to a few hours before a potential flood event. This framework was built to help a stakeholder, such as a mayor or a civil protection authority, in flood control and management operations, and was designed to be included in a wider decision support system.
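
    The verification statistics named in this record (event probability, hit rate, false alarm rate, frequency bias) all derive from a 2x2 contingency table of forecast versus observed threshold exceedances. The counts below are invented, not the case-study values.

```python
# 2x2 contingency table of threshold exceedances (invented counts).
hits, misses, false_alarms, correct_negatives = 18, 4, 6, 152

def verification_scores(h, m, fa, cn):
    return {
        "hit rate (POD)": h / (h + m),                 # probability of detection
        "false alarm rate (POFD)": fa / (fa + cn),     # probability of false detection
        "frequency bias": (h + fa) / (h + m),          # forecast freq / observed freq
        "event probability": (h + m) / (h + m + fa + cn),
    }

scores = verification_scores(hits, misses, false_alarms, correct_negatives)
for name, value in scores.items():
    print(f"{name}: {value:.3f}")
```

    Sweeping a probability threshold over the ensemble output and plotting POD against POFD at each threshold is what produces the ROC curves mentioned in the abstract.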

  5. Precipitable water vapour forecasting: a tool for optimizing IR observations at Roque de los Muchachos Observatory

    NASA Astrophysics Data System (ADS)

    Pérez-Jordán, G.; Castro-Almazán, J. A.; Muñoz-Tuñón, C.

    2018-07-01

    We validate the Weather Research and Forecasting (WRF) model for precipitable water vapour (PWV) forecasting as a fully operational tool for optimizing astronomical infrared observations at Roque de los Muchachos Observatory (ORM). For the model validation, we used GNSS-based (Global Navigation Satellite System) data from the PWV monitor located at the ORM. We have run WRF every 24 h for nearly two months, with a horizon of 48 h (hourly forecasts), from 2016 January 11 to March 4. These runs represent 1296 hourly forecast points. The validation is carried out using different approaches: performance as a function of the forecast range, time horizon accuracy, performance as a function of the PWV value, and performance of the operational WRF time series with 24- and 48-h horizons. Excellent agreement was found between the model forecasts and observations, with R = 0.951 and 0.904 for the 24- and 48-h forecast time series, respectively. The 48-h forecast was further improved by correcting a time lag of 2 h found in the predictions. The final errors, taking into account all the uncertainties involved, are 1.75 mm for the 24-h forecasts and 1.99 mm for 48 h. We found linear trends in both the correlation and root-mean-square error of the residuals (measurements - forecasts) as a function of the forecast range within the horizons analysed (up to 48 h). In summary, the WRF performance is excellent and accurate, thus allowing it to be implemented as an operational tool at the ORM.

  6. Precipitable water vapour forecasting: a tool for optimizing IR observations at Roque de los Muchachos Observatory.

    NASA Astrophysics Data System (ADS)

    Pérez-Jordán, G.; Castro-Almazán, J. A.; Muñoz-Tuñón, C.

    2018-04-01

    We validate the Weather Research and Forecasting (WRF) model for precipitable water vapour (PWV) forecasting as a fully operational tool for optimizing astronomical infrared (IR) observations at Roque de los Muchachos Observatory (ORM). For the model validation we used GNSS-based (Global Navigation Satellite System) data from the PWV monitor located at the ORM. We have run WRF every 24 h for nearly two months, with a horizon of 48 hours (hourly forecasts), from 2016 January 11 to 2016 March 4. These runs represent 1296 hourly forecast points. The validation is carried out using different approaches: performance as a function of the forecast range, time horizon accuracy, performance as a function of the PWV value, and performance of the operational WRF time series with 24- and 48-hour horizons. Excellent agreement was found between the model forecasts and observations, with R = 0.951 and R = 0.904 for the 24- and 48-h forecast time series, respectively. The 48-h forecast was further improved by correcting a time lag of 2 h found in the predictions. The final errors, taking into account all the uncertainties involved, are 1.75 mm for the 24-h forecasts and 1.99 mm for 48 h. We found linear trends in both the correlation and RMSE of the residuals (measurements - forecasts) as a function of the forecast range within the horizons analysed (up to 48 h). In summary, the WRF performance is excellent and accurate, thus allowing it to be implemented as an operational tool at the ORM.

  7. Perceptions of disease risk: from social construction of subjective judgments to rational decision making.

    PubMed

    McRoberts, N.; Hall, C.; Madden, L. V.; Hughes, G.

    2011-06-01

    Many factors influence how people form risk perceptions. Farmers' perceptions of risk and levels of risk aversion impact on decision-making about such things as technology adoption and disease management practices. Irrespective of the underlying factors that affect risk perceptions, those perceptions can be summarized by variables capturing impact and uncertainty components of risk. We discuss a new framework that has the subjective probability of disease and the cost of decision errors as its central features, which might allow a better integration of social science and epidemiology, to the benefit of plant disease management. By focusing on the probability and cost (or impact) dimensions of risk, the framework integrates research from the social sciences, economics, decision theory, and epidemiology. In particular, we review some useful properties of expected regret and skill value, two measures of expected cost that are particularly useful in the evaluation of decision tools. We highlight decision-theoretic constraints on the usefulness of decision tools that may partly explain cases of failure of adoption. We extend this analysis by considering information-theoretic criteria that link model complexity and relative performance and which might explain why users reject forecasters that impose even moderate increases in the complexity of decision making despite improvements in performance or accept very simple decision tools that have relatively poor performance.
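The expected-regret comparison of decision rules described in this record can be sketched numerically. All probabilities, costs, and rule names below are hypothetical illustrations, not values from the paper:

```python
# Comparing disease-management decision rules by expected regret:
# expected cost of the rule minus the expected cost with perfect foresight.
# Probabilities and costs are hypothetical.
p_disease = 0.3          # subjective probability of disease
cost_treat = 10.0        # cost of applying treatment
cost_loss = 50.0         # loss if disease occurs untreated

def expected_cost(treat_if, p):
    # treat_if: decision rule mapping probability p -> treat (True/False)
    if treat_if(p):
        return cost_treat
    return p * cost_loss

# Oracle with perfect foresight pays the treatment cost only when disease occurs
perfect = p_disease * min(cost_treat, cost_loss)

rules = {
    "always treat": lambda p: True,
    "never treat": lambda p: False,
    "treat if p exceeds cost ratio": lambda p: p > cost_treat / cost_loss,
}
regret = {name: expected_cost(rule, p_disease) - perfect
          for name, rule in rules.items()}
```

A decision tool only earns adoption if its expected regret undercuts such trivial rules, which is the decision-theoretic constraint the abstract highlights.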

  8. q-Gaussian distributions of leverage returns, first stopping times, and default risk valuations

    NASA Astrophysics Data System (ADS)

    Katz, Yuri A.; Tian, Li

    2013-10-01

    We study the probability distributions of daily leverage returns of 520 North American industrial companies that survive de-listing during the financial crisis, 2006-2012. We provide evidence that distributions of unbiased leverage returns of all individual firms belong to the class of q-Gaussian distributions with the Tsallis entropic parameter within the interval 1

  9. Prospective Tests of Southern California Earthquake Forecasts

    NASA Astrophysics Data System (ADS)

    Jackson, D. D.; Schorlemmer, D.; Gerstenberger, M.; Kagan, Y. Y.; Helmstetter, A.; Wiemer, S.; Field, N.

    2004-12-01

    We are testing earthquake forecast models prospectively using likelihood ratios. Several investigators have developed such models as part of the Southern California Earthquake Center's project called Regional Earthquake Likelihood Models (RELM). Various models are based on fault geometry and slip rates, seismicity, geodetic strain, and stress interactions. Here we describe the testing procedure and present preliminary results. Forecasts are expressed as the yearly rate of earthquakes within pre-specified bins of longitude, latitude, magnitude, and focal mechanism parameters. We test models against each other in pairs, which requires that both forecasts in a pair be defined over the same set of bins. For this reason we specify a standard "menu" of bins and ground rules to guide forecasters in using common descriptions. One menu category includes five-year forecasts of magnitude 5.0 and larger. Contributors will be requested to submit forecasts in the form of a vector of yearly earthquake rates on a 0.1 degree grid at the beginning of the test. Focal mechanism forecasts, when available, are also archived and used in the tests. Interim progress will be evaluated yearly, but final conclusions would be made on the basis of cumulative five-year performance. The second category includes forecasts of earthquakes above magnitude 4.0 on a 0.1 degree grid, evaluated and renewed daily. Final evaluation would be based on cumulative performance over five years. Other types of forecasts with different magnitude, space, and time sampling are welcome and will be tested against other models with shared characteristics. Tests are based on the log likelihood scores derived from the probability that future earthquakes would occur where they do if a given forecast were true [Kagan and Jackson, J. Geophys. Res.,100, 3,943-3,959, 1995]. 
For each pair of forecasts, we compute alpha, the probability that the first would be wrongly rejected in favor of the second, and beta, the probability that the second would be wrongly rejected in favor of the first. Computing alpha and beta requires knowing the theoretical distribution of likelihood scores under each hypothesis, which we estimate by simulations. In this scheme, each forecast is given equal status; there is no "null hypothesis" which would be accepted by default. Forecasts and test results will be archived and posted on the RELM web site. Major problems under discussion include how to treat aftershocks, which clearly violate the variable-rate Poissonian hypotheses that we employ, and how to deal with the temporal variations in catalog completeness that follow large earthquakes.
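The simulation-based estimate of alpha described above can be sketched for a pair of Poisson rate forecasts. The bins and rates below are hypothetical, not RELM submissions:

```python
# Sketch of the paired likelihood test: estimate alpha, the probability of
# wrongly rejecting forecast A in favor of B, by simulating earthquake
# catalogs under A. Rates and bins are hypothetical.
import math
import random

random.seed(0)
rate_a = [0.20, 0.05, 0.10, 0.40]  # forecast A: expected events per bin
rate_b = [0.10, 0.10, 0.25, 0.30]  # forecast B: expected events per bin

def poisson_sample(lam):
    # Knuth's multiplicative algorithm; adequate for the small rates used here
    limit, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= limit:
            return k
        k += 1

def log_likelihood(counts, rates):
    # Poisson log-likelihood up to a counts-only constant (cancels in the ratio)
    return sum(n * math.log(r) - r for n, r in zip(counts, rates))

# Distribution of the likelihood-score difference when A generates the data
diffs = []
for _ in range(5000):
    catalog = [poisson_sample(r) for r in rate_a]
    diffs.append(log_likelihood(catalog, rate_a) - log_likelihood(catalog, rate_b))

# alpha: how often A scores worse than B even though A is true
alpha = sum(d < 0.0 for d in diffs) / len(diffs)
```

Swapping the roles of the two forecasts in the simulation loop gives beta, so neither forecast is privileged as a null hypothesis.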

  10. Accuracy of forecasts in strategic intelligence

    PubMed Central

    Mandel, David R.; Barnes, Alan

    2014-01-01

    The accuracy of 1,514 strategic intelligence forecasts abstracted from intelligence reports was assessed. The results show that both discrimination and calibration of forecasts was very good. Discrimination was better for senior (versus junior) analysts and for easier (versus harder) forecasts. Miscalibration was mainly due to underconfidence such that analysts assigned more uncertainty than needed given their high level of discrimination. Underconfidence was more pronounced for harder (versus easier) forecasts and for forecasts deemed more (versus less) important for policy decision making. Despite the observed underconfidence, there was a paucity of forecasts in the least informative 0.4–0.6 probability range. Recalibrating the forecasts substantially reduced underconfidence. The findings offer cause for tempered optimism about the accuracy of strategic intelligence forecasts and indicate that intelligence producers aim to promote informativeness while avoiding overstatement. PMID:25024176
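One common way to recalibrate underconfident probability judgments is to push them away from 0.5 by raising the odds to a power greater than one. This is a generic sketch of that idea; the paper's own recalibration procedure may differ, and the power value is hypothetical:

```python
# Extremizing transformation for underconfident probability forecasts:
# raise the odds to a power k > 1, pushing judgments away from 0.5.
def extremize(p, k=2.0):
    odds = p / (1.0 - p)
    new_odds = odds ** k
    return new_odds / (1.0 + new_odds)

forecasts = [0.2, 0.5, 0.7, 0.9]
recalibrated = [extremize(p) for p in forecasts]
```

The transformation leaves 0.5 fixed and sharpens everything else, which is the direction of adjustment the observed underconfidence calls for.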

  11. Upgrade Summer Severe Weather Tool in MIDDS

    NASA Technical Reports Server (NTRS)

    Wheeler, Mark M.

    2010-01-01

    The goal of this task was to upgrade the severe weather database from the previous phase by adding weather observations from the years 2004-2009, re-analyze the data to determine the important parameters, make adjustments to the index weights depending on the analysis results, and update the MIDDS GUI. The added data increased the period of record from 15 to 21 years. Data sources included local forecast rules, archived sounding data, surface and upper air maps, and two severe weather event databases covering east-central Florida. Four of the stability indices showed improved severe weather prediction. The Total Threat Score (TTS) of the previous work was verified for the warm season of 2009 with very good skill: the TTS Probability of Detection (POD) was 88% and the False Alarm Rate (FAR) was 8%. Based on the results of the analyses, the MIDDS Severe Weather Worksheet GUI was updated to assist the duty forecaster by providing a level of objective guidance based on the analysis of the stability parameters and synoptic-scale dynamics.

  12. Averaging business cycles vs. myopia: Do we need a long term vision when developing IRP?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McDonald, C.; Gupta, P.C.

    1995-05-01

    Utility demand forecasting is inherently imprecise due to the number of uncertainties resulting from business cycles, policy making, technology breakthroughs, national and international political upheavals, and the limitations of the forecasting tools. This implies that revisions based primarily on recent experience could lead to unstable forecasts. Moreover, new planning tools are required that provide an explicit consideration of uncertainty and lead to flexible and robust planning decisions.

  13. Applications of Principled Search Methods in Climate Influences and Mechanisms

    NASA Technical Reports Server (NTRS)

    Glymour, Clark

    2005-01-01

    Forest and grass fires cause economic losses in the billions of dollars in the U.S. alone. In addition, boreal forests constitute a large carbon store; it has been estimated that, were no burning to occur, an additional 7 gigatons of carbon would be sequestered in boreal soils each century. Effective wildfire suppression requires anticipation of locales and times for which wildfire is most probable, preferably with a two- to four-week forecast, so that limited resources can be efficiently deployed. The United States Forest Service (USFS), and other experts and agencies, have developed several measures of fire risk combining physical principles and expert judgment, and have used them in automated procedures for forecasting fire risk. Forecasting accuracies for some fire risk indices in combination with climate and other variables have been estimated for specific locations, with the value of fire risk index variables assessed by their statistical significance in regressions. In other cases, for example the MAPSS forecasts [23, 24], forecasting accuracy has been estimated only with simulated data. We describe alternative forecasting methods that predict fire probability by locale and time using statistical or machine learning procedures trained on historical data, and we give comparative assessments of their forecasting accuracy for one fire season, April-October 2003, for all U.S. Forest Service lands. Aside from providing an accuracy baseline for other forecasting methods, the results illustrate the interdependence between the statistical significance of prediction variables and the forecasting method used.

  14. Statistical and dynamical forecast of regional precipitation after mature phase of ENSO

    NASA Astrophysics Data System (ADS)

    Sohn, S.; Min, Y.; Lee, J.; Tam, C.; Ahn, J.

    2010-12-01

    While the seasonal predictability of general circulation models (GCMs) has improved, the current model atmosphere in the mid-latitudes does not respond correctly to external forcing such as tropical sea surface temperature (SST), particularly over the East Asia and western North Pacific summer monsoon regions. In addition, the time-scale of the prediction scope is considerably limited, and model forecast skill is still very poor beyond two weeks. Although recent studies indicate that coupled-model-based multi-model ensemble (MME) forecasts show better performance, long-lead forecasts exceeding 9 months still show a dramatic decrease in seasonal predictability. This study aims at diagnosing the dynamical MME forecasts comprised of state-of-the-art 1-tier models, as well as comparing them with statistical model forecasts, focusing on East Asian summer precipitation predictions after the mature phase of ENSO. The lagged impact of El Nino, as a major climate contributor, on the summer monsoon in model environments is also evaluated in the sense of conditional probabilities. To evaluate probability forecast skill, the reliability (attributes) diagram and the relative operating characteristics, following the recommendations of the World Meteorological Organization (WMO) Standardized Verification System for Long-Range Forecasts, are used in this study. The results should shed light on the prediction skill of the dynamical model and also of the statistical model in forecasting East Asian summer monsoon rainfall with a long lead time.

  15. Calibration and validation of rainfall thresholds for shallow landslide forecasting in Sicily, southern Italy

    NASA Astrophysics Data System (ADS)

    Gariano, S. L.; Brunetti, M. T.; Iovine, G.; Melillo, M.; Peruccacci, S.; Terranova, O.; Vennari, C.; Guzzetti, F.

    2015-01-01

    Empirical rainfall thresholds are tools to forecast the possible occurrence of rainfall-induced shallow landslides. Accurate prediction of landslide occurrence requires reliable thresholds, which need to be properly validated before their use in operational warning systems. We exploited a catalogue of 200 rainfall conditions that resulted in at least 223 shallow landslides in Sicily, southern Italy, in the 11-year period 2002-2011, to determine regional event duration-cumulated event rainfall (ED) thresholds for shallow landslide occurrence. We computed ED thresholds for different exceedance probability levels and determined the uncertainty associated with the thresholds using a consolidated bootstrap nonparametric technique. We further determined subregional thresholds, and we studied the role of lithology and seasonal periods in the initiation of shallow landslides in Sicily. Next, we validated the regional rainfall thresholds using 29 rainfall conditions that resulted in 42 shallow landslides in Sicily in 2012. We based the validation on contingency tables, skill scores, and a receiver operating characteristic (ROC) analysis for thresholds at different exceedance probability levels, from 1% to 50%. Validation of rainfall thresholds is hampered by lack of information on landslide occurrence. Therefore, we considered the effects of variations in the contingencies and the skill scores caused by lack of information. Based on the results obtained, we propose a general methodology for the objective identification of a threshold that provides an optimal balance between maximization of correct predictions and minimization of incorrect predictions, including missed and false alarms. We expect that the methodology will increase the reliability of rainfall thresholds, fostering the operational use of validated rainfall thresholds in early warning systems for regional shallow landslide forecasting.
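Objective threshold selection of the kind proposed here can be sketched as picking the exceedance-probability level that best balances correct and incorrect predictions, e.g. by maximizing the True Skill Statistic. The contingency tables below are hypothetical, not the Sicily results:

```python
# Sketch of objective threshold selection: choose the exceedance-probability
# level whose contingency table maximizes the True Skill Statistic
# (hit rate minus probability of false detection). Counts are hypothetical.
tables = {  # level -> (hits, misses, false_alarms, correct_negatives)
    0.01: (40, 2, 30, 28),
    0.05: (36, 6, 18, 40),
    0.20: (28, 14, 8, 50),
    0.50: (15, 27, 2, 56),
}

def true_skill_statistic(hits, misses, false_alarms, correct_negatives):
    pod = hits / (hits + misses)                        # probability of detection
    pofd = false_alarms / (false_alarms + correct_negatives)
    return pod - pofd

best_level = max(tables, key=lambda lv: true_skill_statistic(*tables[lv]))
```

Lower probability levels catch more landslides at the price of more false alarms; the maximizing level is the operational compromise between the two.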

  16. Statistical-Dynamical Seasonal Forecasts of Central-Southwest Asian Winter Precipitation.

    NASA Astrophysics Data System (ADS)

    Tippett, Michael K.; Goddard, Lisa; Barnston, Anthony G.

    2005-06-01

    Interannual precipitation variability in central-southwest (CSW) Asia has been associated with East Asian jet stream variability and western Pacific tropical convection. However, atmospheric general circulation models (AGCMs) forced by observed sea surface temperature (SST) poorly simulate the region's interannual precipitation variability. The statistical-dynamical approach uses statistical methods to correct systematic deficiencies in the response of AGCMs to SST forcing. Statistical correction methods linking model-simulated Indo-west Pacific precipitation and observed CSW Asia precipitation result in modest, but statistically significant, cross-validated simulation skill in the northeast part of the domain for the period from 1951 to 1998. The statistical-dynamical method is also applied to recent (winter 1998/99 to 2002/03) multimodel, two-tier December-March precipitation forecasts initiated in October. This period includes 4 yr (winter of 1998/99 to 2001/02) of severe drought. Tercile probability forecasts are produced using ensemble-mean forecasts and forecast error estimates. The statistical-dynamical forecasts show enhanced probability of below-normal precipitation for the four drought years and capture the return to normal conditions in part of the region during the winter of 2002/03. "May Kabul be without gold, but not without snow." - Traditional Afghan proverb
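Tercile probabilities from an ensemble mean plus a forecast-error estimate can be sketched under a Gaussian assumption. The numbers below are hypothetical standardized values, not the study's forecasts:

```python
# Tercile probability forecast from an ensemble-mean forecast and a Gaussian
# forecast-error estimate. All values are hypothetical, in standardized units.
from math import erf, sqrt

def normal_cdf(x, mu, sigma):
    return 0.5 * (1.0 + erf((x - mu) / (sigma * sqrt(2.0))))

mean_forecast, forecast_error = -0.4, 0.9   # a dry-shifted forecast
lower_tercile, upper_tercile = -0.43, 0.43  # terciles of a standard normal climate

p_below = normal_cdf(lower_tercile, mean_forecast, forecast_error)
p_above = 1.0 - normal_cdf(upper_tercile, mean_forecast, forecast_error)
p_normal = 1.0 - p_below - p_above
```

With the forecast mean shifted toward dryness, the below-normal category receives well over the climatological one-third probability, which is how the drought years show up in such forecasts.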

  17. An operational real-time flood forecasting system in Southern Italy

    NASA Astrophysics Data System (ADS)

    Ortiz, Enrique; Coccia, Gabriele; Todini, Ezio

    2015-04-01

    A real-time flood forecasting system has been operating since 2012 as a non-structural measure for mitigating flood risk in the Campania Region (Southern Italy), within the Sele river basin (3,240 km2). The Sele Flood Forecasting System (SFFS) has been built within the FEWS (Flood Early Warning System) platform developed by Deltares, and it assimilates the numerical weather predictions of the COSMO LAM family: the deterministic COSMO-LAMI I2, the deterministic COSMO-LAMI I7 and the ensemble numerical weather predictions COSMO-LEPS (16 members). The Sele FFS is composed of a cascade of three main models. The first model is a fully continuous, physically based, distributed hydrological model, named TOPKAPI-eXtended (Idrologia&Ambiente s.r.l., Naples, Italy), simulating the dominant processes controlling soil water dynamics, runoff generation and discharge with a spatial resolution of 250 m. The second module is a set of Neural Networks (ANNs) built for forecasting the river stages at a set of monitored cross-sections. The third component is a Model Conditional Processor (MCP), which provides the predictive uncertainty (i.e., the probability of occurrence of a future flood event) within the framework of a multi-temporal forecast, according to the most recent advancements on this topic (Coccia and Todini, HESS, 2011). The MCP provides information about the probability of exceedance of a maximum river stage within the forecast lead time, by means of a discrete time function representing the variation of the cumulative probability of exceeding a river stage during the forecast lead time and the distribution of the time of occurrence of the flood peak, starting from one or more model forecasts. This work shows the Sele FFS performance after two years of operation, evidencing the added value it can provide to a flood early warning and emergency management system.

  18. Development of an Adaptable Display and Diagnostic System for the Evaluation of Tropical Cyclone Forecasts

    NASA Astrophysics Data System (ADS)

    Kucera, P. A.; Burek, T.; Halley-Gotway, J.

    2015-12-01

    NCAR's Joint Numerical Testbed Program (JNTP) focuses on the evaluation of experimental forecasts of tropical cyclones (TCs), with the goal of developing new research tools and diagnostic evaluation methods that can be transitioned to operations. Recent activities include the development of new TC forecast verification methods and the development of an adaptable TC display and diagnostic system. The next-generation display and diagnostic system is being developed to support the evaluation needs of the U.S. National Hurricane Center (NHC) and the broader TC research community. The new hurricane display and diagnostic capabilities allow forecasters and research scientists to examine more deeply the performance of operational and experimental models. The system is built upon modern and flexible technology, including OpenLayers mapping tools that are platform independent. The forecast track and intensity, along with the associated observed track information, are stored in an efficient MySQL database. The system provides an easy-to-use interactive display and diagnostic tools to examine forecast tracks stratified by intensity. Consensus forecasts can be computed and displayed interactively. The system is designed to display information for both real-time and historical TCs. The display configurations are easily adaptable to meet end-user preferences. Ongoing enhancements include improved capabilities for stratification and evaluation of historical best tracks, development and implementation of additional methods to stratify and compute consensus hurricane track and intensity forecasts, and improved graphical display tools. The display is also being enhanced to incorporate gridded forecast, satellite, and sea surface temperature fields. The presentation will provide an overview of the display and diagnostic system development and a demonstration of the current capabilities.

  19. Forecasting the duration of volcanic eruptions: an empirical probabilistic model

    NASA Astrophysics Data System (ADS)

    Gunn, L. S.; Blake, S.; Jones, M. C.; Rymer, H.

    2014-01-01

    The ability to forecast future volcanic eruption durations would greatly benefit emergency response planning prior to and during a volcanic crisis. This paper introduces a probabilistic model to forecast the duration of future and ongoing eruptions. The model fits theoretical distributions to observed duration data and relies on past eruptions being a good indicator of future activity. A dataset of historical Mt. Etna flank eruptions is presented and used to demonstrate the model. The data have been compiled through critical examination of the existing literature, along with careful consideration of uncertainties in reported eruption start and end dates between the years 1300 AD and 2010. Data following 1600 are considered to be reliable and free of reporting biases. The distribution of eruption durations between the years 1600 and 1669 is found to be statistically different from that following it, and the forecasting model is therefore run on two datasets of Mt. Etna flank eruption durations: 1600-2010 and 1670-2010. Each dataset is modelled using a log-logistic distribution with parameter values found by maximum likelihood estimation. Survivor function statistics are applied to the model distributions to forecast (a) the probability of an eruption exceeding a given duration, (b) the probability of an eruption that has already lasted a particular number of days exceeding a given total duration and (c) the duration with a given probability of being exceeded. Results show that excluding the 1600-1670 data has little effect on the forecasting model result, especially where short durations are involved. By assigning the terms `likely' and `unlikely' to probabilities of 66 % or more and 33 % or less, respectively, the forecasting model based on the 1600-2010 dataset indicates that a future flank eruption on Mt. Etna would be likely to exceed 20 days (± 7 days) but unlikely to exceed 86 days (± 29 days). 
This approach can easily be adapted for use on other highly active, well-documented volcanoes or for different duration data such as the duration of explosive episodes or the duration of repose periods between eruptions.
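The survivor-function forecasts (a) and (b) above follow directly from the fitted log-logistic distribution. A minimal sketch with hypothetical parameters, not the fitted Etna values:

```python
# Survivor-function forecasts from a log-logistic duration model.
# Scale and shape are hypothetical, not the paper's MLE values.
def log_logistic_survivor(t, scale, shape):
    """P(duration > t) for a log-logistic(scale, shape) distribution."""
    return 1.0 / (1.0 + (t / scale) ** shape)

scale, shape = 40.0, 1.5  # hypothetical parameters, durations in days

# (a) probability that an eruption exceeds 20 days
p_exceed_20 = log_logistic_survivor(20.0, scale, shape)

# (b) conditional forecast: an eruption has already lasted 30 days;
# probability that its total duration exceeds 86 days
p_cond = (log_logistic_survivor(86.0, scale, shape)
          / log_logistic_survivor(30.0, scale, shape))
```

The conditional form in (b) is just the ratio of survivor-function values, which is what makes the model usable mid-eruption.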

  20. Verification of operational solar flare forecast: Case of Regional Warning Center Japan

    NASA Astrophysics Data System (ADS)

    Kubo, Yûki; Den, Mitsue; Ishii, Mamoru

    2017-08-01

    In this article, we discuss a verification study of an operational solar flare forecast in the Regional Warning Center (RWC) Japan. The RWC Japan has been issuing four-categorical deterministic solar flare forecasts for a long time. In this forecast verification study, we used solar flare forecast data accumulated over 16 years (from 2000 to 2015). We compiled the forecast data together with solar flare data obtained with the Geostationary Operational Environmental Satellites (GOES). Using the compiled data sets, we estimated some conventional scalar verification measures with 95% confidence intervals. We also estimated a multi-categorical scalar verification measure. These scalar verification measures were compared with those obtained by the persistence method and the recurrence method. As solar activity varied during the 16 years, we also applied verification analyses to four subsets of forecast-observation pair data with different solar activity levels. We cannot conclude definitely that there are significant performance differences between the forecasts of RWC Japan and the persistence method, although a slightly significant difference is found for some event definitions. We propose to use a scalar verification measure to assess the judgment skill of the operational solar flare forecast. Finally, we propose a verification strategy for deterministic operational solar flare forecasting. For dichotomous forecasts, the proposed set of verification measures comprises frequency bias for bias, proportion correct and critical success index for accuracy, probability of detection for discrimination, false alarm ratio for reliability, Peirce skill score for forecast skill, and symmetric extremal dependence index for association. 
For multi-categorical forecasts, we propose the marginal distributions of forecast and observation for bias, proportion correct for accuracy, the correlation coefficient and joint probability distribution for association, the likelihood distribution for discrimination, the calibration distribution for reliability and resolution, and the Gandin-Murphy-Gerrity score and judgment skill score for skill.
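The dichotomous verification set proposed in this record can be computed from a single contingency table. A sketch with hypothetical counts, not the RWC Japan data:

```python
# Dichotomous verification set from one contingency table:
# a = hits, c = misses, b = false alarms, d = correct negatives.
# Counts are hypothetical, for illustration only.
from math import log

a, c, b, d = 30, 10, 20, 140
n = a + b + c + d

frequency_bias = (a + b) / (a + c)
proportion_correct = (a + d) / n
critical_success_index = a / (a + b + c)
H = a / (a + c)                    # probability of detection (discrimination)
false_alarm_ratio = b / (a + b)    # reliability measure
F = b / (b + d)                    # probability of false detection
peirce_skill_score = H - F
# Symmetric extremal dependence index (SEDI), for association
sedi = (log(F) - log(H) - log(1 - F) + log(1 - H)) / (
       log(F) + log(H) + log(1 - F) + log(1 - H))
```

SEDI stays informative for rare events, where proportion correct saturates near 1 regardless of skill, which is why it is singled out for association.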

  1. Constraints on Rational Model Weighting, Blending and Selecting when Constructing Probability Forecasts given Multiple Models

    NASA Astrophysics Data System (ADS)

    Higgins, S. M. W.; Du, H. L.; Smith, L. A.

    2012-04-01

    Ensemble forecasting on a lead time of seconds over several years generates a large forecast-outcome archive, which can be used to evaluate and weight "models". Challenges which arise as the archive becomes smaller are investigated: in weather forecasting one typically has only thousands of forecasts; however, those launched 6 hours apart are not independent of each other, nor is it justified to mix seasons with different dynamics. Seasonal forecasts, as from ENSEMBLES and DEMETER, typically have fewer than 64 unique launch dates; decadal forecasts fewer than eight, and long-range climate forecasts arguably none. It is argued that one does not weight "models" so much as entire ensemble prediction systems (EPSs), and that the marginal value of an EPS will depend on the other members in the mix. The impact of using different skill scores is examined in the limits of both very large forecast-outcome archives (thereby evaluating the efficiency of the skill score) and very small forecast-outcome archives (illustrating fundamental limitations due to sampling fluctuations and memory in the physical system being forecast). It is shown that blending with climatology (J. Bröcker and L.A. Smith, Tellus A, 60(4), 663-678, (2008)) tends to increase the robustness of the results; a new kernel dressing methodology (simply ensuring that the expected probability mass tends to lie outside the range of the ensemble) is also illustrated. Fair comparisons using seasonal forecasts from the ENSEMBLES project are used to illustrate the importance of these results with fairly small archives. The robustness of these results across the range of small, moderate and huge archives is demonstrated using imperfect models of perfectly known nonlinear (chaotic) dynamical systems. The implications these results hold for distinguishing the skill of a forecast from its value to a user of the forecast are discussed.

  2. Metric optimisation for analogue forecasting by simulated annealing

    NASA Astrophysics Data System (ADS)

    Bliefernicht, J.; Bárdossy, A.

    2009-04-01

    It is well known that weather patterns tend to recur from time to time. This property of the atmosphere is used by analogue forecasting techniques. They have a long history in weather forecasting, and there are many applications predicting hydrological variables at the local scale for different lead times. The basic idea of the technique is to identify past weather situations which are similar (analogue) to the predicted one and to take the local conditions of the analogues as the forecast. But the forecast performance of the analogue method depends on user-defined criteria such as the choice of the distance function and the size of the predictor domain. In this study we propose a new methodology for optimising both criteria by minimising the forecast error with simulated annealing. The performance of the methodology is demonstrated for the probability forecast of daily areal precipitation. It is compared with a traditional analogue forecasting algorithm, which is used operationally as an element of a hydrological forecasting system. The study is performed for several meso-scale catchments located in the Rhine basin in Germany. The methodology is validated by a jack-knife method in a perfect prognosis framework for a period of 48 years (1958-2005). The predictor variables are derived from the NCEP/NCAR reanalysis data set. The Brier skill score and the economic value are determined to evaluate the forecast skill and value of the technique. In this presentation we will describe the concept of the optimisation algorithm and the outcome of the comparison. It will also be demonstrated how a decision maker should apply a probability forecast to maximise the economic benefit from it.
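The Brier skill score used to evaluate the analogue forecasts measures improvement over a climatological reference. A minimal sketch with hypothetical forecast-outcome pairs:

```python
# Brier skill score of probability forecasts against climatology.
# Forecasts and outcomes are hypothetical, for illustration only.
forecasts = [0.9, 0.1, 0.7, 0.2, 0.8, 0.3]
outcomes = [1, 0, 1, 0, 0, 0]          # event occurred (1) or not (0)
climatology = sum(outcomes) / len(outcomes)

def brier_score(probs, obs):
    # Mean squared difference between forecast probability and outcome
    return sum((p - o) ** 2 for p, o in zip(probs, obs)) / len(obs)

bs = brier_score(forecasts, outcomes)
bs_ref = brier_score([climatology] * len(outcomes), outcomes)
brier_skill_score = 1.0 - bs / bs_ref  # > 0 means skill over climatology
```

A positive score means the analogue forecast beats the constant climatological probability; zero or negative means a user would do as well ignoring it.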

  3. Set-up and validation of a Delft-FEWS based coastal hazard forecasting system

    NASA Astrophysics Data System (ADS)

    Valchev, Nikolay; Eftimova, Petya; Andreeva, Nataliya

    2017-04-01

    European coasts are increasingly threatened by hazards related to low-probability, high-impact hydro-meteorological events. Uncertainties in hazard prediction, and in the capacity to cope with hazard impacts, lie in both future storm patterns and increasing coastal development. Therefore, adaptation to future conditions requires a re-evaluation of coastal disaster risk reduction (DRR) strategies and the introduction of a more efficient mix of prevention, mitigation and preparedness measures. The latter presumes that the development of tools which can manage the complex process of merging data and models and generate products on the current and expected hydro- and morphodynamic states of the coast, such as a forecasting system for flooding and erosion hazards at vulnerable coastal locations (hotspots), is of vital importance. The output of such a system can be of utmost value to coastal stakeholders and the entire coastal community. In response to these challenges, Delft-FEWS provides a state-of-the-art framework for the implementation of such a system, with vast capabilities to trigger the early warning process. In addition, this framework is highly customizable to the specific requirements of any individual coastal hotspot. Since its release, many Delft-FEWS based forecasting systems for inland flooding have been developed; however, only a limited number of coastal applications have been implemented. In this paper, the set-up of a Delft-FEWS based forecasting system for Varna Bay (Bulgaria) and a coastal hotspot, which includes a sandy beach and port infrastructure, is presented. It was implemented in the frame of the RISC-KIT project (Resilience-Increasing Strategies for Coasts - toolKIT). The system output generated in hindcast mode is validated against available observations of surge levels and wave and morphodynamic parameters for a sequence of three short-duration and relatively weak storm events that occurred during February 4-12, 2015. Overall, the models' performance is considered very good, and the results obtained are quite promising for reliable prediction of both boundary conditions and coastal hazards, giving a good basis for estimation of onshore impact.

  4. Do quantitative decadal forecasts from GCMs provide decision relevant skill?

    NASA Astrophysics Data System (ADS)

    Suckling, E. B.; Smith, L. A.

    2012-04-01

    It is widely held that only physics-based simulation models can capture the dynamics required to provide decision-relevant probabilistic climate predictions. This fact in itself provides no evidence that predictions from today's GCMs are fit for purpose. Empirical (data-based) models are employed to make probability forecasts on decadal timescales, where it is argued that these 'physics free' forecasts provide a quantitative 'zero skill' target for the evaluation of forecasts based on more complicated models. It is demonstrated that these zero skill models are competitive with GCMs on decadal scales for probability forecasts evaluated over the last 50 years. Complications of statistical interpretation due to the 'hindcast' nature of this experiment, and the likely relevance of arguments that the lack of hindcast skill is irrelevant as the signal will soon 'come out of the noise', are discussed. A lack of decision-relevant quantitative skill does not bring the science-based insights of anthropogenic warming into doubt, but it does call for a clear quantification of limits, as a function of lead time, for spatial and temporal scales on which decisions based on such model output are expected to prove maladaptive. Failing to do so may risk the credibility of science in support of policy in the long term. The performance amongst a collection of simulation models is evaluated, having transformed ensembles of point forecasts into probability distributions through the kernel dressing procedure [1], according to a selection of proper skill scores [2] and contrasted with purely data-based empirical models. Data-based models are unlikely to yield realistic forecasts for future climate change if the Earth system moves away from the conditions observed in the past, upon which the models are constructed; in this sense the empirical model defines zero skill. When should a decision relevant simulation model be expected to significantly outperform such empirical models? 
Probability forecasts up to ten years ahead (decadal forecasts) are considered, both on global and regional spatial scales for surface air temperature. Such decadal forecasts are not only important in terms of providing information on the impacts of near-term climate change, but also from the perspective of climate model validation, as hindcast experiments and a sufficient database of historical observations allow standard forecast verification methods to be used. Simulation models from the ENSEMBLES hindcast experiment [3] are evaluated and contrasted with static forecasts of the observed climatology, persistence forecasts, and simple statistical models, called dynamic climatology (DC). It is argued that DC is a more appropriate benchmark in the case of a non-stationary climate. It is found that the ENSEMBLES models do not demonstrate a significant increase in skill relative to the empirical models, even at global scales, over any lead time up to a decade ahead. It is suggested that the construction of, and co-evaluation against, data-based models become a regular component of the reporting of large simulation model forecasts. The methodology presented may easily be adapted to other forecasting experiments and is expected to influence the design of future experiments. The inclusion of comparisons with dynamic climatology and other data-based approaches provides important information to both scientists and decision makers on which aspects of state-of-the-art simulation forecasts are likely to be fit for purpose. [1] J. Bröcker and L. A. Smith. From ensemble forecasts to predictive distributions, Tellus A, 60(4), 663-678 (2008). [2] J. Bröcker and L. A. Smith. Scoring probabilistic forecasts: The importance of being proper, Weather and Forecasting, 22, 382-388 (2007). [3] F. J. Doblas-Reyes, A. Weisheimer, T. N. Palmer, J. M. Murphy and D. Smith. Forecast quality assessment of the ENSEMBLES seasonal-to-decadal stream 2 hindcasts, ECMWF Technical Memorandum, 621 (2010).
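    The kind of proper-score comparison against a climatological benchmark described above can be illustrated with the Brier skill score, where skill 0 corresponds to the static climatology reference; the forecasts and outcomes below are made up:

```python
def brier_score(probs, outcomes):
    """Mean squared difference between forecast probabilities and binary outcomes."""
    return sum((p - o) ** 2 for p, o in zip(probs, outcomes)) / len(probs)

def brier_skill_score(probs, outcomes):
    """Skill relative to a constant climatological forecast (the observed base rate).
    1 = perfect, 0 = no better than climatology, negative = worse."""
    base = sum(outcomes) / len(outcomes)
    bs_ref = brier_score([base] * len(outcomes), outcomes)
    return 1.0 - brier_score(probs, outcomes) / bs_ref

outcomes = [1, 0, 0, 1, 1, 0, 0, 0]                      # event occurred or not
model    = [0.8, 0.2, 0.1, 0.7, 0.9, 0.3, 0.2, 0.1]      # model probabilities
bss = brier_skill_score(model, outcomes)
```

    By construction, the climatological forecast itself scores a BSS of exactly zero, which is what makes it a natural zero-skill benchmark.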

  5. Is It Going to Rain Today? Understanding the Weather Forecast.

    ERIC Educational Resources Information Center

    Allsopp, Jim; And Others

    1996-01-01

    Presents a resource for science teachers to develop a better understanding of weather forecasts, including outlooks, watches, warnings, advisories, severe local storms, winter storms, floods, hurricanes, nonprecipitation hazards, precipitation probabilities, sky condition, and UV index. (MKR)

  6. Application of Second-Moment Source Analysis to Three Problems in Earthquake Forecasting

    NASA Astrophysics Data System (ADS)

    Donovan, J.; Jordan, T. H.

    2011-12-01

    Though earthquake forecasting models have often represented seismic sources as space-time points (usually hypocenters), a more complete hazard analysis requires the consideration of finite-source effects, such as rupture extent, orientation, directivity, and stress drop. The most compact source representation that includes these effects is the finite moment tensor (FMT), which approximates the degree-two polynomial moments of the stress glut by its projection onto the seismic (degree-zero) moment tensor. This projection yields a scalar space-time source function whose degree-one moments define the centroid moment tensor (CMT) and whose degree-two moments define the FMT. We apply this finite-source parameterization to three forecasting problems. The first is the question of hypocenter bias: can we reject the null hypothesis that the conditional probability of hypocenter location is uniformly distributed over the rupture area? This hypothesis is currently used to specify rupture sets in the "extended" earthquake forecasts that drive simulation-based hazard models, such as CyberShake. Following McGuire et al. (2002), we test the hypothesis using the distribution of FMT directivity ratios calculated from a global data set of source slip inversions. The second is the question of source identification: given an observed FMT (and its errors), can we identify it with an FMT in the complete rupture set that represents an extended fault-based rupture forecast? Solving this problem will facilitate operational earthquake forecasting, which requires the rapid updating of earthquake triggering and clustering models. Our proposed method uses the second-order uncertainties as a norm on the FMT parameter space to identify the closest member of the hypothetical rupture set and to test whether this closest member is an adequate representation of the observed event. 
Finally, we address the aftershock excitation problem: given a mainshock, what is the spatial distribution of aftershock probabilities? The FMT representation allows us to generalize the models typically used for this purpose (e.g., marked point process models, such as ETAS), which will again be necessary in operational earthquake forecasting. To quantify aftershock probabilities, we compare mainshock FMTs with the first and second spatial moments of weighted aftershock hypocenters. We will describe applications of these results to the Uniform California Earthquake Rupture Forecast, version 3, which is now under development by the Working Group on California Earthquake Probabilities.
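    The first and second spatial moments of weighted aftershock hypocenters mentioned above reduce, in two dimensions, to a weighted centroid and covariance; the epicentres and weights below are hypothetical:

```python
def weighted_moments(points, weights):
    """First (centroid) and second (covariance) spatial moments of weighted 2-D points."""
    w_sum = sum(weights)
    cx = sum(w * x for (x, y), w in zip(points, weights)) / w_sum
    cy = sum(w * y for (x, y), w in zip(points, weights)) / w_sum
    cov = [[0.0, 0.0], [0.0, 0.0]]
    for (x, y), w in zip(points, weights):
        dx, dy = x - cx, y - cy
        cov[0][0] += w * dx * dx
        cov[0][1] += w * dx * dy
        cov[1][1] += w * dy * dy
    cov[1][0] = cov[0][1]
    cov = [[c / w_sum for c in row] for row in cov]
    return (cx, cy), cov

# Hypothetical aftershock epicentres (km east, km north) with magnitude-based weights.
pts = [(0.0, 0.0), (2.0, 0.5), (4.0, 1.0), (6.0, 1.5)]
wts = [1.0, 2.0, 2.0, 1.0]
centroid, cov = weighted_moments(pts, wts)
```

    The eigenvectors of the covariance then give the principal axes of the aftershock cloud, which can be compared against the mainshock's finite moment tensor.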

  7. How MAG4 Improves Space Weather Forecasting

    NASA Technical Reports Server (NTRS)

    Falconer, David; Khazanov, Igor; Barghouty, Nasser

    2013-01-01

    Dangerous space weather is driven by solar flares and coronal mass ejections (CMEs). Forecasting flares and CMEs is the first step to forecasting either dangerous space weather or All Clear. MAG4 (Magnetogram Forecast), developed originally for NASA/SRAG (Space Radiation Analysis Group), is an automated program that analyzes magnetograms from the HMI (Helioseismic and Magnetic Imager) instrument on NASA's SDO (Solar Dynamics Observatory) and automatically converts them into forecasts of the rate (or probability) of major flares (M- and X-class), CMEs, and Solar Energetic Particle Events.

  8. Evidence-based pathology in its second decade: toward probabilistic cognitive computing.

    PubMed

    Marchevsky, Alberto M; Walts, Ann E; Wick, Mark R

    2017-03-01

    Evidence-based pathology advocates using a combination of best available data ("evidence") from the literature and personal experience for the diagnosis, estimation of prognosis, and assessment of other variables that impact individual patient care. Evidence-based pathology relies on systematic reviews of the literature, evaluation of the quality of evidence as categorized by evidence levels and statistical tools such as meta-analyses, estimates of probabilities and odds, and others. However, it is well known that previously "statistically significant" information usually does not accurately forecast the future for individual patients. There is great interest in "cognitive computing" in which "data mining" is combined with "predictive analytics" designed to forecast future events and estimate the strength of those predictions. This study demonstrates the use of IBM Watson Analytics software to evaluate and predict the prognosis of 101 patients with typical and atypical pulmonary carcinoid tumors in which Ki-67 indices have been determined. The results obtained with this system are compared with those previously reported using "routine" statistical software and the help of a professional statistician. IBM Watson Analytics interactively provides statistical results that are comparable to those obtained with routine statistical tools but much more rapidly, with considerably less effort and with interactive graphics that are intuitively easy to apply. It also enables analysis of natural language variables and yields detailed survival predictions for patient subgroups selected by the user. Potential applications of this tool and basic concepts of cognitive computing are discussed. Copyright © 2016 Elsevier Inc. All rights reserved.

  9. The Tracking Meteogram, an AWIPS II Tool for Time-Series Analysis

    NASA Technical Reports Server (NTRS)

    Burks, Jason Eric; Sperow, Ken

    2015-01-01

    A new tool has been developed for the National Weather Service (NWS) Advanced Weather Interactive Processing System (AWIPS) II through collaboration between NASA's Short-term Prediction Research and Transition (SPoRT) and the NWS Meteorological Development Laboratory (MDL). Referred to as the "Tracking Meteogram", the tool aids NWS forecasters in assessing meteorological parameters associated with moving phenomena. The tool aids forecasters in severe weather situations by providing valuable satellite and radar derived trends such as cloud top cooling rates, radial velocity couplets, reflectivity, and information from ground-based lightning networks. The Tracking Meteogram tool also aids in synoptic and mesoscale analysis by tracking parameters such as the deepening of surface low pressure systems, changes in surface or upper air temperature, and other properties. The tool provides a valuable new functionality and demonstrates the flexibility and extensibility of the NWS AWIPS II architecture. In 2014, the operational impact of the tool was formally evaluated through participation in the NOAA/NWS Operations Proving Ground (OPG), a risk reduction activity to assess performance and operational impact of new forecasting concepts, tools, and applications. Performance of the Tracking Meteogram Tool during the OPG assessment confirmed that it will be a valuable asset to the operational forecasters. This presentation reviews development of the Tracking Meteogram tool, performance and feedback acquired during the OPG activity, and future goals for continued support and extension to other application areas.

  10. Probabilistic flood forecasting tool for Andalusia (Spain). Application to September 2012 disaster event in Vera Playa.

    NASA Astrophysics Data System (ADS)

    García, Darío; Baquerizo, Asunción; Ortega, Miguel; Herrero, Javier; Ángel Losada, Miguel

    2013-04-01

    Torrential and heavy rains are frequent in Andalusia (Southern Spain) due to the characteristic Mediterranean climate (semi-arid areas). This, in combination with massive occupation of floodable (riverside) and coastal areas, produces severe management problems and damage to the population and to social and economic activities when extreme events occur. Some of the most serious problems of recent years have occurred in Almería (Southeastern Andalusia). Between 27 and 28 September 2012, rainstorms delivered 240 mm of rain in 24 h (exceeding the precipitation for a 500-year return period). The Antas River and Jático creek, which are normally dry, became raging torrents. The massive flooding of occupied areas resulted in eleven deaths and two missing in Andalusia, with a total estimated cost of all claims for compensation on the order of 197 million euros. This study presents a probabilistic flood forecasting tool that includes the effect of river and marine forcings. It is based on a distributed, physically-based hydrological model (WiMMed). For Almería the model was calibrated with the largest event recorded at the Cantoria gauging station (data since 1965), on 19 October 1973, and then validated with the second strongest event (26 October 1977). Among its different outputs, the model can provide probabilistic flood scenarios for Andalusia using weather forecasts up to 10 days ahead. The tool has been applied to Vera, a town of 15,000 inhabitants in the east of Almería, located along the Antas River at an altitude of 95 meters. Its main economic resource is "beach and sun" tourism, which has experienced enormous growth in recent decades. Its coastal stretch has been completely built up over these years, occupying floodable areas and constricting channels and river mouths. Simulations of the model in this area for the 1973 event, published on the internet in March 2011, had already indicated that floods such as those of September 2012 could occur.
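    A quick reminder of the return-period arithmetic behind the 500-year figure quoted above; the 50-year horizon is an arbitrary example for illustration:

```python
def annual_exceedance_prob(return_period_years):
    """Annual probability of exceeding the T-year event."""
    return 1.0 / return_period_years

def prob_at_least_once(return_period_years, horizon_years):
    """Probability the T-year event is exceeded at least once in n years,
    assuming independence between years."""
    p = annual_exceedance_prob(return_period_years)
    return 1.0 - (1.0 - p) ** horizon_years

p500 = annual_exceedance_prob(500)     # 0.002 per year
p_50yr = prob_at_least_once(500, 50)   # roughly 1 in 10 over 50 years
```

    A "500-year" rainfall is thus far from impossible within a planning horizon: over 50 years the chance of at least one occurrence is close to 10%.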

  11. Applied Meteorology Unit Quarterly Report. First Quarter FY-13

    NASA Technical Reports Server (NTRS)

    2013-01-01

    The AMU team worked on five tasks for their customers: (1) Ms. Crawford continued work on the objective lightning forecast task for airports in east-central Florida. (2) Ms. Shafer continued work on the task for Vandenberg Air Force Base to create an automated tool that will help forecasters relate pressure gradients to peak wind values. (3) Dr. Huddleston began work to develop a lightning timing forecast tool for the Kennedy Space Center/Cape Canaveral Air Force Station area. (4) Dr. Bauman began work on a severe weather forecast tool focused on east-central Florida. (5) Dr. Watson completed testing high-resolution model configurations for Wallops Flight Facility and the Eastern Range, and wrote the final report containing the AMU's recommendations for model configurations at both ranges.

  12. Cost-Loss Analysis of Ensemble Solar Wind Forecasting: Space Weather Use of Terrestrial Weather Tools

    NASA Astrophysics Data System (ADS)

    Henley, E. M.; Pope, E. C. D.

    2017-12-01

    This commentary concerns recent work on solar wind forecasting by Owens and Riley (2017). The approach taken makes effective use of tools common in terrestrial weather forecasting: notably, the generation of an "ensemble" forecast via a simple model, and the application of a "cost-loss" analysis to the resulting probabilistic information to explore the benefit of the forecast to users with different risk appetites. This commentary aims to highlight these useful techniques to the wider space weather audience and to briefly discuss the general context of applying terrestrial weather approaches to space weather.
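    The cost-loss framework referenced above can be sketched as follows: the relative economic value of a forecast compares the expected expense of acting on it with the better of always/never acting (climatology) and with a perfect forecast. The hit and false-alarm rates used below are illustrative, not taken from Owens and Riley (2017):

```python
def expected_expense(hit_rate, false_alarm_rate, base_rate, cost, loss):
    """Mean expense per forecast window when acting on the forecast:
    pay `cost` whenever protective action is taken, `loss` on a missed event."""
    hits = base_rate * hit_rate
    misses = base_rate * (1 - hit_rate)
    false_alarms = (1 - base_rate) * false_alarm_rate
    return (hits + false_alarms) * cost + misses * loss

def forecast_value(hit_rate, false_alarm_rate, base_rate, cost, loss):
    """Relative economic value: 0 = no better than climatology, 1 = perfect forecast."""
    e_forecast = expected_expense(hit_rate, false_alarm_rate, base_rate, cost, loss)
    e_clim = min(cost, base_rate * loss)       # best of always-act vs never-act
    e_perfect = base_rate * cost               # act exactly when the event occurs
    return (e_clim - e_forecast) / (e_clim - e_perfect)

v = forecast_value(hit_rate=0.8, false_alarm_rate=0.1, base_rate=0.2, cost=1.0, loss=10.0)
```

    Because e_clim depends on the user's cost/loss ratio, the same forecast has different value for users with different risk appetites, which is the point the commentary highlights.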

  13. Satellite Sounder Data Assimilation for Improving Alaska Region Weather Forecast

    NASA Technical Reports Server (NTRS)

    Zhu, Jiang; Stevens, E.; Zhang, X.; Zavodsky, B. T.; Heinrichs, T.; Broderson, D.

    2014-01-01

    A case study and monthly statistical analysis using sounder data assimilation to improve the Alaska regional weather forecast model are presented. Weather forecasting in Alaska faces challenges as well as opportunities. Alaska is a large territory with varied topography and an extensive coastal area, so forecast models must be finely tuned in order to accurately predict its weather. Its high latitude gives Alaska greater coverage from polar-orbiting satellites than the lower 48 states, providing more data for integration into forecast models. Forecasting marine low stratus clouds is critical to the Alaska aviation and oil industries and is the focus of the current case study. NASA AIRS/CrIS sounder profiles are assimilated into the Alaska regional weather forecast model to improve Arctic marine stratus cloud forecasts. The choice of physics options for the WRF model is discussed, and preprocessing of the AIRS/CrIS sounder data for assimilation is described. Local observation data, satellite data, and global data assimilation data are used to verify and evaluate the forecast results using the Model Evaluation Tools (MET).

  14. Bayesian analyses of seasonal runoff forecasts

    NASA Astrophysics Data System (ADS)

    Krzysztofowicz, R.; Reese, S.

    1991-12-01

    Forecasts of seasonal snowmelt runoff volume provide indispensable information for rational decision making by water project operators, irrigation district managers, and farmers in the western United States. Bayesian statistical models and communication frames have been researched in order to enhance the forecast information disseminated to the users, and to characterize forecast skill from the decision maker's point of view. Four products are presented: (i) a Bayesian Processor of Forecasts, which provides a statistical filter for calibrating the forecasts, and a procedure for estimating the posterior probability distribution of the seasonal runoff; (ii) the Bayesian Correlation Score, a new measure of forecast skill, which is related monotonically to the ex ante economic value of forecasts for decision making; (iii) a statistical predictor of monthly cumulative runoffs within the snowmelt season, conditional on the total seasonal runoff forecast; and (iv) a framing of the forecast message that conveys the uncertainty associated with the forecast estimates to the users. All analyses are illustrated with numerical examples of forecasts for six gauging stations from the period 1971-1988.
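    The flavour of a Bayesian Processor of Forecasts can be conveyed by the conjugate normal case: a climatological prior for seasonal runoff is updated by a forecast treated as a noisy observation of the true runoff. The numbers are invented, and the operational processor is considerably more general:

```python
def posterior_normal(prior_mean, prior_var, obs, obs_var):
    """Conjugate normal update: combine a climatological prior with a forecast
    treated as a noisy observation of the true seasonal runoff."""
    k = prior_var / (prior_var + obs_var)       # weight given to the forecast
    post_mean = prior_mean + k * (obs - prior_mean)
    post_var = (1 - k) * prior_var
    return post_mean, post_var

# Hypothetical seasonal runoff (volume units): climatology N(100, 25^2),
# forecast of 130 with an error standard deviation of 15.
m, v = posterior_normal(100.0, 25.0 ** 2, 130.0, 15.0 ** 2)
```

    A skilful forecast (small obs_var) pulls the posterior strongly toward the forecast value; a noisy one leaves the user close to climatology, which is exactly the calibration behaviour the abstract describes.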

  15. Storm-based Cloud-to-Ground Lightning Probabilities and Warnings

    NASA Astrophysics Data System (ADS)

    Calhoun, K. M.; Meyer, T.; Kingfield, D.

    2017-12-01

    A new cloud-to-ground (CG) lightning probability algorithm has been developed using machine-learning methods. With storm-based inputs of Earth Networks' in-cloud lightning, Vaisala's CG lightning, multi-radar/multi-sensor (MRMS) radar-derived products including the Maximum Expected Size of Hail (MESH) and Vertically Integrated Liquid (VIL), and near-storm environmental data including lapse rate and CAPE, a random forest algorithm was trained to produce probabilities of CG lightning up to one hour in advance. As part of the Prototype Probabilistic Hazard Information experiment in the Hazardous Weather Testbed in 2016 and 2017, National Weather Service forecasters were asked to use this CG lightning probability guidance to create rapidly updating probability grids and warnings for the threat of CG lightning for 0-60 minutes. The output from forecasters was shared with end-users, including emergency managers and broadcast meteorologists, as part of an integrated warning team.
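    A minimal, single-predictor stand-in for the random-forest probability idea: an ensemble of bootstrap-trained decision stumps votes, and the fraction of trees voting "lightning" is read as a probability. The predictor, labels, and stump design are toy assumptions, not the MRMS/environmental feature set or the trained model described above:

```python
import random

random.seed(0)

# Toy archive: a single storm predictor (say, MESH in mm) and whether CG
# lightning followed within the hour (label is deterministic here for clarity).
archive = [(x, 1 if x > 20 else 0) for x in (random.uniform(0, 50) for _ in range(300))]

def train_stump(sample):
    """Fit a one-split decision stump on a bootstrap sample: pick a split value
    and record the majority label on each side."""
    thr = random.choice(sample)[0]
    above = [y for x, y in sample if x > thr]
    below = [y for x, y in sample if x <= thr]
    vote_above = 1 if above and sum(above) * 2 >= len(above) else 0
    vote_below = 1 if below and sum(below) * 2 >= len(below) else 0
    return thr, vote_above, vote_below

def forest_probability(forest, x):
    """Fraction of trees voting 'lightning': the forest's probability estimate."""
    votes = sum((va if x > thr else vb) for thr, va, vb in forest)
    return votes / len(forest)

forest = [train_stump([random.choice(archive) for _ in range(len(archive))])
          for _ in range(200)]
p_hi = forest_probability(forest, 45.0)   # strong storm
p_lo = forest_probability(forest, 2.0)    # weak storm
```

    Real random forests split on many predictors per tree; the vote-fraction-as-probability reading, however, is the same mechanism.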

  16. Transitioning a Chesapeake Bay Ecological Prediction System to Operations

    NASA Astrophysics Data System (ADS)

    Brown, C.; Green, D. S.; Eco Forecasters

    2011-12-01

    Ecological predictions of the impacts of physical, chemical, biological, and human-induced change on ecosystems and their components encompass a wide range of space and time scales and subject matter. They vary from predicting the occurrence and/or transport of certain species, such as harmful algal blooms, or biogeochemical constituents, such as dissolved oxygen concentrations, to large-scale ecosystem responses and higher trophic levels. The timescales of ecological prediction, including guidance and forecasts, range from nowcasts and short-term forecasts (days), to intraseasonal and interannual outlooks (weeks to months), to decadal and century projections in climate change scenarios. The spatial scales range from small coastal inlets to basin- and global-scale biogeochemical and ecological forecasts. The types of models that have been used include conceptual, empirical, mechanistic, and hybrid approaches. This presentation will identify the challenges and progress toward transitioning experimental model-based ecological prediction into operational guidance and forecasting. Recent efforts are targeting integration of regional ocean, hydrodynamic and hydrological models and leveraging weather and water service infrastructure to enable the prototyping of an operational ecological forecast capability for the Chesapeake Bay and its tidal tributaries. A pathfinder demonstration predicts the probability of encountering sea nettles (Chrysaora quinquecirrha), a stinging jellyfish. These jellyfish can negatively impact safety and economic activities in the bay, and an impact-based forecast that predicts where and when this biotic nuisance occurs may help management efforts. Bay-wide nowcasts and three-day forecasts of sea nettle probability are issued daily by forcing an empirical habitat model (that predicts the probability of sea nettles) with real-time and 3-day forecasts of sea-surface temperature (SST) and salinity (SSS). 
In the first demonstration phase, the sea surface temperature (SST) and sea surface salinity (SSS) fields are generated by the Chesapeake Bay Operational Forecast System (CBOFS2), a 3-dimensional hydrodynamic model developed and operated by NOAA's National Ocean Service and run operationally at the National Weather Service National Centers for Environmental Prediction (NCEP). Importantly, this system is readily modified to predict the probability of other important target organisms, such as harmful algal blooms, biogeochemical constituents, such as dissolved oxygen concentration, and water-borne pathogens. Extending this initial effort includes advancement of a regional coastal ocean modeling testbed and proving ground. Such formal collaboration is intended to accelerate transition to operations and increase confidence and use of forecast guidance. The outcome will be improved decision making by emergency and resource managers, scientific researchers and the general public. The presentation will describe partnership plans for this testbed as well as the potential implications for the services and research community.
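    Forcing an empirical habitat model with gridded SST/SSS forecasts, as described above, can be sketched as a logistic model evaluated at each grid point; the coefficient values below are illustrative placeholders, not the operational sea nettle model:

```python
import math

def sea_nettle_probability(sst_c, sss_psu, b0=-30.0, b_t=1.0, b_s=0.5):
    """Logistic habitat model: probability of encountering sea nettles as a
    function of sea-surface temperature (deg C) and salinity (psu).
    Coefficients are illustrative assumptions, not fitted values."""
    z = b0 + b_t * sst_c + b_s * sss_psu
    return 1.0 / (1.0 + math.exp(-z))

# Force the habitat model with a (tiny) gridded SST/SSS forecast to map probabilities.
forecast_grid = [(26.0, 14.0), (20.0, 8.0), (28.0, 16.0)]   # (SST, SSS) at 3 points
probs = [sea_nettle_probability(t, s) for t, s in forecast_grid]
```

    Swapping in a different fitted response function (for a harmful algal bloom, say, or dissolved oxygen) while keeping the same hydrodynamic forcing is what makes the system "readily modified" for other targets.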

  17. The game of making decisions under uncertainty: How sure must one be?

    NASA Astrophysics Data System (ADS)

    Werner, Micha; Verkade, Jan; Wetterhall, Fredrik; van Andel, Schalk-Jan; Ramos, Maria-Helena

    2016-04-01

    Probabilistic hydrometeorological forecasts are now widely accepted to be more skillful than deterministic forecasts, and are increasingly being integrated into operational practice. Provided they are reliable and unbiased, probabilistic forecasts have the advantage that they give the decision maker not only the forecast value but also the uncertainty associated with that prediction. Though that information provides more insight, it does leave the forecaster/decision maker with the challenge of deciding at what probability of a threshold being exceeded the decision to act should be taken. According to cost-loss theory, that probability should be related to the impact of the threshold being exceeded. However, it is not entirely clear how easy it is for decision makers to follow that rule, even when the impact of a threshold being exceeded and the actions to choose from are known. To continue the tradition in the "Ensemble Hydrometeorological Forecast" session, we will address the challenge of making decisions based on probabilistic forecasts through a game to be played with the audience. We will explore how the decisions made differ depending on the known impacts of the forecast events. Participants will be divided into a number of groups with differing levels of impact and will be faced with a number of forecast situations. They will be asked to make decisions and record the consequences of those decisions. A discussion of the differences in the decisions made will be presented at the end of the game, with a fuller analysis later posted on the HEPEX web site blog (www.hepex.org).

  18. Storm Prediction Center Fire Weather Forecasts

    Science.gov Websites

    Forecast and observational maps for various fire weather parameters (SPC fire weather graphical composite maps).

  19. Convective Weather Forecast Accuracy Analysis at Center and Sector Levels

    NASA Technical Reports Server (NTRS)

    Wang, Yao; Sridhar, Banavar

    2010-01-01

    This paper presents a detailed convective forecast accuracy analysis at center and sector levels. The study aims to provide more meaningful forecast verification measures to the aviation community, as well as to obtain useful information leading to improvements in weather translation capacity models. In general, the vast majority of forecast verification efforts over the past decades have focused on calculating traditional standard verification measure scores from forecast and observation data analyses on grids. These verification measures, based on binary classification, have been applied in quality assurance of weather forecast products at the national level for many years. Our research focuses on forecasts at the center and sector levels. We first calculate the standard forecast verification measure scores for en-route air traffic centers and sectors, followed by forecast validation analysis and related verification measures for weather intensities and locations at center and sector levels. An approach to improve the prediction of sector weather coverage from multiple sector forecasts is then developed. The severe weather intensity assessment was carried out using the correlations between forecast and actual weather observation airspace coverage. The forecast accuracy in horizontal location was assessed by examining the forecast errors. The improvement in prediction of weather coverage was determined by the correlation between actual sector weather coverage and the prediction. The study used observed and forecast Convective Weather Avoidance Model (CWAM) data collected from June to September 2007. CWAM zero-minute forecast data with aircraft avoidance probabilities of 60% and 80% are used as the actual weather observation. All forecast measurements are based on 30-minute, 60-minute, 90-minute, and 120-minute forecasts with the same avoidance probabilities. 
The forecast accuracy analysis for lead times under one hour showed that the errors in intensity and location for center forecasts are relatively low. For example, 1-hour forecast intensity and horizontal location errors for the ZDC center were about 0.12 and 0.13. However, the correlation between the sector 1-hour forecast and actual weather coverage was weak: for sector ZDC32, about 32% of the total variation of observed weather intensity was unexplained by the forecast, and the sector horizontal location error was about 0.10. The paper also introduces an approach to estimate the sector three-dimensional actual weather coverage by using multiple sector forecasts, which turned out to produce better predictions. Using a Multiple Linear Regression (MLR) model for this approach, the correlations between actual observations and the multiple-sector forecast model prediction improved by several percent at the 95% confidence level in comparison with the single-sector forecast.
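    The multiple-sector MLR idea can be sketched with a small ordinary-least-squares fit via the normal equations; the predictor sectors, coverage values, and coefficients below are synthetic:

```python
def ols_fit(X, y):
    """Ordinary least squares via the normal equations, solved by Gaussian
    elimination with partial pivoting. X rows are [1, x1, x2, ...]."""
    n, p = len(X), len(X[0])
    A = [[sum(X[i][j] * X[i][k] for i in range(n)) for k in range(p)] for j in range(p)]
    b = [sum(X[i][j] * y[i] for i in range(n)) for j in range(p)]
    for col in range(p):                       # forward elimination
        piv = max(range(col, p), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, p):
            f = A[r][col] / A[col][col]
            for c in range(col, p):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    beta = [0.0] * p
    for r in range(p - 1, -1, -1):             # back substitution
        beta[r] = (b[r] - sum(A[r][c] * beta[c] for c in range(r + 1, p))) / A[r][r]
    return beta

# Hypothetical: predict a sector's actual weather coverage from the forecast
# coverages of two neighbouring sectors (intercept column of ones included).
pairs = [(0.1, 0.2), (0.3, 0.1), (0.5, 0.4), (0.7, 0.6), (0.2, 0.5)]
X = [[1.0, f1, f2] for f1, f2 in pairs]
y = [0.16, 0.29, 0.54, 0.76, 0.33]            # generated as 0.02 + 0.8*f1 + 0.3*f2
beta = ols_fit(X, y)
```

    In the paper's setting the predictors would be forecasts for multiple sectors (and possibly lead times), but the regression machinery is the same.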

  20. Aggregation of Environmental Model Data for Decision Support

    NASA Astrophysics Data System (ADS)

    Alpert, J. C.

    2013-12-01

    Weather forecasts and warnings must be prepared and then delivered so as to reach their intended audience in good time to enable effective decision-making. An effort to mitigate these difficulties was studied at a workshop, 'Sustaining National Meteorological Services - Strengthening WMO Regional and Global Centers', convened in June 2013 by the World Bank, WMO and the US National Weather Service (NWS). The skill and accuracy of atmospheric forecasts from deterministic models have increased, and there are now ensembles of such models that improve decisions to protect life, property and commerce. NWS production of numerical weather prediction products results in model output from global and high-resolution regional ensemble forecasts. Ensembles are constructed by changing the initial conditions to make a 'cloud' of forecasts that attempts to span the space of possible atmospheric realizations, which can quantify not only the most likely forecast but also the uncertainty. This has led to an unprecedented increase in data production and information content from higher resolution, multi-model output and secondary calculations. One difficulty is obtaining the subset of data required to estimate the probability of events and report the information. Calibration to reliably estimate the probability of events, and honing of threshold adjustments to reduce false alarms for decision makers, are also needed. To meet the future needs of the ever-broadening user community and address these issues on a national and international basis, the weather service implemented the NOAA Operational Model Archive and Distribution System (NOMADS). NOMADS provides real-time and retrospective format-independent access to climate, ocean and weather model data and delivers high-availability content services as part of NOAA's official real-time data dissemination at its new NCWCP web operations center. 
An important aspect of the server's abilities is to aggregate the matrix of model output, offering access to probability and calibration information for real-time decision making. The aggregation content server reports over ensemble component and forecast time, in addition to the other data dimensions of vertical layer and position for each variable. The unpacking, organization and reading of many binary packed files is accomplished most efficiently on the server, while weather-element event probability calculations, the thresholds for more accurate decision support, and display remain with the client. Our goal is to reduce uncertainty for variables of interest, e.g., those of agricultural importance. The weather service's operational GFS model ensemble and short-range ensemble forecasts can make skillful probability forecasts that alert users if and when their selected weather events will occur. How this framework operates and how it can be implemented using existing NOMADS content services and applications is described.
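
    The client-side event-probability step described above reduces, in its simplest form, to counting the fraction of ensemble members that meet or exceed a user-selected threshold. A minimal sketch (the function name and values are hypothetical; the abstract does not specify an implementation):

```python
import random

def event_probability(members, threshold):
    """Fraction of ensemble members at or above the threshold.

    A hypothetical helper: NOMADS serves the raw ensemble fields,
    and this kind of calculation is left to the client.
    """
    if not members:
        raise ValueError("empty ensemble")
    return sum(1 for m in members if m >= threshold) / len(members)

# 21-member ensemble of 24-h precipitation totals (mm), illustrative values
random.seed(0)
ensemble = [max(0.0, random.gauss(12.0, 6.0)) for _ in range(21)]

p = event_probability(ensemble, threshold=10.0)
print(f"P(precip >= 10 mm) = {p:.2f}")
```

    Calibration, as the abstract notes, would then adjust such raw frequencies (or the threshold itself) against verifying observations to reduce false alarms.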

  1. Estimating predictive hydrological uncertainty by dressing deterministic and ensemble forecasts; a comparison, with application to Meuse and Rhine

    NASA Astrophysics Data System (ADS)

    Verkade, J. S.; Brown, J. D.; Davids, F.; Reggiani, P.; Weerts, A. H.

    2017-12-01

    Two statistical post-processing approaches for estimation of predictive hydrological uncertainty are compared: (i) 'dressing' of a deterministic forecast by adding a single, combined estimate of both hydrological and meteorological uncertainty and (ii) 'dressing' of an ensemble streamflow forecast by adding an estimate of hydrological uncertainty to each individual streamflow ensemble member. Both approaches aim to produce an estimate of the 'total uncertainty' that captures both the meteorological and hydrological uncertainties. They differ in the degree to which they make use of statistical post-processing techniques. In the 'lumped' approach, both sources of uncertainty are lumped by post-processing deterministic forecasts using their verifying observations. In the 'source-specific' approach, the meteorological uncertainties are estimated by an ensemble of weather forecasts. These ensemble members are routed through a hydrological model and a realization of the probability distribution of hydrological uncertainties (only) is then added to each ensemble member to arrive at an estimate of the total uncertainty. The techniques are applied to one location in the Meuse basin and three locations in the Rhine basin. Resulting forecasts are assessed for their reliability and sharpness, as well as compared in terms of multiple verification scores including the relative mean error, Brier Skill Score, Mean Continuous Ranked Probability Skill Score, Relative Operating Characteristic Score and Relative Economic Value. The dressed deterministic forecasts are generally more reliable than the dressed ensemble forecasts, but the latter are sharper. On balance, however, they show similar quality across a range of verification metrics, with the dressed ensembles coming out slightly better. Some additional analyses are suggested. 
Notably, these include statistical post-processing of the meteorological forecasts in order to increase their reliability, thus increasing the reliability of the streamflow forecasts produced with ensemble meteorological forcings.
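
    The 'source-specific' approach described above amounts to adding independent realizations of a hydrological-error distribution to each streamflow ensemble member. A minimal sketch, assuming a zero-mean Gaussian error model for illustration (in the paper the hydrological uncertainty is estimated statistically from historical simulation errors):

```python
import random

random.seed(42)

# Hypothetical raw streamflow ensemble (m3/s), obtained by routing
# meteorological ensemble members through a hydrological model
raw_ensemble = [310.0, 295.0, 330.0, 305.0, 320.0]

def dress_ensemble(members, hydro_error_sd, draws_per_member=100):
    """Dress each member with independent draws from an assumed
    hydrological-error distribution, approximating total uncertainty."""
    dressed = []
    for m in members:
        dressed.extend(m + random.gauss(0.0, hydro_error_sd)
                       for _ in range(draws_per_member))
    return dressed

total = dress_ensemble(raw_ensemble, hydro_error_sd=25.0)
print(len(total))  # 500 dressed traces approximating the total uncertainty
```

    The 'lumped' alternative would instead dress a single deterministic forecast with one error distribution capturing both meteorological and hydrological uncertainty.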

  2. Meta-heuristic CRPS minimization for the calibration of short-range probabilistic forecasts

    NASA Astrophysics Data System (ADS)

    Mohammadi, Seyedeh Atefeh; Rahmani, Morteza; Azadi, Majid

    2016-08-01

    This paper deals with the probabilistic short-range temperature forecasts over synoptic meteorological stations across Iran using non-homogeneous Gaussian regression (NGR). NGR creates a Gaussian forecast probability density function (PDF) from the ensemble output. The mean of the normal predictive PDF is a bias-corrected weighted average of the ensemble members and its variance is a linear function of the raw ensemble variance. The coefficients for the mean and variance are estimated by minimizing the continuous ranked probability score (CRPS) during a training period. CRPS is a scoring rule for distributional forecasts. In the paper of Gneiting et al. (Mon Weather Rev 133:1098-1118, 2005), Broyden-Fletcher-Goldfarb-Shanno (BFGS) method is used to minimize the CRPS. Since BFGS is a conventional optimization method with its own limitations, we suggest using the particle swarm optimization (PSO), a robust meta-heuristic method, to minimize the CRPS. The ensemble prediction system used in this study consists of nine different configurations of the weather research and forecasting model for 48-h forecasts of temperature during autumn and winter 2011 and 2012. The probabilistic forecasts were evaluated using several common verification scores including Brier score, attribute diagram and rank histogram. Results show that both BFGS and PSO find the optimal solution and show the same evaluation scores, but PSO can do this with a feasible random first guess and much less computational complexity.
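
    The NGR objective described above has a closed form: for a Gaussian predictive distribution, the CRPS can be written analytically (Gneiting et al. 2005) and minimized derivative-free. The sketch below uses a crude random search as a stand-in for PSO, with hypothetical training data; a real implementation would use a proper swarm with velocity updates:

```python
import math, random

def crps_gaussian(mu, sigma, y):
    """Closed-form CRPS of a Gaussian forecast N(mu, sigma^2)
    for observation y (normal case of Gneiting et al. 2005)."""
    z = (y - mu) / sigma
    pdf = math.exp(-0.5 * z * z) / math.sqrt(2 * math.pi)
    cdf = 0.5 * (1.0 + math.erf(z / math.sqrt(2)))
    return sigma * (z * (2 * cdf - 1) + 2 * pdf - 1 / math.sqrt(math.pi))

# Hypothetical training triples: (ensemble mean, ensemble variance, observed T)
train = [(20.1, 1.2, 21.0), (15.4, 2.0, 14.2), (18.0, 0.8, 18.5),
         (22.3, 1.5, 23.1), (16.7, 1.1, 16.0)]

def mean_crps(params):
    a, b, c, d = params
    total = 0.0
    for m, v, y in train:
        mu = a + b * m                           # bias-corrected ensemble mean
        sigma = math.sqrt(max(c + d * v, 1e-6))  # variance linear in raw spread
        total += crps_gaussian(mu, sigma, y)
    return total / len(train)

random.seed(1)
best = (0.0, 1.0, 0.5, 0.5)
best_score = mean_crps(best)
for _ in range(5000):
    cand = tuple(p + random.gauss(0.0, 0.2) for p in best)
    s = mean_crps(cand)
    if s < best_score:
        best, best_score = cand, s
print(f"minimized mean CRPS = {best_score:.3f}")
```

    The point of the paper is precisely that the minimizer can be swapped: BFGS needs a good first guess, whereas population-based methods such as PSO tolerate a random one.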

  3. Innovative Tools for Water Quality/Quantity Management: New York City's Operations Support Tool

    NASA Astrophysics Data System (ADS)

    Wang, L.; Schaake, J. C.; Day, G. N.; Porter, J.; Sheer, D. P.; Pyke, G.

    2011-12-01

    The New York City Department of Environmental Protection (DEP) manages New York City's water supply, which is comprised of over 20 reservoirs and supplies more than 1 billion gallons of water per day to over 9 million customers. Recently, DEP has initiated design of an Operations Support Tool (OST), a state-of-the-art decision support system to provide computational and predictive support for water supply operations and planning. This presentation describes the technical structure of OST, including the underlying water supply and water quality models, data sources and database management, reservoir inflow forecasts, and the functionalities required to meet the needs of a diverse group of end users. OST is a major upgrade of DEP's current water supply - water quality model, developed to evaluate alternatives for controlling turbidity in NYC's Catskill reservoirs. While the current model relies on historical hydrologic and meteorological data, OST can be driven by forecasted future conditions. It will receive a variety of near-real-time data from a number of sources. OST will support two major types of simulations: long-term, for evaluating policy or infrastructure changes over an extended period of time; and short-term "position analysis" (PA) simulations, consisting of multiple short simulations, all starting from the same initial conditions. Typically, the starting conditions for a PA run will represent those for the current day and traces of forecasted hydrology will drive the model for the duration of the simulation period. The result of these simulations will be a distribution of future system states based on system operating rules and the range of input ensemble streamflow predictions. DEP managers will analyze the output distributions and make operation decisions using risk-based metrics such as probability of refill. Currently, in the developmental stages of OST, forecasts are based on antecedent hydrologic conditions and are statistical in nature. 
The statistical algorithm is relatively simple and versatile, but lacks the short-term skill critical for water quality and spill management. To improve short-term skill, OST will ultimately operate with meteorologically driven hydrologic forecasts provided by the National Weather Service (NWS). OST functionalities will support a wide range of DEP uses, including short term operational projections, outage planning and emergency management, operating rule development, and water supply planning. A core use of OST will be to inform reservoir management strategies to control and mitigate turbidity events while ensuring water supply reliability. OST will also allow DEP to manage its complex reservoir system to meet multiple objectives, including ecological flows, tailwater fisheries and recreational releases, and peak flow mitigation for downstream communities.

  4. The HEPEX Seasonal Streamflow Forecast Intercomparison Project

    NASA Astrophysics Data System (ADS)

    Wood, A. W.; Schepen, A.; Bennett, J.; Mendoza, P. A.; Ramos, M. H.; Wetterhall, F.; Pechlivanidis, I.

    2016-12-01

    The Hydrologic Ensemble Prediction Experiment (HEPEX; www.hepex.org) has launched an international seasonal streamflow forecasting intercomparison project (SSFIP) with the goal of broadening community knowledge about the strengths and weaknesses of various operational approaches being developed around the world. While some of these approaches have existed for decades (e.g. Ensemble Streamflow Prediction - ESP - in the United States and elsewhere), recent years have seen the proliferation of new operational and experimental streamflow forecasting approaches. These have largely been developed independently in each country, so it is difficult to assess whether the approaches employed in some centers offer more promise for development than others. This motivates us to establish a forecasting testbed to facilitate a diagnostic evaluation of a range of different streamflow forecasting approaches and their components over a common set of catchments, using a common set of validation methods. Rather than prescribing a set of scientific questions from the outset, we are letting the hindcast results and notable differences in methodologies on a watershed-specific basis motivate more targeted analyses and sub-experiments that may provide useful insights. The initial pilot of the testbed involved two approaches - CSIRO's Bayesian joint probability (BJP) and NCAR's sequential regression - for two catchments, each designated by one of the teams (the Murray River, Australia, and the Hungry Horse reservoir drainage area, USA). Additional catchments and approaches are in the process of being added to the testbed. To support this, CSIRO and NCAR have developed data and analysis tools, data standards and protocols to formalize the experiment. These include requirements for cross-validation, verification, reference climatologies, and common predictands. This presentation describes the SSFIP experiments, pilot basin results and scientific findings to date.

  5. A physics-based probabilistic forecasting model for rainfall-induced shallow landslides at regional scale

    NASA Astrophysics Data System (ADS)

    Zhang, Shaojie; Zhao, Luqiang; Delgado-Tellez, Ricardo; Bao, Hongjun

    2018-03-01

    Conventional outputs of physics-based landslide forecasting models are presented as deterministic warnings by calculating the safety factor (Fs) of potentially dangerous slopes. However, these models are highly dependent on variables such as cohesion force and internal friction angle which are affected by a high degree of uncertainty, especially at a regional scale, resulting in unacceptable uncertainties in Fs. Under such circumstances, the outputs of physical models are more suitable if presented in the form of landslide probability values. In order to develop such models, a method to link the uncertainty of soil parameter values with landslide probability is devised. This paper proposes the use of Monte Carlo methods to quantitatively express uncertainty by assigning random values to physical variables inside a defined interval. The inequality Fs < 1 is tested for each pixel in each of n simulations, and the results are integrated into a single parameter. This parameter links the landslide probability to the uncertainties of soil mechanical parameters and is used to create a physics-based probabilistic forecasting model for rainfall-induced shallow landslides. The prediction ability of this model was tested in a case study, in which simulated forecasting of landslide disasters associated with heavy rainfalls on 9 July 2013 in the Wenchuan earthquake region of Sichuan province, China, was performed. The proposed model successfully forecasted landslides in 159 of the 176 disaster points registered by the geo-environmental monitoring station of Sichuan province. These testing results indicate that the new model can be operated in a highly efficient way and shows more reliable results, attributable to its high prediction accuracy. Accordingly, the new model can potentially be packaged into a forecasting system for shallow landslides, providing technological support for the mitigation of these disasters at regional scale.
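
    The Monte Carlo idea above can be sketched with a simplified infinite-slope stability model: draw cohesion and friction angle from assumed intervals, test Fs < 1, and report the failure frequency as a probability. All parameter values and intervals below are illustrative assumptions, and the dry-slope formula omits the rainfall-driven pore pressure the paper's model accounts for:

```python
import math, random

def factor_of_safety(c, phi_deg, slope_deg, gamma=18.0, depth=2.0):
    """Infinite-slope Fs for a dry slope: resisting over driving stress.
    gamma is unit weight (kN/m3), depth the failure-plane depth (m)."""
    beta = math.radians(slope_deg)
    phi = math.radians(phi_deg)
    driving = gamma * depth * math.sin(beta) * math.cos(beta)
    resisting = c + gamma * depth * math.cos(beta) ** 2 * math.tan(phi)
    return resisting / driving

def failure_probability(slope_deg, n=10000, seed=0):
    """P(Fs < 1) with cohesion (kPa) and friction angle (deg) drawn
    uniformly from assumed intervals, per the Monte Carlo approach."""
    rng = random.Random(seed)
    failures = 0
    for _ in range(n):
        c = rng.uniform(2.0, 10.0)
        phi = rng.uniform(20.0, 35.0)
        if factor_of_safety(c, phi, slope_deg) < 1.0:
            failures += 1
    return failures / n

print(f"P(failure) on a 40-degree slope: {failure_probability(40.0):.2f}")
```

    Run per pixel with rainfall-dependent parameters, this failure frequency is the probability value the model maps at regional scale.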

  6. Operational warnings issued by the SMC in the 8th March snow event in Catalonia

    NASA Astrophysics Data System (ADS)

    Vilaclara, E.; Segalà, S.; Andrés, A.; Aran, M.

    2010-09-01

    The snowfall event of 8th March 2010 was one of the most significant in Catalonia in recent years, with high societal impact on communications and power distribution. Since 2002, under an agreement between the Meteorological Service of Catalonia (SMC) and the Civil Protection authority, the SMC has been the agency responsible for issuing meteorological warnings in Catalonia. These warnings are issued according to the thresholds expected to be exceeded (three probability levels are defined) and are characterized by high spatial and temporal resolution. For the snow event of 8th March, the SMC forecasting team issued the first meteorological warning three days in advance, on 5th March, and broadcast four further warnings over the following two days, reflecting the high probability of exceeding 2 cm of snow above 200 m, 15 cm above 400 m and 30 cm above 800 m. The SMC also disseminated two special press releases to extend the forecast to the public. In the afternoon of Sunday, 7th March, snow began to fall in some areas of Catalonia. From that moment, the SMC maintained surveillance of the situation with remote sensing tools and data from the Meteorological Observers Network, a network of about one hundred spotters with mobile phones able to transmit observations in real time to the SMC web site. This surveillance, together with the conceptual model of snow events in Catalonia, allowed forecasters to improve the estimate of the snow level forecast by mesoscale meteorological models; in this event, the snow level obtained from the models was higher than the observed one. Despite the accuracy of the forecast, the impact in Catalonia was severe. On one hand, this was due to the exceptional nature of the event, with 3 cm of snow in Barcelona in the afternoon of a working day. 
On the other hand, the large amount of wet snow (up to 100 mm of precipitation) and the wind contributed to heavy snow accumulation on electric power lines. The snow load and the wind effect brought down many electrical towers in the north-east of Catalonia, and more than 450,000 customers were affected by power outages in the following days.

  7. Integrating Solar Power onto the Electric Grid - Bridging the Gap between Atmospheric Science, Engineering and Economics

    NASA Astrophysics Data System (ADS)

    Ghonima, M. S.; Yang, H.; Zhong, X.; Ozge, B.; Sahu, D. K.; Kim, C. K.; Babacan, O.; Hanna, R.; Kurtz, B.; Mejia, F. A.; Nguyen, A.; Urquhart, B.; Chow, C. W.; Mathiesen, P.; Bosch, J.; Wang, G.

    2015-12-01

    One of the main obstacles to high penetrations of solar power is the variable nature of solar power generation. To mitigate variability, grid operators have to schedule additional reliability resources, at considerable expense, to ensure that load requirements are met by generation. Thus, despite the cost of solar PV decreasing, the cost of integrating solar power will increase as penetration of solar resources onto the electric grid increases. There are three principal tools currently available to mitigate variability impacts: (i) flexible generation, (ii) storage, either virtual (demand response) or physical devices, and (iii) solar forecasting. Storage devices are a powerful tool capable of ensuring smooth power output from renewable resources. However, the high cost of storage is prohibitive, and markets are still being designed to leverage their full potential and mitigate their limitations (e.g. empty storage). Solar forecasting provides valuable information on the daily net load profile and upcoming ramps (increasing or decreasing solar power output), thereby giving the grid advance warning to schedule ancillary generation more accurately, or to curtail solar power output. In order to develop solar forecasting as a tool that can be utilized by grid operators, we identified two focus areas: (i) develop solar forecast technology and improve solar forecast accuracy and (ii) develop forecasts that can be incorporated within existing grid planning and operation infrastructure. The first issue required atmospheric science and engineering research, while the second required detailed knowledge of energy markets and power engineering. Motivated by this background, we will emphasize area (i) in this talk and provide an overview of recent advancements in solar forecasting, especially in two areas: (a) Numerical modeling tools for coastal stratocumulus to improve scheduling in the day-ahead California energy market. 
(b) Development of a sky imager to provide short term forecasts (0-20 min ahead) to improve optimization and control of equipment on distribution feeders with high penetration of solar. Leveraging such tools that have seen extensive use in the atmospheric sciences supports the development of accurate physics-based solar forecast models. Directions for future research are also provided.

  8. A Bayesian joint probability modeling approach for seasonal forecasting of streamflows at multiple sites

    NASA Astrophysics Data System (ADS)

    Wang, Q. J.; Robertson, D. E.; Chiew, F. H. S.

    2009-05-01

    Seasonal forecasting of streamflows can be highly valuable for water resources management. In this paper, a Bayesian joint probability (BJP) modeling approach for seasonal forecasting of streamflows at multiple sites is presented. A Box-Cox transformed multivariate normal distribution is proposed to model the joint distribution of future streamflows and their predictors such as antecedent streamflows and El Niño-Southern Oscillation indices and other climate indicators. Bayesian inference of model parameters and uncertainties is implemented using Markov chain Monte Carlo sampling, leading to joint probabilistic forecasts of streamflows at multiple sites. The model provides a parametric structure for quantifying relationships between variables, including intersite correlations. The Box-Cox transformed multivariate normal distribution has considerable flexibility for modeling a wide range of predictors and predictands. The Bayesian inference formulated allows the use of data that contain nonconcurrent and missing records. The model flexibility and data-handling ability means that the BJP modeling approach is potentially of wide practical application. The paper also presents a number of statistical measures and graphical methods for verification of probabilistic forecasts of continuous variables. Results for streamflows at three river gauges in the Murrumbidgee River catchment in southeast Australia show that the BJP modeling approach has good forecast quality and that the fitted model is consistent with observed data.
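
    The Box-Cox transform at the core of the BJP approach maps skewed streamflow data toward normality before a multivariate normal is fitted; forecasts are produced in transformed space and back-transformed. A minimal sketch of the transform pair (the lambda value and flows are illustrative; in the real model the transformation parameters are inferred alongside the others):

```python
import math

def box_cox(y, lam):
    """Box-Cox transform: log for lam == 0, power form otherwise."""
    if lam == 0.0:
        return math.log(y)
    return (y ** lam - 1.0) / lam

def box_cox_inverse(z, lam):
    """Back-transform from Box-Cox space to the original scale."""
    if lam == 0.0:
        return math.exp(z)
    return (lam * z + 1.0) ** (1.0 / lam)

flows = [12.0, 45.0, 230.0, 800.0]   # illustrative seasonal flows
lam = 0.2
transformed = [box_cox(q, lam) for q in flows]
recovered = [box_cox_inverse(z, lam) for z in transformed]
print(transformed)
print(recovered)  # matches the original flows up to rounding
```

    Because the joint distribution is multivariate normal in transformed space, intersite correlations and conditioning on predictors reduce to standard Gaussian algebra there.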

  9. On the forecasting the unfavorable periods in the technosphere by the space weather factors

    NASA Astrophysics Data System (ADS)

    Lyakhov, N. N.

    2002-12-01

    Considerable progress has been made in recent years in forecasting geomagnetic disturbances, with the necessary lead time, from solar activity phenomena. The possible relationship between violations of traffic safety terms (VTS) on the East Siberian Railway during 1986-1999 and space weather factors was investigated, based on 11,575 cases. Correlation and spectral analysis showed that the statistics of VTS are not random and that their character is probably driven by space weather factors. A principal difference is noted between the rhythm of VTS due to purely technical causes (MECH, failures in mechanical systems) and that of VTS caused by wrong operations of personnel (MAN). An increase in the number of sudden storm commencements raises the probability of operator error, whereas the probability of failures in mechanical systems increases with the number of quiet geomagnetic periods. This, in turn, dictates different approaches to the ordered MECH and MAN data series when forecasting unfavourable periods, understood as periods of increased risk of wrong decisions by technological process participants. Advances in geomagnetic forecasting techniques have made it possible to begin constructing systems for promptly informing interested organizations about unfavourable space weather factors.

  10. Weather Prediction Center (WPC) Home Page

    Science.gov Websites


  11. National Centers for Environmental Prediction

    Science.gov Websites


  12. Modeling and Computing of Stock Index Forecasting Based on Neural Network and Markov Chain

    PubMed Central

    Dai, Yonghui; Han, Dongmei; Dai, Weihui

    2014-01-01

    The stock index reflects the fluctuation of the stock market. There has long been extensive research on forecasting stock indices, but traditional methods struggle to achieve ideal precision in a dynamic market because of influences such as the economic situation, policy changes, and emergency events. Approaches based on adaptive modeling and conditional probability transfer have therefore attracted new attention from researchers. This paper presents a new forecast method combining an improved back-propagation (BP) neural network with a Markov chain, together with its modeling and computing technology. The method includes initial forecasting by the improved BP neural network, division of the Markov state region, computation of the state transition probability matrix, and prediction adjustment. Results of the empirical study show that this method can achieve high accuracy in stock index prediction and could provide a good reference for investment in the stock market. PMID:24782659
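
    The Markov-chain step amounts to discretizing the neural network's forecast errors into states and estimating a state transition probability matrix from their history. A minimal sketch of that matrix estimation (the state labels and history are hypothetical; the paper's state regions are derived from the actual error distribution):

```python
def transition_matrix(states, n_states):
    """Estimate the state transition probability matrix from a
    sequence of discretized forecast-error states: count transitions
    a -> b, then normalize each row to sum to 1."""
    counts = [[0] * n_states for _ in range(n_states)]
    for a, b in zip(states, states[1:]):
        counts[a][b] += 1
    matrix = []
    for row in counts:
        total = sum(row)
        matrix.append([c / total if total else 0.0 for c in row])
    return matrix

# Hypothetical error states: 0 = underestimate, 1 = accurate, 2 = overestimate
history = [1, 0, 1, 1, 2, 1, 0, 0, 1, 2, 2, 1]
P = transition_matrix(history, 3)
for row in P:
    print([round(p, 2) for p in row])
```

    The adjustment step then uses the row for the current error state as a probability distribution over the next state, shifting the raw BP forecast accordingly.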

  13. Time-lagged ensemble simulations of the dispersion of the Eyjafjallajökull plume over Europe with COSMO-ART

    NASA Astrophysics Data System (ADS)

    Vogel, H.; Förstner, J.; Vogel, B.; Hanisch, T.; Mühr, B.; Schättler, U.; Schad, T.

    2014-08-01

    An extended version of the German operational weather forecast model was used to simulate ash dispersion during the eruption of the Eyjafjallajökull. Because an operational forecast was launched every 6 hours, a time-lagged ensemble was obtained. Sensitivity runs show the ability of the model to simulate thin ash layers when an increased vertical resolution is used. Calibration of the model results with measured data allows a quantitative forecast of the ash concentration. After this calibration, an independent comparison of the simulated number concentration of 3 μm particles with observations at Hohenpeißenberg gives a correlation coefficient of 0.79; however, this agreement could only be reached after additional modifications of the emissions. Based on the time-lagged ensemble, the conditional probability of violating a certain threshold is calculated. With improvements to the ensemble technique used in our study, such probabilities could become valuable information for the forecasters advising the organizations responsible for closing the airspace.
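
    A time-lagged ensemble pools forecasts valid at the same time from successive model initializations, and the threshold-violation probability is the fraction of pooled members exceeding the limit. A minimal sketch (the data structure and ash values are hypothetical simplifications of the COSMO-ART output):

```python
def lagged_ensemble_probability(runs, valid_time, threshold):
    """Pool forecasts valid at `valid_time` from successive
    initializations and return the exceedance fraction.
    `runs` maps init label -> {valid time: forecast value}."""
    pooled = [fc[valid_time] for fc in runs.values() if valid_time in fc]
    if not pooled:
        raise ValueError("no forecasts valid at this time")
    return sum(1 for v in pooled if v > threshold) / len(pooled)

# Ash concentration (mg/m3) at one grid point from four runs launched
# 6 hours apart, all valid at forecast hour 36 (illustrative numbers)
runs = {
    "00z": {36: 2.4},
    "06z": {36: 1.8},
    "12z": {36: 2.1},
    "18z": {36: 1.6},
}
p = lagged_ensemble_probability(runs, valid_time=36, threshold=2.0)
print(f"P(ash > 2 mg/m3) = {p:.2f}")
```

    Unlike a perturbed-initial-condition ensemble, the members here come for free from the existing operational cycle, at the cost of mixing forecasts of different lead times.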

  14. Operational value of ensemble streamflow forecasts for hydropower production: A Canadian case study

    NASA Astrophysics Data System (ADS)

    Boucher, Marie-Amélie; Tremblay, Denis; Perreault, Luc; Anctil, François

    2010-05-01

    Ensemble and probabilistic forecasts have many advantages over deterministic ones, both in meteorology and hydrology (e.g. Krzysztofowicz, 2001). Mainly, they inform the user of the uncertainty linked to the forecast. It has been noted that such additional information could lead to improved decision making (e.g. Wilks and Hamill, 1995; Mylne, 2002; Roulin, 2007), but very few studies concentrate on operational situations involving the use of such forecasts. In addition, many authors have demonstrated that ensemble forecasts outperform deterministic forecasts (e.g. Jaun et al., 2008; Velazquez et al., 2009; Laio and Tamea, 2007). However, such performance is mostly assessed on the basis of numerical scoring rules, which compare the forecasts to the observations, and seldom in terms of management gains. The proposed case study adopts an operational point of view, on the basis that a novel forecasting system has value only if it leads to increased monetary and societal gains (e.g. Murphy, 1994; Laio and Tamea, 2007). More specifically, Environment Canada operational ensemble precipitation forecasts are used to drive the HYDROTEL distributed hydrological model (Fortin et al., 1995), calibrated on the Gatineau watershed located in Québec, Canada. The resulting hydrological ensemble forecasts are then incorporated into Hydro-Québec's SOHO stochastic management optimization tool, which automatically searches for optimal operation decisions for all reservoirs and hydropower plants located on the basin. The timeline of the study is the fall season of 2003. This period is especially relevant because of heavy precipitation that nearly caused a major spill and forced the preventive evacuation of a portion of the population located near one of the dams. 
We show that the use of the ensemble forecasts would have reduced the occurrence of spills and flooding, which is of particular importance for dams located in populous areas, and increased hydropower production. The ensemble precipitation forecasts extend from March 1st, 2002 to December 31st, 2003. They were obtained using two atmospheric models, SEF (8 members plus the control deterministic forecast) and GEM (8 members). The corresponding deterministic precipitation forecast issued by the SEF model is also used within HYDROTEL in order to compare ensemble streamflow forecasts with their deterministic counterparts. Although this study does not incorporate all the sources of uncertainty, precipitation is certainly the most important input for hydrological modeling and conveys a great portion of the total uncertainty. References: Fortin, J.P., Moussa, R., Bocquillon, C. and Villeneuve, J.P. 1995: HYDROTEL, un modèle hydrologique distribué pouvant bénéficier des données fournies par la télédétection et les systèmes d'information géographique, Revue des Sciences de l'Eau, 8(1), 94-124. Jaun, S., Ahrens, B., Walser, A., Ewen, T. and Schaer, C. 2008: A probabilistic view on the August 2005 floods in the upper Rhine catchment, Natural Hazards and Earth System Sciences, 8(2), 281-291. Krzysztofowicz, R. 2001: The case for probabilistic forecasting in hydrology, Journal of Hydrology, 249, 2-9. Murphy, A.H. 1994: Assessing the economic value of weather forecasts: An overview of methods, results and issues, Meteorological Applications, 1, 69-73. Mylne, K.R. 2002: Decision-making from probability forecasts based on forecast value, Meteorological Applications, 9, 307-315. Laio, F. and Tamea, S. 2007: Verification tools for probabilistic forecasts of continuous hydrological variables, Hydrology and Earth System Sciences, 11, 1267-1277. Roulin, E. 
2007: Skill and relative economic value of medium-range hydrological ensemble predictions, Hydrology and Earth System Sciences, 11, 725-737. Velazquez, J.-A., Petit, T., Lavoie, A., Boucher, M.-A., Turcotte, R., Fortin, V. and Anctil, F. 2009: An evaluation of the Canadian global meteorological ensemble prediction system for short-term hydrological forecasting, Hydrology and Earth System Sciences, 13(11), 2221-2231. Wilks, D.S. and Hamill, T.M. 1995: Potential economic value of ensemble-based surface weather forecasts, Monthly Weather Review, 123(12), 3565-3575.

  15. A channel dynamics model for real-time flood forecasting

    USGS Publications Warehouse

    Hoos, Anne B.; Koussis, Antonis D.; Beale, Guy O.

    1989-01-01

    A new channel dynamics scheme (alternative system predictor in real time (ASPIRE)), designed specifically for real-time river flow forecasting, is introduced to reduce uncertainty in the forecast. ASPIRE is a storage routing model that limits the influence of catchment model forecast errors to the downstream station closest to the catchment. Comparisons with the Muskingum routing scheme in field tests suggest that the ASPIRE scheme can provide more accurate forecasts, probably because discharge observations are used to a maximum advantage and routing reaches (and model errors in each reach) are uncoupled. Using ASPIRE in conjunction with the Kalman filter did not improve forecast accuracy relative to a deterministic updating procedure. Theoretical analysis suggests that this is due to a large process noise to measurement noise ratio.
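
    The Muskingum scheme that ASPIRE is compared against is itself a compact storage-routing recursion: each outflow is a weighted combination of the current inflow, previous inflow, and previous outflow. A minimal sketch (the reach parameters and hydrograph are illustrative, not from the paper):

```python
def muskingum_route(inflow, K, X, dt, initial_outflow=None):
    """Classic Muskingum storage routing: O2 = C0*I2 + C1*I1 + C2*O1.
    K is the reach travel time, X the storage weighting (0 <= X <= 0.5),
    dt the routing time step (same units as K)."""
    denom = 2 * K * (1 - X) + dt
    c0 = (dt - 2 * K * X) / denom
    c1 = (dt + 2 * K * X) / denom
    c2 = (2 * K * (1 - X) - dt) / denom  # c0 + c1 + c2 == 1
    out = [inflow[0] if initial_outflow is None else initial_outflow]
    for i in range(1, len(inflow)):
        out.append(c0 * inflow[i] + c1 * inflow[i - 1] + c2 * out[-1])
    return out

# Route a simple triangular flood wave (m3/s) through one reach
hydrograph = [10, 40, 90, 60, 35, 20, 12, 10]
routed = muskingum_route(hydrograph, K=2.0, X=0.2, dt=1.0)
print([round(q, 1) for q in routed])  # attenuated, delayed peak
```

    ASPIRE's contribution, per the abstract, is to decouple such routing reaches so that catchment-model forecast errors do not propagate past the nearest downstream station when observations are assimilated.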

  16. Towards the Olympic Games: Guanabara Bay Forecasting System and its Application on the Floating Debris Cleaning Actions.

    NASA Astrophysics Data System (ADS)

    Pimentel, F. P.; Marques Da Cruz, L.; Cabral, M. M.; Miranda, T. C.; Garção, H. F.; Oliveira, A. L. S. C.; Carvalho, G. V.; Soares, F.; São Tiago, P. M.; Barmak, R. B.; Rinaldi, F.; dos Santos, F. A.; Da Rocha Fragoso, M.; Pellegrini, J. C.

    2016-02-01

    Marine debris is a widespread pollution issue that affects almost all water bodies and is particularly relevant in estuaries and bays. Rio de Janeiro will host the 2016 Olympic Games, and Guanabara Bay will be the venue for the sailing competitions. Having historically served as a deposit for all types of waste, this water body suffers from major environmental problems, one of them being the massive presence of floating garbage. It is therefore of great importance to count on effective contingency actions to address this issue. To this end, an operational ocean forecasting system was designed and is presently being used by the Rio de Janeiro State Government to manage and control the cleaning actions on the bay. The forecasting system makes use of high-resolution hydrodynamic and atmospheric models and a Lagrangian particle transport model to provide probabilistic forecast maps of the areas where debris is most likely accumulating. All results are displayed on an interactive GIS web platform along with the tracks of the boats that collect the garbage, so decision makers can easily direct the actions, enhancing their efficiency. The integration of in situ data and advanced techniques such as Lyapunov exponent analysis is also being developed in the system, so as to increase its forecast reliability. Additionally, the system gathers and compiles in its database all information on debris collection, including quantity, type, locations, accumulation areas and their correlation with the environmental factors that drive runoff and surface drift. Combining probabilistic, deterministic and statistical approaches, the forecasting system of Guanabara Bay has proven to be a powerful tool for environmental management and will be of great importance in helping to secure the safety and fairness of the Olympic sailing competitions. The system design, its components and main results are presented in this paper.

  17. Advancing Data Assimilation in Operational Hydrologic Forecasting: Progresses, Challenges, and Emerging Opportunities

    NASA Technical Reports Server (NTRS)

    Liu, Yuqiong; Weerts, A.; Clark, M.; Hendricks Franssen, H.-J; Kumar, S.; Moradkhani, H.; Seo, D.-J.; Schwanenberg, D.; Smith, P.; van Dijk, A. I. J. M.; hide

    2012-01-01

    Data assimilation (DA) holds considerable potential for improving hydrologic predictions, as demonstrated in numerous research studies. However, advances in hydrologic DA research have not been implemented in operational forecast systems in an adequate or timely manner to improve the skill of forecasts for better informed real-world decision making. This is due in part to a lack of mechanisms to properly quantify the uncertainty in observations and forecast models in real-time forecasting situations and to conduct the merging of data and models in a way that is adequately efficient and transparent to operational forecasters. The need for effective DA of useful hydrologic data into the forecast process has become increasingly recognized in recent years. This motivated a hydrologic DA workshop in Delft, the Netherlands, in November 2010, which focused on advancing DA in operational hydrologic forecasting and water resources management. As an outcome of the workshop, this paper reviews, in relevant detail, the current status of DA applications in both hydrologic research and operational practice, and discusses the existing or potential hurdles and challenges in transitioning hydrologic DA research into cost-effective operational forecasting tools, as well as the potential pathways and newly emerging opportunities for overcoming these challenges. Several related aspects are discussed, including (1) theoretical and mathematical aspects of DA algorithms, (2) the estimation of different types of uncertainty, (3) new observations and their objective use in hydrologic DA, (4) the use of DA for real-time control of water resources systems, and (5) the development of community-based, generic DA tools for hydrologic applications.
It is recommended that cost-effective transition of hydrologic DA from research to operations should be helped by developing community-based, generic modeling and DA tools or frameworks, and through fostering collaborative efforts among hydrologic modellers, DA developers, and operational forecasters.

  18. Development and Transition of the Radiation, Interplanetary Shocks, and Coronal Sources (RISCS) Toolset

    NASA Technical Reports Server (NTRS)

    Spann, James F.; Zank, G.

    2014-01-01

    We outline a plan to develop and transition a physics-based predictive toolset called the Radiation, Interplanetary Shocks, and Coronal Sources (RISCS) toolset to describe the interplanetary energetic particle and radiation environment throughout the inner heliosphere, including at the Earth. Forecasting and "nowcasting" the radiation environment requires the fusing of three components: 1) the ability to provide probabilities for incipient solar activity; 2) the use of these probabilities and daily coronal and solar wind observations to model the 3D spatial and temporal heliosphere, including magnetic field structure and transients, within 10 Astronomical Units; and 3) the ability to model the acceleration and transport of energetic particles based on current and anticipated coronal and heliospheric conditions. We describe how to address 1) - 3) based on our existing, well developed, and validated codes and models. The goal of the RISCS toolset is to provide an operational forecast and "nowcast" capability that will predict a) solar energetic particle (SEP) intensities; b) spectra for protons and heavy ions; c) maximum energies and their duration; d) SEP composition; e) cosmic ray intensities; and f) plasma parameters, including shock arrival times, strength, and obliquity, at any given heliospheric location and time. The toolset would have a 72-hour predictive capability, with associated probabilistic bounds, that would be updated hourly thereafter to refine the predicted event(s) and narrow the associated probability bounds. The RISCS toolset would be highly adaptable and portable, capable of running on a variety of platforms to accommodate various operational needs and requirements. The described transition plan is based on a well-established approach developed in the Earth Science discipline that ensures that the customer has a tool that meets their needs.

  19. Applying different independent component analysis algorithms and support vector regression for IT chain store sales forecasting.

    PubMed

    Dai, Wensheng; Wu, Jui-Yu; Lu, Chi-Jie

    2014-01-01

    Sales forecasting is one of the most important issues in managing information technology (IT) chain store sales, since an IT chain store has many branches. Integrating a feature extraction method with a prediction tool, such as support vector regression (SVR), is a useful approach for constructing an effective sales forecasting scheme. Independent component analysis (ICA) is a novel feature extraction technique that has been widely applied to various forecasting problems. Up to now, however, only the basic ICA method (i.e., the temporal ICA model) has been applied to sales forecasting. In this paper, we utilize three different ICA methods, namely spatial ICA (sICA), temporal ICA (tICA), and spatiotemporal ICA (stICA), to extract features from the sales data and compare their performance in sales forecasting for an IT chain store. Experimental results from real sales data show that the sales forecasting scheme integrating stICA and SVR outperforms the comparison models in terms of forecasting error. The stICA is a promising tool for extracting effective features from branch sales data, and the extracted features can improve the prediction performance of SVR for sales forecasting.

  20. Applying Different Independent Component Analysis Algorithms and Support Vector Regression for IT Chain Store Sales Forecasting

    PubMed Central

    Dai, Wensheng

    2014-01-01

    Sales forecasting is one of the most important issues in managing information technology (IT) chain store sales, since an IT chain store has many branches. Integrating a feature extraction method with a prediction tool, such as support vector regression (SVR), is a useful approach for constructing an effective sales forecasting scheme. Independent component analysis (ICA) is a novel feature extraction technique that has been widely applied to various forecasting problems. Up to now, however, only the basic ICA method (i.e., the temporal ICA model) has been applied to sales forecasting. In this paper, we utilize three different ICA methods, namely spatial ICA (sICA), temporal ICA (tICA), and spatiotemporal ICA (stICA), to extract features from the sales data and compare their performance in sales forecasting for an IT chain store. Experimental results from real sales data show that the sales forecasting scheme integrating stICA and SVR outperforms the comparison models in terms of forecasting error. The stICA is a promising tool for extracting effective features from branch sales data, and the extracted features can improve the prediction performance of SVR for sales forecasting. PMID:25165740

  1. Verification of space weather forecasts at the UK Met Office

    NASA Astrophysics Data System (ADS)

    Bingham, S.; Sharpe, M.; Jackson, D.; Murray, S.

    2017-12-01

    The UK Met Office Space Weather Operations Centre (MOSWOC) has produced space weather guidance twice a day since its official opening in 2014. Guidance includes 4-day probabilistic forecasts of X-ray flares, geomagnetic storms, high-energy electron events and high-energy proton events. Evaluation of such forecasts is important to forecasters, stakeholders, model developers and users in understanding the performance, strengths and weaknesses of these forecasts, and in enabling further development. Met Office terrestrial near-real-time verification systems have been adapted to provide verification of the X-ray flare and geomagnetic storm forecasts. Verification is updated daily to produce Relative Operating Characteristic (ROC) curves, reliability diagrams and rolling Ranked Probability Skill Scores (RPSSs), thus providing an understanding of forecast performance and skill. Results suggest that the MOSWOC-issued X-ray flare forecasts are usually not statistically significantly better than a benchmark climatological forecast (where the climatology is based on observations from the previous few months). By contrast, the issued geomagnetic storm activity forecast typically performs better against this climatological benchmark.
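    The rolling Ranked Probability Skill Score mentioned above compares the mean ranked probability score (RPS) of the issued forecasts with that of a fixed climatological forecast. A minimal sketch of the computation follows; the four-category storm forecasts and the climatology are made-up numbers, not MOSWOC data.

```python
def rps(prob_forecast, observed_category):
    """Ranked probability score for one categorical probabilistic forecast.
    prob_forecast: probabilities per ordered category (summing to 1);
    observed_category: index of the category that occurred."""
    cum_f, cum_o, score = 0.0, 0.0, 0.0
    for k, p in enumerate(prob_forecast):
        cum_f += p
        cum_o += 1.0 if k == observed_category else 0.0
        score += (cum_f - cum_o) ** 2
    return score

def rpss(forecasts, climatology, observations):
    """Skill relative to a fixed climatological probability forecast:
    1 is perfect, 0 matches climatology, negative is worse."""
    rps_f = sum(rps(f, o) for f, o in zip(forecasts, observations)) / len(observations)
    rps_c = sum(rps(climatology, o) for o in observations) / len(observations)
    return 1.0 - rps_f / rps_c

# illustrative 4-category geomagnetic-storm forecasts (invented numbers)
fcsts = [[0.7, 0.2, 0.1, 0.0], [0.1, 0.6, 0.2, 0.1], [0.5, 0.3, 0.1, 0.1]]
clim  = [0.6, 0.25, 0.1, 0.05]
obs   = [0, 1, 0]
print(round(rpss(fcsts, clim, obs), 3))   # positive = beats climatology
```

    In the verification system above this skill score would be computed over a rolling window of recent forecast-observation pairs rather than a fixed set.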

  2. Assessing the Effectiveness of the Cone of Probability as a Visual Means of Communicating Scientific Forecasts

    NASA Astrophysics Data System (ADS)

    Orlove, B. S.; Broad, K.; Meyer, R.

    2010-12-01

    We review the evolution, communication, and differing interpretations of the National Hurricane Center (NHC)'s "cone of uncertainty" hurricane forecast graphic, drawing on several related disciplines—cognitive psychology, visual anthropology, and risk communication theory. We examine the 2004 hurricane season, two specific hurricanes (Katrina 2005 and Ike 2008) and the 2010 hurricane season, still in progress. During the 2004 hurricane season, five named storms struck Florida. Our analysis of that season draws upon interviews with key government officials and media figures, archival research of Florida newspapers, analysis of public comments on the NHC cone of uncertainty graphic and a multiagency study of 2004 hurricane behavior. At that time, the hurricane forecast graphic was subject to misinterpretation by many members of the public. We identify several characteristics of this graphic that contributed to public misinterpretation. Residents overemphasized the specific track of the eye, failed to grasp the width of hurricanes, and generally did not recognize the timing of the passage of the hurricane. Little training was provided to emergency response managers in the interpretation of forecasts. In the following year, Katrina became a national scandal, further demonstrating the limitations of the cone as a means of leading to appropriate responses to forecasts. In the second half of the first decade of the 21st century, three major changes occurred in hurricane forecast communication: the forecasts themselves improved in terms of accuracy and lead time, the NHC made minor changes in the graphics and expanded the explanatory material that accompanies the graphics, and some efforts were made to reach out to emergency response planners and municipal officials to enhance their understanding of the forecasts and graphics. 
There were some improvements in the responses to Ike, though a number of deaths resulted from inadequate evacuations, and property damage probably exceeded the levels that would have occurred with fuller preparation. Though no hurricane had made landfall in 2010 at the time of writing this abstract, coordination was fuller in supporting evacuations of vulnerable coastal regions of North Carolina ahead of Hurricane Earl. Through an examination of interviews, newspaper accounts, and public comments on the NHC's site and on weather blogs, we trace the relative weight of these three changes in the improved responses to forecasts. We conclude that forecast providers should consider more formal, rigorous pretesting of forecast graphics, using standard social science techniques, in order to minimize the probability of misinterpretation.

  3. Short-term droughts forecast using Markov chain model in Victoria, Australia

    NASA Astrophysics Data System (ADS)

    Rahmat, Siti Nazahiyah; Jayasuriya, Niranjali; Bhuiyan, Muhammed A.

    2017-07-01

    A comprehensive risk management strategy for dealing with drought should include both short-term and long-term planning. The objective of this paper is to present an early warning method for forecasting drought using the Standardised Precipitation Index (SPI) and a non-homogeneous Markov chain model. Such a model is useful for short-term planning. The developed method has been used to forecast droughts at a number of meteorological monitoring stations that have been regionalised into six homogeneous clusters with similar drought characteristics based on the SPI. The non-homogeneous Markov chain model was used to estimate drought probabilities and produce drought predictions up to 3 months ahead. The drought severity classes, defined using the SPI, were computed at a 12-month time scale. The drought probabilities and predictions were computed for six clusters that exhibit similar drought characteristics in Victoria, Australia. Overall, the predicted drought severity class was quite similar for all clusters, with non-drought class probabilities ranging from 49 to 57 %. For all clusters, the near-normal class had a probability of occurrence varying from 27 to 38 %. For the moderate and severe classes, the probabilities ranged from 2 to 13 % and 1 to 3 %, respectively. The developed model predicted drought situations 1 month ahead reasonably well. However, 2- and 3-month-ahead predictions should be used with caution until the models are developed further.
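    The Markov chain approach can be sketched as follows. For simplicity this example uses a homogeneous (time-invariant) chain rather than the non-homogeneous model of the paper, and the monthly SPI class record is invented: a transition matrix is estimated from the observed class sequence, and the current state is propagated 1 to 3 months ahead.

```python
def transition_matrix(classes, n_states):
    """Maximum-likelihood transition probabilities estimated from an
    observed sequence of monthly drought classes."""
    counts = [[0] * n_states for _ in range(n_states)]
    for a, b in zip(classes, classes[1:]):
        counts[a][b] += 1
    probs = []
    for row in counts:
        s = sum(row)
        probs.append([c / s for c in row] if s else [1.0 / n_states] * n_states)
    return probs

def forecast(p0, tm, steps):
    """Propagate a class-probability vector `steps` months ahead."""
    for _ in range(steps):
        p0 = [sum(p0[i] * tm[i][j] for i in range(len(tm)))
              for j in range(len(tm))]
    return p0

# invented 12-month-SPI class record: 0 = non-drought, 1 = near normal,
# 2 = moderate drought, 3 = severe drought
record = [0, 0, 1, 1, 2, 1, 0, 0, 0, 1, 2, 3, 2, 1, 0, 0, 1, 1, 0, 0]
tm = transition_matrix(record, 4)
start = [0.0, 0.0, 1.0, 0.0]   # currently in moderate drought
for lead in (1, 2, 3):
    print(lead, [round(p, 3) for p in forecast(start, tm, lead)])
```

    A non-homogeneous version would estimate a separate transition matrix per calendar month (or model its time dependence explicitly) instead of pooling all transitions as done here.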

  4. Forecasted economic change and the self-fulfilling prophecy in economic decision-making

    PubMed Central

    2017-01-01

    This study addresses the self-fulfilling prophecy effect in the domain of economic decision-making. We present experimental data in support of the hypothesis that speculative forecasts of economic change can affect individuals' economic decision behavior prior to any realized changes. In a within-subjects experiment, participants (N = 40) played 180 trials of a Balloon Analogue Risk Task (BART) in which they could make actual profit. Simple messages about possible (positive and negative) changes in the outcome probabilities of future trials had significant effects on measures of risk taking (number of inflations) and actual profits in the game. These effects were enduring, even though no systematic changes in actual outcome probabilities took place following any of the messages. Risk taking was also found to be reflected in reaction times, which increased with riskier decisions. Positive and negative economic forecasts affected reaction time slopes differently, with negative forecasts resulting in increased reaction time slopes as a function of risk. These findings suggest that forecasted positive or negative economic change can bias people's mental model of the economy and reduce or stimulate risk taking. Possible implications for media-fulfilling prophecies in the domain of the economy are considered. PMID:28334031

  5. Short-term solar activity forecasting

    NASA Technical Reports Server (NTRS)

    Xie-Zhen, C.; Ai-Di, Z.

    1979-01-01

    A method of forecasting the level of activity of every active region on the surface of the Sun within one to three days is proposed, in order to estimate the possibility of the occurrence of ionospheric disturbances and proton events. The forecasting method is a probability process based on statistics. In many cases, the accuracy in predicting short-term solar activity was in the range of 70%, although there were many false alarms.

  6. Applied Meteorology Unit (AMU) Quarterly Report Fourth Quarter FY-04

    NASA Technical Reports Server (NTRS)

    Bauman, William; Wheeler, Mark; Lambert, Winifred; Case, Jonathan; Short, David

    2004-01-01

    This report summarizes the Applied Meteorology Unit (AMU) activities for the fourth quarter of Fiscal Year 2004 (July - September 2004). Tasks covered are: (1) Objective Lightning Probability Forecast: Phase I, (2) Severe Weather Forecast Decision Aid, (3) Hail Index, (4) Shuttle Ascent Camera Cloud Obstruction Forecast, (5) Advanced Regional Prediction System (ARPS) Optimization and Training Extension and (6) User Control Interface for ARPS Data Analysis System (ADAS) Data Ingest.

  7. Global analysis of seasonal streamflow predictability using an ensemble prediction system and observations from 6192 small catchments worldwide

    NASA Astrophysics Data System (ADS)

    van Dijk, Albert I. J. M.; Peña-Arancibia, Jorge L.; Wood, Eric F.; Sheffield, Justin; Beck, Hylke E.

    2013-05-01

    Ideally, a seasonal streamflow forecasting system would ingest skilful climate forecasts and propagate these through calibrated hydrological models initialized with observed catchment conditions. At global scale, practical problems exist in each of these aspects. For the first time, we analyzed theoretical and actual skill in bimonthly streamflow forecasts from a global ensemble streamflow prediction (ESP) system. Forecasts were generated six times per year for 1979-2008 by an initialized hydrological model and an ensemble of 1° resolution daily climate estimates for the preceding 30 years. A post-ESP conditional sampling method was applied to 2.6% of forecasts, based on predictive relationships between precipitation and 1 of 21 climate indices prior to the forecast date. Theoretical skill was assessed against a reference run with historic forcing. Actual skill was assessed against streamflow records for 6192 small (<10,000 km2) catchments worldwide. The results show that initial catchment conditions provide the main source of skill. Post-ESP sampling enhanced skill in equatorial South America and Southeast Asia, particularly in terms of tercile probability skill, due to the persistence and influence of the El Niño Southern Oscillation. Actual skill was on average 54% of theoretical skill but considerably more for selected regions and times of year. The realized fraction of the theoretical skill probably depended primarily on the quality of precipitation estimates. Forecast skill could be predicted as the product of theoretical skill and historic model performance. Increases in seasonal forecast skill are likely to require improvement in the observation of precipitation and initial hydrological conditions.

  8. Probability for Weather and Climate

    NASA Astrophysics Data System (ADS)

    Smith, L. A.

    2013-12-01

    Over the last 60 years, the availability of large-scale electronic computers has stimulated rapid and significant advances both in meteorology and in our understanding of the Earth System as a whole. The speed of these advances was due, in large part, to the sudden ability to explore nonlinear systems of equations. The computer allows the meteorologist to carry a physical argument to its conclusion; the time scales of weather phenomena then allow the refinement of physical theory, numerical approximation or both in light of new observations. Prior to this extension, as Charney noted, the practicing meteorologist could ignore the results of theory with good conscience. Today, neither the practicing meteorologist nor the practicing climatologist can do so, but to what extent, and in what contexts, should they place the insights of theory above quantitative simulation? And in what circumstances can one confidently estimate the probability of events in the world from model-based simulations? Despite solid advances of theory and insight made possible by the computer, the fidelity of our models of climate differs in kind from the fidelity of models of weather. While all prediction is extrapolation in time, weather resembles interpolation in state space, while climate change is fundamentally an extrapolation. The trichotomy of simulation, observation and theory, which has proven essential in meteorology, will remain incomplete in climate science. Operationally, the roles of probability, indeed the kinds of probability one has access to, are different in operational weather forecasting and climate services. Significant barriers to forming probability forecasts (which can be used rationally as probabilities) are identified. Monte Carlo ensembles can explore sensitivity, diversity, and (sometimes) the likely impact of measurement uncertainty and structural model error.
The aims of different ensemble strategies, and the fundamental differences in ensemble design in support of decision making versus advancing science, are noted. It is argued that, just as no point forecast is complete without an estimate of its accuracy, no model-based probability forecast is complete without an estimate of its own irrelevance. The same nonlinearities that made the electronic computer so valuable link the selection and assimilation of observations, the formation of ensembles, the evolution of models, the casting of model simulations back into observables, and the presentation of this information to those who use it to take action or to advance science. Timescales of interest exceed the lifetime of a climate model and the career of a climate scientist, disarming the trichotomy that led to swift advances in weather forecasting. Providing credible, informative climate services is a more difficult task. In this context, the value of comparing the forecasts of simulation models not only with each other but also with the performance of simple empirical models, whenever possible, is stressed. The credibility of meteorology is based on its ability to forecast and explain the weather. The credibility of climatology will always be based on flimsier stuff. Solid insights of climate science may be obscured if the severe limits on our ability to see the details of the future, even probabilistically, are not communicated clearly.

  9. A non-parametric postprocessor for bias-correcting multi-model ensemble forecasts of hydrometeorological and hydrologic variables

    NASA Astrophysics Data System (ADS)

    Brown, James; Seo, Dong-Jun

    2010-05-01

    Operational forecasts of hydrometeorological and hydrologic variables often contain large uncertainties, for which ensemble techniques are increasingly used. However, the utility of ensemble forecasts depends on the unbiasedness of the forecast probabilities. We describe a technique for quantifying and removing biases from ensemble forecasts of hydrometeorological and hydrologic variables, intended for use in operational forecasting. The technique makes no a priori assumptions about the distributional form of the variables, which is often unknown or difficult to model parametrically. The aim is to estimate the conditional cumulative distribution function (ccdf) of the observed variable given a (possibly biased) real-time ensemble forecast from one or several forecasting systems (multi-model ensembles). The technique is based on Bayesian optimal linear estimation of indicator variables, and is analogous to indicator cokriging (ICK) in geostatistics. By developing linear estimators for the conditional expectation of the observed variable at many thresholds, ICK provides a discrete approximation of the full ccdf. Since ICK minimizes the conditional error variance of the indicator expectation at each threshold, it effectively minimizes the Continuous Ranked Probability Score (CRPS) when infinitely many thresholds are employed. However, the ensemble members used as predictors in ICK, and other bias-correction techniques, are often highly cross-correlated, both within and between models. Thus, we propose an orthogonal transform of the predictors used in ICK, which is analogous to using their principal components in the linear system of equations. This leads to a well-posed problem in which a minimum number of predictors are used to provide maximum information content in terms of the total variance explained. 
The technique is used to bias-correct precipitation ensemble forecasts from the NCEP Global Ensemble Forecast System (GEFS), for which independent validation results are presented. Extension to multimodel ensembles from the NCEP GFS and Short Range Ensemble Forecast (SREF) systems is also proposed.
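    The threshold-indicator idea behind ICK can be illustrated with a deliberately simplified stand-in: ordinary least squares with a single predictor per threshold (the fraction of ensemble members below the threshold), rather than the full Bayesian optimal linear system with one predictor per member. The training data below are synthetic, with a deliberately biased ensemble.

```python
import random

def fit_indicator_weights(x_vals, y_vals):
    """Ordinary least squares y = a + b * x for one threshold; a crude
    single-predictor stand-in for the indicator cokriging system."""
    n = len(x_vals)
    mx = sum(x_vals) / n
    my = sum(y_vals) / n
    sxx = sum((x - mx) ** 2 for x in x_vals)
    sxy = sum((x - mx) * (y - my) for x, y in zip(x_vals, y_vals))
    b = sxy / sxx if sxx else 0.0
    return my - b * mx, b

def calibrated_cdf(training, thresholds, new_ensemble):
    """Discrete approximation of P(obs <= t) for each threshold t, given a
    (possibly biased) new ensemble forecast.
    training: list of (ensemble_members, observed_value) pairs."""
    cdf, last = [], 0.0
    for t in thresholds:
        xs = [sum(m <= t for m in ens) / len(ens) for ens, _ in training]
        ys = [1.0 if obs <= t else 0.0 for _, obs in training]
        a, b = fit_indicator_weights(xs, ys)
        x_new = sum(m <= t for m in new_ensemble) / len(new_ensemble)
        p = min(max(a + b * x_new, 0.0), 1.0)
        last = max(last, p)          # enforce a non-decreasing cdf
        cdf.append(last)
    return cdf

# synthetic training pairs: an 11-member ensemble with a +1 bias around truth
rng = random.Random(1)
training = []
for _ in range(200):
    truth = rng.gauss(10.0, 3.0)
    training.append(([truth + 1.0 + rng.gauss(0.0, 1.0) for _ in range(11)],
                     truth))
thresholds = [4, 6, 8, 10, 12, 14, 16]
print([round(p, 2) for p in calibrated_cdf(training, thresholds, [12.1] * 11)])
```

    The full method instead solves a linear system over all (cross-correlated) member indicators, which is why the paper proposes an orthogonal transform of the predictors.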

  10. Evaluating the Predictability of South-East Asian Floods Using ECMWF and GloFAS Forecasts

    NASA Astrophysics Data System (ADS)

    Pillosu, F. M.

    2017-12-01

    Between July and September 2017, the monsoon season caused widespread heavy rainfall and severe floods across countries in South-East Asia, notably in India, Nepal and Bangladesh, with deadly consequences. According to the U.N., in Bangladesh 140 people lost their lives and 700,000 homes were destroyed; in Nepal at least 143 people died, and more than 460,000 people were forced to leave their homes; in India there were 726 victims of flooding and landslides, 3 million people were affected by the monsoon floods and 2000 relief camps were established. The monsoon season occurs every year in South Asia, but local authorities reported the last monsoon season as the worst in several years. What made the last monsoon season particularly severe in certain regions? Are these causes clear from the forecasts? Regarding the meteorological characterization of the event, an analysis of forecasts from the European Centre for Medium-Range Weather Forecasts (ECMWF) for different lead times (from seasonal to short range) will be shown, to evaluate how far in advance this event was predicted and to start a discussion on the factors that led to such a severe event. To illustrate hydrological aspects, forecasts from the Global Flood Awareness System (GloFAS) will be shown. GloFAS is developed at ECMWF in co-operation with the European Commission's Joint Research Centre (JRC) and with the support of national authorities and research institutions such as the University of Reading. It will become operational at the end of 2017 as part of the Copernicus Emergency Management Service. GloFAS couples state-of-the-art weather forecasts with a hydrological model to provide a cross-border system with early flood guidance information to help humanitarian agencies and national hydro-meteorological services strengthen and improve forecasting capacity, preparedness and mitigation of natural hazards.
In this case GloFAS has shown good potential to become a useful tool for better and earlier preparedness. For instance, first tests showed that by 28th July GloFAS was able to forecast that a relatively large flood peak would probably occur between 13th and 22nd August. An actual flood peak was recorded around 16th August according to the Bangladeshi Flood Forecasting Centre.

  11. Should we use seasonal meteorological ensemble forecasts for hydrological forecasting? A case study for Nordic watersheds in Canada.

    NASA Astrophysics Data System (ADS)

    Bazile, Rachel; Boucher, Marie-Amélie; Perreault, Luc; Leconte, Robert; Guay, Catherine

    2017-04-01

    Hydro-electricity is a major source of energy for many countries throughout the world, including Canada. Long lead-time streamflow forecasts are all the more valuable as they support decision making and dam management. Different techniques exist for long-term hydrological forecasting. Perhaps the best known is 'Extended Streamflow Prediction' (ESP), which considers past meteorological scenarios as possible, often equiprobable, future scenarios. In the ESP framework, those past-observed meteorological scenarios (the climatology) are used in turn as inputs to a chosen hydrological model to produce ensemble forecasts (one member corresponding to each year in the available database). Many hydropower companies, including Hydro-Québec (province of Quebec, Canada), use variants of the ESP system described above operationally for long-term operation planning. The ESP system accounts for the hydrological initial conditions and for the natural variability of the meteorological variables. However, it cannot take into account the current initial state of the atmosphere. Climate models can help remedy this drawback. In the context of a changing climate, dynamical forecasts issued from climate models seem to be an interesting avenue for improving upon the ESP method and could help hydropower companies adapt their management practices to an evolving climate. Long-range forecasts from climate models can also be helpful for water management at locations where records of past meteorological conditions are short or nonexistent. In this study, we compare 7-month hydrological forecasts obtained from climate model outputs to an ESP system. The ESP system mimics the one used operationally at Hydro-Québec. The dynamical climate forecasts are produced by the European Centre for Medium-Range Weather Forecasts (ECMWF) System4.
Forecast quality is assessed using numerical scores such as the Continuous Ranked Probability Score (CRPS) and the Ignorance score, as well as graphical tools such as the reliability diagram. This study covers 10 Nordic watersheds. We show that forecast performance according to the CRPS varies with lead-time but also with the period of the year. The raw forecasts from the ECMWF System4 display important biases for both temperature and precipitation, which need to be corrected. The linear scaling method is used for this purpose and is found to be effective. Bias correction improves forecast performance, especially during the summer, when precipitation is over-estimated. According to the CRPS, bias-corrected forecasts from System4 show performance comparable to that of the ESP system. However, the Ignorance score, which penalizes the lack of calibration (under-dispersive forecasts in this case) more severely than the CRPS, provides a different outlook for the comparison of the two systems. In fact, according to the Ignorance score, the ESP system outperforms forecasts based on System4 in most cases. This illustrates that the joint use of several metrics is crucial to assess the quality of a forecast system thoroughly. Overall, the ESP system provides reliable forecasts, which can be over-dispersed, whereas bias-corrected ECMWF System4 forecasts are sharper but at the risk of missing events.
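    The CRPS used in this comparison generalizes the absolute error to ensemble forecasts; for a finite ensemble it has the empirical form E|X - y| - 0.5 E|X - X'|. A minimal sketch with illustrative numbers:

```python
def crps_ensemble(members, obs):
    """Empirical CRPS for one ensemble forecast and a scalar observation:
    mean |member - obs| minus half the mean pairwise member spread."""
    m = len(members)
    term1 = sum(abs(x - obs) for x in members) / m
    term2 = sum(abs(x - y) for x in members for y in members) / (m * m)
    return term1 - 0.5 * term2

# a sharp, well-centred ensemble scores better (lower) than a biased one
sharp  = [9.8, 10.1, 10.0, 9.9, 10.2]
biased = [12.8, 13.1, 13.0, 12.9, 13.2]
obs = 10.0
print(round(crps_ensemble(sharp, obs), 3), round(crps_ensemble(biased, obs), 3))
```

    Averaged over many forecast-observation pairs, this score rewards both calibration and sharpness, which is why the study pairs it with the Ignorance score, a metric that penalizes under-dispersion more severely.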

  12. Goals of Quality in Doctoral Studies and Forecasted Outcomes

    ERIC Educational Resources Information Center

    Zelvys, Rimantas

    2007-01-01

    This article discusses the quality assurance policy for doctoral studies implemented in Lithuania and forecasts its probable outcomes. In the scientific literature, the quality of education is commonly defined as a holistic phenomenon composed of the quality of initial conditions, the quality of process and the quality of outputs. The accomplished document…

  13. Applied Meteorology Unit (AMU) Quarterly Report First Quarter FY-04

    NASA Technical Reports Server (NTRS)

    Bauman, William; Wheeler, Mark; Lambert, Winifred; Case, Jonathan; Short, David

    2004-01-01

    This report summarizes the Applied Meteorology Unit (AMU) activities for the first quarter of Fiscal Year 2004 (October - December 2003). Tasks reviewed are: (1) Objective Lightning Probability Forecast, (2) Mesonet Temperature and Wind Climatology, (3) Severe Weather Forecast Decision Aid and (4) Anvil Transparency Relationship to Radar Reflectivity.

  14. Socio-Political Forecasting: Who Needs It?

    ERIC Educational Resources Information Center

    Burnett, D. Jack

    1978-01-01

    Socio-political forecasting, a new dimension to university planning that can provide universities time to prepare for the impact of social and political changes, is examined. The four elements in the process are scenarios of the future, the probability/diffusion matrix, the profile of significant value-system changes, and integration and…

  15. Trends in the predictive performance of raw ensemble weather forecasts

    NASA Astrophysics Data System (ADS)

    Hemri, Stephan; Scheuerer, Michael; Pappenberger, Florian; Bogner, Konrad; Haiden, Thomas

    2015-04-01

    Over the last two decades the paradigm in weather forecasting has shifted from being deterministic to probabilistic. Accordingly, numerical weather prediction (NWP) models have been run increasingly as ensemble forecasting systems. The goal of such ensemble forecasts is to approximate the forecast probability distribution by a finite sample of scenarios. Global ensemble forecast systems, like the European Centre for Medium-Range Weather Forecasts (ECMWF) ensemble, are prone to probabilistic biases, and are therefore not reliable. They particularly tend to be underdispersive for surface weather parameters. Hence, statistical post-processing is required in order to obtain reliable and sharp forecasts. In this study we apply statistical post-processing to ensemble forecasts of near-surface temperature, 24-hour precipitation totals, and near-surface wind speed from the global ECMWF model. Our main objective is to evaluate the evolution of the difference in skill between the raw ensemble and the post-processed forecasts. The ECMWF ensemble is under continuous development, and hence its forecast skill improves over time. Parts of these improvements may be due to a reduction of probabilistic bias. Thus, we first hypothesize that the gain by post-processing decreases over time. Based on ECMWF forecasts from January 2002 to March 2014 and corresponding observations from globally distributed stations we generate post-processed forecasts by ensemble model output statistics (EMOS) for each station and variable. Parameter estimates are obtained by minimizing the Continuous Ranked Probability Score (CRPS) over rolling training periods that consist of the n days preceding the initialization dates. Given the higher average skill in terms of CRPS of the post-processed forecasts for all three variables, we analyze the evolution of the difference in skill between raw ensemble and EMOS forecasts. 
The fact that the gap in skill remains almost constant over time, especially for near-surface wind speed, suggests that improvements to the atmospheric model have an effect quite different from what calibration by statistical post-processing is doing. That is, they are increasing potential skill. Thus this study indicates that (a) further model development is important even if one is just interested in point forecasts, and (b) statistical post-processing is important because it will keep adding skill in the foreseeable future.
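    The EMOS training step described above, minimizing the CRPS of a Gaussian predictive distribution over a rolling window, can be illustrated with the closed-form Gaussian CRPS. The sketch below is a minimal illustration under simplifying assumptions, not the ECMWF/EMOS implementation; the toy observations and the two candidate spreads are invented for demonstration.

```python
import math

def crps_normal(mu, sigma, y):
    """Closed-form CRPS of a Gaussian N(mu, sigma^2) forecast against observation y."""
    z = (y - mu) / sigma
    pdf = math.exp(-0.5 * z * z) / math.sqrt(2 * math.pi)
    cdf = 0.5 * (1.0 + math.erf(z / math.sqrt(2)))
    return sigma * (z * (2 * cdf - 1) + 2 * pdf - 1 / math.sqrt(math.pi))

# Invented verifying observations; forecast mean is 0 in both cases.
obs = [-1.5, -0.5, 0.0, 0.7, 1.8]
crps_underdispersive = sum(crps_normal(0.0, 0.3, y) for y in obs) / len(obs)
crps_well_dispersed = sum(crps_normal(0.0, 1.0, y) for y in obs) / len(obs)
# The well-dispersed forecast scores lower (better) mean CRPS, which is why
# post-processing that widens an underdispersive ensemble adds skill.
```

    In a full EMOS fit, the predictive mean and variance are affine functions of the ensemble mean and variance, and their coefficients are chosen to minimize this mean CRPS over the training days.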

  16. MAFALDA: An early warning modeling tool to forecast volcanic ash dispersal and deposition

    NASA Astrophysics Data System (ADS)

    Barsotti, S.; Nannipieri, L.; Neri, A.

    2008-12-01

    Forecasting the dispersal of ash from explosive volcanoes is a scientific challenge to modern volcanology. It also represents a fundamental step in mitigating the potential impact of volcanic ash on urban areas and transport routes near explosive volcanoes. To this end we developed a Web-based early warning modeling tool named MAFALDA (Modeling and Forecasting Ash Loading and Dispersal in the Atmosphere) able to quantitatively forecast ash concentrations in the air and on the ground. The main features of MAFALDA are (1) the use of a dispersal model, named VOL-CALPUFF, that couples the column ascent phase with the ash cloud transport; (2) the use of high-resolution weather forecasting data; (3) the capability to run and merge multiple scenarios; and (4) the Web-based structure of the procedure, which makes it suitable as an early warning tool. MAFALDA produces plots for a detailed analysis of ash cloud dynamics and ground deposition, as well as synthetic 2-D maps of areas potentially affected by dangerous concentrations of ash. A first application of MAFALDA to the long-lasting weak plumes produced at Mt. Etna (Italy) is presented. Such a tool can be useful to civil protection authorities and volcanic observatories in reducing the impact of eruptive events. MAFALDA can be accessed at http://mafalda.pi.ingv.it.

  17. Evaluation of NOAA's High Resolution Rapid Refresh (HRRR), 12 km North America Model (NAM12) and 4 km North America Model (NAM4) hub-height wind speed forecasts

    NASA Astrophysics Data System (ADS)

    Pendergrass, W.; Vogel, C. A.

    2013-12-01

    As an outcome of discussions between Duke Energy Generation and NOAA/ARL following the 2009 AMS Summer Community Meeting in Norman, Oklahoma, ARL and Duke Energy Generation (Duke) signed a Cooperative Research and Development Agreement (CRADA) which allows NOAA to conduct atmospheric boundary layer (ABL) research using Duke renewable energy sites as research testbeds. One aspect of this research has been the evaluation of forecast hub-height winds from three NOAA atmospheric models. Forecasts of 10m (surface) and 80m (hub-height) wind speeds from (1) NOAA/GSD's High Resolution Rapid Refresh (HRRR) model, (2) NOAA/NCEP's 12 km North America Model (NAM12) and (3) NOAA/NCEP's 4 km high-resolution North America Model (NAM4) were evaluated against 18 months of surface-layer wind observations collected at the joint NOAA/Duke Energy research station located at Duke Energy's West Texas Ocotillo wind farm over the period April 2011 through October 2012. HRRR, NAM12 and NAM4 10m wind speed forecasts were compared with 10m wind speed observations measured on the NOAA/ATDD flux tower. Hub-height (80m) HRRR, NAM12 and NAM4 forecast wind speeds were evaluated against the 80m operational PMM27-28 meteorological tower supporting the Ocotillo wind farm. For each HRRR update, eight forecast hours (hours 01, 02, 03, 05, 07, 10, 12 and 15) plus the initialization hour (hour 00) were evaluated. For the NAM12 and NAM4 models, forecast hours 00-24 from the 06z initialization were evaluated. Performance measures, or skill scores, based on the absolute error at the 50% cumulative probability were calculated for each forecast hour. HRRR forecast hour 01 provided the best skill score, with an absolute wind speed error within 0.8 m/s of the observed 10m wind speed and within 1.25 m/s of the observed hub-height wind speed at the designated 50% cumulative probability. 
For both the NAM4 and NAM12 models, skill scores varied diurnally, with comparable best scores during the day: within 0.7 m/s of the observed 10m wind speed and within 1.1 m/s of the observed hub-height wind speed at the designated 50% cumulative probability level.
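    The skill measure used above, the absolute error at the 50% cumulative probability, is the median absolute error, stratified by forecast hour. A minimal sketch (the forecast/observation pairs are invented):

```python
from statistics import median

def median_abs_error_by_hour(pairs_by_hour):
    """pairs_by_hour: {forecast_hour: [(forecast, observed), ...]}
    Returns {forecast_hour: median absolute error}."""
    return {hour: median(abs(f - o) for f, o in pairs)
            for hour, pairs in pairs_by_hour.items()}

scores = median_abs_error_by_hour({1: [(3.0, 2.0), (5.0, 5.5), (7.0, 9.0)]})
# scores[1] == 1.0: half of the hour-01 forecasts are within 1.0 m/s of observed.
```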

  18. Monitoring Areal Snow Cover Using NASA Satellite Imagery

    NASA Technical Reports Server (NTRS)

    Harshburger, Brian J.; Blandford, Troy; Moore, Brandon

    2011-01-01

    The objective of this project is to develop products and tools to assist in the hydrologic modeling process, including tools to help prepare inputs for hydrologic models and improved methods for the visualization of streamflow forecasts. In addition, this project will facilitate the use of NASA satellite imagery (primarily snow cover imagery) by other federal and state agencies with operational streamflow forecasting responsibilities. A GIS software toolkit for monitoring areal snow cover extent and producing streamflow forecasts is being developed. This toolkit will be packaged as multiple extensions for ArcGIS 9.x and an open-source GIS software package. The toolkit will provide users with a means for ingesting NASA EOS satellite imagery (snow cover analysis), preparing hydrologic model inputs, and visualizing streamflow forecasts. Primary products include a software tool for predicting the presence of snow under clouds in satellite images; a software tool for producing gridded temperature and precipitation forecasts; and a suite of tools for visualizing hydrologic model forecasting results. The toolkit will be an expert system designed for operational users who need to generate accurate streamflow forecasts in a timely manner. The Remote Sensing of Snow Cover Toolbar will ingest snow cover imagery from multiple sources, including the MODIS Operational Snowcover Data, and convert them to gridded datasets that can be readily used. Statistical techniques will then be applied to the gridded snow cover data to predict the presence of snow under cloud cover. The toolbar has the ability to ingest both binary and fractional snow cover data. Binary mapping techniques use a set of thresholds to determine whether or not a pixel contains snow. Fractional mapping techniques provide information regarding the percentage of each pixel that is covered with snow. After the imagery has been ingested, physiographic data are attached to each cell in the snow cover image. 
These data can be obtained from a digital elevation model (DEM) for the area of interest.
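    A crude version of the snow-under-cloud step can illustrate the idea: classify clear pixels with a binary cover threshold, then use the elevation of observed snow (from the DEM) to decide whether cloud-obscured pixels are likely snow-covered. Everything here, the threshold value and the simple snowline rule, is an assumption for illustration, not the toolkit's actual statistical technique.

```python
def classify_with_cloud_fill(pixels, threshold=0.5):
    """pixels: list of (fractional_cover_or_None_if_cloud, elevation_m).
    Returns one boolean (snow / no snow) per pixel."""
    # Binary mapping for clear pixels; cloud pixels carry cover == None.
    clear = [(cover is not None and cover >= threshold, elev, cover is None)
             for cover, elev in pixels]
    # Estimate a snowline: lowest elevation at which snow was actually observed.
    snow_elevs = [elev for is_snow, elev, is_cloud in clear if is_snow]
    snowline = min(snow_elevs) if snow_elevs else float("inf")
    # Cloud-obscured pixels at or above the snowline are predicted snow-covered.
    return [elev >= snowline if is_cloud else is_snow
            for is_snow, elev, is_cloud in clear]

result = classify_with_cloud_fill(
    [(0.1, 500), (0.6, 1500), (None, 1800), (None, 400)])
# -> [False, True, True, False]
```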

  19. Climate forecasts for corn producer decision making

    USDA-ARS?s Scientific Manuscript database

    Corn is the most widely grown crop in the Americas, with annual production in the United States of approximately 332 million metric tons. Improved climate forecasts, together with climate-related decision tools for corn producers based on these improved forecasts, could substantially reduce uncertai...

  20. Integrated Wind Power Planning Tool

    NASA Astrophysics Data System (ADS)

    Rosgaard, Martin; Giebel, Gregor; Skov Nielsen, Torben; Hahmann, Andrea; Sørensen, Poul; Madsen, Henrik

    2013-04-01

    This poster presents the current state of the public service obligation (PSO) funded project PSO 10464, entitled "Integrated Wind Power Planning Tool". The goal is to integrate a mesoscale numerical weather prediction (NWP) model with purely statistical tools in order to assess wind power fluctuations, with focus on long-term power system planning for future wind farms as well as short-term forecasting for existing wind farms. Currently, wind power fluctuation models are either purely statistical or integrated with NWP models of limited resolution. Using the state-of-the-art mesoscale Weather Research & Forecasting (WRF) NWP model, the forecast error is quantified as a function of the time scale involved. This task constitutes a preparative study for the later implementation of features accounting for NWP forecast errors in the Corwind code maintained by DTU Wind Energy, a long-term wind power planning tool. Within the framework of PSO 10464, research related to operational short-term wind power prediction will be carried out, including a comparison of forecast quality at different mesoscale NWP model resolutions and the development of a statistical wind power prediction tool taking input from WRF. The short-term prediction part of the project is carried out in collaboration with ENFOR A/S, a Danish company that specialises in forecasting and optimisation for the energy sector. The integrated prediction model will allow for the description of the expected variability in wind power production in the coming hours to days, accounting for its spatio-temporal dependencies and depending on the prevailing weather conditions defined by the WRF output. The output from the integrated short-term prediction tool constitutes scenario forecasts for the coming period, which can then be fed into any type of system model or decision-making problem to be solved. 
The high resolution of the WRF results loaded into the integrated prediction model will ensure that a high-accuracy data basis is available for use in the decision-making process of the Danish transmission system operator. The need for high-accuracy predictions will only increase over the next decade as Denmark approaches the goal of 50% wind-power-based electricity in 2025, up from the current 20%.

  1. Using Web-Based Collaborative Forecasting to Enhance Information Literacy and Disciplinary Knowledge

    ERIC Educational Resources Information Center

    Buckley, Patrick; Doyle, Elaine

    2016-01-01

    This paper outlines how an existing collaborative forecasting tool called a prediction market (PM) can be integrated into an educational context to enhance information literacy skills and cognitive disciplinary knowledge. The paper makes a number of original contributions. First, it describes how this tool can be packaged as a pedagogical…

  2. Creating Interactive Graphical Overlays in the Advanced Weather Interactive Processing System (AWIPS) Using Shapefiles and DGM Files

    NASA Technical Reports Server (NTRS)

    Barrett, Joe H., III; Lafosse, Richard; Hood, Doris; Hoeth, Brian

    2007-01-01

    Graphical overlays can be created in real-time in the Advanced Weather Interactive Processing System (AWIPS) using shapefiles or DARE Graphics Metafile (DGM) files. This presentation describes how to create graphical overlays on-the-fly for AWIPS, by using two examples of AWIPS applications that were created by the Applied Meteorology Unit (AMU). The first example is the Anvil Threat Corridor Forecast Tool, which produces a shapefile that depicts a graphical threat corridor of the forecast movement of thunderstorm anvil clouds, based on the observed or forecast upper-level winds. This tool is used by the Spaceflight Meteorology Group (SMG) and 45th Weather Squadron (45 WS) to analyze the threat of natural or space vehicle-triggered lightning over a location. The second example is a launch and landing trajectory tool that produces a DGM file that plots the ground track of space vehicles during launch or landing. The trajectory tool can be used by SMG and the 45 WS forecasters to analyze weather radar imagery along a launch or landing trajectory. Advantages of both file types will be listed.

  3. Section on Observed Impacts on El Nino

    NASA Technical Reports Server (NTRS)

    Rosenzweig, Cynthia

    2000-01-01

    Agricultural applications of El Nino forecasts are already underway in some countries and need to be evaluated or re-evaluated. For example, in Peru, El Nino forecasts have been incorporated into national planning for the agricultural sector, and areas planted with rice and cotton (cotton being the more drought-tolerant crop) are adjusted accordingly. How well are this and other such programs working? Such evaluations will contribute to the governmental and intergovernmental institutions, including the Inter-American Institute for Global Change Research and the US National Oceanic and Atmospheric Administration, that are fostering programs to aid the effective use of forecasts. As El Nino climate forecasting grows out of the research mode into operational mode, the research focus shifts to include the design of appropriate modes of utilization. Awareness of and sensitivity to the costs of prediction errors also grow. For example, one major forecasting model failed to predict the very large El Nino event of 1997, when Pacific sea-surface temperatures were the highest on record. Although simple correlations between El Nino events and crop yields may be suggestive, more sophisticated work is needed to understand the subtleties of the interplay among the global climate system, regional climate patterns, and local agricultural systems. Honesty about the limitations of a forecast is essential, especially when human livelihoods are at stake. An end-to-end analysis links tools and expertise from the full sequence of ENSO cause-and-effect processes. Representatives from many disciplines are needed to achieve insights, e.g., oceanographers and atmospheric scientists who predict El Nino events, climatologists who drive global climate models with sea-surface temperature predictions, agronomists who translate regional climate connections into crop yield forecasts, and economists who analyze market adjustments to the vagaries of climate and determine the value of climate forecasts. 
Methods include historical studies to understand past patterns and to test hindcasts of the prediction tools, crop modeling, spatial analysis and remote sensing. This research involves expanding, deepening, and applying the understanding of physical climate to the fields of agronomy and social science, and the reciprocal understanding of crop growth and farm economics to climatology. Delivery of a regional climate forecast with no information about how the forecast was derived limits its effectiveness. Explanation of a region's major climate driving forces helps to place a seasonal forecast in context. A useful approach is then to show historical responses to previous El Nino events, and projections, with uncertainty intervals, of crop response from dynamic process crop growth models. Regional forecasts should be updated with real-time weather conditions. Since every El Nino event is different, it is important to track, report and advise on each new event as it unfolds. The stability of human enterprises depends on understanding both the potentialities and the limits of predictability. Farmers rely on past experience to anticipate and respond to fluctuations in the biophysical systems on which their livelihoods depend. Now scientists are improving their ability to predict some major elements of climate variability. The improvements in the reliability of El Nino forecasts are encouraging, but seasonal forecasts for agriculture are not, and probably never will be, completely infallible, owing to the chaotic nature of the climate system. Uncertainties proliferate as we extend beyond Pacific sea-surface temperatures to climate teleconnections and agricultural outcomes. The goal of this research is to shed as clear a light as possible on these inherent uncertainties and thus to contribute to the development of appropriate responses to El Nino and other seasonal forecasts for a range of stakeholders, which, ultimately, includes food consumers everywhere.

  4. Rainfall thresholds for forecasting landslides in the Seattle, Washington, area - exceedance and probability

    USGS Publications Warehouse

    Chleborad, Alan F.; Baum, Rex L.; Godt, Jonathan W.

    2006-01-01

    Empirical rainfall thresholds and related information form a basis for forecasting landslides in the Seattle area. A formula for a cumulative rainfall threshold (CT), P3=3.5-0.67P15, defined by rainfall amounts (in inches) during the last 3 days (72 hours), P3, and the previous 15 days (360 hours), P15, was developed from analysis of historical data for 91 landslides that occurred as part of 3-day events of three or more landslides between 1933 and 1997. Comparison with historical records for 577 landslides (including some used in developing the CT) indicates that the CT captures more than 90 percent of historical landslide events of three or more landslides in 1-day and 3-day periods that were recorded from 1978 to 2003. However, the probability of landslide occurrence on a day when the CT is exceeded at any single rain gage (8.4 percent) is low, and additional criteria are needed to confidently forecast landslide occurrence. Exceedance of a rainfall intensity-duration threshold I=3.257D^-1.13, for intensity, I (inches per hour), and duration, D (hours), corresponds to a higher probability of landslide occurrence (42 percent at any 3 rain gages or 65 percent at any 10 rain gages), but it predicts fewer landslides. Both thresholds must be used in tandem to forecast landslide occurrence in Seattle.
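    The two thresholds translate directly into code. The sketch below encodes the published formulas and the tandem rule (both must be exceeded) stated in the abstract; the example inputs are invented.

```python
def ct_exceeded(p3_in, p15_in):
    """Cumulative threshold: 3-day rainfall P3 (inches) vs. prior 15-day rainfall P15."""
    return p3_in > 3.5 - 0.67 * p15_in

def id_exceeded(intensity_in_per_hr, duration_hr):
    """Intensity-duration threshold I = 3.257 * D^-1.13."""
    return intensity_in_per_hr > 3.257 * duration_hr ** -1.13

def forecast_landslides(p3, p15, intensity, duration):
    # Both thresholds must be exceeded in tandem to forecast landslide occurrence.
    return ct_exceeded(p3, p15) and id_exceeded(intensity, duration)

# Example: 2 in. in 3 days after 3 in. in the prior 15 days, with 0.3 in/hr over 10 hr.
alert = forecast_landslides(2.0, 3.0, 0.3, 10.0)   # -> True
```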

  5. Situational Lightning Climatologies for Central Florida: Phase IV

    NASA Technical Reports Server (NTRS)

    Bauman, William H., III

    2009-01-01

    The threat of lightning is a daily concern during the warm season in Florida. Research has revealed distinct spatial and temporal distributions of lightning occurrence that are strongly influenced by large-scale atmospheric flow regimes. Previously, the Applied Meteorology Unit (AMU) calculated gridded lightning climatologies based on seven flow regimes over Florida for 1-, 3- and 6-hr intervals in 5-, 10-, 20-, and 30-NM diameter range rings around the Shuttle Landing Facility (SLF) and eight other airfields in the National Weather Service in Melbourne (NWS MLB) county warning area (CWA). In this update to the work, the AMU recalculated the lightning climatologies using individual lightning strike data to improve their accuracy. The AMU included all data regardless of flow regime as one of the stratifications, added monthly stratifications, added three years of data to the period of record, and used modified flow regimes based on work from the AMU's Objective Lightning Probability Forecast Tool, Phase II. The AMU made changes so the 5- and 10-NM radius range rings are consistent with the aviation forecast requirements at NWS MLB, while the 20- and 30-NM radius range rings at the SLF assist the Spaceflight Meteorology Group in making forecasts for weather Flight Rule violations during Shuttle landings. The AMU also updated the graphical user interface with the new data.

  6. Some Advances in Downscaling Probabilistic Climate Forecasts for Agricultural Decision Support

    NASA Astrophysics Data System (ADS)

    Han, E.; Ines, A.

    2015-12-01

    Seasonal climate forecasts, commonly provided in tercile-probability format (below-, near- and above-normal), need to be translated into more meaningful information for decision support of practitioners in agriculture. In this paper, we present two novel approaches to temporally downscale probabilistic seasonal climate forecasts: one non-parametric and one parametric. First, the non-parametric downscaling approach called FResampler1 uses the concept of 'conditional block sampling' of weather data to create daily weather realizations of a tercile-based seasonal climate forecast. FResampler1 randomly draws time series of daily weather parameters (e.g., rainfall, maximum and minimum temperature and solar radiation) from historical records, for the season of interest, from years that belong to a certain rainfall tercile category (e.g., below-, near- or above-normal). In this way, FResampler1 preserves the covariance between rainfall and other weather parameters, as if conditionally sampling maximum and minimum temperature and solar radiation according to whether the day is wet or dry. The second approach, called predictWTD, is a parametric method based on a conditional stochastic weather generator. The tercile-based seasonal climate forecast is converted into a theoretical forecast cumulative probability curve. The deviate for each percentile is then converted into rainfall amount, frequency or intensity to downscale the 'full' distribution of the probabilistic seasonal climate forecast. Those seasonal deviates are then disaggregated on a monthly basis and used to constrain the downscaling of forecast realizations at different percentile values of the theoretical forecast curve. In addition to the theoretical basis of the approaches, we discuss a sensitivity analysis (length of record and sample size) for each of them. Their potential applications for managing climate-related risks in agriculture are also shown through case studies based on actual seasonal climate forecasts: rice cropping in the Philippines and maize cropping in India and Kenya.
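    The 'conditional block sampling' idea behind FResampler1 can be sketched as follows: draw a tercile category according to the forecast probabilities, then take a whole season of daily records from a historical year in that category, so the covariance among daily weather variables is preserved. The function name, data layout, and toy records below are assumptions for illustration, not the authors' code.

```python
import random

def fresample(forecast_probs, years_by_tercile, daily_weather, rng=random):
    """forecast_probs: (p_below, p_near, p_above).
    years_by_tercile: {'below': [years], 'near': [...], 'above': [...]}.
    daily_weather: {year: list of daily records}. Returns one seasonal realization."""
    category = rng.choices(["below", "near", "above"], weights=forecast_probs)[0]
    year = rng.choice(years_by_tercile[category])
    # Returning the whole block of daily records preserves the rainfall/temperature
    # covariance of that historical season.
    return daily_weather[year]

weather = {1999: ["d1-dry", "d2-wet"], 2004: ["d1-wet", "d2-wet"], 2010: ["d1-dry", "d2-dry"]}
terciles = {"below": [2010], "near": [1999], "above": [2004]}
realization = fresample((1.0, 0.0, 0.0), terciles, weather)
# With all probability on 'below', the realization is 2010's block of daily weather.
```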

  7. The analysis of rapidly developing fog at the Kennedy Space Center

    NASA Technical Reports Server (NTRS)

    Wheeler, Mark M.; Atchison, Michael K.; Schumann, Robin; Taylor, Greg E.; Yersavich, Ann; Warburton, John D.

    1994-01-01

    This report documents fog precursors and fog climatology at Kennedy Space Center (KSC), Florida, from 1986 to 1990. The major emphasis of this report is on rapidly developing fog events that would affect the less than 7-statute-mile visibility rule for End-Of-Mission (EOM) Shuttle landing at KSC (Rule 4-64(A)). The Applied Meteorology Unit's (AMU's) work is to: develop a data base for study of fog-associated weather conditions relating to violations of this landing constraint; develop forecast techniques or rules-of-thumb to determine whether or not current conditions are likely to result in an acceptable condition at landing; validate the forecast techniques; and transition the techniques to operational use. As part of the analysis, the fog events were categorized as advection, pre-frontal or radiation fog. As a result of these analyses, the AMU developed a fog climatological data base, identified fog precursors, and developed forecaster tools and decision trees. The fog climatological analysis indicates that during the fog season (October to April) there is a higher risk of a visibility violation at KSC during the early morning hours (0700 to 1200 UTC), while 95 percent of all fog events have dissipated by 1600 UTC. A high number of fog events are characterized by a westerly component to the surface wind at KSC (92 percent), and in 83 percent of the fog events fog developed west of KSC first (by up to 2 hours). The AMU developed fog decision trees and forecaster tools to help the forecaster identify fog precursors up to 12 hours in advance. Using the decision trees as process tools ensures that important meteorological data are not overlooked in the forecast process. With these tools and a better understanding of fog formation in the local KSC area, the Shuttle weather support forecaster should be able to give the Launch and Flight Directors a better KSC fog forecast with more confidence.

  8. The Eruption Forecasting Information System: Volcanic Eruption Forecasting Using Databases

    NASA Astrophysics Data System (ADS)

    Ogburn, S. E.; Harpel, C. J.; Pesicek, J. D.; Wellik, J.

    2016-12-01

    Forecasting eruptions, including the onset size, duration, location, and impacts, is vital for hazard assessment and risk mitigation. The Eruption Forecasting Information System (EFIS) project is a new initiative of the US Geological Survey-USAID Volcano Disaster Assistance Program (VDAP) and will advance VDAP's ability to forecast the outcome of volcanic unrest. The project supports probability estimation for eruption forecasting by creating databases useful for pattern recognition, identifying monitoring data thresholds beyond which eruptive probabilities increase, and for answering common forecasting questions. A major component of the project is a global relational database, which contains multiple modules designed to aid in the construction of probabilistic event trees and to answer common questions that arise during volcanic crises. The primary module contains chronologies of volcanic unrest. This module allows us to query eruption chronologies, monitoring data, descriptive information, operational data, and eruptive phases alongside other global databases, such as WOVOdat and the Global Volcanism Program. The EFIS database is in the early stages of development and population; thus, this contribution also is a request for feedback from the community. Preliminary data are already benefitting several research areas. For example, VDAP provided a forecast of the likely remaining eruption duration for Sinabung volcano, Indonesia, using global data taken from similar volcanoes in the DomeHaz database module, in combination with local monitoring time-series data. In addition, EFIS seismologists used a beta-statistic test and empirically-derived thresholds to identify distal volcano-tectonic earthquake anomalies preceding Alaska volcanic eruptions during 1990-2015 to retrospectively evaluate Alaska Volcano Observatory eruption precursors. 
This has identified important considerations for selecting analog volcanoes for global data analysis, such as differences between closed and open system volcanoes.

  9. Seasonal streamflow prediction using ensemble streamflow prediction technique for the Rangitata and Waitaki River basins on the South Island of New Zealand

    NASA Astrophysics Data System (ADS)

    Singh, Shailesh Kumar

    2014-05-01

    Streamflow forecasts are essential for making critical decisions on the optimal allocation of water supplies for various demands, including irrigation for agriculture, habitat for fisheries, hydropower production and flood warning. The major objective of this study is to explore Ensemble Streamflow Prediction (ESP) based forecasts in New Zealand catchments and to highlight the present seasonal flow forecasting capability of the National Institute of Water and Atmospheric Research (NIWA). In this study a probabilistic forecast framework for ESP is presented. The basic assumption in ESP is that future weather patterns were experienced historically. Hence, past forcing data can be used with the current initial conditions to generate an ensemble of predictions. Small differences in initial conditions can result in large differences in the forecast. The initial state of the catchment can be obtained by running the model continuously up to the current time; this initial state is then used with past forcing data to generate an ensemble of future flows. The approach taken here is to run the TopNet hydrological model with a range of past forcing data (precipitation, temperature, etc.) from the current initial conditions. The collection of runs is called the ensemble. ESP gives probabilistic forecasts for flow: probability distributions can be derived from the ensemble members, and these distributions capture part of the intrinsic uncertainty in weather or climate. An ensemble streamflow prediction system that provides probabilistic hydrological forecasts with lead times up to 3 months is presented for the Rangitata, Ahuriri, Hooker and Jollie rivers in the South Island of New Zealand. ESP-based seasonal forecasts have better skill than climatology. This system can provide better overall information for holistic water resource management.
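    The ESP mechanics described above, one shared initial state combined with many historical forcing traces, can be sketched with a toy storage model. The linear-reservoir model and forcing numbers are invented for illustration; TopNet is far more elaborate.

```python
def run_toy_model(initial_storage, rain_trace, k=0.3):
    """Toy linear reservoir: each step releases a fraction k of storage as flow."""
    storage, flows = initial_storage, []
    for rain in rain_trace:
        storage += rain
        flow = k * storage
        storage -= flow
        flows.append(flow)
    return flows

def esp_forecast(initial_storage, historical_forcings):
    """One ensemble member per historical forcing trace, all started from the
    same (current) initial state -- the core ESP assumption."""
    return [run_toy_model(initial_storage, trace) for trace in historical_forcings]

history = [[5, 0, 2], [1, 1, 1], [8, 4, 0]]   # invented past seasons of rainfall
ensemble = esp_forecast(10.0, history)
seasonal_totals = sorted(sum(member) for member in ensemble)
# The spread of seasonal_totals is the probabilistic seasonal flow forecast.
```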

  10. Extended Range Prediction of Indian Summer Monsoon: Current status

    NASA Astrophysics Data System (ADS)

    Sahai, A. K.; Abhilash, S.; Borah, N.; Joseph, S.; Chattopadhyay, R.; S, S.; Rajeevan, M.; Mandal, R.; Dey, A.

    2014-12-01

    The main focus of this study is to develop a forecast consensus for the extended range prediction (ERP) of monsoon intraseasonal oscillations using a suite of different variants of the Climate Forecast System (CFS) model. In this CFS-based Grand MME prediction system (CGMME), the ensemble members are generated by perturbing the initial condition and using different configurations of CFSv2, to address the role of different physical mechanisms known to control error growth in ERP on the 15-20 day time scale. The final formulation of CGMME is based on 21 ensembles of the standalone Global Forecast System (GFS) forced with bias-corrected forecast SST from CFS, 11 low-resolution CFST126 and 11 high-resolution CFST382. Thus, we develop a multi-model consensus forecast for the ERP of the Indian summer monsoon (ISM) using a suite of different variants of the CFS model. This coordinated international effort leads toward the development of specific tailor-made regional forecast products over the Indian region. The skill of deterministic and probabilistic categorical rainfall forecasts, as well as verification of large-scale low-frequency monsoon intraseasonal oscillations, has been assessed using hindcasts from 2001-2012 for the monsoon season, in which all models were initialized every five days from 16 May to 28 September. The skill of the deterministic forecast from CGMME is better than that of the best participating single model ensemble configuration (SME). The CGMME approach is believed to quantify the uncertainty in both initial conditions and model formulation. The main improvement is attained in the probabilistic forecast, because of an increase in the ensemble spread, which reduces the error due to over-confident ensembles in a single model configuration. For the probabilistic forecast, three tercile ranges are determined by a ranking method based on the percentage of ensemble members from all participating models that fall into those three categories. 
CGMME further adds value to both deterministic and probabilistic forecasts compared with the raw SMEs; this better skill probably flows from the larger spread and improved spread-error relationship. The CGMME system is currently capable of generating ER predictions in real time and has successfully delivered experimental operational ER forecasts of the ISM for the last few years.
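    The ranking-based tercile probabilities described above amount to counting the fraction of pooled ensemble members that fall into each climatological tercile. A minimal sketch (the members and tercile boundaries are invented):

```python
def tercile_probabilities(members, lower_bound, upper_bound):
    """Fraction of pooled ensemble members below, within, and above the
    climatological tercile boundaries."""
    n = len(members)
    below = sum(m < lower_bound for m in members) / n
    above = sum(m > upper_bound for m in members) / n
    return below, 1.0 - below - above, above

probs = tercile_probabilities([1, 2, 3, 4, 5, 6], 2.5, 4.5)
# Two members fall in each category, so each probability is 1/3.
```

    Pooling members across all participating model configurations, as CGMME does, widens the spread that these counts are taken over, which is precisely what reduces over-confidence relative to a single model.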

  11. Improving the effectiveness of real-time flood forecasting through Predictive Uncertainty estimation: the multi-temporal approach

    NASA Astrophysics Data System (ADS)

    Barbetta, Silvia; Coccia, Gabriele; Moramarco, Tommaso; Todini, Ezio

    2015-04-01

    The negative effects of severe flood events are usually contrasted through structural measures that, however, do not fully eliminate flood risk. Non-structural measures, such as real-time flood forecasting and warning, are also required. Accurate stage/discharge future predictions with appropriate forecast lead-time are sought by decision-makers for implementing strategies to mitigate the adverse effects of floods. Traditionally, flood forecasting has been approached by using rainfall-runoff and/or flood routing modelling. Indeed, both types of forecasts, cannot be considered perfectly representing future outcomes because of lacking of a complete knowledge of involved processes (Todini, 2004). Nonetheless, although aware that model forecasts are not perfectly representing future outcomes, decision makers are de facto implicitly assuming the forecast of water level/discharge/volume, etc. as "deterministic" and coinciding with what is going to occur. Recently the concept of Predictive Uncertainty (PU) was introduced in hydrology (Krzysztofowicz, 1999), and several uncertainty processors were developed (Todini, 2008). PU is defined as the probability of occurrence of the future realization of a predictand (water level/discharge/volume) conditional on: i) prior observations and knowledge, ii) the available information obtained on the future value, typically provided by one or more forecast models. Unfortunately, PU has been frequently interpreted as a measure of lack of accuracy rather than the appropriate tool allowing to take the most appropriate decisions, given a model or several models' forecasts. With the aim to shed light on the benefits for appropriately using PU, a multi-temporal approach based on the MCP approach (Todini, 2008; Coccia and Todini, 2011) is here applied to stage forecasts at sites along the Upper Tiber River. 
Specifically, the STAge Forecasting-Rating Curve Model Muskingum-based (STAFOM-RCM) (Barbetta et al., 2014) and the Rating-Curve Model in Real Time (RCM-RT) (Barbetta and Moramarco, 2014) are used to this end. Both models, without explicitly considering rainfall information, account at each forecast time for the estimated lateral contribution along the river reach for which the stage forecast is made at the downstream end. The analysis is performed for several reaches, using different lead times according to channel length. References: Barbetta, S., Moramarco, T., Brocca, L., Franchini, M. and Melone, F. 2014. Confidence interval of real-time forecast stages provided by the STAFOM-RCM model: the case study of the Tiber River (Italy). Hydrological Processes, 28(3), 729-743. Barbetta, S. and Moramarco, T. 2014. Real-time flood forecasting by relating local stage and remote discharge. Hydrological Sciences Journal, 59(9), 1656-1674. Coccia, G. and Todini, E. 2011. Recent developments in predictive uncertainty assessment based on the Model Conditional Processor approach. Hydrology and Earth System Sciences, 15, 3253-3274. doi:10.5194/hess-15-3253-2011. Krzysztofowicz, R. 1999. Bayesian theory of probabilistic forecasting via deterministic hydrologic model. Water Resources Research, 35, 2739-2750. Todini, E. 2004. Role and treatment of uncertainty in real-time flood forecasting. Hydrological Processes, 18(14), 2743-2746. Todini, E. 2008. A model conditional processor to assess predictive uncertainty in flood forecasting. International Journal of River Basin Management, 6(2), 123-137.

  12. Thunderstorm Research International Program (TRIP 77) report to management

    NASA Technical Reports Server (NTRS)

    Taiani, A. J.

    1977-01-01

    A post analysis of the previous day's weather, followed by the day's forecast and an outlook on weather conditions for the following day is given. The normal NOAA weather charts were used, complemented by the latest GOES satellite pictures, the latest rawinsonde sounding, and the computer-derived thunderstorm probability forecasts associated with the sounding.

  13. How might Model-based Probabilities Extracted from Imperfect Models Guide Rational Decisions: The Case for non-probabilistic odds

    NASA Astrophysics Data System (ADS)

    Smith, Leonard A.

    2010-05-01

    This contribution concerns "deep" or "second-order" uncertainty, such as the uncertainty in our probability forecasts themselves. It asks the question: "Is it rational to take (or offer) bets using model-based probabilities as if they were objective probabilities?" If not, what alternative approaches for determining odds, perhaps non-probabilistic odds, might prove useful in practice, given that we know our models are imperfect? We consider the case where the aim is to provide sustainable odds: not to produce a profit but merely to rationally expect to break even in the long run. In other words, to run a quantified risk of ruin that is relatively small. Thus the cooperative insurance schemes of coastal villages provide a more appropriate parallel than a casino. A "better" probability forecast would lead to lower premiums charged and less volatile fluctuations in the cash reserves of the village. Note that the Bayesian paradigm does not constrain one to interpret model distributions as subjective probabilities, unless one believes the model to be empirically adequate for the task at hand. In geophysics, this is rarely the case. When a probability forecast is interpreted as the objective probability of an event, the odds on that event can be easily computed as one divided by the probability of the event, and one need not favour taking either side of the wager. (Here we are using "odds-for" not "odds-to", the difference being whether or not the stake is returned; odds of one to one are equivalent to odds of two for one.) The critical question is how to compute sustainable odds based on information from imperfect models. We suggest that this breaks the symmetry between the odds-on an event and the odds-against it. While a probability distribution can always be translated into odds, interpreting the odds on a set of events might result in "implied-probabilities" that sum to more than one. And/or the set of odds may be incomplete, not covering all events.
We ask whether or not probabilities based on imperfect models can be expected to yield probabilistic odds which are sustainable. Evidence is provided that suggests this is not the case. Even with very good models (good in a Root-Mean-Square sense), the risk of ruin of probabilistic odds is significantly higher than might be expected. Methods for constructing model-based non-probabilistic odds which are sustainable are discussed. The aim here is to be relevant to real-world decision support, and so unrealistic assumptions of equal knowledge, equal compute power, or equal access to information are to be avoided. Finally, the use of non-probabilistic odds as a method for communicating deep uncertainty (uncertainty in a probability forecast itself) is discussed in the context of other methods, such as stating one's subjective probability that the models will prove inadequate in each particular instance (that is, the Probability of a "Big Surprise").
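
The odds bookkeeping in this abstract can be sketched numerically. The snippet below is our illustration, not the author's method; the 10% "shortening" factor is an arbitrary assumed value. It converts a probability forecast into "odds-for" quotes (payout includes the stake) and shows how shortened, non-probabilistic odds break the symmetry: their implied probabilities sum to more than one.

```python
# Illustrative sketch only: "odds-for" = total payout per unit stake, so
# odds_for = 1 / p. Odds of two for one equal odds of one to one ("odds-to"),
# since odds-to excludes the returned stake.

def odds_for(p):
    """Odds-for an event with probability p."""
    return 1.0 / p

def implied_probabilities(odds):
    """Implied probabilities recovered from a set of odds-for quotes."""
    return [1.0 / o for o in odds]

# A probability forecast over three mutually exclusive outcomes:
probs = [0.5, 0.3, 0.2]
quotes = [odds_for(p) for p in probs]
# Probabilistic odds translate back exactly: implied probabilities sum to one.
assert abs(sum(implied_probabilities(quotes)) - 1.0) < 1e-9

# Non-probabilistic (shortened) odds: here a hypothetical 10% shortening.
shortened = [o * 0.9 for o in quotes]
margin = sum(implied_probabilities(shortened))
print(margin)  # about 1.11: the implied probabilities now sum to more than one
```

Offering such shortened odds on every outcome is the simplest way a cooperative scheme builds in a buffer against ruin, at the cost of no longer quoting a coherent probability distribution.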

  14. Forecasting distribution of numbers of large fires

    USGS Publications Warehouse

    Eidenshink, Jeffery C.; Preisler, Haiganoush K.; Howard, Stephen; Burgan, Robert E.

    2014-01-01

    Systems to estimate forest fire potential commonly utilize one or more indexes that relate to expected fire behavior; however, they indicate neither the chance that a large fire will occur nor the expected number of large fires. That is, they do not quantify the probabilistic nature of fire danger. In this work we use large fire occurrence information from the Monitoring Trends in Burn Severity project, together with satellite and surface observations of fuel conditions in the form of the Fire Potential Index, to estimate two aspects of fire danger: 1) the probability that a 1-acre ignition will result in a 100+ acre fire, and 2) the probabilities of having at least 1, 2, 3, or 4 large fires within a Predictive Services Area in the forthcoming week. These statistical processes are the main thrust of the paper and are used to produce two daily national forecasts that are available from the U.S. Geological Survey, Earth Resources Observation and Science Center and via the Wildland Fire Assessment System. A validation study of our forecasts for the 2013 fire season demonstrated good agreement between observed and forecasted values.

  15. Interval forecasting of cyberattack intensity on informatization objects of industry using probability cluster model

    NASA Astrophysics Data System (ADS)

    Krakovsky, Y. M.; Luzgin, A. N.; Mikhailova, E. A.

    2018-05-01

    At present, cyber-security issues associated with the informatization objects of industry occupy one of the key niches in the state management system. A functional disruption of these systems via cyberattacks may cause an emergency involving loss of life, environmental disaster, major financial and economic damage, or disrupted activities of cities and settlements. When cyberattacks occur with high intensity, there is a need to develop protection against them based on machine learning methods. This paper examines interval forecasting and presents results for a pre-set intensity level. The interval forecasting is carried out with a probabilistic cluster model. The method forecasts which of two predetermined intervals a future value of the indicator will fall into, using probability estimates for this purpose. The dividing bound between the intervals is determined by a calculation method based on statistical characteristics of the indicator. The source data comprise hourly counts of cyberattacks recorded by a honeypot from March to September 2013.

  16. The scientific management of volcanic crises

    NASA Astrophysics Data System (ADS)

    Marzocchi, Warner; Newhall, Christopher; Woo, Gordon

    2012-12-01

    Sound scientific management of volcanic crises is the primary tool for significantly reducing volcanic risk in the short term. At present, a wide variety of qualitative or semi-quantitative strategies is adopted, and there is not yet a commonly accepted quantitative and general strategy. Pre-eruptive processes are extremely complicated, with many nonlinearly coupled and poorly known degrees of freedom, so scientists must quantify eruption forecasts through the use of probabilities. This, in turn, forces decision-makers to make decisions under uncertainty. We review the present state of the art in this field in order to identify the main gaps in existing procedures. We then put forward a general quantitative procedure that may overcome the present barriers, providing guidelines on how probabilities may be used to take rational mitigation actions. These procedures constitute a crucial link between science and society; they can be used to establish objective and transparent decision-making protocols and also to clarify the role and responsibility of each partner involved in managing a crisis.

  17. Applied Meteorology Unit (AMU) Quarterly Report. First Quarter FY-05

    NASA Technical Reports Server (NTRS)

    Bauman, William; Wheeler, Mark; Lambert, Winifred; Case, Jonathan; Short, David

    2005-01-01

    This report summarizes the Applied Meteorology Unit (AMU) activities for the first quarter of Fiscal Year 2005 (October - December 2004). Tasks reviewed include: (1) Objective Lightning Probability Forecast: Phase I, (2) Severe Weather Forecast Decision Aid, (3) Hail Index, (4) Stable Low Cloud Evaluation, (5) Shuttle Ascent Camera Cloud Obstruction Forecast, (6) Range Standardization and Automation (RSA) and Legacy Wind Sensor Evaluation, (7) Advanced Regional Prediction System (ARPS) Optimization and Training Extension, and (8) User Control Interface for ARPS Data Analysis System (ADAS) Data Ingest.

  18. The new Met Office strategy for seasonal forecasts

    NASA Astrophysics Data System (ADS)

    Hewson, T. D.

    2012-04-01

    In October 2011 the Met Office began issuing a new-format UK seasonal forecast, called "The 3-month Outlook". Government interest in a UK-relevant product had been heightened by infrastructure issues arising during the severe cold of previous winters. At the same time there was evidence that the Met Office's "GLOSEA4" long-range forecasting system exhibited hindcast skill for the UK comparable to its hindcast skill for the larger (and therefore less useful) 'northern Europe' region. Also, the NAO and AO signals prevailing in the previous two winters had been highlighted by the GLOSEA4 model well in advance. This presentation will initially give a brief overview of GLOSEA4, describing key features such as evolving sea ice, a well-resolved stratosphere, and the perturbation strategy. Skill measures will be shown, along with forecasts for the last three winters. The structure of the new 3-month outlook will then be described and presented. Previously, our seasonal forecasts had been based on a tercile approach. The new-format outlook aims to improve upon this substantially by illustrating graphically, and with text, the full range of possible outcomes, and by placing those outcomes in the context of climatology. In one key component, the forecast pdfs (probability density functions) are displayed alongside climatological pdfs. To generate the forecast pdf we take the bias-corrected GLOSEA4 output (42 members) and then incorporate, via an expert team, all other relevant information. First, model forecasts from other centres are examined. Then external 'forcing factors', such as solar activity and the state of the land-ocean-ice system, are referenced, assessing how well the models represent their influence and bringing in statistical relationships where appropriate. The expert team thereby decides upon any changes to the GLOSEA4 data, employing an interactive tool to shift, expand or contract the forecast pdfs accordingly.
The full modification process will be illustrated during the presentation. Another key component of the 3-month outlook is the focus it places on potential hazards and impacts. To date specific references have been made to snow and ice disruption, to replenishment expectation for regions suffering water supply shortages, and to windstorm frequency. This aspect will be discussed, showing also some subjective verification. In future we hope to extend the 3-month outlook framework to other parts of the world, notably Africa, a region where the Met Office, with DfID support, is working collaboratively to improve real-time long range forecasts. Brief reference will also be made to such activities.

  19. Some Initiatives in a Business Forecasting Course

    ERIC Educational Resources Information Center

    Chu, Singfat

    2007-01-01

    The paper reports some initiatives to freshen up the typical undergraduate business forecasting course. These include (1) students doing research and presentations on contemporary tools and industry practices such as neural networks and collaborative forecasting (2) insertion of Logistic Regression in the curriculum (3) productive use of applets…

  20. Validation of potential fishing zone forecast using experimental fishing method in Tolo Bay, Central Sulawesi Province

    NASA Astrophysics Data System (ADS)

    Rintaka, W. E.; Susilo, E.

    2018-04-01

    The national-scale Indonesian Potential Fishing Zone (PFZ) forecast system has been in place since 2000. In recent times this system has used a Single Image Edge Detection algorithm to automatically identify thermal fronts in remote sensing images. It generates two criteria: fishing ground (FG, high probability) and potential fishing ground (PFG, medium/low probability). To quantify the accuracy of this algorithm, an experimental fishing (EF) campaign was carried out in Tolo Bay, Central Sulawesi Province in September 2016, the late southeast monsoon period, using a pole-and-line fishing vessel. Four fishing activities (P1, P2, P3, and P4) were conducted during this study at different locations near the PFZ forecast positions; two of them had good results. Based on distance measurements, locations P1 and P4 were associated with the PFZ forecast positions, under the PFG and FG criteria respectively. The distances from P1 and P4 to the forecast positions were 9.7 and 6.69 nautical miles, and the catches at the two locations were 850 and 900 kg, respectively. The other locations, P2 and P3, were also associated with the PFG criteria, but there was no catch. We conclude that the size of the catch is influenced by the distance from the PFZ forecast position and by the forecast criteria.

  1. Short-term volcano-tectonic earthquake forecasts based on a moving mean recurrence time algorithm: the El Hierro seismo-volcanic crisis experience

    NASA Astrophysics Data System (ADS)

    García, Alicia; De la Cruz-Reyna, Servando; Marrero, José M.; Ortiz, Ramón

    2016-05-01

    Under certain conditions, volcano-tectonic (VT) earthquakes may pose significant hazards to people living in or near active volcanic regions, especially on volcanic islands; however, hazard arising from VT activity caused by localized volcanic sources is rarely addressed in the literature. The evolution of VT earthquakes resulting from a magmatic intrusion shows some orderly behaviour that may allow the occurrence and magnitude of major events to be forecast. Thus governmental decision makers can be supplied with warnings of the increased probability of larger-magnitude earthquakes on the short-term timescale. We present here a methodology for forecasting the occurrence of large-magnitude VT events during volcanic crises; it is based on a mean recurrence time (MRT) algorithm that translates the Gutenberg-Richter distribution parameter fluctuations into time windows of increased probability of a major VT earthquake. The MRT forecasting algorithm was developed after observing a repetitive pattern in the seismic swarm episodes occurring between July and November 2011 at El Hierro (Canary Islands). From then on, this methodology has been applied to the consecutive seismic crises registered at El Hierro, achieving a high success rate in the real-time forecasting, within 10-day time windows, of volcano-tectonic earthquakes.

  2. Quantifying the Economic and Grid Reliability Impacts of Improved Wind Power Forecasting

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Qin; Martinez-Anido, Carlo Brancucci; Wu, Hongyu

    Wind power forecasting is an important tool in power system operations to address variability and uncertainty. Accurate forecasting is important for reducing the occurrence and length of curtailment, enhancing market efficiency, and improving the operational reliability of the bulk power system. This research quantifies the value of wind power forecasting improvements in the IEEE 118-bus test system as modified to emulate the generation mixes of the Midcontinent, California, and New England independent system operator balancing authority areas. To measure the economic value, a commercially available production cost modeling tool was used to simulate the multi-timescale unit commitment (UC) and economic dispatch process for calculating the cost savings and curtailment reductions. To measure the reliability improvements, an in-house tool, FESTIV, was used to calculate the system's area control error and the North American Electric Reliability Corporation Control Performance Standard 2. The approach allowed scientific reproducibility of results and cross-validation of the tools. A total of 270 scenarios were evaluated to accommodate the variation of three factors: generation mix, wind penetration level, and wind forecasting improvements. The modified IEEE 118-bus systems utilized 1 year of data at multiple timescales, including the day-ahead UC, 4-hour-ahead UC, and 5-min real-time dispatch. The value of improved wind power forecasting was found to be strongly tied to the conventional generation mix, the existence of energy storage devices, and the penetration level of wind energy. The simulation results demonstrate that wind power forecasting brings clear benefits to power system operations.

  3. Next-Day Earthquake Forecasts for California

    NASA Astrophysics Data System (ADS)

    Werner, M. J.; Jackson, D. D.; Kagan, Y. Y.

    2008-12-01

    We implemented a daily forecast of m > 4 earthquakes for California in a format suitable for testing in community-based earthquake predictability experiments: Regional Earthquake Likelihood Models (RELM) and the Collaboratory for the Study of Earthquake Predictability (CSEP). The forecast is based on near-real-time earthquake reports from the ANSS catalog above magnitude 2 and will be available online. The model used to generate the forecasts is based on the Epidemic-Type Earthquake Sequence (ETES) model, a stochastic model of clustered and triggered seismicity. Our particular implementation is based on the earlier work of Helmstetter et al. (2006, 2007), but we extended the forecast to all of California, used more data to calibrate the model and its parameters, and made some modifications. Our forecasts will compete against the Short-Term Earthquake Probabilities (STEP) forecasts of Gerstenberger et al. (2005) and other models in the next-day testing class of the CSEP experiment in California. We illustrate our forecasts with examples and discuss preliminary results.

  4. Forecasting Dust Storms Using the CARMA-Dust Model and MM5 Weather Data

    NASA Astrophysics Data System (ADS)

    Barnum, B. H.; Winstead, N. S.; Wesely, J.; Hakola, A.; Colarco, P.; Toon, O. B.; Ginoux, P.; Brooks, G.; Hasselbarth, L. M.; Toth, B.; Sterner, R.

    2002-12-01

    An operational model for the forecast of dust storms in Northern Africa, the Middle East, and Southwest Asia has been developed for the United States Air Force Weather Agency (AFWA). The dust forecast model uses the 5th-generation Penn State Mesoscale Meteorology Model (MM5) and a modified version of the Colorado Aerosol and Radiation Model for Atmospheres (CARMA). AFWA conducted a 60-day evaluation of the dust model to assess its ability to forecast dust storms for short-, medium-, and long-range (72-hour) forecast periods. The study used satellite and ground observations of dust storms to verify the model's effectiveness. Each of the main mesoscale forecast theaters was broken down into smaller sub-regions for detailed analysis. The study found the model was able to forecast dust storms in Saharan Africa and the Sahel region with an average Probability of Detection (POD) exceeding 68% and a 16% False Alarm Rate (FAR). The Southwest Asian theater had average PODs of 61% with FARs averaging 10%.

  5. Probability, propensity and probability of propensities (and of probabilities)

    NASA Astrophysics Data System (ADS)

    D'Agostini, Giulio

    2017-06-01

    The process of doing Science under conditions of uncertainty is illustrated with a toy experiment in which both the inferential and the forecasting aspects are present. The fundamental aspects of probabilistic reasoning, also relevant in real-life applications, arise quite naturally, and the resulting discussion among non-ideologized, free-minded people offers an opportunity for clarifications.

  6. Assessing the Utility of Seasonal SST Forecasts to the Fisheries Management Process: a Pacific Sardine Case Study

    NASA Astrophysics Data System (ADS)

    Tommasi, D.; Stock, C. A.

    2016-02-01

    It is well established that environmental fluctuations affect the productivity of numerous fish stocks. Recent advances in the prediction capability of dynamical global forecast systems, such as the state-of-the-art NOAA Geophysical Fluid Dynamics Laboratory (GFDL) 2.5-FLOR model, allow for climate predictions of fisheries-relevant variables at temporal scales relevant to the fishery management decision-making process. We demonstrate that the GFDL FLOR model produces skillful seasonal SST anomaly predictions over the continental shelf, where most of the global fish yield is generated. The availability of skillful SST projections at this "fishery relevant" scale raises the potential for better-constrained estimates of future fish biomass and improved harvest decisions. We assessed the utility of seasonal SST coastal shelf predictions for fisheries management using the case study of Pacific sardine. This fishery was selected because it is one of the few to already incorporate SST into its harvest guideline, and it shows a robust recruitment-SST relationship. We quantified the effectiveness of management under the status quo harvest guideline (HG) and under alternative HGs incorporating future information at different levels of uncertainty. The usefulness of forecast SST to management depended on forecast uncertainty. If the standard deviation of the SST anomaly forecast residuals was less than 0.65, the alternative HG produced higher long-term yield and stock biomass, and reduced the probability of either catch or stock biomass falling below management-set threshold values, as compared to the status quo. By contrast, the probability of biomass falling to extremely low values increased relative to the status quo for all alternative HGs except for a perfectly known future SST case. To safeguard against such low-probability but costly events, a harvest cutoff biomass also has to be implemented in the HG.

  7. A prototype system for forecasting landslides in the Seattle, Washington, area

    USGS Publications Warehouse

    Chleborad, Alan F.; Baum, Rex L.; Godt, Jonathan W.; Powers, Philip S.

    2008-01-01

    Empirical rainfall thresholds and related information form the basis of a prototype system for forecasting landslides in the Seattle area. The forecasts are tied to four alert levels, and a decision tree guides the use of thresholds to determine the appropriate level. From analysis of historical landslide data, we developed a formula for a cumulative rainfall threshold (CT), P3 = 88.9 − 0.67·P15, defined by rainfall amounts in millimeters during consecutive 3 d (72 h) periods, P3, and the 15 d (360 h) period before P3, P15. The CT captures more than 90% of historical events of three or more landslides in 1 d and 3 d periods recorded from 1978 to 2003. However, the low probability of landslide occurrence on a day when the CT is exceeded at one or more rain gauges (8.4%) justifies only a low level of alert for possible landslide occurrence, though it does trigger more vigilant monitoring of rainfall and soil wetness. Exceedance of a rainfall intensity-duration threshold, I = 82.73·D^(−1.13), for intensity I (mm/hr) and duration D (hr), corresponds to a higher probability of landslide occurrence (30%) and forms the basis for issuing warnings of impending, widespread occurrence of landslides. Information about the area of exceedance and soil wetness can be used to increase the certainty of landslide forecasts (probabilities as great as 71%). Automated analysis of real-time rainfall and subsurface water data and digital quantitative precipitation forecasts are needed to fully implement a warning system based on the two thresholds.

  8. The Integrated Medical Model: A Risk Assessment and Decision Support Tool for Human Space Flight Missions

    NASA Technical Reports Server (NTRS)

    Kerstman, Eric L.; Minard, Charles; FreiredeCarvalho, Mary H.; Walton, Marlei E.; Myers, Jerry G., Jr.; Saile, Lynn G.; Lopez, Vilma; Butler, Douglas J.; Johnson-Throop, Kathy A.

    2011-01-01

    This slide presentation reviews the Integrated Medical Model (IMM) and its use as a risk assessment and decision support tool for human space flight missions. The IMM is an integrated, quantified, evidence-based decision support tool useful to NASA crew health and mission planners. It is intended to assist in optimizing crew health, safety, and mission success within the constraints of the space flight environment for in-flight operations. It uses ISS data to assist in planning for the Exploration Program; it is not intended to support post-flight research. The IMM was used to update the Probabilistic Risk Assessment (PRA) forecasts for the conditions requiring evacuation (EVAC) or Loss of Crew Life (LOC) on the ISS. The IMM validation approach includes comparison with actual events and involves both qualitative and quantitative approaches; the results of these comparisons are reviewed. Another use of the IMM is to optimize the medical kits, taking into consideration the specific mission and the crew profile. An example of this use of the IMM is reviewed.

  9. Evolving forecasting classifications and applications in health forecasting

    PubMed Central

    Soyiri, Ireneous N; Reidpath, Daniel D

    2012-01-01

    Health forecasting forewarns the health community about future health situations and disease episodes so that health systems can better allocate resources and manage demand. The tools used for developing health forecasts and for measuring their accuracy and validity are commonly not well defined, although they are usually adapted forms of statistical procedures. This review identifies previous typologies used in classifying the forecasting methods commonly used in forecasting health conditions or situations. It then discusses the strengths and weaknesses of these methods and presents the choices available for measuring the accuracy of health-forecasting models, including a note on the discrepancies in the modes of validation. PMID:22615533

  10. Using prediction markets to estimate the reproducibility of scientific research.

    PubMed

    Dreber, Anna; Pfeiffer, Thomas; Almenberg, Johan; Isaksson, Siri; Wilson, Brad; Chen, Yiling; Nosek, Brian A; Johannesson, Magnus

    2015-12-15

    Concerns about a lack of reproducibility of statistically significant results have recently been raised in many fields, and it has been argued that this lack comes at substantial economic costs. We here report the results from prediction markets set up to quantify the reproducibility of 44 studies published in prominent psychology journals and replicated in the Reproducibility Project: Psychology. The prediction markets predict the outcomes of the replications well and outperform a survey of market participants' individual forecasts. This shows that prediction markets are a promising tool for assessing the reproducibility of published scientific results. The prediction markets also allow us to estimate probabilities for the hypotheses being true at different testing stages, which provides valuable information regarding the temporal dynamics of scientific discovery. We find that the hypotheses being tested in psychology typically have low prior probabilities of being true (median, 9%) and that a "statistically significant" finding needs to be confirmed in a well-powered replication to have a high probability of being true. We argue that prediction markets could be used to obtain speedy information about reproducibility at low cost and could potentially even be used to determine which studies to replicate to optimally allocate limited resources into replications.
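
The staged updating of truth probabilities described in this abstract can be illustrated with a plain Bayes-rule sketch. This is our illustration, not the authors' market mechanism; the 5% false-positive rate and 80% power are conventional assumed values, and the 9% prior is the median reported in the abstract.

```python
def posterior(prior, power=0.8, alpha=0.05):
    """P(hypothesis true | significant result), by Bayes' rule, assuming the
    test has the given power and false-positive rate alpha."""
    return (power * prior) / (power * prior + alpha * (1 - prior))

p0 = 0.09            # median prior probability of a hypothesis being true
p1 = posterior(p0)   # after one "statistically significant" finding
p2 = posterior(p1)   # after a successful well-powered replication
print(round(p1, 2), round(p2, 2))  # -> 0.61 0.96
```

Under these assumed error rates, a single significant result leaves the hypothesis with only about a 61% chance of being true; it takes the successful replication to push the probability above 95%, matching the abstract's point that significance alone is weak evidence when priors are low.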

  11. Using prediction markets to estimate the reproducibility of scientific research

    PubMed Central

    Dreber, Anna; Pfeiffer, Thomas; Almenberg, Johan; Isaksson, Siri; Wilson, Brad; Chen, Yiling; Nosek, Brian A.; Johannesson, Magnus

    2015-01-01

    Concerns about a lack of reproducibility of statistically significant results have recently been raised in many fields, and it has been argued that this lack comes at substantial economic costs. We here report the results from prediction markets set up to quantify the reproducibility of 44 studies published in prominent psychology journals and replicated in the Reproducibility Project: Psychology. The prediction markets predict the outcomes of the replications well and outperform a survey of market participants’ individual forecasts. This shows that prediction markets are a promising tool for assessing the reproducibility of published scientific results. The prediction markets also allow us to estimate probabilities for the hypotheses being true at different testing stages, which provides valuable information regarding the temporal dynamics of scientific discovery. We find that the hypotheses being tested in psychology typically have low prior probabilities of being true (median, 9%) and that a “statistically significant” finding needs to be confirmed in a well-powered replication to have a high probability of being true. We argue that prediction markets could be used to obtain speedy information about reproducibility at low cost and could potentially even be used to determine which studies to replicate to optimally allocate limited resources into replications. PMID:26553988

  12. The Integrated Medical Model: A Risk Assessment and Decision Support Tool for Space Flight Medical Systems

    NASA Technical Reports Server (NTRS)

    Kerstman, Eric; Minard, Charles; Saile, Lynn; deCarvalho, Mary Freire; Myers, Jerry; Walton, Marlei; Butler, Douglas; Iyengar, Sriram; Johnson-Throop, Kathy; Baumann, David

    2009-01-01

    The Integrated Medical Model (IMM) is a decision support tool that is useful to mission planners and medical system designers in assessing risks and designing medical systems for space flight missions. The IMM provides an evidence based approach for optimizing medical resources and minimizing risks within space flight operational constraints. The mathematical relationships among mission and crew profiles, medical condition incidence data, in-flight medical resources, potential crew functional impairments, and clinical end-states are established to determine probable mission outcomes. Stochastic computational methods are used to forecast probability distributions of crew health and medical resource utilization, as well as estimates of medical evacuation and loss of crew life. The IMM has been used in support of the International Space Station (ISS) medical kit redesign, the medical component of the ISS Probabilistic Risk Assessment, and the development of the Constellation Medical Conditions List. The IMM also will be used to refine medical requirements for the Constellation program. The IMM outputs for ISS and Constellation design reference missions will be presented to demonstrate the potential of the IMM in assessing risks, planning missions, and designing medical systems. The implementation of the IMM verification and validation plan will be reviewed. Additional planned capabilities of the IMM, including optimization techniques and the inclusion of a mission timeline, will be discussed. Given the space flight constraints of mass, volume, and crew medical training, the IMM is a valuable risk assessment and decision support tool for medical system design and mission planning.

  13. The Psychology of Hazard Risk Perception

    NASA Astrophysics Data System (ADS)

    Thompson, K. F.

    2012-12-01

    A critical step in preparing for natural hazards is understanding the risk: what is the hazard, its likelihood and range of impacts, and what are the vulnerabilities of the community? Any hazard forecast naturally includes a degree of uncertainty, and often these uncertainties are expressed in terms of probabilities. There is often a strong understanding of probability among the physical scientists and emergency managers who create hazard forecasts and issue watches, warnings, and evacuation orders, and often such experts expect similar levels of risk fluency among the general public—indeed, the Working Group on California Earthquake Probabilities (WGCEP) states in the introduction to its earthquake rupture forecast maps that "In daily living, people are used to making decisions based on probabilities—from the flip of a coin (50% probability of heads) to weather forecasts (such as a 30% chance of rain) to the annual chance of being killed by lightning (about 0.0003%)." [1] However, cognitive psychologists have shown in numerous studies [see, e.g., 2-5] that the WGCEP's expectation of probability literacy is inaccurate. People neglect, distort, misjudge, or misuse probability information, even when given strong guidelines about the meaning of numerical or verbally stated probabilities [6]. Even the most ubiquitous of probabilistic information—weather forecasts—are systematically misinterpreted [7]. So while disaster risk analysis and assessment is undoubtedly a critical step in public preparedness and hazard mitigation plans, it is equally important that scientists and practitioners understand the common psychological barriers to accurate probability perception before they attempt to communicate hazard risks to the public. 
    This paper discusses several common, systematic distortions in probability perception and use, including: the influence of personal experience on use of statistical information; temporal discounting and construal level theory; the effect of instrumentality on risk perception; and the impact of "false alarms" or "near misses." We conclude with practical recommendations for ways that risk communications may best be presented to avoid (or, in some cases, to capitalize on) these typical psychological hurdles to the understanding of risk.
    References:
    1. http://www.scec.org/ucerf/
    2. Kahneman, D., & Tversky, A. (1979). Prospect theory: An analysis of decision under risk. Econometrica, 47, 263-291.
    3. Hau, R., Pleskac, T. J., Kiefer, J., & Hertwig, R. (2008). The description/experience gap in risky choice: The role of sample size and experienced probabilities. Journal of Behavioral Decision Making, 21, 493-518.
    4. Lichtenstein, S., Slovic, P., Fischhoff, B., Layman, M., & Combs, B. (1978). Judged frequency of lethal events. JEP: Human Learning and Memory, 4, 551-578.
    5. Hertwig, R., Barron, G., Weber, E. U., & Erev, I. (2006). The role of information sampling in risky choice. In K. Fiedler & P. Juslin (Eds.), Information sampling and adaptive cognition (pp. 75-91). New York: Cambridge University Press.
    6. Budescu, D. V., Weinberg, S., & Wallsten, T. S. (1987). Decisions based on numerically and verbally expressed uncertainties. JEP: Human Perception and Performance, 14(2), 281-294.
    7. Gigerenzer, G., Hertwig, R., Van Den Broek, E., Fasolo, B., & Katsikopoulos, K. V. (2005). "A 30% chance of rain tomorrow": How does the public understand probabilistic weather forecasts? Risk Analysis, 25(3), 623-629.

  14. Prediction of sea ice thickness cluster in the Northern Hemisphere

    NASA Astrophysics Data System (ADS)

    Fuckar, Neven-Stjepan; Guemas, Virginie; Johnson, Nathaniel; Doblas-Reyes, Francisco

    2016-04-01

    Sea ice thickness (SIT) has the potential to contain substantial climate memory and predictability in the northern hemisphere (NH) sea ice system. We use a 5-member NH SIT ensemble, reconstructed with an ocean-sea-ice general circulation model (NEMOv3.3 with LIM2) with a simple data assimilation routine, to determine NH SIT modes of variability disentangled from long-term climate change. Specifically, we apply K-means cluster analysis - one of the nonhierarchical clustering methods that partition data into modes or clusters based on their distances in physical space - to determine the optimal number of NH SIT clusters (K=3) and their historical variability. To examine the prediction skill of NH SIT clusters in EC-Earth2.3, a state-of-the-art coupled climate forecast system, we use 5-member ocean and sea ice initial conditions (IC) from the same ocean-sea-ice historical reconstruction and atmospheric IC from ERA-Interim reanalysis. We focus on May 1st and Nov 1st start dates from 1979 to 2010. Common skill metrics of probability forecasts, such as the ranked probability skill score, ROC (relative operating characteristic - hit rate versus false alarm rate) and reliability diagrams, show that our dynamical model predominantly performs better than a 1st-order Markov chain forecast (which beats the climatological forecast) over the first forecast year. On average, May 1st start dates initially have lower skill than Nov 1st start dates, but their skill degrades at a slower rate than that of forecasts started on Nov 1st.
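    The K-means partitioning step described above can be sketched as follows (Lloyd's algorithm in pure Python). The toy 2D points standing in for gridded SIT anomaly patterns are an assumption for illustration only.

```python
import random

def kmeans(points, k, iters=50, seed=0):
    """Plain K-means: assign each point to the nearest centroid by
    squared Euclidean distance, then recompute centroids as cluster
    means (Lloyd's algorithm)."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            j = min(range(k),
                    key=lambda i: sum((a - b) ** 2 for a, b in zip(p, centroids[i])))
            clusters[j].append(p)
        # Keep the old centroid if a cluster goes empty
        centroids = [
            tuple(sum(c) / len(cl) for c in zip(*cl)) if cl else centroids[i]
            for i, cl in enumerate(clusters)
        ]
    return centroids, clusters

# Toy 2D "thickness anomaly" coordinates; real input would be SIT fields
pts = [(0.1, 0.2), (0.2, 0.1), (3.0, 3.1), (3.2, 2.9), (6.0, 0.1), (5.9, 0.2)]
cents, cls = kmeans(pts, k=3)
```

    In practice the optimal K is chosen by comparing within-cluster variance across candidate values, which is how a choice such as K=3 is justified.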

  15. Spatially explicit forecasts of large wildland fire probability and suppression costs for California

    Treesearch

    Haiganoush Preisler; Anthony L. Westerling; Krista M. Gebert; Francisco Munoz-Arriola; Thomas P. Holmes

    2011-01-01

    In the last decade, increases in fire activity and suppression expenditures have caused budgetary problems for federal land management agencies. Spatial forecasts of upcoming fire activity and costs have the potential to help reduce expenditures, and increase the efficiency of suppression efforts, by enabling agencies to focus resources where they will have the greatest effect...

  16. Users Guide for the Anvil Threat Corridor Forecast Tool V1.7.0 for AWIPS

    NASA Technical Reports Server (NTRS)

    Bauman, William H., III

    2007-01-01

    The Applied Meteorology Unit (AMU) originally developed the Anvil Threat Sector Tool for the Meteorological Interactive Data Display System (MIDDS) and delivered the capability in three phases beginning with a feasibility study in 2000 and delivering the operational final product in December 2003. This tool is currently used operationally by the 45th Weather Squadron (45 WS) Launch Weather Officers (LWO) and Spaceflight Meteorology Group (SMG) forecasters. Phase I of the task established the technical feasibility of developing an objective, observations-based tool for short-range anvil forecasting. The AMU was subsequently tasked to develop short-term anvil forecasting tools to improve predictions of the threat of triggered lightning to space launch and landing vehicles. Under the Phase II effort, the AMU developed a nowcasting anvil threat sector tool, which provided the user with a threat sector based on the most current radiosonde upper wind data from a co-located or upstream station. The Phase II Anvil Threat Sector Tool computes the average wind speed and direction in the layer between 300 and 150 mb from the latest radiosonde for a user-designated station. The following threat sector properties are consistent with the propagation and lifetime characteristics of thunderstorm anvil clouds observed over Florida and its coastal waters (Short et al. 2002): a) 20 n mi standoff circle, b) 30 degree sector width, c) Orientation given by 300 to 150 mb average wind direction, d) 1-, 2-, and 3- hour arcs in upwind direction, and e) Arc distances given by 300 to 150 mb average wind speed. Figure 1 is an example of the MIDDS Anvil Threat Sector tool overlaid on a visible satellite image at 2132 UTC 13 May 2001. Space Launch Complex 39A was selected as the center point and the Anvil Threat Sector was determined from upper-level wind data at 1500 UTC in the preconvective environment. 
Narrow thunderstorm anvil clouds extend from central Florida to the space launch and landing facilities at the Kennedy Space Center (KSC) and Cape Canaveral Air Force Station (CCAFS) and beyond. The anvil clouds were generated around 1930 UTC (1430 EDT) by thunderstorm activity over central Florida and transported 90 n mi east-northeastward within 2 hours, as diagnosed by the anvil forecast tool. Phase III, delivered in February 2003, built upon the results of Phase II by enhancing the Anvil Threat Sector Tool with the capability to use national model forecast winds for depiction of potential anvil lengths and orientations over the KSC/CCAFS area with lead times from 3 through 168 hours (7 days). In September 2003, AMU customers requested the capability to use data from the KSC 50 MHz Doppler Radar Wind Profiler (DRWP) in the Anvil Threat Sector Tool and this capability was delivered by the AMU in December 2003. In March 2005, the AMU was tasked to migrate the MIDDS Anvil Threat Sector Tool capabilities onto the Advanced Weather Interactive Processing System (AWIPS) as the Anvil Threat Corridor Forecast Tool.
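    A sketch of the threat-sector geometry the Phase II description lists: a 30-degree sector oriented along the 300 to 150 mb average wind, with upwind arcs at distances given by the average speed times lead time (speed in knots yields n mi per hour directly). The function name and example wind values are illustrative assumptions, not the AMU implementation.

```python
def threat_sector(avg_dir_deg, avg_spd_kt, half_width_deg=15, hours=(1, 2, 3)):
    """Compute the sector edges and hourly upwind arc distances for an
    anvil threat sector centred on the layer-mean wind direction."""
    edges = ((avg_dir_deg - half_width_deg) % 360,
             (avg_dir_deg + half_width_deg) % 360)
    # kt = n mi per hour, so distance in n mi is speed times lead time
    arcs = [avg_spd_kt * h for h in hours]
    return {"orientation_deg": avg_dir_deg % 360,
            "sector_edges_deg": edges,
            "arc_distances_nmi": arcs}

# Hypothetical 300-150 mb mean wind: 250 degrees at 30 kt
sector = threat_sector(250.0, 30.0)
```

    The 20 n mi standoff circle is drawn around the launch complex independently of the wind; only the arc distances depend on the layer-mean speed.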

  17. Seasonal forecasting of dolphinfish distribution in eastern Australia to aid recreational fishers and managers

    NASA Astrophysics Data System (ADS)

    Brodie, Stephanie; Hobday, Alistair J.; Smith, James A.; Spillman, Claire M.; Hartog, Jason R.; Everett, Jason D.; Taylor, Matthew D.; Gray, Charles A.; Suthers, Iain M.

    2017-06-01

    Seasonal forecasting of environmental conditions and marine species distribution has been used as a decision support tool in commercial and aquaculture fisheries. These tools may also be applicable to species targeted by the recreational fisheries sector, a sector that is increasing its use of marine resources, and making important economic and social contributions to coastal communities around the world. Here, a seasonal forecast of the habitat and density of dolphinfish (Coryphaena hippurus), based on sea surface temperatures, was developed for the east coast of New South Wales (NSW), Australia. Two prototype forecast products were created: geographic spatial forecasts of dolphinfish habitat, and a latitudinal summary identifying the location of fish density peaks. The less detailed latitudinal summary was created to limit the resolution of habitat information to prevent potential resource over-exploitation by fishers in the absence of total catch controls. The forecast dolphinfish habitat model was accurate at the start of the annual dolphinfish migration in NSW (December) but other months (January - May) showed poor performance due to spatial and temporal variability in the catch data used in model validation. Habitat forecasts for December were useful up to five months ahead, with performance decreasing as forecasts were made further into the future. The continued development and sound application of seasonal forecasts will help fishery industries cope with future uncertainty and promote dynamic and sustainable marine resource management.

  18. Uses and Applications of Climate Forecasts for Power Utilities.

    NASA Astrophysics Data System (ADS)

    Changnon, Stanley A.; Changnon, Joyce M.; Changnon, David

    1995-05-01

    The uses and potential applications of climate forecasts for electric and gas utilities were assessed 1) to discern needs for improving climate forecasts and guiding future research, and 2) to assist utilities in making wise use of forecasts. In-depth structured interviews were conducted with 56 decision makers in six utilities to assess existing and potential uses of climate forecasts. Only 3 of the 56 use forecasts. Eighty percent of those sampled envisioned applications of climate forecasts, given certain changes and additional information. Primary applications exist in power trading, load forecasting, fuel acquisition, and systems planning, with slight differences in interests between utilities. Utility staff understand probability-based forecasts but desire climatological information related to forecasted outcomes, including analogs similar to the forecasts, and explanations of the forecasts. Desired lead times vary from a week to three months, along with forecasts of up to four seasons ahead. The new NOAA forecasts initiated in 1995 provide the lead times and longer-term forecasts desired. Major hindrances to use of forecasts are hard-to-understand formats, lack of corporate acceptance, and lack of access to expertise. Recent changes in government regulations altered the utility industry, leading to a more competitive world wherein information about future weather conditions assumes much more value. Outreach efforts by government forecast agencies appear valuable to help achieve the appropriate and enhanced use of climate forecasts by the utility industry. An opportunity for service exists also for the private weather sector.

  19. Figures of Merit for Aeronautics Programs and Addition to NASA LARC Fire Station

    NASA Technical Reports Server (NTRS)

    Harper, Belinda M.

    1995-01-01

    This report details two research projects completed for the Langley Aerospace Research Summer Scholars (LARSS) program. The first project, with the Office of Mission Assurance, involved subjectively predicting the probable success of two aeronautics programs by means of a tool called a Figure of Merit. The figure of merit bases program success on the quality and reliability of the following factors: parts, complexity of research, quality programs, hazards elimination, and single point failures elimination. The second project, for the Office of Safety and Facilities Assurance, required planning, layouts, and source seeking for an addition to the fire house. Forecasted changes in facility layout necessitate this addition, which will serve as housing for the fire fighters.

  20. Non-parametric data-based approach for the quantification and communication of uncertainties in river flood forecasts

    NASA Astrophysics Data System (ADS)

    Van Steenbergen, N.; Willems, P.

    2012-04-01

    Reliable flood forecasts are among the most important non-structural measures to reduce the impact of floods. However, flood forecasting systems are subject to uncertainty originating from the input data, model structure and model parameters of the different hydraulic and hydrological submodels. To quantify this uncertainty, a non-parametric data-based approach has been developed. This approach analyses the historical forecast residuals (differences between the predictions and the observations at river gauging stations) without using a predefined statistical error distribution. Because the residuals are correlated with the value of the forecasted water level and the lead time, the residuals are split up into discrete classes of simulated water levels and lead times. For each class, percentile values of the model residuals are calculated and stored in a 'three-dimensional error' matrix. By 3D interpolation in this error matrix, the uncertainty in new forecasted water levels can be quantified. In addition to the quantification of the uncertainty, the communication of this uncertainty is equally important. The communication has to be done in a consistent way, reducing the chance of misinterpretation. Also, the communication needs to be adapted to the audience; the majority of the larger public is not interested in in-depth information on the uncertainty of the predicted water levels, but is only interested in information on the likelihood of exceedance of certain alarm levels. Water managers need more information, e.g. time-dependent uncertainty information, because they rely on this information to undertake the appropriate flood mitigation action. There are various ways of presenting uncertainty information (numerical, linguistic, graphical, time (in)dependent, etc.), each with their advantages and disadvantages for a specific audience. A useful method to communicate uncertainty of flood forecasts is probabilistic flood mapping.
These maps give a representation of the probability of flooding of a certain area, based on the uncertainty assessment of the flood forecasts. By using this type of maps, water managers can focus their attention on the areas with the highest flood probability. Also the larger public can consult these maps for information on the probability of flooding for their specific location, such that they can take pro-active measures to reduce the personal damage. The method of quantifying the uncertainty was implemented in the operational flood forecasting system for the navigable rivers in the Flanders region of Belgium. The method has shown clear benefits during the floods of the last two years.
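    The residual-binning step described above can be sketched roughly as follows, assuming a nearest-rank percentile and toy bin edges; an operational system would use finer classes and interpolate between them in 3D.

```python
def build_error_matrix(residuals, level_edges, lead_edges, pcts=(5, 50, 95)):
    """Bin historical forecast residuals by forecasted water level and
    lead time, and store percentile values per bin (the 'error matrix')."""
    def pct(sorted_vals, q):
        # Simple nearest-rank percentile on a sorted list
        i = max(0, min(len(sorted_vals) - 1,
                       round(q / 100 * (len(sorted_vals) - 1))))
        return sorted_vals[i]

    bins = {}
    for level, lead, res in residuals:
        li = sum(level >= e for e in level_edges)  # water-level class index
        ti = sum(lead >= e for e in lead_edges)    # lead-time class index
        bins.setdefault((li, ti), []).append(res)
    return {key: [pct(sorted(v), q) for q in pcts] for key, v in bins.items()}

# Toy residuals: (forecasted level in m, lead time in h, residual in m)
data = [(2.1, 6, 0.05), (2.3, 6, -0.02), (2.2, 6, 0.10),
        (4.0, 24, 0.4), (4.1, 24, 0.3), (4.2, 24, 0.5)]
em = build_error_matrix(data, level_edges=[3.0], lead_edges=[12])
```

    A new forecast is then assigned to a (level, lead-time) bin and the stored percentiles give its uncertainty band directly.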

  1. Tsunami Forecast Progress Five Years After Indonesian Disaster

    NASA Astrophysics Data System (ADS)

    Titov, Vasily V.; Bernard, Eddie N.; Weinstein, Stuart A.; Kanoglu, Utku; Synolakis, Costas E.

    2010-05-01

    Almost five years after the 26 December 2004 Indian Ocean tragedy, tsunami warnings are finally benefiting from decades of research toward effective model-based forecasts. Since the 2004 tsunami, two seminal advances have been (i) deep-ocean tsunami measurements with tsunameters and (ii) their use in accurately forecasting tsunamis after the tsunami has been generated. Using direct measurements of deep-ocean tsunami heights, assimilated into numerical models for specific locations, greatly improves the real-time forecast accuracy over earthquake-derived magnitude estimates of tsunami impact. Since 2003, this method has been used to forecast tsunamis at specific harbors for different events in the Pacific and Indian Oceans. Recent tsunamis illustrated how this technology is being adopted in global tsunami warning operations. The U.S. forecasting system was used by both research and operations to evaluate the tsunami hazard. Tests demonstrated the effectiveness of operational tsunami forecasting using real-time deep-ocean data assimilated into forecast models. Several examples also showed the potential of distributed forecast tools. With IOC and USAID funding, NOAA researchers at PMEL developed the Community Model Interface for Tsunami (ComMIT) tool and distributed it through extensive capacity-building sessions in the Indian Ocean. Over one hundred scientists have been trained in tsunami inundation mapping, leading to the first generation of inundation models for many Indian Ocean shorelines. These same inundation models can also be used for real-time tsunami forecasts, as was demonstrated during several events. Contact Information Vasily V. Titov, Seattle, Washington, USA, 98115

  2. Wind Energy Management System Integration Project Incorporating Wind Generation and Load Forecast Uncertainties into Power Grid Operations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Makarov, Yuri V.; Huang, Zhenyu; Etingov, Pavel V.

    2010-09-01

    The power system balancing process, which includes the scheduling, real time dispatch (load following) and regulation processes, is traditionally based on deterministic models. Since the conventional generation needs time to be committed and dispatched to a desired megawatt level, the scheduling and load following processes use load and wind power production forecasts to achieve future balance between the conventional generation and energy storage on the one side, and system load, intermittent resources (such as wind and solar generation) and scheduled interchange on the other side. Although in real life the forecasting procedures imply some uncertainty around the load and wind forecasts (caused by forecast errors), only their mean values are actually used in the generation dispatch and commitment procedures. Since the actual load and intermittent generation can deviate from their forecasts, it becomes increasingly unclear (especially with the increasing penetration of renewable resources) whether the system would actually be able to meet the conventional generation requirements within the look-ahead horizon, what additional balancing efforts would be needed as real time approaches, and what additional costs would be incurred by those needs. In order to improve the system control performance characteristics, maintain system reliability, and minimize expenses related to the system balancing functions, it becomes necessary to incorporate the predicted uncertainty ranges into the scheduling, load following, and, to some extent, into the regulation processes. It is also important to address the uncertainty problem comprehensively, by taking all sources of uncertainty (load, intermittent generation, generators' forced outages, etc.) into consideration. All aspects of uncertainty such as the imbalance size (which is the same as the capacity needed to mitigate the imbalance) and generation ramping requirement must be taken into account.
    The latter unique features make this work a significant step toward the objective of incorporating wind, solar, load, and other uncertainties into power system operations. In this report, a new methodology to predict the uncertainty ranges for the required balancing capacity, ramping capability and ramp duration is presented. Uncertainties created by system load forecast errors, wind and solar forecast errors, and generation forced outages are taken into account. The uncertainty ranges are evaluated for different confidence levels of having the actual generation requirements within the corresponding limits. The methodology helps to identify the system balancing reserve requirement based on desired system performance levels, identify system "breaking points", where the generation system becomes unable to follow the generation requirement curve with the user-specified probability level, and determine the time remaining to these potential events. The approach includes three stages: statistical and actual data acquisition, statistical analysis of retrospective information, and prediction of future grid balancing requirements for specified time horizons and confidence intervals. Assessment of the capacity and ramping requirements is performed using a specially developed probabilistic algorithm based on a histogram analysis incorporating all sources of uncertainty and parameters of a continuous (wind forecast and load forecast errors) and discrete (forced generator outages and failures to start up) nature. Preliminary simulations using California Independent System Operator (California ISO) real-life data have shown the effectiveness of the proposed approach. A tool developed based on the new methodology described in this report will be integrated with the California ISO systems. Contractual work is currently in place to integrate the tool with the AREVA EMS system.
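    One way to read the histogram-based algorithm is as a convolution of independent discrete error distributions, followed by a percentile lookup for the desired confidence level. The megawatt histograms below are invented for illustration, not California ISO data.

```python
def convolve(pmf_a, pmf_b):
    """Combine two independent discrete error distributions
    {imbalance_MW: probability} by convolution."""
    out = {}
    for va, pa in pmf_a.items():
        for vb, pb in pmf_b.items():
            out[va + vb] = out.get(va + vb, 0.0) + pa * pb
    return out

def capacity_requirement(pmf, confidence):
    """Smallest imbalance level whose cumulative probability
    reaches the given confidence level."""
    total = 0.0
    for v in sorted(pmf):
        total += pmf[v]
        if total >= confidence:
            return v
    return max(pmf)

# Hypothetical histograms of load and wind forecast errors (MW)
load_err = {-100: 0.2, 0: 0.6, 100: 0.2}
wind_err = {-200: 0.1, 0: 0.8, 200: 0.1}
combined = convolve(load_err, wind_err)
req = capacity_requirement(combined, 0.95)
```

    Discrete events such as forced outages enter the same way, as an additional probability mass function convolved into the total; ramping requirements can be treated analogously on histograms of rate-of-change errors.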

  3. Statistical analysis of the induced Basel 2006 earthquake sequence: introducing a probability-based monitoring approach for Enhanced Geothermal Systems

    NASA Astrophysics Data System (ADS)

    Bachmann, C. E.; Wiemer, S.; Woessner, J.; Hainzl, S.

    2011-08-01

    Geothermal energy is becoming an important clean energy source; however, the stimulation of a reservoir for an Enhanced Geothermal System (EGS) is associated with seismic risk due to induced seismicity. Seismicity induced by the water injection at depth has to be well recorded and monitored. To mitigate the seismic risk of a damaging event, an appropriate alarm system needs to be in place for each individual experiment. In recent experiments, the so-called traffic-light alarm system, based on public response, local magnitude and peak ground velocity, was used. We aim to improve the pre-defined alarm system by introducing a probability-based approach; we retrospectively model the ongoing seismicity in real time with multiple statistical forecast models and then translate the forecast to seismic hazard in terms of probabilities of exceeding a ground motion intensity level. One class of models accounts for the water injection rate, the main parameter that can be controlled by the operators during an experiment. By translating the models into time-varying probabilities of exceeding various intensity levels, we provide tools which are well understood by the decision makers and can be used to determine thresholds of non-exceedance during a reservoir stimulation; this, however, remains an entrepreneurial or political decision of the responsible project coordinators. We introduce forecast models based on the data set of an EGS experiment in the city of Basel. Between 2006 December 2 and 8, approximately 11 500 m3 of water was injected into a 5-km-deep well at high pressures. A six-sensor borehole array was installed by the company Geothermal Explorers Limited (GEL) at depths between 300 and 2700 m around the well to monitor the induced seismicity. The network recorded approximately 11 200 events during the injection phase, more than 3500 of which were located.
With the traffic-light system, actions were implemented after an ML 2.7 event: the water injection was reduced and then stopped after another ML 2.5 event. A few hours later, an earthquake with ML 3.4, felt within the city, occurred, which led to bleed-off of the well. A risk study was later issued with the outcome that the experiment could not be resumed. We analyse the statistical features of the sequence and show that the sequence is well modelled with the Omori-Utsu law following the termination of water injection. Based on this model, the sequence will take 31 +29/-14 years to decay to the background level. We introduce statistical models based on Reasenberg and Jones and Epidemic Type Aftershock Sequence (ETAS) models, commonly used to model aftershock sequences. We compare and test different model setups to simulate the sequences, varying the number of fixed and free parameters. For one class of the ETAS models, we account for the flow rate at the injection borehole. We test the models against the observed data with standard likelihood tests and find the ETAS model accounting for the flow rate to perform best. Such a model may in the future serve as a valuable tool for designing probabilistic alarm systems for EGS experiments.
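    The Omori-Utsu decay used to estimate the sequence duration can be sketched as follows; the parameters below are illustrative, not the fitted Basel values.

```python
def omori_rate(t, K, c, p):
    """Omori-Utsu aftershock rate K / (t + c)**p at time t after shut-in."""
    return K / (t + c) ** p

def time_to_background(K, c, p, background_rate):
    """Time (in the same units as c) until the modeled rate decays to
    the background level; closed form from K / (t + c)**p = background."""
    return (K / background_rate) ** (1.0 / p) - c

# Illustrative parameters only (not the Basel fit)
t_bg = time_to_background(K=100.0, c=0.5, p=1.1, background_rate=0.01)
```

    Uncertainty bounds such as the 31 +29/-14 years quoted above come from propagating the confidence intervals of the fitted K, c and p through this same expression.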

  4. Fire danger rating over Mediterranean Europe based on fire radiative power derived from Meteosat

    NASA Astrophysics Data System (ADS)

    Pinto, Miguel M.; DaCamara, Carlos C.; Trigo, Isabel F.; Trigo, Ricardo M.; Feridun Turkman, K.

    2018-02-01

    We present a procedure that allows the operational generation of daily forecasts of fire danger over Mediterranean Europe. The procedure combines historical information about radiative energy released by fire events with daily meteorological forecasts, as provided by the Satellite Application Facility for Land Surface Analysis (LSA SAF) and the European Centre for Medium-Range Weather Forecasts (ECMWF). Fire danger is estimated based on daily probabilities of exceedance of daily energy released by fires occurring at the pixel level. Daily probability considers meteorological factors by means of the Canadian Fire Weather Index (FWI) and is estimated using a daily model based on a generalized Pareto distribution. Five classes of fire danger are then associated with daily probability estimated by the daily model. The model is calibrated using 13 years of data (2004-2016) and validated against the period of January-September 2017. Results obtained show that about 72 % of events releasing daily energy above 10 000 GJ belong to the extreme class of fire danger, a considerably high fraction that is more than 1.5 times the values obtained when using the currently operational Fire Danger Forecast module of the European Forest Fire Information System (EFFIS) or the Fire Risk Map (FRM) product disseminated by the LSA SAF. Besides assisting in wildfire management, the procedure is expected to help in decision making on prescribed burning within the framework of agricultural and forest management practices.
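    The per-pixel exceedance probability under a generalized Pareto model can be sketched as below. The threshold, scale and shape values are assumptions for illustration; in the procedure described above they are conditioned on the Fire Weather Index.

```python
import math

def gpd_exceedance(energy, threshold, sigma, xi):
    """P(X > energy | X > threshold) under a generalized Pareto
    distribution with scale sigma and shape xi (survival function)."""
    y = (energy - threshold) / sigma
    if xi == 0:
        return math.exp(-y)  # exponential limit of the GPD
    return max(0.0, 1.0 + xi * y) ** (-1.0 / xi)

# Hypothetical parameters; units are GJ of daily released fire energy
p = gpd_exceedance(energy=10000.0, threshold=1000.0, sigma=2000.0, xi=0.4)
```

    Fire-danger classes then follow by thresholding this probability, so that the extreme class corresponds to the highest predicted chance of a large energy release.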

  5. Forecasting extinction risk with nonstationary matrix models.

    PubMed

    Gotelli, Nicholas J; Ellison, Aaron M

    2006-02-01

    Matrix population growth models are standard tools for forecasting population change and for managing rare species, but they are less useful for predicting extinction risk in the face of changing environmental conditions. Deterministic models provide point estimates of lambda, the finite rate of increase, as well as measures of matrix sensitivity and elasticity. Stationary matrix models can be used to estimate extinction risk in a variable environment, but they assume that the matrix elements are randomly sampled from a stationary (i.e., non-changing) distribution. Here we outline a method for using nonstationary matrix models to construct realistic forecasts of population fluctuation in changing environments. Our method requires three pieces of data: (1) field estimates of transition matrix elements, (2) experimental data on the demographic responses of populations to altered environmental conditions, and (3) forecasting data on environmental drivers. These three pieces of data are combined to generate a series of sequential transition matrices that emulate a pattern of long-term change in environmental drivers. Realistic estimates of population persistence and extinction risk can be derived from stochastic permutations of such a model. We illustrate the steps of this analysis with data from two populations of Sarracenia purpurea growing in northern New England. Sarracenia purpurea is a perennial carnivorous plant that is potentially at risk of local extinction because of increased nitrogen deposition. Long-term monitoring records or models of environmental change can be used to generate time series of driver variables under different scenarios of changing environments. Both manipulative and natural experiments can be used to construct a linking function that describes how matrix parameters change as a function of the environmental driver. 
This synthetic modeling approach provides quantitative estimates of extinction probability that have an explicit mechanistic basis.
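    The sequential-matrix projection the method outlines can be sketched as follows, with a toy two-stage matrix and a linear linking function between the environmental driver and fecundity (both invented for illustration, not the Sarracenia estimates).

```python
def project(n0, matrices):
    """Project a stage-structured population vector through a sequence
    of year-specific transition matrices (nonstationary model)."""
    n = list(n0)
    for A in matrices:
        n = [sum(A[i][j] * n[j] for j in range(len(n))) for i in range(len(A))]
    return n

def matrix_for(driver, base, sensitivity):
    """Toy linking function: scale fecundity (row 0) down linearly as
    the environmental driver (e.g. N deposition) rises."""
    A = [row[:] for row in base]
    A[0] = [a * max(0.0, 1.0 - sensitivity * driver) for a in A[0]]
    return A

# Two-stage toy matrix (juveniles, adults); illustrative values only
base = [[0.0, 1.5],
        [0.5, 0.8]]
drivers = [0.1, 0.2, 0.3]  # forecast time series of the driver
seq = [matrix_for(d, base, sensitivity=1.0) for d in drivers]
final = project([10.0, 10.0], seq)
```

    Extinction risk is then estimated by repeating such projections over stochastic permutations of the driver series and counting the fraction of runs that fall below a quasi-extinction threshold.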

  6. Artificial intelligence approach with the use of artificial neural networks for the creation of a forecasting model of Plasmopara viticola infection.

    PubMed

    Bugliosi, R; Spera, G; La Torre, A; Campoli, L; Scaglione, M

    2006-01-01

    Most forecasting models of Plasmopara viticola infections are based upon empirical correlations between meteorological/environmental data and pathogen outbreaks. These models generally overestimate the risk of infection and lead growers to treat the vineyard even when it is not necessary. In rare cases they underrate the risk of infection, allowing the pathogen to break out. Starting from these considerations, we decided to approach the problem from another point of view, utilizing Artificial Intelligence techniques for data elaboration and analysis. In parallel, the same data have been studied with a more classic statistical approach to verify the impact of a large data collection on standard data analysis methods. A network of RTUs (Remote Terminal Units) distributed across the Italian national territory transmits 12 environmental parameters every 15 minutes via radio or GPRS to a centralized database. Other pedologic data are collected directly from the field and sent via the Internet to the centralized database using Personal Digital Assistants (PDAs) running specific software. Data are stored after being preprocessed, to guarantee the quality of the information. The subsequent analysis has been carried out mostly with Artificial Neural Networks (ANNs). Collecting and analyzing data in this way will probably make it possible to anticipate Plasmopara viticola infection from the environmental conditions in this very complex context. The aim of this work is to forecast the infection, avoiding the ineffective use of plant protection products in agriculture. By applying different analysis models we will try to find the best ANN capable of forecasting with a high level of reliability.

  7. scoringRules - A software package for probabilistic model evaluation

    NASA Astrophysics Data System (ADS)

    Lerch, Sebastian; Jordan, Alexander; Krüger, Fabian

    2016-04-01

Models in the geosciences are generally surrounded by uncertainty, and being able to quantify this uncertainty is key to good decision making. Accordingly, probabilistic forecasts in the form of predictive distributions have become popular over the last decades. With the proliferation of probabilistic models arises the need for decision-theoretically principled tools to evaluate the appropriateness of models and forecasts in a generalized way. Various scoring rules have been developed over the past decades to address this demand. Proper scoring rules are functions S(F, y) which evaluate the accuracy of a forecast distribution F, given that an outcome y was observed. As such, they allow comparison of alternative models, a crucial ability given the variety of theories, data sources and statistical specifications that are available in many situations. This poster presents the software package scoringRules for the statistical programming language R, which contains functions to compute popular scoring rules such as the continuous ranked probability score for a variety of distributions F that come up in applied work. Two main classes are parametric distributions, like normal, t, or gamma distributions, and distributions that are not known analytically but are indirectly described through a sample of simulation draws; for example, Bayesian forecasts produced via Markov chain Monte Carlo take this form. The scoringRules package thereby provides a framework for generalized model evaluation that includes both Bayesian and classical parametric models. The scoringRules package aims to be a convenient dictionary-like reference for computing scoring rules. We offer state-of-the-art implementations of several known (but not routinely applied) formulas, and implement closed-form expressions that were previously unavailable. Whenever more than one implementation variant exists, we offer statistically principled default choices.
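For a normal predictive distribution, the continuous ranked probability score mentioned above has a well-known closed form (one of the parametric cases the R package covers); a minimal Python sketch, independent of scoringRules:

```python
import math

def crps_normal(mu, sigma, y):
    """Closed-form CRPS for a normal forecast N(mu, sigma^2) and observation y.
    Lower scores indicate better forecasts."""
    z = (y - mu) / sigma
    pdf = math.exp(-0.5 * z * z) / math.sqrt(2 * math.pi)          # phi(z)
    cdf = 0.5 * (1 + math.erf(z / math.sqrt(2)))                   # Phi(z)
    return sigma * (z * (2 * cdf - 1) + 2 * pdf - 1 / math.sqrt(math.pi))

# A sharper forecast centred on the outcome scores better than a wider one:
print(crps_normal(0.0, 1.0, 0.0))
print(crps_normal(0.0, 2.0, 0.0))
```

The sample-based case in the abstract would instead average absolute differences over simulation draws; the closed form above avoids that Monte Carlo error entirely.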

  8. Space Weather Products at the Community Coordinated Modeling Center

    NASA Technical Reports Server (NTRS)

    Hesse, Michael; Kuznetsova, M.; Pulkkinen, A.; Maddox, M.; Rastaetter, L.; Berrios, D.; MacNeice, P.

    2010-01-01

The Community Coordinated Modeling Center (CCMC) is a US inter-agency activity aimed at research in support of the generation of advanced space weather models. As one of its main functions, the CCMC provides researchers with access to space science models, even if they are not model owners themselves. The second CCMC activity is to support space weather forecasting at national Space Weather Forecasting Centers. This second activity involves model evaluations, model transitions to operations, and the development of space weather forecasting tools. Owing to the pace of development in the science community, new model capabilities emerge frequently. Consequently, space weather products and tools offer not only increased validity but often entirely new capabilities. This presentation will review the present state of space weather tools and point out emerging future capabilities.

  9. The regional geological hazard forecast based on rainfall and WebGIS in Hubei, China

    NASA Astrophysics Data System (ADS)

    Zheng, Guizhou; Chao, Yi; Xu, Hongwen

    2008-10-01

Various disasters have been a serious threat to humans and are increasing over time. The reduction and prevention of hazards is one of the largest problems faced by local governments. The study of disasters has drawn more and more attention, mainly due to increasing awareness of their socio-economic impact. Hubei province, one of the fastest economically developing provinces in China, has suffered large economic losses from geo-hazards in recent years due to frequent geo-hazard events, with estimated damage of approximately 3000 million RMB. It is therefore important to establish an efficient way to mitigate potential damage and reduce the losses of property and life caused by disasters. This paper presents the procedure of setting up a regional geological hazard forecast and information release system for Hubei province, combining techniques such as the World Wide Web (WWW), online databases and ASP, based on a WebGIS platform (MAPGIS-IMS) and rainfall information. A Web-based interface was developed for this system using a three-tiered architecture based on client-server technology. The study focused on the upload of rainfall data, the definition of rainfall threshold values, the creation of a geological disaster warning map, and the forecast of geo-hazards related to rainfall. Its purposes are to contribute to the management of individual and regional geological disaster spatial data, to help forecast the conditional probabilities of occurrence of the various disasters that might be posed by rainfall, and to release forecast information for Hubei province in a timely manner via the Internet to all levels of government, the private and nonprofit sectors, and the academic community. The system has worked efficiently and stably in the Internet environment, in close connection with the Meteorological Observatory and Environment Station of Hubei Province, which are making increased use of this Web tool to assist in the decision-making process when analyzing geo-hazards in Hubei Province. It should prove even more helpful in presenting geo-hazard information to Hubei administrators.
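The rainfall-threshold step described above amounts to mapping accumulated rainfall to a warning level. A minimal sketch; the threshold values and level names below are hypothetical, not the calibrated values used in Hubei:

```python
# Hypothetical 24-hour rainfall thresholds (mm), highest first; operational
# values would be calibrated per region from historical geo-hazard events.
THRESHOLDS = [(100.0, "red"), (50.0, "orange"), (25.0, "yellow")]

def warning_level(rain_24h_mm):
    """Return the warning level for the first (highest) threshold exceeded."""
    for threshold, level in THRESHOLDS:
        if rain_24h_mm >= threshold:
            return level
    return "none"

print(warning_level(60.0))  # orange
```

In a WebGIS setting, a function like this would be evaluated per grid cell or county to colour the warning map.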

  10. Wildfire suppression cost forecasts from the US Forest Service

    Treesearch

    Karen L. Abt; Jeffrey P. Prestemon; Krista M. Gebert

    2009-01-01

The US Forest Service and other land-management agencies seek better tools for anticipating future expenditures for wildfire suppression. We developed regression models for forecasting US Forest Service suppression spending at 1-, 2-, and 3-year lead times. We compared these models to another readily available forecast model, the 10-year moving average model,...
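The 10-year moving average benchmark mentioned above is straightforward to sketch; the spending figures below are illustrative, not Forest Service data:

```python
def moving_average_forecast(history, window=10):
    """Baseline forecast: the mean of the last `window` observations.
    Mirrors the 10-year moving average benchmark, not the regression models."""
    recent = history[-window:]
    return sum(recent) / len(recent)

# Illustrative annual suppression spending (e.g., $ millions):
spending = [120, 95, 140, 210, 180, 160, 250, 230, 300, 280, 310]
print(moving_average_forecast(spending))  # 215.5
```

In a rising-cost environment such a backward-looking average systematically under-forecasts, which is one motivation for regression-based alternatives.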

  11. Econometric Models for Forecasting of Macroeconomic Indices

    ERIC Educational Resources Information Center

    Sukhanova, Elena I.; Shirnaeva, Svetlana Y.; Mokronosov, Aleksandr G.

    2016-01-01

The urgency of the research topic stems from the need for effective control of an economic system, which can hardly be imagined without forecasting the indices characteristic of this system. An econometric model is a reliable forecasting tool that makes it possible to take into consideration the trend of indices…

  12. Earthquake forecast for the Wasatch Front region of the Intermountain West

    USGS Publications Warehouse

    DuRoss, Christopher B.

    2016-04-18

    The Working Group on Utah Earthquake Probabilities has assessed the probability of large earthquakes in the Wasatch Front region. There is a 43 percent probability of one or more magnitude 6.75 or greater earthquakes and a 57 percent probability of one or more magnitude 6.0 or greater earthquakes in the region in the next 50 years. These results highlight the threat of large earthquakes in the region.

13. A system approach to the long-term forecasting of the climate data in the Baikal region

    NASA Astrophysics Data System (ADS)

    Abasov, N.; Berezhnykh, T.

    2003-04-01

The Angara River, running from Lake Baikal with a cascade of hydropower plants built on it, plays a peculiar role in the economy of the region. In view of the high variability of water inflow into the rivers and lakes (long low-water periods and catastrophic floods), which is due to climatic peculiarities of water resource formation, long-term forecasting is developed and applied to decrease risk at hydropower plants. The methodology and methods of long-term forecasting of natural-climatic processes employ some ideas of the research schools of Academician I.P. Druzhinin and Prof. A.P. Reznikhov, and consist in detailed investigation of cause-effect relations, identification of physical analogs, and their application to formalized methods of long-term forecasting. They are divided into qualitative methods (the background method; the method of analogs based on solar activity), probabilistic methods, and approximative methods (analog-similarity relations; a discrete-continuous model). These forecasting methods have been implemented as the analytical aids of the information-forecasting software "GIPSAR", which provides some elements of artificial intelligence. Background forecasts of the runoff of the Ob, Yenisei and Angara Rivers in the south of Siberia are based on space-time regularities revealed by taking account of the phase shifts in the occurrence of secular maxima and minima on integral-difference curves of many-year hydrological processes in the objects compared. Solar activity plays an essential role in investigations of global variations of climatic processes. Its consideration in the method of superimposed epochs has allowed the conclusion that a low-water period in the actual inflow to Lake Baikal is more probable on the increasing branch of the 11-year cycle of solar activity. A high-water period is more probable on the decreasing branch of solar activity, from the 2nd to the 5th year after its maximum.
The probabilistic method of forecasting (one year in advance) is based on the property of alternation of series of years with increase and decrease in the observed indicators (characteristic indices) of natural processes. Most of the series (98.4-99.6%) are series of one to three years. The forecasting problem is divided into two parts: 1) a qualitative forecast of the probability that the started series will either continue or be replaced by a new series during the next year, based on the frequency characteristics of series of years with increase or decrease of the forecasted sequence; 2) a quantitative estimate of the forecasted value in the form of a curve of conditional frequencies, made on the basis of intra-sequence interrelations among hydrometeorological elements by differentiating them with respect to series of years of increase or decrease, by constructing particular curves of conditional frequencies of the runoff for each expected variant of series development, and by subsequent construction of a generalized curve. Approximative learning methods form forecasted trajectories of the studied process indices for a long-term perspective. The method of analog-similarity relations is based on the fact that long periods of observations reveal similarities, by definite criteria, in the character of variability of indices for some fragments of the sequence x(t). The idea of the method is to estimate the similarity of such fragments of the sequence, called analogs. The method applies multistage optimization of external parameters (e.g., the number of iterations of the sliding averaging needed to decompose the sequence into two components: a smoothed one with isolated periodic oscillations, and a residual, random one). The method is applicable to forecast horizons from the current term up to the double solar cycle.
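The run-frequency idea behind the qualitative part of the probabilistic method can be loosely sketched as follows (a toy illustration, not the GIPSAR implementation):

```python
from collections import Counter

def run_lengths(values):
    """Lengths of maximal runs of year-on-year increase or decrease
    (a year equal to its predecessor is counted with the decreases here)."""
    signs = [1 if b > a else -1 for a, b in zip(values, values[1:])]
    runs, length = [], 1
    for prev, cur in zip(signs, signs[1:]):
        if cur == prev:
            length += 1
        else:
            runs.append(length)
            length = 1
    runs.append(length)
    return runs

def p_continue(values, current_run):
    """Empirical probability that a run already lasting `current_run` steps
    extends by one more, from historical run-length frequencies."""
    counts = Counter(run_lengths(values))
    longer = sum(n for length, n in counts.items() if length > current_run)
    at_least = sum(n for length, n in counts.items() if length >= current_run)
    return longer / at_least if at_least else 0.0

inflow = [1, 2, 3, 2, 3, 4, 5, 2]  # toy series, not Angara inflow data
print(run_lengths(inflow))  # [2, 1, 3, 1]
print(p_continue(inflow, 1))
```

The quantitative part of the method would then condition a frequency curve of the forecasted value on whichever run scenario is predicted.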
Using a special procedure of integration, it selects the terms with the best results for the given optimization subsample. The several optimal parameter vectors obtained are tested on an examination (verifying) subsample. If the procedure is successful, the forecast is made immediately by integrating the several best solutions. Peculiarities of forecasting extreme processes: the methods of long-term forecasting allow sufficiently reliable forecasts within the interval [x_min + Δ1, x_max − Δ2] (i.e., in the interval of medium values of the indices). In the intervals close to the extremes, however, the reliability of forecasts is substantially lower. While for medium values the statistics of the 100-year sequence give acceptable results, owing to a sufficiently large number of revealed analogs corresponding to prognostic samples, for extreme values the situation is quite different, first of all because of the scarcity of statistical data. Decreasing the values Δ1, Δ2 → 0 (by including them among the optimization parameters of the considered forecasting methods) could be one way to improve the reliability of forecasts. Such an approach has been partially realized in the method of analog-similarity relations, giving the possibility of forming a range of possible forecasted trajectories in two variants, from the minimum possible trajectory to the maximum possible one. Reliability of long-term forecasts: both the methodology and the methods considered above have been realized as the information-forecasting system "GIPSAR".
The system includes tools implementing several methods of forecasting, analysis of initial and forecasted information, a developed database, a set of tools for verification of algorithms, additional information on the algorithms of statistical processing of sequences (sliding averaging, integral-difference curves, etc.), aids to organize input of initial information (in its various forms), and aids to draw up output prognostic documents. Risk management: the normal functioning of the Angara cascade is periodically interrupted by risks of two types that occur in the Baikal, Bratsk and Ust-Ilimsk reservoirs: long low-water periods and sudden periods of extremely high water levels. For example, low-water periods observed in the reservoirs of the Angara cascade can be classified under four risk categories: 1) acceptable (negligible reduction of electric power generation by hydropower plants; some difficulty in meeting environmental and navigation requirements); 2) significant (substantial reduction of electric power generation by hydropower plants; some restriction on water releases for navigation; violation of environmental requirements in some years); 3) emergency (large losses in electric power generation; limited electricity supply to large consumers; significant restriction of water releases for navigation; threat of exposure of drinking water intakes; violation of environmental requirements for a number of years); 4) catastrophic (energy crisis; social crisis; exposure of drinking water intakes; termination of navigation; environmental catastrophe). Management of energy systems consists of operative and many-year regulation and long-term planning, and has to take into account the analysis of operative data (water reserves in reservoirs), long-term statistics and relations among natural processes, as well as forecasts: short-term (for a day, week or decade), long-term and/or super-long-term (from a month to several decades).
Such natural processes as water inflow to reservoirs and air temperatures during heating periods depend in turn on external factors: prevailing types of atmospheric circulation, intensity of the 11- and 22-year cycles of solar activity, volcanic activity, interaction between the ocean and atmosphere, etc. Until recently, despite the established scientific schools of long-term forecasting (I.P. Druzhinin, A.P. Reznikhov), energy system management has been based only on specially drawn dispatching schedules and long-term hydrometeorological forecasts, without the use of long-range forecast indices. Inserting a parallel forecast block (based on the analysis of data on natural processes and special methods of forecasting) into the scheme can largely mitigate the unfavorable consequences of the impact of natural processes on the sustainable development of energy systems, and especially on their safe operation. However, the requirements on the reliability and accuracy of long-term forecasts then increase significantly. The considered approach to long-term forecasting can be used for the prediction of mean winter and summer air temperatures, droughts, and forest fires.

  14. Probabilistic Wind Power Ramp Forecasting Based on a Scenario Generation Method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Qin; Florita, Anthony R; Krishnan, Venkat K

Wind power ramps (WPRs) are particularly important in the management and dispatch of wind power and currently drawing the attention of balancing authorities. With the aim to reduce the impact of WPRs for power system operations, this paper develops a probabilistic ramp forecasting method based on a large number of simulated scenarios. An ensemble machine learning technique is first adopted to forecast the basic wind power forecasting scenario and calculate the historical forecasting errors. A continuous Gaussian mixture model (GMM) is used to fit the probability distribution function (PDF) of forecasting errors. The cumulative distribution function (CDF) is analytically deduced. The inverse transform method based on Monte Carlo sampling and the CDF is used to generate a massive number of forecasting error scenarios. An optimized swinging door algorithm is adopted to extract all the WPRs from the complete set of wind power forecasting scenarios. The probabilistic forecasting results of ramp duration and start-time are generated based on all scenarios. Numerical simulations on publicly available wind power data show that within a predefined tolerance level, the developed probabilistic wind power ramp forecasting method is able to predict WPRs with a high level of sharpness and accuracy.

  15. Probabilistic Wind Power Ramp Forecasting Based on a Scenario Generation Method: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Qin; Florita, Anthony R; Krishnan, Venkat K

    2017-08-31

Wind power ramps (WPRs) are particularly important in the management and dispatch of wind power, and they are currently drawing the attention of balancing authorities. With the aim to reduce the impact of WPRs for power system operations, this paper develops a probabilistic ramp forecasting method based on a large number of simulated scenarios. An ensemble machine learning technique is first adopted to forecast the basic wind power forecasting scenario and calculate the historical forecasting errors. A continuous Gaussian mixture model (GMM) is used to fit the probability distribution function (PDF) of forecasting errors. The cumulative distribution function (CDF) is analytically deduced. The inverse transform method based on Monte Carlo sampling and the CDF is used to generate a massive number of forecasting error scenarios. An optimized swinging door algorithm is adopted to extract all the WPRs from the complete set of wind power forecasting scenarios. The probabilistic forecasting results of ramp duration and start time are generated based on all scenarios. Numerical simulations on publicly available wind power data show that within a predefined tolerance level, the developed probabilistic wind power ramp forecasting method is able to predict WPRs with a high level of sharpness and accuracy.
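The error-scenario pipeline described above (GMM-shaped error distribution, scenario generation by inverse transform sampling of its CDF) can be sketched in a few lines; the mixture parameters and forecast value are illustrative, not fitted values from the paper:

```python
import math
import random

random.seed(0)

# Hypothetical 2-component GMM for normalized forecast errors (the paper fits
# these to historical errors; the numbers here are for illustration only).
WEIGHTS = [0.7, 0.3]
MEANS = [0.0, 0.05]
SIGMAS = [0.02, 0.08]

def mixture_cdf(x):
    """CDF of the Gaussian mixture."""
    return sum(w * 0.5 * (1 + math.erf((x - m) / (s * math.sqrt(2))))
               for w, m, s in zip(WEIGHTS, MEANS, SIGMAS))

def inverse_cdf(u, lo=-1.0, hi=1.0, tol=1e-9):
    """Invert the mixture CDF numerically by bisection (inverse transform)."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if mixture_cdf(mid) < u:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Add sampled error scenarios to a deterministic forecast value:
forecast = 0.6  # normalized wind power
scenarios = [forecast + inverse_cdf(random.random()) for _ in range(1000)]
print(min(scenarios), max(scenarios))
```

Each scenario trace would then be scanned by the swinging door algorithm to extract candidate ramps, and the ramp start times and durations tallied across scenarios into probabilistic forecasts.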

  16. Parametric decadal climate forecast recalibration (DeFoReSt 1.0)

    NASA Astrophysics Data System (ADS)

    Pasternack, Alexander; Bhend, Jonas; Liniger, Mark A.; Rust, Henning W.; Müller, Wolfgang A.; Ulbrich, Uwe

    2018-01-01

    Near-term climate predictions such as decadal climate forecasts are increasingly being used to guide adaptation measures. For near-term probabilistic predictions to be useful, systematic errors of the forecasting systems have to be corrected. While methods for the calibration of probabilistic forecasts are readily available, these have to be adapted to the specifics of decadal climate forecasts including the long time horizon of decadal climate forecasts, lead-time-dependent systematic errors (drift) and the errors in the representation of long-term changes and variability. These features are compounded by small ensemble sizes to describe forecast uncertainty and a relatively short period for which typically pairs of reforecasts and observations are available to estimate calibration parameters. We introduce the Decadal Climate Forecast Recalibration Strategy (DeFoReSt), a parametric approach to recalibrate decadal ensemble forecasts that takes the above specifics into account. DeFoReSt optimizes forecast quality as measured by the continuous ranked probability score (CRPS). Using a toy model to generate synthetic forecast observation pairs, we demonstrate the positive effect on forecast quality in situations with pronounced and limited predictability. Finally, we apply DeFoReSt to decadal surface temperature forecasts from the MiKlip prototype system and find consistent, and sometimes considerable, improvements in forecast quality compared with a simple calibration of the lead-time-dependent systematic errors.

  17. Alternative configurations of Quantile Regression for estimating predictive uncertainty in water level forecasts for the Upper Severn River: a comparison

    NASA Astrophysics Data System (ADS)

    Lopez, Patricia; Verkade, Jan; Weerts, Albrecht; Solomatine, Dimitri

    2014-05-01

Hydrological forecasting is subject to many sources of uncertainty, including those originating in initial state, boundary conditions, model structure and model parameters. Although uncertainty can be reduced, it can never be fully eliminated. Statistical post-processing techniques constitute an often-used approach to estimating hydrological predictive uncertainty, where a model of forecast error is built using a historical record of past forecasts and observations. The present study focuses on the use of the Quantile Regression (QR) technique as a hydrological post-processor. It estimates the predictive distribution of water levels using deterministic water level forecasts as predictors. This work aims to thoroughly verify uncertainty estimates using the implementation of QR that was applied in an operational setting in the UK National Flood Forecasting System, and to inter-compare forecast quality and skill across differing configurations of QR. These configurations are (i) 'classical' QR, (ii) QR constrained by a requirement that quantiles do not cross, (iii) QR derived on time series that have been transformed into the Normal domain (Normal Quantile Transformation, NQT), and (iv) a piecewise linear derivation of QR models. The QR configurations are applied to fourteen hydrological stations on the Upper Severn River with different catchment characteristics. Results of each QR configuration are conditionally verified for progressively higher flood levels, in terms of commonly used verification metrics and skill scores. These include the Brier score (BS), the continuous ranked probability score (CRPS) and corresponding skill scores, as well as the Relative Operating Characteristic score (ROCS). Reliability diagrams are also presented and analysed. The results indicate that none of the four Quantile Regression configurations clearly outperforms the others.
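'Classical' QR fits, for each quantile level tau, a linear model minimizing the pinball (quantile) loss. A self-contained subgradient-descent sketch of that idea, not the operational NFFS implementation:

```python
def fit_quantile_regression(x, y, tau, lr=0.05, epochs=3000):
    """Fit y ~ a + b*x for quantile level tau by subgradient descent on the
    pinball loss rho_tau(r) = r * (tau - 1[r < 0])."""
    a, b = 0.0, 0.0
    n = len(x)
    for _ in range(epochs):
        ga = gb = 0.0
        for xi, yi in zip(x, y):
            r = yi - (a + b * xi)
            # subgradient of the pinball loss w.r.t. the prediction
            g = -tau if r > 0 else (1.0 - tau)
            ga += g / n
            gb += g * xi / n
        a -= lr * ga
        b -= lr * gb
    return a, b

# Median (tau = 0.5) fit on a noise-free toy line y = 1 + 2x:
xs = [i / 10 for i in range(11)]
ys = [1 + 2 * xi for xi in xs]
a, b = fit_quantile_regression(xs, ys, tau=0.5)
print(a, b)
```

Fitting several tau values (e.g., 0.05, 0.25, 0.5, 0.75, 0.95) against the deterministic water-level forecast as predictor yields the predictive distribution; the non-crossing configuration in the study adds the constraint that higher-tau lines stay above lower-tau ones.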

  18. Vandenberg Air Force Base Upper Level Wind Launch Weather Constraints

    NASA Technical Reports Server (NTRS)

    Shafer, Jaclyn A.; Wheeler, Mark M.

    2012-01-01

The 30th Operational Support Squadron Weather Flight (30 OSSWF) provides comprehensive weather services to the space program at Vandenberg Air Force Base (VAFB) in California. One of their responsibilities is to monitor upper-level winds to ensure safe launch operations of the Minuteman III ballistic missile. The 30 OSSWF tasked the Applied Meteorology Unit (AMU) to analyze VAFB sounding data with the goal of determining the probability of violating (PoV) their upper-level thresholds for wind speed and shear constraints specific to this launch vehicle, and to develop a tool that will calculate the PoV of each constraint on the day of launch. In order to calculate the probability of exceeding each constraint, the AMU collected and analyzed historical data from VAFB. The historical sounding data were retrieved from the National Oceanic and Atmospheric Administration Earth System Research Laboratory archive for the years 1994-2011 and then stratified into four sub-seasons: January-March, April-June, July-September, and October-December. The maximum wind speed and 1000-ft shear values for each sounding in each sub-season were determined. To accurately calculate the PoV, the AMU determined the theoretical distributions that best fit the maximum wind speed and maximum shear datasets. Ultimately it was discovered that the maximum wind speeds follow a Gaussian distribution while the maximum shear values follow a lognormal distribution. These results were applied when calculating the averages and standard deviations needed for the historical and real-time PoV calculations. In addition to the requirements outlined in the original task plan, the AMU also included forecast sounding data from the Rapid Refresh model. This information provides further insight for the launch weather officers (LWOs) when determining if a wind constraint violation will occur over the next few hours on the day of launch.
The interactive graphical user interface (GUI) for this project was developed in Microsoft Excel using Visual Basic for Applications. The GUI displays the critical sounding data easily and quickly for the LWOs on day of launch. This tool will replace the existing one used by the 30 OSSWF, assist the LWOs in determining the probability of exceeding specific wind threshold values, and help to improve the overall upper winds forecast for the launch customer.
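For the Gaussian-distributed maximum wind speeds, the PoV calculation reduces to a normal tail probability. A minimal sketch; the climatological numbers below are invented for illustration, not VAFB statistics:

```python
import math

def pov_gaussian(mean, std, threshold):
    """Probability that a Gaussian-distributed maximum wind speed exceeds the
    constraint threshold. (For the lognormal shear case, the same formula
    would be applied to the logarithms of the data and threshold.)"""
    z = (threshold - mean) / std
    return 1 - 0.5 * (1 + math.erf(z / math.sqrt(2)))

# Illustrative sub-season climatology (kt) and constraint threshold:
print(pov_gaussian(mean=45.0, std=12.0, threshold=60.0))
```

On the day of launch, substituting a model- or sounding-derived mean for the climatological one gives the real-time PoV for each constraint.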

  19. Evaluating space weather forecasts of geomagnetic activity from a user perspective

    NASA Astrophysics Data System (ADS)

    Thomson, A. W. P.

    2000-12-01

    Decision Theory can be used as a tool for discussing the relative costs of complacency and false alarms with users of space weather forecasts. We describe a new metric for the value of space weather forecasts, derived from Decision Theory. In particular we give equations for the level of accuracy that a forecast must exceed in order to be useful to a specific customer. The technique is illustrated by simplified example forecasts for global geomagnetic activity and for geophysical exploration and power grid management in the British Isles.
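The classic cost-loss framing behind such Decision Theory metrics can be stated in a few lines: a user who can protect at cost C against an event causing loss L should act when the forecast probability exceeds C/L. The numbers below are illustrative, not from the paper:

```python
def should_protect(p_event, cost, loss):
    """Cost-loss decision rule: protecting (expense C) beats not protecting
    (expected loss p * L) exactly when p > C / L."""
    return p_event > cost / loss

# A grid operator with cost/loss ratio 0.1 acts on a 15% storm probability:
print(should_protect(0.15, cost=1.0, loss=10.0))  # True
```

A forecast is then valuable to a specific customer only if its accuracy lets this rule outperform always-protecting or never-protecting, which is the kind of user-specific accuracy threshold the abstract describes.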

  20. Development of a downed woody debris forecasting tool using strategic-scale multiresource forest inventories

    Treesearch

    Matthew B. Russell; Christopher W. Woodall

    2017-01-01

The increasing interest in forest biomass for energy or carbon cycle purposes has raised the need for forest resource managers to refine their understanding of downed woody debris (DWD) dynamics. We developed a DWD forecasting tool using field measurements (mean size and stage of decay) for three common forest types across the eastern United States using field...

  1. Determining relevant parameters for a statistical tropical cyclone genesis tool based upon global model output

    NASA Astrophysics Data System (ADS)

    Halperin, D.; Hart, R. E.; Fuelberg, H. E.; Cossuth, J.

    2013-12-01

Predicting tropical cyclone (TC) genesis has been a vexing problem for forecasters. While the literature describes environmental conditions which are necessary for TC genesis, predicting if and when a specific disturbance will organize and become a TC remains a challenge. As recently as 5-10 years ago, global models possessed little if any skill in forecasting TC genesis. However, due to increased resolution and more advanced model parameterizations, we have reached the point where global models can provide useful TC genesis guidance to operational forecasters. A recent study evaluated five global models' ability to predict TC genesis out to four days over the North Atlantic basin (Halperin et al. 2013). The results indicate that the models are indeed able to capture the genesis time and location correctly a fair percentage of the time. The study also uncovered model biases. For example, the probability of detection and the false alarm rate vary spatially within the basin. Also, as expected, the models' performance decreases with increasing lead time. In order to explain these and other biases, it is useful to analyze the model-indicated genesis events further to determine whether there are systematic differences between successful forecasts (hits), false alarms, and missed events. This study will examine composites of a number of physically relevant environmental parameters (e.g., magnitude of vertical wind shear, areally averaged mid-level relative humidity) and disturbance-based parameters (e.g., 925 hPa maximum wind speed, vertical alignment of relative vorticity) for each TC genesis event classification (i.e., hit, false alarm, miss). We will use standard statistical tests (e.g., Student's t-test, Mann-Whitney U test) to assess whether any differences are statistically significant. We also plan to discuss how these composite results apply to a few illustrative case studies.
The results may help determine which aspects of the forecast are (in)correct and whether the incorrect aspects can be bias-corrected. This, in turn, may allow us to further enhance probabilistic forecasts of TC genesis.
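A dependency-free sketch of the Mann-Whitney U statistic that underlies the planned significance tests (the composite values are invented for illustration, not study data):

```python
def mann_whitney_u(a, b):
    """Mann-Whitney U statistic: the number of pairs (x in a, y in b) with
    x > y, counting ties as 1/2. Values near 0 or near len(a)*len(b)
    suggest the two samples come from shifted distributions."""
    u = 0.0
    for x in a:
        for y in b:
            if x > y:
                u += 1.0
            elif x == y:
                u += 0.5
    return u

# Illustrative vertical wind shear composites (kt) for hits vs false alarms:
hits = [8, 10, 12, 9, 11]
false_alarms = [15, 18, 14, 17, 16]
print(mann_whitney_u(hits, false_alarms))  # 0.0: every hit value is lower
```

In practice the statistic is compared against its null distribution (or a normal approximation for larger samples) to obtain a p-value; library implementations handle that step.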

  2. Enhancing Community Based Early Warning Systems in Nepal with Flood Forecasting Using Local and Global Models

    NASA Astrophysics Data System (ADS)

    Dugar, Sumit; Smith, Paul; Parajuli, Binod; Khanal, Sonu; Brown, Sarah; Gautam, Dilip; Bhandari, Dinanath; Gurung, Gehendra; Shakya, Puja; Kharbuja, RamGopal; Uprety, Madhab

    2017-04-01

Operationalising effective Flood Early Warning Systems (EWS) in developing countries like Nepal poses numerous challenges: complex topography and geology, a sparse network of river and rainfall gauging stations, and diverse socio-economic conditions. Despite these challenges, simple real-time monitoring based EWSs have been in place for the past decade. A key constraint of these simple systems is the very limited lead time for response, as little as 2-3 hours, especially for rivers originating from steep mountainous catchments. Efforts to increase lead time for early warning are focusing on embedding forecasts into the existing early warning systems. In 2016, the Nepal Department of Hydrology and Meteorology (DHM) piloted an operational Probabilistic Flood Forecasting Model in major river basins across Nepal. This comprised a low-data approach to forecast water levels, developed jointly through a research/practitioner partnership between Lancaster University, WaterNumbers (UK) and the international NGO Practical Action. Using Data-Based Mechanistic Modelling (DBM) techniques, the model assimilated rainfall and water levels to generate localised hourly flood predictions, which are presented as probabilistic forecasts, increasing lead times from 2-3 hours to 7-8 hours. The Nepal DHM has simultaneously started utilizing forecasts from the Global Flood Awareness System (GLoFAS), which provides streamflow predictions at the global scale based upon distributed hydrological simulations using numerical ensemble weather forecasts from the ECMWF (European Centre for Medium-Range Weather Forecasts). These global and local models have already affected the approach to early warning in Nepal, being operational during the 2016 monsoon in the West Rapti basin in western Nepal.
On 24 July 2016, GLoFAS hydrological forecasts for the West Rapti indicated a sharp rise in river discharge above 1,500 m³/s (equivalent to the river warning level of 5 meters), with a 53% probability of exceeding the Medium Level Alert within two days. Rainfall stations upstream of the West Rapti catchment recorded heavy rainfall on 26 July, and localized forecasts from the probabilistic model at 8 am suggested that the water level would cross a pre-determined warning level within the next 3 hours. The Flood Forecasting Section at DHM issued a flood advisory and disseminated SMS flood alerts to more than 13,000 at-risk people residing along the floodplains. Water levels crossed the danger threshold (5.4 meters) at 11 am, peaking at 8.15 meters at 10 pm. The extension of warning lead time from probabilistic forecasts was significant in minimising the risk to lives and livelihoods, as communities gained extra time to prepare, evacuate and respond. Likewise, longer-timescale forecasts from GLoFAS could potentially be linked with no-regret early actions, leading to improved preparedness and emergency response. These forecasting tools have contributed to enhancing the effectiveness and efficiency of existing community-based systems, increasing the lead time for response. Nevertheless, extensive work is required on appropriate ways to interpret and disseminate probabilistic forecasts with longer (2-14 days) and shorter (3-5 hours) time horizons for operational deployment, as there are numerous uncertainties associated with the predictions.

  3. Sentinel trees as a tool to forecast invasions of alien plant pathogens.

    PubMed

    Vettraino, AnnaMaria; Roques, Alain; Yart, Annie; Fan, Jian-ting; Sun, Jiang-hua; Vannini, Andrea

    2015-01-01

    Recent disease outbreaks caused by alien invasive pathogens in European forests have posed a serious threat to forest sustainability, with significant environmental and economic effects. Many of the alien tree pathogens recently introduced into Europe were not previously included on any quarantine lists, and thus were not subject to phytosanitary inspections. The identification and description of alien fungi potentially pathogenic to native European flora, before their introduction into Europe, is paramount to limiting the risk of invasion and the impact on forest ecosystems. To identify potentially invasive fungi, a sentinel tree plot was established in Fuyang, China, using healthy seedlings of European tree species including Quercus petraea, Q. suber, and Q. ilex. The fungal assemblage associated with symptomatic specimens was studied using tag-encoded 454 pyrosequencing of the nuclear ribosomal internal transcribed spacer-1 (ITS1). Taxa of probable Asiatic origin, including plant-pathogenic genera, were identified. These results indicate that sentinel plants may be a strategic tool to improve the prevention of bioinvasions.

  4. Wave ensemble forecast in the Western Mediterranean Sea, application to an early warning system.

    NASA Astrophysics Data System (ADS)

    Pallares, Elena; Hernandez, Hector; Moré, Jordi; Espino, Manuel; Sairouni, Abdel

    2015-04-01

    The Western Mediterranean Sea is a highly heterogeneous and variable area, as reflected in the wind field, the current field, and the waves, mainly in the first kilometers offshore. As a result of this variability, wave forecasting in these regions is difficult, and accuracy often suffers during energetic storm events. Moreover, it is in these areas that most economic activities take place, including fisheries, sailing, tourism, coastal management and offshore renewable energy platforms. In order to introduce an indicator of the probability of occurrence of the different sea states and give more detailed forecast information to the end users, an ensemble wave forecast system is considered. Ensemble prediction systems have been used in meteorological forecasting for decades: to deal with uncertainties in the initial conditions and in the model parametrizations, which may introduce errors into the forecast, a set of perturbed meteorological simulations is run as possible future scenarios and compared with the deterministic forecast. In the present work, the SWAN wave model (v41.01) has been implemented for the Western Mediterranean Sea, forced with wind fields produced by the deterministic Global Forecast System (GFS) and the Global Ensemble Forecast System (GEFS). The wind fields include a deterministic forecast (also named the control), between 11 and 21 ensemble members, and derived members obtained from the ensemble, such as the mean of all members. Four buoys located in the study area, moored in coastal waters, have been used to validate the results. The outputs include the full time series, with a forecast horizon of 8 days and represented as spaghetti diagrams, the spread of the system, and the probability of exceeding different thresholds. 
The main goal of this exercise is to determine the degree of uncertainty of the wave forecast, which is meaningful between the 5th and the 8th day of the prediction. The information obtained is then included in an early warning system, designed in the framework of the European project iCoast (ECHO/SUB/2013/661009), with the aim of setting alarms in coastal areas depending on the wave conditions, the sea level, the flooding and the run-up at the coast.
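
    The exceedance probabilities and ensemble spread mentioned above can be derived directly from the member values at each forecast time. A minimal sketch, with invented significant-wave-height members and an illustrative 2.5 m threshold (not values from the SWAN/GEFS system itself):

```python
# Hedged sketch: deriving an exceedance probability and spread from an ensemble
# wave forecast. The member values and the threshold are invented for
# illustration; they are not taken from the system described above.

def exceedance_probability(members, threshold):
    """Fraction of ensemble members predicting a value above `threshold`."""
    return sum(1 for m in members if m > threshold) / len(members)

def ensemble_spread(members):
    """Standard deviation of the members, a simple measure of forecast uncertainty."""
    mean = sum(members) / len(members)
    return (sum((m - mean) ** 2 for m in members) / len(members)) ** 0.5

# Significant wave height (m) from 11 hypothetical members at one lead time:
hs_members = [2.1, 2.4, 1.9, 2.8, 3.1, 2.2, 2.6, 2.0, 2.9, 2.3, 2.5]

p_storm = exceedance_probability(hs_members, 2.5)   # P(Hs > 2.5 m) = 4/11
spread = ensemble_spread(hs_members)
```

    Repeating this at every lead time yields the probability-at-threshold and spread products the abstract describes, alongside the spaghetti diagram of raw members.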

  5. Ensemble Statistical Post-Processing of the National Air Quality Forecast Capability: Enhancing Ozone Forecasts in Baltimore, Maryland

    NASA Technical Reports Server (NTRS)

    Garner, Gregory G.; Thompson, Anne M.

    2013-01-01

    An ensemble statistical post-processor (ESP) is developed for the National Air Quality Forecast Capability (NAQFC) to address the unique challenges of forecasting surface ozone in Baltimore, MD. Air quality and meteorological data were collected from the eight monitors that constitute the Baltimore forecast region. These data were used to build the ESP using a moving-block bootstrap, regression tree models, and extreme-value theory. The ESP was evaluated using a 10-fold cross-validation to avoid evaluation with the same data used in the development process. Results indicate that the ESP is conditionally biased, likely due to slight overfitting while training the regression tree models. When viewed from the perspective of a decision-maker, the ESP provides a wealth of additional information previously not available through the NAQFC alone. The user is provided the freedom to tailor the forecast to the decision at hand by using decision-specific probability thresholds that define a forecast for an ozone exceedance. Taking advantage of the ESP, the user not only receives an increase in value over the NAQFC, but also receives value for ...
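
    The decision-specific probability thresholds described above are commonly grounded in the classical cost-loss model of forecast value: a rational user takes protective action whenever the forecast probability exceeds the ratio of protection cost C to avoidable loss L. A hedged sketch with invented numbers (this is standard forecast-value theory, not code from the ESP itself):

```python
# Cost-loss decision rule: act when the event probability exceeds C/L.
# C = cost of taking protective action, L = loss if the event occurs unprotected.
# The probabilities and costs below are illustrative only.

def should_act(p_event, cost, loss):
    """Act whenever the forecast probability exceeds the cost/loss ratio."""
    return p_event > cost / loss

# A user with cheap mitigation (C/L = 0.1) acts on a 20% exceedance forecast,
# while a user with expensive mitigation (C/L = 0.5) does not:
print(should_act(0.2, 1, 10))   # True
print(should_act(0.2, 5, 10))   # False
```

    Because each user has a different C/L ratio, a probabilistic product lets every user pick the threshold that maximizes value for their own decision, which is exactly the flexibility the abstract attributes to the ESP.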

  6. Performance of the 'material Failure Forecast Method' in real-time situations: A Bayesian approach applied on effusive and explosive eruptions

    NASA Astrophysics Data System (ADS)

    Boué, A.; Lesage, P.; Cortés, G.; Valette, B.; Reyes-Dávila, G.; Arámbula-Mendoza, R.; Budi-Santoso, A.

    2016-11-01

    Most attempts at deterministic eruption forecasting are based on the material Failure Forecast Method (FFM). This method assumes that a precursory observable, such as the rate of seismic activity, can be described by a simple power law which presents a singularity at a time close to the eruption onset. Until now, this method has been applied only in a small number of cases, generally for forecasts in hindsight. In this paper, a rigorous Bayesian approach to the FFM designed for real-time applications is applied. Using an automatic recognition system, seismo-volcanic events are detected and classified according to their physical mechanism, and time series of probability distributions of the rates of events are calculated. At each time of observation, a Bayesian inversion provides estimations of the exponent of the power law and of the time of eruption, together with their probability density functions. Two criteria are defined in order to evaluate the quality and reliability of the forecasts. Our automated procedure has allowed the analysis of long, continuous seismic time series: 13 years from Volcán de Colima, Mexico, 10 years from Piton de la Fournaise, Reunion Island, France, and several months from Merapi volcano, Java, Indonesia. The new forecasting approach has been applied to 64 pre-eruptive sequences which present various types of dominant seismic activity (volcano-tectonic or long-period events) and patterns of seismicity with different levels of complexity. This has allowed us to test the FFM assumptions, to determine in which conditions the method can be applied, and to quantify the success rate of the forecasts. 62% of the precursory sequences analysed are suitable for the application of FFM, and half of the total number of eruptions are successfully forecast in hindsight. In real time, the method allows for the successful forecast of 36% of all the eruptions considered. 
Nevertheless, real-time forecasts are successful for 83% of the cases that fulfil the reliability criteria. Therefore, good confidence in the method is obtained when the reliability criteria are met.
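
    The power-law singularity underlying the FFM is easiest to see in its classical inverse-rate form: for power-law exponent alpha = 2, the inverse of the precursor rate decays linearly, and extrapolating it to zero estimates the onset time. A minimal sketch with synthetic data (the Bayesian treatment in the paper is considerably richer; this only illustrates the underlying assumption):

```python
# Classical inverse-rate Failure Forecast Method sketch: fit a least-squares
# line through (t, 1/rate) and return the time where it crosses zero.
# The synthetic rates below are illustrative, not data from the study.

def ffm_failure_time(times, rates):
    """Least-squares line through (t, 1/rate); return t where the line hits zero."""
    inv = [1.0 / r for r in rates]
    n = len(times)
    mt = sum(times) / n
    mi = sum(inv) / n
    slope = (sum((t - mt) * (y - mi) for t, y in zip(times, inv))
             / sum((t - mt) ** 2 for t in times))
    intercept = mi - slope * mt
    return -intercept / slope        # time at which 1/rate extrapolates to zero

# Synthetic accelerating seismicity: rate = 100 / (50 - t) events per hour,
# so the true failure (eruption) time is t = 50.
times = [0, 10, 20, 30, 40]
rates = [100 / (50 - t) for t in times]
t_failure = ffm_failure_time(times, rates)   # 50.0 for this exact power law
```

    Real precursory sequences are noisy and the exponent is unknown, which is why the paper estimates both the exponent and the onset time with full probability density functions rather than a single extrapolation.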

  7. Possibilities of forecasting hypercholesterinemia in pilots

    NASA Technical Reports Server (NTRS)

    Vivilov, P.

    1980-01-01

    The dependence of the frequency of hypercholesterinemia on the age, average annual flying time, functional category, qualification class, and flying specialty of 300 pilots was investigated. The risk probability coefficient of hypercholesterinemia was computed. An evaluation table was developed which gives an 84% probability of forecasting the risk of hypercholesterinemia.

  8. Forecasting paratransit services demand : review and recommendations.

    DOT National Transportation Integrated Search

    2013-06-01

    Travel demand forecasting tools for Florida's paratransit services are outdated, utilizing old national trip generation rate generalities and simple linear regression models. In its guidance for the development of mandated Transportation Disadv...

  9. Technical note: Combining quantile forecasts and predictive distributions of streamflows

    NASA Astrophysics Data System (ADS)

    Bogner, Konrad; Liechti, Katharina; Zappa, Massimiliano

    2017-11-01

    The enhanced availability of many different hydro-meteorological modelling and forecasting systems raises the issue of how to optimally combine this wealth of information. The use of deterministic and probabilistic forecasts, whose predicted future streamflows sometimes diverge widely, makes it even harder for decision makers to sift out the relevant information. In this study, multiple streamflow forecasts are aggregated from several different predictive distributions and quantile forecasts. For this combination, the Bayesian model averaging (BMA) approach, non-homogeneous Gaussian regression (NGR), also known as the ensemble model output statistics (EMOS) technique, and a novel method called Beta-transformed linear pooling (BLP) are applied. With the help of the quantile score (QS) and the continuous ranked probability score (CRPS), the combination results for the Sihl River in Switzerland, with about 5 years of forecast data, are compared and the differences between the raw and optimally combined forecasts are highlighted. The results demonstrate the importance of applying proper forecast combination methods for decision makers in the field of flood and water resource management.
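
    The two verification measures named above have simple sample-based estimators. A minimal sketch, assuming an ensemble given as a plain list of members and a single quantile forecast (illustrative values, not Sihl River data):

```python
# Sample-based estimators of the two scores used in the study.
# CRPS estimator for an ensemble: E|X - obs| - 0.5 * E|X - X'|.
# Quantile score (pinball loss) for a forecast q of the tau-quantile.

def crps_ensemble(members, obs):
    """Ensemble CRPS estimate; lower is better, 0 is a perfect point forecast."""
    n = len(members)
    term1 = sum(abs(m - obs) for m in members) / n
    term2 = sum(abs(a - b) for a in members for b in members) / (2 * n * n)
    return term1 - term2

def quantile_score(q, obs, tau):
    """Pinball loss: asymmetric penalty for over/under-forecasting the tau-quantile."""
    return (obs - q) * tau if obs >= q else (q - obs) * (1 - tau)

# A sharp ensemble centred on the observation scores much better than a biased one:
good = crps_ensemble([2.9, 3.0, 3.1], 3.0)   # ~0.022
bad = crps_ensemble([4.9, 5.0, 5.1], 3.0)    # ~1.956
qs = quantile_score(4.0, 5.0, 0.9)           # 0.9
```

    Averaging these scores over many forecast-observation pairs is how the raw and combined forecasts are compared; the combination methods (BMA, NGR/EMOS, BLP) differ in how they blend the member distributions before scoring.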

  10. Application of bayesian networks to real-time flood risk estimation

    NASA Astrophysics Data System (ADS)

    Garrote, L.; Molina, M.; Blasco, G.

    2003-04-01

    This paper presents the application of a computational paradigm taken from the field of artificial intelligence - the Bayesian network - to model the behaviour of hydrologic basins during floods. The final goal of this research is to develop representation techniques for hydrologic simulation models in order to define, develop and validate a mechanism, supported by a software environment, for building decision models for the prediction and management of river floods in real time. The emphasis is placed on providing decision makers with tools to incorporate their knowledge of basin behaviour, usually formulated in terms of rainfall-runoff models, into the process of real-time decision making during floods. A rainfall-runoff model is only one step in the process of decision making. If a reliable rainfall forecast is available and the rainfall-runoff model is well calibrated, decisions can be based mainly on model results. However, in most practical situations, uncertainties in rainfall forecasts or model performance have to be incorporated into the decision process. The computational paradigm adopted for the simulation of hydrologic processes is the Bayesian network: a directed acyclic graph that represents causal influences between linked variables. Under this representation, uncertain qualitative variables are related through causal relations quantified with conditional probabilities. The solution algorithm computes the expected probability distribution of unknown variables conditioned on the observations. An approach to representing hydrologic processes by Bayesian networks with temporal and spatial extensions is presented in this paper, together with a methodology for the development of Bayesian models using results produced by deterministic hydrologic simulation models.
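
    The idea of causal relations quantified with conditional probabilities can be made concrete with a toy three-node chain. A hedged sketch with invented probability tables, queried by summing over the hidden state (real flood-management networks would have many more variables and calibrated probabilities):

```python
# Toy Bayesian network: Rainfall -> Runoff -> Flood, with invented conditional
# probability tables, queried by exhaustive enumeration over the hidden state.

P_runoff_given_rain = {                      # P(runoff | rain)
    "heavy": {"high": 0.8, "low": 0.2},
    "light": {"high": 0.1, "low": 0.9},
}
P_flood_given_runoff = {                     # P(flood | runoff)
    "high": {True: 0.6, False: 0.4},
    "low":  {True: 0.05, False: 0.95},
}

def p_flood_given_rain(rain):
    """P(flood=True | rain) by marginalising over the hidden runoff state."""
    return sum(P_runoff_given_rain[rain][ro] * P_flood_given_runoff[ro][True]
               for ro in ("high", "low"))

p_heavy = p_flood_given_rain("heavy")   # 0.8*0.6 + 0.2*0.05 = 0.49
p_light = p_flood_given_rain("light")   # 0.1*0.6 + 0.9*0.05 = 0.105
```

    The same enumeration generalises to conditioning on any observed node, which is what lets forecast uncertainty propagate through to the flood-risk variables decision makers care about.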

  11. Short-Term Forecasting of Loads and Wind Power for Latvian Power System: Accuracy and Capacity of the Developed Tools

    NASA Astrophysics Data System (ADS)

    Radziukynas, V.; Klementavičius, A.

    2016-04-01

    The paper analyses the performance of the recently developed short-term forecasting suite for the Latvian power system. The system load and wind power are forecast using ANN and ARIMA models, respectively, and the forecasting accuracy is evaluated in terms of errors, mean absolute errors and mean absolute percentage errors. The influence of additional input variables on load forecasting errors is investigated. The interplay of hourly load and wind power forecasting errors is also evaluated for the Latvian power system with historical loads (the year 2011) and planned wind power capacities (the year 2023).

  12. Accuracy of 24- and 48-Hour Forecasts of Haines' Index

    Treesearch

    Brian E. Potter; Jonathan E. Martin

    2001-01-01

    The University of Wisconsin-Madison produces Web-accessible, 24- and 48-hour forecasts of the Haines Index (a tool used to measure the atmospheric potential for large wildfire development) for most of North America using its nonhydrostatic modeling system. The authors examined the accuracy of these forecasts using data from 1999 and 2000. Measures used include root-...

  13. Performance and robustness of probabilistic river forecasts computed with quantile regression based on multiple independent variables in the North Central USA

    NASA Astrophysics Data System (ADS)

    Hoss, F.; Fischbeck, P. S.

    2014-10-01

    This study further develops the method of quantile regression (QR) to predict exceedance probabilities of flood stages by post-processing forecasts. Using data from the 82 river gages for which the National Weather Service's North Central River Forecast Center issues daily forecasts, this is the first QR application to US river gages. Archived forecasts for lead times up to six days from 2001-2013 were analyzed. Earlier implementations of QR used the forecast itself as the only independent variable (Weerts et al., 2011; López López et al., 2014). This study adds the rise rate of the river stage in the last 24 and 48 h and the forecast errors 24 and 48 h ago to the QR model. Including those four variables significantly improved the forecasts, as measured by the Brier Skill Score (BSS). Mainly, the resolution increases, as the original QR implementation already delivered high reliability. Combining the four variables without the forecast results in much less favorable BSSs. Lastly, forecast performance does not depend on the size of the training dataset, but on the year, river gage, lead time and event threshold being forecast. We find that each event threshold requires a separate model configuration, or at least separate calibration.
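
    The Brier Skill Score used for verification here compares a probability forecast against a climatological base-rate reference. A minimal sketch with invented exceedance forecasts and binary outcomes (not data from the study):

```python
# Brier score and Brier Skill Score for binary-event probability forecasts.
# BS is the mean squared error of the probabilities; BSS compares it to the
# score of always forecasting the climatological base rate (BSS = 1 is perfect,
# BSS <= 0 means no skill over climatology). Values below are illustrative.

def brier_score(probs, outcomes):
    """Mean squared error between forecast probabilities and 0/1 outcomes."""
    return sum((p - o) ** 2 for p, o in zip(probs, outcomes)) / len(probs)

def brier_skill_score(probs, outcomes):
    """Skill relative to a constant climatological (base-rate) forecast."""
    base_rate = sum(outcomes) / len(outcomes)
    bs_ref = brier_score([base_rate] * len(outcomes), outcomes)
    return 1.0 - brier_score(probs, outcomes) / bs_ref

# Five exceedance forecasts versus what actually happened (1 = stage exceeded):
probs = [0.9, 0.1, 0.2, 0.8, 0.1]
outcomes = [1, 0, 0, 1, 0]
bss = brier_skill_score(probs, outcomes)   # ~0.908: strong skill over climatology
```

    In the study, BSS is computed per gage, lead time and threshold, which is what reveals that performance depends on those factors rather than on training-set size.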

  14. From anomalies to forecasts: Toward a descriptive model of decisions under risk, under ambiguity, and from experience.

    PubMed

    Erev, Ido; Ert, Eyal; Plonsky, Ori; Cohen, Doron; Cohen, Oded

    2017-07-01

    Experimental studies of choice behavior document distinct, and sometimes contradictory, deviations from maximization. For example, people tend to overweight rare events in 1-shot decisions under risk, and to exhibit the opposite bias when they rely on past experience. The common explanations of these results assume that the contradicting anomalies reflect situation-specific processes that involve the weighting of subjective values and the use of simple heuristics. The current article analyzes 14 choice anomalies that have been described by different models, including the Allais, St. Petersburg, and Ellsberg paradoxes, and the reflection effect. Next, it uses a choice prediction competition methodology to clarify the interaction between the different anomalies. It focuses on decisions under risk (known payoff distributions) and under ambiguity (unknown probabilities), with and without feedback concerning the outcomes of past choices. The results demonstrate that it is not necessary to assume situation-specific processes. The distinct anomalies can be captured by assuming high sensitivity to the expected return and 4 additional tendencies: pessimism, bias toward equal weighting, sensitivity to payoff sign, and an effort to minimize the probability of immediate regret. Importantly, feedback increases sensitivity to probability of regret. Simple abstractions of these assumptions, variants of the model Best Estimate and Sampling Tools (BEAST), allow surprisingly accurate ex ante predictions of behavior. Unlike the popular models, BEAST does not assume subjective weighting functions or cognitive shortcuts. Rather, it assumes the use of sampling tools and reliance on small samples, in addition to the estimation of the expected values. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  15. Olive Actual "on Year" Yield Forecast Tool Based on the Tree Canopy Geometry Using UAS Imagery.

    PubMed

    Sola-Guirado, Rafael R; Castillo-Ruiz, Francisco J; Jiménez-Jiménez, Francisco; Blanco-Roldan, Gregorio L; Castro-Garcia, Sergio; Gil-Ribes, Jesus A

    2017-07-30

    Olive is of notable importance in countries of the Mediterranean basin, and its profitability depends on several factors such as actual yield, production cost or product price. Actual "on year" Yield (AY) is the production (kg tree−1) in "on years", and this research attempts to relate it to geometrical parameters of the tree canopy. A regression equation to forecast AY based on manual canopy volume was determined from data acquired from different orchard categories and cultivars during different harvesting seasons in southern Spain. Orthoimages were acquired with unmanned aerial system (UAS) imagery, calculating individual crowns for relating to canopy volume and AY. Yield levels did not vary between orchard categories; however, they did between irrigated orchards (7000-17,000 kg ha−1) and rainfed ones (4000-7000 kg ha−1). After that, manual canopy volume was related to the individual crown area of trees calculated from the orthoimages acquired with UAS imagery. Finally, AY was forecast using both manual canopy volume and individual tree crown area as main factors for olive productivity. Forecasting AY from individual crown area alone made it possible to get a simple and cheap forecast tool for a wide range of olive orchards. Finally, the acquired information was introduced in a thematic map describing spatial AY variability obtained from orthoimage analysis, which may be a powerful tool for farmers, insurance systems, market forecasts, or to detect agronomical problems.

  16. Olive Actual “on Year” Yield Forecast Tool Based on the Tree Canopy Geometry Using UAS Imagery

    PubMed Central

    Sola-Guirado, Rafael R.; Castillo-Ruiz, Francisco J.; Jiménez-Jiménez, Francisco; Blanco-Roldan, Gregorio L.; Gil-Ribes, Jesus A.

    2017-01-01

    Olive is of notable importance in countries of the Mediterranean basin, and its profitability depends on several factors such as actual yield, production cost or product price. Actual “on year” Yield (AY) is the production (kg tree−1) in “on years”, and this research attempts to relate it to geometrical parameters of the tree canopy. A regression equation to forecast AY based on manual canopy volume was determined from data acquired from different orchard categories and cultivars during different harvesting seasons in southern Spain. Orthoimages were acquired with unmanned aerial system (UAS) imagery, calculating individual crowns for relating to canopy volume and AY. Yield levels did not vary between orchard categories; however, they did between irrigated orchards (7000–17,000 kg ha−1) and rainfed ones (4000–7000 kg ha−1). After that, manual canopy volume was related to the individual crown area of trees calculated from the orthoimages acquired with UAS imagery. Finally, AY was forecast using both manual canopy volume and individual tree crown area as main factors for olive productivity. Forecasting AY from individual crown area alone made it possible to get a simple and cheap forecast tool for a wide range of olive orchards. Finally, the acquired information was introduced in a thematic map describing spatial AY variability obtained from orthoimage analysis, which may be a powerful tool for farmers, insurance systems, market forecasts, or to detect agronomical problems. PMID:28758945

  17. Hydroclimate Forecasts in Ethiopia: Benefits, Impediments, and Ways Forward

    NASA Astrophysics Data System (ADS)

    Block, P. J.

    2014-12-01

    Numerous hydroclimate forecast models, tools, and guidance exist for application across Ethiopia and East Africa in the agricultural, water, energy, disasters, and economic sectors. This has resulted from concerted local and international interdisciplinary efforts, yet little evidence exists of rapid forecast uptake and use. We will review projected benefits and gains of seasonal forecast application, impediments, and options for the way forward. Specific case studies regarding floods, agricultural-economic links, and hydropower will be reviewed.

  18. The Ensemble Space Weather Modeling System (eSWMS): Status, Capabilities and Challenges

    NASA Astrophysics Data System (ADS)

    Fry, C. D.; Eccles, J. V.; Reich, J. P.

    2010-12-01

    Marking a milestone in space weather forecasting, the Space Weather Modeling System (SWMS) successfully completed validation testing in advance of operational testing at Air Force Weather Agency’s primary space weather production center. This is the first coupling of stand-alone, physics-based space weather models that are currently in operations at AFWA supporting the warfighter. Significant development effort went into ensuring the component models were portable and scalable while maintaining consistent results across diverse high performance computing platforms. Coupling was accomplished under the Earth System Modeling Framework (ESMF). The coupled space weather models are the Hakamada-Akasofu-Fry version 2 (HAFv2) solar wind model and GAIM1, the ionospheric forecast component of the Global Assimilation of Ionospheric Measurements (GAIM) model. The SWMS was developed by team members from AFWA, Explorations Physics International, Inc. (EXPI) and Space Environment Corporation (SEC). The successful development of the SWMS provides new capabilities beyond enabling extended lead-time, data-driven ionospheric forecasts. These include ingesting diverse data sets at higher resolution, incorporating denser computational grids at finer time steps, and performing probability-based ensemble forecasts. Work of the SWMS development team now focuses on implementing the ensemble-based probability forecast capability by feeding multiple scenarios of 5 days of solar wind forecasts to the GAIM1 model based on the variation of the input fields to the HAFv2 model. The ensemble SWMS (eSWMS) will provide the most-likely space weather scenario with uncertainty estimates for important forecast fields. The eSWMS will allow DoD mission planners to consider the effects of space weather on their systems with more advance warning than is currently possible. 
The payoff is enhanced, tailored support to the warfighter with improved capabilities, such as point-to-point HF propagation forecasts, single-frequency GPS error corrections, and high cadence, high-resolution Space Situational Awareness (SSA) products. We present the current status of eSWMS, its capabilities, limitations and path of transition to operational use.

  19. FASTER - A tool for DSN forecasting and scheduling

    NASA Technical Reports Server (NTRS)

    Werntz, David; Loyola, Steven; Zendejas, Silvino

    1993-01-01

    FASTER (Forecasting And Scheduling Tool for Earth-based Resources) is a suite of tools designed for forecasting and scheduling JPL's Deep Space Network (DSN). The DSN is a set of antennas and other associated resources that must be scheduled for satellite communications, astronomy, maintenance, and testing. FASTER consists of MS-Windows-based programs that replace two existing programs (RALPH and PC4CAST). FASTER was designed to be more flexible, maintainable, and user-friendly, and makes heavy use of commercial software to allow customization by users. FASTER implements scheduling as a two-pass process: the first pass calculates a predictive profile of resource utilization; the second pass uses this information to calculate a cost function used in a dynamic-programming optimization step. This information allows the scheduler to 'look ahead' at activities that are not yet scheduled. FASTER has succeeded in allowing wider access to data and tools, reducing the effort expended and increasing the quality of analysis.
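
    The second-pass idea, in which a value/cost function drives a dynamic-programming optimization, can be illustrated with the textbook weighted-interval-scheduling DP for a single resource. This is a generic sketch, not FASTER's actual algorithm, and the activity times and values are invented:

```python
# Generic weighted interval scheduling by dynamic programming: pick a
# non-overlapping subset of activities on one resource (e.g. one antenna)
# maximizing total value. Illustrative only; not FASTER's real cost function.
import bisect

def best_schedule(activities):
    """activities: list of (start, end, value); returns the maximum total value
    of a non-overlapping subset (back-to-back activities are allowed)."""
    acts = sorted(activities, key=lambda a: a[1])      # sort by end time
    ends = [a[1] for a in acts]
    best = [0] * (len(acts) + 1)                       # best[i]: first i activities
    for i, (s, e, v) in enumerate(acts, start=1):
        j = bisect.bisect_right(ends, s, 0, i - 1)     # last activity ending <= s
        best[i] = max(best[i - 1], best[j] + v)        # skip it, or take it
    return best[-1]

# Four hypothetical tracking passes (start hour, end hour, science value):
tracks = [(0, 3, 5), (2, 5, 6), (4, 7, 5), (6, 9, 4)]
print(best_schedule(tracks))   # 10: take (0,3,5) and (4,7,5)
```

    The "look ahead" described in the abstract plays a similar role: knowing the value of future, not-yet-scheduled demand changes what the DP should commit to now.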

  20. Experiments with Seasonal Forecasts of ocean conditions for the Northern region of the California Current upwelling system

    PubMed Central

    Siedlecki, Samantha A.; Kaplan, Isaac C.; Hermann, Albert J.; Nguyen, Thanh Tam; Bond, Nicholas A.; Newton, Jan A.; Williams, Gregory D.; Peterson, William T.; Alin, Simone R.; Feely, Richard A.

    2016-01-01

    Resource managers at the state, federal, and tribal levels make decisions on a weekly to quarterly basis, and fishers operate on a similar timeframe. To determine the potential of a support tool for these efforts, a seasonal forecast system is experimented with here. JISAO’s Seasonal Coastal Ocean Prediction of the Ecosystem (J-SCOPE) features dynamical downscaling of regional ocean conditions in Washington and Oregon waters using a combination of a high-resolution regional model with biogeochemistry and forecasts from NOAA’s Climate Forecast System (CFS). Model performance and predictability were examined for sea surface temperature (SST), bottom temperature, bottom oxygen, pH, and aragonite saturation state through model hindcasts, reforecast, and forecast comparisons with observations. Results indicate J-SCOPE forecasts have measurable skill on seasonal timescales. Experiments suggest that seasonal forecasting of ocean conditions important for fisheries is possible with the right combination of components. Those components include regional predictability on seasonal timescales of the physical environment from a large-scale model, a high-resolution regional model with biogeochemistry that simulates seasonal conditions in hindcasts, a relationship with local stakeholders, and a real-time observational network. Multiple efforts and approaches in different regions would advance knowledge to provide additional tools to fishers and other stakeholders. PMID:27273473

  1. Experiments with Seasonal Forecasts of ocean conditions for the Northern region of the California Current upwelling system

    NASA Astrophysics Data System (ADS)

    Siedlecki, Samantha A.; Kaplan, Isaac C.; Hermann, Albert J.; Nguyen, Thanh Tam; Bond, Nicholas A.; Newton, Jan A.; Williams, Gregory D.; Peterson, William T.; Alin, Simone R.; Feely, Richard A.

    2016-06-01

    Resource managers at the state, federal, and tribal levels make decisions on a weekly to quarterly basis, and fishers operate on a similar timeframe. To determine the potential of a support tool for these efforts, a seasonal forecast system is experimented with here. JISAO’s Seasonal Coastal Ocean Prediction of the Ecosystem (J-SCOPE) features dynamical downscaling of regional ocean conditions in Washington and Oregon waters using a combination of a high-resolution regional model with biogeochemistry and forecasts from NOAA’s Climate Forecast System (CFS). Model performance and predictability were examined for sea surface temperature (SST), bottom temperature, bottom oxygen, pH, and aragonite saturation state through model hindcasts, reforecast, and forecast comparisons with observations. Results indicate J-SCOPE forecasts have measurable skill on seasonal timescales. Experiments suggest that seasonal forecasting of ocean conditions important for fisheries is possible with the right combination of components. Those components include regional predictability on seasonal timescales of the physical environment from a large-scale model, a high-resolution regional model with biogeochemistry that simulates seasonal conditions in hindcasts, a relationship with local stakeholders, and a real-time observational network. Multiple efforts and approaches in different regions would advance knowledge to provide additional tools to fishers and other stakeholders.

  2. Experiments with Seasonal Forecasts of ocean conditions for the Northern region of the California Current upwelling system.

    PubMed

    Siedlecki, Samantha A; Kaplan, Isaac C; Hermann, Albert J; Nguyen, Thanh Tam; Bond, Nicholas A; Newton, Jan A; Williams, Gregory D; Peterson, William T; Alin, Simone R; Feely, Richard A

    2016-06-07

    Resource managers at the state, federal, and tribal levels make decisions on a weekly to quarterly basis, and fishers operate on a similar timeframe. To determine the potential of a support tool for these efforts, a seasonal forecast system is experimented with here. JISAO's Seasonal Coastal Ocean Prediction of the Ecosystem (J-SCOPE) features dynamical downscaling of regional ocean conditions in Washington and Oregon waters using a combination of a high-resolution regional model with biogeochemistry and forecasts from NOAA's Climate Forecast System (CFS). Model performance and predictability were examined for sea surface temperature (SST), bottom temperature, bottom oxygen, pH, and aragonite saturation state through model hindcasts, reforecast, and forecast comparisons with observations. Results indicate J-SCOPE forecasts have measurable skill on seasonal timescales. Experiments suggest that seasonal forecasting of ocean conditions important for fisheries is possible with the right combination of components. Those components include regional predictability on seasonal timescales of the physical environment from a large-scale model, a high-resolution regional model with biogeochemistry that simulates seasonal conditions in hindcasts, a relationship with local stakeholders, and a real-time observational network. Multiple efforts and approaches in different regions would advance knowledge to provide additional tools to fishers and other stakeholders.

  3. Fews-Risk: A step towards risk-based flood forecasting

    NASA Astrophysics Data System (ADS)

    Bachmann, Daniel; Eilander, Dirk; de Leeuw, Annemargreet; Diermanse, Ferdinand; Weerts, Albrecht; de Bruijn, Karin; Beckers, Joost; Boelee, Leonore; Brown, Emma; Hazlewood, Caroline

    2015-04-01

    Operational flood prediction and the assessment of flood risk are important components of flood management. Currently, model-based prediction of discharge and/or water level in a river is common practice in operational flood forecasting, and decisions about specific emergency measures are made on the basis of these predicted values. However, the information provided for decision support is restricted to purely hydrological or hydraulic aspects of a flood. Information about weak sections within the flood defences, flood-prone areas, and assets at risk in the protected areas is rarely used in a model-based flood forecasting system. This information is often available for strategic planning, but not in a format appropriate for operational purposes. The idea of FEWS-Risk is to extend existing flood forecasting systems with elements of strategic flood risk analysis, such as probabilistic failure analysis, two-dimensional flood spreading simulation, and the analysis of flood impacts and consequences. Additional information is thus provided to decision makers, such as:
    • location, timing, and probability of failure of defined sections of the flood defence line;
    • flood spreading, extent, and hydraulic values in the hinterland caused by an overflow or a breach flow;
    • impacts and consequences in case of flooding in the protected areas, such as injuries or casualties and/or damage to critical infrastructure or the economy.
    In contrast with purely hydraulic operational information, these additional data support decisions on crucial questions within an operational flood forecasting framework, such as:
    • Where should I reinforce my flood defence system?
    • What type of action can I take to mend a weak spot in my flood defences?
    • What are the consequences of a breach?
    • Which areas should I evacuate first?
This presentation outlines the additional required workflows towards risk-based flood forecasting systems. In a cooperation between HR Wallingford and Deltares, the extended workflows are being integrated into the Delft-FEWS software system. Delft-FEWS provides modules for managing the data handling and forecasting process. Results of a pilot study that demonstrates the new tools are presented. The value of the newly generated information for decision support during a flood event is discussed.

  4. The Role of Probability-Based Inference in an Intelligent Tutoring System.

    ERIC Educational Resources Information Center

    Mislevy, Robert J.; Gitomer, Drew H.

    Probability-based inference in complex networks of interdependent variables is an active topic in statistical research, spurred by such diverse applications as forecasting, pedigree analysis, troubleshooting, and medical diagnosis. This paper concerns the role of Bayesian inference networks for updating student models in intelligent tutoring…

  5. Application of snowcovered area to runoff forecasting in selected basins of the Sierra Nevada, California. [Kings, Kern and Kaweah River Basins

    NASA Technical Reports Server (NTRS)

    Brown, A. J.; Hannaford, J. F. (Principal Investigator)

    1980-01-01

    The author has identified the following significant results. Direct overlay onto 1:1,000,000 prints takes about one third the time of 1:500,000 zoom transfer scope analysis using transparencies, but the consistency of the transparencies reduces the time for data analysis. LANDSAT data received on transparencies are better and more easily interpreted than the near real-time data from Quick Look, or imagery from other sources such as NOAA. The greatest potential for water supply forecasting is probably in improving forecast accuracy and in expanding forecast services during the period of snowmelt. Problems of a transient snow line and uncertainties in future weather are the main reasons that snow-covered area appears to offer little improvement in water supply forecast accuracy during the period of snowpack accumulation.

  6. Time Relevance of Convective Weather Forecast for Air Traffic Automation

    NASA Technical Reports Server (NTRS)

    Chan, William N.

    2006-01-01

    The Federal Aviation Administration (FAA) is handling nearly 120,000 flights a day through its Air Traffic Management (ATM) system, and air traffic congestion is expected to increase substantially over the next 20 years. Weather-induced impacts to throughput and efficiency are the leading cause of flight delays, accounting for 70% of all delays, with convective weather accounting for 60% of all weather-related delays. To support the Next Generation Air Traffic System goal of operating at 3X current capacity in the NAS, ATC decision support tools are being developed to create advisories that assist controllers under all weather constraints. Initial development of these decision support tools did not integrate information regarding weather constraints such as thunderstorms and relied on an additional system to provide that information. Future decision support tools should move towards an integrated system where weather constraints are factored into the advisory of a Decision Support Tool (DST). Several groups, such as NASA-Ames, Lincoln Laboratory, and MITRE, are integrating convective weather data with DSTs. A survey of current convective weather forecast and observation data shows they span a wide range of temporal and spatial resolutions. Short-range convective observations can be obtained every 5 mins, with longer range forecasts out to several days updated every 6 hrs. Today, short-range forecasts of less than 2 hours have a temporal resolution of 5 mins. Beyond 2 hours, forecasts have a much lower temporal resolution of typically 1 hour. Spatial resolutions vary from 1 km for short-range to 40 km for longer range forecasts. Improving the accuracy of long-range convective forecasts is a major challenge. A report published by the National Research Council states that improvements in convective forecasts for the 2 to 6 hour time frame will only be achieved for a limited set of convective phenomena in the next 5 to 10 years.
    Improved longer range forecasts will be probabilistic, as opposed to the deterministic shorter range forecasts. Despite the known low level of confidence in long-range convective forecasts, these data are still useful to a DST routing algorithm: it is better to develop an aircraft route using the best information available than no information. The temporally coarse long-range forecast data need to be interpolated to be useful to a DST. A DST uses aircraft trajectory predictions that need to be evaluated for impacts by convective storms. Each time-step of a trajectory prediction needs to be checked against weather data, so for the case of coarse temporal data there needs to be a method to fill in weather data where there is none. Simply using the coarse weather data without any interpolation can result in DST routes that are impacted by regions of strong convection. Increasing the temporal resolution of these data can be achieved, but results in a large dataset that may prove to be an operational challenge in transmission and loading by a DST. Currently, it takes about 7 mins to retrieve a 7mb RUC2 forecast file from NOAA at NASA-Ames Research Center. A prototype NCWF6 1 hour forecast is about 3mb in size. A 6 hour NCWF6 forecast with a 1 hr forecast time-step will be about 18mb (6 x 3mb). A 6 hour NCWF6 forecast with a 15 min forecast time-step will be about 72mb (24 x 3mb). Based on the time it takes to retrieve a 7mb RUC2 forecast, it will take approximately 70 mins to retrieve a 6 hour NCWF6 forecast with 15 min time steps. Until those issues are addressed, there is a need to develop an algorithm that interpolates between these temporally coarse long-range forecasts. This paper describes a method of how to use low temporal resolution probabilistic weather forecasts in a DST. The beginning of this paper is a description of some convective weather forecast and observation products, followed by an example of how weather data are used by a DST.
    The subsequent sections describe probabilistic forecasts, followed by a description of a method to use low temporal resolution probabilistic weather forecasts by assigning a relevance value to these data outside of their valid times.
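The interpolation and relevance ideas above can be sketched briefly. The abstract does not give the actual algorithm, so the linear blend between bracketing forecast valid times and the exponential relevance decay (including the 30-minute e-folding scale) below are illustrative assumptions only:

```python
import numpy as np

def interp_prob(field_t0, field_t1, t0, t1, t):
    """Linearly interpolate convective-probability fields between two
    coarse forecast valid times t0 and t1 (minutes). Hypothetical scheme,
    not the paper's method."""
    w = (t - t0) / (t1 - t0)
    w = min(max(w, 0.0), 1.0)  # clamp outside the bracketing interval
    return (1.0 - w) * field_t0 + w * field_t1

def relevance(t, valid_t, e_fold_min=30.0):
    """Relevance weight for using a forecast outside its valid time,
    decaying exponentially with the offset in minutes (assumed form)."""
    return float(np.exp(-abs(t - valid_t) / e_fold_min))
```

A trajectory time-step falling between two hourly probability grids would then be checked against the blended field, weighted by its relevance.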

  7. A Bayesian modelling method for post-processing daily sub-seasonal to seasonal rainfall forecasts from global climate models and evaluation for 12 Australian catchments

    NASA Astrophysics Data System (ADS)

    Schepen, Andrew; Zhao, Tongtiegang; Wang, Quan J.; Robertson, David E.

    2018-03-01

    Rainfall forecasts are an integral part of hydrological forecasting systems at sub-seasonal to seasonal timescales. In seasonal forecasting, global climate models (GCMs) are now the go-to source for rainfall forecasts. For hydrological applications however, GCM forecasts are often biased and unreliable in uncertainty spread, and calibration is therefore required before use. There are sophisticated statistical techniques for calibrating monthly and seasonal aggregations of the forecasts. However, calibration of seasonal forecasts at the daily time step typically uses very simple statistical methods or climate analogue methods. These methods generally lack the sophistication to achieve unbiased, reliable and coherent forecasts of daily amounts and seasonal accumulated totals. In this study, we propose and evaluate a Rainfall Post-Processing method for Seasonal forecasts (RPP-S), which is based on the Bayesian joint probability modelling approach for calibrating daily forecasts and the Schaake Shuffle for connecting the daily ensemble members of different lead times. We apply the method to post-process ACCESS-S forecasts for 12 perennial and ephemeral catchments across Australia and for 12 initialisation dates. RPP-S significantly reduces bias in raw forecasts and improves both skill and reliability. RPP-S forecasts are also more skilful and reliable than forecasts derived from ACCESS-S forecasts that have been post-processed using quantile mapping, especially for monthly and seasonal accumulations. Several opportunities to improve the robustness and skill of RPP-S are identified. The new RPP-S post-processed forecasts will be used in ensemble sub-seasonal to seasonal streamflow applications.
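The Schaake Shuffle step of RPP-S can be illustrated with a minimal sketch: calibrated ensemble values at each lead time are sorted and then reordered to follow the rank structure of a historical template, restoring realistic correlation across lead times. This follows the standard published procedure, not the authors' code:

```python
import numpy as np

def schaake_shuffle(ensemble, template):
    """Reorder ensemble members (shape: members x lead_times) so their rank
    structure across lead times matches a historical template of the same
    shape. Marginal distributions at each lead time are preserved."""
    ensemble = np.asarray(ensemble, dtype=float)
    template = np.asarray(template, dtype=float)
    out = np.empty_like(ensemble)
    for j in range(ensemble.shape[1]):                   # each lead time separately
        ranks = np.argsort(np.argsort(template[:, j]))   # rank of each template member
        out[:, j] = np.sort(ensemble[:, j])[ranks]       # sorted values placed by rank
    return out
```

After shuffling, member i is small (or large) at exactly the lead times where template member i was small (or large), so daily amounts accumulate into coherent seasonal totals.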

  8. Prospective earthquake forecasts at the Himalayan Front after the 25 April 2015 M 7.8 Gorkha Mainshock

    USGS Publications Warehouse

    Segou, Margaret; Parsons, Thomas E.

    2016-01-01

    When a major earthquake strikes, the resulting devastation can be compounded or even exceeded by the subsequent cascade of triggered seismicity. As the Nepalese recover from the 25 April 2015 shock, knowledge of what comes next is essential. We calculate the redistribution of crustal stresses and implied earthquake probabilities for different periods, from daily to 30 years into the future. An initial forecast was completed before an M 7.3 earthquake struck on 12 May 2015, enabling a preliminary assessment; postforecast seismicity has so far occurred within a zone of fivefold probability gain. Evaluation of the forecast performance, using two months of seismic data, reveals that stress-based approaches present improved skill for higher-magnitude triggered seismicity. Our results suggest that considering the total stress field, rather than only the coseismic one, improves the spatial performance of the model based on the estimation of a wide range of potential triggered faults following a mainshock.

  9. Probability of US Heat Waves Affected by a Subseasonal Planetary Wave Pattern

    NASA Technical Reports Server (NTRS)

    Teng, Haiyan; Branstator, Grant; Wang, Hailan; Meehl, Gerald A.; Washington, Warren M.

    2013-01-01

    Heat waves are thought to result from subseasonal atmospheric variability. Atmospheric phenomena driven by tropical convection, such as the Asian monsoon, have been considered potential sources of predictability on subseasonal timescales. Mid-latitude atmospheric dynamics have been considered too chaotic to allow significant prediction skill at lead times beyond the typical 10-day range of weather forecasts. Here we use a 12,000-year integration of an atmospheric general circulation model to identify a pattern of subseasonal atmospheric variability that can help improve forecast skill for heat waves in the United States. We find that heat waves tend to be preceded, 15-20 days in advance, by a pattern of anomalous atmospheric planetary waves with a wavenumber of 5. This circulation pattern can arise as a result of internal atmospheric dynamics and is not necessarily linked to tropical heating. We conclude that some mid-latitude circulation anomalies that increase the probability of heat waves are predictable beyond the typical weather forecast range.

  10. Willingness-to-pay for a probabilistic flood forecast: a risk-based decision-making game

    NASA Astrophysics Data System (ADS)

    Arnal, Louise; Ramos, Maria-Helena; Coughlan de Perez, Erin; Cloke, Hannah Louise; Stephens, Elisabeth; Wetterhall, Fredrik; van Andel, Schalk Jan; Pappenberger, Florian

    2016-08-01

    Probabilistic hydro-meteorological forecasts have over the last decades been used more frequently to communicate forecast uncertainty. This uncertainty is twofold, as it constitutes both an added value and a challenge for the forecaster and the user of the forecasts. Many authors have demonstrated the added (economic) value of probabilistic over deterministic forecasts across the water sector (e.g. flood protection, hydroelectric power management and navigation). However, the richness of the information is also a source of challenges for operational uses, due partially to the difficulty in transforming the probability of occurrence of an event into a binary decision. This paper presents the results of a risk-based decision-making game on the topic of flood protection mitigation, called "How much are you prepared to pay for a forecast?". The game was played at several workshops in 2015, which were attended by operational forecasters and academics working in the field of hydro-meteorology. The aim of this game was to better understand the role of probabilistic forecasts in decision-making processes and their perceived value by decision-makers. Based on the participants' willingness-to-pay for a forecast, the results of the game show that the value (or the usefulness) of a forecast depends on several factors, including the way users perceive the quality of their forecasts and link it to the perception of their own performances as decision-makers.
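The difficulty of "transforming the probability of occurrence of an event into a binary decision" is often illustrated with the classical cost-loss model, in which protecting is worthwhile whenever the event probability exceeds the cost-loss ratio C/L. This is a standard textbook framing, not the game's actual payoff rules:

```python
def expected_expense(p, cost, loss, protect):
    """Expected expense for one event under the cost-loss model: pay `cost`
    to protect, or risk `loss` with probability `p`. Illustrative only."""
    return cost if protect else p * loss

def optimal_action(p, cost, loss):
    # Protecting is worthwhile whenever the forecast probability of the
    # event exceeds the cost-loss ratio C/L.
    return p > cost / loss
```

A user's willingness-to-pay for a forecast is then bounded by the reduction in expected expense it enables relative to acting on climatology alone.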

  11. Relationships between rainfall and Combined Sewer Overflow (CSO) occurrences

    NASA Astrophysics Data System (ADS)

    Mailhot, A.; Talbot, G.; Lavallée, B.

    2015-04-01

    Combined Sewer Overflow (CSO) has been recognized as a major environmental issue in many countries. In Canada, the proposed reinforcement of the CSO frequency regulations will result in new constraints on municipal development. Municipalities will have to demonstrate that new developments do not increase CSO frequency above a reference level based on historical CSO records. Governmental agencies will also have to define a framework to assess the impact of new developments on CSO frequency and the efficiency of the various proposed measures to maintain CSO frequency at its historic level. In such a context, it is important to correctly assess the average number of days with CSO and to define relationships between CSO frequency and rainfall characteristics. This paper investigates such relationships using available CSO and rainfall datasets for Quebec. CSO records for 4285 overflow structures (OS) were analyzed. A simple model based on rainfall thresholds was developed to forecast the occurrence of a CSO on a given day based on daily rainfall values. The estimated probability of days with CSO has been used to estimate the rainfall threshold value at each OS by imposing that the probability of exceeding this rainfall value on a given day be equal to the estimated probability of days with CSO. The forecast skill of this model was assessed for 3437 OS using contingency tables. The statistical significance of the forecast skill could be assessed for 64.2% of these OS. The threshold model demonstrated significant forecast skill for 91.3% of these OS, confirming that for most OS a simple threshold model can be used to assess the occurrence of CSO.
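The threshold model and its contingency-table verification can be sketched as follows. The calibration step picks the rainfall value whose exceedance probability matches the estimated probability of a CSO day, as the abstract describes; the significance test itself is not reproduced here:

```python
import numpy as np

def calibrate_threshold(daily_rain, p_cso_day):
    """Rainfall value whose daily exceedance probability equals the
    estimated probability of a CSO day at this overflow structure."""
    return float(np.quantile(daily_rain, 1.0 - p_cso_day))

def threshold_forecast(daily_rain, threshold):
    """Forecast a 'CSO day' whenever daily rainfall exceeds the threshold."""
    return np.asarray(daily_rain) > threshold

def contingency_skill(forecast, observed):
    """2x2 contingency-table counts plus hit rate and false-alarm rate,
    a standard verification layout for binary forecasts."""
    f = np.asarray(forecast, bool)
    o = np.asarray(observed, bool)
    hits = int(np.sum(f & o))
    misses = int(np.sum(~f & o))
    false_alarms = int(np.sum(f & ~o))
    correct_neg = int(np.sum(~f & ~o))
    hit_rate = hits / (hits + misses) if hits + misses else float("nan")
    far = false_alarms / (false_alarms + correct_neg) if false_alarms + correct_neg else float("nan")
    return {"hits": hits, "misses": misses, "false_alarms": false_alarms,
            "correct_negatives": correct_neg, "hit_rate": hit_rate,
            "false_alarm_rate": far}
```

Applied per overflow structure, this yields the kind of skill summary the paper evaluates across its 3437 OS.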

  12. Ensemble Streamflow Forecast Improvements in NYC's Operations Support Tool

    NASA Astrophysics Data System (ADS)

    Wang, L.; Weiss, W. J.; Porter, J.; Schaake, J. C.; Day, G. N.; Sheer, D. P.

    2013-12-01

    Like most other water supply utilities, New York City's Department of Environmental Protection (DEP) has operational challenges associated with drought and wet weather events. During drought conditions, DEP must maintain water supply reliability to 9 million customers as well as meet environmental release requirements downstream of its reservoirs. During and after wet weather events, DEP must maintain turbidity compliance in its unfiltered Catskill and Delaware reservoir systems and minimize spills to mitigate downstream flooding. Proactive reservoir management - such as release restrictions to prepare for a drought or preventative drawdown in advance of a large storm - can alleviate negative impacts associated with extreme events. It is important for water managers to understand the risks associated with proactive operations so unintended consequences such as endangering water supply reliability with excessive drawdown prior to a storm event are minimized. Probabilistic hydrologic forecasts are a critical tool in quantifying these risks and allow water managers to make more informed operational decisions. DEP has recently completed development of an Operations Support Tool (OST) that integrates ensemble streamflow forecasts, real-time observations, and a reservoir system operations model into a user-friendly graphical interface that allows its water managers to take robust and defensible proactive measures in the face of challenging system conditions. Since initial development of OST was first presented at the 2011 AGU Fall Meeting, significant improvements have been made to the forecast system. First, the monthly AR1 forecasts ('Hirsch method') were upgraded with a generalized linear model (GLM) utilizing historical daily correlations ('Extended Hirsch method' or 'eHirsch'). 
The development of eHirsch forecasts improved predictive skill over the Hirsch method in the first week to a month from the forecast date and produced more realistic hydrographs on the tail end of high flow periods. These improvements allowed DEP to more effectively manage water quality control and spill mitigation operations immediately after storm events. Later on, post-processed hydrologic forecasts from the National Weather Service (NWS) including the Advanced Hydrologic Prediction Service (AHPS) and the Hydrologic Ensemble Forecast Service (HEFS) were implemented into OST. These forecasts further increased the predictive skill over the initial statistical models as current basin conditions (e.g. soil moisture, snowpack) and meteorological forecasts (with HEFS) are now explicitly represented. With the post-processed HEFS forecasts, DEP may now truly quantify impacts associated with wet weather events on the horizon, rather than relying on statistical representations of current hydrologic trends. This presentation will highlight the benefits of the improved forecasts using examples from actual system operations.

  13. Helping Resource Managers Understand Hydroclimatic Variability and Forecasts: A Case Study in Research Equity

    NASA Astrophysics Data System (ADS)

    Hartmann, H. C.; Pagano, T. C.; Sorooshian, S.; Bales, R.

    2002-12-01

    Expectations for hydroclimatic research are evolving as changes in the contract between science and society require researchers to provide "usable science" that can improve resource management policies and practices. However, decision makers have a broad range of abilities to access, interpret, and apply scientific research. "High-end users" have technical capabilities and operational flexibility capable of readily exploiting new information and products. "Low-end users" have fewer resources and are less likely to change their decision making processes without clear demonstration of benefits by influential early adopters (i.e., high-end users). Should research programs aim for efficiency, targeting high-end users? Should they aim for impact, targeting decisions with high economic value or great influence (e.g., state or national agencies)? Or should they focus on equity, whereby outcomes benefit groups across a range of capabilities? In this case study, we focus on hydroclimatic variability and forecasts. Agencies and individuals responsible for resource management decisions have varying perspectives about hydroclimatic variability and opportunities for using forecasts to improve decision outcomes. Improper interpretation of forecasts is widespread and many individuals find it difficult to place forecasts in an appropriate regional historical context. In addressing these issues, we attempted to mitigate traditional inequities in the scope, communication, and accessibility of hydroclimatic research results. High-end users were important in prioritizing information needs, while low-end users were important in determining how information should be communicated. For example, high-end users expressed hesitancy to use seasonal forecasts in the absence of quantitative performance evaluations. 
Our subsequently developed forecast evaluation framework and research products, however, were guided by the need for a continuum of evaluation measures and interpretive materials to enable low-end users to increase their understanding of probabilistic forecasts, credibility concepts, and implications for decision making. We also developed an interactive forecast assessment tool accessible over the Internet, to support resource decisions by individuals as well as agencies. The tool provides tutorials for guiding forecast interpretation, including quizzes that allow users to test their forecast interpretation skills. Users can monitor recent and historical observations for selected regions, communicated using terminology consistent with available forecast products. The tool also allows users to evaluate forecast performance for the regions, seasons, forecast lead times, and performance criteria relevant to their specific decision making situations. Using consistent product formats, the evaluation component allows individuals to use results at the level they are capable of understanding, while offering opportunity to shift to more sophisticated criteria. Recognizing that many individuals lack Internet access, the forecast assessment webtool design also includes capabilities for customized report generation so extension agents or other trusted information intermediaries can provide material to decision makers at meetings or site visits.

  14. The net benefits of human-ignited wildfire forecasting: the case of Tribal land units in the United States

    PubMed Central

    Prestemon, Jeffrey P.; Butry, David T.; Thomas, Douglas S.

    2017-01-01

    Research shows that some categories of human-ignited wildfires might be forecastable, due to their temporal clustering, with the possibility that resources could be pre-deployed to help reduce the incidence of such wildfires. We estimated several kinds of incendiary and other human-ignited wildfire forecast models at the weekly time step for tribal land units in the United States, evaluating their forecast skill out of sample. Analyses show that an Autoregressive Conditional Poisson (ACP) model of both incendiary and non-incendiary human-ignited wildfires is more accurate out of sample compared to alternatives, and the simplest of the ACP models performed the best. Additionally, an ensemble of these and simpler, less analytically intensive approaches performed even better. Wildfire hotspot forecast models using all model types were evaluated in a simulation mode to assess the net benefits of forecasts in the context of law enforcement resource reallocations. Our analyses show that such hotspot tools could yield large positive net benefits for the tribes in terms of suppression expenditures averted for incendiary wildfires but that the hotspot tools were less likely to be beneficial for addressing outbreaks of non-incendiary human-ignited wildfires. PMID:28769549
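The core of an Autoregressive Conditional Poisson (ACP, also called INGARCH(1,1)) model is a conditional-mean recursion that lets this week's expected count depend on last week's observed count and last week's mean. A minimal sketch is below; the parameter names and values are illustrative, since the abstract does not give the fitted specification:

```python
import numpy as np

def acp_conditional_means(counts, omega, alpha, beta, lam0=None):
    """Conditional means of an ACP / INGARCH(1,1) count model:
        lam_t = omega + alpha * y_{t-1} + beta * lam_{t-1}.
    Returns lam[0..T]; lam[-1] is the one-step-ahead forecast mean."""
    counts = np.asarray(counts, dtype=float)
    lam = np.empty(len(counts) + 1)
    # start from the unconditional mean omega / (1 - alpha - beta)
    lam[0] = lam0 if lam0 is not None else omega / (1.0 - alpha - beta)
    for t in range(len(counts)):
        lam[t + 1] = omega + alpha * counts[t] + beta * lam[t]
    return lam
```

Temporal clustering of ignitions shows up as alpha + beta approaching 1: a high-count week raises the forecast mean for the following weeks, which is what makes hotspot pre-deployment plausible.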

  15. The net benefits of human-ignited wildfire forecasting: the case of Tribal land units in the United States.

    PubMed

    Prestemon, Jeffrey P; Butry, David T; Thomas, Douglas S

    2016-01-01

    Research shows that some categories of human-ignited wildfires might be forecastable, due to their temporal clustering, with the possibility that resources could be pre-deployed to help reduce the incidence of such wildfires. We estimated several kinds of incendiary and other human-ignited wildfire forecast models at the weekly time step for tribal land units in the United States, evaluating their forecast skill out of sample. Analyses show that an Autoregressive Conditional Poisson (ACP) model of both incendiary and non-incendiary human-ignited wildfires is more accurate out of sample compared to alternatives, and the simplest of the ACP models performed the best. Additionally, an ensemble of these and simpler, less analytically intensive approaches performed even better. Wildfire hotspot forecast models using all model types were evaluated in a simulation mode to assess the net benefits of forecasts in the context of law enforcement resource reallocations. Our analyses show that such hotspot tools could yield large positive net benefits for the tribes in terms of suppression expenditures averted for incendiary wildfires but that the hotspot tools were less likely to be beneficial for addressing outbreaks of non-incendiary human-ignited wildfires.

  16. Accuracy of short‐term sea ice drift forecasts using a coupled ice‐ocean model

    PubMed Central

    Zhang, Jinlun

    2015-01-01

    Abstract Arctic sea ice drift forecasts of 6 h–9 days for the summer of 2014 are generated using the Marginal Ice Zone Modeling and Assimilation System (MIZMAS); the model is driven by 6 h atmospheric forecasts from the Climate Forecast System (CFSv2). Forecast ice drift speed is compared to drifting buoys and other observational platforms. Forecast positions are compared with actual positions 24 h–8 days since forecast. Forecast results are further compared to those from the forecasts generated using an ice velocity climatology driven by multiyear integrations of the same model. The results are presented in the context of scheduling the acquisition of high‐resolution images that need to follow buoys or scientific research platforms. RMS errors for ice speed are on the order of 5 km/d for 24–48 h since forecast using the sea ice model compared with 9 km/d using climatology. Predicted buoy position RMS errors are 6.3 km for 24 h and 14 km for 72 h since forecast. Model biases in ice speed and direction can be reduced by adjusting the air drag coefficient and water turning angle, but the adjustments do not affect verification statistics. This suggests that improved atmospheric forecast forcing may further reduce the forecast errors. The model remains skillful for 8 days. Using the forecast model increases the probability of tracking a target drifting in sea ice with a 10 km × 10 km image from 60 to 95% for a 24 h forecast and from 27 to 73% for a 48 h forecast. PMID:27818852

  17. Forecasts and Warnings of Extreme Solar Storms at the Sun

    NASA Astrophysics Data System (ADS)

    Lundstedt, H.

    2015-12-01

    The most pressing space weather forecasts and warnings are those of the most intense solar flares and coronal mass ejections. However, in trying to develop these forecasts and warnings, we are confronted with many fundamental questions, among them: How do we define an observable measure for an extreme solar storm? How extreme can a solar storm become, and how long is the build-up time? How do we make forecasts and warnings? Many have contributed to clarifying these general questions. In this presentation we describe our latest results on the topological complexity of magnetic fields and the use of SDO SHARP parameters. The complexity concept is then used to discuss the second question. Finally, we describe probability estimates of extreme solar storms.

  18. Forecasting distributions of large federal-lands fires utilizing satellite and gridded weather information

    USGS Publications Warehouse

    Preisler, H.K.; Burgan, R.E.; Eidenshink, J.C.; Klaver, Jacqueline M.; Klaver, R.W.

    2009-01-01

    The current study presents a statistical model for assessing the skill of fire danger indices and for forecasting the distribution of the expected numbers of large fires over a given region for the upcoming week. The procedure permits development of daily maps that forecast, for the forthcoming week and within federal lands, percentiles of the distributions of (i) the number of ignitions; (ii) the number of fires above a given size; and (iii) the conditional probability of fires greater than a specified size, given ignition. As an illustration, we used the methods to study the skill of the Fire Potential Index, an index that incorporates satellite and surface observations to map fire potential at a national scale, in forecasting distributions of large fires. © 2009 IAWF.

  19. ECMWF Extreme Forecast Index for water vapor transport: A forecast tool for atmospheric rivers and extreme precipitation

    NASA Astrophysics Data System (ADS)

    Lavers, David A.; Pappenberger, Florian; Richardson, David S.; Zsoter, Ervin

    2016-11-01

    In winter, heavy precipitation and floods along the west coasts of midlatitude continents are largely caused by intense water vapor transport (integrated vapor transport (IVT)) within the atmospheric river of extratropical cyclones. This study builds on previous findings that showed that forecasts of IVT have higher predictability than precipitation, by applying and evaluating the European Centre for Medium-Range Weather Forecasts Extreme Forecast Index (EFI) for IVT in ensemble forecasts during three winters across Europe. We show that the IVT EFI is more able (than the precipitation EFI) to capture extreme precipitation in forecast week 2 during forecasts initialized in a positive North Atlantic Oscillation (NAO) phase; conversely, the precipitation EFI is better during the negative NAO phase and at shorter leads. An IVT EFI example for storm Desmond in December 2015 highlights its potential to identify upcoming hydrometeorological extremes, which may prove useful to the user and forecasting communities.

  20. WOD - Weather On Demand forecasting system

    NASA Astrophysics Data System (ADS)

    Rognvaldsson, Olafur; Ragnarsson, Logi; Stanislawska, Karolina

    2017-04-01

    The backbone of the Belgingur forecasting system (called WOD - Weather On Demand) is the WRF-Chem atmospheric model, with a number of in-house customisations. Initial and boundary data are taken from the Global Forecast System, operated by the National Oceanic and Atmospheric Administration (NOAA). Operational forecasts use cycling of a number of parameters, mainly deep soil and surface fields. This is done to minimise spin-up effects and to ensure proper book-keeping of hydrological fields such as snow accumulation and runoff, as well as the constituents of various chemical parameters. The WOD system can be used to create conventional short- to medium-range weather forecasts for any location on the globe. The WOD system can also be used for air quality purposes (e.g. dispersion forecasts from volcanic eruptions) and as a tool to provide input to other modelling systems, such as hydrological models. A wide variety of post-processing options are also available, making WOD an ideal tool for creating highly customised output that can be tailored to the specific needs of individual end-users. The most recent addition to the WOD system is an integrated verification system where forecasts can be compared to surface observations from chosen locations. Forecast visualisation, such as weather charts, meteograms, weather icons and tables, is done via a number of web components that can be configured to serve the varying needs of different end-users. The WOD system itself can be installed automatically on hardware running a range of Linux-based OS. System upgrades can also be done in a semi-automatic fashion, i.e. upgrades and/or bug-fixes can be pushed to the end-user hardware without system downtime. Importantly, the WOD system requires only rudimentary knowledge of WRF modelling and Linux operating systems on the part of the end-user, making it an ideal NWP tool in locations with limited IT infrastructure.

  1. Real-time Social Internet Data to Guide Forecasting Models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Del Valle, Sara Y.

Our goal is to improve decision support by monitoring and forecasting events using social media, mathematical models, and quantified model uncertainty. Our approach is real-time, data-driven forecasts with quantified uncertainty: not just for weather anymore. Information flow from human observations of events through an Internet system and classification algorithms is used to produce forecasts with quantified uncertainty. In summary, we want to develop new tools to extract useful information from Internet data streams, develop new approaches to assimilate real-time information into predictive models, and validate these approaches by forecasting events; our ultimate goal is to develop an event forecasting system using mathematical approaches and heterogeneous data streams.

  2. Modeling nonbreeding distributions of shorebirds and waterfowl in response to climate change

    USGS Publications Warehouse

    Reese, Gordon; Skagen, Susan K.

    2017-01-01

    To identify areas on the landscape that may contribute to a robust network of conservation areas, we modeled the probabilities of occurrence of several en route migratory shorebirds and wintering waterfowl in the southern Great Plains of North America, including responses to changing climate. We predominantly used data from the eBird citizen-science project to model probabilities of occurrence relative to land-use patterns, spatial distribution of wetlands, and climate. We projected models to potential future climate conditions using five representative general circulation models of the Coupled Model Intercomparison Project 5 (CMIP5). We used Random Forests to model probabilities of occurrence and compared the time periods 1981–2010 (hindcast) and 2041–2070 (forecast) in “model space.” Projected changes in shorebird probabilities of occurrence varied with species-specific general distribution pattern, migration distance, and spatial extent. Species using the western and northern portion of the study area exhibited the greatest likelihoods of decline, whereas species with more easterly occurrences, mostly long-distance migrants, had the greatest projected increases in probability of occurrence. At an ecoregional extent, differences in probabilities of shorebird occurrence ranged from −0.015 to 0.045 when averaged across climate models, with the largest increases occurring early in migration. Spatial shifts are predicted for several shorebird species. Probabilities of occurrence of wintering Mallards and Northern Pintail are predicted to increase by 0.046 and 0.061, respectively, with northward shifts projected for both species. When incorporated into partner land management decision tools, results at ecoregional extents can be used to identify wetland complexes with the greatest potential to support birds in the nonbreeding season under a wide range of future climate scenarios.
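The ecoregional comparison above averages forecast-minus-hindcast differences in probability of occurrence across the five climate models. A minimal sketch of that averaging step, with hypothetical GCM names and probabilities (the study itself derives these from Random Forests models):

```python
# Sketch: average the forecast-minus-hindcast change in probability of
# occurrence across climate models for one species (illustrative values).
def mean_probability_change(hindcast, forecast):
    """Mean change in probability of occurrence across climate models.

    hindcast, forecast: dicts mapping GCM name -> mean probability of
    occurrence over an ecoregion (1981-2010 vs 2041-2070).
    """
    models = hindcast.keys() & forecast.keys()
    changes = [forecast[m] - hindcast[m] for m in models]
    return sum(changes) / len(changes)

# Hypothetical probabilities for one long-distance migrant under five GCMs:
hind = {"GCM-A": 0.30, "GCM-B": 0.28, "GCM-C": 0.31, "GCM-D": 0.29, "GCM-E": 0.32}
fore = {"GCM-A": 0.34, "GCM-B": 0.33, "GCM-C": 0.35, "GCM-D": 0.33, "GCM-E": 0.35}
print(round(mean_probability_change(hind, fore), 3))
```

A positive mean change, as here, would correspond to the projected increases reported for the more easterly, long-distance migrants.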

  3. Staged decision making based on probabilistic forecasting

    NASA Astrophysics Data System (ADS)

    Booister, Nikéh; Verkade, Jan; Werner, Micha; Cranston, Michael; Cumiskey, Lydia; Zevenbergen, Chris

    2016-04-01

Flood forecasting systems reduce, but cannot eliminate, uncertainty about the future. Probabilistic forecasts explicitly show that uncertainty remains. However, compared to deterministic forecasts, a dimension is added ('probability' or 'likelihood'), and this added dimension makes decision making slightly more complicated. One decision-support technique is the cost-loss approach, which defines whether or not to issue a warning or implement mitigation measures (a risk-based method). With the cost-loss method a warning is issued when the ratio of the response costs to the damage reduction is less than or equal to the probability of the possible flood event. This cost-loss method is not widely used, because it is motivated by economic values alone and is relatively static (no reasoning, a yes/no decision). Nevertheless it has high potential to improve risk-based decision making based on probabilistic flood forecasting, because no other methods are known that deal with probabilities in decision making. The main aim of this research was to explore ways of making probability-based decision making with the cost-loss method more applicable in practice. The exploration began by identifying other situations in which decisions are taken based on uncertain forecasts or predictions. These cases spanned a range of degrees of uncertainty, from known uncertainty to deep uncertainty. Based on the types of uncertainties, concepts for dealing with situations and responses were analysed, and applicable concepts were chosen. Out of this analysis the concepts of flexibility and robustness emerged as a good fit with the existing method. Instead of taking big decisions with bigger consequences at once, the idea is that actions and decisions are cut up into smaller pieces, and the decision to implement is finally made based on the economic costs of decisions and measures and the reduced effect of flooding.
The more lead time there is in flood event management, the more damage can be reduced. With decisions based on probabilistic forecasts, partial decisions can be made earlier in time (at a lower probability) and can be scaled up or down later, when there is more certainty about whether the event will take place. Partial decisions are often cheaper, or shorten the final mitigation time at the moment when there is more certainty. The proposed method is tested on Stonehaven, on the Carron River in Scotland. Decisions to implement demountable defences in the town are currently made on a very short lead time due to the absence of certainty. Application showed that staged decision making is possible and gives the decision maker more time to respond to a situation. The decision maker is able to take a lower-regret decision under higher uncertainty, with fewer related negative consequences. Although it is not possible to quantify intangible effects, reducing these effects is part of the analysis. Above all, the proposed approach has been shown to be a possible improvement in economic terms and opens up possibilities for more flexible and robust decision making.
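The cost-loss rule described above is simple enough to state in a few lines. A minimal sketch (all cost and loss figures hypothetical), including why a cheap partial measure can be triggered earlier than a full deployment:

```python
def issue_warning(p_event, cost, loss):
    """Cost-loss rule: act when forecast probability >= C/L.

    cost: cost of responding (mitigation); loss: avoidable damage if the
    event occurs and no action was taken. Acting is economically justified
    when p_event >= cost / loss.
    """
    if loss <= 0:
        raise ValueError("loss must be positive")
    return p_event >= cost / loss

# Staged decisions: a cheap partial measure has a lower C/L threshold, so
# it is triggered at a lower probability than the full deployment.
p = 0.25
print(issue_warning(p, cost=10_000, loss=100_000))   # partial measure, C/L = 0.1 -> True
print(issue_warning(p, cost=60_000, loss=100_000))   # full deployment, C/L = 0.6 -> False
```

As the forecast probability rises with shortening lead time, the same rule then triggers the more expensive measures, which is the staged behaviour the paper proposes.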

  4. Modelling the 2013 North Aegean (Greece) seismic sequence: geometrical and frictional constraints, and aftershock probabilities

    NASA Astrophysics Data System (ADS)

    Karakostas, Vassilis; Papadimitriou, Eleftheria; Gospodinov, Dragomir

    2014-04-01

The 2013 January 8 Mw 5.8 North Aegean earthquake sequence took place on one of the ENE-WSW trending parallel dextral strike-slip fault branches in this area, in the continuation of the large 1968 (M = 7.5) rupture. The source mechanism of the main event indicates predominantly strike-slip faulting, in agreement with what is expected from regional seismotectonics. It was the largest event to have occurred in the area since the establishment of the Hellenic Unified Seismological Network (HUSN), with an adequate number of stations at close distances and full azimuthal coverage, thus providing the chance of an exhaustive analysis of its aftershock sequence. The main shock was followed by a handful of aftershocks with M ≥ 4.0 and tens with M ≥ 3.0. Relocation was performed using the recordings from HUSN and a proper crustal model for the area, along with time corrections at each station relative to the model used. Investigation of the spatial and temporal behaviour of seismicity revealed possible triggering of adjacent fault segments. Theoretical static stress changes from the main shock give a preliminary explanation for the aftershock distribution away from the main rupture. The off-fault seismicity is well explained if μ > 0.5 and B = 0.0, evidencing high fault friction. In an attempt to forecast occurrence probabilities of strong events (Mw ≥ 5.0), estimations were performed following the Restricted Epidemic Type Aftershock Sequence (RETAS) model. The identified best-fitting MOF model was used to execute 1-day forecasts for such aftershocks and follow the probability evolution in time during the sequence. Forecasting was also implemented on the basis of a temporal model of aftershock occurrence different from the modified Omori formula (the ETAS model), which resulted in a probability gain (though small) in strong-aftershock forecasting for the beginning of the sequence.
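The 1-day forecast probabilities mentioned above come from treating aftershocks as a Poisson process whose rate decays with time. A minimal sketch using the modified Omori law with hypothetical parameters (the paper's RETAS/MOF fit is considerably more elaborate):

```python
import math

def omori_rate(t, K, c, p):
    """Modified Omori aftershock rate at time t (days after the main shock)."""
    return K / (t + c) ** p

def prob_one_or_more(t0, t1, K, c, p, n_steps=10_000):
    """P(at least one aftershock in [t0, t1]) for a Poisson process with
    the modified Omori rate, via midpoint integration of the rate."""
    dt = (t1 - t0) / n_steps
    expected = sum(omori_rate(t0 + (i + 0.5) * dt, K, c, p) * dt
                   for i in range(n_steps))
    return 1.0 - math.exp(-expected)

# Hypothetical parameters for strong (Mw >= 5.0) aftershocks, one day ahead:
print(round(prob_one_or_more(1.0, 2.0, K=0.5, c=0.1, p=1.1), 3))
```

Re-running this with a sliding 1-day window as the catalogue updates gives the kind of evolving probability curve the study follows through the sequence.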

  5. A New Statistical Model for Eruption Forecasting at Open Conduit Volcanoes: an Application to Mt Etna and Kilauea Volcanoes

    NASA Astrophysics Data System (ADS)

Passarelli, Luigi; Sanso, Bruno; Sandri, Laura; Marzocchi, Warner

    2010-05-01

One of the main goals in volcanology is to forecast volcanic eruptions. A useful forecast must be made before the onset of a volcanic eruption, using the data available at that time, with the aim of mitigating the volcanic risk associated with the event. In other words, models implemented for forecasting purposes have to be able to provide "forward" forecasts and should avoid merely "retrospective" fitting of the available data. In this perspective, the main idea of the present model is to forecast the next volcanic eruption after the end of the last one, using only the data available at that time. We focus our attention on volcanoes with an open conduit regime and high eruption frequency. We assume a generalization of the classical time predictable model to describe the eruptive behavior of open conduit volcanoes, and we use a Bayesian hierarchical model to make probabilistic forecasts. We apply the model to Kilauea volcano eruptive data and Mt. Etna volcano flank eruption data. The aims of this model are: 1) to test whether or not the Kilauea and Mt Etna volcanoes follow time predictable behavior; 2) to discuss the volcanological implications of the inferred time predictable model parameters; 3) to compare the forecast capabilities of this model with other models in the literature. The results obtained using an MCMC sampling algorithm show that both volcanoes follow time predictable behavior. The numerical values of the inferred time predictable model parameters suggest that the amount of erupted volume could change the dynamics of the magma chamber refilling process during the repose period. The probability gain of this model compared with other models in the literature is appreciably greater than zero. This means that our model produces better forecasts than previous models and could be used in a probabilistic volcanic hazard assessment scheme.
In this perspective, the probabilities of eruption given by our model for Mt Etna volcano flank eruptions are published on an internet website and are updated after any change in the eruptive activity.
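In the time predictable model, the repose after an eruption scales with the volume that eruption drained. A minimal non-Bayesian sketch of the test, fitting log(repose) against log(volume) by least squares on hypothetical data (the paper instead uses a Bayesian hierarchical model with MCMC):

```python
import math

def fit_time_predictable(volumes, reposes):
    """Least-squares fit of log(repose) = a + b*log(volume).

    A slope b close to 1 is consistent with time predictable behaviour
    (repose proportional to the volume of the preceding eruption).
    """
    xs = [math.log(v) for v in volumes]
    ys = [math.log(t) for t in reposes]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    a = my - b * mx
    return a, b

# Hypothetical eruption volumes (10^6 m^3) and following repose times (yr):
vols = [10, 40, 20, 80, 30]
reps = [1.1, 4.2, 1.9, 8.5, 3.0]
a, b = fit_time_predictable(vols, reps)
print(round(b, 2))
```

With real catalogues, the posterior distribution of b (rather than a point estimate) is what lets the authors say whether each volcano follows time predictable behaviour.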

  6. Agroclimate.Org: Tools and Information for a Climate Resilient Agriculture in the Southeast USA

    NASA Astrophysics Data System (ADS)

    Fraisse, C.

    2014-12-01

AgroClimate (http://agroclimate.org) is a web-based system developed to help the agricultural industry in the southeastern USA reduce risks associated with climate variability and change. It includes climate-related information and dynamic application tools that interact with a climate and crop database system. Information available includes climate monitoring and forecasts combined with information about crop management practices that help increase the resiliency of the agricultural industry in the region. Recently we have included smartphone apps in the AgroClimate suite of tools, including irrigation management and crop disease alert systems. Decision support tools available in AgroClimate include: (a) Climate risk: expected (probabilistic) and historical climate information and freeze risk; (b) Crop yield risk: expected yield based on soil type, planting date, and basic management practices for selected commodities, and historical county yield databases; (c) Crop diseases: disease risk monitoring and forecasting for strawberry and citrus; (d) Crop development: monitoring and forecasting of growing degree-days and chill accumulation; (e) Drought: monitoring and forecasting of selected drought indices; (f) Footprints: carbon and water footprint calculators. The system also provides background information about the main drivers of climate variability and basic information about climate change in the Southeast USA. AgroClimate has been widely used as an educational tool by the Cooperative Extension Services in the region and also by producers. It is now being replicated internationally, with versions implemented in Mozambique and Paraguay.

  7. Forecasting municipal solid waste generation using prognostic tools and regression analysis.

    PubMed

    Ghinea, Cristina; Drăgoi, Elena Niculina; Comăniţă, Elena-Diana; Gavrilescu, Marius; Câmpean, Teofil; Curteanu, Silvia; Gavrilescu, Maria

    2016-11-01

For adequate planning of waste management systems, accurate forecasting of waste generation is an essential step, since various factors can affect waste trends. Predictive and prognostic models are useful tools that provide reliable support for decision-making processes. In this paper indicators such as number of residents, population age, urban life expectancy, and total municipal solid waste were used as input variables in prognostic models in order to predict the amount of solid waste fractions. We applied the Waste Prognostic Tool, regression analysis and time series analysis to forecast municipal solid waste generation and composition, considering the Iasi, Romania case study. Regression equations were determined for six solid waste fractions (paper, plastic, metal, glass, biodegradable and other waste). Accuracy measures were calculated, and the results showed that the S-curve trend model is the most suitable for municipal solid waste (MSW) prediction. Copyright © 2016 Elsevier Ltd. All rights reserved.
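The S-curve trend model found most suitable above is a logistic curve: generation grows toward a saturation level. A minimal sketch with hypothetical parameters (in practice the ceiling, growth rate and inflection year are fitted to the historical series):

```python
import math

def s_curve(t, ceiling, k, t_mid):
    """Logistic (S-curve) trend: waste generation approaches a ceiling.

    ceiling: saturation level (e.g. kt/year); k: growth rate; t_mid: year
    of the inflection point. All parameter values here are hypothetical.
    """
    return ceiling / (1.0 + math.exp(-k * (t - t_mid)))

# Hypothetical trajectory for total MSW in kt/year:
for year in (2010, 2016, 2025):
    print(year, round(s_curve(year, ceiling=120.0, k=0.3, t_mid=2012), 1))
```

The flattening tail is what distinguishes the S-curve from linear or exponential trend models when extrapolating a decade ahead.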

  8. Applied Meteorology Unit Quarterly Report, Second Quarter FY-13

    NASA Technical Reports Server (NTRS)

    Bauman, William; Crawford, Winifred; Watson, Leela; Shafer, Jaclyn; Huddleston, Lisa

    2013-01-01

The AMU team worked on six tasks for their customers: (1) Ms. Crawford continued work on the objective lightning forecast task for airports in east-central Florida, and began work on developing a dual-Doppler analysis with local Doppler radars, (2) Ms. Shafer continued work for Vandenberg Air Force Base on an automated tool to relate pressure gradients to peak winds, (3) Dr. Huddleston continued work to develop a lightning timing forecast tool for the Kennedy Space Center/Cape Canaveral Air Force Station area, (4) Dr. Bauman continued work on a severe weather forecast tool focused on east-central Florida, (5) Mr. Decker began developing a wind pairs database for the Launch Services Program to use when evaluating upper-level winds for launch vehicles, and (6) Dr. Watson began work to assimilate observational data into the high-resolution model configurations she created for Wallops Flight Facility and the Eastern Range.

  9. Probabilistic rainfall warning system with an interactive user interface

    NASA Astrophysics Data System (ADS)

    Koistinen, Jarmo; Hohti, Harri; Kauhanen, Janne; Kilpinen, Juha; Kurki, Vesa; Lauri, Tuomo; Nurmi, Pertti; Rossi, Pekka; Jokelainen, Miikka; Heinonen, Mari; Fred, Tommi; Moisseev, Dmitri; Mäkelä, Antti

    2013-04-01

A real-time 24/7 automatic alert system is in operational use at the Finnish Meteorological Institute (FMI). It consists of gridded forecasts of the exceedance probabilities of rainfall class thresholds in the continuous lead-time range of 1 hour to 5 days. Nowcasting up to six hours applies ensemble-member extrapolations of weather radar measurements. With 2.8 GHz processors using 8 threads it takes about 20 seconds to generate 51 radar-based ensemble members in a grid of 760 x 1226 points. Nowcasting also exploits lightning density and satellite-based pseudo rainfall estimates. The latter utilize the convective rain rate (CRR) estimate from Meteosat Second Generation. The extrapolation technique applies atmospheric motion vectors (AMV), originally developed for upper wind estimation with satellite images. Exceedance probabilities of four rainfall accumulation categories are computed for the future 1 h and 6 h periods, and they are updated every 15 minutes. For longer forecasts, exceedance probabilities are calculated for future 6 and 24 h periods during the next 4 days. From approximately 1 hour to 2 days the Poor Man's Ensemble Prediction System (PEPS) is used, applying e.g. the high-resolution short-range Numerical Weather Prediction models HIRLAM and AROME. The longest forecasts apply EPS data from the European Centre for Medium-Range Weather Forecasts (ECMWF). The blending of the ensemble sets from the various forecast sources is performed by mixing accumulations with equal exceedance probabilities. The blending system contains a real-time adaptive estimator of the predictability of radar-based extrapolations. The uncompressed output data are written to file for each member, with a total size of 10 GB. Ensemble data from other sources (satellite, lightning, NWP) are converted to the same geometry as the radar data and blended as explained above. A verification system utilizing telemetering rain gauges has been established. Alert dissemination, e.g.
for citizens and professional end users, uses SMS messages and, in the near future, smartphone maps. The present interactive user interface facilitates free selection of alert sites and two warning thresholds (any rain, heavy rain) at any location in Finland. The pilot service was tested by 1000-3000 users during the summers of 2010 and 2012. As an example of dedicated end-user services, gridded exceedance scenarios (at probabilities of 5 %, 50 % and 90 %) of hourly rainfall accumulations for the next 3 hours have been utilized as online input data for the influent model at the Greater Helsinki Wastewater Treatment Plant.
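At each grid point, the exceedance probability of a rainfall class is simply the fraction of ensemble members above the threshold. A minimal sketch with a hypothetical subset of members (the operational system uses 51 radar-based members per point):

```python
def exceedance_probability(members, threshold):
    """Fraction of ensemble members whose accumulation exceeds a threshold.

    members: forecast rainfall accumulations (mm) from the extrapolation
    ensemble at one grid point; values below are hypothetical.
    """
    return sum(m > threshold for m in members) / len(members)

# Hypothetical 1 h accumulations (mm) for ten members at one point:
members = [0.0, 0.2, 1.5, 3.0, 4.8, 7.1, 0.9, 2.2, 5.5, 12.0]
print(exceedance_probability(members, threshold=1.0))   # "any rain" class -> 0.7
print(exceedance_probability(members, threshold=5.0))   # "heavy rain" class -> 0.3
```

Blending across sources then amounts to combining, for each threshold, accumulations with equal exceedance probabilities from the radar, PEPS and ECMWF ensembles.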

  10. Predicting Airspace Capacity Impacts Using the Consolidated Storm Prediction for Aviation

    NASA Technical Reports Server (NTRS)

    Russell, Carl

    2010-01-01

Convective weather is currently the largest contributor to air traffic delays in the United States. In order to make effective traffic flow management decisions to mitigate these delays, weather forecasts must be made as early and as accurately as possible. A forecast product that could be used to mitigate convective weather impacts is the Consolidated Storm Prediction for Aviation. This product provides forecasts of cloud water content and convective top heights at 0- to 8-hour look-ahead times. The objective of this study was to examine a method of predicting the impact of convective weather on air traffic sector capacities using these forecasts. Polygons representing forecast convective weather were overlaid at multiple flight levels on a sector map to calculate the fraction of each sector covered by weather. The fractional volume coverage was used as the primary metric to determine convection's impact on sectors. Results reveal that the forecasts can be used to predict the probability and magnitude of weather impacts on sector capacity up to eight hours in advance.
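The fractional volume coverage metric above can be sketched on a discretised sector: count the sector cells, across flight levels, that fall inside any forecast weather region. This toy version uses axis-aligned boxes in place of the real forecast polygons, with all geometry hypothetical:

```python
def fractional_coverage(sector_cells, weather_regions):
    """Fraction of a sector's (x, y, flight-level) cells covered by weather.

    sector_cells: set of grid cells approximating the sector volume;
    weather_regions: axis-aligned boxes (x0, x1, y0, y1, fl0, fl1) standing
    in for forecast convective-weather polygons at multiple flight levels.
    """
    covered = {
        (x, y, fl)
        for (x, y, fl) in sector_cells
        for (x0, x1, y0, y1, fl0, fl1) in weather_regions
        if x0 <= x <= x1 and y0 <= y <= y1 and fl0 <= fl <= fl1
    }
    return len(covered) / len(sector_cells)

# A toy 4 x 4 sector across two flight levels, one convective box:
sector = {(x, y, fl) for x in range(4) for y in range(4) for fl in (310, 350)}
weather = [(1, 2, 1, 2, 310, 350)]   # hypothetical forecast region
print(fractional_coverage(sector, weather))   # 8 of 32 cells -> 0.25
```

Mapping such coverage fractions to capacity reductions is the additional, empirical step the study evaluates against the 0- to 8-hour forecasts.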

  11. Contrasting environments associated with storm prediction center tornado outbreak forecasts using synoptic-scale composite analysis

    NASA Astrophysics Data System (ADS)

    Bates, Alyssa Victoria

Tornado outbreaks have significant human impact, so it is imperative that forecasts of these phenomena are accurate. As the synoptic setup lays the foundation for a forecast, synoptic-scale aspects of Storm Prediction Center (SPC) outbreak forecasts of varying accuracy were assessed. The percentages of tornado outbreaks within SPC 10% tornado probability polygons were calculated, and false alarm events were considered separately. The outbreaks were separated into quartiles using a point-in-polygon algorithm. Statistical composite fields were created to represent the synoptic conditions of these groups and facilitate comparison. Overall, temperature advection had the greatest differences between the groups. Additionally, there were significant differences in jet streak strengths and amounts of vertical wind shear. The events forecast with low accuracy consisted of the weakest synoptic-scale setups. These results suggest that events with weak synoptic setups should be regarded as areas of concern by tornado outbreak forecasters.

  12. Evaluation of Ensemble Water Supply and Demands Forecasts for Water Management in the Klamath River Basin

    NASA Astrophysics Data System (ADS)

    Broman, D.; Gangopadhyay, S.; McGuire, M.; Wood, A.; Leady, Z.; Tansey, M. K.; Nelson, K.; Dahm, K.

    2017-12-01

The Upper Klamath River Basin in south-central Oregon and north-central California is home to the Klamath Irrigation Project, which is operated by the Bureau of Reclamation and provides water to around 200,000 acres of agricultural lands. The project is managed in consideration of not only water deliveries to irrigators, but also wildlife refuge water demands, biological opinion requirements for Endangered Species Act (ESA) listed fish, and Tribal Trust responsibilities. Climate change has the potential to impact water management in terms of the volume and timing of water and the ability to meet multiple objectives. Current operations use a spreadsheet-based decision support tool, with water supply forecasts from the Natural Resources Conservation Service (NRCS) and California-Nevada River Forecast Center (CNRFC). This tool is currently limited in its ability to incorporate ensemble forecasts, which offer the potential for improved operations by quantifying forecast uncertainty. To address these limitations, this study has worked to develop a RiverWare-based water resource systems model, flexible enough to use across multiple decision time-scales, from short-term operations out to long-range planning. Systems model development has been accompanied by operational system development to handle data management and multiple modeling components. Using a set of ensemble hindcasts, this study seeks to answer several questions: A) Does a new set of ensemble streamflow forecasts have additional skill beyond the current forecasts, and allow for improved decision making under changing conditions? B) Do the net irrigation water requirement forecasts developed in this project to quantify agricultural demands, and the reservoir evaporation forecasts, provide additional benefits to decision making beyond water supply forecasts? C) What benefit do ensemble forecasts have in the context of water management decisions?

  13. Training the next generation of scientists in Weather Forecasting: new approaches with real models

    NASA Astrophysics Data System (ADS)

    Carver, Glenn; Váňa, Filip; Siemen, Stephan; Kertesz, Sandor; Keeley, Sarah

    2014-05-01

The European Centre for Medium-Range Weather Forecasts (ECMWF) operationally produces medium-range forecasts using what is internationally acknowledged as the world-leading global weather forecast model. Future development of this scientifically advanced model relies on the continued availability of experts in the field of meteorological science with high-level software skills. ECMWF therefore has a vested interest in young scientists and University graduates developing the necessary skills in numerical weather prediction, including both scientific and technical aspects. The OpenIFS project at ECMWF maintains a portable version of the ECMWF forecast model (known as IFS) for use in education and research at Universities, National Meteorological Services and other research and education organisations. OpenIFS models can be run on desktop or high-performance computers to produce weather forecasts in a similar way to the operational forecasts at ECMWF. ECMWF also provides the Metview desktop application, a modern, graphical, and easy-to-use tool for analysing and visualising forecasts that is routinely used by scientists and forecasters at ECMWF and other institutions. The combination of Metview with the OpenIFS models has the potential to deliver classroom-friendly tools allowing students to apply their theoretical knowledge to real-world examples using a world-leading weather forecasting model. In this paper we describe how the OpenIFS model has been used for teaching. We describe the use of Linux-based 'virtual machines' pre-packaged on USB sticks that support a technically easy and safe way of providing 'classroom-on-a-stick' learning environments for advanced training in numerical weather prediction. We welcome discussions with interested parties.

  14. Empirical prediction intervals improve energy forecasting

    PubMed Central

    Kaack, Lynn H.; Apt, Jay; Morgan, M. Granger; McSharry, Patrick

    2017-01-01

    Hundreds of organizations and analysts use energy projections, such as those contained in the US Energy Information Administration (EIA)’s Annual Energy Outlook (AEO), for investment and policy decisions. Retrospective analyses of past AEO projections have shown that observed values can differ from the projection by several hundred percent, and thus a thorough treatment of uncertainty is essential. We evaluate the out-of-sample forecasting performance of several empirical density forecasting methods, using the continuous ranked probability score (CRPS). The analysis confirms that a Gaussian density, estimated on past forecasting errors, gives comparatively accurate uncertainty estimates over a variety of energy quantities in the AEO, in particular outperforming scenario projections provided in the AEO. We report probabilistic uncertainties for 18 core quantities of the AEO 2016 projections. Our work frames how to produce, evaluate, and rank probabilistic forecasts in this setting. We propose a log transformation of forecast errors for price projections and a modified nonparametric empirical density forecasting method. Our findings give guidance on how to evaluate and communicate uncertainty in future energy outlooks. PMID:28760997
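The CRPS used above to rank the density forecasting methods has a closed form when the forecast density is Gaussian, which is the case the paper found competitive for most AEO quantities. A minimal sketch of that closed form (standard result, not code from the paper):

```python
import math

def crps_gaussian(x, mu, sigma):
    """Closed-form CRPS of a Gaussian forecast density N(mu, sigma^2)
    against an observation x (lower is better):
    CRPS = sigma * [z(2*Phi(z) - 1) + 2*phi(z) - 1/sqrt(pi)], z = (x-mu)/sigma.
    """
    z = (x - mu) / sigma
    pdf = math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)
    cdf = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
    return sigma * (z * (2.0 * cdf - 1.0) + 2.0 * pdf - 1.0 / math.sqrt(math.pi))

# Even a perfectly centred forecast has CRPS > 0: sharpness matters too.
print(round(crps_gaussian(x=0.0, mu=0.0, sigma=1.0), 4))
```

Averaging this score over many projection-observation pairs is what allows the out-of-sample comparison of empirical density methods, including against the AEO scenario spreads.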

  15. Tropical Cyclone Wind Probability Forecasting (WINDP).

    DTIC Science & Technology

    1981-04-01

The accuracy of small probabilities (below 10%) is limited by the number of significant digits given; therefore it should be regarded as being...

  16. Using Heliospheric Imaging for Storm Forecasting - SMEI CME Observations as a Tool for Operational Forecasting at AFWA

    NASA Astrophysics Data System (ADS)

    Webb, D. F.; Johnston, J. C.; Fry, C. D.; Kuchar, T. A.

    2008-12-01

Observations of coronal mass ejections (CMEs) from heliospheric imagers such as the Solar Mass Ejection Imager (SMEI) can lead to significant improvements in operational space weather forecasting. We are working with the Air Force Weather Agency (AFWA) to ingest SMEI all-sky imagery with appropriate tools to help forecasters improve their operational space weather forecasts. We describe two approaches: 1) near-real-time analysis of propagating CMEs from SMEI images alone, combined with near-Sun observations of CME onsets, and 2) using these calculations of speed as a mid-course correction to the HAFv2 solar wind model forecasts. HAFv2 became operational at AFWA in late 2006. The objective is to determine a set of practical procedures that the duty forecaster can use to update or correct a solar wind forecast using heliospheric imager data. SMEI observations can be used on their own to make storm forecasts, as recently discussed in Webb et al. (Space Weather, in press, 2008). We have developed a point-and-click analysis tool for use with SMEI images and are working with AFWA to ensure that timely SMEI images are available for analyses. When a frontside solar eruption occurs, especially if within about 45 deg. of Sun center, a forecaster checks for an associated CME observed by a coronagraph within an appropriate time window. If found, especially if the CME is a halo type, the forecaster checks SMEI observations about a day later, depending on the apparent initial CME speed, for possibly associated CMEs. If one is found, then the leading edge is measured over several successive frames and an elongation-time plot constructed. A minimum of three data points, i.e., over 3-4 orbits or about 6 hours, is necessary for such a plot. Using the solar source location and onset time of the CME from, e.g., SOHO observations, and assuming radial propagation, a distance-time relation is calculated and extrapolated to the 1 AU distance.
As shown by Webb et al., the storm onset time is then expected to be about 3 hours after this 1 AU arrival time (AT). The prediction program is updated as more SMEI data become available. Currently when an appropriate solar event occurs, AFWA routinely runs the HAFv2 model to make a forecast of the shock and ejecta arrival times at Earth. SMEI data can be used to improve this prediction. The HAFv2 model can produce synthetic sky maps of predicted CME brightness for comparison with SMEI images. The forecaster uses SMEI imagery to observe and track the CME. The forecaster then measures the CME location and speed using the SMEI imagery and the HAFv2 synthetic sky maps. After comparing the SMEI and HAFv2 results, the forecaster can adjust a key input to HAFv2, such as the initial speed of the disturbance at the Sun or the mid-course speed. The forecaster then iteratively runs HAFv2 until the observed and forecast sky maps match. The final HAFv2 solution becomes the new forecast. When the CME/shock arrives at (or does not reach) Earth, the forecaster verifies the forecast and updates the forecast skill statistics. Eventually, we plan to develop a more automated version of this procedure.
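The extrapolation step described above, fitting a distance-time relation to successive SMEI tracking points, extrapolating to 1 AU, and adding about 3 hours to get storm onset, can be sketched as follows. All tracking values are hypothetical; real input comes from elongation-time measurements converted to radial distance:

```python
def predict_storm_onset(times_h, dists_au, travel_to_onset_h=3.0):
    """Fit distance = d0 + v*t to >= 3 tracking points (least squares),
    extrapolate to 1 AU, and add ~3 h from arrival to storm onset, per
    the Webb et al. rule quoted in the abstract."""
    n = len(times_h)
    mt = sum(times_h) / n
    md = sum(dists_au) / n
    v = sum((t - mt) * (d - md) for t, d in zip(times_h, dists_au)) / \
        sum((t - mt) ** 2 for t in times_h)
    d0 = md - v * mt
    arrival_h = (1.0 - d0) / v          # time at which distance reaches 1 AU
    return arrival_h + travel_to_onset_h

# Three hypothetical measurements over ~6 h (hours since CME onset, AU):
t_obs = [24.0, 27.0, 30.0]
d_obs = [0.40, 0.45, 0.50]
print(round(predict_storm_onset(t_obs, d_obs), 1))   # hours after CME onset
```

The constant-speed fit is the simplest possible choice; re-running it as each new SMEI frame arrives mirrors the "prediction program is updated as more SMEI data become available" workflow.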

  17. Calibration of decadal ensemble predictions

    NASA Astrophysics Data System (ADS)

    Pasternack, Alexander; Rust, Henning W.; Bhend, Jonas; Liniger, Mark; Grieger, Jens; Müller, Wolfgang; Ulbrich, Uwe

    2017-04-01

Decadal climate predictions are of great socio-economic interest due to the corresponding planning horizons of several political and economic decisions. Due to the uncertainties of weather and climate forecasts (e.g. initial-condition uncertainty), they are issued in a probabilistic way. One issue frequently observed for probabilistic forecasts is that they tend not to be reliable, i.e. the forecast probabilities are not consistent with the relative frequency of the associated observed events. Thus, these kinds of forecasts need to be re-calibrated. While re-calibration methods for seasonal time scales are available and frequently applied, these methods still have to be adapted for decadal time scales and their characteristic problems, such as climate trend and lead-time-dependent bias. Regarding this, we propose a method to re-calibrate decadal ensemble predictions that takes the above-mentioned characteristics into account. Finally, this method is applied to decadal forecasts from the MiKlip system (Germany's initiative for decadal prediction) and validated.
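A first ingredient of such a re-calibration is the lead-time-dependent bias itself: the mean hindcast-minus-observation error as a function of lead year. A minimal sketch with hypothetical numbers (the proposed method additionally handles the climate trend and ensemble spread, which this sketch omits):

```python
def lead_time_bias(hindcasts, observations):
    """Mean forecast-minus-observation error per lead year.

    hindcasts, observations: one list of values per start date, indexed by
    lead time. Subtracting the returned bias from new forecasts is the
    simplest, lead-time-aware correction step.
    """
    n_lead = len(hindcasts[0])
    bias = []
    for lead in range(n_lead):
        errs = [h[lead] - o[lead] for h, o in zip(hindcasts, observations)]
        bias.append(sum(errs) / len(errs))
    return bias

# Hypothetical 3-year temperature hindcasts from two start dates vs. obs:
hc = [[15.2, 15.6, 16.1], [15.4, 15.9, 16.5]]
ob = [[15.0, 15.1, 15.2], [15.2, 15.4, 15.5]]
print([round(b, 2) for b in lead_time_bias(hc, ob)])
```

A bias that grows with lead year, as in this toy example, is exactly the drift behaviour that makes seasonal re-calibration methods insufficient for decadal forecasts.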

  18. Integrating Wind Profiling Radars and Radiosonde Observations with Model Point Data to Develop a Decision Support Tool to Assess Upper-Level Winds for Space Launch

    NASA Technical Reports Server (NTRS)

    Bauman, William H., III; Flinn, Clay

    2013-01-01

    On the day-of-launch, the 45th Weather Squadron (45 WS) Launch Weather Officers (LWOs) monitor the upper-level winds for their launch customers to include NASA's Launch Services Program and NASA's Ground Systems Development and Operations Program. They currently do not have the capability to display and overlay profiles of upper-level observations and numerical weather prediction model forecasts. The LWOs requested the Applied Meteorology Unit (AMU) develop a tool in the form of a graphical user interface (GUI) that will allow them to plot upper-level wind speed and direction observations from the Kennedy Space Center (KSC) 50 MHz tropospheric wind profiling radar, KSC Shuttle Landing Facility 915 MHz boundary layer wind profiling radar and Cape Canaveral Air Force Station (CCAFS) Automated Meteorological Processing System (AMPS) radiosondes, and then overlay forecast wind profiles from the model point data including the North American Mesoscale (NAM) model, Rapid Refresh (RAP) model and Global Forecast System (GFS) model to assess the performance of these models. The AMU developed an Excel-based tool that provides an objective method for the LWOs to compare the model-forecast upper-level winds to the KSC wind profiling radars and CCAFS AMPS observations to assess the model potential to accurately forecast changes in the upperlevel profile through the launch count. The AMU wrote Excel Visual Basic for Applications (VBA) scripts to automatically retrieve model point data for CCAFS (XMR) from the Iowa State University Archive Data Server (http://mtarchive.qeol.iastate.edu) and the 50 MHz, 915 MHz and AMPS observations from the NASA/KSC Spaceport Weather Data Archive web site (http://trmm.ksc.nasa.gov). The AMU then developed code in Excel VBA to automatically ingest and format the observations and model point data in Excel to ready the data for generating Excel charts for the LWO's. 
The resulting charts allow the LWOs to independently initialize the three models' 0-hour forecasts against the observations to determine which model performs best, and then overlay the model forecasts on time-matched observations during the launch countdown to further assess the model performance and forecasts. This paper will demonstrate the integration of observed and predicted atmospheric conditions into a decision support tool and show how the GUI is implemented in operations.

  19. Real-time Mainshock Forecast by Statistical Discrimination of Foreshock Clusters

    NASA Astrophysics Data System (ADS)

    Nomura, S.; Ogata, Y.

    2016-12-01

Foreshock discrimination is one of the most effective approaches to short-term forecasting of large main shocks. Although many large earthquakes are accompanied by foreshocks, discriminating them from the enormous number of small earthquakes is difficult, and only a probabilistic evaluation based on their spatio-temporal features and magnitude evolution may be available. Logistic regression is the statistical learning method best suited to such binary pattern recognition problems, where estimates of the a-posteriori probability of class membership are required. Statistical learning methods can keep learning discriminating features from an updating catalog and give probabilistic recognition for forecasts in real time. We estimated a non-linear function of foreshock proportion using smooth spline bases and evaluated the possibility of foreshocks with the logit function. In this study, we classified foreshocks in the earthquake catalog of the Japan Meteorological Agency by single-link clustering and learned the spatial and temporal features of foreshocks by probability density ratio estimation. We use the epicentral locations, time spans and differences in magnitude for learning and forecasting. Magnitudes of main shocks are also predicted by our method by incorporating b-values. We discuss the spatial pattern of foreshocks from the classifier composed by our model. We also implement a back test on this catalog to validate the predictive performance of the model.
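The record above relies on logistic regression to turn cluster features into an a-posteriori foreshock probability. A minimal sketch of that idea, with invented synthetic features (time span and magnitude gap of a cluster) standing in for the JMA catalog features, might look like this:

```python
import numpy as np

# Hypothetical sketch: classify clusters as "foreshock-like" from two features
# (inter-event time span in days, magnitude gap to the largest event).
# The features and synthetic data are illustrative, not from the JMA catalog.
rng = np.random.default_rng(0)
n = 400
# assumption: foreshock clusters tend to have short time spans, small mag gaps
X_fore = np.column_stack([rng.exponential(1.0, n), rng.normal(0.5, 0.3, n)])
X_back = np.column_stack([rng.exponential(5.0, n), rng.normal(1.5, 0.5, n)])
X = np.vstack([X_fore, X_back])
y = np.concatenate([np.ones(n), np.zeros(n)])

# standardize features, add intercept column
X = (X - X.mean(0)) / X.std(0)
X = np.column_stack([np.ones(len(X)), X])

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# plain gradient ascent on the logistic log-likelihood
w = np.zeros(X.shape[1])
for _ in range(2000):
    p = sigmoid(X @ w)
    w += 0.1 * X.T @ (y - p) / len(y)

# a-posteriori probability that each cluster is a foreshock sequence
p_hat = sigmoid(X @ w)
acc = np.mean((p_hat > 0.5) == y)
print(round(acc, 2))
```

The paper additionally uses spline bases for non-linearity; the linear-feature version above only illustrates the binary logit machinery.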

  20. Excessive Heat Events and National Security: Building Resilience based on Early Warning Systems

    NASA Astrophysics Data System (ADS)

    Vintzileos, A.

    2017-12-01

Excessive heat events (EHE) affect the security of nations in multiple direct and indirect ways. EHE are the top cause of morbidity and mortality associated with any atmospheric extreme. Higher energy consumption used for cooling can lead to blackouts and social disorder. EHE affect the food supply chain, reducing crop yield and increasing the probability of food contamination during delivery and storage. Distribution of goods during EHE can be severely disrupted due to mechanical failure of transportation equipment. EHE during athletic events, e.g., marathons, may result in a high number of casualties. Finally, EHE may also affect military planning by, e.g., reducing hours of exercise and altering combat gear. Early warning systems for EHE allow for building resilience. In this paper we first define an EHE as at least two consecutive heat days, where a heat day is a day with a maximum heat index whose probability of occurrence exceeds a certain threshold. We then use retrospective forecasts performed with a multitude of operational models and show that it is feasible to forecast EHE at forecast leads of week 2 and week 3 over the contiguous United States. We finally introduce an improved definition of EHE based on an intensity index and investigate the forecast skill of the predictive system in the tropics and subtropics.
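The definition above (a heat day exceeds a heat-index threshold; an EHE is at least two consecutive heat days) is simple to operationalize. A sketch with an invented threshold and synthetic daily maxima:

```python
import numpy as np

# Minimal sketch of the paper's EHE definition: a heat day is a day whose
# maximum heat index exceeds a (climatological) threshold, and an EHE is at
# least two consecutive heat days. Threshold and data are illustrative.
heat_index = np.array([95, 101, 104, 99, 102, 103, 105, 96, 107, 98])
threshold = 100.0                      # e.g. a high climatological percentile

hot = heat_index > threshold           # boolean heat-day mask
events = []                            # (start_day, end_day) of each EHE
start = None
for i, h in enumerate(hot):
    if h and start is None:
        start = i
    elif not h and start is not None:
        if i - start >= 2:             # keep runs of >= 2 consecutive days
            events.append((start, i - 1))
        start = None
if start is not None and len(hot) - start >= 2:
    events.append((start, len(hot) - 1))

print(events)  # → [(1, 2), (4, 6)]
```

An operational version would use a percentile of the local heat-index climatology rather than a fixed number.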

  1. Toward the Probabilistic Forecasting of High-latitude GPS Phase Scintillation

    NASA Technical Reports Server (NTRS)

    Prikryl, P.; Jayachandran, P.T.; Mushini, S. C.; Richardson, I. G.

    2012-01-01

    The phase scintillation index was obtained from L1 GPS data collected with the Canadian High Arctic Ionospheric Network (CHAIN) during years of extended solar minimum 2008-2010. Phase scintillation occurs predominantly on the dayside in the cusp and in the nightside auroral oval. We set forth a probabilistic forecast method of phase scintillation in the cusp based on the arrival time of either solar wind corotating interaction regions (CIRs) or interplanetary coronal mass ejections (ICMEs). CIRs on the leading edge of high-speed streams (HSS) from coronal holes are known to cause recurrent geomagnetic and ionospheric disturbances that can be forecast one or several solar rotations in advance. Superposed epoch analysis of phase scintillation occurrence showed a sharp increase in scintillation occurrence just after the arrival of high-speed solar wind and a peak associated with weak to moderate CMEs during the solar minimum. Cumulative probability distribution functions for the phase scintillation occurrence in the cusp are obtained from statistical data for days before and after CIR and ICME arrivals. The probability curves are also specified for low and high (below and above median) values of various solar wind plasma parameters. The initial results are used to demonstrate a forecasting technique on two example periods of CIRs and ICMEs.

  2. A probabilistic approach of the Flash Flood Early Warning System (FF-EWS) in Catalonia based on radar ensemble generation

    NASA Astrophysics Data System (ADS)

    Velasco, David; Sempere-Torres, Daniel; Corral, Carles; Llort, Xavier; Velasco, Enrique

    2010-05-01

Early Warning Systems (EWS) are commonly identified as the most efficient tools for improving preparedness and risk management against heavy rain and flash floods (FF), with the objective of reducing economic losses and human casualties. In particular, flash floods affecting torrential Mediterranean catchments are a key element to be incorporated within operational EWSs. The characteristically high spatial and temporal variability of the storms requires high-resolution data and methods to monitor and forecast the evolution of rainfall and its hydrological impact in small and medium torrential basins. A first version of an operational FF-EWS has been implemented in Catalonia (NE Spain) under the name of the EHIMI system (Integrated Tool for Hydrometeorological Forecasting), with the support of the Catalan Water Agency (ACA) and the Meteorological Service of Catalonia (SMC). Flash flood warnings are issued based on radar-rainfall estimates. Rainfall estimation is performed on radar observations with high spatial and temporal resolution (1 km2 and 10 minutes) in order to adapt the warning scale to the 1-km grid of the EWS. The method is based on comparing observed accumulated rainfall against rainfall thresholds provided by the regional Intensity-Duration-Frequency (IDF) curves. The so-called "aggregated rainfall warning" at every river cell is obtained as the spatially averaged rainfall over its associated upstream draining area. Regarding the time aggregation of rainfall, the critical duration is taken to be an accumulation period similar to the concentration time of each catchment. The warning is issued once the forecasted rainfall accumulation exceeds the rainfall thresholds mentioned above, which are associated with a certain probability of occurrence. Finally, the hazard warning is provided and shown to the decision-maker in terms of exceeded return periods at every river cell, covering the whole area of Catalonia.
The present work adds a probabilistic component to the FF-EWS. As a first step, we have incorporated the uncertainty in rainfall estimates and forecasts based on an ensemble of equiprobable rainfall scenarios. The study has focused on a number of rainfall events, and the performance of the FF-EWS has been evaluated in terms of its ability to produce probabilistic hazard warnings for decision-making support.
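The warning logic described above (compare basin-aggregated accumulations against IDF thresholds and report the return period exceeded) can be sketched as follows; the IDF depths and rainfall values here are invented for illustration, not taken from the Catalan curves:

```python
import numpy as np

# Hedged sketch of EHIMI-style warning logic: compare area-averaged rainfall
# accumulations against duration-dependent IDF thresholds and report the
# largest return period exceeded. IDF values below are illustrative only.
# idf[duration_h][return_period_yr] = rainfall depth threshold (mm)
idf = {
    1: {2: 35.0, 10: 55.0, 50: 75.0},
    3: {2: 55.0, 10: 85.0, 50: 115.0},
}

def hazard_warning(accum_mm, duration_h):
    """Largest IDF return period (years) exceeded by the accumulation, or None."""
    exceeded = [T for T, depth in idf[duration_h].items() if accum_mm >= depth]
    return max(exceeded) if exceeded else None

# upstream-averaged 1-hour accumulation for one river cell (synthetic)
cell_rain = np.array([20.0, 60.0, 40.0])   # mm over each cell in the drainage
aggregated = cell_rain.mean()               # the "aggregated rainfall warning"
print(hazard_warning(aggregated, 1))        # → 2
```

The probabilistic extension in the paper would run this for each member of the rainfall ensemble and report the fraction of members exceeding each return period.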

  3. Forecasting deflation, intrusion and eruption at inflating volcanoes

    NASA Astrophysics Data System (ADS)

    Blake, Stephen; Cortés, Joaquín A.

    2018-01-01

    A principal goal of volcanology is to successfully forecast the start of volcanic eruptions. This paper introduces a general forecasting method, which relies on a stream of monitoring data and a statistical description of a given threshold criterion for an eruption to start. Specifically we investigate the timing of intrusive and eruptive events at inflating volcanoes. The gradual inflation of the ground surface is a well-known phenomenon at many volcanoes and is attributable to pressurised magma accumulating within a shallow chamber. Inflation usually culminates in a rapid deflation event caused by magma escaping from the chamber to produce a shallow intrusion and, in some cases, a volcanic eruption. We show that the ground elevation during 15 inflation periods at Krafla volcano, Iceland, increased with time towards a limiting value by following a decaying exponential with characteristic timescale τ. The available data for Krafla, Kilauea and Mauna Loa volcanoes show that the duration of inflation (t*) is approximately equal to τ. The distribution of t* / τ values follows a log-logistic distribution in which the central 60% of the data lie between 0.99
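The forecasting idea above rests on fitting a decaying exponential with timescale τ to the inflation record and expecting deflation after roughly one timescale (t* ≈ τ). A sketch of the fit on synthetic data, with illustrative parameter values, using a simple grid search over τ:

```python
import numpy as np

# Sketch of the paper's model: ground elevation during inflation follows
# h(t) = h_inf - dh * exp(-t / tau), and the duration of inflation t* is
# roughly tau. We recover tau from synthetic noisy data by grid search;
# all parameter values are invented for illustration.
rng = np.random.default_rng(1)
tau_true, h_inf, dh = 120.0, 0.50, 0.40          # days, m, m
t = np.linspace(0, 200, 80)
h = h_inf - dh * np.exp(-t / tau_true) + rng.normal(0, 0.002, t.size)

def sse(tau):
    # for fixed tau, the best h_inf and dh follow from linear least squares
    A = np.column_stack([np.ones_like(t), -np.exp(-t / tau)])
    coef, *_ = np.linalg.lstsq(A, h, rcond=None)
    return ((A @ coef - h) ** 2).sum()

taus = np.linspace(30, 300, 271)                 # 1-day grid
tau_fit = taus[np.argmin([sse(x) for x in taus])]

# forecast: expect deflation/intrusion after roughly one timescale tau
print(tau_fit)
```

In practice the fit would be updated as new geodetic data arrive, narrowing the forecast window for the deflation event.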

  4. Post-processing ECMWF precipitation and temperature ensemble reforecasts for operational hydrologic forecasting at various spatial scales

    NASA Astrophysics Data System (ADS)

    Verkade, J. S.; Brown, J. D.; Reggiani, P.; Weerts, A. H.

    2013-09-01

    The ECMWF temperature and precipitation ensemble reforecasts are evaluated for biases in the mean, spread and forecast probabilities, and how these biases propagate to streamflow ensemble forecasts. The forcing ensembles are subsequently post-processed to reduce bias and increase skill, and to investigate whether this leads to improved streamflow ensemble forecasts. Multiple post-processing techniques are used: quantile-to-quantile transform, linear regression with an assumption of bivariate normality and logistic regression. Both the raw and post-processed ensembles are run through a hydrologic model of the river Rhine to create streamflow ensembles. The results are compared using multiple verification metrics and skill scores: relative mean error, Brier skill score and its decompositions, mean continuous ranked probability skill score and its decomposition, and the ROC score. Verification of the streamflow ensembles is performed at multiple spatial scales: relatively small headwater basins, large tributaries and the Rhine outlet at Lobith. The streamflow ensembles are verified against simulated streamflow, in order to isolate the effects of biases in the forcing ensembles and any improvements therein. The results indicate that the forcing ensembles contain significant biases, and that these cascade to the streamflow ensembles. Some of the bias in the forcing ensembles is unconditional in nature; this was resolved by a simple quantile-to-quantile transform. Improvements in conditional bias and skill of the forcing ensembles vary with forecast lead time, amount, and spatial scale, but are generally moderate. The translation to streamflow forecast skill is further muted, and several explanations are considered, including limitations in the modelling of the space-time covariability of the forcing ensembles and the presence of storages.
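Of the post-processing techniques listed, the quantile-to-quantile transform is the simplest: each forecast value is assigned its quantile in the forecast climatology and replaced with the same quantile of the observed climatology. A minimal sketch with a synthetic bias (the distributions and bias model are invented, not the ECMWF reforecasts):

```python
import numpy as np

# Minimal quantile-to-quantile (quantile mapping) sketch for removing
# unconditional bias. Climatologies below are synthetic for illustration.
rng = np.random.default_rng(2)
obs_clim = rng.gamma(2.0, 3.0, 5000)      # "observed" precipitation sample
fcst_clim = obs_clim * 1.3 + 1.0          # forecasts with a systematic bias

def q2q(x, fcst_ref, obs_ref):
    """Map value(s) x from the forecast distribution onto the observed one."""
    q = np.searchsorted(np.sort(fcst_ref), x) / len(fcst_ref)
    return np.quantile(obs_ref, np.clip(q, 0.0, 1.0))

raw = np.array([5.0, 10.0, 20.0])
corrected = q2q(raw, fcst_clim, obs_clim)
print(np.round(corrected, 1))
```

Because the synthetic bias here is linear, the corrected values should fall near (raw − 1)/1.3; the transform itself makes no such linearity assumption, which is why it handles unconditional bias well but leaves conditional bias untouched, as the abstract notes.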

  5. Short-term earthquake forecasting based on an epidemic clustering model

    NASA Astrophysics Data System (ADS)

    Console, Rodolfo; Murru, Maura; Falcone, Giuseppe

    2016-04-01

The application of rigorous statistical tools, with the aim of verifying any prediction method, requires a univocal definition of the hypothesis, or the model, characterizing the anomaly or precursor concerned, so that it can be objectively recognized in any circumstance and by any observer. This is mandatory in order to move beyond the old-fashioned approach consisting only of retrospective anecdotal studies of past cases. A rigorous definition of an earthquake forecasting hypothesis should lead to the objective identification of particular sub-volumes (usually named alarm volumes) of the total time-space volume within which the probability of occurrence of strong earthquakes is higher than usual. Testing such a hypothesis requires the observation of a sufficient number of past cases upon which a statistical analysis is possible. This analysis should be aimed at determining the rate at which the precursor has been followed (success rate) or not followed (false alarm rate) by the target seismic event, or the rate at which a target event has been preceded (alarm rate) or not preceded (failure rate) by the precursor. The binary table obtained from this kind of analysis leads to the definition of the parameters of the model that achieve the maximum number of successes and the minimum number of false alarms for a specific class of precursors. The mathematical tools suitable for this purpose include the Probability Gain and the R-Score, as well as popular plots such as the Molchan error diagram and the ROC diagram. Another tool for evaluating the validity of a forecasting method is the likelihood ratio (also named performance factor) of occurrence and non-occurrence of seismic events under different hypotheses.
Whatever method is chosen for building up a new hypothesis, usually based on retrospective data, the final assessment of its validity should be carried out by a test on a new and independent set of observations. The implementation of this step can be problematic for seismicity characterized by long-term recurrence. However, separating the database collected in the past into two sections (one on which the best fit of the parameters is carried out, and the other on which the hypothesis is tested) can be a viable solution, known as retrospective-forward testing. In this study we show examples of the application of the above-mentioned concepts to the analysis of the Italian catalog of instrumental seismicity, making use of an epidemic algorithm developed to model short-term clustering features. This model, for which a precursory anomaly is just the occurrence of seismic activity, does not need the retrospective categorization of earthquakes into foreshocks, mainshocks and aftershocks. It was introduced more than 15 years ago and has been tested so far in a number of real cases. It is now being run by several seismological centers around the world in forward real-time mode for testing purposes.
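The binary-table bookkeeping described above reduces to a few ratios. A sketch with illustrative counts (not from the Italian catalog), using one common convention for the R-score and probability gain:

```python
# Sketch of contingency-table skill measures for a precursor-based forecast.
# Counts are illustrative: a = alarms followed by a target event,
# b = false alarms, c = missed events, d = correct non-alarms.
a, b, c, d = 18, 7, 5, 170

hit_rate = a / (a + c)            # fraction of events preceded by an alarm
false_alarm_rate = b / (a + b)    # fraction of alarms not followed by an event
# one common R-score variant: hits minus false alarms, each normalized
r_score = a / (a + c) - b / (b + d)
# probability gain: event rate inside alarms vs the unconditional event rate
p_gain = (a / (a + b)) / ((a + c) / (a + b + c + d))

print(round(hit_rate, 3), round(false_alarm_rate, 3),
      round(r_score, 3), round(p_gain, 2))
```

The Molchan diagram plots the miss rate against the fraction of time-space occupied by alarms as the model parameters vary; each parameter choice contributes one point computed exactly as above.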

  6. Bulk electric system reliability evaluation incorporating wind power and demand side management

    NASA Astrophysics Data System (ADS)

    Huang, Dange

Electric power systems are experiencing dramatic changes with respect to structure, operation and regulation, and are facing increasing pressure due to environmental and societal constraints. Bulk electric system reliability is an important consideration in power system planning, design and operation, particularly in the new competitive environment. A wide range of methods has been developed to perform bulk electric system reliability evaluation. Theoretically, sequential Monte Carlo simulation can include all aspects and contingencies in a power system and can be used to produce an informative set of reliability indices. Owing to the growth of computing power, it has become a practical and viable technique for large-system reliability assessment and is used in the studies described in this thesis. The well-being approach used in this research provides the opportunity to integrate an accepted deterministic criterion into a probabilistic framework. This research work includes the investigation of important factors that impact bulk electric system adequacy evaluation and security-constrained adequacy assessment using the well-being analysis framework. Load forecast uncertainty is an important consideration in an electrical power system. This research includes load forecast uncertainty considerations in bulk electric system reliability assessment, and the effects on system, load point and well-being indices and on reliability index probability distributions are examined. There has been increasing worldwide interest in the utilization of wind power as a renewable energy source over the last two decades due to enhanced public awareness of the environment. Increasing penetration of wind power has significant impacts on power system reliability, and security analyses become more uncertain due to the unpredictable nature of wind power.
The effects of wind power additions in generating and bulk electric system reliability assessment considering site wind speed correlations and the interactive effects of wind power and load forecast uncertainty on system reliability are examined. The concept of the security cost associated with operating in the marginal state in the well-being framework is incorporated in the economic analyses associated with system expansion planning including wind power and load forecast uncertainty. Overall reliability cost/worth analyses including security cost concepts are applied to select an optimal wind power injection strategy in a bulk electric system. The effects of the various demand side management measures on system reliability are illustrated using the system, load point, and well-being indices, and the reliability index probability distributions. The reliability effects of demand side management procedures in a bulk electric system including wind power and load forecast uncertainty considerations are also investigated. The system reliability effects due to specific demand side management programs are quantified and examined in terms of their reliability benefits.
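The sequential Monte Carlo approach described in this record can be sketched in miniature with a two-state (up/down) unit model: sample each unit's availability hour by hour and count hours in which capacity falls short of load. The unit data, transition rates and flat load below are invented, far simpler than a real bulk-system study:

```python
import numpy as np

# Toy sequential Monte Carlo sketch of generation adequacy. Each unit is a
# two-state Markov model sampled hourly; loss-of-load probability (LOLP) is
# the fraction of hours in which available capacity falls short of load.
rng = np.random.default_rng(3)
cap = np.array([100.0, 100.0, 60.0])   # unit capacities (MW), illustrative
lam = 0.001                            # failure rate per hour (up -> down)
mu = 0.02                              # repair rate per hour (down -> up)
load = 180.0                           # constant demand (MW), illustrative
hours = 200_000

up = np.ones(3, dtype=bool)
short = 0
for _ in range(hours):
    r = rng.random(3)
    # up units fail with prob lam; down units are repaired with prob mu
    up = np.where(up, r > lam, r < mu)
    if cap[up].sum() < load:
        short += 1

lolp = short / hours
print(lolp)
```

With these rates each unit is available about 95% of the time, and a shortfall occurs whenever either 100 MW unit is down, so the simulated LOLP should land near 1 − 0.952² ≈ 0.09. A thesis-scale study would add chronological load curves, wind power series and the well-being state classification on top of this core loop.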

  7. Uncertainty forecasts improve weather-related decisions and attenuate the effects of forecast error.

    PubMed

    Joslyn, Susan L; LeClerc, Jared E

    2012-03-01

    Although uncertainty is inherent in weather forecasts, explicit numeric uncertainty estimates are rarely included in public forecasts for fear that they will be misunderstood. Of particular concern are situations in which precautionary action is required at low probabilities, often the case with severe events. At present, a categorical weather warning system is used. The work reported here tested the relative benefits of several forecast formats, comparing decisions made with and without uncertainty forecasts. In three experiments, participants assumed the role of a manager of a road maintenance company in charge of deciding whether to pay to salt the roads and avoid a potential penalty associated with icy conditions. Participants used overnight low temperature forecasts accompanied in some conditions by uncertainty estimates and in others by decision advice comparable to categorical warnings. Results suggested that uncertainty information improved decision quality overall and increased trust in the forecast. Participants with uncertainty forecasts took appropriate precautionary action and withheld unnecessary action more often than did participants using deterministic forecasts. When error in the forecast increased, participants with conventional forecasts were reluctant to act. However, this effect was attenuated by uncertainty forecasts. Providing categorical decision advice alone did not improve decisions. However, combining decision advice with uncertainty estimates resulted in the best performance overall. The results reported here have important implications for the development of forecast formats to increase compliance with severe weather warnings as well as other domains in which one must act in the face of uncertainty. PsycINFO Database Record (c) 2012 APA, all rights reserved.
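The road-salting task described above is a classic cost-loss decision: with a numeric uncertainty estimate, the cost-minimizing rule is to act whenever the probability of the adverse event times the penalty exceeds the cost of action. A sketch with invented payoffs (the experiment's actual amounts are not given here):

```python
# Hedged sketch of the road-salting decision with an uncertainty forecast.
# The dollar amounts are illustrative, not the experiment's actual payoffs.
salt_cost = 250.0          # cost of treating the roads
penalty = 1000.0           # penalty if untreated roads ice over

def decide(p_freeze):
    """Return True (salt) when the expected penalty exceeds the salting cost."""
    return p_freeze * penalty > salt_cost

for p in (0.10, 0.30, 0.60):
    print(p, decide(p))
```

The break-even probability is salt_cost / penalty = 0.25, which is exactly the kind of low-probability threshold the abstract argues categorical warnings communicate poorly.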

  8. Analysis and numerical simulation of an aircraft icing episode near Adolfo Suárez Madrid-Barajas International Airport

    NASA Astrophysics Data System (ADS)

    Bolgiani, Pedro; Fernández-González, Sergio; Martin, María Luisa; Valero, Francisco; Merino, Andrés; García-Ortega, Eduardo; Sánchez, José Luis

    2018-02-01

Aircraft icing is one of the most dangerous weather phenomena for aviation safety. Therefore, avoiding areas with a high probability of icing episodes along arrival and departure routes to airports is strongly recommended. Although such icing is common, forecasting and observation are far from perfect. This paper presents an analysis of an aircraft icing and turbulence event involving a commercial flight near the Guadarrama Mountains, during the aircraft's approach to the airport. No reference to icing or turbulence was made in the pre-flight meteorological information provided to the pilot, highlighting the need for additional tools to predict such risks. For this reason, the icing episode is simulated by means of the Weather Research and Forecasting (WRF) model and analyzed using images from the Meteosat Second Generation (MSG) satellite, with the aim of providing tools for the detection of icing and turbulence in the airport vicinity. The WRF simulation shows alternating updrafts and downdrafts (> 2 m s-1) on the lee side of the mountain barrier. This is consistent with the moderate to strong turbulence experienced by the aircraft on its approach path to the airport and suggests clear air turbulence above the mountain wave cloud top. At the aircraft icing altitude, supercooled liquid water associated with orographic clouds and mountain waves is simulated. Daytime and nighttime MSG images corroborated the simulated mountain waves and associated supercooled liquid water. The results encourage the use of mesoscale models and MSG nowcasting information to minimize aviation risks associated with such meteorological phenomena.

  9. A Short-term ESPERTA-based Forecast Tool for Moderate-to-extreme Solar Proton Events

    NASA Astrophysics Data System (ADS)

    Laurenza, M.; Alberti, T.; Cliver, E. W.

    2018-04-01

    The ESPERTA (Empirical model for Solar Proton Event Real Time Alert) forecast tool has a Probability of Detection (POD) of 63% for all >10 MeV events with proton peak intensity ≥10 pfu (i.e., ≥S1 events, S1 referring to minor storms on the NOAA Solar Radiation Storms scale), from 1995 to 2014 with a false alarm rate (FAR) of 38% and a median (minimum) warning time (WT) of ∼4.8 (0.4) hr. The NOAA space weather scale includes four additional categories: moderate (S2), strong (S3), severe (S4), and extreme (S5). As S1 events have only minor impacts on HF radio propagation in the polar regions, the effective threshold for significant space radiation effects appears to be the S2 level (100 pfu), above which both biological and space operation impacts are observed along with increased effects on HF propagation in the polar regions. We modified the ESPERTA model to predict ≥S2 events and obtained a POD of 75% (41/55) and an FAR of 24% (13/54) for the 1995–2014 interval with a median (minimum) WT of ∼1.7 (0.2) hr based on predictions made at the time of the S1 threshold crossing. The improved performance of ESPERTA for ≥S2 events is a reflection of the big flare syndrome, which postulates that the measures of the various manifestations of eruptive solar flares increase as one considers increasingly larger events.
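The verification arithmetic quoted above follows directly from the counts the abstract reports for ≥S2 events (41 of 55 events detected; 13 of 54 alarms false):

```python
# POD and FAR from the paper's own >=S2 counts.
hits, events, false_alarms, alarms = 41, 55, 13, 54

pod = hits / events          # Probability of Detection
far = false_alarms / alarms  # False Alarm Rate (fraction of alarms that fail)
print(f"POD = {pod:.0%}, FAR = {far:.0%}")  # → POD = 75%, FAR = 24%
```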

  10. HAKOU v3: SWIMS Hurricane Inundation Fast Forecasting Tool for Hawaii

    DTIC Science & Technology

    2012-02-01

Coupled SWAN+ADCIRC models were driven with wind and pressure fields generated by the planetary boundary layer model TC96 (Thompson, E. F., and V. J. Cardone. 1996. Practical modeling of hurricane surface wind fields. J. Waterw. Port C-ASCE 122(4): 195-205).

  11. Ensemble flare forecasting: using numerical weather prediction techniques to improve space weather operations

    NASA Astrophysics Data System (ADS)

    Murray, S.; Guerra, J. A.

    2017-12-01

One essential component of operational space weather forecasting is the prediction of solar flares. Early flare forecasting work focused on statistical methods based on historical flaring rates, but more complex machine learning methods have been developed in recent years. A multitude of flare forecasting methods are now available; however, it is still unclear which of these methods performs best, and none is substantially better than climatological forecasts. Current operational space weather centres cannot rely on automated methods, and generally use statistical forecasts with some human intervention. Space weather researchers are increasingly looking towards methods used in terrestrial weather to improve current forecasting techniques. Ensemble forecasting has been used in numerical weather prediction for many years as a way to combine different predictions in order to obtain a more accurate result. It has proved useful in areas such as magnetospheric modelling and coronal mass ejection arrival analysis, but has not yet been implemented in operational flare forecasting. Here we construct ensemble forecasts for major solar flares by linearly combining the full-disk probabilistic forecasts from a group of operational forecasting methods (ASSA, ASAP, MAG4, MOSWOC, NOAA, and Solar Monitor). Forecasts from each method are weighted by a factor that accounts for the method's ability to predict previous events, and several performance metrics (both probabilistic and categorical) are considered. The results provide space weather forecasters with a set of parameters (combination weights, thresholds) that allow them to select the most appropriate values for constructing the 'best' ensemble forecast probability, according to the performance metric of their choice. In this way different forecasts can be made to fit different end-user needs.
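The linear combination described above reduces to a performance-weighted average of the member probabilities. A sketch using the method names from the abstract, with invented probabilities and skill weights (the paper's actual weighting scheme and scores are not reproduced here):

```python
import numpy as np

# Minimal sketch of a linear-combination flare-forecast ensemble: each
# method's full-disk flare probability is weighted by a past-performance
# score and the weights are normalized. Probabilities and skill scores
# below are invented for illustration.
methods = ["ASSA", "ASAP", "MAG4", "MOSWOC", "NOAA", "SolarMonitor"]
probs = np.array([0.20, 0.35, 0.10, 0.30, 0.25, 0.15])   # member probabilities
skill = np.array([0.55, 0.70, 0.40, 0.65, 0.60, 0.45])   # past-performance

weights = skill / skill.sum()
p_ensemble = float(weights @ probs)
print(round(p_ensemble, 3))
```

Choosing the skill metric that defines the weights (probabilistic vs categorical) is exactly the forecaster-facing knob the abstract describes.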

  12. Epidemic forecasting is messier than weather forecasting: The role of human behavior and internet data streams in epidemic forecast

    DOE PAGES

    Moran, Kelly Renee; Fairchild, Geoffrey; Generous, Nicholas; ...

    2016-11-14

Mathematical models, such as those that forecast the spread of epidemics or predict the weather, must overcome the challenges of integrating incomplete and inaccurate data in computer simulations, estimating the probability of multiple possible scenarios, and incorporating changes in human behavior and/or the pathogen, as well as environmental factors. In the past 3 decades, the weather forecasting community has made significant advances in data collection, assimilating heterogeneous data streams into models and communicating the uncertainty of their predictions to the general public. Epidemic modelers are struggling with these same issues in forecasting the spread of emerging diseases, such as Zika virus infection and Ebola virus disease. While weather models rely on physical systems, data from satellites, and weather stations, epidemic models rely on human interactions, multiple data sources such as clinical surveillance and Internet data, and environmental or biological factors that can change the pathogen dynamics. We describe some of the similarities and differences between these 2 fields and how the epidemic modeling community is rising to the challenges posed by forecasting to help anticipate and guide the mitigation of epidemics. Here, we conclude that some of the fundamental differences between these 2 fields, such as human behavior, make disease forecasting more challenging than weather forecasting.

  13. Epidemic forecasting is messier than weather forecasting: The role of human behavior and internet data streams in epidemic forecast

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Moran, Kelly Renee; Fairchild, Geoffrey; Generous, Nicholas

Mathematical models, such as those that forecast the spread of epidemics or predict the weather, must overcome the challenges of integrating incomplete and inaccurate data in computer simulations, estimating the probability of multiple possible scenarios, and incorporating changes in human behavior and/or the pathogen, as well as environmental factors. In the past 3 decades, the weather forecasting community has made significant advances in data collection, assimilating heterogeneous data streams into models and communicating the uncertainty of their predictions to the general public. Epidemic modelers are struggling with these same issues in forecasting the spread of emerging diseases, such as Zika virus infection and Ebola virus disease. While weather models rely on physical systems, data from satellites, and weather stations, epidemic models rely on human interactions, multiple data sources such as clinical surveillance and Internet data, and environmental or biological factors that can change the pathogen dynamics. We describe some of the similarities and differences between these 2 fields and how the epidemic modeling community is rising to the challenges posed by forecasting to help anticipate and guide the mitigation of epidemics. Here, we conclude that some of the fundamental differences between these 2 fields, such as human behavior, make disease forecasting more challenging than weather forecasting.

  14. On the Dominant Factor Controlling Seasonal Hydrological Forecast Skill in China

    DOE PAGES

    Zhang, Xuejun; Tang, Qiuhong; Leng, Guoyong; ...

    2017-11-20

Initial conditions (ICs) and climate forecasts (CFs) are the two primary sources of seasonal hydrological forecast skill. However, their relative contribution to predictive skill remains unclear in China. In this study, we investigate the relative roles of ICs and CFs in cumulative runoff (CR) and soil moisture (SM) forecasts using 31-year (1980–2010) ensemble streamflow prediction (ESP) and reverse-ESP (revESP) simulations with the Variable Infiltration Capacity (VIC) hydrologic model. The results show that the relative importance of ICs and CFs largely depends on climate regimes. The influence of ICs is stronger in a dry or wet-to-dry climate regime that covers the northern and western interior regions during the late fall to early summer. In particular, ICs may dominate the forecast skill for up to three months or even six months during late fall and winter months, probably due to the low precipitation values and variability in the dry period. In contrast, CFs become more important for most of southern China and during summer months. The impact of ICs on SM forecasts tends to cover larger domains than on CR forecasts. These findings will greatly benefit future work that targets efforts toward improving current forecast skill for particular regions and forecast periods.

  15. On the Dominant Factor Controlling Seasonal Hydrological Forecast Skill in China

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Xuejun; Tang, Qiuhong; Leng, Guoyong

Initial conditions (ICs) and climate forecasts (CFs) are the two primary sources of seasonal hydrological forecast skill. However, their relative contribution to predictive skill remains unclear in China. In this study, we investigate the relative roles of ICs and CFs in cumulative runoff (CR) and soil moisture (SM) forecasts using 31-year (1980–2010) ensemble streamflow prediction (ESP) and reverse-ESP (revESP) simulations with the Variable Infiltration Capacity (VIC) hydrologic model. The results show that the relative importance of ICs and CFs largely depends on climate regimes. The influence of ICs is stronger in a dry or wet-to-dry climate regime that covers the northern and western interior regions during the late fall to early summer. In particular, ICs may dominate the forecast skill for up to three months or even six months during late fall and winter months, probably due to the low precipitation values and variability in the dry period. In contrast, CFs become more important for most of southern China and during summer months. The impact of ICs on SM forecasts tends to cover larger domains than on CR forecasts. These findings will greatly benefit future work that targets efforts toward improving current forecast skill for particular regions and forecast periods.

  16. Epidemic Forecasting is Messier Than Weather Forecasting: The Role of Human Behavior and Internet Data Streams in Epidemic Forecast

    PubMed Central

    Moran, Kelly R.; Fairchild, Geoffrey; Generous, Nicholas; Hickmann, Kyle; Osthus, Dave; Priedhorsky, Reid; Hyman, James; Del Valle, Sara Y.

    2016-01-01

    Mathematical models, such as those that forecast the spread of epidemics or predict the weather, must overcome the challenges of integrating incomplete and inaccurate data into computer simulations, estimating the probability of multiple possible scenarios, and incorporating changes in human behavior, the pathogen, and environmental factors. In the past 3 decades, the weather forecasting community has made significant advances in data collection, in assimilating heterogeneous data streams into models, and in communicating the uncertainty of its predictions to the general public. Epidemic modelers are struggling with these same issues in forecasting the spread of emerging diseases, such as Zika virus infection and Ebola virus disease. While weather models rely on physical systems, data from satellites, and weather stations, epidemic models rely on human interactions, multiple data sources such as clinical surveillance and Internet data, and environmental or biological factors that can change the pathogen dynamics. We describe some of the similarities and differences between these 2 fields and how the epidemic modeling community is rising to the challenges posed by forecasting to help anticipate and guide the mitigation of epidemics. We conclude that some of the fundamental differences between these 2 fields, such as human behavior, make disease forecasting more challenging than weather forecasting. PMID:28830111

  17. Short-term ensemble streamflow forecasting using operationally-produced single-valued streamflow forecasts - A Hydrologic Model Output Statistics (HMOS) approach

    NASA Astrophysics Data System (ADS)

    Regonda, Satish Kumar; Seo, Dong-Jun; Lawrence, Bill; Brown, James D.; Demargne, Julie

    2013-08-01

    We present a statistical procedure for generating short-term ensemble streamflow forecasts from single-valued, or deterministic, streamflow forecasts produced operationally by the U.S. National Weather Service (NWS) River Forecast Centers (RFCs). The resulting ensemble streamflow forecast provides an estimate of the predictive uncertainty associated with the single-valued forecast to support risk-based decision making by the forecasters and by the users of the forecast products, such as emergency managers. Forced by single-valued quantitative precipitation and temperature forecasts (QPF, QTF), the single-valued streamflow forecasts are produced at a 6-h time step nominally out to 5 days into the future. The single-valued streamflow forecasts reflect various run-time modifications, or "manual data assimilation", applied by the human forecasters in an attempt to reduce error from various sources in the end-to-end forecast process. The proposed procedure generates ensemble traces of streamflow from a parsimonious approximation of the conditional multivariate probability distribution of future streamflow given the single-valued streamflow forecast, QPF, and the most recent streamflow observation. For parameter estimation and evaluation, we used a multiyear archive of the single-valued river stage forecasts produced operationally by the NWS Arkansas-Red River Basin River Forecast Center (ABRFC) in Tulsa, Oklahoma. As a by-product of parameter estimation, the procedure provides a categorical assessment of the effective lead time of the operational hydrologic forecasts for different QPF and forecast flow conditions. To evaluate the procedure, we carried out hindcasting experiments in dependent and cross-validation modes. The results indicate that the short-term streamflow ensemble hindcasts generated from the procedure are generally reliable within the effective lead time of the single-valued forecasts and capture the skill of the single-valued forecasts well. For smaller basins, however, the effective lead time is significantly reduced by short basin memory and reduced skill in the single-valued QPF.
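The core of such an HMOS-style generator is sampling ensemble traces from a conditional distribution of future flow given the single-valued forecast. The toy sketch below shows only that sampling step; the coefficients a, b and spread sigma are hypothetical stand-ins for parameters regressed from a hindcast archive, and the operational procedure additionally works in a normal-transformed space and conditions on QPF and the latest observation:

```python
import random

def hmos_ensemble(single_valued, a, b, sigma, n_members=50, rng=random):
    """Draw ensemble members around a single-valued streamflow forecast
    from a fitted conditional normal:
        Q_future | forecast  ~  N(a + b * forecast, sigma^2)
    a, b, sigma are assumed to come from regression on past forecast/
    observation pairs (hypothetical values here)."""
    mu = a + b * single_valued
    return [mu + rng.gauss(0.0, sigma) for _ in range(n_members)]
```

With sigma estimated per lead time, the spread of the members grows with lead time, which is how the ensemble expresses the shrinking effective lead time of the underlying deterministic forecast.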

  18. Flood monitoring for ungauged rivers: the power of combining space-based monitoring and global forecasting models

    NASA Astrophysics Data System (ADS)

    Revilla-Romero, Beatriz; Ntegeka, Victor; Raynaud, Damien; Thielen, Jutta

    2013-04-01

    Flood warning systems typically rely on forecasts from national meteorological services and in-situ observations from hydrological gauging stations. This capacity is not equally developed in flood-prone developing countries, where low-cost satellite monitoring systems and global flood forecasting systems can be an alternative source of information for national flood authorities. The Global Flood Awareness System (GloFAS) has been developed jointly by the European Centre for Medium-Range Weather Forecasts (ECMWF) and the Joint Research Centre (JRC), and has been running quasi-operationally since June 2011. The system couples state-of-the-art weather forecasts with a hydrological model driven at continental scale, providing downstream countries with information on upstream river conditions as well as continental and global overviews. In its test phase, the system provides probabilities for large transnational river flooding at the global scale up to 30 days in advance. It showed its real-life potential for the first time during the floods in Southeast Asia in 2011, and more recently during the floods in Australia (March 2012), India (Assam, September-October 2012) and Chad (August-October 2012). The Joint Research Centre is working on further research and development, rigorous testing and adaptation of the system to create an operational tool for decision makers, including national and regional water authorities, water resource managers, hydropower companies, civil protection and first-line responders, and international humanitarian aid organizations. Currently, efforts are being made to link GloFAS to the Global Flood Detection System (GFDS), a space-based river gauging and flood monitoring system using passive microwave remote sensing, developed in collaboration between the JRC and the Dartmouth Flood Observatory. GFDS provides flood alerts based on daily water-surface change measurements from space; alerts are shown on a world map, with detailed reports for individual gauging sites. A comparison of discharge estimates from GFDS and GloFAS with observations for representative climatic zones is presented. Both systems have demonstrated strong potential in forecasting and detecting recent catastrophic floods, and the usefulness of their combined information at the global scale for decision makers at different levels is discussed. Combining space-based monitoring and global forecasting models is an innovative approach with significant benefits for international river commissions as well as international aid organisations. This is in line with the objectives of the Hyogo and Post-2015 Frameworks, which aim at the development of systems involving trans-boundary collaboration, space-based earth observation, flood forecasting and early warning.

  19. Storm Prediction Center Day 3-8 Fire Weather Forecast Issued on May 27,

    Science.gov Websites

    Note: Through September 29, 2015, the SPC will issue Experimental Probabilistic …

  20. NREL and IBM Improve Solar Forecasting with Big Data | Energy Systems

    Science.gov Websites

    … forecasting model using deep-machine-learning technology. The multi-scale, multi-model tool, named Watt-sun, … the first standard suite of metrics for this purpose. Validating Watt-sun at multiple sites across the …

  1. A Trajectory Forecast Model as an Event Response Tool: Tracking an Anhydrous Ammonia Spill in Tampa Bay

    NASA Astrophysics Data System (ADS)

    Havens, H.; Luther, M. E.; Meyers, S. D.

    2008-12-01

    Response time is critical following a hazardous spill in a marine environment, and rapid assessment of circulation patterns can mitigate the damage. Tampa Bay Physical Oceanographic Real-Time System (TB-PORTS) data are used to drive a numerical circulation model of the bay for the purposes of hazardous material spill response, monitoring of human health risks, and environmental protection and management. The model is capable of rapidly producing forecast simulations that, in the event of a human health or ecosystem threat, can alert authorities to areas in Tampa Bay with a high probability of being affected by the material. Responders to an anhydrous ammonia spill in Tampa Bay in November 2007 used the numerical model of circulation in the estuary to predict where the spill was likely to be transported. The model quickly generated a week-long simulation predicting how winds and currents might move the spill around the bay. The physical mechanisms transporting ammonium alternated from tidally driven for the initial two days following the spill to a more classical two-layered circulation for the remainder of the simulation. Velocity profiles of Tampa Bay reveal a strong outward-flowing current present at the time of the simulation, which acted as a significant transport mechanism for ammonium within the bay. Probability distributions, calculated from the predicted model trajectories, guided sampling in the days after the spill, resulting in the detection of a toxic Pseudo-nitzschia bloom likely initiated by the anhydrous ammonia spill. The prediction system at present is only accessible to scientists in the Ocean Monitoring and Prediction Lab (OMPL) at the University of South Florida; the forecast simulations are compiled into an animation that is provided to end users at their request. In the future, decision makers will have access to an online component of the coastal prediction system that can be used to manage response and mitigation efforts in order to reduce the risk from disasters such as hazardous material spills or ship groundings.

  2. Operational tools to help stakeholders to protect and alert municipalities facing uncertainties and changes in karst flash floods

    NASA Astrophysics Data System (ADS)

    Borrell Estupina, V.; Raynaud, F.; Bourgeois, N.; Kong-A-Siou, L.; Collet, L.; Haziza, E.; Servat, E.

    2015-06-01

    Flash floods are often responsible for many deaths and extensive material damage. In Mediterranean karst aquifers, the complexity of the connections between surface water and groundwater, as well as non-stationary weather patterns, makes it difficult to understand basin behaviour and thus to warn and protect people. Furthermore, given recent changes in land use and extreme rainfall events, knowledge of past floods is no longer sufficient to manage flood risk; the worst realistic flood that could occur should be considered. Physics- and process-based hydrological models are considered among the best ways to forecast floods under diverse conditions. However, they rarely match the stakeholders' needs: forecasting services, municipalities, and civil security have difficulty running and interpreting data-hungry models in real time, above all when data are uncertain or non-existent. To address these social and technical difficulties and help stakeholders, this study develops two operational tools derived from such models. The tools aim at supporting real-time decisions given the little, changing, and uncertain information available: (i) a hydrological graphical tool (abacus) to estimate flood peak discharge from the karst's past state and the forecasted but uncertain intense rainfall; (ii) a GIS-based method (MARE) to estimate potential flooded pathways and areas, accounting for runoff and karst contributions and considering land-use changes. Outputs of these tools are then compared with past and recent floods and with municipalities' observations, and the impacts of uncertainties and changes on planning decisions are discussed. The use of these tools on the 2014 events demonstrated their reliability and their interest for stakeholders. This study was carried out on French Mediterranean basins, in close collaboration with the Flood Forecasting Services (SPC Med-Ouest, SCHAPI, municipalities).

  3. Applications of the gambling score in evaluating earthquake predictions and forecasts

    NASA Astrophysics Data System (ADS)

    Zhuang, Jiancang; Zechar, Jeremy D.; Jiang, Changsheng; Console, Rodolfo; Murru, Maura; Falcone, Giuseppe

    2010-05-01

    This study presents a new method, the gambling score, for scoring the performance of earthquake forecasts or predictions. Unlike most other scoring procedures, which require a regular forecasting scheme and treat each earthquake equally regardless of magnitude, this method compensates for the risk the forecaster has taken. Starting with a certain number of reputation points, a forecaster who makes a prediction or forecast is assumed to have bet some of those points. The reference model, which plays the role of the house, determines by a fair rule how many reputation points the forecaster gains on success, and takes away the points bet on failure. The method is also extended to the continuous case of point-process models, where the reputation points bet by the forecaster become a continuous mass over the space-time-magnitude range of interest. For discrete predictions, we apply the method to evaluate the performance of Shebalin's predictions made with the Reverse Tracing of Precursors (RTP) algorithm and of the predictions from the Annual Consultation Meeting on Earthquake Tendency held by the China Earthquake Administration. For the continuous case, we use it to compare probability forecasts of seismicity in the Abruzzo region before and after the L'Aquila earthquake based on the ETAS model and the PPE model.
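For binary (yes/no) predictions, the scoring rule described above can be made concrete. In the sketch below the payout on a successful bet of r points is r(1-p)/p, where p is the reference model's probability of the event; this makes the bet fair because the expected gain under the reference model is exactly zero. This is an illustrative reading of the rule, not the authors' code:

```python
def gambling_score(bets, outcomes, ref_probs):
    """Cumulative reputation change for a series of yes/no predictions.

    bets[i]      -- reputation points the forecaster wagers on event i occurring
    outcomes[i]  -- True if event i actually occurred
    ref_probs[i] -- reference model's probability that event i occurs
    """
    total = 0.0
    for r, hit, p in zip(bets, outcomes, ref_probs):
        if hit:
            # fair payout: expected gain under the reference model is zero,
            # since p * r*(1 - p)/p - (1 - p) * r == 0
            total += r * (1.0 - p) / p
        else:
            total -= r
    return total
```

A forecaster betting against a low reference probability is rewarded heavily when right, which is the risk compensation the abstract emphasizes.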

  4. Development of an Advanced Stimulation / Production Predictive Simulator for Enhanced Geothermal Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pritchett, John W.

    2015-04-15

    There are several well-known obstacles to the successful deployment of EGS projects on a commercial scale. EGS projects are expected to be deeper, on average, than conventional "natural" geothermal reservoirs, and drilling costs are already a formidable barrier to conventional geothermal projects. Unlike conventional resources (which frequently announce their presence with natural manifestations such as geysers, hot springs and fumaroles), EGS prospects are likely to appear fairly undistinguished from the earth's surface. And the probable necessity of fabricating a subterranean fluid circulation network to mine heat from the rock (instead of simply relying on natural, pre-existing permeable fractures) adds a significant degree of uncertainty to the prospects for success. Accordingly, the basic motivation for the work presented herein was to develop a new set of tools more suitable for this purpose. Several years ago, the Department of Energy's Geothermal Technologies Office recognized this need and funded a cost-shared grant to our company (then SAIC, now Leidos) to partner with Geowatt AG of Zurich, Switzerland and undertake the development of a new reservoir simulator more suitable for EGS forecasting than the existing tools. That project has now been completed and a new numerical geothermal reservoir simulator has been developed. It is named "HeatEx" (for "Heat Extraction") and is almost completely new, although its methodology owes a great deal to previous geothermal software development efforts, including Geowatt's "HEX-S" code, the STAR and SPFRAC simulators developed at SAIC/Leidos, the MINC approach originally developed at LBNL, and tracer analysis software originally formulated at INEL. Furthermore, the development effort was led by engineers with many years of experience in using reservoir simulation software to make meaningful forecasts for real geothermal projects, not just software designers. It is hoped that, as a result, HeatEx will prove useful during the early stages of the development of EGS technology. The basic objective was to design a tool that could use field data likely to become available during the early phases of an EGS project (that is, during initial reconnaissance and fracture stimulation operations) to guide forecasts of the longer-term behavior of the system during production and heat mining.

  5. Relationships between stratospheric clear air turbulence and synoptic meteorological parameters over the western United States between 12-20 km altitude

    NASA Technical Reports Server (NTRS)

    Scoggins, J. R.; Clark, T. L.; Possiel, N. C.

    1975-01-01

    Procedures for forecasting clear air turbulence in the stratosphere over the western United States from rawinsonde data are described and results are presented. Approaches taken to relate meteorological parameters to regions of turbulence and nonturbulence encountered by the XB-70 during 46 flights at altitudes between 12-20 km include empirical probabilities, discriminant function analysis, and mountain-wave theory. Results from these techniques were combined into a procedure for forecasting regions of clear air turbulence with an accuracy of 70-80 percent. A computer program was developed to provide an objective forecast directly from the rawinsonde sounding data.

  6. Urban flood early warning systems: approaches to hydrometeorological forecasting and communicating risk

    NASA Astrophysics Data System (ADS)

    Cranston, Michael; Speight, Linda; Maxey, Richard; Tavendale, Amy; Buchanan, Peter

    2015-04-01

    One of the main challenges for the flood forecasting community remains the provision of reliable early warnings of surface (or pluvial) flooding. The Scottish Flood Forecasting Service has been developing approaches for forecasting the risk of surface water flooding, capitalising on the latest developments in quantitative precipitation forecasting from the Met Office. A probabilistic Heavy Rainfall Alert decision support tool helps operational forecasters assess the likelihood of surface water flooding against regional rainfall depth-duration estimates from MOGREPS-UK linked to historical short-duration flooding in Scotland. The surface water flood risk is communicated through the daily Flood Guidance Statement to emergency responders. A more recent development is an innovative risk-based hydrometeorological approach that links 24-hour ensemble rainfall forecasts through a hydrological model (Grid-to-Grid) to a library of impact assessments (Speight et al., 2015). The early warning tool - FEWS Glasgow - presents the risk of flooding to people, property and transport across a 1 km grid over the city of Glasgow with a lead time of 24 hours. The risk was communicated in a bespoke surface water flood forecast product designed around emergency responder requirements and trialled during the 2014 Commonwealth Games in Glasgow. The development of new approaches to surface water flood forecasting is leading to improved methods of communicating risk and better early-warning performance, with a reduction in false alarm rates for summer flood guidance in 2014 (67%) compared to 2013 (81%), although verification of instances of surface water flooding remains difficult. However, the introduction of more demanding hydrometeorological capabilities, with associated greater levels of uncertainty, does increase the demand on operational flood forecasting skills and resources.
Speight, L., Cole, S.J., Moore, R.J., Pierce, C., Wright, B., Golding, B., Cranston, M., Tavendale, A., Ghimire, S., and Dhondia, J. (2015) Developing surface water flood forecasting capabilities in Scotland: an operational pilot for the 2014 Commonwealth Games in Glasgow. Journal of Flood Risk Management, In Press.

  7. Wildland fire probabilities estimated from weather model-deduced monthly mean fire danger indices

    Treesearch

    Haiganoush K. Preisler; Shyh-Chin Chen; Francis Fujioka; John W. Benoit; Anthony L. Westerling

    2008-01-01

    The National Fire Danger Rating System indices deduced from a regional simulation weather model were used to estimate probabilities and numbers of large fire events on monthly and 1-degree grid scales. The weather model simulations and forecasts are ongoing experimental products from the Experimental Climate Prediction Center at the Scripps Institution of Oceanography...

  8. Skill in Precipitation Forecasting in the National Weather Service.

    NASA Astrophysics Data System (ADS)

    Charba, Jerome P.; Klein, William H.

    1980-12-01

    All known long-term records of forecasting performance for different types of precipitation forecasts in the National Weather Service were examined for relative skill and secular trends in skill. The largest upward trends were achieved by local probability of precipitation (PoP) forecasts for the periods 24-36 h and 36-48 h after 0000 and 1200 GMT. Over the last 13 years, the skill of these forecasts has improved at an average rate of 7.2% per 10-year interval. Over the same period, improvement has been smaller in local PoP skill in the 12-24 h range (2.0% per 10 years) and in the accuracy of "Yes/No" forecasts of measurable precipitation. The overall trend in accuracy of centralized quantitative precipitation forecasts of 0.5 in and 1.0 in has been slightly upward at the 0-24 h range and strongly upward at the 24-48 h range. Most of the improvement in these forecasts has been achieved from the early 1970s to the present. Strong upward accuracy trends in all types of precipitation forecasts within the past eight years are attributed primarily to improvements in numerical and statistical centralized guidance forecasts. The skill and accuracy of both measurable and quantitative precipitation forecasts are 35-55% greater during the cool season than during the warm season. Also, the secular rate of improvement of the cool-season precipitation forecasts is 50-110% greater than that of the warm season. This seasonal difference in performance reflects the relative difficulty of forecasting the predominantly stratiform precipitation of the cool season versus the convective precipitation of the warm season.

  9. A new forecast presentation tool for offshore contractors

    NASA Astrophysics Data System (ADS)

    Jørgensen, M.

    2009-09-01

    Contractors working offshore are often very sensitive to both sea and weather conditions, and it is essential that they have easy access to reliable information on coming conditions to enable planning of when to start or shut down offshore operations and avoid loss of life and materials. The Danish Meteorological Institute (DMI) recently developed, in cooperation with business partners in the field, a new application to accommodate that need. The "Marine Forecast Service" is a browser-based forecast presentation tool. It provides an interface giving the user easy and quick access to all relevant meteorological and oceanographic forecasts and observations for a given area of interest. Each customer gains access to the application via a standard login/password procedure. Once logged in, the user can inspect animated forecast maps of parameters such as wind, gust, wave height, swell and current. Supplementing the general maps, the user can look at forecast graphs for each of the locations where the user is running operations. These forecast graphs can be overlaid with the user's own in-situ observations, if such exist, and the data from the graphs can be exported as data files that the customer can use in his own applications as desired. As part of the application, a forecaster's view on the current and near-future weather situation is presented to the user as well, adding further value to the information presented through maps and graphs. Other features include animated radar and satellite images, and the application provides the possibility of a "second opinion" through traditional weather charts from another recognized provider of weather forecasts. The presentation will provide more detailed insights into the contents of the application as well as some of the experiences with the product.

  10. Implementation and Research on the Operational Use of the Mesoscale Prediction Model COAMPS in Poland

    DTIC Science & Technology

    2007-09-30

    COAMPS model. Bogumil Jakubiak, University of Warsaw, participated in the EGU General Assembly, Vienna, Austria, 15-20 April 2007, giving one oral and two ... conditional forecast (background) error probability density function using an ensemble of the model forecast to generate background error statistics ... COAMPS system on ICM machines at Warsaw University for the purpose of providing operational support to the general public using the ICM meteorological

  11. Developing a Model for Predicting Snowpack Parameters Affecting Vehicle Mobility,

    DTIC Science & Technology

    1983-05-01

    Service River Forecast System - snow accumulation and ablation model. NOAA Technical Memorandum NWS HYDRO-17, National Weather Service, Silver Spring ... Forecast System. This model indexes each physical process that occurs in the snowpack to the air temperature. Although this results in a signifi... pressure, P probability, Q energy, Q specific humidity, R precipitation, s snowfall depth, T air temperature, t time, U wind speed, V water vapor

  12. Statistical Analysis of Model Data for Operational Space Launch Weather Support at Kennedy Space Center and Cape Canaveral Air Force Station

    NASA Technical Reports Server (NTRS)

    Bauman, William H., III

    2010-01-01

    The 12-km resolution North American Mesoscale (NAM) model (MesoNAM) is used by the 45th Weather Squadron (45 WS) Launch Weather Officers at Kennedy Space Center (KSC) and Cape Canaveral Air Force Station (CCAFS) to support space launch weather operations. The 45 WS tasked the Applied Meteorology Unit (AMU) to conduct an objective, statistics-based analysis of MesoNAM output compared to wind tower mesonet observations and then develop an operational tool to display the results. The National Centers for Environmental Prediction began running the current version of the MesoNAM in mid-August 2006. The period of record for the dataset was 1 September 2006 - 31 January 2010. The AMU evaluated MesoNAM hourly forecasts from 0 to 84 hours based on model initialization times of 00, 06, 12 and 18 UTC. The MesoNAM forecast winds, temperature and dew point were compared to the observed values of these parameters from the sensors in the KSC/CCAFS wind tower network. The data sets were stratified by model initialization time, month and onshore/offshore flow for each wind tower. Statistics computed included bias (mean difference), standard deviation of the bias, root mean square error (RMSE) and a hypothesis test for bias = 0. Twelve wind towers located in close proximity to key launch complexes were used for the statistical analysis, with the sensors on the towers positioned at varying heights including 6 ft, 30 ft, 54 ft, 60 ft, 90 ft, 162 ft, 204 ft and 230 ft, depending on the launch vehicle and associated weather launch commit criteria being evaluated. These twelve wind towers support activities for the Space Shuttle (launch and landing), Delta IV, Atlas V and Falcon 9 launch vehicles. For all twelve towers, the results indicate a diurnal signal in the bias of temperature (T) and a weaker but discernible diurnal signal in the bias of dewpoint temperature (T(sub d)) in the MesoNAM forecasts. Also, the standard deviation of the bias and the RMSE of T, T(sub d), wind speed and wind direction indicated that the model error increased with the forecast period for all four parameters. Hypothesis testing uses statistics to determine the probability that a given hypothesis is true; the goal here was to determine whether the model bias of any of the parameters assessed throughout the model forecast period was statistically zero. For this dataset, if the test statistic for a data point fell between -1.96 and 1.96, the bias at that point was statistically zero and the model forecast for that point was considered to have no significant error. A graphical user interface (GUI) was developed so the 45 WS would have an operational tool that is easy to navigate among the multiple stratifications of information, including tower locations, month, model initialization times, sensor heights and onshore/offshore flow. The AMU developed the GUI using HyperText Markup Language (HTML) so the tool could be used in most popular web browsers on computers running different operating systems such as Microsoft Windows and Linux.
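The bias test described above can be sketched as a one-sample z-test on the forecast-minus-observation differences, with the usual 5% critical value of ±1.96 (an illustrative reading; the AMU's exact formulation may differ):

```python
import math

def bias_z_test(forecast, observed, z_crit=1.96):
    """One-sample z-test on the mean forecast-minus-observation difference.
    Returns (bias, z, effectively_zero): the bias is treated as statistically
    zero at the 5% level when -z_crit <= z <= z_crit."""
    diffs = [f - o for f, o in zip(forecast, observed)]
    n = len(diffs)
    bias = sum(diffs) / n
    # sample variance of the differences (n - 1 denominator)
    var = sum((d - bias) ** 2 for d in diffs) / (n - 1)
    se = math.sqrt(var / n)  # standard error of the mean bias
    z = bias / se
    return bias, z, -z_crit <= z <= z_crit
```

Applied per tower, per stratification and per forecast hour, this yields exactly the kind of "bias effectively zero" map the GUI displays.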

  13. Surface drift prediction in the Adriatic Sea using hyper-ensemble statistics on atmospheric, ocean and wave models: Uncertainties and probability distribution areas

    USGS Publications Warehouse

    Rixen, M.; Ferreira-Coelho, E.; Signell, R.

    2008-01-01

    Despite numerous and regular improvements in underlying models, surface drift prediction in the ocean remains a challenging task because of our yet limited understanding of all processes involved. Hence, deterministic approaches to the problem are often limited by empirical assumptions on the underlying physics. Multi-model hyper-ensemble forecasts, which exploit the power of an optimal local combination of available information including ocean, atmospheric and wave models, may show superior forecasting skill when compared to individual models because they allow for local correction and/or bias removal. In this work, we explore in greater detail the potential and limitations of the hyper-ensemble method in the Adriatic Sea, using a comprehensive surface drifter database. The performance of the hyper-ensembles and the individual models is discussed by analyzing associated uncertainties and probability distribution maps. Results suggest that the stochastic method may reduce position errors significantly for 12 to 72-h forecasts and hence compete with pure deterministic approaches. © 2007 NATO Undersea Research Centre (NURC).
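The "optimal local combination" at the heart of the hyper-ensemble idea can be illustrated with a plain least-squares weighting of model predictions over a recent training window, then applying the learned weights to fresh predictions. This is a sketch with hypothetical data and function names; the published method includes more elaborate variants:

```python
import numpy as np

def hyper_ensemble_weights(model_preds, observed):
    """Least-squares weights (with an intercept for bias removal) combining
    several model predictions of, e.g., drift velocity over a recent window.
    model_preds: (n_times, n_models) array; observed: (n_times,) array."""
    X = np.column_stack([np.ones(len(observed)), model_preds])  # bias term
    w, *_ = np.linalg.lstsq(X, observed, rcond=None)
    return w

def hyper_ensemble_forecast(w, new_preds):
    """Combine fresh model predictions with the learned local weights."""
    return w[0] + np.asarray(new_preds) @ w[1:]
```

Refitting the weights locally in space and time is what provides the local correction and bias removal the abstract credits for the skill gain.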

  14. [Research on engine remaining useful life prediction based on oil spectrum analysis and particle filtering].

    PubMed

    Sun, Lei; Jia, Yun-xian; Cai, Li-ying; Lin, Guo-yu; Zhao, Jin-song

    2013-09-01

    Spectrometric oil analysis (SOA) is an important technique for machine state monitoring, fault diagnosis and prognosis, and SOA-based remaining useful life (RUL) prediction has the advantage of identifying the optimal maintenance strategy for a machine system. Because of the complexity of machine systems, the health-state degradation process cannot be simply characterized by a linear model, while particle filtering (PF) possesses obvious advantages over traditional Kalman filtering in dealing with nonlinear and non-Gaussian systems. The PF approach was therefore applied to state forecasting by SOA, and an RUL prediction technique based on SOA and the PF algorithm is proposed. In the prediction model, the prior probability distribution is obtained from the estimate of the system's posterior probability, and a multi-step-ahead prediction model based on the PF algorithm is established. Finally, practical SOA data from an engine were analyzed and forecasted by the above method, and the forecasting result was compared with that of the traditional Kalman filtering method. The result fully shows the superiority and effectiveness of the proposed method.
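The multi-step-ahead prediction half of such a scheme can be illustrated with the propagation step of a particle filter: each particle carries a hypothesis of the current degradation state (e.g. a wear-metal concentration from SOA) and is stepped forward until it crosses a failure threshold. The sketch below omits the measurement-update/resampling step and assumes a hypothetical random-walk-with-drift degradation model:

```python
import random

def pf_rul(particles, threshold, drift, noise_sd, max_steps=500, rng=random):
    """Monte-Carlo remaining-useful-life estimate from a particle set.
    Each particle's degradation state is propagated by a random walk with
    drift until it crosses the failure threshold; the RUL estimate is the
    mean number of steps to failure across particles."""
    lives = []
    for state in particles:
        steps = 0
        while state < threshold and steps < max_steps:
            state += drift + rng.gauss(0.0, noise_sd)  # one-step propagation
            steps += 1
        lives.append(steps)
    return sum(lives) / len(lives)
```

In a full filter, the particle set would first be reweighted and resampled against each new SOA measurement, so the spread of `lives` also gives an RUL uncertainty band.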

  15. Forecasting the probability of future groundwater levels declining below specified low thresholds in the conterminous U.S.

    USGS Publications Warehouse

    Dudley, Robert W.; Hodgkins, Glenn A.; Dickinson, Jesse

    2017-01-01

    We present a logistic regression approach for forecasting the probability of future groundwater levels declining below, or remaining below, specified groundwater-level thresholds. We tested our approach on 102 groundwater wells in different climatic regions and aquifers of the United States that are part of the U.S. Geological Survey Groundwater Climate Response Network. We evaluated the importance of current groundwater levels, precipitation, streamflow, seasonal variability, the Palmer Drought Severity Index, and atmosphere/ocean indices for developing the logistic regression equations. Several diagnostics of model fit were used to evaluate the regression equations, including testing of autocorrelation of residuals, goodness-of-fit metrics, and bootstrap validation testing. The probabilistic predictions were most successful at wells with high persistence (low month-to-month variability) in their groundwater records and at wells where the groundwater level remained below the defined low threshold for sustained periods (generally three months or longer). The model fit was weakest at wells with strong seasonal variability in levels and with shorter-duration low-threshold events. We identified challenges in deriving probabilistic-forecasting models and possible approaches for addressing those challenges.
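The core of such a logistic regression forecast can be sketched in a few lines. The single predictor (current standardized groundwater level), the training values and the learning settings below are hypothetical, chosen only to show the mechanics:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def fit_logistic(xs, ys, lr=0.5, epochs=2000):
    """Fit P(y=1 | x) = sigmoid(a*x + b) by batch gradient descent."""
    a = b = 0.0
    n = len(xs)
    for _ in range(epochs):
        ga = gb = 0.0
        for x, y in zip(xs, ys):
            err = sigmoid(a * x + b) - y
            ga += err * x
            gb += err
        a -= lr * ga / n
        b -= lr * gb / n
    return a, b

# Hypothetical training data: x is the current standardized groundwater
# level; y = 1 if the level was below the low threshold three months later.
levels = [-1.8, -1.2, -0.9, -0.4, 0.1, 0.6, 1.1, 1.7]
below  = [1, 1, 1, 1, 0, 0, 0, 0]
a, b = fit_logistic(levels, below)
prob_low = sigmoid(a * (-1.0) + b)   # forecast for a well currently at -1.0
```

A negative coefficient on the current level, as recovered here, matches the paper's finding that current groundwater level is a key predictor of future low-threshold conditions.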

  16. Incorporating probabilistic seasonal climate forecasts into river management using a risk-based framework

    USGS Publications Warehouse

    Sojda, Richard S.; Towler, Erin; Roberts, Mike; Rajagopalan, Balaji

    2013-01-01

    [1] Despite the influence of hydroclimate on river ecosystems, most efforts to date have focused on using climate information to predict streamflow for water supply. However, as water demands intensify and river systems are increasingly stressed, research is needed to explicitly integrate climate into streamflow forecasts that are relevant to river ecosystem management. To this end, we present a five step risk-based framework: (1) define risk tolerance, (2) develop a streamflow forecast model, (3) generate climate forecast ensembles, (4) estimate streamflow ensembles and associated risk, and (5) manage for climate risk. The framework is successfully demonstrated for an unregulated watershed in southwest Montana, where the combination of recent drought and water withdrawals has made it challenging to maintain flows needed for healthy fisheries. We put forth a generalized linear modeling (GLM) approach to develop a suite of tools that skillfully model decision-relevant low flow characteristics in terms of climate predictors. Probabilistic precipitation forecasts are used in conjunction with the GLMs, resulting in season-ahead prediction ensembles that provide the full risk profile. These tools are embedded in an end-to-end risk management framework that directly supports proactive fish conservation efforts. Results show that the use of forecasts can be beneficial to planning, especially in wet years, but historical precipitation forecasts are quite conservative (i.e., not very “sharp”). Synthetic forecasts show that a modest “sharpening” can strongly impact risk and improve skill. We emphasize that use in management depends on defining relevant environmental flows and risk tolerance, requiring local stakeholder involvement.
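Steps (4) and (5) of the framework reduce to counting ensemble members below the decision-relevant flow and comparing the resulting risk to the stakeholder-defined tolerance. The flows, threshold and tolerance below are illustrative only, not values from the Montana case study:

```python
# Hypothetical season-ahead ensemble of low-flow outcomes (cfs) from the
# GLM + precipitation-forecast chain, and an illustrative threshold.
ensemble  = [31.0, 28.5, 35.2, 26.9, 30.1, 33.4, 27.8, 29.6]
threshold = 30.0    # decision-relevant flow for healthy fisheries
tolerance = 0.25    # step (1): stakeholder-defined risk tolerance

risk = sum(q < threshold for q in ensemble) / len(ensemble)   # step (4)
act  = risk > tolerance                                       # step (5)
```

Here half the members fall below the threshold, so the estimated risk exceeds the tolerance and proactive conservation measures would be triggered.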

  17. Dynamic Forecasting Conditional Probability of Bombing Attacks Based on Time-Series and Intervention Analysis.

    PubMed

    Li, Shuying; Zhuang, Jun; Shen, Shifei

    2017-07-01

    In recent years, various types of terrorist attacks occurred, causing worldwide catastrophes. According to the Global Terrorism Database (GTD), among all attack tactics, bombing attacks happened most frequently, followed by armed assaults. In this article, a model for analyzing and forecasting the conditional probability of bombing attacks (CPBAs) based on time-series methods is developed. In addition, intervention analysis is used to analyze the sudden increase in the time-series process. The results show that the CPBA increased dramatically at the end of 2011. During that time, the CPBA increased by 16.0% in a two-month period to reach the peak value, but still stays 9.0% greater than the predicted level after the temporary effect gradually decays. By contrast, no significant fluctuation can be found in the conditional probability process of armed assault. It can be inferred that some social unrest, such as America's troop withdrawal from Afghanistan and Iraq, could have led to the increase of the CPBA in Afghanistan, Iraq, and Pakistan. The integrated time-series and intervention model is used to forecast the monthly CPBA in 2014 and through 2064. The average relative error compared with the real data in 2014 is 3.5%. The model is also applied to the total number of attacks recorded by the GTD between 2004 and 2014. © 2016 Society for Risk Analysis.
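The simplest form of intervention analysis, a step intervention estimated by least squares, can be sketched as follows. The monthly probability values and the shift date are synthetic, not GTD data:

```python
# Synthetic monthly conditional-probability series with a level shift
# at a known intervention date (values are not GTD data).
pre  = [0.30, 0.31, 0.29, 0.32, 0.30, 0.31]
post = [0.42, 0.44, 0.41, 0.43, 0.42, 0.44]
series = pre + post
step   = [0] * len(pre) + [1] * len(post)   # step-intervention indicator

# Least-squares fit of y_t = mu + omega * S_t; omega is the shift size.
n = len(series)
s_bar = sum(step) / n
y_bar = sum(series) / n
omega = (sum((s - s_bar) * (y - y_bar) for s, y in zip(step, series))
         / sum((s - s_bar) ** 2 for s in step))
mu = y_bar - omega * s_bar
```

In a full analysis the residuals would then be modeled with an ARMA process, and a decaying pulse term (rather than a pure step) would capture the temporary component of the shock described in the abstract.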

  18. Forecasting the response of Earth's surface to future climatic and land use changes: A review of methods and research needs

    DOE PAGES

    Pelletier, Jon D.; Murray, A. Brad; Pierce, Jennifer L.; ...

    2015-07-14

    In the future, Earth will be warmer, precipitation events will be more extreme, global mean sea level will rise, and many arid and semiarid regions will be drier. Human modifications of landscapes will also occur at an accelerated rate as developed areas increase in size and population density. We now have gridded global forecasts, being continually improved, of the climatic and land use changes (C&LUC) that are likely to occur in the coming decades. However, besides a few exceptions, consensus forecasts do not exist for how these C&LUC will likely impact Earth-surface processes and hazards. In some cases, we havemore » the tools to forecast the geomorphic responses to likely future C&LUC. Fully exploiting these models and utilizing these tools will require close collaboration among Earth-surface scientists and Earth-system modelers. This paper assesses the state-of-the-art tools and data that are being used or could be used to forecast changes in the state of Earth's surface as a result of likely future C&LUC. We also propose strategies for filling key knowledge gaps, emphasizing where additional basic research and/or collaboration across disciplines are necessary. The main body of the paper addresses cross-cutting issues, including the importance of nonlinear/threshold-dominated interactions among topography, vegetation, and sediment transport, as well as the importance of alternate stable states and extreme, rare events for understanding and forecasting Earth-surface response to C&LUC. Five supplements delve into different scales or process zones (global-scale assessments and fluvial, aeolian, glacial/periglacial, and coastal process zones) in detail.« less

  19. Long‐term time‐dependent probabilities for the third Uniform California Earthquake Rupture Forecast (UCERF3)

    USGS Publications Warehouse

    Field, Edward; Biasi, Glenn P.; Bird, Peter; Dawson, Timothy E.; Felzer, Karen R.; Jackson, David A.; Johnson, Kaj M.; Jordan, Thomas H.; Madden, Christopher; Michael, Andrew J.; Milner, Kevin; Page, Morgan T.; Parsons, Thomas E.; Powers, Peter; Shaw, Bruce E.; Thatcher, Wayne R.; Weldon, Ray J.; Zeng, Yuehua

    2015-01-01

    The 2014 Working Group on California Earthquake Probabilities (WGCEP 2014) presents time-dependent earthquake probabilities for the third Uniform California Earthquake Rupture Forecast (UCERF3). Building on the UCERF3 time-independent model, published previously, renewal models are utilized to represent elastic-rebound-implied probabilities. A new methodology has been developed that solves applicability issues in the previous approach for un-segmented models. The new methodology also supports magnitude-dependent aperiodicity and accounts for the historic open interval on faults that lack a date-of-last-event constraint. Epistemic uncertainties are represented with a logic tree, producing 5,760 different forecasts. Results for a variety of evaluation metrics are presented, including logic-tree sensitivity analyses and comparisons to the previous model (UCERF2). For 30-year M≥6.7 probabilities, the most significant changes from UCERF2 are a threefold increase on the Calaveras fault and a threefold decrease on the San Jacinto fault. Such changes are due mostly to differences in the time-independent models (e.g., fault slip rates), with relaxation of segmentation and inclusion of multi-fault ruptures being particularly influential. In fact, some UCERF2 faults were simply too long to produce M 6.7 sized events given the segmentation assumptions in that study. Probability model differences are also influential, with the implied gains (relative to a Poisson model) being generally higher in UCERF3. Accounting for the historic open interval is one reason. Another is an effective 27% increase in the total elastic-rebound-model weight. The exact factors influencing differences between UCERF2 and UCERF3, as well as the relative importance of logic-tree branches, vary throughout the region, and depend on the evaluation metric of interest. For example, M≥6.7 probabilities may not be a good proxy for other hazard or loss measures. 
This sensitivity, coupled with the approximate nature of the model and known limitations, means the applicability of UCERF3 should be evaluated on a case-by-case basis.
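The elastic-rebound "probability gain" over a Poisson model can be illustrated with a renewal calculation. The sketch below substitutes a Weibull recurrence model for the BPT model actually used by WGCEP, and the recurrence interval, open interval and aperiodicity are hypothetical:

```python
import math

def poisson_prob(rate, window):
    """Time-independent probability of at least one event in `window` years."""
    return 1.0 - math.exp(-rate * window)

def renewal_cond_prob(t_open, window, mean_ri, aperiodicity):
    """P(event in (t, t+w] | open interval t), Weibull recurrence model."""
    k = aperiodicity ** -1.08                    # rough CV-to-shape mapping
    lam = mean_ri / math.gamma(1.0 + 1.0 / k)    # scale matching the mean
    surv = lambda t: math.exp(-((t / lam) ** k))
    return 1.0 - surv(t_open + window) / surv(t_open)

mean_ri = 200.0                                  # hypothetical mean recurrence (yr)
p_pois = poisson_prob(1.0 / mean_ri, 30.0)       # 30-yr Poisson probability
p_cond = renewal_cond_prob(180.0, 30.0, mean_ri, 0.5)  # 180 yr since last event
gain = p_cond / p_pois                           # elastic-rebound probability gain
```

With the fault most of the way through its mean cycle, the renewal probability exceeds the Poisson value (gain > 1), which is the behavior the abstract describes as "implied gains (relative to a Poisson model)".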

  20. Time-varying loss forecast for an earthquake scenario in Basel, Switzerland

    NASA Astrophysics Data System (ADS)

    Herrmann, Marcus; Zechar, Jeremy D.; Wiemer, Stefan

    2014-05-01

    When an unexpected earthquake occurs, people suddenly want advice on how to cope with the situation. The 2009 L'Aquila earthquake highlighted the significance of public communication and pushed the use of scientific methods to drive alternative risk mitigation strategies. For instance, van Stiphout et al. (2010) suggested a new approach for objective short-term evacuation decisions: probabilistic risk forecasting combined with cost-benefit analysis. In the present work, we apply this approach to an earthquake sequence that simulated a repeat of the 1356 Basel earthquake, one of the most damaging events in Central Europe. A recent development to benefit society in case of an earthquake is the probabilistic forecasting of aftershock occurrence, but seismic risk delivers a more direct expression of the socio-economic impact. To forecast seismic risk in the short term, we translate aftershock probabilities into time-varying seismic hazard and combine this with time-invariant loss estimation. Compared with van Stiphout et al. (2010), we use an advanced aftershock forecasting model and detailed settlement data, which allow spatial forecasts and settlement-specific decision-making. We quantify the risk forecast probabilistically in terms of human loss. For instance, one minute after the M6.6 mainshock, the probability for an individual to die within the next 24 hours is 41,000 times higher than the long-term average; but the absolute value remains a minor 0.04%. The final cost-benefit analysis adds value beyond a pure statistical approach: it provides objective statements that may justify evacuations. To deliver supportive information in a simple form, we propose a warning approach in terms of alarm levels. Our results do not justify evacuations prior to the M6.6 mainshock, but do in certain districts afterwards.
    The ability to forecast the short-term seismic risk at any time, and with sufficient data anywhere, is the first step of personal decision-making and raising risk awareness among the public. Reference: Van Stiphout, T., S. Wiemer, and W. Marzocchi (2010). 'Are short-term evacuations warranted? Case of the 2009 L'Aquila earthquake'. Geophysical Research Letters 37(6), pp. 1-5. http://onlinelibrary.wiley.com/doi/10.1029/2009GL042352/abstract
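The cost-benefit decision rule behind such work reduces to comparing the forecast fatality probability with a cost/loss ratio. The monetary figures and baseline risk below are hypothetical stand-ins, not the study's values; only the 41,000-fold gain comes from the abstract:

```python
def evacuation_warranted(p_fatality, cost_per_person, value_per_life):
    """Evacuate when the individual fatality probability over the decision
    window exceeds the cost/loss ratio (figures used below are hypothetical)."""
    return p_fatality > cost_per_person / value_per_life

baseline_risk = 1.0e-8                       # long-term 24-hour fatality risk
post_shock_risk = baseline_risk * 41_000     # the 41,000-fold gain quoted above
warranted_before = evacuation_warranted(baseline_risk, 100.0, 5.0e6)
warranted_after  = evacuation_warranted(post_shock_risk, 100.0, 5.0e6)
```

The point of the rule is exactly the asymmetry the abstract reports: the same framework that rejects evacuation under long-term risk can justify it once the short-term probability jumps by orders of magnitude.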

  1. Communicating likelihoods and probabilities in forecasts of volcanic eruptions

    NASA Astrophysics Data System (ADS)

    Doyle, Emma E. H.; McClure, John; Johnston, David M.; Paton, Douglas

    2014-02-01

    The issuing of forecasts and warnings of natural hazard events, such as volcanic eruptions, earthquake aftershock sequences and extreme weather, often involves the use of probabilistic terms, particularly when communicated by scientific advisory groups to key decision-makers, who can differ greatly in relative expertise and in their function in the decision-making process. Recipients may also differ in their perception of the relative importance of political and economic influences on interpretation. Consequently, the interpretation of these probabilistic terms can vary greatly due to the framing of the statements, and whether verbal or numerical terms are used. We present a review from the psychology literature on how the framing of information influences communication of these probability terms. It is also unclear how people rate their perception of an event's likelihood throughout a time frame when a forecast time window is stated. Previous research has identified that, when presented with a 10-year time window forecast, participants viewed the likelihood of an event occurring ‘today’ as less than that in year 10. Here we show that this skew in perception also occurs for short-term time windows (under one week) that are of most relevance for emergency warnings. In addition, unlike the long-time window statements, the use of the phrasing “within the next…” instead of “in the next…” does not mitigate this skew, nor do we observe significant differences between the perceived likelihoods of scientists and non-scientists. This finding suggests that effects occurring due to the shorter time window may be ‘masking’ any differences in perception due to wording or career background observed for long-time window forecasts.
These results have implications for scientific advice, warning forecasts, emergency management decision-making, and public information as any skew in perceived event likelihood towards the end of a forecast time window may result in an underestimate of the likelihood of an event occurring ‘today’ leading to potentially inappropriate action choices. We thus present some initial guidelines for communicating such eruption forecasts.

  2. Flood Risk Assessment and Forecasting for the Ganges-Brahmaputra-Meghna River Basins

    NASA Astrophysics Data System (ADS)

    Hopson, T. M.; Priya, S.; Young, W.; Avasthi, A.; Clayton, T. D.; Brakenridge, G. R.; Birkett, C. M.; Riddle, E. E.; Broman, D.; Boehnert, J.; Sampson, K. M.; Kettner, A.; Singh, D.

    2017-12-01

    During the 2017 South Asia monsoon, torrential rains and catastrophic floods affected more than 45 million people, including 16 million children, across the Ganges-Brahmaputra-Meghna (GBM) basins. The basin is recognized as one of the world's most disaster-prone regions, with severe floods occurring almost annually causing extreme loss of life and property. In light of this vulnerability, the World Bank and collaborators have contributed toward reducing future flood impacts through recent developments to improve operational preparedness for such events, as well as efforts in more general preparedness and resilience building through planning based on detailed risk assessments. With respect to improved event-specific flood preparedness through operational warnings, we discuss a new forecasting system that provides probability-based flood forecasts developed for more than 85 GBM locations. Forecasts are available online, along with near-real-time data maps of rainfall (predicted and actual) and river levels. The new system uses multiple data sets and multiple models to enhance forecasting skill, and provides improved forecasts up to 16 days in advance of the arrival of high waters. These longer lead times provide the opportunity to save both lives and livelihoods. With sufficient advance notice, for example, farmers can harvest a threatened rice crop or move vulnerable livestock to higher ground. Importantly, the forecasts not only predict future water levels but indicate the level of confidence in each forecast. Knowing whether the probability of a danger-level flood is 10 percent or 90 percent helps people to decide what, if any, action to take. With respect to efforts in general preparedness and resilience building, we also present a recent flood risk assessment, and how it provides, for the first time, a numbers-based view of the impacts of different size floods across the Ganges basin. 
The findings help identify priority areas for tackling flood risks (for example, relocating levees, improving flood warning systems, or boosting overall economic resilience). The assessment includes the locations and numbers of people at risk, as well as the locations and value of buildings, roads and railways, and crops at risk. An accompanying atlas includes easy-to-use risk maps and tables for the Ganges basins.

  3. Using Enabling Technologies to Facilitate the Comparison of Satellite Observations with the Model Forecasts for Hurricane Study

    NASA Astrophysics Data System (ADS)

    Li, P.; Knosp, B.; Hristova-Veleva, S. M.; Niamsuwan, N.; Johnson, M. P.; Shen, T. P. J.; Tanelli, S.; Turk, J.; Vu, Q. A.

    2014-12-01

    Due to their complexity and volume, satellite data are underutilized in today's hurricane research and operations. To better utilize these data, we developed the JPL Tropical Cyclone Information System (TCIS) - an interactive data portal providing fusion between near-real-time satellite observations and model forecasts to facilitate model evaluation and improvement. We have collected satellite observations and model forecasts in the Atlantic Basin and the East Pacific for the hurricane seasons since 2010 and supported the NASA airborne campaigns for hurricane study such as the Genesis and Rapid Intensification Processes (GRIP) in 2010 and the Hurricane and Severe Storm Sentinel (HS3) from 2012 to 2014. To enable direct inter-comparisons of the satellite observations and the model forecasts, the TCIS was integrated with the NASA Earth Observing System Simulator Suite (NEOS3) to produce synthetic observations (e.g. simulated passive microwave brightness temperatures) from a number of operational hurricane forecast models (HWRF and GFS). An automated process was developed to trigger NEOS3 simulations via web services given the location and time of satellite observations, monitor the progress of the NEOS3 simulations, and display the synthetic observations and ingest them into the TCIS database when they are done. In addition, three analysis tools were integrated into TCIS to provide statistical and structural analysis on both observed and synthetic data: the joint PDF analysis of brightness temperatures; ARCHER, for finding the storm center and storm organization; and the Wave Number Analysis tool, for storm asymmetry and morphology analysis. Interactive tools were built in the TCIS visualization system to allow the spatial and temporal selection of the datasets, the invocation of the tools with user-specified parameters, and the display and delivery of the results.
In this presentation, we will describe the key enabling technologies behind the design of the TCIS interactive data portal and analysis tools, including the spatial database technology for the representation and query of the level 2 satellite data, the automatic process flow using web services, the interactive user interface using the Google Earth API, and a common and expandable Python wrapper to invoke the analysis tools.

  4. Operational short-term Probabilistic Volcanic Hazard Assessment of tephra fallout: an example from the 1982-1984 unrest at Campi Flegrei

    NASA Astrophysics Data System (ADS)

    Sandri, Laura; Selva, Jacopo; Costa, Antonio; Macedonio, Giovanni; Marzocchi, Warner

    2014-05-01

    Probabilistic Volcanic Hazard Assessment (PVHA) represents the most complete scientific contribution for planning rational strategies aimed at mitigating the risk posed by volcanic activity at different time scales. The definition of the space-time window for PVHA is related to the kind of risk mitigation actions under consideration. Short intervals (days to weeks) are important for short-term risk mitigation actions like the evacuation of a volcanic area. During volcanic unrest episodes or eruptions, it is of primary importance to produce short-term tephra fallout forecasts and to update them frequently to account for the rapidly evolving situation. This information is crucial for crisis management, since tephra may heavily affect building stability, public health, transportation and evacuation routes (airports, trains, road traffic) and lifelines (electric power supply). In this study, we propose a methodology for short-term PVHA and its operational implementation, based on the model BET_EF, in which measures from the monitoring system are used to routinely update the forecasts of parameters related to the eruption dynamics, that is, the probability of eruption, of every possible vent position and of every possible eruption size. Then, considering all possible vent positions and eruptive sizes, tephra dispersal models are coupled with frequently updated meteorological forecasts. Finally, these results are merged through a Bayesian procedure, accounting for epistemic uncertainties at all the considered steps. As a case study, we retrospectively examine some stages of the volcanic unrest that took place at Campi Flegrei (CF) in 1982-1984. In particular, we aim at presenting a practical example of possible operational tephra fall PVHA on a daily basis, in the surroundings of CF, at different stages of the 1982-84 unrest. Tephra dispersal is simulated using the analytical HAZMAP code.
We consider three possible eruptive sizes (a low, a medium and a high eruption "scenario", respectively) and 700 possible vent positions within the CF Neapolitan Yellow Tuff caldera. The probabilities related to eruption dynamics, as estimated by BET_EF, are based on the setup of the code obtained specifically for CF during a six-year elicitation project, and on the actual monitoring parameters measured during the unrest and published in the literature. We take advantage here of two novel improvements: (i) a time function to describe how the probability of eruption evolves within the time window defined for the forecast, and (ii) the production of hazard curves and their confidence levels, a tool that allows a complete description of PVHA and its uncertainties. The general goal of this study is to show what pieces of scientific knowledge can be operationally transferred to decision makers, and how, and specifically how this could have been translated into practice during the 1982-84 Campi Flegrei crisis, if scientists had known then what we know today about this volcano.

  5. Scaling forecast models for wind turbulence and wind turbine power intermittency

    NASA Astrophysics Data System (ADS)

    Duran Medina, Olmo; Schmitt, Francois G.; Calif, Rudy

    2017-04-01

    The intermittency of wind turbine power remains an important issue for the massive development of this renewable energy. The energy peaks injected into the electric grid produce difficulties in energy distribution management. Hence, a correct forecast of the wind power in the short and middle term is needed, given the high unpredictability of the intermittency phenomenon. We consider a statistical approach through the analysis and characterization of stochastic fluctuations. The theoretical framework is the multifractal modeling of wind velocity fluctuations. Here, we consider data from three wind turbines, two of which use direct-drive technology. These turbines produce energy under real operating conditions and allow us to test our forecast models of power production at different time horizons. Two forecast models were developed, based on two physical properties observed in the wind and power time series: the scaling properties on the one hand and the intermittency of the wind power increments on the other. The first tool is related to the intermittency through a multifractal lognormal fit of the power fluctuations. The second tool is based on an analogy between the power's scaling properties and a fractional Brownian motion. Indeed, an inner long-term memory is found in both time series. Both models show encouraging results, since the correct tendency of the signal is respected over different time scales. These tools are first steps in the search for efficient forecasting approaches for grid adaptation in the face of wind energy fluctuations.

  6. Hydrologic ensembles based on convection-permitting precipitation nowcasts for flash flood warnings

    NASA Astrophysics Data System (ADS)

    Demargne, Julie; Javelle, Pierre; Organde, Didier; de Saint Aubin, Céline; Ramos, Maria-Helena

    2017-04-01

    In order to better anticipate flash flood events and provide timely warnings to communities at risk, the French national service in charge of flood forecasting (SCHAPI) is implementing a national flash flood warning system for small-to-medium ungauged basins. Based on a discharge-threshold flood warning method called AIGA (Javelle et al. 2014), the current version of the system runs a simplified hourly distributed hydrologic model with operational radar-gauge QPE grids from Météo-France at a 1-km2 resolution every 15 minutes. This produces real-time peak discharge estimates along the river network, which are subsequently compared to regionalized flood frequency estimates to provide warnings according to the AIGA-estimated return period of the ongoing event. To further extend the effective warning lead time while accounting for hydrometeorological uncertainties, the flash flood warning system is being enhanced to include Météo-France's AROME-NWC high-resolution precipitation nowcasts as time-lagged ensembles and multiple sets of hydrological regionalized parameters. The operational deterministic precipitation forecasts, from the nowcasting version of the AROME convection-permitting model (Auger et al. 2015), were provided at a 2.5-km resolution for a 6-hr forecast horizon for 9 significant rain events from September 2014 to June 2016. The time-lagged approach is a practical way of accounting for atmospheric forecast uncertainty when no extensive forecast archive is available for statistical modelling. The evaluation on 781 French basins showed significant improvements in terms of flash flood event detection and effective warning lead time, compared to warnings from the current AIGA setup (without any future precipitation). We also discuss how to effectively communicate verification information to help determine decision-relevant warning thresholds for flood magnitude and probability. Javelle, P., Demargne, J., Defrance, D., Arnaud, P., 2014. 
Evaluating flash flood warnings at ungauged locations using post-event surveys: a case study with the AIGA warning system. Hydrological Sciences Journal, doi: 10.1080/02626667.2014.923970 Auger, L., Dupont, O., Hagelin, S., Brousseau, P., Brovelli, P., 2015. AROME-NWC: a new nowcasting tool based on an operational mesoscale forecasting system. Quarterly Journal of the Royal Meteorological Society, 141: 1603-1611, doi:10.1002/qj.2463
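A time-lagged ensemble simply pools successive deterministic runs that are all valid at the same target time, and reads an exceedance probability off the pooled members. The issue times, rainfall amounts and threshold below are made up for illustration:

```python
# Pool successive deterministic nowcast runs that are all valid at the
# same target hour (issue times, rain amounts and threshold are invented).
runs = {                     # issue_hour -> {valid_hour: rain_mm}
    0: {6: 12.0, 7: 15.0},
    1: {6: 9.0,  7: 18.0},
    2: {6: 14.0, 7: 22.0},
}
target_hour = 7
members = [fc[target_hour] for fc in runs.values() if target_hour in fc]
p_exceed = sum(m >= 20.0 for m in members) / len(members)   # P(rain >= 20 mm)
```

Because no forecast archive is needed, this pooling gives a usable spread estimate "for free", which is the practical advantage the abstract highlights.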

  7. A comparison of ensemble post-processing approaches that preserve correlation structures

    NASA Astrophysics Data System (ADS)

    Schefzik, Roman; Van Schaeybroeck, Bert; Vannitsem, Stéphane

    2016-04-01

    Despite the fact that ensemble forecasts address the major sources of uncertainty, they exhibit biases and dispersion errors and are therefore known to improve with calibration or statistical post-processing. For instance, the ensemble model output statistics (EMOS) method, also known as the non-homogeneous regression approach (Gneiting et al., 2005), is known to strongly improve forecast skill. EMOS is based on fitting and adjusting a parametric probability density function (PDF). However, EMOS and other common post-processing approaches apply to a single weather quantity at a single location for a single look-ahead time. They are therefore unable to take into account spatial, inter-variable and temporal dependence structures. Recently, many research efforts have been invested in designing post-processing methods that resolve this drawback, and in verification methods that enable the detection of dependence structures. New verification methods are applied to two classes of post-processing methods, both generating physically coherent ensembles. The first class uses ensemble copula coupling (ECC), which starts from EMOS but adjusts the rank structure (Schefzik et al., 2013). The second class is a member-by-member post-processing (MBM) approach that maps each raw ensemble member to a corrected one (Van Schaeybroeck and Vannitsem, 2015). We compare variants of the EMOS-ECC and MBM classes and highlight a specific theoretical connection between them. All post-processing variants are applied in the context of the ensemble system of the European Centre for Medium-Range Weather Forecasts (ECMWF) and compared using multivariate verification tools including the energy score, the variogram score (Scheuerer and Hamill, 2015) and the band depth rank histogram (Thorarinsdottir et al., 2015). Gneiting, Raftery, Westveld, and Goldman, 2005: Calibrated probabilistic forecasting using ensemble model output statistics and minimum CRPS estimation. Mon. Wea. Rev., 133, 1098-1118. 
Scheuerer and Hamill, 2015: Variogram-based proper scoring rules for probabilistic forecasts of multivariate quantities. Mon. Wea. Rev., 143, 1321-1334. Schefzik, Thorarinsdottir, and Gneiting, 2013: Uncertainty quantification in complex simulation models using ensemble copula coupling. Statistical Science, 28, 616-640. Thorarinsdottir, Scheuerer, and Heinz, 2015: Assessing the calibration of high-dimensional ensemble forecasts using rank histograms. arXiv:1310.0236. Van Schaeybroeck and Vannitsem, 2015: Ensemble post-processing using member-by-member approaches: theoretical aspects. Q.J.R. Meteorol. Soc., 141, 807-818.
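The member-by-member (MBM) idea can be sketched as an affine map applied to each raw member, which is what lets it preserve the ensemble's rank structure (and hence its spatial and inter-variable dependence). The raw members and coefficients below are hypothetical rather than fitted values:

```python
def mbm_adjust(members, a, b, c):
    """Map each raw member x_i to a + b*xbar + c*(x_i - xbar).
    For c > 0 the ordering of members, and thus the rank structure,
    is preserved; c rescales the ensemble spread."""
    xbar = sum(members) / len(members)
    return [a + b * xbar + c * (x - xbar) for x in members]

raw = [2.0, 2.5, 3.0, 3.5, 4.0]                    # hypothetical raw members
calibrated = mbm_adjust(raw, a=0.3, b=1.1, c=1.4)  # hypothetical coefficients
```

In the papers cited, the coefficients are fitted by minimizing a score such as the CRPS over a training period; here they are simply asserted for illustration.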

  8. Forecasting of Water Consumptions Expenditure Using Holt-Winter’s and ARIMA

    NASA Astrophysics Data System (ADS)

    Razali, S. N. A. M.; Rusiman, M. S.; Zawawi, N. I.; Arbin, N.

    2018-04-01

    This study was carried out to forecast the water consumption expenditure of a Malaysian university, specifically University Tun Hussein Onn Malaysia (UTHM). The proposed Holt-Winters and Auto-Regressive Integrated Moving Average (ARIMA) models were applied to forecast the water consumption expenditure in Ringgit Malaysia from 2006 until 2014. The two models were compared using the performance measures Mean Absolute Percentage Error (MAPE) and Mean Absolute Deviation (MAD). The ARIMA model showed better forecast accuracy, with lower values of MAPE and MAD. The analysis showed that an ARIMA(2,1,4) model provides a reasonable forecasting tool for university campus water usage.
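The two performance measures used for the comparison are straightforward to compute. The monthly expenditure figures below are invented, not UTHM data:

```python
def mape(actual, forecast):
    """Mean Absolute Percentage Error, in percent."""
    return 100.0 * sum(abs((a - f) / a) for a, f in zip(actual, forecast)) / len(actual)

def mad(actual, forecast):
    """Mean Absolute Deviation, in the units of the series."""
    return sum(abs(a - f) for a, f in zip(actual, forecast)) / len(actual)

actual   = [120.0, 135.0, 150.0, 140.0]   # hypothetical monthly bills (RM)
forecast = [118.0, 140.0, 145.0, 142.0]
```

MAD keeps the units of the series (useful for budgeting in Ringgit), while MAPE is scale-free and so comparable across series of different magnitude.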

  9. Impact of Seasonal Forecasts on Agriculture

    NASA Astrophysics Data System (ADS)

    Aldor-Noiman, S. C.

    2014-12-01

    More extreme and volatile weather conditions are a threat to U.S. agricultural productivity today, as multiple environmental conditions during the growing season impact crop yields. That is why farmers' agronomic management decisions are dominated by consideration of near-term, medium-term and seasonal climate forecasts. The Climate Corporation aims to help farmers around the world protect and improve their farming operations by providing agronomic decision support tools that leverage forecasts on multiple timescales to deliver valuable insights directly to farmers. In this talk, we discuss the impact of accurate seasonal forecasts on major decisions growers face each season. We also discuss the assessment and evaluation of seasonal forecasts in the context of agricultural applications.

  10. Novel Modeling Tools for Propagating Climate Change Variability and Uncertainty into Hydrodynamic Forecasts

    EPA Science Inventory

    Understanding impacts of climate change on hydrodynamic processes and ecosystem response within the Great Lakes is an important and challenging task. Variability in future climate conditions, uncertainty in rainfall-runoff model forecasts, the potential for land use change, and t...

  11. Assessment of the Charging Policy in Energy Efficiency of the Enterprise

    NASA Astrophysics Data System (ADS)

    Shutov, E. A.; E Turukina, T.; Anisimov, T. S.

    2017-04-01

    Forecasting is currently one of the main problems for energy facilities with a power demand exceeding 670 kW. Under the rules of the retail electricity market, such customers also pay for deviations of actual energy consumption from the planned value. In line with the hierarchical structure of the electricity market, a guaranteeing supplier must respect the interests of distribution and generation companies, which require load leveling. For an industrial enterprise this is achievable only within the technological process, through energy-efficient processing chains with an adaptive control function and a forecasting tool. In this setting, the primary objective of forecasting is to reduce energy costs by exploiting the correlation of energy prices over 24 hours when forming the pumping-unit work schedule. A virtual model of a pumping unit with a variable-frequency drive is considered. The forecasting tool and the optimizer are integrated into a typical control circuit, and an economic assessment of the optimization method is presented.

  12. Evaluation of statistical models for forecast errors from the HBV model

    NASA Astrophysics Data System (ADS)

    Engeland, Kolbjørn; Renard, Benjamin; Steinsland, Ingelin; Kolberg, Sjur

    2010-04-01

    Three statistical models for the forecast errors of inflow into the Langvatn reservoir in northern Norway were constructed and tested according to the agreement between (i) the forecast distribution and the observations and (ii) the median values of the forecast distribution and the observations. For the first model, observed and forecasted inflows were transformed by the Box-Cox transformation before a first-order autoregressive model was constructed for the forecast errors, with parameters conditioned on weather classes. In the second model, the Normal Quantile Transformation (NQT) was applied to observed and forecasted inflows before a similar first-order autoregressive model was constructed for the forecast errors. For the third model, positive and negative errors were modeled separately; the errors were first NQT-transformed before the mean error values were conditioned on climate, forecasted inflow, and yesterday's error. Three criteria were applied to test the models: (a) the forecast distribution should be reliable; (b) the forecast intervals should be narrow; (c) the median values of the forecast distribution should be close to the observed values. Models 1 and 2 gave almost identical results. Their median values improved the forecast, with the Nash-Sutcliffe efficiency increasing from 0.77 for the original forecast to 0.87 for the corrected forecasts. Models 1 and 2 over-estimated the forecast intervals but gave the narrowest intervals; their main drawback was that their distributions were less reliable than Model 3's. For Model 3 the median values did not fit well, since the auto-correlation was not accounted for. Because Model 3 did not benefit from the variance reduction available through bias estimation and removal, it gave on average wider forecast intervals than the other two models, while at the same time slightly under-estimating the forecast intervals on average, probably because average measures were used to evaluate the fit.
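
    The core of Model 1 (Box-Cox transform, then a first-order autoregressive model for the forecast errors) can be sketched in a few lines. The inflow series and the Box-Cox lambda below are illustrative assumptions, not the Langvatn data, and the AR(1) coefficient is estimated from simple lag-1 sample moments rather than the paper's weather-class-conditioned fit:

```python
import math

# Hedged sketch of the Model 1 idea: Box-Cox transform the inflows, fit an
# AR(1) model to the forecast errors, and predict the next error. Data and
# lambda are illustrative.

def boxcox(x, lam=0.3):
    """Box-Cox transform; reduces to log for lam == 0."""
    return (x ** lam - 1.0) / lam if lam != 0 else math.log(x)

observed   = [12.0, 15.0, 11.0, 14.0, 13.0, 16.0]
forecasted = [10.0, 14.0, 12.0, 12.5, 12.0, 14.5]

errors = [boxcox(o) - boxcox(f) for o, f in zip(observed, forecasted)]

# AR(1) coefficient from lag-1 sample moments
mean_e = sum(errors) / len(errors)
num = sum((errors[t] - mean_e) * (errors[t - 1] - mean_e) for t in range(1, len(errors)))
den = sum((e - mean_e) ** 2 for e in errors)
phi = num / den

# Predicted error for the next forecast step, in transformed space; this is
# the correction that raised the Nash-Sutcliffe efficiency in the study.
next_error = mean_e + phi * (errors[-1] - mean_e)
print(f"phi={phi:.3f} predicted next error={next_error:.3f}")
```

    In the paper the corrected forecast is then back-transformed to flow units, and the residual spread around this AR(1) prediction supplies the forecast intervals.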

  13. Evaluation of WRF-based convection-permitting multi-physics ensemble forecasts over China for an extreme rainfall event on 21 July 2012 in Beijing

    NASA Astrophysics Data System (ADS)

    Zhu, Kefeng; Xue, Ming

    2016-11-01

    On 21 July 2012, an extreme rainfall event with a maximum 24-hour rainfall amount of 460 mm occurred in Beijing, China. Most operational models failed to predict such an extreme amount. In this study, a convection-permitting ensemble forecast system (CEFS), at 4-km grid spacing, covering the entire mainland of China, is applied to this extreme rainfall case. CEFS consists of 22 members and uses multiple physics parameterizations. For the event, the predicted maximum is 415 mm d-1 in the probability-matched ensemble mean. The predicted high-probability heavy rain region is located in southwest Beijing, as was observed. Ensemble-based verification scores are then investigated. For a small verification domain covering Beijing and its surrounding areas, the precipitation rank histogram of CEFS is much flatter than that of a reference global ensemble. CEFS has a lower (higher) Brier score and a higher resolution than the global ensemble for precipitation, indicating more reliable probabilistic forecasting by CEFS. Additionally, forecasts of different ensemble members are compared and discussed. Most of the extreme rainfall comes from convection in the warm sector east of an approaching cold front. A few members of CEFS successfully reproduce such precipitation, and orographic lift of highly moist low-level flows with a significantly southeasterly component is suggested to have played important roles in producing the initial convection. Comparisons between good and bad forecast members indicate a strong sensitivity of the extreme rainfall to the mesoscale environmental conditions and, to a lesser extent, the model physics.
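
    The probability-matched ensemble mean used to report the 415 mm d-1 maximum keeps the spatial pattern of the plain ensemble mean while restoring the amplitude of the pooled member values. A toy sketch; the 1-D "grid" and the three-member ensemble are assumptions for illustration, not the CEFS configuration:

```python
# Hedged sketch of a probability-matched (PM) ensemble mean: rank the grid
# points by the ensemble-mean value, then reassign values drawn from the
# pooled all-member distribution, so extremes are not smoothed away.

def pm_mean(members):
    npts = len(members[0])
    mean_field = [sum(m[i] for m in members) / len(members) for i in range(npts)]

    # Pool all member values and subsample back down to the grid size,
    # keeping the largest pooled value.
    pooled = sorted(v for m in members for v in m)
    step = len(pooled) / npts
    resampled = sorted(pooled[int((i + 1) * step) - 1] for i in range(npts))

    # Largest resampled value goes to the grid point with the largest
    # ensemble-mean value, and so on down the ranking.
    order = sorted(range(npts), key=lambda i: mean_field[i])
    out = [0.0] * npts
    for rank, idx in enumerate(order):
        out[idx] = resampled[rank]
    return out

members = [[5.0, 60.0, 20.0, 0.0],
           [8.0, 90.0, 10.0, 1.0],
           [2.0, 30.0, 45.0, 0.0]]
print(pm_mean(members))  # peak exceeds the plain mean's smoothed maximum
```

    Here the plain ensemble mean peaks at 60.0, while the PM mean restores the 90.0 peak contributed by one member, which is why PM means are favored for extreme-precipitation maxima.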

  14. Forecasting eruption size: what we know, what we don't know

    NASA Astrophysics Data System (ADS)

    Papale, Paolo

    2017-04-01

    Any eruption forecast includes an evaluation of the expected size of the forthcoming eruption, usually expressed as the probability associated with given size classes. Such evaluation is mostly based on the previous volcanic history at the specific volcano, or it refers to a broader class of volcanoes constituting "analogues" of the one under consideration. In any case, use of knowledge from past eruptions implies considering the completeness of the reference catalogue and, most importantly, the existence of systematic biases in the catalogue, which may affect probability estimates and translate into biased volcanic hazard forecasts. An analysis of existing catalogues, with major reference to the catalogue from the Smithsonian Global Volcanism Program, suggests that systematic biases largely dominate at global, regional and local scale: volcanic histories reconstructed at individual volcanoes, often used as a reference for volcanic hazard forecasts, are the result of systematic loss of information with time and poor sample representativeness. That situation strictly requires the use of techniques to complete existing catalogues, as well as careful consideration of the uncertainties deriving from inadequate knowledge and model-dependent data elaboration. A reconstructed global eruption size distribution, obtained by merging information from different existing catalogues, shows a mode in the VEI 1-2 range, <0.1% incidence of eruptions with VEI 7 or larger, and substantial uncertainties associated with individual VEI frequencies. Even larger uncertainties are expected to derive from application to individual volcanoes or classes of analogue volcanoes, suggesting large to very large uncertainties associated with volcanic hazard forecasts at virtually any individual volcano worldwide.

  15. Model simulation of the Manasquan water-supply system in Monmouth County, New Jersey

    USGS Publications Warehouse

    Chang, Ming; Tasker, Gary D.; Nieswand, Steven

    2001-01-01

    Model simulation of the Manasquan Water Supply System in Monmouth County, New Jersey, was completed using historic hydrologic data to evaluate the effects of operational and withdrawal alternatives on the Manasquan reservoir and pumping system. Changes in the system operations can be simulated with the model using precipitation forecasts. The Manasquan Reservoir system model operates by using daily streamflow values, which were reconstructed from historical U.S. Geological Survey streamflow-gaging station records. The model is able to run in two modes--General Risk analysis Model (GRAM) and Position Analysis Model (POSA). The GRAM simulation procedure uses reconstructed historical streamflow records to provide probability estimates of certain events, such as reservoir storage levels declining below a specific level, when given an assumed set of operating rules and withdrawal rates. POSA can be used to forecast the likelihood of specified outcomes, such as streamflows falling below statutory passing flows, associated with a specific working plan for the water-supply system over a period of months. The user can manipulate the model and generate graphs and tables of streamflows and storage, for example. This model can be used as a management tool to facilitate the development of drought warning and drought emergency rule curves and safe yield values for the water-supply system.

  16. Drought Water Right Curtailment

    NASA Astrophysics Data System (ADS)

    Walker, W.; Tweet, A.; Magnuson-Skeels, B.; Whittington, C.; Arnold, B.; Lund, J. R.

    2016-12-01

    California's water rights system allocates water based on priority, where lower priority, "junior" rights are curtailed first in a drought. The Drought Water Rights Allocation Tool (DWRAT) was developed to integrate water right allocation models with legal objectives to suggest water rights curtailments during drought. DWRAT incorporates water right use and priorities with a flow-forecasting model to mathematically represent water law and hydrology and suggest water allocations among water rights holders. DWRAT is compiled within an Excel workbook, with an interface and an open-source solver. By implementing California water rights law as an algorithm, DWRAT provides a precise and transparent framework for the complicated and often controversial technical aspects of curtailing water rights use during drought. DWRAT models have been developed for use in the Eel, Russian, and Sacramento river basins. In this study, an initial DWRAT model has been developed for the San Joaquin watershed, which incorporates all water rights holders in the basin and reference gage flows for major tributaries. The San Joaquin DWRAT can assess water allocation reliability by determining probability of rights holders' curtailment for a range of hydrologic conditions. Forecasted flow values can be input to the model to provide decision makers with the ability to make curtailment and water supply strategy decisions. Environmental flow allocations will be further integrated into the model to protect and improve ecosystem water reliability.
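
    The priority principle DWRAT encodes (senior rights served first, junior rights curtailed first in shortage) can be sketched with a toy allocator. The holders, priority years, and flow figures below are hypothetical, and this greedy loop stands in for, rather than reproduces, DWRAT's actual optimization model:

```python
# Hedged sketch of priority-based water allocation: under shortage, flow is
# allocated senior-first, so the most junior rights are curtailed first.

def allocate(rights, available):
    """rights: list of (holder, priority_year, demand); lower year = senior."""
    allocations = {}
    for holder, _, demand in sorted(rights, key=lambda r: r[1]):
        granted = min(demand, available)   # senior right served up to its demand
        allocations[holder] = granted
        available -= granted               # remainder flows to junior rights
    return allocations

rights = [("Farm A", 1914, 40.0),
          ("City B", 1927, 30.0),
          ("Ranch C", 1951, 25.0)]
print(allocate(rights, 60.0))  # the most junior right is fully curtailed
```

    Running such an allocator across a range of forecasted flows yields each holder's curtailment probability, which is the kind of reliability assessment described for the San Joaquin DWRAT.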

  17. Rodent reservoirs of future zoonotic diseases

    PubMed Central

    Han, Barbara A.; Schmidt, John Paul; Bowden, Sarah E.; Drake, John M.

    2015-01-01

    The increasing frequency of zoonotic disease events underscores a need to develop forecasting tools toward a more preemptive approach to outbreak investigation. We apply machine learning to data describing the traits and zoonotic pathogen diversity of the most speciose group of mammals, the rodents, which also comprise a disproportionate number of zoonotic disease reservoirs. Our models predict reservoir status in this group with over 90% accuracy, identifying species with high probabilities of harboring undiscovered zoonotic pathogens based on trait profiles that may serve as rules of thumb to distinguish reservoirs from nonreservoir species. Key predictors of zoonotic reservoirs include biogeographical properties, such as range size, as well as intrinsic host traits associated with lifetime reproductive output. Predicted hotspots of novel rodent reservoir diversity occur in the Middle East and Central Asia and the Midwestern United States. PMID:26038558

  18. Tsunami Early Warning via a Physics-Based Simulation Pipeline

    NASA Astrophysics Data System (ADS)

    Wilson, J. M.; Rundle, J. B.; Donnellan, A.; Ward, S. N.; Komjathy, A.

    2017-12-01

    Through independent efforts, physics-based simulations of earthquakes, tsunamis, and the atmospheric signatures of these phenomena have been developed. With the goal of producing tsunami forecasts and early warning tools for at-risk regions, we join these three spheres to create a simulation pipeline. The Virtual Quake simulator can produce thousands of years of synthetic seismicity on large, complex fault geometries, as well as the expected surface displacement in tsunamigenic regions. These displacements are used as initial conditions for tsunami simulators, such as Tsunami Squares, to produce catalogs of potential tsunami scenarios with probabilities. Finally, these tsunami scenarios can act as input for simulations of associated ionospheric total electron content, signals which can be detected by GNSS satellites for purposes of early warning in the event of a real tsunami. We present the most recent developments in this project.

  19. Forecasting the magnitude and onset of El Niño based on climate network

    NASA Astrophysics Data System (ADS)

    Meng, Jun; Fan, Jingfang; Ashkenazy, Yosef; Bunde, Armin; Havlin, Shlomo

    2018-04-01

    El Niño is probably the most influential climate phenomenon on inter-annual time scales. It affects the global climate system, is associated with natural disasters, and has serious consequences in many aspects of human life. However, forecasts of the onset and, in particular, the magnitude of El Niño are still not accurate enough more than about half a year ahead. Here, we introduce a new forecasting index based on climate network links representing the similarity of low-frequency temporal temperature anomaly variations between different sites in the Niño 3.4 region. We find that significant upward trends in our index forecast the onset of El Niño approximately 1 year ahead, and the highest peak in our index since the end of the last El Niño forecasts the magnitude of the following event. We study the forecasting capability of the proposed index on several datasets, including ERA-Interim, NCEP Reanalysis I, PCMDI-AMIP 1.1.3 and ERSST.v5.

  20. Probabilistic Storm Surge Forecast For Venice

    NASA Astrophysics Data System (ADS)

    Mel, Riccardo; Lionello, Piero

    2013-04-01

    This study describes an ensemble storm surge prediction procedure for the city of Venice, which is potentially very useful for its management and maintenance and for operating the movable barriers presently being built. The Ensemble Prediction System (EPS) is meant to complement the existing sea level (SL) forecast system by providing a probabilistic forecast and information on the uncertainty of the SL prediction. The procedure is applied to storm surge events in the period 2009-2010, producing for each an ensemble of 50 simulations. It is shown that the EPS slightly increases the accuracy of SL prediction with respect to the deterministic forecast (DF) and is more reliable. Though results are biased low and forecast uncertainty is underestimated, the probability distribution of maximum sea level produced by the EPS is acceptably realistic. The error of the EPS mean is shown to be correlated with the EPS spread: SL peaks correspond to maxima of uncertainty, and uncertainty increases linearly with the forecast range. The quasi-linear dynamics of the storm surge produces a modulation of the uncertainty after the SL peak, with a period corresponding to that of the main Adriatic seiche.

  1. The role of ensemble post-processing for modeling the ensemble tail

    NASA Astrophysics Data System (ADS)

    Van De Vyver, Hans; Van Schaeybroeck, Bert; Vannitsem, Stéphane

    2016-04-01

    Over the past decades, the numerical weather prediction community has witnessed a paradigm shift from deterministic to probabilistic forecasting and state estimation (Buizza and Leutbecher, 2015; Buizza et al., 2008), in an attempt to quantify the uncertainties associated with initial-condition and model errors. An important benefit of a probabilistic framework is the improved prediction of extreme events. However, one may ask to what extent such model estimates contain information on the occurrence probability of extreme events and how this information can be optimally extracted. Different approaches based on extreme value theory have been proposed and applied to real-world systems, allowing the estimation of extreme-event probabilities conditional on forecasts and state estimates (Ferro, 2007; Friederichs, 2010). Using ensemble predictions generated with a model of low dimensionality, a thorough investigation is presented quantifying the change in predictability of extreme events associated with ensemble post-processing and other influencing factors, including the finite ensemble size, lead time, model assumptions, and the use of different covariates (ensemble mean, maximum, spread...) for modeling the tail distribution. Tail modeling is performed by deriving extreme-quantile estimates using a peaks-over-threshold representation (generalized Pareto distribution) or quantile regression. Common ensemble post-processing methods aim to improve mostly the ensemble mean and spread of a raw forecast (Van Schaeybroeck and Vannitsem, 2015). Conditional tail modeling, on the other hand, is a post-processing step in itself, focusing on the tails only. It is therefore unclear how applying ensemble post-processing prior to conditional tail modeling impacts the skill of extreme-event predictions. This work investigates this question in detail. References: Buizza, Leutbecher, and Isaksen, 2008: Potential use of an ensemble of analyses in the ECMWF Ensemble Prediction System, Q. J. R. Meteorol. Soc., 134, 2051-2066. Buizza and Leutbecher, 2015: The forecast skill horizon, Q. J. R. Meteorol. Soc., 141, 3366-3382. Ferro, 2007: A probability model for verifying deterministic forecasts of extreme events. Weather and Forecasting, 22 (5), 1089-1100. Friederichs, 2010: Statistical downscaling of extreme precipitation events using extreme value theory. Extremes, 13, 109-132. Van Schaeybroeck and Vannitsem, 2015: Ensemble post-processing using member-by-member approaches: theoretical aspects. Q. J. R. Meteorol. Soc., 141, 807-818.
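
    The peaks-over-threshold tail modeling discussed above can be sketched by fitting a generalized Pareto distribution (GPD) to exceedances over a threshold and inverting it for an extreme quantile. The exceedance data, threshold, and exceedance rate below are illustrative assumptions (not the study's low-dimensional model output), and a simple method-of-moments fit stands in for maximum likelihood:

```python
# Hedged sketch: method-of-moments GPD fit to threshold exceedances, then
# an extreme-quantile estimate from the fitted tail. Data are illustrative.

def gpd_fit_moments(exceedances):
    """Return (shape xi, scale sigma) from sample mean and variance."""
    n = len(exceedances)
    m = sum(exceedances) / n
    v = sum((x - m) ** 2 for x in exceedances) / (n - 1)
    shape = 0.5 * (1.0 - m * m / v)
    scale = 0.5 * m * (m * m / v + 1.0)
    return shape, scale

def gpd_quantile(p, threshold, shape, scale, rate):
    """p-quantile of the full distribution; rate = P(X > threshold).
    Assumes shape != 0 (the shape == 0 limit is exponential)."""
    return threshold + scale / shape * (((1 - p) / rate) ** (-shape) - 1)

exceedances = [0.4, 1.1, 0.2, 2.5, 0.7, 0.3, 1.8, 0.5, 0.9, 3.2]
xi, sigma = gpd_fit_moments(exceedances)
q99 = gpd_quantile(0.99, threshold=10.0, shape=xi, scale=sigma, rate=0.05)
print(f"xi={xi:.3f} sigma={sigma:.3f} 99% quantile={q99:.2f}")
```

    In the conditional-tail setting of the study, the GPD parameters would additionally depend on ensemble covariates (mean, maximum, spread) rather than being fitted unconditionally as here.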

  2. Sentinel Trees as a Tool to Forecast Invasions of Alien Plant Pathogens

    PubMed Central

    Vettraino, AnnaMaria; Roques, Alain; Yart, Annie; Fan, Jian-ting; Sun, Jiang-hua; Vannini, Andrea

    2015-01-01

    Recent disease outbreaks caused by alien invasive pathogens in European forests have posed a serious threat to forest sustainability, with significant environmental and economic effects. Many of the alien tree pathogens recently introduced into Europe were not previously included on any quarantine lists, so they were not subject to phytosanitary inspections. Identifying and describing alien fungi potentially pathogenic to native European flora before their introduction into Europe is a paramount need in order to limit the risk of invasion and the impact on forest ecosystems. To determine the potential invasive fungi, a sentinel-tree plot was established in Fuyang, China, using healthy seedlings of European tree species including Quercus petraea, Q. suber, and Q. ilex. The fungal assemblage associated with symptomatic specimens was studied using tag-encoded 454 pyrosequencing of the nuclear ribosomal internal transcribed spacer-1 (ITS 1). Taxa of probable Asiatic origin were identified, including plant pathogenic genera. These results indicate that sentinel plants may be a strategic tool to improve the prevention of bioinvasions. PMID:25826684

  3. Development of a management tool for reservoirs in Mediterranean environments based on uncertainty analysis

    NASA Astrophysics Data System (ADS)

    Gómez-Beas, R.; Moñino, A.; Polo, M. J.

    2012-05-01

    In compliance with the Water Framework Directive, there is a need for integrated management of water resources, which involves the elaboration of reservoir management models. These models should include the operational and technical aspects that allow an optimal management to be forecast in the short term, besides the factors that may affect the volume of water stored in the medium and long term. The climate fluctuations of the water cycle that affect the reservoir watershed should be considered, as well as the social and economic aspects of the area. This paper presents the development of a management model for the Rules reservoir (southern Spain), through which the water supply is regulated according to set criteria, consistent with existing commitments downstream, with the supply capacity well established as a function of demand, together with the probability of failure when operating requirements are not fulfilled. The results obtained allowed us to characterize the reservoir response at different time scales, to introduce an uncertainty analysis, and to demonstrate the potential of the proposed methodology as a tool for decision making.

  4. Mid-Term Probabilistic Forecast of Oil Spill Trajectories

    NASA Astrophysics Data System (ADS)

    Castanedo, S.; Abascal, A. J.; Cardenas, M.; Medina, R.; Guanche, Y.; Mendez, F. J.; Camus, P.

    2012-12-01

    There is increasing concern about the threat posed by oil spills to the coastal environment. This is reflected in the promulgation of various national and international standards, among them those requiring companies whose activities involve oil spill risk to have oil pollution emergency plans or similar arrangements for responding promptly and effectively to oil pollution incidents. Operational oceanography systems (OOS) that provide decision makers with oil spill trajectory forecasting have demonstrated their usefulness in recent accidents (Castanedo et al., 2006). In recent years, many national and regional OOS have been set up focusing on short-term oil spill forecasts (up to 5 days). However, recent accidental marine oil spills (Prestige in Spain, Deepwater Horizon in the Gulf of Mexico) have revealed the importance of longer prediction horizons (up to 15 days) in regional-scale areas. In this work, we have developed a methodology to provide probabilistic oil spill forecasts based on numerical modelling and statistical methods.
    The main components of this approach are: (1) use of high-resolution, long-term (1948-2009) historical hourly databases of wind, wind-induced currents and astronomical tide currents obtained using state-of-the-art numerical models; (2) classification of representative wind field patterns (n=100) using clustering techniques based on PCA and K-means algorithms (Camus et al., 2011); (3) determination of the cluster occurrence probability and the stochastic matrix (matrix of transition probabilities, or Markov matrix), p_ij, the probability of moving from cluster i to cluster j in one time step; (4) determination of the initial state for mid-term simulations from the available wind forecast using a nearest-neighbors analog method; (5) launch of 15-day stochastic Markov chain simulations (m=1000); (6) computation of the corresponding oil spill trajectories with the TESEO Lagrangian transport model (Abascal et al., 2009); and (7) delivery of probability maps through a user-friendly web app. The application of the method to the Bay of Biscay (northern Spain) demonstrates the capability of this approach. References: Abascal, A.J., Castanedo, S., Mendez, F.J., Medina, R., Losada, I.J., 2009. Calibration of a Lagrangian transport model using drifting buoys deployed during the Prestige oil spill. J. Coast. Res. 25 (1), 80-90. Camus, P., Méndez, F.J., Medina, R., 2011. Analysis of clustering and selection algorithms for the study of multivariate wave climate. Coastal Engineering, doi:10.1016/j.coastaleng.2011.02.003. Castanedo, S., Medina, R., Losada, I.J., Vidal, C., Méndez, F.J., Osorio, A., Juanes, J.A., Puente, A., 2006. The Prestige oil spill in Cantabria (Bay of Biscay). Part I: operational forecasting system for quick response, risk assessment and protection of natural resources. J. Coast. Res. 22 (6), 1474-1489.
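
    Steps (3)-(5) above can be sketched as a stochastic Markov-chain simulation over wind-field clusters. Only 3 clusters and a short chain count are used here instead of the study's n=100 patterns and m=1000 per forecast cycle; the transition matrix is illustrative:

```python
import random

# Hedged sketch of a stochastic Markov-chain simulation over wind-field
# clusters, using a transition ("Markov") matrix p_ij as in the abstract.

P = [[0.7, 0.2, 0.1],   # p_ij: probability of moving from cluster i to j
     [0.3, 0.5, 0.2],
     [0.2, 0.3, 0.5]]

def simulate_chains(start, steps, n_chains, rng):
    chains = []
    for _ in range(n_chains):
        state, path = start, [start]
        for _ in range(steps):
            state = rng.choices(range(len(P)), weights=P[state])[0]
            path.append(state)
        chains.append(path)
    return chains

rng = random.Random(42)
chains = simulate_chains(start=0, steps=15, n_chains=1000, rng=rng)

# Cluster occurrence probability at the final step; in the study each chain
# would drive a TESEO trajectory, and these frequencies become probability maps.
final = [c[-1] for c in chains]
probs = [final.count(k) / len(final) for k in range(len(P))]
print(probs)
```

    Each simulated cluster sequence would then be converted into wind and current forcing for the Lagrangian transport model, so the chain frequencies translate directly into trajectory probabilities.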

  5. Forecasting client transitions in British Columbia's Long-Term Care Program.

    PubMed Central

    Lane, D; Uyeno, D; Stark, A; Gutman, G; McCashin, B

    1987-01-01

    This article presents a model for the annual transitions of clients through various home and facility placements in a long-term care program. The model, an application of Markov chain analysis, is developed, tested, and applied to over 9,000 clients (N = 9,483) in British Columbia's Long Term Care Program (LTC) over the period 1978-1983. Results show that the model gives accurate forecasts of the progress of groups of clients from state to state in the long-term care system from time of admission until eventual death. Statistical methods are used to test the modeling hypothesis that clients' year-over-year transitions occur in constant proportions from state to state within the long-term care system. Tests are carried out by examining actual year-over-year transitions of each year's new admission cohort (1978-1983). Various subsets of the available data are analyzed and, after accounting for clear differences among annual cohorts, the most acceptable model of the actual client transition data occurred when clients were separated into male and female groups, i.e., the transition behavior of each group is describable by a different Markov model. To validate the model, we develop model estimates for the numbers of existing clients in each state of the long-term care system for the period (1981-1983) for which actual data are available. When these estimates are compared with the actual data, total weighted absolute deviations do not exceed 10 percent of actuals. Finally, we use the properties of the Markov chain probability transition matrix and simulation methods to develop three-year forecasts with prediction intervals for the distribution of the existing total clients into each state of the system. The tests, forecasts, and Markov model supplemental information are contained in a mechanized procedure suitable for a microcomputer. 
The procedure provides a powerful, efficient tool for decision makers planning facilities and services in response to the needs of long-term care clients. PMID:3121537
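
    The article's Markov-chain projection can be sketched as a state-distribution vector advanced one year at a time by a transition matrix. The states and probabilities below are illustrative, not the fitted British Columbia LTC matrices (which the article further splits by sex):

```python
# Hedged sketch of a Markov cohort projection: a cohort's distribution across
# care states is advanced year by year via a transition matrix.

STATES = ["home care", "facility care", "deceased"]
P = [[0.80, 0.15, 0.05],   # row: from-state, column: to-state
     [0.05, 0.80, 0.15],
     [0.00, 0.00, 1.00]]   # death is an absorbing state

def project(dist, years):
    for _ in range(years):
        dist = [sum(dist[i] * P[i][j] for i in range(len(P))) for j in range(len(P))]
    return dist

cohort = [1.0, 0.0, 0.0]   # entire admission cohort starts in home care
three_year = project(cohort, 3)
print({s: round(p, 3) for s, p in zip(STATES, three_year)})
```

    Multiplying these state probabilities by cohort sizes gives the client-count forecasts that the article validates against actual 1981-1983 data; prediction intervals then come from simulation around the same matrix.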

  6. [Medium-term forecast of solar cosmic rays radiation risk during a manned Mars mission].

    PubMed

    Petrov, V M; Vlasov, A G

    2006-01-01

    Medium-term forecasting of the radiation hazard from solar cosmic rays (SCR) will be vital in a manned Mars mission. Modern methods of space physics lack acceptable reliability in medium-term forecasting of the SCR onset and parameters. The proposed estimation of average radiation risk from SCR during a manned Mars mission uses existing SCR fluence and spectrum models and the correlation of solar particle event frequency with the predicted Wolf number. Radiation risk is treated as the additional probability of death from acute radiation reactions (the ergonomic component) or acute radiation disease in flight. The algorithm for radiation risk calculation is described, and the resulting risk levels for various periods of the 23rd solar cycle are presented. The applicability of this method to advance forecasting, and possible improvements, are being investigated. Recommendations to the crew based on the risk estimates are exemplified.

  7. Significance of stress transfer in time-dependent earthquake probability calculations

    USGS Publications Warehouse

    Parsons, T.

    2005-01-01

    A sudden change in stress is seen to modify earthquake rates, but should it also revise earthquake probability? The data used to derive input parameters permit an array of forecasts; so how large a static stress change is required to cause a statistically significant earthquake probability change? To answer that question, the effects of parameter and philosophical choices are examined through all phases of sample calculations. Drawing at random from distributions of recurrence-aperiodicity pairs identifies many that recreate long paleoseismic and historic earthquake catalogs. Probability density functions built from the recurrence-aperiodicity pairs give the range of possible earthquake forecasts under a point-process renewal model. Consequences of choices made in stress transfer calculations, such as different slip models, fault rake, dip, and friction, are tracked. For interactions among large faults, calculated peak stress changes may be localized, with most of the receiving fault area changed less than the mean. Thus, to avoid overstating probability change on segments, stress change values should be drawn from a distribution reflecting the spatial pattern rather than using the segment mean. Disparity resulting from interaction probability methodology is also examined. For a fault with a well-understood earthquake history, a minimum ratio of stress change to stressing rate of 10:1 to 20:1 is required to significantly skew probabilities with >80-85% confidence; that ratio must be closer to 50:1 to exceed 90-95% confidence levels. Thus, revision of earthquake probability is achievable when a perturbing event is very close to the fault in question or the tectonic stressing rate is low.
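
    The renewal-model logic behind a stress-induced probability change can be sketched as a "clock advance": a static stress step delta_tau shifts the elapsed time by delta_tau divided by the tectonic stressing rate, which is one common way such perturbations enter time-dependent calculations. The lognormal recurrence model and every number below are illustrative assumptions; the paper itself samples distributions of recurrence-aperiodicity pairs rather than fixing one model:

```python
import math

# Hedged sketch: conditional earthquake probability under a lognormal
# renewal model, before and after a stress-step "clock advance".

def lognorm_cdf(t, mu, sigma):
    return 0.5 * (1.0 + math.erf((math.log(t) - mu) / (sigma * math.sqrt(2))))

def cond_prob(elapsed, horizon, mu, sigma):
    """P(event within `horizon` yr | quiet for `elapsed` yr)."""
    survivor = 1.0 - lognorm_cdf(elapsed, mu, sigma)
    return (lognorm_cdf(elapsed + horizon, mu, sigma) - lognorm_cdf(elapsed, mu, sigma)) / survivor

mu, sigma = math.log(150.0), 0.5   # ~150-yr recurrence, moderate aperiodicity
elapsed, horizon = 100.0, 30.0

stress_step = 1.0        # bar (illustrative)
stressing_rate = 0.05    # bar/yr -> clock advance of 20 yr
clock_advance = stress_step / stressing_rate

p0 = cond_prob(elapsed, horizon, mu, sigma)
p1 = cond_prob(elapsed + clock_advance, horizon, mu, sigma)
print(f"before: {p0:.3f} after stress step: {p1:.3f}")
```

    Note how the probability change scales with the stress-change-to-stressing-rate ratio, which is exactly the ratio the paper tests against significance thresholds.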

  8. Do location specific forecasts pose a new challenge for communicating uncertainty?

    NASA Astrophysics Data System (ADS)

    Abraham, Shyamali; Bartlett, Rachel; Standage, Matthew; Black, Alison; Charlton-Perez, Andrew; McCloy, Rachel

    2015-04-01

    In the last decade, the growth of local, site-specific weather forecasts delivered by mobile phone or website represents arguably the fastest change in forecast consumption since the beginning of television weather forecasts 60 years ago. In this study, a street-interception survey of 274 members of the public shows, for the first time, a clear preference for narrow, site-specific weather forecasts over traditional broad weather forecasts, with this preference strongest among users under 40. The impact of this change on the understanding of forecast probability and intensity information is explored. While correct interpretation of the statement 'There is a 30% chance of rain tomorrow' is still low in the cohort, in common with previous studies, a clear impact of age and educational attainment on understanding is shown, with those under 40 and those educated to degree level or above more likely to interpret it correctly. The cohort's interpretation of rainfall intensity descriptors ('Light', 'Moderate', 'Heavy') is shown to differ significantly from official and expert assessments of the same descriptors and to vary widely within the cohort. Despite these key uncertainties, however, members of the cohort generally seem to make appropriate decisions about rainfall forecasts. There is some evidence that the decisions made differ depending on the communication format used, and the cohort expressed a clear preference for tabular over graphical weather forecast presentation.

  9. Model-free aftershock forecasts constructed from similar sequences in the past

    NASA Astrophysics Data System (ADS)

    van der Elst, N.; Page, M. T.

    2017-12-01

    The basic premise behind aftershock forecasting is that sequences in the future will be similar to those in the past. Forecast models typically use empirically tuned parametric distributions to approximate past sequences, and project those distributions into the future to make a forecast. While parametric models do a good job of describing average outcomes, they are not explicitly designed to capture the full range of variability between sequences, and can suffer from over-tuning of the parameters. In particular, parametric forecasts may produce a high rate of "surprises" - sequences that land outside the forecast range. Here we present a non-parametric forecast method that cuts out the parametric "middleman" between training data and forecast. The method is based on finding past sequences that are similar to the target sequence, and evaluating their outcomes. We quantify similarity as the Poisson probability that the observed event count in a past sequence reflects the same underlying intensity as the observed event count in the target sequence. Event counts are defined in terms of differential magnitude relative to the mainshock. The forecast is then constructed from the distribution of past sequences outcomes, weighted by their similarity. We compare the similarity forecast with the Reasenberg and Jones (RJ95) method, for a set of 2807 global aftershock sequences of M≥6 mainshocks. We implement a sequence-specific RJ95 forecast using a global average prior and Bayesian updating, but do not propagate epistemic uncertainty. The RJ95 forecast is somewhat more precise than the similarity forecast: 90% of observed sequences fall within a factor of two of the median RJ95 forecast value, whereas the fraction is 85% for the similarity forecast. However, the surprise rate is much higher for the RJ95 forecast; 10% of observed sequences fall in the upper 2.5% of the (Poissonian) forecast range. The surprise rate is less than 3% for the similarity forecast. 
The similarity forecast may be useful to emergency managers and non-specialists when confidence or expertise in parametric forecasting may be lacking. The method makes over-tuning impossible, and minimizes the rate of surprises. At the least, this forecast constitutes a useful benchmark for more precisely tuned parametric forecasts.
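The similarity weighting described above can be sketched in a few lines of Python. This is an illustrative sketch, not the authors' implementation: it treats the target sequence's observed event count as the Poisson intensity, weights each past sequence by the Poisson probability of its own count under that intensity, and forms a similarity-weighted forecast from the past outcomes. The function names and the scalar outcome values are assumptions.

```python
import math

def poisson_pmf(k, lam):
    """Poisson probability of observing k events given intensity lam."""
    return math.exp(-lam) * lam**k / math.factorial(k)

def similarity_weights(target_count, past_counts):
    """Weight each past sequence by how plausibly its event count
    shares the target sequence's underlying intensity."""
    lam = max(target_count, 1e-9)  # guard against a zero-count target
    w = [poisson_pmf(n, lam) for n in past_counts]
    total = sum(w)
    return [x / total for x in w]

def weighted_forecast(target_count, past_counts, past_outcomes):
    """Similarity-weighted mean of past sequence outcomes."""
    w = similarity_weights(target_count, past_counts)
    return sum(wi * o for wi, o in zip(w, past_outcomes))
```

A past sequence whose count matches the target's receives the largest weight, so the forecast leans on the most similar histories rather than on a fitted parametric family.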

  10. [A method for forecasting the seasonal dynamic of malaria in the municipalities of Colombia].

    PubMed

    Velásquez, Javier Oswaldo Rodríguez

    2010-03-01

    To develop a methodology for forecasting the seasonal dynamic of malaria outbreaks in the municipalities of Colombia. Epidemiologic ranges were defined by multiples of 50 cases for the six municipalities with the highest incidence, 25 cases for the municipalities that ranked 10th and 11th by incidence, 10 for the municipality that ranked 193rd, and 5 for the municipality that ranked 402nd. The specific probability values for each epidemiologic range appearing in each municipality, as well as the S/k value--the ratio between entropy (S) and the Boltzmann constant (k)--were calculated for each three-week set, along with the differences in this ratio between consecutive sets of weeks. These mathematical ratios were used to determine the values for forecasting the case dynamic, which were compared with the actual epidemiologic data from the period 2003-2007. The probability of the epidemiologic ranges appearing ranged from 0.019 to 1.00, while the differences in the S/k ratio between the sets of consecutive weeks ranged from 0.23 to 0.29. Three ratios were established to determine whether the dynamic corresponded to an outbreak. These ratios were corroborated with real epidemiological data from 810 Colombian municipalities. This methodology allows us to forecast the malaria case dynamic and outbreaks in the municipalities of Colombia and can be used in planning interventions and public health policies.

  11. Operational 0-3 h probabilistic quantitative precipitation forecasts: Recent performance and potential enhancements

    NASA Astrophysics Data System (ADS)

    Sokol, Z.; Kitzmiller, D.; Pešice, P.; Guan, S.

    2009-05-01

    The NOAA National Weather Service has maintained an automated, centralized 0-3 h prediction system for probabilistic quantitative precipitation forecasts since 2001. This advective-statistical system (ADSTAT) produces probabilities that rainfall will exceed multiple threshold values up to 50 mm at some location within a 40-km grid box. Operational characteristics and development methods for the system are described. Although development data were stratified by season and time of day, ADSTAT utilizes only a single set of nation-wide equations that relate predictor variables derived from radar reflectivity, lightning, satellite infrared temperatures, and numerical prediction model output to rainfall occurrence. A verification study documented herein showed that the operational ADSTAT reliably models regional variations in the relative frequency of heavy rain events. This was true even in the western United States, where no regional-scale, gridded hourly precipitation data were available during the development period in the 1990s. An effort was recently launched to improve the quality of ADSTAT forecasts by regionalizing the prediction equations and to adapt the model for application in the Czech Republic. We have experimented with incorporating various levels of regional specificity in the probability equations. The geographic localization study showed that in the warm season, regional climate differences and variations in the diurnal temperature cycle have a marked effect on the predictor-predictand relationships, and thus regionalization would lead to better statistical reliability in the forecasts.

  12. Improving hydrologic disaster forecasting and response for transportation by assimilating and fusing NASA and other data sets : final report.

    DOT National Transportation Integrated Search

    2017-04-15

    In this 3-year project, the research team developed the Hydrologic Disaster Forecast and Response (HDFR) system, a set of integrated software tools for end users that streamlines hydrologic prediction workflows involving automated retrieval of hetero...

  13. Severe Weather Forecast Decision Aid

    NASA Technical Reports Server (NTRS)

    Bauman, William H., III; Wheeler, Mark

    2005-01-01

    The Applied Meteorology Unit developed a forecast tool that provides an assessment of the likelihood of local convective severe weather for the day in order to enhance protection of personnel and material assets of the 45th Space Wing, Cape Canaveral Air Force Station (CCAFS), and Kennedy Space Center (KSC).

  14. Development of a decision support tool for seasonal water supply management incorporating system uncertainties and operational constraints

    NASA Astrophysics Data System (ADS)

    Wang, H.; Asefa, T.

    2017-12-01

    A real-time decision support tool (DST) for a water supply system would consider system uncertainties, e.g., uncertain streamflow and demand, as well as operational constraints and infrastructure outages (e.g., a pump station shutdown, an offline reservoir due to maintenance). Such a DST is often used by water managers for resource allocation and delivery to customers. Although most seasonal DSTs used by water managers recognize those system uncertainties and operational constraints, most use only historical information or assume a deterministic outlook of the water supply system. This study presents a seasonal DST that incorporates rainfall/streamflow uncertainties, a seasonal demand outlook, and system operational constraints. Large-scale climate information is captured through a rainfall simulator driven by a Bayesian non-homogeneous Markov Chain Monte Carlo model that allows non-stationary transition probabilities contingent on the Nino 3.4 index. An ad-hoc seasonal demand forecasting model considers weather conditions explicitly and socio-economic factors implicitly. Latin Hypercube sampling is employed to effectively sample the probability density functions of flow and demand. Seasonal system operation is modelled as a mixed-integer optimization problem that aims at minimizing operational costs. It embeds the flexibility of modifying operational rules at different components, e.g., surface water treatment plants, desalination facilities, and groundwater pumping stations. The proposed framework is illustrated at a wholesale water supplier in the southeastern United States, Tampa Bay Water. The use of the tool is demonstrated by providing operational guidance in a typical drawdown and refill cycle of a regional reservoir. The DST provided: 1) a probabilistic outlook of reservoir storage and the chance of a successful refill by the end of the rainy season; and 2) operational expectations for large infrastructure (e.g., high service pumps and booster stations) throughout the season.
Other potential uses of such a DST are also discussed.

  15. Evaluation of a Wildfire Smoke Forecasting System as a Tool for Public Health Protection

    PubMed Central

    Brauer, Michael; Henderson, Sarah B.

    2013-01-01

    Background: Exposure to wildfire smoke has been associated with cardiopulmonary health impacts. Climate change will increase the severity and frequency of smoke events, suggesting a need for enhanced public health protection. Forecasts of smoke exposure can facilitate public health responses. Objectives: We evaluated the utility of a wildfire smoke forecasting system (BlueSky) for public health protection by comparing its forecasts with observations and assessing their associations with population-level indicators of respiratory health in British Columbia, Canada. Methods: We compared BlueSky PM2.5 forecasts with PM2.5 measurements from air quality monitors, and BlueSky smoke plume forecasts with plume tracings from National Oceanic and Atmospheric Administration Hazard Mapping System remote sensing data. Daily counts of the asthma drug salbutamol sulfate dispensations and asthma-related physician visits were aggregated for each geographic local health area (LHA). Daily continuous measures of PM2.5 and binary measures of smoke plume presence, either forecasted or observed, were assigned to each LHA. Poisson regression was used to estimate the association between exposure measures and health indicators. Results: We found modest agreement between forecasts and observations, which was improved during intense fire periods. A 30-μg/m3 increase in BlueSky PM2.5 was associated with an 8% increase in salbutamol dispensations and a 5% increase in asthma-related physician visits. BlueSky plume coverage was associated with 5% and 6% increases in the two health indicators, respectively. The effects were similar for observed smoke, and generally stronger in very smoky areas. Conclusions: BlueSky forecasts showed modest agreement with retrospective measures of smoke and were predictive of respiratory health indicators, suggesting they can provide useful information for public health protection. Citation: Yao J, Brauer M, Henderson SB. 2013. 
Evaluation of a wildfire smoke forecasting system as a tool for public health protection. Environ Health Perspect 121:1142–1147; http://dx.doi.org/10.1289/ehp.1306768 PMID:23906969

  16. Stress-based aftershock forecasts made within 24 h post mainshock: Expected north San Francisco Bay area seismicity changes after the 2014 M = 6.0 West Napa earthquake

    USGS Publications Warehouse

    Parsons, Thomas E.; Segou, Margaret; Sevilgen, Volkan; Milner, Kevin; Field, Edward; Toda, Shinji; Stein, Ross S.

    2014-01-01

    We calculate stress changes resulting from the M = 6.0 West Napa earthquake on north San Francisco Bay area faults. The earthquake ruptured within a series of long faults that pose significant hazard to the Bay area, and we are thus concerned with potential increases in the probability of a large earthquake through stress transfer. We conduct this exercise as a prospective test because the skill of stress-based aftershock forecasting methodology is inconclusive. We apply three methods: (1) generalized mapping of regional Coulomb stress change, (2) stress changes resolved on Uniform California Earthquake Rupture Forecast faults, and (3) a mapped rate/state aftershock forecast. All calculations were completed within 24 h after the main shock and were made without benefit of known aftershocks, which will be used to evaluate the prospective forecast. All methods suggest that we should expect heightened seismicity on parts of the southern Rodgers Creek, northern Hayward, and Green Valley faults.

  17. Stress-based aftershock forecasts made within 24 h post main shock: Expected north San Francisco Bay area seismicity changes after the 2014 M = 6.0 West Napa earthquake

    NASA Astrophysics Data System (ADS)

    Parsons, Tom; Segou, Margaret; Sevilgen, Volkan; Milner, Kevin; Field, Edward; Toda, Shinji; Stein, Ross S.

    2014-12-01

    We calculate stress changes resulting from the M = 6.0 West Napa earthquake on north San Francisco Bay area faults. The earthquake ruptured within a series of long faults that pose significant hazard to the Bay area, and we are thus concerned with potential increases in the probability of a large earthquake through stress transfer. We conduct this exercise as a prospective test because the skill of stress-based aftershock forecasting methodology is inconclusive. We apply three methods: (1) generalized mapping of regional Coulomb stress change, (2) stress changes resolved on Uniform California Earthquake Rupture Forecast faults, and (3) a mapped rate/state aftershock forecast. All calculations were completed within 24 h after the main shock and were made without benefit of known aftershocks, which will be used to evaluate the prospective forecast. All methods suggest that we should expect heightened seismicity on parts of the southern Rodgers Creek, northern Hayward, and Green Valley faults.

  18. Situational Lightning Climatologies for Central Florida, Phase 2, Part 3

    NASA Technical Reports Server (NTRS)

    Bauman, William H., III

    2007-01-01

    The threat of lightning is a daily concern during the warm season in Florida. The forecasters at the Spaceflight Meteorology Group (SMG) at Johnson Space Center in Houston, TX consider lightning in their landing forecasts for space shuttles at the Kennedy Space Center (KSC), FL Shuttle Landing Facility (SLF). The forecasters at the National Weather Service in Melbourne, FL (NWS MLB) do the same in their routine Terminal Aerodrome Forecasts (TAFs) for seven airports in the NWS MLB County Warning Area (CWA). The Applied Meteorology Unit created flow regime climatologies of lightning probability in the 5-, 10-, 20-, and 30-n mi circles surrounding the SLF and all airports in the NWS MLB CWA in 1-, 3-, and 6-hour increments. The results were presented in tabular and graphical format and incorporated into a web-based graphical user interface so forecasters can easily navigate the data; the interface works in any web browser regardless of operating system.

  19. The ecological forecast horizon, and examples of its uses and determinants

    PubMed Central

    Petchey, Owen L; Pontarp, Mikael; Massie, Thomas M; Kéfi, Sonia; Ozgul, Arpat; Weilenmann, Maja; Palamara, Gian Marco; Altermatt, Florian; Matthews, Blake; Levine, Jonathan M; Childs, Dylan Z; McGill, Brian J; Schaepman, Michael E; Schmid, Bernhard; Spaak, Piet; Beckerman, Andrew P; Pennekamp, Frank; Pearse, Ian S; Vasseur, David

    2015-01-01

    Forecasts of ecological dynamics in changing environments are increasingly important, and are available for a plethora of variables, such as species abundance and distribution, community structure and ecosystem processes. There is, however, a general absence of knowledge about how far into the future, or other dimensions (space, temperature, phylogenetic distance), useful ecological forecasts can be made, and about how features of ecological systems relate to these distances. The ecological forecast horizon is the dimensional distance for which useful forecasts can be made. Five case studies illustrate the influence of various sources of uncertainty (e.g. parameter uncertainty, environmental variation, demographic stochasticity and evolution), level of ecological organisation (e.g. population or community), and organismal properties (e.g. body size or number of trophic links) on temporal, spatial and phylogenetic forecast horizons. Insights from these case studies demonstrate that the ecological forecast horizon is a flexible and powerful tool for researching and communicating ecological predictability. It also has potential for motivating and guiding agenda setting for ecological forecasting research and development. PMID:25960188

  20. Willingness-to-pay for a probabilistic flood forecast: a risk-based decision-making game

    NASA Astrophysics Data System (ADS)

    Arnal, Louise; Ramos, Maria-Helena; Coughlan, Erin; Cloke, Hannah L.; Stephens, Elisabeth; Wetterhall, Fredrik; van Andel, Schalk-Jan; Pappenberger, Florian

    2016-04-01

    Forecast uncertainty is a twofold issue, as it constitutes both an added value and a challenge for the forecaster and the user of the forecasts. Many authors have demonstrated the added (economic) value of probabilistic forecasts over deterministic forecasts for a diversity of activities in the water sector (e.g. flood protection, hydroelectric power management and navigation). However, the richness of the information is also a source of challenges for operational uses, due partly to the difficulty of transforming the probability of occurrence of an event into a binary decision. The setup and the results of a risk-based decision-making experiment, designed as a game on the topic of flood protection mitigation, called ``How much are you prepared to pay for a forecast?'', will be presented. The game was played at several workshops in 2015, including during this session at the EGU conference in 2015, and a total of 129 worksheets were collected and analysed. The aim of this experiment was to contribute to the understanding of the role of probabilistic forecasts in decision-making processes and their perceived value by decision-makers. Based on the participants' willingness-to-pay for a forecast, the results of the game showed that the value (or the usefulness) of a forecast depends on several factors, including the way users perceive the quality of their forecasts and link it to the perception of their own performance as decision-makers. Balancing avoided costs and the cost (or the benefit) of having forecasts available for making decisions is not straightforward, even in a simplified game situation, and is a topic that deserves more attention from the hydrological forecasting community in the future.

  1. A New Multivariate Approach in Generating Ensemble Meteorological Forcings for Hydrological Forecasting

    NASA Astrophysics Data System (ADS)

    Khajehei, Sepideh; Moradkhani, Hamid

    2015-04-01

    Producing reliable and accurate hydrologic ensemble forecasts is subject to various sources of uncertainty, including meteorological forcing, initial conditions, model structure, and model parameters. Producing reliable and skillful precipitation ensemble forecasts is one approach to reducing the total uncertainty in hydrological applications. Currently, numerical weather prediction (NWP) models produce ensemble forecasts for various temporal ranges, but raw products from NWP models are known to be biased in both mean and spread. There is therefore a need for methods that can generate reliable ensemble forecasts for hydrological applications. One common technique is to apply statistical procedures to generate an ensemble forecast from NWP-generated single-value forecasts. The procedure is based on the bivariate probability distribution between the observation and the single-value precipitation forecast. However, one assumption of the current method is that Gaussian distributions fit the marginal distributions of the observed and modeled climate variable. Here, we describe and evaluate a Bayesian approach based on copula functions to develop an ensemble precipitation forecast from the conditional distribution of single-value precipitation forecasts. Copula functions express the multivariate joint distribution of univariate marginal distributions and are presented as an alternative procedure for capturing the uncertainties related to meteorological forcing. Copulas are capable of modeling the joint distribution of two variables with any level of correlation and dependency. This study is conducted over a sub-basin in the Columbia River Basin in the USA, using monthly precipitation forecasts from the Climate Forecast System (CFS) at 0.5° × 0.5° spatial resolution to reproduce the observations.
The verification is conducted on a different period and the superiority of the procedure is compared with Ensemble Pre-Processor approach currently used by National Weather Service River Forecast Centers in USA.
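A minimal sketch of the conditional-sampling idea described above, assuming the simplest case of a bivariate Gaussian copula (the study's Bayesian estimation and the marginal transforms to and from precipitation space are omitted): once forecast and observation are mapped to standard-normal scores with copula correlation `rho`, the observation score conditional on a forecast score `z` is distributed N(rho·z, 1 − rho²), which can be sampled directly to build an ensemble. All names here are illustrative.

```python
import random

def gaussian_copula_ensemble(fcst_z, rho, n=100, seed=0):
    """Sample an ensemble of observation scores conditional on a forecast
    score, under a bivariate Gaussian copula with correlation rho."""
    rng = random.Random(seed)
    cond_sd = (1.0 - rho**2) ** 0.5  # conditional std. dev. given the forecast
    return [rho * fcst_z + cond_sd * rng.gauss(0.0, 1.0) for _ in range(n)]
```

In a full implementation, each sampled score would then be mapped back through the observed marginal distribution to obtain an ensemble member in precipitation units.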

  2. Case studies of seasonal rainfall forecasts for Hong Kong and its vicinity using a regional climate model

    Treesearch

    David Hui; Karen Shum; Ji Chen; Shyh-Chin Chen; Jack Ritchie; John Roads

    2007-01-01

    Seasonal climate forecasts are one of the most promising tools for providing early warnings for natural hazards such as floods and droughts. Using two case studies, this paper documents the skill of a regional climate model in the seasonal forecasting of below normal rainfall in southern China during the rainy seasons of July–August–September 2003 and April–...

  3. Forecasting Tools Point to Fishing Hotspots

    NASA Technical Reports Server (NTRS)

    2009-01-01

    Private weather forecaster WorldWinds Inc. of Slidell, Louisiana, has employed satellite-gathered oceanic data from Marshall Space Flight Center to create a service that is every fishing enthusiast's dream. The company's FishBytes system uses information about sea surface temperature and chlorophyll levels to forecast favorable conditions for certain fish populations. Transmitting the data to satellite radio subscribers, FishBytes provides maps that guide anglers to the areas where they are most likely to make their favorite catch.

  4. Solar-Terrestrial Predictions: Proceedings of a workshop. Volume 2: Geomagnetic and space environment papers and ionosphere papers

    NASA Astrophysics Data System (ADS)

    Thompson, R. J.; Cole, D. G.; Wilkinson, P. J.; Shea, M. A.; Smart, D.

    1990-11-01

    The following subject areas were covered: a probability forecast for geomagnetic activity; cost recovery in solar-terrestrial predictions; magnetospheric specification and forecasting models; a geomagnetic forecast and monitoring system for power system operation; some aspects of predicting magnetospheric storms; some similarities in ionospheric disturbance characteristics in equatorial, mid-latitude, and sub-auroral regions; ionospheric support for low-VHF radio transmission; a new approach to prediction of ionospheric storms; a comparison of the total electron content of the ionosphere around L=4 at low sunspot numbers with the IRI model; the French ionospheric radio propagation predictions; behavior of the F2 layer at mid-latitudes; and the design of modern ionosondes.

  5. Statistical prediction of seasonal discharge in Central Asia for water resources management: development of a generic (pre-)operational modeling tool

    NASA Astrophysics Data System (ADS)

    Apel, Heiko; Baimaganbetov, Azamat; Kalashnikova, Olga; Gavrilenko, Nadejda; Abdykerimova, Zharkinay; Agalhanova, Marina; Gerlitz, Lars; Unger-Shayesteh, Katy; Vorogushyn, Sergiy; Gafurov, Abror

    2017-04-01

    The semi-arid regions of Central Asia crucially depend on the water resources supplied by the mountainous areas of the Tien-Shan and Pamirs. During the summer months the snow and glacier melt dominated river discharge originating in the mountains provides the main water resource available for agricultural production, but also for storage in reservoirs for energy generation during the winter months. Thus a reliable seasonal forecast of the water resources is crucial for a sustainable management and planning of water resources. In fact, seasonal forecasts are mandatory tasks of all national hydro-meteorological services in the region. In order to support the operational seasonal forecast procedures of hydromet services, this study aims at the development of a generic tool for deriving statistical forecast models of seasonal river discharge. The generic model is kept as simple as possible in order to be driven by available hydrological and meteorological data, and be applicable for all catchments with their often limited data availability in the region. As snowmelt dominates summer runoff, the main meteorological predictors for the forecast models are monthly values of winter precipitation and temperature as recorded by climatological stations in the catchments. These data sets are accompanied by snow cover predictors derived from the operational ModSnow tool, which provides cloud free snow cover data for the selected catchments based on MODIS satellite images. In addition to the meteorological data antecedent streamflow is used as a predictor variable. This basic predictor set was further extended by multi-monthly means of the individual predictors, as well as composites of the predictors. Forecast models are derived based on these predictors as linear combinations of up to 3 or 4 predictors. 
A user selectable number of best models according to pre-defined performance criteria is extracted automatically by the developed model fitting algorithm, which includes a test for robustness by a leave-one-out cross validation. Based on the cross validation the predictive uncertainty was quantified for every prediction model. According to the official procedures of the hydromet services forecasts of the mean seasonal discharge of the period April to September are derived every month starting from January until June. The application of the model for several catchments in Central Asia - ranging from small to the largest rivers - for the period 2000-2015 provided skillful forecasts for most catchments already in January. The skill of the prediction increased every month, with R2 values often in the range 0.8 - 0.9 in April just before the prediction period. The forecasts further improve in the following months, most likely due to the integration of spring precipitation, which is not included in the predictors before May, or spring discharge, which contains indicative information for the overall seasonal discharge. In summary, the proposed generic automatic forecast model development tool provides robust predictions for seasonal water availability in Central Asia, which will be tested against the official forecasts in the upcoming years, with the vision of eventual operational implementation.
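The leave-one-out cross validation used above to test model robustness can be sketched for the simplest case of a single predictor (the tool itself fits linear combinations of up to 3 or 4 predictors; the helper names here are illustrative, not from the tool):

```python
def fit_line(xs, ys):
    """Ordinary least squares for y = a + b*x."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return my - b * mx, b

def loocv_rmse(xs, ys):
    """Refit with each year held out in turn; RMSE of the held-out errors."""
    errs = []
    for i in range(len(xs)):
        a, b = fit_line(xs[:i] + xs[i + 1:], ys[:i] + ys[i + 1:])
        errs.append((ys[i] - (a + b * xs[i])) ** 2)
    return (sum(errs) / len(errs)) ** 0.5
```

Candidate predictor sets can then be ranked by their LOOCV RMSE, which is one way over-fitted models are screened out during automatic model selection.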

  6. Forecasting volcanic unrest using seismicity: The good, the bad and the time consuming

    NASA Astrophysics Data System (ADS)

    Salvage, Rebecca; Neuberg, Jurgen W.

    2013-04-01

    Volcanic eruptions are inherently unpredictable in nature, with scientists struggling to forecast the type and timing of events, particularly in real-time scenarios. Current understanding suggests that statistical patterns within precursory datasets of seismicity prior to eruptive events could be used as real-time forecasting tools. They allow us to determine times of clear deviation in the data, which might be indicative of volcanic unrest. The identification of low-frequency seismic swarms and the acceleration of this seismicity prior to observed volcanic unrest may be key in developing forecasting tools. The development of real-time forecasting models that can be implemented at volcano observatories is of particular importance, since the identification of early warning signals allows risk to the nearby population to be minimized. We concentrate on understanding the significance and development of these seismic swarms as unrest develops at the volcano. In particular, analysis of accelerations in event rate, amplitude, and energy release rate prior to eruption suggests that these are important indicators of developing unrest. Real-time analysis of these parameters simultaneously allows possible improvements to forecasting models. Although more time-consuming and computationally intensive, cross-correlation techniques applied to continuous seismicity prior to volcanic unrest allow all significant seismic events to be analysed, rather than only those detected by an automated identification system. This may allow a more accurate forecast, since all precursory seismicity can be taken into account. In addition, the classification of seismic events based on spectral characteristics may allow us to isolate individual types of signals that are responsible for certain types of unrest.
In this way, we may be able to better forecast the type of eruption that may ensue, or at least some of its prevailing characteristics.

  7. HexSim: A flexible simulation model for forecasting wildlife responses to multiple interacting stressors

    EPA Science Inventory

    With SERDP funding, we have improved upon a popular life history simulator (PATCH), and in doing so produced a powerful new forecasting tool (HexSim). PATCH, our starting point, was spatially explicit and individual-based, and was useful for evaluating a range of terrestrial lif...

  8. National Centers for Environmental Prediction

    Science.gov Websites


  9. Linking highway improvements to changes in land use with quasi-experimental research design : a better forecasting tool for transportation decision-making.

    DOT National Transportation Integrated Search

    2009-10-01

    An important issue for future improvement and extensions of highways will be the ability of projects to sustain challenges to Environmental Impact Statements based upon forecasts of regional growth. A legal precedent for such challenges was establish...

  10. Predicting critical transitions in dynamical systems from time series using nonstationary probability density modeling.

    PubMed

    Kwasniok, Frank

    2013-11-01

    A time series analysis method for predicting the probability density of a dynamical system is proposed. A nonstationary parametric model of the probability density is estimated from data within a maximum likelihood framework and then extrapolated to forecast the future probability density and explore the system for critical transitions or tipping points. A full systematic account of parameter uncertainty is taken. The technique is generic, independent of the underlying dynamics of the system. The method is verified on simulated data and then applied to prediction of Arctic sea-ice extent.
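A minimal sketch of the approach above, assuming the simplest nonstationary model: a Gaussian whose mean drifts linearly in time. In that case the maximum likelihood fit reduces to least squares, and the fitted trend is extrapolated to forecast the future density. The paper's models and its systematic treatment of parameter uncertainty are richer than this; all names are illustrative.

```python
def fit_drifting_gaussian(ts, xs):
    """MLE for x_t ~ N(a + b*t, var): least squares for (a, b),
    mean squared residual for var."""
    n = len(ts)
    mt, mx = sum(ts) / n, sum(xs) / n
    b = sum((t - mt) * (x - mx) for t, x in zip(ts, xs)) / \
        sum((t - mt) ** 2 for t in ts)
    a = mx - b * mt
    var = sum((x - (a + b * t)) ** 2 for t, x in zip(ts, xs)) / n
    return a, b, var

def forecast_density(params, t_future):
    """Extrapolate the fitted model: predicted mean and variance
    of the probability density at a future time."""
    a, b, var = params
    return a + b * t_future, var
```

Watching how the extrapolated density evolves (e.g. its mean approaching a known threshold) is one way such a fit can flag an approaching transition.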

  11. Diagnosis of edge condition based on force measurement during milling of composites

    NASA Astrophysics Data System (ADS)

    Felusiak, Agata; Twardowski, Paweł

    2018-04-01

    This paper presents comparative results of forecasting cutting tool wear with different methods of diagnostic inference based on the measurement of cutting force components. The research was carried out during milling of the Duralcan F3S.10S aluminum-ceramic composite. Prediction of the tool wear was based on one-variable regression, two-variable regression, and Multilayer Perceptron (MLP) and Radial Basis Function (RBF) neural networks. Forecasting the condition of the cutting tool on the basis of cutting forces yielded very satisfactory results.

  12. Multiple Solutions of Real-time Tsunami Forecasting Using Short-term Inundation Forecasting for Tsunamis Tool

    NASA Astrophysics Data System (ADS)

    Gica, E.

    2016-12-01

    The Short-term Inundation Forecasting for Tsunamis (SIFT) tool, developed by the NOAA Center for Tsunami Research (NCTR) at the Pacific Marine Environmental Laboratory (PMEL), is used in forecast operations at the Tsunami Warning Centers in Alaska and Hawaii. The SIFT tool relies on a pre-computed tsunami propagation database, real-time DART buoy data, and an inversion algorithm to define the tsunami source. The tsunami propagation database is composed of 50 km × 100 km unit sources, simulated basin-wide for at least 24 hours. Different combinations of unit sources, DART buoys, and lengths of real-time DART buoy data can generate a wide range of results within the defined tsunami source. For an inexperienced SIFT user, the primary challenge is to determine which solution, among multiple solutions for a single tsunami event, would provide the best forecast in real time. This study investigates how the use of different tsunami sources affects simulated tsunamis at tide gauge locations. Using the tide gauge at Hilo, Hawaii, a total of 50 possible solutions for the 2011 Tohoku tsunami are considered. Maximum tsunami wave amplitude and root mean square error results are used to compare tide gauge data and the simulated tsunami time series. Results of this study will facilitate SIFT users' efforts to determine whether the simulated tide gauge tsunami time series from a specific tsunami source solution would be within the range of possible solutions. This study will serve as the basis for investigating more historical tsunami events and tide gauge locations.
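The root-mean-square-error comparison used above to rank candidate source solutions against a tide gauge record can be sketched as follows (an illustrative sketch only; the function names are assumptions and not part of SIFT):

```python
def rmse(obs, sim):
    """Root mean square error between observed and simulated series."""
    return (sum((o - s) ** 2 for o, s in zip(obs, sim)) / len(obs)) ** 0.5

def rank_solutions(obs, solutions):
    """Sort (name, simulated series) pairs by RMSE against the gauge record,
    best-matching solution first."""
    return sorted(solutions, key=lambda item: rmse(obs, item[1]))
```

Applied to the 50 candidate solutions for a single event, such a ranking would show whether a chosen source solution sits near the well-matching end of the range.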

  13. Operational Earthquake Forecasting of Aftershocks for New England

    NASA Astrophysics Data System (ADS)

    Ebel, J.; Fadugba, O. I.

    2015-12-01

    Although the forecasting of mainshocks is not possible, recent research demonstrates that probabilistic forecasts of expected aftershock activity following moderate and strong earthquakes are possible. Previous work has shown that aftershock sequences in intraplate regions behave similarly to those in California, and thus the operational aftershock forecasting methods currently employed in California can be adopted for use in areas of the eastern U.S. such as New England. In our application, immediately after a felt earthquake in New England, a forecast of expected aftershock activity for the next 7 days will be generated from a generic aftershock activity model. Approximately 24 hours after the mainshock, the parameters of the aftershock model will be updated using the aftershock activity observed up to that point, and a new forecast of expected aftershock activity for the next 7 days will be issued. The forecast will estimate the average number of weak, felt aftershocks and the average expected number of aftershocks based on the aftershock statistics of past New England earthquakes. The forecast will also estimate the probability that an earthquake stronger than the mainshock will take place during the next 7 days. The aftershock forecast will specify the expected aftershock locations as well as the areas over which aftershocks of different magnitudes could be felt. The system will use web pages, email and text messages to distribute the aftershock forecasts. For protracted aftershock sequences, new forecasts will be issued on a regular basis, such as weekly. Initially, the distribution system for the aftershock forecasts will be limited, but it will be expanded as experience with and confidence in the system grow.
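    Generic aftershock forecasts of this kind are commonly built on the modified Omori law for rate decay together with a Poisson occurrence assumption; a sketch with purely illustrative parameter values (k, c, p here are not calibrated for New England) is:

```python
import math

def expected_aftershocks(t1, t2, k=10.0, c=0.05, p=1.1):
    """Expected number of aftershocks between t1 and t2 days after the
    mainshock, integrating the modified Omori rate k/(t+c)^p (p != 1)."""
    F = lambda t: (t + c) ** (1.0 - p) / (1.0 - p)   # antiderivative
    return k * (F(t2) - F(t1))

def prob_at_least_one(t1, t2, **kw):
    """Poisson probability of one or more aftershocks in the window."""
    return 1.0 - math.exp(-expected_aftershocks(t1, t2, **kw))

n7 = expected_aftershocks(0.0, 7.0)          # forecast issued at the mainshock
n7_updated = expected_aftershocks(1.0, 8.0)  # 7-day forecast reissued at day 1
p7 = prob_at_least_one(0.0, 7.0)
```

    The updated window starting at day 1 yields a smaller expected count, reflecting the Omori decay that the day-one parameter update in the abstract is designed to capture.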

  14. Environmental predictors of stunting among children under-five in Somalia: cross-sectional studies from 2007 to 2010.

    PubMed

    Kinyoki, Damaris K; Berkley, James A; Moloney, Grainne M; Odundo, Elijah O; Kandala, Ngianga-Bakwin; Noor, Abdisalan M

    2016-07-28

    Stunting among children under five years old is associated with long-term effects on cognitive development, school achievement, economic productivity in adulthood and maternal reproductive outcomes. Accurate estimation of stunting and tools to forecast risk are key to planning interventions. We estimated the prevalence and distribution of stunting among children under five years in Somalia from 2007 to 2010 and explored the role of environmental covariates in its forecasting. Data from household nutritional surveys in Somalia from 2007 to 2010 with a total of 1,066 clusters covering 73,778 children were included. We developed a Bayesian hierarchical space-time model to forecast stunting by using the relationship between observed stunting and environmental covariates in the preceding years. We then applied the model coefficients to environmental covariates in subsequent years. To determine the accuracy of the forecasting, we compared this model with a model that used data from all the years with the corresponding environmental covariates. Rainfall (OR = 0.994, 95 % Credible interval (CrI): 0.993, 0.995) and vegetation cover (OR = 0.719, 95 % CrI: 0.603, 0.858) were significant in forecasting stunting. The difference in estimates of stunting using the two approaches was less than 3 % in all the regions for all forecast years. Stunting in Somalia is spatially and temporally heterogeneous. Rainfall and vegetation are major drivers of these variations. The use of environmental covariates for forecasting of stunting is a potentially useful and affordable tool for planning interventions to reduce the high burden of malnutrition in Somalia.

  15. Improving global flood risk awareness through collaborative research: Id-Lab

    NASA Astrophysics Data System (ADS)

    Weerts, A.; Zijderveld, A.; Cumiskey, L.; Buckman, L.; Verlaan, M.; Baart, F.

    2015-12-01

    Scientific and end-user collaboration on operational flood risk modelling and forecasting requires an environment where scientists and end users can physically work together and demonstrate, enhance and learn about new tools, methods and models for forecasting and warning purposes. Deltares has therefore built a real-time demonstration, training and research infrastructure (an 'operational' room with an ICT backend). This research infrastructure supports several functions: (1) real-time response and disaster management, (2) training, (3) collaborative research, and (4) demonstration. It will be used for a mixture of these functions on a regular basis by Deltares and by scientists and end users such as universities, research institutes, consultants, governments and aid agencies. The infrastructure facilitates emergency advice and support during international and national disasters caused by rainfall, tropical cyclones or tsunamis. It hosts research flood and storm surge forecasting systems at global, continental and regional scales. It facilitates internal and external training for emergency and disaster management, including user training for forecasting systems such as the Delft-FEWS forecasting platform. The facility is expected to inspire and initiate creative innovations by bringing together experts from various organizations. The room hosts interactive modelling developments, participatory workshops and stakeholder meetings. State-of-the-art tools, models and software applied across the globe are available and on display within the facility. We will present the Id-Lab in detail, with particular focus on the global operational forecasting systems GLOFFIS (Global Flood Forecasting Information System) and GLOSSIS (Global Storm Surge Information System).

  16. Air Quality Forecasts Using the NASA GEOS Model: A Unified Tool from Local to Global Scales

    NASA Technical Reports Server (NTRS)

    Knowland, E. Emma; Keller, Christoph; Nielsen, J. Eric; Orbe, Clara; Ott, Lesley; Pawson, Steven; Saunders, Emily; Duncan, Bryan; Cook, Melanie; Liu, Junhua; hide

    2017-01-01

    We provide an introduction to a new high-resolution (0.25 degree) global composition forecast produced by NASA's Global Modeling and Assimilation Office. The NASA Goddard Earth Observing System version 5 (GEOS-5) model has been expanded to provide global near-real-time forecasts of atmospheric composition at a horizontal resolution of 0.25 degrees (approximately 25 km). Previously, this combination of detailed chemistry and resolution was only provided by regional models. The system combines the operational GEOS-5 weather forecasting model with the state-of-the-science GEOS-Chem chemistry module (version 11) to provide detailed chemical analysis of a wide range of air pollutants such as ozone, carbon monoxide, nitrogen oxides, and fine particulate matter (PM2.5). The resolution of the forecasts is the highest among currently publicly available global composition forecasts. Evaluation and validation of modeled trace gases and aerosols against surface and satellite observations will be presented for constituents relevant to health-based air quality standards. Comparisons of modeled trace gases and aerosols against satellite observations show that the model produces realistic concentrations of atmospheric constituents in the free troposphere. Model comparisons against surface observations highlight the model's capability to capture the diurnal variability of air pollutants under a variety of meteorological conditions. The GEOS-5 composition forecasting system offers a new tool for scientists and the public health community, and is being developed jointly with several government and non-profit partners. Potential applications include air quality warnings, flight campaign planning and exposure studies using the archived analysis fields.

  17. Using a cross correlation technique to refine the accuracy of the Failure Forecast Method: Application to Soufrière Hills volcano, Montserrat

    NASA Astrophysics Data System (ADS)

    Salvage, R. O.; Neuberg, J. W.

    2016-09-01

    Prior to many volcanic eruptions, an acceleration in seismicity has been observed, suggesting the potential for this as a forecasting tool. The Failure Forecast Method (FFM) relates an accelerating precursor to the timing of failure by an empirical power law, with failure being defined in this context as the onset of an eruption. Previous applications of the FFM have used a wide variety of accelerating time series, often generating questionable forecasts with large misfits between data and the forecast, as well as the generation of a number of different forecasts from the same data series. Here, we show an alternative approach applying the FFM in combination with a cross correlation technique which identifies seismicity from a single active source mechanism and location at depth. Isolating a single system at depth avoids additional uncertainties introduced by averaging data over a number of different accelerating phenomena, and consequently reduces the misfit between the data and the forecast. Similar seismic waveforms were identified in the precursory accelerating seismicity to dome collapses at Soufrière Hills volcano, Montserrat in June 1997, July 2003 and February 2010. These events were specifically chosen since they represent a spectrum of collapse scenarios at this volcano. The cross correlation technique generates a five-fold increase in the number of seismic events which could be identified from continuous seismic data rather than using triggered data, thus providing a more holistic understanding of the ongoing seismicity at the time. The use of similar seismicity as a forecasting tool for collapses in 1997 and 2003 greatly improved the forecasted timing of the dome collapse, as well as improving the confidence in the forecast, thereby outperforming the classical application of the FFM. We suggest that focusing on a single active seismic system at depth allows a more accurate forecast of some of the major dome collapses from the ongoing eruption at Soufrière Hills volcano, and provides a simple addition to the well-used methodology of the FFM.
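    In its classical form with power-law exponent alpha = 2, the FFM predicts that the inverse event rate declines linearly in time, so the failure (eruption onset) time can be estimated from the zero crossing of a straight-line fit to 1/rate; a minimal sketch on synthetic data (not the Montserrat catalogs):

```python
import numpy as np

def ffm_failure_time(times, rates):
    """Failure Forecast Method, alpha = 2 case: fit a line to the inverse
    event rate and extrapolate to its zero crossing (the failure time)."""
    inv = 1.0 / np.asarray(rates, dtype=float)
    slope, intercept = np.polyfit(times, inv, 1)
    return -intercept / slope

# Synthetic accelerating seismicity: rate diverges as t approaches 5.
t = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
rate = 1.0 / (10.0 - 2.0 * t)          # events per unit time
t_fail = ffm_failure_time(t, rate)     # zero crossing of 1/rate, at t = 5
```

    Restricting the input rates to a single waveform family, as the abstract proposes, changes only what is counted as an event; the extrapolation step itself is unchanged.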

  18. A comparative analysis of errors in long-term econometric forecasts

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tepel, R.

    1986-04-01

    The growing body of literature that documents forecast accuracy falls generally into two parts. The first is prescriptive and is carried out by modelers who use simulation analysis as a tool for model improvement. These studies are ex post; that is, they make use of known values for exogenous variables and generate an error measure wholly attributable to the model. The second type of analysis is descriptive and seeks to measure errors, identify patterns among errors and variables, and compare forecasts from different sources. Most descriptive studies use an ex ante approach; that is, they evaluate model outputs based on estimated (or forecasted) exogenous variables. In this case, it is the forecasting process, rather than the model, that is under scrutiny. This paper uses an ex ante approach to measure errors in forecast series prepared by Data Resources Incorporated (DRI), Wharton Econometric Forecasting Associates (Wharton), and Chase Econometrics (Chase) and to determine whether systematic patterns of errors can be discerned between services, types of variables (by degree of aggregation), length of forecast, and time at which the forecast is made. Errors are measured as the percent difference between actual and forecasted values for the historical period of 1971 to 1983.
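    The study's error measure, the percent difference between actual and forecasted values, can be sketched as follows; the numbers shown are hypothetical, not DRI, Wharton, or Chase data:

```python
def percent_errors(actual, forecast):
    """Ex ante errors: percent difference between actual and forecast values."""
    return [100.0 * (a - f) / a for a, f in zip(actual, forecast)]

def mape(actual, forecast):
    """Mean absolute percent error, one way to compare forecast services."""
    errs = percent_errors(actual, forecast)
    return sum(abs(e) for e in errs) / len(errs)

actual = [100.0, 110.0, 120.0]          # realized values of some aggregate
service_forecast = [98.0, 113.0, 118.0] # hypothetical service forecasts
score = mape(actual, service_forecast)
```

    Averaging absolute percent errors by service, variable type, and forecast horizon is one way to surface the systematic patterns the paper looks for.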

  19. Low Probability Tail Event Analysis and Mitigation in BPA Control Area: Task 2 Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lu, Shuai; Makarov, Yuri V.; McKinstry, Craig A.

    Task report detailing low probability tail event analysis and mitigation in BPA control area. Tail event refers to the situation in a power system when unfavorable forecast errors of load and wind are superposed onto fast load and wind ramps, or non-wind generators falling short of scheduled output, causing the imbalance between generation and load to become very significant.

  20. Verification of Ensemble Forecasts for the New York City Operations Support Tool

    NASA Astrophysics Data System (ADS)

    Day, G.; Schaake, J. C.; Thiemann, M.; Draijer, S.; Wang, L.

    2012-12-01

    The New York City water supply system operated by the Department of Environmental Protection (DEP) serves nine million people. It covers 2,000 square miles of portions of the Catskill, Delaware, and Croton watersheds, and it includes nineteen reservoirs and three controlled lakes. DEP is developing an Operations Support Tool (OST) to support its water supply operations and planning activities. OST includes historical and real-time data, a model of the water supply system complete with operating rules, and lake water quality models developed to evaluate alternatives for managing turbidity in the New York City Catskill reservoirs. OST will enable DEP to manage turbidity in its unfiltered system while satisfying its primary objective of meeting the City's water supply needs, in addition to considering secondary objectives of maintaining ecological flows, supporting fishery and recreation releases, and mitigating downstream flood peaks. The current version of OST relies on statistical forecasts of flows in the system based on recent observed flows. To improve short-term decision making, plans are being made to transition to National Weather Service (NWS) ensemble forecasts based on hydrologic models that account for short-term weather forecast skill, longer-term climate information, as well as the hydrologic state of the watersheds and recent observed flows. To ensure that the ensemble forecasts are unbiased and that the ensemble spread reflects the actual uncertainty of the forecasts, a statistical model has been developed to post-process the NWS ensemble forecasts to account for hydrologic model error as well as any inherent bias and uncertainty in initial model states, meteorological data and forecasts. The post-processor is designed to produce adjusted ensemble forecasts that are consistent with the DEP historical flow sequences that were used to develop the system operating rules. 
    A set of historical hindcasts that is representative of the real-time ensemble forecasts is needed to verify that the post-processed forecasts are unbiased, statistically reliable, and preserve the skill inherent in the "raw" NWS ensemble forecasts. A verification procedure and a set of metrics will be presented that provide an objective assessment of ensemble forecasts. The procedure will be applied to both raw ensemble hindcasts and post-processed ensemble hindcasts. The verification metrics will be used to validate proper functioning of the post-processor and to provide a benchmark for comparison of different types of forecasts. For example, current NWS ensemble forecasts are based on climatology, using each historical year to generate a forecast trace. The NWS Hydrologic Ensemble Forecast System (HEFS) under development will utilize output from both the National Oceanic and Atmospheric Administration (NOAA) Global Ensemble Forecast System (GEFS) and the Climate Forecast System (CFS). Incorporating short-term meteorological forecasts and longer-term climate forecast information should provide sharper, more accurate forecasts. Hindcasts from HEFS will enable New York City to generate verification results to validate the new forecasts and further fine-tune system operating rules. Verification results will be presented for different watersheds across a range of seasons, lead times, and flow levels to assess the quality of the current ensemble forecasts.
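    One standard metric for the statistical reliability the abstract describes is the rank histogram, which is approximately flat when the ensemble spread matches the actual forecast uncertainty; a sketch on synthetic data (not the NWS hindcasts) follows:

```python
import numpy as np

def rank_histogram(obs, ensembles):
    """Count the rank of each observation within its ensemble. For an
    n-member ensemble there are n+1 possible ranks; a flat histogram
    indicates a reliable, well-spread ensemble."""
    obs = np.asarray(obs)
    ens = np.asarray(ensembles)                 # shape (n_cases, n_members)
    ranks = (ens < obs[:, None]).sum(axis=1)    # members below each obs
    return np.bincount(ranks, minlength=ens.shape[1] + 1)

rng = np.random.default_rng(1)
members = rng.normal(0, 1, size=(500, 9))       # 9-member ensemble forecasts
observations = rng.normal(0, 1, size=500)       # drawn from the same distribution
hist = rank_histogram(observations, members)    # 10 bins, roughly flat
```

    A U-shaped histogram would flag an under-dispersive ensemble (observations too often outside the spread), which is exactly what the post-processor described above is meant to correct.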
