DOE Office of Scientific and Technical Information (OSTI.GOV)
Backus, George A.; Lowry, Thomas Stephen; Jones, Shannon M.
2017-06-01
This report uses the CMIP5 series of climate model simulations to produce country-level uncertainty distributions for use in socioeconomic risk assessments of climate change impacts. It provides appropriate probability distributions, by month, for 169 countries and autonomous areas on temperature, precipitation, maximum temperature, maximum wind speed, humidity, runoff, soil moisture, and evaporation for the historical period (1976-2005) and for decadal time periods to 2100. It also provides historical and future distributions for the Arctic region on ice concentration, ice thickness, age of ice, and ice ridging in 15-degree longitude arc segments from the Arctic Circle to 80 degrees latitude, plus two polar semicircular regions from 80 to 90 degrees latitude. The uncertainty is meant to describe the lack of knowledge rather than imprecision in the physical simulation, because the emphasis is on unfalsified risk and its use to determine potential socioeconomic impacts. The full report is contained in 27 volumes.
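The per-month probability distributions described above are, at heart, empirical distributions taken across an ensemble of model runs. A minimal sketch of that idea, with invented numbers standing in for CMIP5 output (the report's actual statistical treatment is more involved):

```python
# Sketch: per-month empirical uncertainty distributions from an ensemble of
# climate-model runs, in the spirit of the report's approach. All numbers
# below are invented stand-ins for CMIP5 output.
import statistics

# ensemble[run][month] = simulated mean temperature (degC) for one country
ensemble = [
    [10.1, 11.3, 14.2, 18.0, 22.5, 26.1, 28.3, 27.9, 24.0, 19.2, 14.1, 10.8],
    [ 9.7, 10.9, 13.8, 17.5, 22.0, 25.6, 27.8, 27.4, 23.6, 18.8, 13.7, 10.4],
    [10.5, 11.8, 14.7, 18.4, 23.0, 26.5, 28.9, 28.3, 24.5, 19.7, 14.6, 11.2],
    [ 9.9, 11.1, 14.0, 17.8, 22.2, 25.9, 28.0, 27.6, 23.8, 19.0, 13.9, 10.6],
]

def monthly_percentiles(runs, month):
    """Empirical 5th/50th/95th percentiles across the ensemble for one month."""
    values = sorted(r[month] for r in runs)
    q = statistics.quantiles(values, n=20, method="inclusive")  # 19 cut points
    return q[0], q[9], q[18]  # 5th, 50th, 95th percentiles

p05, p50, p95 = monthly_percentiles(ensemble, month=6)  # July (0-based)
print(p05, p50, p95)
```

With only four runs the percentiles are crude; the real report pools many CMIP5 simulations per scenario before forming distributions.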
Challenges in Defining Tsunami Wave Height
NASA Astrophysics Data System (ADS)
Stroker, K. J.; Dunbar, P. K.; Mungov, G.; Sweeney, A.; Arcos, N. P.
2017-12-01
The NOAA National Centers for Environmental Information (NCEI) and the co-located World Data Service for Geophysics maintain the global tsunami archive, consisting of the historical tsunami database, imagery, and raw and processed water level data. The historical tsunami database incorporates, where available, maximum wave heights for each coastal tide gauge and deep-ocean buoy that recorded a tsunami signal. These data are important because they are used for tsunami hazard assessment, model calibration, validation, and forecasting and warning. There have been ongoing discussions in the tsunami community about the correct way to measure and report these wave heights. It is important to understand how these measurements might vary depending on how the data were processed and on the definition of maximum wave height. On September 16, 2015, an Mw 8.3 earthquake located 48 km west of Illapel, Chile, generated a tsunami that was observed all over the Pacific region. We processed the time-series water level data for 57 tide gauges that recorded this tsunami and compared the maximum wave heights determined from different definitions. We also compared the maximum wave heights from the NCEI-processed data with the heights reported by the NOAA Tsunami Warning Centers. We found that in the near field, different methods of determining the maximum tsunami wave heights could result in large differences due to possible instrumental clipping. We also found that the maximum peak is usually larger than the maximum amplitude (½ peak-to-trough), but the differences for the majority of the stations were <20 cm. For this event, the maximum tsunami wave heights determined by either definition (maximum peak or amplitude) would have validated the forecasts issued by the NOAA Tsunami Warning Centers. Since there is currently only one field in the NCEI historical tsunami database to store the maximum tsunami wave height, NCEI will consider adding an additional field for the maximum peak measurement.
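The two definitions of maximum wave height compared in this abstract (maximum peak versus maximum amplitude, i.e. half the peak-to-trough range) can be illustrated on a synthetic detided water-level series; the numbers below are invented, not Illapel-event data:

```python
# Illustration of the two maximum-wave-height definitions discussed above,
# applied to a synthetic detided water-level residual series (metres).

# residual water level after removing the predicted tide
eta = [0.00, 0.35, 0.80, 0.20, -0.55, -0.10, 0.60, 0.15, -0.30, 0.05]

# Definition 1: maximum peak -- largest positive excursion from zero.
max_peak = max(eta)

# Definition 2: maximum amplitude -- half the peak-to-trough range. As a
# simple proxy this uses half the overall range; operational processing pairs
# each crest with its adjacent trough.
max_amplitude = (max(eta) - min(eta)) / 2

# Here the peak (0.8 m) exceeds the amplitude (0.675 m), consistent with the
# abstract's finding that the maximum peak is usually the larger measure.
print(max_peak, max_amplitude)
```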
Challenges in Defining Tsunami Wave Heights
NASA Astrophysics Data System (ADS)
Dunbar, Paula; Mungov, George; Sweeney, Aaron; Stroker, Kelly; Arcos, Nicolas
2017-08-01
The National Oceanic and Atmospheric Administration (NOAA) National Centers for Environmental Information (NCEI) and the co-located World Data Service for Geophysics maintain the global tsunami archive, consisting of the historical tsunami database, imagery, and raw and processed water level data. The historical tsunami database incorporates, where available, maximum wave heights for each coastal tide gauge and deep-ocean buoy that recorded a tsunami signal. These data are important because they are used for tsunami hazard assessment, model calibration, validation, and forecasting and warning. There have been ongoing discussions in the tsunami community about the correct way to measure and report these wave heights. It is important to understand how these measurements might vary depending on how the data were processed and on the definition of maximum wave height. On September 16, 2015, an Mw 8.3 earthquake located 48 km west of Illapel, Chile, generated a tsunami that was observed all over the Pacific region. We processed the time-series water level data for 57 coastal tide gauges that recorded this tsunami and compared the maximum wave heights determined from different definitions. We also compared the maximum wave heights from the NCEI-processed data with the heights reported by the NOAA Tsunami Warning Centers. We found that in the near field, different methods of determining the maximum tsunami wave heights could result in large differences due to possible instrumental clipping. We also found that the maximum peak is usually larger than the maximum amplitude (½ peak-to-trough), but the differences for the majority of the stations were <20 cm. For this event, the maximum tsunami wave heights determined by either definition (maximum peak or amplitude) would have validated the forecasts issued by the NOAA Tsunami Warning Centers. Since there is currently only one field in the NCEI historical tsunami database to store the maximum tsunami wave height for each tide gauge and deep-ocean buoy, NCEI will consider adding an additional field for the maximum peak measurement.
Water-balance model of a wetland on the Fort Berthold Reservation, North Dakota
Vining, Kevin C.
2007-01-01
A numerical water-balance model was developed to simulate the responses of a wetland on the Fort Berthold Reservation, North Dakota, to historical and possible extreme hydrological inputs and to changes in hydrological inputs that might occur if a proposed refinery is built on the reservation. Results from model simulations indicated that the study wetland would likely contain water during most historical and extreme-precipitation events with the addition of maximum potential discharges of 0.6 acre-foot per day from proposed refinery holding ponds. Extended periods with little precipitation and above-normal temperatures may result in the wetland becoming nearly dry, especially if potential holding-pond discharges are near zero. Daily simulations based on the historical-enhanced climate data set for May and June 2005, which included holding-pond discharges of 0.6 acre-foot per day, indicated that the study-wetland maximum simulated water volume was about 16.2 acre-feet and the maximum simulated water level was about 1.2 feet at the outlet culvert. Daily simulations based on the extreme summer data set, created to represent an extreme event with excessive June precipitation and holding-pond discharges of 0.6 acre-foot per day, indicated that the study-wetland maximum simulated water volume was about 38.6 acre-feet and the maximum simulated water level was about 2.6 feet at the outlet culvert. A simulation performed using the extreme winter climate data set and an outlet culvert blocked with snow and ice resulted in the greatest simulated wetland water volume of about 132 acre-feet and the greatest simulated water level, which would have been about 6.2 feet at the outlet culvert, but water was not likely to overflow an adjacent highway.
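The kind of daily water-balance simulation described above can be sketched as a simple storage update. Everything below except the 0.6 acre-foot-per-day pond discharge from the abstract is a hypothetical stand-in; the actual USGS model's stage-volume relation and culvert hydraulics are specific to the study wetland:

```python
# Minimal daily water-balance sketch for a wetland: storage changes by
# (precipitation + holding-pond discharge) - (evapotranspiration + culvert
# outflow). Units: acre-feet. Parameter values are hypothetical.

CULVERT_INVERT = 15.0  # storage (acre-ft) at which the outlet culvert engages
CULVERT_RATE = 0.5     # fraction of storage above the invert drained per day

def step(storage, precip_af, pond_af, et_af, culvert_open=True):
    """Advance wetland storage by one day."""
    storage = max(0.0, storage + precip_af + pond_af - et_af)
    if culvert_open and storage > CULVERT_INVERT:
        storage -= CULVERT_RATE * (storage - CULVERT_INVERT)
    return storage

# 10 wet days with the 0.6 acre-ft/day pond discharge from the abstract
s = 10.0
for day in range(10):
    s = step(s, precip_af=1.2, pond_af=0.6, et_af=0.3)

# With the culvert blocked (winter scenario), storage climbs much higher,
# mirroring the abstract's blocked-culvert result.
s_blocked = 10.0
for day in range(10):
    s_blocked = step(s_blocked, precip_af=1.2, pond_af=0.6, et_af=0.3,
                     culvert_open=False)

print(round(s, 2), round(s_blocked, 2))
```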
Why does Japan use the probability method to set design flood?
NASA Astrophysics Data System (ADS)
Nakamura, S.; Oki, T.
2015-12-01
A design flood is a hypothetical flood used to make a flood prevention plan. In Japan, a probability method based on precipitation data is used to define the scale of the design flood: the Tone River, the biggest river in Japan, uses 1 in 200 years, the Shinano River 1 in 150 years, and so on. How to set a reasonable and acceptable design flood in a changing world is an important socio-hydrological issue. The method used to set the design flood varies among countries. The probability method is also used in the Netherlands, but there the base data are water levels or discharges, and the probability is 1 in 1250 years (in the fresh-water section). In contrast, the USA and China apply the maximum flood method, which sets the design flood based on the historical or probable maximum flood. These cases lead to the questions "why does the method vary among countries?" and "why does Japan use the probability method?" The purpose of this study is to clarify, based on the literature, the historical process by which the probability method was developed in Japan. In the late 19th century, the concept of "discharge" and modern river engineering were imported by Dutch engineers, and modern flood prevention plans were developed in Japan. In these plans, the design floods were set based on the historical maximum method. The historical maximum method was used until World War II; after the war, however, it was replaced by the probability method because of its limitations under the specific socio-economic situation: (1) budgets were limited because of the war and the GHQ occupation, and (2) historical floods (the Makurazaki typhoon in 1945, the Kathleen typhoon in 1947, the Ione typhoon in 1948, and so on) struck Japan, broke the records of historical maximum discharge in the main rivers, and made the flood prevention projects difficult to complete.
Japanese hydrologists then imported hydrological probability statistics from the West to take account of the socio-economic situation in setting design floods, and applied them to Japanese rivers in 1958. The probability method was thus adopted in Japan to suit the country's specific socio-economic and natural conditions during the postwar confusion.
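The probability method discussed above amounts to fitting an extreme-value distribution to annual maxima and reading off the 1-in-T-year quantile. A minimal sketch using a method-of-moments Gumbel fit (the rainfall series is invented; real design-flood practice involves more careful distribution selection and goodness-of-fit testing):

```python
# Sketch of the probability method for design floods: fit a Gumbel
# distribution to annual-maximum rainfall (mm) by the method of moments and
# read off the T-year return level. The data below are invented.
import math
import statistics

annual_max = [112, 95, 140, 128, 87, 156, 101, 133, 119, 148,
              92, 125, 110, 162, 99, 137, 121, 104, 145, 130]

mean = statistics.mean(annual_max)
sd = statistics.stdev(annual_max)

# Method-of-moments Gumbel parameters
beta = sd * math.sqrt(6) / math.pi  # scale
mu = mean - 0.5772 * beta           # location (Euler-Mascheroni constant)

def return_level(T):
    """Rainfall depth exceeded on average once every T years."""
    return mu - beta * math.log(-math.log(1 - 1 / T))

# e.g. the 1-in-200-year level used for the Tone River plan
print(round(return_level(100), 1), round(return_level(200), 1))
```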
Maslia, Morris L.; Aral, Mustafa M.; Ruckart, Perri Z.; Bove, Frank J.
2017-01-01
A U.S. government health agency conducted epidemiological studies to evaluate whether exposures to drinking water contaminated with volatile organic compounds (VOC) at U.S. Marine Corps Base Camp Lejeune, North Carolina, were associated with increased health risks to children and adults. These health studies required knowledge of contaminant concentrations in drinking water—at monthly intervals—delivered to family housing, barracks, and other facilities within the study area. Because concentration data were limited or unavailable during much of the period of contamination (1950s–1985), the historical reconstruction process was used to quantify estimates of monthly mean contaminant-specific concentrations. This paper integrates many efforts, reports, and papers into a synthesis of the overall approach to, and results from, a drinking-water historical reconstruction study. Results show that at the Tarawa Terrace water treatment plant (WTP) reconstructed (simulated) tetrachloroethylene (PCE) concentrations reached a maximum monthly average value of 183 micrograms per liter (μg/L) compared to a one-time maximum measured value of 215 μg/L and exceeded the U.S. Environmental Protection Agency’s current maximum contaminant level (MCL) of 5 μg/L during the period November 1957–February 1987. At the Hadnot Point WTP, reconstructed trichloroethylene (TCE) concentrations reached a maximum monthly average value of 783 μg/L compared to a one-time maximum measured value of 1400 μg/L during the period August 1953–December 1984. The Hadnot Point WTP also provided contaminated drinking water to the Holcomb Boulevard housing area continuously prior to June 1972, when the Holcomb Boulevard WTP came on line (maximum reconstructed TCE concentration of 32 μg/L) and intermittently during the period June 1972–February 1985 (maximum reconstructed TCE concentration of 66 μg/L). 
Applying the historical reconstruction process to quantify contaminant-specific monthly drinking-water concentrations is advantageous for epidemiological studies when compared to using the classical exposed versus unexposed approach. PMID:28868161
NASA Astrophysics Data System (ADS)
Haylock, M. R.
2011-10-01
Uncertainty in the return levels of insured loss from European wind storms was quantified using storms derived from twenty-two 25 km regional climate model runs driven by either the ERA40 reanalyses or one of four coupled atmosphere-ocean global climate models. Storms were identified using a model-dependent storm severity index based on daily maximum 10 m wind speed. The wind speed from each model was calibrated to a set of 7 km historical storm wind fields using the 70 storms with the highest severity index in the period 1961-2000, employing a two-stage calibration methodology. First, the 25 km daily maximum wind speed was downscaled to the 7 km historical model grid using the 7 km surface roughness length and orography, also adopting an empirical gust parameterisation. Second, the downscaled wind gusts were statistically scaled to the historical storms to match the geographically dependent cumulative distribution function of wind gust speed. The calibrated wind fields were run through an operational catastrophe reinsurance risk model to determine the return level of loss to a European population density-derived property portfolio. The risk model produced a 50-yr return level of loss of between 0.025% and 0.056% of the total insured value of the portfolio.
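The second calibration stage described above, matching the downscaled gusts to the historical cumulative distribution function, is a form of empirical quantile mapping. A hedged sketch with invented wind speeds (the operational calibration is performed per grid location, not on one pooled sample):

```python
# Empirical quantile mapping: replace each model wind-gust value with the
# historical value at the same empirical quantile rank, so the calibrated
# model CDF matches the historical CDF. All values are invented.
import bisect

model_gusts = sorted([18.0, 21.5, 24.0, 26.5, 30.0, 33.5, 38.0])  # m/s
hist_gusts  = sorted([20.0, 23.0, 26.0, 29.0, 32.5, 36.0, 42.0])  # m/s

def quantile_map(x):
    """Map a model gust to the historical gust at the same quantile rank."""
    rank = bisect.bisect_left(model_gusts, x)  # empirical rank in model sample
    rank = min(rank, len(hist_gusts) - 1)
    return hist_gusts[rank]

# Mapping the model sample through itself reproduces the historical
# distribution exactly -- the defining property of quantile mapping.
calibrated = [quantile_map(x) for x in model_gusts]
print(calibrated)
```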
Tide-surge historical assessment of extreme water levels for the St. Johns River: 1928-2017
NASA Astrophysics Data System (ADS)
Bacopoulos, Peter
2017-10-01
A historical storm population is developed for the St. Johns River, located in northeast Florida on the US east coast, via extreme value assessment of an 89-year-long record of hourly water-level data. Storm surge extrema and the corresponding (independent) storm systems are extracted from the historical record, as are the linear and nonlinear trends of mean sea level. Peaks-over-threshold analysis reveals the top 16 most impactful (storm surge) systems in the general return-period range of 1-100 years. Hurricane Matthew (2016) broke the record with a new absolute maximum water level of 1.56 m, although the peak surge occurred at slack tide (tidal level 0.00 m). Hurricanes and tropical systems contribute to return periods of 10-100 years with water levels in the approximate range of 1.3-1.55 m. Extratropical systems and nor'easters contribute to the historical storm population (in the general return-period range of 1-10 years) and are capable of producing extreme storm surges (in the approximate range of 1.15-1.3 m) on par with those generated by hurricanes and tropical systems. The highest astronomical tide is 1.02 m, which by evaluation of the historical record can contribute as much as 94% to the total storm-tide water level. By static superposition, a hypothetical scenario in which Hurricane Matthew's peak surge coincided with the highest astronomical tide would yield an overall storm-tide water level of 2.58 m, corresponding to an approximate 1000-year return period by historical comparison. Sea-level trends (linear and nonlinear) impact water-level return periods and constitute an additional risk hazard for coastal engineering designs.
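The peaks-over-threshold analysis used above can be sketched as thresholding plus declustering, so that each storm contributes one independent peak. The series, threshold, and separation window below are invented, not St. Johns River data:

```python
# Peaks-over-threshold sketch: extract independent storm-surge peaks from an
# hourly water-level residual series by thresholding, then decluster with a
# minimum separation so each storm counts once. All values are invented.

surge = [0.1, 0.3, 1.2, 1.4, 1.1, 0.2, 0.1, 0.9, 1.3, 1.6, 1.5, 0.4,
         0.2, 0.1, 0.0, 1.1, 0.3]  # metres, hourly residuals
THRESHOLD = 1.0
MIN_SEPARATION = 3                 # hours between independent events

peaks = []
last_idx = -MIN_SEPARATION
for i, h in enumerate(surge):
    if h > THRESHOLD:
        if i - last_idx < MIN_SEPARATION and peaks:
            peaks[-1] = max(peaks[-1], h)  # same storm: keep the larger peak
        else:
            peaks.append(h)                # new independent storm
        last_idx = i

print(peaks)
```

The retained peaks would then feed a generalized Pareto fit to estimate return periods; that fitting step is omitted here.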
County-Level Climate Uncertainty for Risk Assessments: Volume 25 Appendix X - Forecast Sea Ice Age.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Backus, George A.; Lowry, Thomas Stephen; Jones, Shannon M.
2017-05-01
This report uses the CMIP5 series of climate model simulations to produce country-level uncertainty distributions for use in socioeconomic risk assessments of climate change impacts. It provides appropriate probability distributions, by month, for 169 countries and autonomous areas on temperature, precipitation, maximum temperature, maximum wind speed, humidity, runoff, soil moisture, and evaporation for the historical period (1976-2005) and for decadal time periods to 2100. It also provides historical and future distributions for the Arctic region on ice concentration, ice thickness, age of ice, and ice ridging in 15-degree longitude arc segments from the Arctic Circle to 80 degrees latitude, plus two polar semicircular regions from 80 to 90 degrees latitude. The uncertainty is meant to describe the lack of knowledge rather than imprecision in the physical simulation, because the emphasis is on unfalsified risk and its use to determine potential socioeconomic impacts. The full report is contained in 27 volumes.
County-Level Climate Uncertainty for Risk Assessments: Volume 27 Appendix Z - Forecast Ridging Rate.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Backus, George A.; Lowry, Thomas Stephen; Jones, Shannon M.
2017-06-01
County-Level Climate Uncertainty for Risk Assessments: Volume 17 Appendix P - Forecast Soil Moisture
DOE Office of Scientific and Technical Information (OSTI.GOV)
Backus, George A.; Lowry, Thomas Stephen; Jones, Shannon M.
County-Level Climate Uncertainty for Risk Assessments: Volume 1.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Backus, George A.; Lowry, Thomas Stephen; Jones, Shannon M.
Federal Register 2010, 2011, 2012, 2013, 2014
2011-05-04
... crab rebuilding plan to define the stock as rebuilt the first year the stock biomass is above the level... stock assessment model to estimate the biomass level and fishing rate necessary to achieve maximum sustainable yield. Tier 4 stocks have a stock assessment model that estimates biomass using the historical...
Hazard from far-field tsunami at Hilo: Earthquakes from the Ring of Fire
NASA Astrophysics Data System (ADS)
Arcas, D.; Weiss, R.; Titov, V.
2007-12-01
Historical data and modeling are used to study tsunami hazard at Hilo, Hawaii. Hilo has one of the best historical tsunami records in the US. Considering the tsunami observations from the early eighteen hundreds until today reveals that the number of observed events per decade depends on the awareness of tsunami events. The awareness appears to be a function of the observation techniques such as seismometers and communication devices, as well as direct measurements. Three time periods can be identified, in which the number of observed events increases from one event per decade in the first period to 7.7 in the second, to 9.4 events per decade in the third one. A total of 89 events from far-field sources have been encountered. In contrast, only 11 events have been observed with sources in the near field. To remove this historical observation bias from the hazard estimate, we have complemented the historical analysis with a modeling study. We have carried out modeling of 1476 individual earthquakes along the subduction zones of the Pacific Ocean at four different magnitude levels (7.5, 8.2, 8.7 and 9.3). The maximum run-up and maximum peak at the tide gauge are plotted for the different magnitude levels to reveal sensitive source areas of tsunami waves for Hilo, showing a linear scaling of both parameters for small earthquakes but a non-linear scaling for larger ones.
Peter R. Pettengill; Robert E. Manning; William Valliere; Laura E. Anderson
2010-01-01
Historically, transportation planning and management have been guided largely by principles of efficiency. Specifically, the Transportation Research Board has utilized a levels of service (LOS) framework to assess quality of service in terms of traffic congestion, speed and travel time, and maximum road capacity. In the field of park and outdoor recreation management,...
Global imprint of historical connectivity on freshwater fish biodiversity.
Dias, Murilo S; Oberdorff, Thierry; Hugueny, Bernard; Leprieur, Fabien; Jézéquel, Céline; Cornu, Jean-François; Brosse, Sébastien; Grenouillet, Gael; Tedesco, Pablo A
2014-09-01
The relative importance of contemporary and historical processes is central for understanding biodiversity patterns. While several studies show that past conditions can partly explain the current biodiversity patterns, the role of history remains elusive. We reconstructed palaeo-drainage basins under lower sea level conditions (Last Glacial Maximum) to test whether the historical connectivity between basins left an imprint on the global patterns of freshwater fish biodiversity. After controlling for contemporary and past environmental conditions, we found that palaeo-connected basins displayed greater species richness but lower levels of endemism and beta diversity than did palaeo-disconnected basins. Palaeo-connected basins exhibited shallower distance decay of compositional similarity, suggesting that palaeo-river connections favoured the exchange of fish species. Finally, we found that a longer period of palaeo-connection resulted in lower levels of beta diversity. These findings reveal the first unambiguous results of the role played by history in explaining the global contemporary patterns of biodiversity. © 2014 John Wiley & Sons Ltd/CNRS.
A Synthesis of Solar Cycle Prediction Techniques
NASA Technical Reports Server (NTRS)
Hathaway, David H.; Wilson, Robert M.; Reichmann, Edwin J.
1999-01-01
A number of techniques currently in use for predicting solar activity on a solar cycle timescale are tested with historical data. Some techniques, e.g., regression and curve fitting, work well as solar activity approaches maximum and provide a month-by-month description of future activity, while others, e.g., geomagnetic precursors, work well near solar minimum but only provide an estimate of the amplitude of the cycle. A synthesis of different techniques is shown to provide a more accurate and useful forecast of solar cycle activity levels. A combination of two uncorrelated geomagnetic precursor techniques provides a more accurate prediction for the amplitude of a solar activity cycle at a time well before activity minimum. This combined precursor method gives a smoothed sunspot number maximum of 154 plus or minus 21 at the 95% level of confidence for the next cycle maximum. A mathematical function dependent on the time of cycle initiation and the cycle amplitude is used to describe the level of solar activity month by month for the next cycle. As the time of cycle maximum approaches, a better estimate of the cycle activity is obtained by including the fit between previous activity levels and this function. This Combined Solar Cycle Activity Forecast gives, as of January 1999, a smoothed sunspot maximum of 146 plus or minus 20 at the 95% level of confidence for the next cycle maximum.
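The combination of two uncorrelated precursor estimates described above can be sketched with inverse-variance weighting, a standard way to merge independent estimates; the numbers below are illustrative, not the paper's actual precursor values or uncertainties.

```python
# Inverse-variance weighting of two uncorrelated predictions: a minimal
# sketch, assuming each precursor gives an estimate and a variance.

def combine(est_a, var_a, est_b, var_b):
    """Combine two uncorrelated estimates; weights are inverse variances."""
    w_a, w_b = 1.0 / var_a, 1.0 / var_b
    combined = (w_a * est_a + w_b * est_b) / (w_a + w_b)
    combined_var = 1.0 / (w_a + w_b)  # smaller than either input variance
    return combined, combined_var

# Two hypothetical precursor estimates of a cycle's sunspot maximum:
est, var = combine(150.0, 400.0, 160.0, 400.0)
print(est, var)  # 155.0 200.0
```

With equal variances this reduces to a simple average; an estimate with a smaller variance pulls the combined value toward itself, and the combined variance is always smaller than either input.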
Vandenberg Air Force Base Upper Level Wind Launch Weather Constraints
NASA Technical Reports Server (NTRS)
Shafer, Jaclyn A.; Wheeler, Mark M.
2012-01-01
The 30th Operational Support Squadron Weather Flight (30 OSSWF) provides comprehensive weather services to the space program at Vandenberg Air Force Base (VAFB) in California. One of their responsibilities is to monitor upper-level winds to ensure safe launch operations of the Minuteman III ballistic missile. The 30 OSSWF tasked the Applied Meteorology Unit (AMU) to analyze VAFB sounding data with the goal of determining the probability of violating (PoV) their upper-level thresholds for wind speed and shear constraints specific to this launch vehicle, and to develop a tool that will calculate the PoV of each constraint on the day of launch. In order to calculate the probability of exceeding each constraint, the AMU collected and analyzed historical data from VAFB. The historical sounding data were retrieved from the National Oceanic and Atmospheric Administration Earth System Research Laboratory archive for the years 1994-2011 and then stratified into four sub-seasons: January-March, April-June, July-September, and October-December. The maximum wind speed and 1000-ft shear values for each sounding in each subseason were determined. To accurately calculate the PoV, the AMU determined the theoretical distributions that best fit the maximum wind speed and maximum shear datasets. Ultimately it was discovered that the maximum wind speeds follow a Gaussian distribution while the maximum shear values follow a lognormal distribution. These results were applied when calculating the averages and standard deviations needed for the historical and real-time PoV calculations. In addition to the requirements outlined in the original task plan, the AMU also included forecast sounding data from the Rapid Refresh model. This information provides further insight for the launch weather officers (LWOs) when determining if a wind constraint violation will occur over the next few hours on day of launch. 
The interactive graphical user interface (GUI) for this project was developed in Microsoft Excel using Visual Basic for Applications. The GUI displays the critical sounding data easily and quickly for the LWOs on day of launch. This tool will replace the existing one used by the 30 OSSWF, assist the LWOs in determining the probability of exceeding specific wind threshold values, and help to improve the overall upper winds forecast for the launch customer.
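The probability-of-violation calculation described above can be sketched as follows: fit a Gaussian to a sub-season's maximum wind speeds and take the tail probability above the launch threshold. The wind speeds and the 40 kt threshold are invented for illustration; they are not the 30 OSSWF constraint values.

```python
from statistics import NormalDist, mean, stdev

# Hypothetical sub-season sample of sounding maximum wind speeds (kt).
max_winds_kt = [28, 35, 31, 42, 38, 25, 33, 45, 30, 36]

# Fit a Gaussian (the distribution the study found for maximum wind speed)
# and compute PoV as the probability of exceeding the threshold.
mu, sigma = mean(max_winds_kt), stdev(max_winds_kt)
threshold_kt = 40.0
pov = 1.0 - NormalDist(mu, sigma).cdf(threshold_kt)  # P(max wind > threshold)
print(round(pov, 3))
```

For the shear constraint, which the study found to be lognormal, the same calculation would be applied to the log-transformed shear values.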
van, Afferden M.; Hansen, A.M.; Fuller, C.C.
2005-01-01
Historical trend in deposition of DDT and its metabolites has been reconstructed by analyzing sediment cores of the Zempoala Lagoon, in the center of Mexico. The small watershed of this mountain lagoon is closed, and it is located between 2,800 and 3,700 masl. It is neither affected by agriculture nor by permanent populations. The Zempoala Lagoon has an average depth of 3.9 m and a maximum depth of 8.8 m. Sediments were extracted with a core sampler and analyzed by isotope methods (137Cs and 210Pb) for dating. Average sedimentation rate was determined at 0.1299 g cm-2 yr-1, corresponding to a maximum age of the 44 cm core of approximately 60 years. The first presence of total-DDT occurs at a depth between 28 and 32 cm of the sediment profile, corresponding to the 1960s, with a concentration of 5.3 µg kg-1. The maximum concentration of total-DDT (13.0 µg kg-1) occurs in sediment layers representing the late 1970s and beginning 1980s. More recently the concentration decreases towards the present concentration of 1.6 µg kg-1. This concentration is below most DDT levels reported in recent sediment studies in the USA. The results indicate that the Zempoala Lagoon represents a natural recipient for studies of the reconstruction of historical trends of atmospheric contaminant deposition in this region. The limitations of the methodology applied, due to the influence of biodegradation on the definition of correct historical concentrations of DDT depositions, are demonstrated.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Backus, George A.; Lowry, Thomas Stephen; Jones, Shannon M.
The maximum intelligible range of the human voice
NASA Astrophysics Data System (ADS)
Boren, Braxton
This dissertation examines the acoustics of the spoken voice at high levels and the maximum number of people that could hear such a voice unamplified in the open air. In particular, it examines an early auditory experiment by Benjamin Franklin which sought to determine the maximum intelligible crowd for the Anglican preacher George Whitefield in the eighteenth century. Using Franklin's description of the experiment and a noise source on Front Street, the geometry and diffraction effects of such a noise source are examined to more precisely pinpoint Franklin's position when Whitefield's voice ceased to be intelligible. Based on historical maps, drawings, and prints, the geometry and material of Market Street is constructed as a computer model which is then used to construct an acoustic cone tracing model. Based on minimal values of the Speech Transmission Index (STI) at Franklin's position, Whitefield's on-axis Sound Pressure Level (SPL) at 1 m is determined, leading to estimates centering around 90 dBA. Recordings are carried out on trained actors and singers to determine their maximum time-averaged SPL at 1 m. This suggests that the greatest average SPL achievable by the human voice is 90-91 dBA, similar to the median estimates for Whitefield's voice. The sites of Whitefield's largest crowds are acoustically modeled based on historical evidence and maps. Based on Whitefield's SPL, the minimal STI value, and the crowd's background noise, this allows a prediction of the minimally intelligible area for each site. These yield maximum crowd estimates of 50,000 under ideal conditions, while crowds of 20,000 to 30,000 seem more reasonable when the crowd was reasonably quiet and Whitefield's voice was near 90 dBA.
137. Photographic copy of historic photo, April 14, 1936 (original ...
137. Photographic copy of historic photo, April 14, 1936 (original print filed in Record Group 115, National Archives, Washington, D.C.). AFTER EXAMINATION OF ALL THE FEATURES OF THE GATE AND CONTROL, AND DUE TO THE PERFECT OPERATION THE GATE WAS RAISED TO MAXIMUM HEIGHT. IT IS SHOWN HERE SOME SEVEN OR EIGHT FEET ABOVE WATER LEVEL COMPLETELY SHUTTING OFF THE FLOW OF THE RIVER. EXAMINATION OF THE INTERIOR OF THE OUTLET TUNNEL WAS CONDUCTED. NO DAMAGE COULD BE DISCOVERED AND TESTS OF THE RING GATE CONTINUED. - Owyhee Dam, Across Owyhee River, Nyssa, Malheur County, OR
2011-01-01
Background Earth history events such as climate change are believed to have played a major role in shaping patterns of genetic structure and diversity in species. However, there is a lag between the time of historical events and the collection of present-day samples that are used to infer contemporary population structure. During this lag phase contemporary processes such as dispersal or non-random mating can erase or reinforce population differences generated by historical events. In this study we evaluate the role of both historical and contemporary processes on the phylogeography of a widespread North American songbird, the Northern Cardinal, Cardinalis cardinalis. Results Phylogenetic analysis revealed deep mtDNA structure with six lineages across the species' range. Ecological niche models supported the same geographic breaks revealed by the mtDNA. A paleoecological niche model for the Last Glacial Maximum indicated that cardinals underwent a dramatic range reduction in eastern North America, whereas their ranges were more stable in México. In eastern North America cardinals expanded out of glacial refugia, but we found no signature of decreased genetic diversity in areas colonized after the Last Glacial Maximum. Present-day demographic data suggested that population growth across the expansion cline is positively correlated with latitude. We propose that there was no loss of genetic diversity in areas colonized after the Last Glacial Maximum because recent high-levels of gene flow across the region have homogenized genetic diversity in eastern North America. Conclusion We show that both deep historical events as well as demographic processes that occurred following these events are critical in shaping genetic pattern and diversity in C. cardinalis. The general implication of our results is that patterns of genetic diversity are best understood when information on species history, ecology, and demography are considered simultaneously. PMID:21599972
NASA Astrophysics Data System (ADS)
Passeri, Davina L.; Hagen, Scott C.; Medeiros, Stephen C.; Bilskie, Matthew V.
2015-12-01
This study evaluates the geophysical influence of the combined effects of historic sea level rise (SLR) and morphology on tidal hydrodynamics in the Grand Bay estuary, located in the Mississippi Sound. Since 1848, the landscape of the Mississippi Sound has been significantly altered as a result of natural and anthropogenic factors including the migration of the offshore Mississippi-Alabama (MSAL) barrier islands and the construction of navigational channels. As a result, the Grand Bay estuary has undergone extensive erosion resulting in the submergence of its protective barrier island, Grand Batture. A large-domain hydrodynamic model was used to simulate present (circa 2005) and past conditions (circa 1848, 1917, and 1960) with unique sea levels, bathymetry, topography and shorelines representative of each time period. Additionally, a hypothetical scenario was performed in which Grand Batture Island exists under 2005 conditions in order to observe the influence of the island on tidal hydrodynamics within the Grand Bay estuary. Changes in tidal amplitudes from the historic conditions varied. Within the Sound, tidal amplitudes were unaltered due to the open exposed shoreline; however, in semi-enclosed embayments outside of the Sound, tidal amplitudes increased. In addition, harmonic constituent phases were slower historically. The position of the MSAL barrier island inlets influenced tidal currents within the Sound; the westward migration of Petit Bois Island allowed stronger tidal velocities to be centered on the Grand Batture Island. Maximum tidal velocities within the Grand Bay estuary were 5 cm/s faster historically, and reversed from being flood dominant in 1848 to ebb dominant in 2005. If the Grand Batture Island was reconstructed under 2005 conditions, tidal amplitudes and phases would not be altered, indicating that the offshore MSAL barrier islands and SLR have a greater influence on these tidal parameters within the estuary. 
However, maximum tidal velocities would increase by as much as 5 cm/s (63%) and currents would become more ebb dominant. Results of this study illustrate the hydrodynamic response of the system to SLR and the changing landscape, and provide insight into potential future changes under SLR and barrier island evolution.
Display of historical and hypothetical tsunami on the coast of Sakhalin Island
NASA Astrophysics Data System (ADS)
Kostenko, Irina; Zaytsev, Andrey; Kurkin, Andrey; Yalciner, Ahmet
2014-05-01
Tsunami waves reach the coast of Sakhalin Island from sources located in the Japan Sea, the Okhotsk Sea, the Kuril Islands region, and the Pacific Ocean. Studying tsunami generation characteristics and propagation makes it possible to assess how tsunamis manifest along various parts of the island coast. For this purpose, a series of computational experiments on several historical tsunamis, with sources in the Japan Sea and the Kuril Islands region, was carried out. The simulation results are compared with observations, and all recorded historical tsunamis on the coast of Sakhalin Island were analyzed. To identify possible tsunami manifestations on the coast of Sakhalin Island, a series of computational experiments on hypothetical tsunamis was carried out, with hydrodynamic sources located in the Japan Sea and the Okhotsk Sea. Different source parameters (length, width, height, raising and lowering of sea level) were used, corresponding to earthquakes of various magnitudes. Maps of the distribution of maximum amplitudes from each tsunami were produced, and areas of the Okhotsk Sea, the Japan Sea, and the offshore strip of Sakhalin Island with maximum tsunami amplitudes were defined. Graphs of the distribution of maximum tsunami wave heights along the coast of Sakhalin Island were plotted. The tsunami numerical code NAMI DANCE, based on the shallow-water equations, was used for the simulations. This work was supported by the ASTARTE project.
NASA Astrophysics Data System (ADS)
Tan, Elcin
A new physically-based methodology for probable maximum precipitation (PMP) estimation is developed over the American River Watershed (ARW) using the Weather Research and Forecast (WRF-ARW) model. A persistent moisture flux convergence pattern, called Pineapple Express, is analyzed for 42 historical extreme precipitation events, and it is found that Pineapple Express causes extreme precipitation over the basin of interest. An average correlation between moisture flux convergence and maximum precipitation is estimated as 0.71 for 42 events. The performance of the WRF model is verified for precipitation by means of calibration and independent validation of the model. The calibration procedure is performed only for the first ranked flood event 1997 case, whereas the WRF model is validated for 42 historical cases. Three nested model domains are set up with horizontal resolutions of 27 km, 9 km, and 3 km over the basin of interest. As a result of Chi-square goodness-of-fit tests, the hypothesis that "the WRF model can be used in the determination of PMP over the ARW for both areal average and point estimates" is accepted at the 5% level of significance. The sensitivities of model physics options on precipitation are determined using 28 microphysics, atmospheric boundary layer, and cumulus parameterization schemes combinations. It is concluded that the best triplet option is Thompson microphysics, Grell 3D ensemble cumulus, and YSU boundary layer (TGY), based on 42 historical cases, and this TGY triplet is used for all analyses of this research. Four techniques are proposed to evaluate physically possible maximum precipitation using the WRF: 1. Perturbations of atmospheric conditions; 2. Shift in atmospheric conditions; 3. Replacement of atmospheric conditions among historical events; and 4. Thermodynamically possible worst-case scenario creation. 
Moreover, climate change effect on precipitation is discussed by emphasizing temperature increase in order to determine the physically possible upper limits of precipitation due to climate change. The simulation results indicate that the meridional shift in atmospheric conditions is the optimum method to determine maximum precipitation in consideration of cost and efficiency. Finally, exceedance probability analyses of the model results of 42 historical extreme precipitation events demonstrate that the 72-hr basin averaged probable maximum precipitation is 21.72 inches for the exceedance probability of 0.5 percent. On the other hand, the current operational PMP estimation for the American River Watershed is 28.57 inches as published in the hydrometeorological report no. 59 and a previous PMP value was 31.48 inches as published in the hydrometeorological report no. 36. According to the exceedance probability analyses of this proposed method, the exceedance probabilities of these two estimations correspond to 0.036 percent and 0.011 percent, respectively.
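The exceedance-probability analysis of the 42 simulated events can be sketched by ranking the simulated 72-hr basin-average precipitation depths and assigning each an empirical exceedance probability. The depths below are synthetic stand-ins, not the study's actual WRF results.

```python
# Empirical exceedance probabilities via Weibull plotting positions:
# P(exceed) = rank / (n + 1), with rank 1 for the largest depth.

def exceedance_probabilities(depths_in):
    """Return (depth, exceedance probability) pairs, largest depth first."""
    ranked = sorted(depths_in, reverse=True)
    n = len(ranked)
    return [(depth, rank / (n + 1)) for rank, depth in enumerate(ranked, start=1)]

events = [9.1, 14.3, 7.8, 21.7, 11.0, 16.5, 8.4, 12.9]  # hypothetical 72-hr depths, inches
for depth, p in exceedance_probabilities(events):
    print(f"{depth:5.1f} in  P_exceed = {p:.3f}")
```

Reading a target probability (e.g., 0.5 percent) off such a curve requires interpolating or fitting a distribution to the ranked depths, since a small sample rarely lands exactly on the target.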
NASA Technical Reports Server (NTRS)
Shafer, Jaclyn A.; Brock, Tyler M.
2012-01-01
The 30th Operational Support Squadron Weather Flight (30 OSSWF) provides comprehensive weather services to the space program at Vandenberg Air Force Base (VAFB) in California. One of their responsibilities is to monitor upper-level winds to ensure safe launch operations of the Minuteman III ballistic missile. The 30 OSSWF tasked the Applied Meteorology Unit (AMU) to analyze VAFB sounding data with the goal of determining the probability of violating (PoV) their upper-level thresholds for wind speed and shear constraints specific to this launch vehicle, and to develop a tool that will calculate the PoV of each constraint on the day of launch. In order to calculate the probability of exceeding each constraint, the AMU collected and analyzed historical data from VAFB. The historical sounding data were retrieved from the National Oceanic and Atmospheric Administration Earth System Research Laboratory archive for the years 1994-2011 and then stratified into four sub-seasons: January-March, April-June, July-September, and October-December. The AMU determined the theoretical distributions that best fit the maximum wind speed and maximum wind shear datasets and applied this information when calculating the averages and standard deviations needed for the historical and real-time PoV calculations. In addition, the AMU included forecast sounding data from the Rapid Refresh model. This information provides further insight for the launch weather officers (LWOs) when determining if a wind constraint violation will occur over the next few hours on the day of launch. The AMU developed an interactive graphical user interface (GUI) in Microsoft Excel using Visual Basic for Applications. The GUI displays the critical sounding data easily and quickly for LWOs on day of launch.
This tool will replace the existing one used by the 30 OSSWF, assist the LWOs in determining the probability of exceeding specific wind threshold values, and help to improve the overall upper winds forecast for the launch customer. This presentation will describe how the AMU calculated the historical and real-time PoV values for the specific upper-level wind launch constraints and outline the development of the interactive GUI display.
2014-10-30
Force Weather Agency (AFWA) WRF 15-km atmospheric model forecast data and low-level turbulence. Archives of historical model data forecast predictors...Relationships between WRF model predictors and PIREPS were developed using the new data mining methodology. The new methodology was inspired...convection. Predictors of turbulence were collected from the AFWA WRF 15km model, and corresponding PIREPS (the predictand) were collected between 2013
Increasing influence of heat stress on French maize yields from the 1960s to the 2030s
Hawkins, Ed; Fricker, Thomas E; Challinor, Andrew J; Ferro, Christopher A T; Kit Ho, Chun; Osborne, Tom M
2013-01-01
Improved crop yield forecasts could enable more effective adaptation to climate variability and change. Here, we explore how to combine historical observations of crop yields and weather with climate model simulations to produce crop yield projections for decision relevant timescales. Firstly, the effects on historical crop yields of improved technology, precipitation and daily maximum temperatures are modelled empirically, accounting for a nonlinear technology trend and interactions between temperature and precipitation, and applied specifically for a case study of maize in France. The relative importance of precipitation variability for maize yields in France has decreased significantly since the 1960s, likely due to increased irrigation. In addition, heat stress is found to be as important for yield as precipitation since around 2000. A significant reduction in maize yield is found for each day with a maximum temperature above 32 °C, in broad agreement with previous estimates. The recent increase in such hot days has likely contributed to the observed yield stagnation. Furthermore, a general method for producing near-term crop yield projections, based on climate model simulations, is developed and utilized. We use projections of future daily maximum temperatures to assess the likely change in yields due to variations in climate. Importantly, we calibrate the climate model projections using observed data to ensure both reliable temperature mean and daily variability characteristics, and demonstrate that these methods work using retrospective predictions. We conclude that, to offset the projected increased daily maximum temperatures over France, improved technology will need to increase base level yields by 12% to be confident about maintaining current levels of yield for the period 2016–2035; the current rate of yield technology increase is not sufficient to meet this target. PMID:23504849
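The kind of empirical yield model described above, a technology trend plus a fixed penalty for each day with maximum temperature above 32 °C, can be sketched as follows. The coefficients and the daily temperature series are illustrative assumptions, not the fitted values from the study.

```python
# A minimal sketch of a trend-plus-heat-stress yield model: yield grows with
# a linear technology trend and loses a fixed amount per day above threshold.

def predicted_yield(base_yield_t_ha, trend_t_ha, years_since_ref, daily_tmax_c,
                    penalty_per_hot_day=0.05, hot_threshold_c=32.0):
    """Return predicted yield (t/ha) for a season of daily max temperatures."""
    hot_days = sum(1 for t in daily_tmax_c if t > hot_threshold_c)
    return (base_yield_t_ha + trend_t_ha * years_since_ref
            - penalty_per_hot_day * hot_days)

season_tmax = [29.5, 33.1, 34.0, 30.2, 31.8, 35.6, 28.9]  # a short sample week
print(predicted_yield(8.0, 0.1, 10, season_tmax))  # 3 hot days: 9.0 - 0.15 = 8.85
```

The full model in the paper also includes precipitation and a temperature-precipitation interaction, omitted here for brevity.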
Nishizawa, Hideaki; Okuyama, Junichi; Kobayashi, Masato; Abe, Osamu; Arai, Nobuaki
2010-01-01
Mitochondrial DNA sequence polymorphisms and patterns of genetic diversity represent the genealogy and relative impacts of historical, geographic, and demographic events on populations. In this study, historical patterns of population dynamics and differentiation in hawksbill (Eretmochelys imbricata) and green turtles (Chelonia mydas) in the Pacific were estimated from feeding populations in the Yaeyama Islands, Japan. Phylogenetic relationships of the haplotypes indicated that hawksbill and green turtles in the Pacific probably underwent very similar patterns and processes of population dynamics over the last million years, with population subdivision during the early Pleistocene and population expansion after the last glacial maximum. These significant contemporary historical events were suggested to have been caused by climatic and sea-level fluctuations. On the other hand, comparing our results to long-term population dynamics in the Atlantic, population subdivisions during the early Pleistocene were specific to Pacific hawksbill and green turtles. Therefore, regional differences in historical population dynamics are suggested. Despite limited sampling locations, these results are the first step in estimating the historical trends in Pacific sea turtles by using phylogenetics and population genetics.
NASA Astrophysics Data System (ADS)
Hagan, Nicole; Robins, Nicholas; Hsu-Kim, Heileen; Halabi, Susan; Morris, Mark; Woodall, George; Zhang, Tong; Bacon, Allan; Richter, Daniel De B.; Vandenberg, John
2011-12-01
Detailed Spanish records of mercury use and silver production during the colonial period in Potosí, Bolivia were evaluated to estimate atmospheric emissions of mercury from silver smelting. Mercury was used in the silver production process in Potosí and nearly 32,000 metric tons of mercury were released to the environment. AERMOD was used in combination with the estimated emissions to approximate historical air concentrations of mercury from colonial mining operations during 1715, a year of relatively low silver production. Source characteristics were selected from archival documents, colonial maps and images of silver smelters in Potosí and a base case of input parameters was selected. Input parameters were varied to understand the sensitivity of the model to each parameter. Modeled maximum 1-h concentrations were most sensitive to stack height and diameter, whereas an index of community exposure was relatively insensitive to uncertainty in input parameters. Modeled 1-h and long-term concentrations were compared to inhalation reference values for elemental mercury vapor. Estimated 1-h maximum concentrations within 500 m of the silver smelters consistently exceeded present-day occupational inhalation reference values. Additionally, the entire community was estimated to have been exposed to levels of mercury vapor that exceed present-day acute inhalation reference values for the general public. Estimated long-term maximum concentrations of mercury were predicted to substantially exceed the EPA Reference Concentration for areas within 600 m of the silver smelters. A concentration gradient predicted by AERMOD was used to select soil sampling locations along transects in Potosí. Total mercury in soils ranged from 0.105 to 155 mg kg-1, among the highest levels reported for surface soils in the scientific literature. 
The correlation between estimated air concentrations and measured soil concentrations will guide future research to determine the extent to which the current community of Potosí and vicinity is at risk of adverse health effects from historical mercury contamination.
Sloto, Ronald A.; McManus, B. Craig
1996-01-01
Valley Forge National Historical Park is just southwest of the Commodore Semiconductor Group (CSG) National Priorities List (Superfund) Site, a source of volatile organic compounds (VOC's) in ground water. The 7.5-square-mile study area includes the part of the park in Lower Providence and West Norriton Townships in Montgomery County, Pa., and surrounding vicinity. The park is underlain by sedimentary rocks of the Upper Triassic-age Stockton Formation. A potentiometric-surface map constructed from water levels measured in 59 wells shows a cone of depression, approximately 0.5 mile in diameter, centered near the CSG Site. The cone of depression is caused by the pumping of six public supply wells. A ground-water divide between the cone of depression and Valley Forge National Historical Park provides a hydraulic barrier to the flow of ground water and contaminants from the CSG Site to the park. If pumping in the cone of depression were to cease, water levels would recover, and the ground-water divide would shift to the north. A hydraulic gradient between the CSG Site and the Schuylkill River would be established, causing contaminated ground water to flow to the park. Water samples were collected from 12 wells within the park boundary and 9 wells between the park boundary and the ground-water divide to the north of the park. All water samples were analyzed for physical properties (field determinations), nutrients, common ions, metals and other trace constituents, and VOC's. Water samples from the 12 wells inside the park boundary also were analyzed for pesticides. Concentrations of inorganic constituents in the water samples did not exceed U.S. Environmental Protection Agency maximum contaminant levels. Very low concentrations of organic compounds were detected in some of the water samples. VOC's were detected in water from 76 percent of the wells sampled; the maximum concentration detected was 5.8 micrograms per liter of chloroform.
The most commonly detected VOC was chloroform. The second most commonly detected compound was methyl tert-butyl ether (MTBE), which was detected in water from 24 percent of wells sampled. Several pesticides were detected in water samples collected from within the park boundaries: chlordane, DDD, dieldrin, endrin, heptachlor epoxide, and simazine. Concentrations of the detected pesticides were 0.1 micrograms per liter or less and did not exceed U.S. Environmental Protection Agency maximum contaminant levels.
Analysis of ground-water-quality data of the Upper Colorado River basin, water years 1972-92
Apodaca, L.E.
1998-01-01
As part of the U.S. Geological Survey's National Water-Quality Assessment program, an analysis of the existing ground-water-quality data in the Upper Colorado River Basin study unit is necessary to provide information on historical water-quality conditions. Analysis of the historical data provides information on the availability or lack of data and water-quality issues. The information gathered from the historical data will be used in the design of ground-water-quality studies in the basin. This report includes an analysis of the ground-water data (well and spring data) available for the Upper Colorado River Basin study unit from water years 1972 to 1992 for major cations and anions, metals and selected trace elements, and nutrients. The data used in the analysis of the ground-water quality in the Upper Colorado River Basin study unit were predominantly from the U.S. Geological Survey National Water Information System and the Colorado Department of Public Health and Environment databases. A total of 212 sites representing alluvial aquifers and 187 sites representing bedrock aquifers were used in the analysis. The available data were not ideal for conducting a comprehensive basinwide water-quality assessment because of a lack of sufficient geographical coverage. Evaluation of the ground-water data in the Upper Colorado River Basin study unit was based on the regional environmental setting, which describes the natural and human factors that can affect the water quality. In this report, the ground-water-quality information is evaluated on the basis of aquifers or potential aquifers (alluvial, Green River Formation, Mesaverde Group, Mancos Shale, Dakota Sandstone, Morrison Formation, Entrada Sandstone, Leadville Limestone, and Precambrian) and land-use classifications for alluvial aquifers. Most of the ground-water-quality data in the study unit were for major cations and anions and dissolved-solids concentrations.
The aquifer with the highest median concentrations of major ions was the Mancos Shale. The U.S. Environmental Protection Agency secondary maximum contaminant level of 500 milligrams per liter for dissolved solids in drinking water was exceeded in about 75 percent of the samples from the Mancos Shale aquifer. The Food and Agriculture Organization of the United Nations guideline of 2,000 milligrams per liter for irrigation water was also exceeded by the median concentration from the Mancos Shale aquifer. For sulfate, the U.S. Environmental Protection Agency proposed maximum contaminant level of 500 milligrams per liter for drinking water was exceeded by the median concentration for the Mancos Shale aquifer. A total of 66 percent of the sites in the Mancos Shale aquifer exceeded the proposed maximum contaminant level. Metal and selected trace-element data were available for some sites, but most of these data were below the detection limit. The median concentrations of iron for the selected aquifers and land-use classifications were below the U.S. Environmental Protection Agency secondary maximum contaminant level of 300 micrograms per liter in drinking water. The median concentration of manganese for the Mancos Shale exceeded the U.S. Environmental Protection Agency secondary maximum contaminant level of 50 micrograms per liter in drinking water. The highest selenium concentrations were in the alluvial aquifer and were associated with rangeland. However, about 22 percent of the selenium values from the Mancos Shale exceeded the U.S. Environmental Protection Agency maximum contaminant level of 50 micrograms per liter in drinking water. Few nutrient data were available for the study unit. The only nutrient species presented in this report were nitrate-plus-nitrite as nitrogen and orthophosphate. Median concentrations for nitrate-plus-nitrite as nitrogen were below the U.S.
Environmental Protection Agency maximum contaminant level of 10 milligrams per liter in drinking water, except for 0.02 percent of the sites in the alluvial aquifer.
NASA Technical Reports Server (NTRS)
Hathaway, D. H.
2000-01-01
A number of techniques for predicting solar activity on a solar cycle time scale are identified, described, and tested with historical data. Some techniques, e.g., regression and curve-fitting, work well as solar activity approaches maximum and provide a month-by-month description of future activity, while others, e.g., geomagnetic precursors, work well near solar minimum but provide an estimate only of the amplitude of the cycle. A synthesis of different techniques is shown to provide a more accurate and useful forecast of solar cycle activity levels. A combination of two uncorrelated geomagnetic precursor techniques provides the most accurate prediction for the amplitude of a solar activity cycle at a time well before activity minimum. This precursor method gave a smoothed sunspot number maximum of 154 ± 21 for cycle 23. A mathematical function dependent upon the time of cycle initiation and the cycle amplitude then describes the level of solar activity for the complete cycle. As the time of cycle maximum approaches, a better estimate of the cycle activity is obtained by including the fit between recent activity levels and this function. This Combined Solar Cycle Activity Forecast now gives a smoothed sunspot maximum of 140 ± 20 for cycle 23. The success of the geomagnetic precursors in predicting future solar activity suggests that solar magnetic phenomena at latitudes above the sunspot activity belts are linked to solar activity that occurs many years later in the lower latitudes.
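The abstract does not write out the "mathematical function dependent upon the time of cycle initiation and the cycle amplitude." A minimal sketch, assuming a Hathaway-style skewed shape function in which the width and asymmetry constants b and c are illustrative placeholders rather than the paper's fitted values:

```python
import math

def cycle_activity(t_months, t0, amplitude_scale, b=54.0, c=0.8):
    """Smoothed sunspot number t_months after an arbitrary epoch, for a
    cycle starting at t0. The cubic-rise / Gaussian-decay shape follows
    the Hathaway-style form; b (width) and c (asymmetry) are assumed,
    illustrative constants, not the study's fitted parameters."""
    x = t_months - t0
    if x <= 0:
        return 0.0  # cycle has not started yet
    return amplitude_scale * x ** 3 / (math.exp(x ** 2 / b ** 2) - c)
```

With these placeholder constants the curve rises quickly over the first few years after t0 and decays slowly afterward, reproducing the characteristic asymmetric solar-cycle profile.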
NASA Astrophysics Data System (ADS)
Panagoulia, Dionysia; Vlahogianni, Eleni I.
2018-06-01
A methodological framework based on nonlinear recurrence analysis is proposed to examine the historical evolution of extremes of maximum and minimum daily mean areal temperature patterns over time under different climate scenarios. The methodology is based on both historical data and atmospheric General Circulation Model (GCM) produced climate scenarios for the periods 1961-2000 and 2061-2100, which correspond to 1 × CO2 and 2 × CO2 scenarios. Historical data were derived from actual daily observations coupled with atmospheric circulation patterns (CPs). The dynamics of the temperature was reconstructed in phase space from the time series of temperatures. Statistical comparison of the different temperature patterns was based on discriminating statistics obtained by Recurrence Quantification Analysis (RQA). Moreover, the bootstrap method of Schinkel et al. (2009) was adopted to calculate the confidence bounds of the RQA parameters based on a structure-preserving resampling. The overall methodology was applied to the mountainous Mesochora catchment in Central-Western Greece. The results reveal substantial similarities between the historical maximum and minimum daily mean areal temperature statistical patterns and their confidence bounds, as well as the maximum and minimum temperature patterns in evolution under the 2 × CO2 scenario. Significant variability and non-stationary behaviour characterize all climate series analyzed. Fundamental differences are produced between the historical and maximum 1 × CO2 scenarios, between the maximum 1 × CO2 and minimum 1 × CO2 scenarios, and between the confidence bounds for the two CO2 scenarios. The 2 × CO2 scenario reflects the strongest shifts in intensity, duration and frequency in temperature patterns.
Such transitions can help scientists and policymakers understand the effects of extreme temperature changes on water resources, economic development, and the health of ecosystems, and hence proceed to effective, proactive management of extreme phenomena. The implications of the findings for the predictability of extreme daily mean areal temperature patterns are also discussed.
Multivariate Regression Analysis of Winter Ozone Events in the Uinta Basin of Eastern Utah, USA
NASA Astrophysics Data System (ADS)
Mansfield, M. L.
2012-12-01
I report on a regression analysis of a number of variables that are involved in the formation of winter ozone in the Uinta Basin of Eastern Utah. One goal of the analysis is to develop a mathematical model capable of predicting the daily maximum ozone concentration from values of a number of independent variables. The dependent variable is the daily maximum ozone concentration at a particular site in the basin. Independent variables are (1) daily lapse rate, (2) daily "basin temperature" (defined below), (3) snow cover, (4) midday solar zenith angle, (5) monthly oil production, (6) monthly gas production, and (7) the number of days since the beginning of a multi-day inversion event. Daily maximum temperature and daily snow cover data are available at ten or fifteen different sites throughout the basin. The daily lapse rate is defined operationally as the slope of the linear least-squares fit to the temperature-altitude plot, and the "basin temperature" is defined as the value assumed by the same least-squares line at an altitude of 1400 m. A multi-day inversion event is defined as a set of consecutive days for which the lapse rate remains positive. The standard deviation in the accuracy of the model is about 10 ppb. The model has been combined with historical climate and oil & gas production data to estimate historical ozone levels.
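The operational definitions of the lapse rate and "basin temperature" translate directly into code. A minimal sketch (a hypothetical helper written for illustration, not the author's implementation):

```python
import numpy as np

def lapse_and_basin_temp(altitudes_m, temps_c, ref_alt_m=1400.0):
    """Compute the abstract's two operational quantities from station data:
    the 'lapse rate' is the slope of a linear least-squares fit of
    temperature against altitude, and the 'basin temperature' is the value
    of that fitted line at 1400 m. A positive slope (temperature increasing
    with altitude) marks an inversion day."""
    slope, intercept = np.polyfit(altitudes_m, temps_c, 1)
    basin_temp = slope * ref_alt_m + intercept
    return slope, basin_temp
```

For example, stations at 1300-1900 m reporting temperatures that rise with altitude yield a positive slope, which under the abstract's definition counts the day toward a multi-day inversion event.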
Lerner, Heather R L; Johnson, Jeff A; Lindsay, Alec R; Kiff, Lloyd F; Mindell, David P
2009-10-05
The harpy eagle (Harpia harpyja) is the largest Neotropical bird of prey and is threatened by human persecution and habitat loss and fragmentation. Current conservation strategies include local education, captive rearing and reintroduction, and protection or creation of trans-national habitat blocks and corridors. Baseline genetic data prior to reintroduction of captive-bred stock is essential for guiding such efforts but has not been gathered previously. We assessed levels of genetic diversity, population structure and demographic history for harpy eagles using samples collected throughout a large portion of their geographic distribution in Central America (n = 32) and South America (n = 31). Based on 417 bp of mitochondrial control region sequence data, relatively high levels of haplotype and nucleotide diversity were estimated for both Central and South America, although haplotype diversity was significantly higher for South America. Historical restriction of gene flow across the Andes (i.e. between our Central and South American subgroups) is supported by coalescent analyses, the haplotype network and significant FST values; however, reciprocally monophyletic lineages do not correspond to geographical locations in maximum likelihood analyses. A sudden population expansion for South America is indicated by a mismatch distribution analysis, and further supported by significant (p < 0.05) negative values of Fu and Li's D* and F*, and Fu's FS. This expansion, estimated at approximately 60 000 years BP (99 000-36 000 years BP, 95% CI), encompasses a transition from a warm and dry time period prior to 50 000 years BP to an interval of maximum precipitation (50 000-36 000 years BP). Notably, this time period precedes the climatic and habitat changes associated with the last glacial maximum. In contrast, a multimodal distribution of haplotypes was observed for Central America, suggesting either population equilibrium or a recent decline.
High levels of mitochondrial genetic diversity in combination with genetic differentiation among subgroups within regions and between regions highlight the importance of local population conservation in order to preserve maximal levels of genetic diversity in this species. Evidence of historically restricted female-mediated gene flow is an important consideration for captive-breeding programs.
Historical view and future demand for knee arthroplasty in Sweden
Rolfson, Ola; W-Dahl, Annette; Garellick, Göran; Sundberg, Martin; Kärrholm, Johan; Robertsson, Otto
2015-01-01
Background and purpose The incidence of knee osteoarthritis will most likely increase. We analyzed historical trends in the incidence of knee arthroplasty in Sweden between 1975 and 2013, in order to be able to provide projections of future demand. Patients and methods We obtained information on all knee arthroplasties in Sweden in the period 1975–2013 from the Swedish Knee Arthroplasty Register, and used public domain data from Statistics Sweden on the evolution of and forecasts for the Swedish population. We forecast the incidence, presuming the existence of a maximum incidence. Results We found that the incidence of knee arthroplasty will continue to increase until a projected upper incidence level of about 469 total knee replacements per 100,000 Swedish residents aged 40 years and older is reached around the year 2130. In 2020, the estimated incidence of total knee arthroplasties per 100,000 Swedish residents aged 40 years and older will be 334 (95% prediction interval (PI): 281–374) and in 2030 it will be 382 (PI: 308–441). Using officially forecast population growth data, around 17,500 operations would be expected to be performed in 2020 and around 21,700 in 2030. Interpretation Today's levels of knee arthroplasty are well below the expected maximum incidence, and we expect a continued annual increase in the total number of knee arthroplasties performed. PMID:25806653
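A forecast that grows toward a presumed maximum incidence can be sketched as a logistic curve. The 469 ceiling is the abstract's figure; the midpoint and rate below are illustrative values chosen so the curve roughly reproduces the abstract's 2020 and 2030 point estimates, and are not the register's actual fitted parameters:

```python
import math

def projected_incidence(year, ceiling=469.0, midpoint=2004.0, rate=0.057):
    """Projected total knee replacements per 100,000 residents aged 40+,
    as logistic growth toward an assumed maximum incidence (the ceiling).
    midpoint and rate are illustrative, hand-tuned assumptions."""
    return ceiling / (1.0 + math.exp(-rate * (year - midpoint)))
```

With these assumed parameters the sketch gives roughly 335 in 2020 and 382 in 2030, and comes within a fraction of a percent of the 469 ceiling by 2130, consistent with the timeline described in the abstract.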
NASA Astrophysics Data System (ADS)
Fischer, Andrea; Seiser, Bernd
2014-05-01
First documentations of Austrian glaciers date from as early as 1601. Early documentations were triggered by glacier advances that created glacier-dammed lakes, which caused floods whenever the dam collapsed. Since then, Austrian glaciers have been documented in drawings and descriptions, and later on in maps and photography. These data are stored in historical archives but are today only partly exploited for historical glaciology. They are of special interest for historical hydrology in glacier-covered basins, as the extent of the snow, firn and ice cover and its elevation affect the hydrological response of the basin to precipitation events in several ways:
- Firn cover: the more area is covered by firn, the higher is the capacity for retention or even refreezing of liquid precipitation and melt water.
- Ice cover: the area covered by glaciers can be affected by melt and contributes to a peak discharge on summer afternoons.
- Surface elevation and temperatures: in case of precipitation events, the lower surface temperatures and higher surface elevation of the glaciers compared to ice-free ground have some impact on the capacity to store precipitation.
- Glacier floods: for the LIA maximum around 1850, a number of advancing glaciers dammed lakes which emptied during floods.
These parameters show different variability with time: glacier area varies by only about 60% to 70% between the LIA maximum and today. The variability of the maximum meltwater peak changes much more than the area. Even during the LIA maximum, several years were extremely warm, so that more than twice the size of today's glacier area was subject to glacier melt. The minimum elevations of large glaciers were several hundred meters lower than today, so that in terms of today's summer mean temperatures, the melt water production from ice ablation would have been much higher than today.
A comparison of historical glacier images and descriptions with today's makes it clear that the extent of the snow cover, and thus the albedo of the glacier surface, has been highly variable. This has a significant impact on meltwater production. These historical glacier data complement the first available runoff data from the early 20th century, taken close to the glacier tongues.
Human Influence on Tropical Cyclone Intensity
NASA Technical Reports Server (NTRS)
Sobel, Adam H.; Camargo, Suzana J.; Hall, Timothy M.; Lee, Chia-Ying; Tippett, Michael K.; Wing, Allison A.
2016-01-01
Recent assessments agree that tropical cyclone intensity should increase as the climate warms. Less agreement exists on the detection of recent historical trends in tropical cyclone intensity.We interpret future and recent historical trends by using the theory of potential intensity, which predicts the maximum intensity achievable by a tropical cyclone in a given local environment. Although greenhouse gas-driven warming increases potential intensity, climate model simulations suggest that aerosol cooling has largely canceled that effect over the historical record. Large natural variability complicates analysis of trends, as do poleward shifts in the latitude of maximum intensity. In the absence of strong reductions in greenhouse gas emissions, future greenhouse gas forcing of potential intensity will increasingly dominate over aerosol forcing, leading to substantially larger increases in tropical cyclone intensities.
Kendy, Eloise; Tresch, R.E.
1996-01-01
This report combines a literature review with new information to provide summaries of the geography, geology, and hydrology of each of 32 intermontane basins in western Montana. The summary of each intermontane basin includes concise descriptions of topography, areal extent, altitude, climate, 1990 population, land and water use, geology, surface water, aquifer hydraulic characteristics, ground-water flow, and ground-water quality. If present, geothermal features are described. Average annual and monthly temperature and precipitation are reported from one National Weather Service station in each basin. Streamflow data, including the drainage area, period of record, and average, minimum, and maximum historical streamflow, are reported for all active and discontinued USGS streamflow-gaging stations in each basin. Monitoring-well data, including the well depth, aquifer, period of record, and minimum and maximum historical water levels, are reported for all long-term USGS monitoring wells in each basin. Brief descriptions of geologic, geophysical, and potentiometric-surface maps available for each basin also are included. The summary for each basin also includes a bibliography of hydrogeologic literature. When used alone or in conjunction with regional RASA reports, this report provides a practical starting point for site-specific hydrogeologic investigations.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Getman, Daniel J
2008-01-01
Many attempts to observe changes in terrestrial systems over time would be significantly enhanced if it were possible to improve the accuracy of classifications of low-resolution historic satellite data. In an effort to examine improving the accuracy of historic satellite image classification by combining satellite and air photo data, two experiments were undertaken in which low-resolution multispectral data and high-resolution panchromatic data were combined and then classified using the ECHO spectral-spatial image classification algorithm and the Maximum Likelihood technique. The multispectral data consisted of 6 multispectral channels (30-meter pixel resolution) from Landsat 7. These data were augmented with panchromatic data (15-meter pixel resolution) from Landsat 7 in the first experiment, and with a mosaic of digital aerial photography (1-meter pixel resolution) in the second. The addition of the Landsat 7 panchromatic data provided a significant improvement in the accuracy of classifications made using the ECHO algorithm. Although the inclusion of aerial photography provided an improvement in accuracy, this improvement was only statistically significant at a 40-60% level. These results suggest that once error levels associated with combining aerial photography and multispectral satellite data are reduced, this approach has the potential to significantly enhance the precision and accuracy of classifications made using historic remotely sensed data, as a way to extend the time range of efforts to track temporal changes in terrestrial systems.
Historical floods in flood frequency analysis: Is this game worth the candle?
NASA Astrophysics Data System (ADS)
Strupczewski, Witold G.; Kochanek, Krzysztof; Bogdanowicz, Ewa
2017-11-01
In flood frequency analysis (FFA) the profit from inclusion of historical information on the largest pre-instrumental floods depends primarily on the reliability of the information, i.e. the accuracy of the magnitude and return period of the floods. This study is focused on the possible theoretical maximum gain in accuracy of estimates of upper quantiles that can be obtained by incorporating the largest historical floods of known return periods into the FFA. We assumed a simple case: N years of systematic records of annual maximum flows and either the one largest (XM1) or two largest (XM1 and XM2) flood peak flows in a historical M-year-long period. The problem is explored by Monte Carlo simulations with the maximum likelihood (ML) method. Both correct and false distributional assumptions are considered. In the first case the two-parameter extreme value models (Gumbel, log-Gumbel, Weibull) with various coefficients of variation serve as parent distributions. In the case of unknown parent distribution, the Weibull distribution was assumed as the estimating model and the truncated Gumbel as the parent distribution. The return periods of XM1 and XM2 are determined from the parent distribution. The results are then compared with the case when the return periods of XM1 and XM2 are defined by their plotting positions. The results are presented in terms of bias, root mean square error and the probability of overestimation of the quantile with 100-year return period. The results of the research indicate that the maximum profit of inclusion of pre-instrumental floods in the FFA may prove smaller than the cost of reconstruction of historical hydrological information.
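The baseline of such a simulation experiment (systematic records only, without the historical-flood augmentation) can be sketched as follows. This sketch assumes a Gumbel parent and, for brevity, fits by method of moments rather than the study's maximum likelihood; the parameter values are illustrative:

```python
import numpy as np

RNG = np.random.default_rng(42)
EULER = 0.5772156649  # Euler-Mascheroni constant

def gumbel_quantile(mu, beta, T):
    """T-year return level of a Gumbel(mu, beta) distribution."""
    return mu - beta * np.log(-np.log(1.0 - 1.0 / T))

def simulate_q100_error(n_years=50, n_sims=2000, mu=100.0, beta=30.0, T=100):
    """Monte Carlo bias and RMSE of the T-year quantile estimated from
    n_years of systematic annual maxima drawn from a Gumbel parent.
    Fitting is by method of moments here for brevity; the study itself
    uses ML and additionally injects the largest historical floods."""
    true_q = gumbel_quantile(mu, beta, T)
    estimates = np.empty(n_sims)
    for i in range(n_sims):
        sample = RNG.gumbel(mu, beta, size=n_years)
        b_hat = sample.std(ddof=1) * np.sqrt(6.0) / np.pi
        m_hat = sample.mean() - EULER * b_hat
        estimates[i] = gumbel_quantile(m_hat, b_hat, T)
    bias = estimates.mean() - true_q
    rmse = np.sqrt(((estimates - true_q) ** 2).mean())
    return true_q, bias, rmse
```

Extending this baseline with one or two censored historical maxima of known return period, as the study does, is what quantifies the "profit" the abstract weighs against the cost of reconstructing that information.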
How the Second Law of Thermodynamics Has Informed Ecosystem Ecology through Its History
NASA Astrophysics Data System (ADS)
Chapman, E. J.; Childers, D. L.; Vallino, J. J.
2014-12-01
Throughout the history of ecosystem ecology, many attempts have been made to develop a general principle governing how systems develop and organize. We reviewed the historical developments that led to the conceptualization of several goal-oriented principles in ecosystem ecology and the relationships among them. We focused our review on two prominent principles—the Maximum Power Principle (MPP) and the Maximum Entropy Production Principle (MEPP)—and the literature that applies to both. While these principles have considerable conceptual overlap and both use concepts from physics (power and entropy), we found considerable differences in their historical development, the disciplines that apply these principles, and their adoption in the literature. We reviewed the literature using Web of Science keyword searches for the MPP and the MEPP, as well as for papers that cited pioneers in the development of the MPP and the MEPP. From the 6000 papers that our keyword searches returned, we limited our further meta-analysis to 32 papers by focusing on studies with a foundation in ecosystems research. Despite these seemingly disparate pasts, we concluded that the conceptual approaches of these two principles were more similar than dissimilar and that maximization of power in ecosystems occurs with maximum entropy production. We also found that these two principles have great potential to explain how systems develop, organize, and function, but there are no widely agreed upon theoretical derivations for the MEPP or the MPP, possibly hindering their broader use in ecological research. We end with recommendations for how ecosystem-level studies may better use these principles.
NASA Astrophysics Data System (ADS)
Peng, Machuan; Xie, Lian; Pietrafesa, Leonard J.
The asymmetry of tropical cyclone induced maximum coastal sea level rise (positive surge) and fall (negative surge) is studied using a three-dimensional storm surge model. It is found that the negative surge induced by offshore winds is more sensitive to wind speed and direction changes than the positive surge induced by onshore winds. As a result, negative surge is inherently more difficult to forecast than positive surge, since there is uncertainty in tropical storm wind forecasts. The asymmetry of negative and positive surge under parametric wind forcing is more apparent in shallow water regions. For tropical cyclones with fixed central pressure, the surge asymmetry increases with decreasing storm translation speed. For those with the same translation speed, a weaker tropical cyclone is expected to attain a higher asymmetry index (AI) value, though its induced maximum surge and fall are smaller. With fixed RMW (radius of maximum wind), the relationship between central pressure and AI is heterogeneous and depends on the value of RMW. A tropical cyclone's wind inflow angle can also affect surge asymmetry. A set of idealized cases as well as two historical tropical cyclones are used to illustrate the surge asymmetry.
Physical understanding of the tropical cyclone wind-pressure relationship.
Chavas, Daniel R; Reed, Kevin A; Knaff, John A
2017-11-08
The relationship between the two common measures of tropical cyclone intensity, the central pressure deficit and the peak near-surface wind speed, is a long-standing problem in tropical meteorology that has been approximated empirically yet lacks physical understanding. Here we provide theoretical grounding for this relationship. We first demonstrate that the central pressure deficit is highly predictable from the low-level wind field via gradient wind balance. We then show that this relationship reduces to a dependence on two velocity scales: the maximum azimuthal-mean azimuthal wind speed and half the product of the Coriolis parameter and outer storm size. This simple theory is found to hold across a hierarchy of models spanning reduced-complexity and Earth-like global simulations and observations. Thus, the central pressure deficit is an intensity measure that combines maximum wind speed, storm size, and background rotation rate. This work has significant implications for both fundamental understanding and risk analysis, including why the central pressure better explains historical economic damages than does maximum wind speed.
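The gradient-wind-balance step can be illustrated numerically. Assuming an idealized modified-Rankine azimuthal wind profile (an assumption of this sketch, not one of the paper's models), the central pressure deficit follows by integrating dp/dr = ρ(v²/r + fv) from the storm center outward:

```python
import numpy as np

def pressure_deficit_hpa(v_max=50.0, r_max=40e3, r_outer=800e3,
                         f=5e-5, rho=1.15, alpha=0.5):
    """Surface pressure deficit (hPa) from gradient wind balance,
    dp/dr = rho * (v**2 / r + f * v), integrated over an assumed
    modified-Rankine azimuthal wind profile. The profile shape and all
    parameter values are illustrative assumptions of this sketch."""
    r = np.linspace(1.0, r_outer, 20000)          # radius (m); start off r = 0
    v = np.where(r <= r_max,
                 v_max * r / r_max,               # solid-body rotation core
                 v_max * (r_max / r) ** alpha)    # power-law outer decay
    dpdr = rho * (v ** 2 / r + f * v)             # gradient wind balance (Pa/m)
    deficit_pa = np.sum(0.5 * (dpdr[1:] + dpdr[:-1]) * np.diff(r))  # trapezoid rule
    return deficit_pa / 100.0                     # Pa -> hPa
```

For these illustrative parameters the deficit comes out near 50 hPa, a realistic magnitude, and the second velocity scale named in the abstract, half the product of the Coriolis parameter and outer storm size (f × r_outer / 2), is 20 m/s here, entering through the fv term of the integral.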
Lerner, Heather R. L.; Johnson, Jeff A.; Lindsay, Alec R.; Kiff, Lloyd F.; Mindell, David P.
2009-01-01
Background: The harpy eagle (Harpia harpyja) is the largest Neotropical bird of prey and is threatened by human persecution and habitat loss and fragmentation. Current conservation strategies include local education, captive rearing and reintroduction, and protection or creation of trans-national habitat blocks and corridors. Baseline genetic data prior to reintroduction of captive-bred stock are essential for guiding such efforts but have not been gathered previously. Methodology/Findings: We assessed levels of genetic diversity, population structure and demographic history for harpy eagles using samples collected throughout a large portion of their geographic distribution in Central America (n = 32) and South America (n = 31). Based on 417 bp of mitochondrial control region sequence data, relatively high levels of haplotype and nucleotide diversity were estimated for both Central and South America, although haplotype diversity was significantly higher for South America. Historical restriction of gene flow across the Andes (i.e. between our Central and South American subgroups) is supported by coalescent analyses, the haplotype network and significant FST values; however, reciprocally monophyletic lineages do not correspond to geographical locations in maximum likelihood analyses. A sudden population expansion for South America is indicated by a mismatch distribution analysis, and further supported by significant (p<0.05) negative values of Fu and Li's D and F, and Fu's FS. This expansion, estimated at approximately 60 000 years BP (99 000–36 000 years BP 95% CI), encompasses a transition from a warm and dry time period prior to 50 000 years BP to an interval of maximum precipitation (50 000–36 000 years BP). Notably, this time period precedes the climatic and habitat changes associated with the last glacial maximum. In contrast, a multimodal distribution of haplotypes was observed for Central America, suggesting either population equilibrium or a recent decline. 
Significance: High levels of mitochondrial genetic diversity in combination with genetic differentiation among subgroups within regions and between regions highlight the importance of local population conservation in order to preserve maximal levels of genetic diversity in this species. Evidence of historically restricted female-mediated gene flow is an important consideration for captive-breeding programs. PMID:19802391
Ehrenfeld, Jesse M; Dexter, Franklin; Rothman, Brian S; Minton, Betty Sue; Johnson, Diane; Sandberg, Warren S; Epstein, Richard H
2013-12-01
When the phase I postanesthesia care unit (PACU) is at capacity, completed cases need to be held in the operating room (OR), causing a "PACU delay." Statistical methods based on historical data can optimize PACU staffing to achieve the least possible labor cost at a given service level. A decision support process to alert PACU charge nurses that the PACU is at or near maximum census might be effective in lessening the incidence of delays and reducing over-utilized OR time, but only if alerts are timely (i.e., neither too late nor too early to act upon) and the PACU slot can be cleared quickly. We evaluated the maximum potential benefit of such a system, using assumptions deliberately biased toward showing utility. We extracted 3 years of electronic PACU data from a tertiary care medical center. At this hospital, PACU admissions were limited by neither inadequate PACU staffing nor insufficient PACU beds. We developed a model decision support system that simulated alerts to the PACU charge nurse. PACU census levels were reconstructed from the data at a 1-minute level of resolution and used to evaluate if subsequent delays would have been prevented by such alerts. The model assumed there was always a patient ready for discharge and an available hospital bed. The time from each alert until the maximum census was exceeded ("alert lead time") was determined. Alerts were judged to have utility if the alert lead time fell between various intervals from 15 or 30 minutes to 60, 75, or 90 minutes after triggering. In addition, utility for reducing over-utilized OR time was assessed using the model by determining if 2 patients arrived within 5 to 15 minutes of each other when the PACU census was at 1 patient less than the maximum census. At most, 23% of alerts arrived 30 to 60 minutes prior to the admission that resulted in the PACU exceeding the specified maximum capacity. When the notification window was extended to 15 to 90 minutes, the maximum utility was <50%. 
At most, 45% of alerts potentially would have resulted in reassigning the last available PACU slot to 1 OR versus another within 15 minutes of the original assignment. Despite multiple biases that favored effectiveness, the maximum potential benefit of a decision support system to mitigate PACU delays on the day of surgery was below the 70% minimum threshold for utility of automated decision support messages, previously established via meta-analysis. Neither reduction in PACU delays nor reassigning promised PACU slots based on reducing over-utilized OR time was realized sufficiently to warrant further development of the system. Based on these results, the only evidence-based method of reducing PACU delays is to adjust PACU staffing and staff scheduling using computational algorithms to match the historical workload (e.g., as developed in 2001).
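The census-reconstruction and lead-time logic can be sketched in a few lines. This is a toy illustration of the evaluation described (minute-level census from admission/discharge times, then the lead time from an at-capacity alert to the first exceedance), not the authors' decision support system; the stay data and capacity are invented.

```python
def census_series(stays, horizon):
    """Minute-resolution PACU census from (admit, discharge) minute pairs."""
    census = [0] * horizon
    for admit, discharge in stays:
        for t in range(admit, min(discharge, horizon)):
            census[t] += 1
    return census

def first_alert_lead_time(census, max_census):
    """Minutes from the first 'at maximum census' alert to the first minute
    the maximum census is exceeded; None if either event never happens."""
    alert_t = next((t for t, c in enumerate(census) if c >= max_census), None)
    if alert_t is None:
        return None
    exceed_t = next((t for t in range(alert_t, len(census))
                     if census[t] > max_census), None)
    return None if exceed_t is None else exceed_t - alert_t

# Four overlapping stays; a capacity of 3 is reached at t=20, exceeded at t=30.
stays = [(0, 60), (10, 70), (20, 80), (30, 90)]
lead = first_alert_lead_time(census_series(stays, 120), max_census=3)
```

An alert would then count as useful only if `lead` fell inside an actionable window (e.g., the study's 30- to 60-minute interval); here the lead time is only 10 minutes, so the alert arrives too late to act upon.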
Dealing with uncertainty in the probability of overtopping of a flood mitigation dam
NASA Astrophysics Data System (ADS)
Michailidi, Eleni Maria; Bacchi, Baldassare
2017-05-01
In recent years, copula multivariate functions were used to model, probabilistically, the most important variables of flood events: discharge peak, flood volume and duration. However, in most of the cases, the sampling uncertainty, from which small-sized samples suffer, is neglected. In this paper, considering a real reservoir controlled by a dam as a case study, we apply a structure-based approach to estimate the probability of reaching specific reservoir levels, taking into account the key components of an event (flood peak, volume, hydrograph shape) and of the reservoir (rating curve, volume-water depth relation). Additionally, we improve information about the peaks from historical data and reports through a Bayesian framework, allowing the incorporation of supplementary knowledge from different sources and its associated error. As shown here, the extra information can result in a very different inferred parameter set, and consequently this is reflected as a strong variability of the reservoir level associated with a given return period. Most importantly, the sampling uncertainty is accounted for in both cases (single-site and multi-site with historical information scenarios), and Monte Carlo confidence intervals for the maximum water level are calculated. It is shown that water levels of specific return periods in many cases overlap, making risk assessment misleading unless confidence intervals are provided.
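The idea of attaching Monte Carlo confidence intervals to a return level can be sketched with far simpler machinery than the paper's copula/Bayesian framework. The stand-in below fits a Gumbel distribution to annual maxima by the method of moments and brackets the 100-year level with a parametric bootstrap; the distribution choice, sample, and all names are assumptions for illustration.

```python
import math
import random
import statistics

def gumbel_fit(sample):
    """Method-of-moments Gumbel fit: mean = mu + 0.5772*beta, sd = pi*beta/sqrt(6)."""
    beta = statistics.stdev(sample) * math.sqrt(6.0) / math.pi
    mu = statistics.fmean(sample) - 0.5772 * beta
    return mu, beta

def return_level(mu, beta, T):
    """Level exceeded on average once every T years."""
    return mu - beta * math.log(-math.log(1.0 - 1.0 / T))

def bootstrap_ci(sample, T, n_boot=1000, seed=42):
    """Parametric-bootstrap 95% interval for the T-year level."""
    rng = random.Random(seed)
    mu, beta = gumbel_fit(sample)
    reps = []
    for _ in range(n_boot):
        resample = [mu - beta * math.log(-math.log(rng.random()))
                    for _ in sample]
        reps.append(return_level(*gumbel_fit(resample), T))
    reps.sort()
    return reps[int(0.025 * n_boot)], reps[int(0.975 * n_boot)]

# Synthetic 60-year record of annual maximum water levels (Gumbel, mu=5, beta=1).
rng = random.Random(1)
record = [5.0 - 1.0 * math.log(-math.log(rng.random())) for _ in range(60)]
est = return_level(*gumbel_fit(record), 100)
lo, hi = bootstrap_ci(record, 100)
```

With only 60 years of data the interval is wide relative to the point estimate, which is exactly why the abstract argues that quoting a single return level without confidence bounds can be deceptive.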
Attard, Catherine R M; Beheregaray, Luciano B; Jenner, K Curt S; Gill, Peter C; Jenner, Micheline-Nicole M; Morrice, Margaret G; Teske, Peter R; Möller, Luciana M
2015-05-01
Unusually low genetic diversity can be a warning of an urgent need to mitigate causative anthropogenic activities. However, current low levels of genetic diversity in a population could also be due to natural historical events, including recent evolutionary divergence, or long-term persistence at a small population size. Here, we determine whether the relatively low genetic diversity of pygmy blue whales (Balaenoptera musculus brevicauda) in Australia is due to natural causes or overexploitation. We apply recently developed analytical approaches in the largest genetic dataset ever compiled to study blue whales (297 samples collected after whaling and representing lineages from Australia, Antarctica and Chile). We find that low levels of genetic diversity in Australia are due to a natural founder event from Antarctic blue whales (Balaenoptera musculus intermedia) that occurred around the Last Glacial Maximum, followed by evolutionary divergence. Historical climate change has therefore driven the evolution of blue whales into genetically, phenotypically and behaviourally distinct lineages that will likely be influenced by future climate change. © 2015 The Author(s) Published by the Royal Society. All rights reserved.
Reliability of reservoir firm yield determined from the historical drought of record
Archfield, S.A.; Vogel, R.M.
2005-01-01
The firm yield of a reservoir is typically defined as the maximum yield that could have been delivered without failure during the historical drought of record. In the future, reservoirs will experience droughts that are either more or less severe than the historical drought of record. The question addressed here is what the reliability of such systems will be when operated at the firm yield. To address this question, we examine the reliability of 25 hypothetical reservoirs sited across five locations in the central and western United States. These locations provided a continuous 756-month streamflow record spanning the same time interval. The firm yield of each reservoir was estimated from the historical drought of record at each location. To determine the steady-state monthly reliability of each firm-yield estimate, 12,000-month synthetic records were generated using the moving-blocks bootstrap method. Bootstrapping was repeated 100 times for each reservoir to obtain an average steady-state monthly reliability R, the number of months the reservoir did not fail divided by the total months. Values of R were greater than 0.99 for 60 percent of the study reservoirs; the other 40 percent ranged from 0.95 to 0.98. Estimates of R were highly correlated with both the level of development (ratio of firm yield to average streamflow) and average lag-1 monthly autocorrelation. Together these two predictors explained 92 percent of the variability in R, with the level of development alone explaining 85 percent of the variability. Copyright ASCE 2005.
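The bootstrap-and-simulate procedure described above can be sketched directly. This is a schematic reconstruction under simplifying assumptions (a lumped mass-balance reservoir with a single fixed monthly yield, 20 replicates instead of 100, invented streamflows), not the authors' code.

```python
import random

def moving_blocks_bootstrap(series, out_len, block_len, rng):
    """Synthetic record built from random contiguous blocks of the original,
    preserving short-term (e.g. month-to-month) autocorrelation within blocks."""
    out = []
    while len(out) < out_len:
        start = rng.randrange(len(series) - block_len + 1)
        out.extend(series[start:start + block_len])
    return out[:out_len]

def monthly_reliability(inflows, capacity, firm_yield):
    """R = fraction of months a lumped reservoir delivers the full yield."""
    storage, ok = capacity, 0
    for q in inflows:
        storage = min(storage + q, capacity)   # inflow; spill above capacity
        release = min(firm_yield, storage)
        storage -= release
        ok += release >= firm_yield
    return ok / len(inflows)

# Average steady-state R over bootstrap replicates of a 12,000-month record,
# resampled from a 756-month "historical" record as in the study design.
rng = random.Random(7)
record = [rng.lognormvariate(3.0, 0.6) for _ in range(756)]  # toy streamflows
R = sum(monthly_reliability(moving_blocks_bootstrap(record, 12000, 24, rng),
                            capacity=400.0, firm_yield=18.0)
        for _ in range(20)) / 20
```

The block length (24 months here) controls how much serial dependence survives the resampling, which matters because the study found lag-1 monthly autocorrelation to be a strong predictor of R.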
Cohn, T.A.; Lane, W.L.; Baier, W.G.
1997-01-01
This paper presents the expected moments algorithm (EMA), a simple and efficient method for incorporating historical and paleoflood information into flood frequency studies. EMA can utilize three types of at-site flood information: systematic stream gage record; information about the magnitude of historical floods; and knowledge of the number of years in the historical period when no large flood occurred. EMA employs an iterative procedure to compute method-of-moments parameter estimates. Initial parameter estimates are calculated from systematic stream gage data. These moments are then updated by including the measured historical peaks and the expected moments, given the previously estimated parameters, of the below-threshold floods from the historical period. The updated moments result in new parameter estimates, and the last two steps are repeated until the algorithm converges. Monte Carlo simulations compare EMA, Bulletin 17B's [United States Water Resources Council, 1982] historically weighted moments adjustment, and maximum likelihood estimators when fitting the three parameters of the log-Pearson type III distribution. These simulations demonstrate that EMA is more efficient than the Bulletin 17B method, and that it is nearly as efficient as maximum likelihood estimation (MLE). The experiments also suggest that EMA has two advantages over MLE when dealing with the log-Pearson type III distribution: It appears that EMA estimates always exist and that they are unique, although neither result has been proven. EMA can be used with binomial or interval-censored data and with any distributional family amenable to method-of-moments estimation.
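The EMA iteration can be illustrated with a deliberately simplified stand-in: a normal model instead of log-Pearson type III, for which the expected moments of below-threshold observations have closed forms. The structure (initial moments from the systematic record, then iterative updating with measured historical peaks and the expected moments of below-threshold years) follows the abstract; the distribution, data, and names are assumptions for illustration.

```python
import math
import random

def phi(x):
    return math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)

def Phi(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def ema_normal(systematic, hist_peaks, n_below, threshold,
               tol=1e-10, max_iter=500):
    """Expected-moments iteration for a normal flood model."""
    mu = sum(systematic) / len(systematic)
    var = sum((x - mu) ** 2 for x in systematic) / len(systematic)
    n_tot = len(systematic) + len(hist_peaks) + n_below
    for _ in range(max_iter):
        sd = math.sqrt(var)
        a = (threshold - mu) / sd
        lam = phi(a) / Phi(a)                   # inverse Mills ratio, lower tail
        m1 = mu - sd * lam                      # E[X | X < threshold]
        v1 = var * (1.0 - a * lam - lam * lam)  # Var[X | X < threshold]
        m2 = v1 + m1 * m1                       # E[X**2 | X < threshold]
        s1 = sum(systematic) + sum(hist_peaks) + n_below * m1
        s2 = (sum(x * x for x in systematic)
              + sum(x * x for x in hist_peaks) + n_below * m2)
        mu_new = s1 / n_tot
        var_new = s2 / n_tot - mu_new * mu_new
        done = abs(mu_new - mu) < tol and abs(var_new - var) < tol
        mu, var = mu_new, var_new
        if done:
            break
    return mu, math.sqrt(var)

# 100 years of normal(10, 2) "floods": 40 gaged years, plus a 60-year
# historical period in which only peaks above 14 were individually recorded.
rng = random.Random(1)
years = [rng.gauss(10.0, 2.0) for _ in range(100)]
systematic, historical = years[60:], years[:60]
hist_peaks = [x for x in historical if x > 14.0]
mu_hat, sd_hat = ema_normal(systematic, hist_peaks,
                            60 - len(hist_peaks), 14.0)
```

With no historical information (empty peaks, zero below-threshold years, very high threshold) the iteration reduces to ordinary sample moments, which is the method-of-moments starting point the paper describes.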
Averill, Colin; Waring, Bonnie G; Hawkes, Christine V
2016-05-01
Soil moisture constrains the activity of decomposer soil microorganisms, and in turn the rate at which soil carbon returns to the atmosphere. While increases in soil moisture are generally associated with increased microbial activity, historical climate may constrain current microbial responses to moisture. However, it is not known if variation in the shape and magnitude of microbial functional responses to soil moisture can be predicted from historical climate at regional scales. To address this problem, we measured soil enzyme activity at 12 sites across a broad climate gradient spanning 442-887 mm mean annual precipitation. Measurements were made eight times over 21 months to maximize sampling during different moisture conditions. We then fit saturating functions of enzyme activity to soil moisture and extracted half saturation and maximum activity parameter values from model fits. We found that 50% of the variation in maximum activity parameters across sites could be predicted by 30-year mean annual precipitation, an indicator of historical climate, and that the effect is independent of variation in temperature, soil texture, or soil carbon concentration. Based on this finding, we suggest that variation in the shape and magnitude of soil microbial response to soil moisture due to historical climate may be remarkably predictable at regional scales, and this approach may extend to other systems. If historical contingencies on microbial activities prove to be persistent in the face of environmental change, this approach also provides a framework for incorporating historical climate effects into biogeochemical models simulating future global change scenarios. © 2016 John Wiley & Sons Ltd.
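The "half saturation and maximum activity" parameterization described can be fit with very little machinery. The sketch below assumes a Michaelis-Menten-style saturating curve and a brute-force grid search; the fitting method, data, and names are illustrative stand-ins, not the authors' procedure.

```python
def saturating(m, v_max, k_half):
    """Saturating activity response: v_max * m / (k_half + m)."""
    return v_max * m / (k_half + m)

def fit_saturating(moisture, activity, v_grid, k_grid):
    """Least-squares (v_max, k_half) by grid search over candidate values."""
    best = None
    for v in v_grid:
        for k in k_grid:
            sse = sum((a - saturating(m, v, k)) ** 2
                      for m, a in zip(moisture, activity))
            if best is None or sse < best[0]:
                best = (sse, v, k)
    return best[1], best[2]

# Noise-free synthetic site: maximum activity 8.0, half-saturation 0.2.
moisture = [0.05, 0.1, 0.2, 0.3, 0.4, 0.6]
activity = [saturating(m, 8.0, 0.2) for m in moisture]
v_hat, k_hat = fit_saturating(moisture, activity,
                              [i * 0.5 for i in range(1, 41)],
                              [i * 0.05 for i in range(1, 21)])
```

The paper's regional claim then amounts to regressing the fitted `v_hat` values from many sites against 30-year mean annual precipitation.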
NASA Astrophysics Data System (ADS)
Dong, Sheng; Chi, Kun; Zhang, Qiyi; Zhang, Xiangdong
2012-03-01
Compared with traditional real-time forecasting, this paper proposes a Grey Markov Model (GMM) to forecast the maximum water levels at hydrological stations in the estuary area. The GMM combines the Grey System and Markov theory into a higher precision model. The GMM takes advantage of the Grey System to predict the trend values and uses the Markov theory to forecast fluctuation values, and thus gives forecast results involving two aspects of information. The procedure for forecasting annual maximum water levels with the GMM contains five main steps: 1) establish the GM(1,1) model based on the data series; 2) estimate the trend values; 3) establish a Markov Model based on the relative error series; 4) modify the relative errors from step 2 to obtain second-order estimates; 5) compare the results with measured data and estimate the accuracy. The historical water level records (from 1960 to 1992) at Yuqiao Hydrological Station in the estuary area of the Haihe River near Tianjin, China are utilized to calibrate and verify the proposed model according to the above steps. Each 25-year span of data is treated as one hydrological sequence. Eight groups of simulated results show reasonable agreement between the predicted values and the measured data. The GMM is also applied to the 10 other hydrological stations in the same estuary. The forecast results for all of the hydrological stations are good or acceptable. The feasibility and effectiveness of this new forecasting model are demonstrated in this paper.
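Steps 1 and 2 of the procedure, the GM(1,1) trend model, can be sketched as follows; the Markov error-correction of steps 3-4 is omitted, and the decaying toy series is an invented stand-in for a station record.

```python
import math

def gm11_fit(x0):
    """Fit GM(1,1) to a positive series x0: whitening equation
    dx1/dt + a*x1 = b, where x1 is the accumulated (cumulative-sum) series."""
    n = len(x0)
    x1 = [sum(x0[:k + 1]) for k in range(n)]
    z1 = [0.5 * (x1[k] + x1[k - 1]) for k in range(1, n)]  # background values
    # Least squares for x0[k] = -a*z1[k-1] + b over k = 1..n-1.
    m = n - 1
    sz = sum(z1)
    szz = sum(z * z for z in z1)
    sy = sum(x0[1:])
    szy = sum(z * y for z, y in zip(z1, x0[1:]))
    det = m * szz - sz * sz
    a = (sz * sy - m * szy) / det
    b = (szz * sy - sz * szy) / det
    return a, b

def gm11_predict(x0, a, b, k):
    """Trend estimate of the original series at index k (0-based)."""
    x1_hat = lambda j: (x0[0] - b / a) * math.exp(-a * j) + b / a
    return x0[0] if k == 0 else x1_hat(k) - x1_hat(k - 1)

# Toy water-level trend: a geometrically decaying series fits GM(1,1) closely.
levels = [100.0 * 0.9 ** k for k in range(8)]
a, b = gm11_fit(levels)
trend = [gm11_predict(levels, a, b, k) for k in range(8)]
```

In the full GMM, the relative errors between `trend` and the measured series would then be binned into states and forecast with a Markov transition matrix.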
Donner, Simon D
2011-07-01
Over the past 30 years, warm thermal disturbances have become commonplace on coral reefs worldwide. These periods of anomalous sea surface temperature (SST) can lead to coral bleaching, a breakdown of the symbiosis between the host coral and symbiotic dinoflagellates that reside in coral tissue. The onset of bleaching is typically predicted to occur when the SST exceeds a local climatological maximum by 1°C for a month or more. However, recent evidence suggests that the threshold at which bleaching occurs may depend on thermal history. This study uses global SST data sets (HadISST and NOAA AVHRR) and mass coral bleaching reports (from Reefbase) to examine the effect of historical SST variability on the accuracy of bleaching prediction. Two variability-based bleaching prediction methods are developed from global analysis of seasonal and interannual SST variability. The first method employs a local bleaching threshold derived from the historical variability in maximum annual SST to account for spatial variability in past thermal disturbance frequency. The second method uses a different formula to estimate the local climatological maximum to account for the low seasonality of SST in the tropics. The new prediction methods are tested against the common globally fixed threshold method using the observed bleaching reports. The results show that estimating the bleaching threshold from local historical SST variability delivers the highest predictive power, but also a higher rate of Type I errors. The second method has the lowest predictive power globally, though regional analysis suggests that it may be applicable in equatorial regions. The historical data analysis suggests that the bleaching threshold may have appeared to be constant globally because the magnitude of interannual variability in maximum SST is similar for many of the world's coral reef ecosystems. 
For example, the results show that an SST anomaly of 1°C is equivalent to 1.73-2.94 standard deviations of the maximum monthly SST for two-thirds of the world's coral reefs. Coral reefs in the few regions that experience anomalously high interannual SST variability like the equatorial Pacific could prove critical to understanding how coral communities acclimate or adapt to frequent and/or severe thermal disturbances.
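The first variability-based method described above can be sketched in a few lines: replace the fixed "climatological maximum + 1°C" rule with the climatological maximum plus a multiple of the historical standard deviation of annual maximum SST. The reef data below are invented for illustration.

```python
import statistics

def variability_based_threshold(annual_max_sst, n_sd=1.0):
    """Local bleaching threshold from historical SST variability: the
    climatological maximum plus n_sd standard deviations of the annual
    maximum, instead of a globally fixed 'maximum + 1 deg C' rule."""
    return (statistics.fmean(annual_max_sst)
            + n_sd * statistics.stdev(annual_max_sst))

# A low-variability reef gets a tighter threshold than a high-variability one,
# even though both share the same climatological maximum (29.1 deg C here).
quiet_reef = [29.0, 29.4, 28.8, 29.2, 29.1]
variable_reef = [27.9, 30.3, 28.2, 30.1, 29.0]
t_quiet = variability_based_threshold(quiet_reef)
t_var = variability_based_threshold(variable_reef)
```

A 29.5°C event would then be flagged as a likely bleaching event at the quiet reef but not at the variable one, which is the behavior the abstract's local-threshold method is designed to capture.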
NASA Technical Reports Server (NTRS)
Spruce, Joseph P.; Hargrove, William; Gasser, Jerry; Smoot, James; Kuper, Philip D.
2014-01-01
This presentation discusses MODIS NDVI change detection methods and products used in the ForWarn Early Warning System (EWS) for near real time (NRT) recognition and tracking of regionally evident forest disturbances throughout the conterminous US (CONUS). The latter has provided NRT forest change products to the forest health protection community since 2010, using temporally processed MODIS Aqua and Terra NDVI time series data, and currently computes and posts 6 different forest change products for CONUS every 8 days. Multiple change products are required to improve detectability and to more fully assess the nature of apparent disturbances. Each type of forest change product reports per-pixel percent change in NDVI for a given 24 day interval, comparing current NDVI versus a given historical baseline NDVI. eMODIS 7-day expedited data and MODIS MOD13 data are used to obtain current and historical NDVIs, respectively. Historical NDVI data are processed with 1) the Time Series Product Tool (TSPT) and 2) the Phenological Parameters Estimation Tool (PPET) software. While each change product employs maximum value compositing (MVC) of NDVI, the design of specific products primarily differs in terms of the historical baseline. The three main change products use either 1, 3, or all previous years of MVC NDVI as a baseline. Another product uses an Adaptive Length Compositing (ALC) version of MVC to derive an alternative current NDVI that is the freshest quality NDVI as opposed to merely the MVC NDVI across a 24 day time frame. The ALC approach can improve detection speed by 8 to 16 days. ForWarn also includes 2 change products that improve detectability of forest disturbances in the presence of climatic fluctuations, especially in the spring and fall. One compares current MVC NDVI to the zonal maximum under-the-curve NDVI per pheno-region cluster class, considering all previous years in the MODIS record. The other compares current maximum NDVI to the mean of maximum NDVI for all previous MODIS years.
NASA Astrophysics Data System (ADS)
Aller, D.; Hohl, R.; Mair, F.; Schiesser, H.-H.
2003-04-01
Extreme hailfall can cause massive damage to building structures. For the insurance and reinsurance industry it is essential to estimate the probable maximum hail loss of their portfolio. The probable maximum loss (PML) is usually defined with a return period of 1 in 250 years. Statistical extrapolation has a number of weaknesses, as historical hail-loss data are usually available for only a few events, while insurance portfolios change over the years. At the moment, footprints are derived from historical hail damage data. These footprints (mean damage patterns) are then moved over a portfolio of interest to create scenario losses. However, damage patterns of past events are based on the specific portfolio that was damaged during that event and can be considerably different from the current spread of risks. A new method for estimating the probable maximum hail loss to a building portfolio is presented. It is shown that footprints derived from historical damages are different from footprints of hail kinetic energy calculated from radar reflectivity measurements. Based on the relationship between radar-derived hail kinetic energy and hail damage to buildings, scenario losses can be calculated. A systematic motion of the hail kinetic energy footprints over the underlying portfolio creates a loss set. It is difficult to assign return periods to losses calculated by moving historically derived damage footprints over a portfolio. To determine the return periods of the hail kinetic energy footprints over Switzerland, 15 years of radar measurements and 53 years of agricultural hail losses are available. Based on these data, return periods of several types of hailstorms were derived for different regions in Switzerland. The loss set is combined with the return periods of the event set to obtain an exceedance frequency curve, which can be used to derive the PML.
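The final step, combining a loss set with event return periods into an exceedance frequency curve and reading off the PML, can be sketched directly. The event losses and frequencies below are invented; only the bookkeeping follows the abstract's definition of the PML as the loss at a 1-in-250-year exceedance frequency.

```python
def exceedance_curve(losses, frequencies):
    """Annual frequency of exceeding each loss level, from an event set of
    (loss, annual frequency) pairs."""
    events = sorted(zip(losses, frequencies), reverse=True)
    curve, cum = [], 0.0
    for loss, freq in events:
        cum += freq
        curve.append((loss, cum))
    return curve  # descending loss, ascending exceedance frequency

def pml(curve, return_period):
    """Largest loss whose exceedance frequency is at least 1/return_period."""
    target = 1.0 / return_period
    for loss, freq in curve:  # curve is ordered by descending loss
        if freq >= target:
            return loss
    return 0.0

# Toy event set: four scenario losses (arbitrary units) with annual frequencies.
curve = exceedance_curve([100.0, 50.0, 20.0, 5.0],
                         [0.001, 0.004, 0.02, 0.1])
pml_250 = pml(curve, 250)
```

With these numbers the 1-in-250-year PML is 50 units: the 100-unit event alone is rarer than 1/250 per year, but losses of at least 50 units occur with combined frequency 0.005 per year.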
Decadal-scale coastal cliff retreat in southern and central California
NASA Astrophysics Data System (ADS)
Young, Adam P.
2018-01-01
Airborne LiDAR data collected in 1998 and 2009-2010 were used to measure coastal cliff erosion and retreat between the Mexico/California border and Bodega Head, California. Cliff erosion was detected along 44% of the 595 km of shoreline evaluated, while the remaining cliffs were relatively stable. The mean cliff top retreat rate was 0.12 m/yr, while mean retreat averaged over the entire cliff face was 0.04 m/yr. The maximum cliff top and face retreat rates were 4.2 and 3.8 m/yr, respectively. Historical (1930s to 1998) and recent retreat rates were significantly inversely correlated for areas with large historical or recent cliff retreat, such that locations with elevated historical retreat had low levels of recent retreat and locations with elevated recent retreat were preceded by low rates of historical retreat. The strength of this inverse correlation increased with cliff change magnitudes, up to an r2 of 0.91 for cliff top retreat rates > 2.9 m/yr. Mean recent retreat rates were 52-83% lower than mean historical retreat rates. Although beaches can protect cliffs against wave-driven erosion, cliffs fronted by beaches retreated 49% more than cliffs without beaches. On average, unarmored cliff faces retreated 0.05 m/yr between 1998 and 2009-2010, about three times faster than artificially armored cliffs. Alongshore metrics of wave-cliff impact, precipitation, and cliff hardness were generally not well correlated with recent cliff changes. A cliff hazard metric is used to detect cliff steepening and areas prone to future cliff top failures.
Flood Frequency Analysis With Historical and Paleoflood Information
NASA Astrophysics Data System (ADS)
Stedinger, Jery R.; Cohn, Timothy A.
1986-05-01
An investigation is made of flood quantile estimators which can employ "historical" and paleoflood information in flood frequency analyses. Two categories of historical information are considered: "censored" data, where the magnitudes of historical flood peaks are known; and "binomial" data, where only threshold exceedance information is available. A Monte Carlo study employing the two-parameter lognormal distribution shows that maximum likelihood estimators (MLEs) can extract the equivalent of an additional 10-30 years of gage record from a 50-year period of historical observation. The MLE routines are shown to be substantially better than an adjusted-moment estimator similar to the one recommended in Bulletin 17B of the United States Water Resources Council Hydrology Committee (1982). The MLE methods performed well even when floods were drawn from other than the assumed lognormal distribution.
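The likelihood structure described, exact observations plus threshold-exceedance ("binomial") information, can be sketched for the lognormal case. The grid-search fit, data, threshold, and names below are assumptions for illustration; the paper's MLE routines are the real method.

```python
import math
import random

def log_likelihood(mu, sigma, gaged, hist_peaks, n_below, threshold):
    """Lognormal flood-model log likelihood combining exactly observed peaks
    (gaged record plus known historical floods) with 'binomial' information:
    n_below historical years known only to lie below the perception threshold."""
    ll = 0.0
    for q in gaged + hist_peaks:
        z = (math.log(q) - mu) / sigma
        ll += -math.log(q * sigma * math.sqrt(2.0 * math.pi)) - 0.5 * z * z
    z_t = (math.log(threshold) - mu) / sigma
    ll += n_below * math.log(0.5 * (1.0 + math.erf(z_t / math.sqrt(2.0))))
    return ll

def fit_mle(gaged, hist_peaks, n_below, threshold, mu_grid, sigma_grid):
    """Crude grid-search MLE; a stand-in for a proper optimizer."""
    return max(((m, s) for m in mu_grid for s in sigma_grid),
               key=lambda p: log_likelihood(p[0], p[1], gaged, hist_peaks,
                                            n_below, threshold))

# 50-year gaged record plus a 100-year historical period with threshold 16:
# only historical floods above the threshold are individually known.
rng = random.Random(3)
gaged = [math.exp(rng.gauss(2.0, 0.5)) for _ in range(50)]
historical = [math.exp(rng.gauss(2.0, 0.5)) for _ in range(100)]
hist_peaks = [q for q in historical if q > 16.0]
mu_hat, sigma_hat = fit_mle(gaged, hist_peaks, 100 - len(hist_peaks), 16.0,
                            [1.5 + 0.02 * i for i in range(51)],
                            [0.30 + 0.02 * i for i in range(21)])
```

The below-threshold term is what lets a 100-year historical period add information even when no individual flood magnitudes from it survive, which is the effect the Monte Carlo study quantifies.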
Nuclear Power’s Role in Generating Electricity
2008-05-01
[Excerpt garbled in extraction; recoverable items: Figure 2-1, "Historical Volatility in Fuel Prices" (percentage change; Congressional Budget Office, based on data from the Energy Information Administration); Figure 2-2, "Carbon Dioxide Emissions of Base-Load Technologies for Generating Electricity"; and a statement that generating options using a given fuel would operate at their maximum capacity factor.]
Petersen, M.D.; Cramer, C.H.; Reichle, M.S.; Frankel, A.D.; Hanks, T.C.
2000-01-01
We examine the difference between expected earthquake rates inferred from the historical earthquake catalog and the geologic data that were used to develop the consensus seismic source characterization for the state of California [California Department of Conservation, Division of Mines and Geology (CDMG) and U.S. Geological Survey (USGS); Petersen et al., 1996; Frankel et al., 1996]. On average, the historic earthquake catalog and the seismic source model both indicate about one M 6 or greater earthquake per year in the state of California. However, the overall rates of earthquakes with magnitudes (M) between 6 and 7 in this seismic source model are higher, by at least a factor of 2, than the mean historic earthquake rates for both southern and northern California. The earthquake rate discrepancy results from a seismic source model that includes earthquakes with characteristic (maximum) magnitudes that are primarily between M 6.4 and 7.1. Many of these faults are interpreted to accommodate high strain rates from geologic and geodetic data but have not ruptured in large earthquakes during historic time. Our sensitivity study indicates that the rate differences between magnitudes 6 and 7 can be reduced by adjusting the magnitude-frequency distribution of the source model to reflect more characteristic behavior, by decreasing the moment rate available for seismogenic slip along faults, by increasing the maximum magnitude of the earthquake on a fault, or by decreasing the maximum magnitude of the background seismicity. However, no single parameter can be adjusted, consistent with scientific consensus, to eliminate the earthquake rate discrepancy. Applying a combination of these parametric adjustments yields an alternative earthquake source model that is more compatible with the historic data. 
The 475-year return period hazard for peak ground and 1-sec spectral acceleration resulting from this alternative source model differs from the hazard resulting from the standard CDMG-USGS model by less than 10% across most of California but is higher (generally about 10% to 30%) within 20 km from some faults.
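The magnitude-band bookkeeping behind the rate comparison can be made concrete with a Gutenberg-Richter sketch. The a- and b-values below are invented, not California's; the point is only how the rate in the M 6-7 band follows from a cumulative magnitude-frequency law.

```python
def gr_rate(a, b, m):
    """Cumulative Gutenberg-Richter rate N(>=m) = 10**(a - b*m), events/yr."""
    return 10.0 ** (a - b * m)

def rate_in_band(a, b, m_lo, m_hi):
    """Annual rate of events with m_lo <= M < m_hi."""
    return gr_rate(a, b, m_lo) - gr_rate(a, b, m_hi)

# Toy statewide model tuned so N(M >= 6) is one event per year (b = 1),
# matching the abstract's "about one M 6 or greater earthquake per year."
annual_m6_to_7 = rate_in_band(a=6.0, b=1.0, m_lo=6.0, m_hi=7.0)
```

A source model that instead concentrates its moment in characteristic M 6.4-7.1 events raises the rate in this band relative to a pure Gutenberg-Richter model, which is the factor-of-2 discrepancy the abstract describes.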
Groundwater-level trends in the U.S. glacial aquifer system, 1964-2013
Hodgkins, Glenn A.; Dudley, Robert W.; Nielsen, Martha G.; Renard, Benjamin; Qi, Sharon L.
2017-01-01
The glacial aquifer system in the United States is a major source of water supply, but previous work on historical groundwater trends across the system is lacking. Trends in annual minimum, mean, and maximum groundwater levels for 205 monitoring wells were analyzed across three regions of the system (East, Central, West Central) for four time periods: 1964-2013, 1974-2013, 1984-2013, and 1994-2013. Trends were computed separately for wells in the glacial aquifer system with low potential for human influence on groundwater levels and ones with high potential influence from activities such as groundwater pumping. Generally, there were more wells with significantly increasing groundwater levels (levels closer to ground surface) than wells with significantly decreasing levels. The highest numbers of significant increases for all four time periods were with annual minimum and/or mean levels. There were many more wells with significant increases from 1964 to 2013 than from more recent periods, consistent with low precipitation in the 1960s. Overall there were low numbers of wells with significantly decreasing trends regardless of time period considered; the highest number of these was generally for annual minimum groundwater levels at wells with likely human influence. There were substantial differences in the number of wells with significant groundwater-level trends over time, depending on whether the historical time series are assumed to be independent, have short-term persistence, or have long-term persistence. Mean annual groundwater levels have significant lag-one-year autocorrelation at 26.0% of wells in the East region, 65.4% of wells in the Central region, and 100% of wells in the West Central region. Annual precipitation across the glacial aquifer system, on the other hand, has significant autocorrelation at only 5.5% of stations, about the percentage expected due to chance.
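The persistence diagnostic mentioned (significant lag-one-year autocorrelation) is easy to compute. The sketch below uses the standard sample estimator and Anderson's approximate one-sided 5% critical value; that critical-value formula is a textbook approximation and not necessarily the test used in the study, and the series are invented.

```python
def lag1_autocorrelation(x):
    """Sample lag-one autocorrelation of an annual series."""
    n = len(x)
    mean = sum(x) / n
    num = sum((x[t] - mean) * (x[t + 1] - mean) for t in range(n - 1))
    den = sum((v - mean) ** 2 for v in x)
    return num / den

def significant_lag1(x, z=1.645):
    """One-sided 5% test using Anderson's approximate critical value,
    r_crit = (-1 + z*sqrt(n - 2)) / (n - 1)."""
    n = len(x)
    return lag1_autocorrelation(x) > (-1.0 + z * (n - 2) ** 0.5) / (n - 1)

# A steadily rising water-level series is strongly persistent; an
# alternating series is not.
trend = [float(v) for v in range(1, 21)]
alternating = [1.0, -1.0] * 10
r_trend = lag1_autocorrelation(trend)
r_alt = lag1_autocorrelation(alternating)
```

Whether a trend test treats such autocorrelation as independence, short-term persistence, or long-term persistence changes its effective sample size, which is why the abstract reports substantially different counts of significant trends under the three assumptions.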
Implications of Extended Solar Minima
NASA Technical Reports Server (NTRS)
Adams, Mitzi L.; Davis, J. M.
2009-01-01
Since the discovery of periodicity in the solar cycle, the historical record of sunspot number has been carefully examined, attempting to make predictions about the next cycle. Much emphasis has been on predicting the maximum amplitude and length of the next cycle. Because current space-based and suborbital instruments are designed to study active phenomena, there is considerable interest in estimating the length and depth of the current minimum. We have developed criteria for the definition of a minimum and applied them to the historical sunspot record starting in 1749. In doing so, we find that 1) the current minimum is not yet unusually long and 2) there is no obvious way of predicting when, using our definition, the current minimum may end. However, by grouping the data into 22-year cycles there is an interesting pattern of extended minima that recurs every fourth or fifth 22-year cycle. A preliminary comparison of this pattern with other records suggests the possibility of a correlation between extended minima and lower levels of solar irradiance.
Walker, Matt J; Stockman, Amy K; Marek, Paul E; Bond, Jason E
2009-01-01
Background Species that are widespread throughout historically glaciated and currently non-glaciated areas provide excellent opportunities to investigate the role of Pleistocene climatic change on the distribution of North American biodiversity. Many studies indicate that northern animal populations exhibit low levels of genetic diversity over geographically widespread areas whereas southern populations exhibit relatively high levels. Recently, paleoclimatic data have been combined with niche-based distribution modeling to locate possible refugia during the Last Glacial Maximum. Using phylogeographic, population, and paleoclimatic data, we show that the distribution and mitochondrial data for the millipede genus Narceus are consistent with classical examples of Pleistocene refugia and subsequent post-glacial population expansion seen in other organismal groups. Results The phylogeographic structure of Narceus reveals a complex evolutionary history with signatures of multiple refugia in southeastern North America followed by two major northern expansions. Evidence for refugial populations was found in the southern Appalachian Mountains and in the coastal plain. The northern expansions appear to have radiated from two separate refugia, one from the Gulf Coastal Plain area and the other from the mid-Atlantic coastal region. Distributional models of Narceus during the Last Glacial Maximum show a dramatic reduction from the current distribution, with suitable ecological zones concentrated along the Gulf and Atlantic coastal plain. We found a strong correlation between these zones of ecological suitability inferred from our paleo-model and levels of genetic diversity derived from phylogenetic and population estimates of genetic structuring. Conclusion The signature of climatic change, during and after the Pleistocene, on the distribution of the millipede genus Narceus is evident in the genetic data presented.
Niche-based historical distribution modeling strengthens the conclusions drawn from the genetic data and proves useful in identifying probable refugia. Such interdisciplinary biogeographic studies provide a comprehensive approach to understanding these processes that generate and maintain biodiversity as well as the framework necessary to explore questions regarding evolutionary diversification of taxa. PMID:19183468
NASA Astrophysics Data System (ADS)
Yang, Xuhong; Jin, Xiaobin; Guo, Beibei; Long, Ying; Zhou, Yinkang
2015-05-01
Constructing a spatially explicit time series of historical cultivated land is of utmost importance for climatic and ecological studies that make use of Land Use and Cover Change (LUCC) data. Some scholars have made efforts to simulate and reconstruct quantitative information on historical land use at the global or regional level based on "top-down" decision-making behaviors, matching overall cropland area to land parcels using land arability and universal parameters. Considering the concentrated distribution of cultivated land and the various environmental and human factors influencing cropland distribution, this study developed a "bottom-up" model of historical cropland based on a constrained Cellular Automaton (CA). Our model takes a historical cropland area as an external variable and the cropland distribution in 1980 as the maximum potential scope of historical cropland. We selected elevation, slope, water availability, average annual precipitation, and distance to the nearest rural settlement as the main factors influencing land use suitability. An available labor force index is then used as a proxy for the amount of cropland to inspect and calibrate these spatial patterns. This paper applies the model to a traditional cultivated region in China and reconstructs its spatial distribution of cropland during six periods. The results are as follows: (1) a constrained CA is well suited for simulating and reconstructing the spatial distribution of cropland in China's traditional cultivated region. (2) By taking the different factors affecting the spatial pattern of cropland into consideration, the partitioning of the research area effectively reflected the spatial differences in cropland evolution rules and rates. (3) Compared with the HYDE datasets, this research has produced higher-resolution Boolean spatial distribution datasets of historical cropland with a more definitive concept of spatial pattern than the fractional format.
We conclude that our reconstruction is closer to the actual change pattern of the traditional cultivated region in China.
Staley, S; Romlein, J; Chacko, A K; Wider, R
2000-05-01
Picture archiving and communication system (PACS) maintenance on an individual site basis has historically been a complex and costly challenge. With the advent of enterprise-wide PACS projects such as the Virtual Radiology Environment (VRE) project, the challenge of a maintenance program with even more complexities has presented itself. The approach of the project management team for the VRE project is not one of reactive maintenance, but one of highly proactive planning and negotiations, in hopes of capitalizing on the economies of scale of an enterprise-wide PACS maintenance program. A proactive maintenance program is one aspect of life-cycle management. As with any capital acquisition, life-cycle management may be used to manage the specific project aspects related to PACS. The purpose of an enterprise-wide warranty and maintenance life-cycle management approach is to maintain PACS at its maximum operational efficiency and utilization levels through a flexible, shared, yet symbiotic relationship between local, regional, and vendor resources. These goals include providing maximum operational performance levels on a local, regional, and enterprise basis, while maintaining acceptable costs and resource utilization levels. This goal must be achieved without negatively impacting point of care activities, regardless of changes to the clinical business environment.
Lundgren, Robert F.; Vining, Kevin C.
2013-01-01
The Turtle Mountain Indian Reservation relies on groundwater supplies to meet the demands of community and economic needs. The U.S. Geological Survey, in cooperation with the Turtle Mountain Band of Chippewa Indians, examined historical groundwater-level and groundwater-quality data for the Fox Hills, Hell Creek, Rolla, and Shell Valley aquifers. The two main sources of water-quality data for groundwater were the U.S. Geological Survey National Water Information System database and the North Dakota State Water Commission database. Data included major ions, trace elements, nutrients, field properties, and physical properties. The Fox Hills and Hell Creek aquifers had few groundwater-quality data. The lack of data limits any detailed assessments that can be made about these aquifers. Data for the Rolla aquifer exist from 1978 through 1980 only. The concentrations of some water-quality constituents exceeded the U.S. Environmental Protection Agency secondary maximum contaminant levels. No samples were analyzed for pesticides and hydrocarbons. Numerous water-quality samples have been obtained from the Shell Valley aquifer. About one-half of the water samples from the Shell Valley aquifer had concentrations of iron, manganese, sulfate, and dissolved solids that exceeded the U.S. Environmental Protection Agency secondary maximum contaminant levels. Overall, the data did not indicate obvious patterns in concentrations.
Methodology and Implications of Maximum Paleodischarge Estimates for Mountain Channels
Pruess, J.; Wohl, E.E.; Jarrett, R.D.
1998-01-01
Historical and geologic records may be used to enhance magnitude estimates for extreme floods along mountain channels, as demonstrated in this study from the San Juan Mountains of Colorado. Historical photographs and local newspaper accounts from the October 1911 flood indicate the likely extent of flooding and damage. A checklist designed to organize and numerically score evidence of flooding was used in 15 field reconnaissance surveys in the upper Animas River valley of southwestern Colorado. Step-backwater flow modeling estimated the discharges necessary to create longitudinal flood bars observed at 6 additional field sites. According to these analyses, maximum unit discharge peaks at approximately 1.3 m3 s-1 km-2 around 2200 m elevation, with decreased unit discharges at both higher and lower elevations. These results (1) are consistent with Jarrett's (1987, 1990, 1993) maximum 2300-m elevation limit for flash-flooding in the Colorado Rocky Mountains, and (2) suggest that current Probable Maximum Flood (PMF) estimates based on a 24-h rainfall of 30 cm at elevations above 2700 m are unrealistically large. The methodology used for this study should be readily applicable to other mountain regions where systematic streamflow records are of short duration or nonexistent. © 1998 Regents of the University of Colorado.
NASA Technical Reports Server (NTRS)
1990-01-01
It is NASA's intent to provide small disadvantaged businesses, including women-owned businesses, historically black colleges and universities, and minority education institutions, the maximum practicable opportunity to receive a fair proportion of NASA prime and subcontracted awards. Annually, NASA will establish socioeconomic procurement goals, including small disadvantaged business goals, with a target of reaching the eight percent level by the end of FY 1994. The NASA Associate Administrators, who are responsible for the programs at the various NASA Centers, will be held accountable for full implementation of the socioeconomic procurement plans. Various aspects of this plan, including its history, are discussed.
Jeanine M. Rhemtulla; David J. Mladenoff; Murray K. Clayton
2009-01-01
Historical land use can influence forest species composition and structure for centuries after direct use has ceased. In Wisconsin, USA, Euro-American settlement in the mid- to late 1800s was accompanied by widespread logging, agricultural conversion, and fire suppression. To determine the maximum magnitude of change in forest ecosystems at the height of the...
Wind extremes in the North Sea basin under climate change: an ensemble study of 12 CMIP5 GCMs
NASA Astrophysics Data System (ADS)
de Winter, R.; Ruessink, G.; Sterl, A.
2012-12-01
Coastal safety may be influenced by climate change, as changes in extreme surge levels and wave extremes may increase the vulnerability of dunes and other coastal defenses. In the North Sea, an area already prone to severe flooding, these high surge levels and waves are generated by severe wind speeds during storm events. As a result of the geometry of the North Sea, not only the maximum wind speed is relevant, but also wind direction. Analyzing changes in a changing climate implies that several uncertainties need to be taken into account. First, there is the uncertainty in climate experiments, which represent possible developments of greenhouse gas emissions. Second, there is uncertainty between the climate models that are used to analyze the effect of different climate experiments. The third uncertainty is the natural variability of the climate. When this system variability is large, small trends will be difficult to detect. The natural variability results in statistical uncertainty, especially for events with high return values. We addressed the first two types of uncertainties for extreme wind conditions in the North Sea using 12 CMIP5 GCMs. To evaluate the differences between the climate experiments, two climate experiments (rcp4.5 and rcp8.5) covering 2050-2100 are compared with historical runs covering 1950-2000. Rcp4.5 is considered a middle climate experiment and rcp8.5 represents high-end climate scenarios. The spread of the projections of the 12 GCMs for a given scenario illustrates model uncertainty. We focus on the North Sea basin because changes in wind conditions could have a large impact on the safety of the densely populated North Sea coast, an area that already has a high exposure to flooding. Our results show that, consistent with ERA-Interim results, the annual maximum wind speed in the historical run demonstrates large interannual variability.
For the North Sea, the annual maximum wind speed is not projected to change under either rcp4.5 or rcp8.5. In fact, the differences among the 12 GCMs are larger than the differences between the experiments. Furthermore, our results show that the variation in the direction of the annual maximum wind speed is large, which precludes a firm statement on climate-change-induced changes in these directions. Nonetheless, most models indicate a decrease in annual maximum wind speed from south-eastern directions and an increase from south-western and western directions. This might be caused by a poleward shift of the storm track. The amount of wind from north-west and north-north-west, the wind directions responsible for the development of extreme storm surges in the southern part of the North Sea, is not projected to change. However, North Sea coasts with the longest fetch for western directions, e.g. the German Bight, may more often encounter high storm-surge levels and extreme waves if the annual maximum wind does indeed blow more often from western directions.
NASA Astrophysics Data System (ADS)
Bozzano, F.; Caserta, A.; Govoni, A.; Marra, F.; Martino, S.
2008-01-01
The paper presents the results of a case study conducted on the Holocene alluvial deposits of the Tiber River valley, in the city of Rome. The main test site selected for the study, Valco S. Paolo, is located about 2 km South of Rome's historical centre. The alluvial deposits were dynamically characterized in a comprehensive way via site investigations and geotechnical laboratory tests. Normalized shear modulus decay and damping curves (G/G0 and D/D0 vs γ) were obtained for the dominantly fine-grained levels. The curves demonstrate that these levels have a more marked shear stiffness decay if compared with the underlying Pliocene bedrock. Decay curves from laboratory tests for the Tiber alluvia correlated well with the trend of the function proposed by Hardin and Drnevich, making it possible to derive their specific interpolation function coefficients. Use was made of the extrapolation of the findings from the Valco S. Paolo test site to a large part of Rome's historical centre by means of two other test sites, supported by an engineering-geology model of the complex spatial distribution of the Tiber alluvia. The experimental Valco S. Paolo Vs profile was extrapolated to the other test sites on the basis of a stratigraphic criterion; the analysis of seismic noise measurements, obtained for the three test sites, validated the engineering-geology based extrapolation and showed that the main rigidity contrast occurs inside the alluvial body (at the contact with the underlying basal gravel-level G) and not between the alluvia and the Plio-Pleistocene bedrock, composed of highly consistent clay (Marne Vaticane). The 1D modeling of local seismic response to the maximum expected earthquakes in the city of Rome confirms that the deposits have one principal mode of vibration at about 1 Hz. 
However, the simulation also showed that the silty-clay deposits (level C), which make up most of the Tiber alluvial body, play a key role in shaping the soil-column deformation profile, since they can be affected by nonlinear effects induced by the maximum expected earthquake when certain stratigraphic conditions are satisfied.
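The Hardin and Drnevich function referenced above is, in its simplest form, a one-parameter hyperbolic decay of the normalized shear modulus with shear strain. The sketch below uses that classical form with a purely hypothetical reference strain; the site-specific interpolation coefficients derived in the study are not reproduced here.

```python
import numpy as np

def hardin_drnevich(gamma, gamma_ref):
    """Classical hyperbolic shear-modulus decay: G/G0 = 1 / (1 + gamma/gamma_ref).

    gamma and gamma_ref must share units (e.g. percent shear strain);
    gamma_ref is the strain at which G/G0 drops to 0.5.
    """
    return 1.0 / (1.0 + np.asarray(gamma, dtype=float) / gamma_ref)

# Hypothetical reference strain for a fine-grained alluvial level (percent)
gamma_ref = 0.1
strains = np.array([1e-4, 1e-3, 1e-2, 1e-1, 1.0])  # shear strain, percent
decay = hardin_drnevich(strains, gamma_ref)        # monotonically decreasing G/G0
```

By construction, G/G0 equals 1 at zero strain and 0.5 at the reference strain, which is why fitting gamma_ref to laboratory decay curves suffices to characterize each level.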
NASA Astrophysics Data System (ADS)
Nagy, B. K.; Mohssen, M.; Hughey, K. F. D.
2017-04-01
This study addresses technical questions concerning the use of the partial duration series (PDS) within the domain of flood frequency analysis. The recurring questions that often prevent the standardised use of the PDS are peak independence and threshold selection. This paper explores standardised approaches to peak and threshold selection to produce PDS samples with differing average annual exceedances, using six theoretical probability distributions. The availability of historical annual maximum series (AMS) data (1930-1966) in addition to systematic AMS data (1967-2015) enables a unique comparison between the performance of the PDS sample and the systematic AMS sample. A recently derived formula for the translation of the PDS into the annual domain, simplifying the use of the PDS, is utilised in an applied case study for the first time. Overall, the study shows that PDS sampling returns flood magnitudes similar to those produced by AMS series utilising historical data, and thus the PDS should be preferred in cases where historical flood data are unavailable.
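The idea of translating PDS return periods into the annual-maximum domain can be illustrated with Langbein's classical relation. Note this is shown only as a well-known illustration of the PDS-to-AMS conversion; the recently derived formula the study actually applies is not given in the abstract.

```python
import math

def pds_to_ams_return_period(t_pds):
    """Convert a partial-duration-series return period (years) to the
    annual-maximum-series domain via Langbein's classical relation:

        T_AMS = 1 / (1 - exp(-1 / T_PDS))

    The two domains agree closely for long return periods but diverge
    for frequent events (T_PDS near 1 year).
    """
    return 1.0 / (1.0 - math.exp(-1.0 / t_pds))
```

For example, a 1-year PDS event corresponds to roughly a 1.58-year AMS event, while a 100-year PDS event maps to about 100.5 years, showing why the distinction matters mainly for common floods.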
The Effects of Global Warming on Temperature and Precipitation Trends in Northeast America
NASA Astrophysics Data System (ADS)
Francis, F.
2013-12-01
The objective of this paper is to analyse temperature and precipitation (rainfall) data for Northeast America and how their trends relate to global warming. The topic was chosen because it reveals the trends in temperature and precipitation and their relation to global warming. Data were collected from the Global Historical Climatology Network (GHCN) and span the years 1973 to 2012. We calculated yearly and monthly regressions to estimate the relationships among the variables found in the individual sources. Using specially designed software, analysis, and manual calculations, we visualize these trends in precipitation and temperature and ask whether they are attributable to global warming. From the calculated trend slopes we interpreted the changes in minimum temperature, maximum temperature, and precipitation. Precipitation increased 9.5% over the past forty years, maximum temperature increased 1.9%, and a greater increase of 3.3% was calculated for minimum temperature. The trends in precipitation and in maximum and minimum temperature are statistically significant at the 95% level.
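The trend-slope calculation underlying percent-change figures like these can be sketched as a least-squares regression of an annual series on year. The data below are synthetic and the helper name is illustrative; the study's actual software and station records are not reproduced.

```python
import numpy as np

def annual_trend(years, values):
    """Least-squares slope (units per year) and the implied percent
    change of the fitted line over the full record."""
    slope, intercept = np.polyfit(years, values, 1)
    fitted_start = slope * years[0] + intercept
    fitted_end = slope * years[-1] + intercept
    pct_change = 100.0 * (fitted_end - fitted_start) / fitted_start
    return slope, pct_change

years = np.arange(1973, 2013)
# Hypothetical annual precipitation (mm) with a weak linear upward trend
precip = 1000.0 + 2.0 * (years - 1973)
slope, pct = annual_trend(years, precip)   # 2.0 mm/yr, 7.8% over the record
```

Expressing the trend as a percent of the fitted start value, as here, is one common convention; a percent of the record mean is another, and the abstract does not state which was used.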
Kittel, T.G.F.; Rosenbloom, N.A.; Royle, J. Andrew; Daly, Christopher; Gibson, W.P.; Fisher, H.H.; Thornton, P.; Yates, D.N.; Aulenbach, S.; Kaufman, C.; McKeown, R.; Bachelet, D.; Schimel, D.S.; Neilson, R.; Lenihan, J.; Drapek, R.; Ojima, D.S.; Parton, W.J.; Melillo, J.M.; Kicklighter, D.W.; Tian, H.; McGuire, A.D.; Sykes, M.T.; Smith, B.; Cowling, S.; Hickler, T.; Prentice, I.C.; Running, S.; Hibbard, K.A.; Post, W.M.; King, A.W.; Smith, T.; Rizzo, B.; Woodward, F.I.
2004-01-01
Analysis and simulation of biospheric responses to historical forcing require surface climate data that capture those aspects of climate that control ecological processes, including key spatial gradients and modes of temporal variability. We developed a multivariate, gridded historical climate dataset for the conterminous USA as a common input database for the Vegetation/Ecosystem Modeling and Analysis Project (VEMAP), a biogeochemical and dynamic vegetation model intercomparison. The dataset covers the period 1895-1993 on a 0.5° latitude/longitude grid. Climate is represented at both monthly and daily timesteps. Variables are: precipitation, minimum and maximum temperature, total incident solar radiation, daylight-period irradiance, vapor pressure, and daylight-period relative humidity. The dataset was derived from US Historical Climate Network (HCN), cooperative network, and snowpack telemetry (SNOTEL) monthly precipitation and mean minimum and maximum temperature station data. We employed techniques that rely on geostatistical and physical relationships to create the temporally and spatially complete dataset. We developed a local kriging prediction model to infill discontinuous and limited-length station records based on the spatial autocorrelation structure of climate anomalies. A spatial interpolation model (PRISM) that accounts for physiographic controls was used to grid the infilled monthly station data. We implemented a stochastic weather generator (modified WGEN) to disaggregate the gridded monthly series to dailies. Radiation and humidity variables were estimated from the dailies using a physically based empirical surface climate model (MTCLIM3). Derived datasets include a 100 yr model spin-up climate and a historical Palmer Drought Severity Index (PDSI) dataset. The VEMAP dataset exhibits statistically significant trends in temperature, precipitation, solar radiation, vapor pressure, and PDSI for US National Assessment regions.
The historical climate and companion datasets are available online at data archive centers. © Inter-Research 2004.
Spatial distribution of impacts to channel bed mobility due to flow regulation, Kootenai River, USA
Michael Burke; Klaus Jorde; John M. Buffington; Jeffrey H. Braatne; Rohan Benjakar
2006-01-01
The regulated hydrograph of the Kootenai River between Libby Dam and Kootenay Lake has altered the natural flow regime, resulting in a significant decrease in maximum flows (60% net reduction in median 1-day annual maximum, and 77%-84% net reductions in median monthly flows for the historic peak flow months of May and June, respectively). Other key hydrologic...
Mapping historical information for better understanding the causality factors of past disasters
NASA Astrophysics Data System (ADS)
Boudou, Martin; Lang, Michel; Vinet, Freddy; Coeur, Denis
2015-04-01
The Flood Directive of 2007 promotes the use of historical information in order to mitigate the impact of future extreme events. According to this text, the study of past events offers new insights for better understanding the causality factors of a disaster, from hydrometeorological keys to the socio-political repercussions of the flood. In this presentation we focus on the study of factors leading to the exceptionality of a hydrological flood event. This aspect is regularly pointed out by the feedback carried out after a catastrophic event and remains a subject of debate for risk managers. The role of antecedent meteorological conditions is especially underestimated by local authorities. These factors can, however, be considered a key issue in appreciating the exceptional character of a hydrological disaster. For example, the June 2013 floods in France that affected the Pyrenees region revealed the significant contribution of snowmelt to the recorded discharges. In a 2014 article, Schröter et al. showed that soil moisture can be considered a key driver of the generalised flood hazard intensity that affected Germany during the same month of June 2013. With regard to these assessments, some considerations emerge. Does a diachronic appraisal of past disasters point out the main issues responsible for an exceptional flood hazard level? Are there common causality issues involved in these extreme hydrological events? To answer these questions, this presentation proposes a comparative analysis of nine major floods that impacted the French territory during the XXth century (from 1910 to 2010). The set is composed of different flood typologies (from torrential events to floods resulting from groundwater-level rise) so as to give a complete view of flood risk in France. The proposed methodology relies on a cartographic approach to highlight the causality factors of these past hydrological disasters.
For instance, mapping rainfall data over a representation of the maximum recorded discharges can help in understanding the significance of the rainfall event. In some cases, the use of textual historical information makes it possible to emphasize the significance of other factors such as snowmelt or the influence of anthropogenic infrastructure. Indeed, mapping historical information is an original approach for representing the various spatial and temporal scales of historical disasters and an interesting tool for exploring the exceptionality of the hazard level.
Extreme daily precipitation: the case of Serbia in 2014
NASA Astrophysics Data System (ADS)
Tošić, Ivana; Unkašević, Miroslava; Putniković, Suzana
2017-05-01
The extreme daily precipitation in Serbia was examined at 16 stations during the period 1961-2014. Two synoptic situations in May and September of 2014 were analysed, when extreme precipitation was recorded in western and eastern Serbia, respectively. The synoptic situation from 14 to 16 May 2014 remained nearly stationary over western and central Serbia for the entire period. On 15 May 2014, the daily rainfall broke previous historical records in Belgrade (109.8 mm), Valjevo (108.2 mm) and Loznica (110 mm). Precipitation exceeded 200 mm in 72 h, producing the most catastrophic floods in the recent history of Serbia. In Negotin (eastern Serbia), daily precipitation of 161.3 mm was registered on 16 September 2014, the maximum value recorded during the period 1961-2014. The daily maximum in 2014 was registered at 6 of the 16 stations. The total annual precipitation for 2014 was the highest for the period 1961-2014 at almost all stations in Serbia. A non-significant positive trend was found for all precipitation indices: annual daily maximum precipitation, the total precipitation in consecutive 3 and 5 days, the total annual precipitation, and the number of days with at least 10 and 20 mm of precipitation. The generalised extreme value distribution was fitted to the annual daily maximum precipitation. The estimated 100-year return levels were 123.4 and 147.4 mm for the annual daily maximum precipitation in Belgrade and Negotin, respectively.
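Fitting a generalised extreme value (GEV) distribution to annual daily maxima and reading off a 100-year return level, as done above, can be sketched with SciPy. The data below are synthetic stand-ins for a station record, not the Serbian observations; note that SciPy's shape parameter `c` follows the opposite sign convention to the usual GEV shape.

```python
import numpy as np
from scipy import stats

def return_level(annual_maxima, return_period=100):
    """Fit a GEV to annual daily-maximum precipitation (mm) and return
    the level exceeded on average once per `return_period` years."""
    shape, loc, scale = stats.genextreme.fit(annual_maxima)
    return stats.genextreme.ppf(1.0 - 1.0 / return_period,
                                shape, loc=loc, scale=scale)

# Hypothetical 54-year record of annual daily-maximum precipitation (mm)
rng = np.random.default_rng(42)
maxima = stats.genextreme.rvs(-0.1, loc=45.0, scale=12.0,
                              size=54, random_state=rng)
level_100 = return_level(maxima, 100)
```

With only ~54 years of data, a 100-year level is an extrapolation beyond the record, which is why confidence intervals around such estimates are typically wide.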
de Oliveira Bünger, Mariana; Fernanda Mazine, Fiorella; Forest, Félix; Leandro Bueno, Marcelo; Renato Stehmann, João; Lucas, Eve J.
2016-01-01
Background and Aims Eugenia sect. Phyllocalyx Nied. includes 14 species endemic to the Neotropics, mostly distributed in the Atlantic coastal forests of Brazil. Here the first comprehensive phylogenetic study of this group is presented, and this phylogeny is used as the basis to evaluate the recent infrageneric classification in Eugenia sensu lato (s.l.) to test the history of the evolution of traits in the group and test hypotheses associated with the history of this clade. Methods A total of 42 taxa were sampled, of which 14 were Eugenia sect. Phyllocalyx for one nuclear (ribosomal internal transcribed spacer) and four plastid markers (psbA-trnH, rpl16, trnL-rpl32 and trnQ-rps16). The relationships were reconstructed based on Bayesian analysis and maximum likelihood. Additionally, ancestral area analysis and modelling methods were used to estimate species dispersal, comparing historically climatic stable (refuges) and unstable areas. Key Results Maximum likelihood and Bayesian inferences indicate that Eugenia sect. Phyllocalyx is paraphyletic and the two clades recovered are characterized by combinations of morphological characters. Phylogenetic relationships support a link between Cerrado and south-eastern species and a difference in the composition of species from north-eastern and south-eastern Atlantic forest. Refugia and stable areas identified within unstable areas suggest that these areas were important to maintain diversity in the Atlantic forest biodiversity hotspot. Conclusion This study provides a robust phylogenetic framework to address important historical questions for Eugenia s.l. within an evolutionary context, supporting the need for better taxonomic study of one of the largest genera in the Neotropics. Furthermore, valuable insight is offered into diversification and biome shifts of plant species in the highly environmentally impacted Atlantic forest of South America. 
Evidence is presented that climatic stability in the south-eastern Atlantic forest during the Quaternary contributed to the highest levels of plant diversity in this region, which acted as a refugium. PMID:27974324
Tsunami Risk Assessment Modelling in Chabahar Port, Iran
NASA Astrophysics Data System (ADS)
Delavar, M. R.; Mohammadi, H.; Sharifi, M. A.; Pirooz, M. D.
2017-09-01
The well-known historical tsunami in the Makran Subduction Zone (MSZ) region was generated by the earthquake of November 28, 1945 on the Makran Coast in the North of the Oman Sea. This destructive tsunami killed over 4,000 people in Southern Pakistan and India, causing great loss of life and devastation along the coasts of Western India, Iran and Oman. According to the report "Remembering the 1945 Makran Tsunami", compiled by the Intergovernmental Oceanographic Commission (UNESCO/IOC), the maximum inundation at Chabahar port was 367 m inland, with a height of 3.6 m above sea level. In addition, the maximum inundation at Pasni (Pakistan) reached 3 km from the coastline. For the two beaches of Gujarat (India) and Oman the maximum run-up height was 3 m above sea level. In this paper, we first use the Makran 1945 seismic parameters to simulate the tsunami in its generation, propagation and inundation phases. The effect of the tsunami on Chabahar port is simulated using the ComMIT model, which is based on the Method of Splitting Tsunami (MOST). In this process the results are compared with documented eyewitness accounts and researchers' reports for calibration and validation. Next we use the model to perform a risk assessment for Chabahar port in the south of Iran under the worst-case tsunami scenario. The simulated results showed that tsunami waves would reach the Chabahar coastline 11 minutes after generation and that, 9 minutes later, over 9.4 km2 of dry land would be flooded, with maximum wave amplitude reaching up to 30 meters.
NASA Astrophysics Data System (ADS)
Qi, Peng; Du, Mei
2018-06-01
China's southeast coastal areas frequently suffer from storm surges due to the attacks of tropical cyclones (TCs) every year. Hazards induced by TCs are complex: strong wind, huge waves, storm surge, heavy rain, floods, and so on. These atmospheric and oceanic hazards cause serious disasters and substantial economic losses. This paper, from the perspective of a hazard group, sets up a multi-factor evaluation method for the risk assessment of TC hazards using historical extreme data of the atmospheric and oceanic elements concerned. Based on the natural-hazard dynamic process, the multi-factor indicator system is composed of nine natural hazard factors representing intensity and frequency. Contributing to the indicator system, in order of importance, are maximum wind speed from TCs, attack frequency of TCs, maximum surge height, maximum wave height, frequency of gusts ≥ Scale 8, rainstorm intensity, maximum tidal range, rainstorm frequency, and sea-level rise rate. The first four factors are the most important; their weights exceed 10% in the indicator system. After normalization, all the single-hazard factors are superposed by multiplying by their weights to generate a superposed TC hazard. The multi-factor evaluation indicator method was applied to the risk assessment of the typhoon-induced atmospheric and oceanic hazard group in typhoon-prone southeast coastal cities of China.
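The normalize-then-superpose step described above can be sketched as a weighted sum of min-max-normalized factors. The cities, factor values, and weights below are illustrative placeholders only; the paper's actual nine factors and fitted weights are not reproduced.

```python
import numpy as np

def hazard_index(factors, weights):
    """Superposed hazard score per location: min-max normalize each
    factor column across locations, then combine by weighted sum
    (weights should sum to 1)."""
    f = np.asarray(factors, dtype=float)   # shape: (n_locations, n_factors)
    w = np.asarray(weights, dtype=float)
    lo, hi = f.min(axis=0), f.max(axis=0)
    norm = (f - lo) / (hi - lo)            # each column scaled to [0, 1]
    return norm @ w

# Hypothetical factors for 3 cities: max TC wind (m/s), TC frequency (/yr),
# max surge height (m), max wave height (m)
factors = [[55, 4.0, 2.5, 9.0],
           [40, 2.5, 1.8, 6.5],
           [60, 5.0, 3.0, 10.0]]
weights = [0.35, 0.25, 0.22, 0.18]  # illustrative, not the paper's values
scores = hazard_index(factors, weights)
```

A city that is the maximum in every factor scores 1 and one that is the minimum in every factor scores 0, which makes the superposed index directly comparable across locations.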
Normal-faulting slip maxima and stress-drop variability: a geological perspective
Hecker, S.; Dawson, T.E.; Schwartz, D.P.
2010-01-01
We present an empirical estimate of maximum slip in continental normal-faulting earthquakes and present evidence that stress drop in intraplate extensional environments is dependent on fault maturity. A survey of reported slip in historical earthquakes globally and in latest Quaternary paleoearthquakes in the Western Cordillera of the United States indicates maximum vertical displacements as large as 6–6.5 m. A difference in the ratio of maximum-to-mean displacements between data sets of prehistoric and historical earthquakes, together with constraints on bias in estimates of mean paleodisplacement, suggest that applying a correction factor of 1.4±0.3 to the largest observed displacement along a paleorupture may provide a reasonable estimate of the maximum displacement. Adjusting the largest paleodisplacements in our regional data set (~6 m) by a factor of 1.4 yields a possible upper-bound vertical displacement for the Western Cordillera of about 8.4 m, although a smaller correction factor may be more appropriate for the longest ruptures. Because maximum slip is highly localized along strike, if such large displacements occur, they are extremely rare. Static stress drop in surface-rupturing earthquakes in the Western Cordillera, as represented by maximum reported displacement as a fraction of modeled rupture length, appears to be larger on normal faults with low cumulative geologic displacement (<2 km) and larger in regions such as the Rocky Mountains, where immature, low-throw faults are concentrated. This conclusion is consistent with a growing recognition that structural development influences stress drop and indicates that this influence is significant enough to be evident among faults within a single intraplate environment.
NASA Astrophysics Data System (ADS)
Wang, Zhihua; Yang, Xiaomei; Lu, Chen; Yang, Fengshuo
2018-07-01
Automatic updating of land use/cover change (LUCC) databases using high spatial resolution images (HSRI) is important for environmental monitoring and policy making, especially in coastal areas, which connect land and sea and tend to change frequently. Many object-based change detection methods have been proposed, especially methods combining historical LUCC data with HSRI. However, the scale parameter(s) used to segment the serial temporal images, which directly determine the average object size, are hard to choose without expert intervention. The samples transferred from historical LUCC data also need expert intervention to avoid insufficient or wrong samples. To address scale-parameter selection, a Scale Self-Adapting Segmentation (SSAS) approach, based on exponential sampling of a scale parameter and location of the local maximum of a weighted local variance, was proposed to resolve the scale-selection problem when segmenting images constrained by LUCC for change detection. To address sample transfer, Knowledge Transfer (KT), in which a classifier trained on historical images with LUCC is applied to classify the updated images, was also proposed. Comparison experiments were conducted in a coastal area of Zhujiang, China, using SPOT 5 images acquired in 2005 and 2010. The results reveal that (1) SSAS can segment images more effectively without expert intervention, and (2) KT can also reach the maximum accuracy of sample transfer without expert intervention. The strategy SSAS + KT would be a good choice when the historical image matches the LUCC data and the historical and updated images are obtained from the same source.
Concorde noise-induced building vibrations for Sully Plantation, Chantilly, Virginia
NASA Technical Reports Server (NTRS)
Mayes, W. H.; Scholl, H. F.; Stephens, D. G.; Holliday, B. G.; Deloach, R.; Holmes, H. K.; Lewis, R. B.; Lynch, J. W.
1976-01-01
A study to assess the noise-induced building vibrations associated with Concorde operations is presented. The approach is to record the levels of induced vibrations and associated indoor/outdoor noise levels in selected homes, historic and other buildings near Dulles and Kennedy International Airports. Presented is a small, representative sample of data recorded at Sully Plantation, Chantilly, Virginia during the period of May 20 through May 28, 1976. Recorded data provide relationships between the vibration levels of walls, floors, windows, and the noise associated with Concorde operations (2 landings and 3 takeoffs), other aircraft, nonaircraft sources, and normal household activities. Results suggest that building vibrations resulting from aircraft operations were proportional to the overall sound pressure levels and relatively insensitive to spectral differences associated with the different types of aircraft. Furthermore, the maximum levels of vibratory response resulting from Concorde operations were higher than those associated with conventional aircraft. The vibrations of nonaircraft events were observed in some cases to exceed the levels resulting from aircraft operations. These nonaircraft events are currently being analyzed in greater detail.
NASA Technical Reports Server (NTRS)
Rieker, Lorra L.; Haraburda, Francis M.
1989-01-01
The National Aeronautics and Space Administration has adopted the policy to achieve the maximum practical level of commonality for the Space Station Freedom program in order to significantly reduce life cycle costs. Commonality means using identical or similar hardware/software for meeting common sets of functionally similar requirements. Information on how the concept of commonality is being implemented with respect to electric power system hardware for the Space Station Freedom and the U.S. Polar Platform is presented. Included is a historical account of the candidate common items which have the potential to serve the same power system functions on both Freedom and the Polar Platform.
Hydrogeology of parts of the Central Platte and Lower Loup Natural Resources Districts, Nebraska
Peckenpaugh, J.M.; Dugan, J.T.
1983-01-01
Water-level declines of at least 15 feet have occurred in this heavily irrigated area of central Nebraska since the 1930's, and the potential for additional declines is high. To test the effects of additional irrigation development on water levels and streamflow, computer programs were developed that represent the surface-water system, soil zone, and saturated zone. A two-dimensional, finite-difference ground-water flow model of the 3,374 square-mile study area was developed and calibrated using steady-state and transient conditions. Three management alternatives were examined. First, 125,000 acre-feet of water would be diverted annually from the Platte River. During a water year in which flows are similar to those in 1957, months of zero streamflow at Grand Island increased from the historical 2 to 7. After 5 years of such low flows, water levels declined more than 5 feet in 36 nodes (997.4 acres per node), with a maximum decline of 10.7 feet. A second alternative would allow no new ground-water development after 1980. The third alternative would allow irrigable but unirrigated land to be developed at an annual rate of 2, 5, and 8 percent and irrigation water to be applied at 80, 100, and 120 percent of consumptive irrigation requirements. The maximum projected declines by 2020 are 119 and 139 feet, respectively, for the second and third alternatives. (USGS)
NASA Technical Reports Server (NTRS)
Rosenberg, Leigh; Hihn, Jairus; Roust, Kevin; Warfield, Keith
2000-01-01
This paper presents an overview of a parametric cost model built at JPL to estimate the costs of future deep space robotic science missions. Due to the recent dramatic changes in JPL business practices brought about by an internal reengineering effort known as Develop New Products (DNP), high-level historic cost data are no longer considered analogous to future missions. Therefore, the historic data are of little value in forecasting costs for projects developed using the DNP process. This has led to the development of an approach for obtaining expert opinion and for combining actual data with expert opinion to provide a cost database for future missions. In addition, the DNP cost model relies on a maximum of objective cost drivers, which reduces the likelihood of model input error. Version 2 is now under development; it expands the model capabilities, links the model more tightly with key design technical parameters, and is grounded in more rigorous statistical techniques. The challenges faced in building this model are discussed, as well as its background, development approach, status, validation, and future plans.
Fault specific GIS based seismic hazard maps for the Attica region, Greece
NASA Astrophysics Data System (ADS)
Deligiannakis, G.; Papanikolaou, I. D.; Roberts, G.
2018-04-01
Traditional seismic hazard assessment methods are based on the historical seismic record for the calculation of an annual probability of exceedance for a particular ground motion level. A new fault-specific seismic hazard assessment method is presented, in order to address problems related to the incompleteness and inhomogeneity of the historical record and to obtain higher spatial resolution of hazard. This method is applied to the region of Attica, which is the most densely populated area in Greece, as nearly half of the country's population lives in Athens and its surrounding suburbs, in the Greater Athens area. The methodology is based on a database of 24 active faults that could cause damage to Attica in case of seismic rupture. This database provides information about the faults' slip rates, lengths and expected magnitudes. The final output of the method is four fault-specific seismic hazard maps, showing the recurrence of expected intensities for each locality. These maps offer a high spatial resolution, as they consider the surface geology. Despite the fact that almost half of the Attica region lies in the lowest seismic risk zone according to the official seismic hazard zonation of Greece, different localities have repeatedly experienced strong ground motions during the last 15 kyrs. Moreover, the maximum recurrence for each intensity occurs in different localities across Attica. The highest recurrence for intensity VII (151-156 times over 15 kyrs, or up to a 96 year return period) is observed in the central part of the Athens basin. The maximum intensity VIII recurrence (115 times over 15 kyrs, or up to a 130 year return period) is observed in the western part of Attica, while the maximum intensity IX (73-77/15 kyrs, or a 195 year return period) and X (25-29/15 kyrs, or a 517 year return period) recurrences are observed near the South Alkyonides fault system, which dominates the strong ground motion hazard in the western part of the Attica mainland.
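The return periods quoted in the abstract follow directly from the recurrence counts: dividing the 15 kyr window by the number of occurrences. A one-line check:

```python
def return_period_years(occurrences: int, window_kyr: float = 15.0) -> float:
    """Average return period (years) for an intensity observed
    `occurrences` times over a window of `window_kyr` thousand years."""
    return window_kyr * 1000.0 / occurrences

# Using the recurrence counts from the abstract:
rp_vii = return_period_years(156)   # intensity VII  -> ~96 years
rp_viii = return_period_years(115)  # intensity VIII -> ~130 years
rp_ix = return_period_years(77)     # intensity IX   -> ~195 years
rp_x = return_period_years(29)      # intensity X    -> ~517 years
```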
Future Extreme Event Vulnerability in the Rural Northeastern United States
NASA Astrophysics Data System (ADS)
Winter, J.; Bowen, F. L.; Partridge, T.; Chipman, J. W.
2017-12-01
Future climate change impacts on humans will be determined by the convergence of evolving physical climate and socioeconomic systems. Of particular concern is the intersection of extreme events and vulnerable populations. Rural areas of the Northeastern United States have experienced increased temperature and precipitation extremes, especially over the past three decades, and face unique challenges due to their physical isolation, natural resources dependent economies, and high poverty rates. To explore the impacts of future extreme events on vulnerable, rural populations in the Northeast, we project extreme events and vulnerability indicators to identify where changes in extreme events and vulnerable populations coincide. Specifically, we analyze future (2046-2075) maximum annual daily temperature, minimum annual daily temperature, maximum annual daily precipitation, and maximum consecutive dry day length for Representative Concentration Pathways (RCP) 4.5 and 8.5 using four global climate models (GCM) and a gridded observational dataset. We then overlay those projections with estimates of county-level population and relative income for 2060 to calculate changes in person-events from historical (1976-2005), with a focus on Northeast counties that have less than 250,000 people and are in the bottom income quartile. We find that across the rural Northeast for RCP4.5, heat person-events per year increase tenfold, far exceeding decreases in cold person-events and relatively small changes in precipitation and drought person-events. Counties in the bottom income quartile have historically (1976-2005) experienced a disproportionate number of heat events, and counties in the bottom two income quartiles are projected to experience a greater heat event increase by 2046-2075 than counties in the top two income quartiles. 
We further explore the relative contributions of event frequency, population, and income changes to the total and geographic distribution of climate change impacts on rural, vulnerable areas of the Northeast.
Evaluating changes to reservoir rule curves using historical water-level data
Mower, Ethan; Miranda, Leandro E.
2013-01-01
Flood control reservoirs are typically managed through rule curves (i.e. target water levels) which control the storage and release timing of flood waters. Changes to rule curves are often contemplated and requested by various user groups and management agencies with no information available about the actual flood risk of such requests. Methods of estimating flood risk in reservoirs are not easily available to those unfamiliar with hydrological models that track water movement through a river basin. We developed a quantile regression model that uses readily available daily water-level data to estimate risk of spilling. Our model provided a relatively simple process for estimating the maximum applicable water level under a specific flood risk for any day of the year. This water level represents an upper-limit umbrella under which water levels can be operated in a variety of ways. Our model allows the visualization of water-level management under a user-specified flood risk and provides a framework for incorporating the effect of a changing environment on water-level management in reservoirs, but is not designed to replace existing hydrological models. The model can improve communication and collaboration among agencies responsible for managing natural resources dependent on reservoir water levels.
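The study's quantile-regression model is not reproduced here; a simplified empirical stand-in conveys the idea of an upper-limit water level for a user-specified spill risk by taking a per-day-of-year quantile of daily water levels. The synthetic 40-year record, seasonal cycle, and day indices below are all illustrative assumptions, not gauge data:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical 40-year record of daily water levels (m above datum):
# a seasonal sine cycle plus noise, standing in for real gauge data.
days = np.tile(np.arange(365), 40)
levels = 100.0 + 2.0 * np.sin(2 * np.pi * days / 365) + rng.normal(0, 0.5, days.size)

def max_level_for_risk(day: int, spill_risk: float) -> float:
    """Empirical water level for `day` (0-364) that historical levels
    stayed below with probability 1 - spill_risk: a stand-in for the
    quantile-regression upper-limit 'umbrella' described in the abstract."""
    obs = levels[days == day]
    return float(np.quantile(obs, 1.0 - spill_risk))

limit_summer = max_level_for_risk(91, 0.05)   # near the seasonal maximum
limit_winter = max_level_for_risk(273, 0.05)  # near the seasonal minimum
```

The upper limit tracks the seasonal cycle, so operators can visualize, for any day of the year, how high the pool can be held under a fixed risk tolerance.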
Coe, J.A.; Michael, J.A.; Crovelli, R.A.; Savage, W.Z.; Laprade, W.T.; Nashem, W.D.
2004-01-01
Ninety years of historical landslide records were used as input to the Poisson and binomial probability models. Results from these models show that, for precipitation-triggered landslides, approximately 9 percent of the area of Seattle has annual exceedance probabilities of 1 percent or greater. Application of the Poisson model for estimating the future occurrence of individual landslides results in a worst-case scenario map, with a maximum annual exceedance probability of 25 percent on a hillslope near Duwamish Head in West Seattle. Application of the binomial model for estimating the future occurrence of a year with one or more landslides results in a map with a maximum annual exceedance probability of 17 percent (also near Duwamish Head). Slope and geology both play a role in localizing the occurrence of landslides in Seattle. A positive correlation exists between slope and mean exceedance probability, with probability tending to increase as slope increases. Sixty-four percent of all historical landslide locations are within 150 m (500 ft, horizontal distance) of the Esperance Sand/Lawton Clay contact, but within this zone, no positive or negative correlation exists between exceedance probability and distance to the contact.
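In their simplest form, the two probability models named above reduce to closed-form expressions. A minimal sketch, with illustrative per-cell counts (the counts are assumptions chosen to land near the abstract's 25% and 17% maxima, not values from the study):

```python
import math

def poisson_annual_exceedance(n_events: int, record_years: float) -> float:
    """Annual probability of at least one landslide under a Poisson model:
    P = 1 - exp(-lambda), with lambda = n_events / record_years."""
    lam = n_events / record_years
    return 1.0 - math.exp(-lam)

def binomial_annual_exceedance(n_years_with_events: int, record_years: int) -> float:
    """Annual probability of a year with one or more landslides under a
    binomial model: the observed fraction of record years with an event."""
    return n_years_with_events / record_years

# Hypothetical worst-case cell: 26 recorded landslides, and 15 years with
# at least one landslide, over a 90-year record.
p_poisson = poisson_annual_exceedance(26, 90)     # ~0.25
p_binomial = binomial_annual_exceedance(15, 90)   # ~0.17
```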
Development of a guideline on vegetation area to reduce the risk of weed pollinosis in Korea
NASA Astrophysics Data System (ADS)
Rang Kim, Kyu; Lee, Hye-Rim; Kim, Mijin; Baek, Won-ki; Oh, Jae-Won; Choi, Young-Jean; Jung, Hyun-Sook
2013-04-01
Allergenic pollens are influenced by environmental conditions, so the daily pollen count varies with temperature, humidity, wind speed, etc. The relationships between daily pollen counts and meteorological conditions were determined and used to forecast the daily risk level of pollen allergy in Korea. Another important factor for the daily pollen risk level is the vegetation area of the allergenic plants. In this study, the relationship between vegetation area and pollen concentration was identified for two major weed species: ragweed and Japanese hop. It was then used to determine the upper limit of vegetation area that confines the risk to a given level in the field. Three sites with different levels of pollen concentration were selected from among twelve pollen observation sites in Korea, based on historical observations of the weed pollens. The vegetation area of the two weed species within four square kilometers of each site was surveyed. The maximum daily pollen concentration was highly correlated with the vegetation area, so it was selected as the dependent variable for the regression equations, which were used as the guideline for vegetation area. According to the guideline, to limit the maximum daily pollen concentration to the moderate risk level or below (less than 50 pollen grains per cubic meter) for ragweed, the vegetation area should remain less than 0.6% of the ground area. For the moderate risk level for Japanese hop (less than 100 pollen grains per cubic meter), the vegetation area should remain less than 0.4%.
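The guideline follows from inverting a regression of maximum pollen concentration on vegetation-area fraction. A sketch under stated assumptions: the survey points below are fabricated for illustration (deliberately collinear, scaled so the inversion reproduces the abstract's 0.6% ragweed limit) and are not the study's measurements.

```python
import numpy as np

# Illustrative survey data: vegetation-area fraction (%) of ragweed within
# 4 km^2 vs. maximum daily pollen concentration (grains/m^3).
area_pct = np.array([0.1, 0.3, 0.6, 0.9, 1.2])
max_pollen = np.array([10.0, 26.0, 50.0, 74.0, 98.0])

# Fit maximum concentration as a linear function of vegetation area.
slope, intercept = np.polyfit(area_pct, max_pollen, 1)

def max_area_for_threshold(threshold: float) -> float:
    """Invert the fit: the largest area fraction keeping the predicted
    maximum daily concentration at or below a risk threshold."""
    return (threshold - intercept) / slope

limit = max_area_for_threshold(50.0)  # moderate-risk threshold, grains/m^3
```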
2002 Commercial Space Transportation Lecture Series, volumes 1,2, and 3
DOT National Transportation Integrated Search
2003-04-01
This document includes three presentations which are part of the 2002 Commercial Space Transportation Lecture Series: The Early Years, AST - A Historical Perspective; Approval of Reentry Vehicles; and, Setting Insurance Requirements: Maximum Probable...
Probable Maximum Precipitation in the U.S. Pacific Northwest in a Changing Climate
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Xiaodong; Hossain, Faisal; Leung, Lai-Yung
2017-12-22
The safety of large and aging water infrastructures is gaining attention in water management given the accelerated rate of change in landscape, climate and society. In current engineering practice, such safety is ensured by the design of infrastructure for the Probable Maximum Precipitation (PMP). Recently, several physics-based numerical modeling approaches have been proposed to modernize the conventional and ad hoc PMP estimation approach. However, the underlying physics has not been investigated and thus differing PMP estimates are obtained without clarity on their interpretation. In this study, we present a hybrid approach that takes advantage of both traditional engineering wisdom and modern climate science to estimate PMP for current and future climate conditions. The traditional PMP approach is improved and applied to outputs from an ensemble of five CMIP5 models. This hybrid approach is applied in the Pacific Northwest (PNW) to produce ensemble PMP estimates for the historical (1970-2016) and future (2050-2099) time periods. The new historical PMP estimates are verified by comparing them with the traditional estimates. PMP in the PNW will increase by 50% of the current level by 2099 under the RCP8.5 scenario. Most of the increase is caused by warming, which mainly affects moisture availability, with minor contributions from changes in storm efficiency in the future. Moist track change tends to reduce the future PMP. Compared with extreme precipitation, ensemble PMP exhibits higher internal variation. Thus high-quality data of both precipitation and related meteorological fields (temperature, wind fields) are required to reduce uncertainties in the ensemble PMP estimates.
A framework for the probabilistic analysis of meteotsunamis
Geist, Eric L.; ten Brink, Uri S.; Gove, Matthew D.
2014-01-01
A probabilistic technique is developed to assess the hazard from meteotsunamis. Meteotsunamis are unusual sea-level events, generated when the speed of an atmospheric pressure or wind disturbance is comparable to the phase speed of long waves in the ocean. A general aggregation equation is proposed for the probabilistic analysis, based on previous frameworks established for both tsunamis and storm surges, incorporating different sources and source parameters of meteotsunamis. Parameterization of atmospheric disturbances and numerical modeling is performed for the computation of maximum meteotsunami wave amplitudes near the coast. A historical record of pressure disturbances is used to establish a continuous analytic distribution of each parameter as well as the overall Poisson rate of occurrence. A demonstration study is presented for the northeast U.S. in which only isolated atmospheric pressure disturbances from squall lines and derechos are considered. For this study, Automated Surface Observing System stations are used to determine the historical parameters of squall lines from 2000 to 2013. The probabilistic equations are implemented using a Monte Carlo scheme, where a synthetic catalog of squall lines is compiled by sampling the parameter distributions. For each entry in the catalog, ocean wave amplitudes are computed using a numerical hydrodynamic model. Aggregation of the results from the Monte Carlo scheme results in a meteotsunami hazard curve that plots the annualized rate of exceedance with respect to maximum event amplitude for a particular location along the coast. Results from using multiple synthetic catalogs, resampled from the parent parameter distributions, yield mean and quantile hazard curves. Further refinements and improvements for probabilistic analysis of meteotsunamis are discussed.
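The Monte Carlo aggregation described above can be sketched compactly. Everything quantitative below is an assumption for illustration: the Poisson occurrence rate and the lognormal amplitude model (standing in for the hydrodynamic simulations) are not the study's fitted values.

```python
import numpy as np

rng = np.random.default_rng(42)

# Illustrative inputs: occurrence rate of squall-line disturbances and a
# lognormal model for the resulting maximum coastal wave amplitude (m).
poisson_rate = 5.0   # disturbances per year
n_years = 10_000     # length of the synthetic catalog

# Synthetic catalog: number of events, then an amplitude per event.
n_events = rng.poisson(poisson_rate * n_years)
amplitudes = rng.lognormal(mean=-2.0, sigma=1.0, size=n_events)

# Hazard curve: annualized rate of exceedance vs. amplitude threshold.
thresholds = np.linspace(0.05, 1.0, 20)
exceed_rate = np.array([(amplitudes > a).sum() / n_years for a in thresholds])
```

Resampling the whole procedure from the parent parameter distributions, as the abstract describes, would yield an ensemble of such curves from which mean and quantile hazard curves follow.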
Maximum entropy modeling risk of anthrax in the Republic of Kazakhstan.
Abdrakhmanov, S K; Mukhanbetkaliyev, Y Y; Korennoy, F I; Sultanov, A A; Kadyrov, A S; Kushubaev, D B; Bakishev, T G
2017-09-01
The objective of this study was to zone the territory of the Republic of Kazakhstan (RK) into risk categories according to the probability of anthrax emergence in farm animals as stipulated by the re-activation of preserved natural foci. We used historical data on anthrax morbidity in farm animals during the period 1933 - 2014, collected by the veterinary service of the RK. The database covers the entire territory of the RK and contains 4058 anthrax outbreaks tied to 1798 unique locations. Considering the strongly pronounced natural focality of anthrax, we employed environmental niche modeling (Maxent) to reveal patterns in the outbreaks' linkages to specific combinations of environmental factors. The set of bioclimatic factors BIOCLIM, derived from remote sensing data, the altitude above sea level, the land cover type, the maximum green vegetation fraction (MGVF) and the soil type were examined as explanatory variables. The model demonstrated good predictive ability, while the MGVF, the bioclimatic variables reflecting precipitation level and humidity, and the soil type were found to contribute most significantly to the model. A continuous probability surface was obtained that reflects the suitability of the study area for the emergence of anthrax outbreaks. The surface was turned into a categorical risk map by averaging the probabilities within the administrative divisions at the 2nd level and putting them into four categories of risk, namely: low, medium, high and very high risk zones, where very high risk refers to more than 50% suitability to the disease re-emergence and low risk refers to less than 10% suitability. The map indicated increased risk of anthrax re-emergence in the districts along the northern, eastern and south-eastern borders of the country. 
It was recommended that the national veterinary service use the risk map in developing contra-epizootic measures aimed at preventing anthrax re-emergence in historically affected regions of the RK. The map can also be considered when developing large-scale construction projects in areas containing preserved soil foci of anthrax.
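The final categorization step, from a continuous suitability surface to discrete risk zones, can be sketched as below. The <10% and >50% cut-offs come from the abstract; the two intermediate break points are assumptions added for illustration.

```python
def risk_category(mean_suitability: float) -> str:
    """Map a district's mean Maxent suitability (0-1) to a risk zone.
    Only the 0.10 and 0.50 thresholds are stated in the abstract; the
    0.30 and 0.50 middle breaks are assumed for this sketch."""
    if mean_suitability < 0.10:
        return "low"
    if mean_suitability < 0.30:   # assumed break
        return "medium"
    if mean_suitability < 0.50:   # assumed break
        return "high"
    return "very high"
```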
Thermo-mechanical Design Methodology for ITER Cryodistribution cold boxes
NASA Astrophysics Data System (ADS)
Shukla, Vinit; Patel, Pratik; Das, Jotirmoy; Vaghela, Hitensinh; Bhattacharya, Ritendra; Shah, Nitin; Choukekar, Ketan; Chang, Hyun-Sik; Sarkar, Biswanath
2017-04-01
The ITER cryo-distribution (CD) system is responsible for the proper distribution of cryogen at the required mass flow rate, pressure and temperature to the users, namely the superconducting (SC) magnets and cryopumps (CPs). The CD system can also use the magnet structures as a thermal buffer in order to operate the cryo-plant at steady-state conditions as much as possible. A typical CD cold box is equipped with a liquid helium (LHe) bath, heat exchangers (HXs), cryogenic valves, a filter, heaters, a cold circulator, a cold compressor and process piping. The various load combinations likely to occur during the life cycle of the CD cold boxes are imposed on a representative model and their impacts on the system are analyzed. This study shows that a break of the insulation vacuum during nominal operation (NO), together with a seismic event (Seismic Level-2), is the most stringent load combination, producing a maximum stress of 224 MPa. However, the NO+SMHV (Séismes Maximaux Historiquement Vraisemblables = Maximum Historically Probable Earthquakes) load combination has the least safety margin and will form the basis of the design of the CD system and its sub-components. This paper presents and compares the results of the different load combinations likely to occur on a typical CD cold box.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rabitsch, W.B.
1997-07-01
Asymmetries of bilaterally symmetrical morphological traits in workers of the ant Formica pratensis Retzius were compared at sites with different levels of metal contamination and between mature and pre-mature colonies. Statistical analyses of the right-minus-left differences revealed that their distributions fit the assumptions of fluctuating asymmetry (FA). No directional asymmetry or antisymmetry was present. Mean measurement error accounts for a third of the variation, but the maximum measurement error was 65%. Although significant differences in FA among ants were observed, the inconsistent results make it difficult to uncover a clear pattern. Lead, cadmium, and zinc concentrations in the ants decreased with distance from the contamination source, but no relation was found between FA and the heavy metal levels. Ants from the pre-mature colonies were more asymmetrical than those from mature colonies but accumulated less metal. The use of asymmetry measures in ecotoxicology and biomonitoring is criticized, but should remain widely applicable if statistical assumptions are complemented by genetic and historical data.
NASA Astrophysics Data System (ADS)
Rahman, M. Moklesur; Bai, Ling; Khan, Nangyal Ghani; Li, Guohui
2018-02-01
The Himalayan-Tibetan region has a long history of devastating earthquakes with widespread casualties and socio-economic damage. Here, we conduct a probabilistic seismic hazard analysis by incorporating incomplete historical earthquake records along with instrumental earthquake catalogs for the Himalayan-Tibetan region. Historical earthquake records extending back more than 1000 years and an updated, homogenized and declustered instrumental earthquake catalog since 1906 are utilized. The essential seismicity parameters, namely the mean seismicity rate γ, the Gutenberg-Richter b value, and the maximum expected magnitude Mmax, are estimated using a maximum likelihood algorithm that accounts for the incompleteness of the catalog. To compute the hazard, three seismogenic source models (smoothed gridded, linear, and areal sources) and two sets of ground motion prediction equations are combined by means of a logic tree to account for the epistemic uncertainties. The peak ground acceleration (PGA) and spectral acceleration (SA) at 0.2 and 1.0 s are predicted for 2 and 10% probabilities of exceedance over 50 years, assuming bedrock conditions. The resulting PGA and SA maps show significant spatio-temporal variation in the hazard values. In general, the hazard is found to be much higher than in previous studies for regions where great earthquakes have actually occurred. The use of historical and instrumental earthquake catalogs in combination with multiple seismogenic source models provides better seismic hazard constraints for the Himalayan-Tibetan region.
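The abstract does not give the exact estimator used, but the standard maximum-likelihood estimate of the Gutenberg-Richter b value (Aki's estimator) can be sketched and checked on a synthetic catalog. The catalog parameters below (b = 1, completeness magnitude 4.0) are illustrative assumptions:

```python
import numpy as np

def aki_b_value(mags: np.ndarray, m_c: float, dm: float = 0.0) -> float:
    """Maximum-likelihood Gutenberg-Richter b-value (Aki's estimator),
    with Utsu's half-bin correction for binned magnitudes of width dm:
    b = log10(e) / (mean(M) - (m_c - dm/2)), over magnitudes >= m_c."""
    m = mags[mags >= m_c]
    return np.log10(np.e) / (m.mean() - (m_c - dm / 2.0))

# Synthetic catalog drawn from a Gutenberg-Richter distribution with b = 1:
# magnitudes above the completeness level m_c are exponentially distributed
# with rate beta = b * ln(10).
rng = np.random.default_rng(0)
m_c, true_b = 4.0, 1.0
mags = m_c + rng.exponential(scale=1.0 / (true_b * np.log(10)), size=50_000)

b_hat = aki_b_value(mags, m_c)  # recovers approximately 1.0
```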
Anomalous Variability in Antarctic Sea Ice Extents During the 1960s With the Use of Nimbus Data
NASA Technical Reports Server (NTRS)
Gallaher, David W.; Campbell, G. Garrett; Meier, Walter N.
2014-01-01
The Nimbus I, II, and III satellites provide a new opportunity for climate studies in the 1960s. The rescue of the visible and infrared imager data enabled the use of the early Nimbus data to determine sea ice extent. A qualitative analysis of the early NASA Nimbus missions has revealed Antarctic sea ice extents that are significantly larger and smaller than the historic 1979-2012 passive microwave record. The September 1964 mean ice area is 19.7×10⁶ km² ± 0.3×10⁶ km². This is more than 250,000 km² greater than the 19.44×10⁶ km² seen in the new 2012 historic maximum. However, in August 1966 the maximum sea ice extent fell to 15.9×10⁶ km² ± 0.3×10⁶ km². This is more than 1.5×10⁶ km² below the passive microwave record of 17.5×10⁶ km² set in September of 1986. This variation between 1964 and 1966 represents a change in maximum sea ice of over 3×10⁶ km² in just two years. These inter-annual variations, while large, are small when compared to the Antarctic seasonal cycle.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fernandez, P.; Vilanova, R.M.; Martinez, C.
2000-05-15
Historical records of the deposition fluxes of polycyclic aromatic hydrocarbons (PAH) in 10 remote high-altitude lakes distributed throughout Europe have been studied. Cores from each site were dated radiometrically, and the results were used to reconstruct the pollutant changes between 1830 and the present. In general, both PAH pyrolytic fluxes and concentrations increased from uniform background levels at the turn of the century to maximum values in 1960-1980. After these peak values, a slight decrease to present-day levels has been observed in some lakes, though they are still 3-20 times greater than in the preindustrial period. Distinctive features in the downcore PAH profiles and concentrations between sites allowed differentiation between five regions in Europe: peripheral areas (Norway and the Iberian Peninsula), Pyrenees and western Alps, central Alps, Tatra Mountains, and the Arctic. Atmospheric PAH inventories were estimated from the vertical integration of sedimentary inventories, using ²¹⁰Pb to correct for post-depositional transport processes. This approach consistently reduces variability among lakes from the same region. The results obtained identify the lakes in the Tatra Mountains and that on Spitsbergen Island as those of highest and lowest atmospheric PAH input, respectively. The other lakes exhibit smaller differences, although their atmospheric inventory values group consistently with the above-mentioned regions.
Global positioning system surveying to monitor land subsidence in Sacramento Valley, California, USA
Ikehara, M.E.
1994-01-01
A subsidence research program began in 1985 to document the extent and magnitude of land subsidence in Sacramento Valley, California, an area of about 15 600 km², using Global Positioning System (GPS) surveying. In addition to periodic conventional spirit levelling, an examination was made of the changes in GPS-derived ellipsoidal height differences (summary differences) between pairs of adjacent bench marks in central Sacramento Valley from 1986 to 1989. The average rates of land subsidence in the southern Sacramento Valley for the past several decades were determined by comparing GPS-derived orthometric heights with historic published elevations. A maximum average rate of 0.053 m year⁻¹ (0.90 m in 17 years) of subsidence has been measured.
Staunton, Kyran M; Nakamura, Akihiro; Burwell, Chris J; Robson, Simon K A; Williams, Stephen E
2016-01-01
Understanding how the environment influences patterns of diversity is vital for effective conservation management, especially in a changing global climate. While assemblage structure and species richness patterns are often correlated with current environmental factors, historical influences may also be considerable, especially for taxa with poor dispersal abilities. Mountain-top regions throughout tropical rainforests can act as important refugia for taxa characterised by low dispersal capacities such as flightless ground beetles (Carabidae), an ecologically significant predatory group. We surveyed flightless ground beetles along elevational gradients in five different subregions within the Australian Wet Tropics World Heritage Area to investigate (1) whether the diversity and composition of flightless ground beetles are elevationally stratified, and, if so, (2) what environmental factors (other than elevation per se) are associated with these patterns. Generalised linear models and model averaging techniques were used to relate patterns of diversity to environmental factors. Unlike most taxonomic groups, flightless ground beetles increased in species richness and abundance with elevation. Additionally, each subregion consisted of relatively distinct assemblages containing a high level of regional endemic species. Species richness was most strongly and positively associated with historical and current climatic stabilities and negatively associated with severity of recent disturbance (treefalls). Assemblage composition was associated with latitude and historical and current climatic conditions. Although the results need to be interpreted carefully due to inter-correlation between historical and current climatic variables, our study is in agreement with the hypothesis that upland refugia provided stable climatic conditions since the last glacial maximum, and supported a diverse fauna of flightless beetle species. 
These findings are important for conservation management as upland habitats become increasingly threatened by climate change.
PMID:27192085
Identifying evidence of climate change impact on extreme events in permeable chalk catchments
NASA Astrophysics Data System (ADS)
Butler, A. P.; Nubert, S.
2009-12-01
The permeable chalk catchments of southern England are vital for the economy and well-being of the UK. They are not only an important water resource: their freely draining soils support intensive agricultural production, and the rolling downs and chalk streams provide important habitats for many protected plant and animal species. Consequently, there are concerns about the potential impact of climate change on such catchments, particularly in relation to groundwater recharge. Of major concern are possible changes in extreme events, such as groundwater floods and droughts, as any increase in the frequency and/or severity of these has important consequences for water resources, ecological systems and local infrastructure. Studies of climate change impact on extreme events for such catchments have indicated that, under medium and high emissions scenarios, droughts are likely to become more severe whilst floods less so. However, given the uncertainties in such predictions and the inherent variability in historic data, producing definitive evidence of changes in flood/drought frequency/severity poses a significant challenge. Thus, there is a need for specific extreme event statistics that can be used as indicators of actual climate change in streamflow and groundwater level observations. Identifying such indicators that are sufficiently robust requires catchments with long historic time series data. One such catchment is the River Lavant, an intermittent chalk stream in West Sussex, UK. Located within this catchment is Chilgrove House, the site of the UK's longest-running groundwater monitoring well (with a continuous record of water level observations of varying frequency dating back to 1836). Using a variety of meteorological datasets, the behaviour of the catchment has been modelled, from 1855 to present, using a 'leaky aquifer' conceptual model. Model calibration was based on observed daily streamflow, at a gauging station just outside the town of Chichester, from 1970.
Long-term performance was assessed using groundwater levels at various long-period observation wells, including Chilgrove. Extreme event analyses (annual maximum daily flow, annual minimum groundwater level) based on historic model runs, looking at successive 30-year time periods, show high variability in the values of extreme events. However, there is far less variation (by an order of magnitude) in more frequent (i.e. less extreme) events with a recurrence interval of around 0.6 (i.e. a return period of around 1.67 years). Simulations of climate change impact for 2020 emission scenarios using UKCIP02 data give 0.6-recurrence estimates that are significantly different (at the 1% confidence level) from those obtained from historic data, which is not the case for more extreme events. It is proposed that, at least for such permeable catchments, deviations from historic values of this relatively frequent recurrence interval provide a more robust indicator for detecting evidence of climate change than focusing on much rarer, albeit more dramatic, events.
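The recurrence-interval statistic discussed above can be estimated empirically from an annual-maximum series; a recurrence interval of 0.6 corresponds to a return period of 1/0.6 ≈ 1.67 years. A minimal sketch using the Weibull plotting position (illustrative function names, not the authors' code):

```python
import numpy as np

def return_periods(annual_maxima):
    """Empirical return periods from an annual-maximum series using the
    Weibull plotting position T = (n + 1) / rank, where rank 1 is the
    largest value in the series."""
    x = np.asarray(annual_maxima, dtype=float)
    n = x.size
    # Double argsort yields ascending rank positions 0..n-1;
    # subtracting from n makes rank 1 the largest event.
    ranks = n - np.argsort(np.argsort(x))
    return (n + 1) / ranks

def quantile_for_return_period(annual_maxima, T):
    """Value in the series whose empirical return period is closest to T years."""
    x = np.asarray(annual_maxima, dtype=float)
    Ts = return_periods(x)
    return x[np.argmin(np.abs(Ts - T))]
```

With a long record, the value returned for T ≈ 1.67 years is the kind of frequent-event statistic the study proposes as a climate change indicator.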
Matching current windstorms to historical analogues
NASA Astrophysics Data System (ADS)
Becker, Bernd; Maisey, Paul; Scannell, Claire; Vanvyve, Emilie; Mitchell, Lorna; Steptoe, Hamish
2015-04-01
European windstorms are capable of producing devastating socioeconomic impacts. They can cause power outages for millions of people, close transport networks, uproot trees, and cause walls, buildings and other structures to collapse, which in the worst cases has resulted in dozens of fatalities. In Europe windstorms present the greatest natural hazard risk for primary insurers and result in the greatest aggregate loss due to the high volume of claims. In the winter of 2013/2014 alone, storms Christian, Xaver, Dirk and Tini cost the insurance industry an estimated EUR 2.5 bn. Here we make use of a high-resolution (4 km) historical storm footprint catalogue which contains over 6000 storms. This catalogue was created using the 35-year ERA-Interim model reanalysis dataset, downscaled to 12 km and then to 4.4 km. This approach was taken in order to provide a long-term, high-resolution dataset consistent with Met Office high-resolution deterministic forecast capability for Europe. The footprints are defined as the maximum 3-second gust at each model grid point over a 72-hour period during each storm. Matches between current/forecast storm footprints and footprints from the historical catalogue are found using fingerprint identification techniques, by way of calculating image texture derived from the gray-level co-occurrence matrix (Haralick, 1973). The best match is found by first adding the current or forecast footprint to the stack of the historical storm catalogue. An "identical twin" or "best match" of this footprint is then sought from within this stack. This search is repeated for a set of 15 measures, including position of the strongest gusts, storm damage potential and 13 Haralick measures. Each time a candidate is found, the nearest neighbours are noted and a rank proximity measure is calculated. Finally, the Frobenius norm (the grid-point-averaged distance between the two fields) is calculated.
This provides an independent assessment of the goodness of fit made by the rank proximity measure. Using this technique, a series of potential historical footprints matching the current footprint is found. Each potential match is indexed according to its closeness to the current footprint, where an index rating of 0 is a perfect match or "identical twin". Such pattern matching of current and forecast windstorms against a historical archive can enable insurers to rapidly estimate likely losses and aid the timely deployment of staff and funds at the right level.
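The final Frobenius-norm step of the matching procedure can be sketched as follows. This illustrates only the distance ranking between gust footprints, not the Haralick texture measures, and the names are illustrative rather than the authors' implementation:

```python
import numpy as np

def frobenius_distance(a, b):
    """Grid-point-averaged Frobenius distance between two gust footprints
    (2-D arrays of maximum 3-second gusts on the same grid)."""
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    # np.linalg.norm on a 2-D array defaults to the Frobenius norm.
    return np.linalg.norm(a - b) / a.size

def best_matches(current, catalogue, k=3):
    """Return indices of the k historical footprints closest to `current`
    (nearest first) plus all distances; distance 0 is an 'identical twin'."""
    d = np.array([frobenius_distance(current, f) for f in catalogue])
    return np.argsort(d)[:k], d
```

In the study's workflow this distance serves as a cross-check on the rank proximity measure derived from the texture-based fingerprint search.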
The Diesel Exhaust in Miners Study: I. Overview of the Exposure Assessment Process
Stewart, Patricia A.; Coble, Joseph B.; Vermeulen, Roel; Schleiff, Patricia; Blair, Aaron; Lubin, Jay; Attfield, Michael; Silverman, Debra T.
2010-01-01
This report provides an overview of the exposure assessment process for an epidemiologic study that investigated mortality, with a special focus on lung cancer, associated with diesel exhaust (DE) exposure among miners. Details of several components are provided in four other reports. A major challenge for this study was the development of quantitative estimates of historical exposures to DE. There is no single standard method for assessing the totality of DE, so respirable elemental carbon (REC), a component of DE, was selected as the primary surrogate in this study. Air monitoring surveys at seven of the eight study mining facilities were conducted between 1998 and 2001 and provided reference personal REC exposure levels and measurements for other agents and DE components in the mining environment. (The eighth facility had closed permanently prior to the surveys.) Exposure estimates were developed for mining facility/department/job/year combinations. A hierarchical grouping strategy was developed for assigning exposure levels to underground jobs [based on job titles, on the amount of time spent in various areas of the underground mine, and on similar carbon monoxide (CO, another DE component) concentrations] and to surface jobs (based on the use of, or proximity to, diesel-powered equipment). Time trends in air concentrations for underground jobs were estimated from mining facility-specific prediction models using diesel equipment horsepower, total air flow rates exhausted from the underground mines, and, because there were no historical REC measurements, historical measurements of CO. Exposures to potentially confounding agents, i.e. respirable dust, silica, radon, asbestos, and non-diesel sources of polycyclic aromatic hydrocarbons, also were assessed. Accuracy and reliability of the estimated REC exposure levels were evaluated by comparison with several smaller datasets and by development of alternative time trend models.
During 1998–2001, the average measured REC exposure level by facility ranged from 40 to 384 μg m−3 for the underground workers and from 2 to 6 μg m−3 for the surface workers. For one prevalent underground job, ‘miner operator’, the maximum annual REC exposure estimate by facility ranged up to 685% greater than the corresponding 1998–2001 value. A comparison of the historical CO estimates from the time trend models with 1976–1977 CO measurements not used in the modeling found an overall median relative difference of 29%. Other comparisons showed similar levels of agreement. The assessment process indicated large differences in REC exposure levels over time and across the underground operations. Method evaluations indicated that the final estimates were consistent with those from alternative time trend models and demonstrated moderate to high agreement with external data. PMID:20876233
Land subsidence in the San Joaquin Valley, California, USA, 2007-2014
NASA Astrophysics Data System (ADS)
Sneed, M.; Brandt, J. T.
2015-11-01
Rapid land subsidence was recently measured using multiple methods in two areas of the San Joaquin Valley (SJV): between Merced and Fresno (El Nido), and between Fresno and Bakersfield (Pixley). Recent land-use changes and diminished surface-water availability have led to increased groundwater pumping, groundwater-level declines, and land subsidence. Differential land subsidence has reduced the flow capacity of water-conveyance systems in these areas, exacerbating flood hazards and affecting the delivery of irrigation water. Vertical land-surface changes during 2007-2014 were determined by using Interferometric Synthetic Aperture Radar (InSAR), Continuous Global Positioning System (CGPS), and extensometer data. Results of the InSAR analysis indicate that about 7600 km² subsided 50-540 mm during 2008-2010; CGPS and extensometer data indicate that these rates continued or accelerated through December 2014. The maximum InSAR-measured rate of 270 mm yr⁻¹ occurred in the El Nido area, and is among the largest rates ever measured in the SJV. In the Pixley area, the maximum InSAR-measured rate during 2008-2010 was 90 mm yr⁻¹. Groundwater was an important part of the water supply in both areas, and pumping increased when land use changed or when surface water was less available. This increased pumping caused groundwater-level declines to near or below historical lows during the drought periods 2007-2009 and 2012-present. Long-term groundwater-level and land-subsidence monitoring in the SJV is critical for understanding the interconnection of land use, groundwater levels, and subsidence, and evaluating management strategies that help mitigate subsidence hazards to infrastructure while optimizing water supplies.
Consanguinity in two Uruguayan cities: historical evolution and characteristics (1800-1994).
Lusiardo, A; Barreto, I; Hidalgo, P C; Bonilla, C; Bertoni, B; Portas, M; Sans, M
2004-01-01
Information about consanguinity in Uruguay is scarce and limited to the end of the 20th century. Our aim was to determine the frequency and characteristics of consanguineous marriages, as well as chronological trends, in two Uruguayan cities over almost two centuries. We analysed 28,393 Roman Catholic Church marriage records and Diocesan consanguinity dispensations belonging to the cities of Melo (Northeast) and Montevideo (South) for the period 1800-1994. Of these, 633 (2.23%) marriages were consanguineous. Among them, first cousin marriages were the most common (58.8% of all consanguineous marriages, including double consanguineous), especially those where the bride and groom were related through their maternal side. During the first decades of the 19th century both regions showed low levels of consanguinity. Consanguinity reached its maximum during the mid-1800s and decreased significantly throughout the 20th century. The overall mean coefficients of inbreeding were moderate in both cases, being greater in the Northeast (α = 0.00165) than in the South (α = 0.00089). The low level of consanguinity as well as the structure of consanguineous marriages (distribution by degrees) is similar to that found in other southern South American countries. Temporal trends are similar to those found in industrialized regions of Europe, with maximum inbreeding levels during the middle-late 19th century; however, the clear predominance of first cousin unions differs from most of the data for European countries. Small differences between the two cities can be related to diverse factors, such as socio-economic conditions, ethnic origin, immigration, and sampling.
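The mean coefficient of inbreeding reported above is conventionally computed as alpha = Σ nᵢFᵢ / N, weighting the kinship coefficient F of each marriage type (1/16 for first cousins, 1/8 for uncle-niece, and so on) by its count nᵢ among N marriages. A minimal illustrative sketch; the counts used in the example are hypothetical, not the study's data:

```python
# Standard inbreeding coefficients F for common consanguineous marriage types.
F = {
    "uncle_niece": 1 / 8,
    "first_cousin": 1 / 16,
    "first_cousin_once_removed": 1 / 32,
    "second_cousin": 1 / 64,
}

def mean_inbreeding(counts, total_marriages):
    """Mean coefficient of inbreeding: alpha = sum_i (n_i * F_i) / N,
    where counts maps marriage type to its number n_i among N marriages."""
    return sum(n * F[kind] for kind, n in counts.items()) / total_marriages

# Hypothetical example: 16 first-cousin marriages among 1000 total.
alpha = mean_inbreeding({"first_cousin": 16}, 1000)
```

Marriages between unrelated spouses contribute F = 0, so only the consanguineous types need to be counted.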
Earthquake response of heavily damaged historical masonry mosques after restoration
NASA Astrophysics Data System (ADS)
Altunışık, Ahmet Can; Fuat Genç, Ali
2017-10-01
Restoration works have accelerated substantially in Turkey in the last decade. Many historical buildings, mosques, minarets, bridges, towers and other structures have been restored. With these restorations an important issue arises, namely how restoration work affects the structure. For this reason, we aimed to investigate the effect of restoration on the earthquake response of a historical masonry mosque, considering the openings in the masonry dome. For this purpose, we used the Hüsrev Pasha Mosque, which is located in the Ortakapı district in the old city of Van, Turkey. The region of Van is in an active seismic zone; therefore, earthquake analyses were performed in this study. First, a finite element model of the mosque was constructed considering the restoration drawings and 16 window openings in the dome. Then a model was constructed with eight window openings. Structural analyses were performed under dead load and earthquake load, and the mode superposition method was used in the analyses. Maximum displacements, maximum and minimum principal stresses and shear stresses are presented as contour diagrams. The results are assessed according to the Turkish Earthquake Code (TEC, 2007) and compared between the 8- and 16-window-opening cases. The results show that reducing the number of window openings affected the structural behavior of the mosque positively.
Effects of Climate Change on Flood Frequency in the Pacific Northwest
NASA Astrophysics Data System (ADS)
Gergel, D. R.; Stumbaugh, M. R.; Lee, S. Y.; Nijssen, B.; Lettenmaier, D. P.
2014-12-01
A key concern about climate change as related to water resources is the potential for changes in hydrologic extremes, including flooding. We explore changes in flood frequency in the Pacific Northwest using downscaled output from ten Global Climate Models (GCMs) from the Coupled Model Intercomparison Project Phase 5 (CMIP5) for historical forcings (1950-2005) and future Representative Concentration Pathways (RCPs) 4.5 and 8.5 (2006-2100). We use archived output from the Integrated Scenarios Project (ISP) (http://maca.northwestknowledge.net/), which uses the Multivariate Adaptive Constructed Analogs (MACA) method for statistical downscaling. The MACA-downscaled GCM output was then used to force the Variable Infiltration Capacity (VIC) hydrology model with a 1/16th degree spatial resolution and a daily time step. For each of the 238 HUC-08 areas within the Pacific Northwest (USGS Hydrologic Region 17), we computed, from the ISP archive, the series of maximum daily runoff values (a surrogate for the annual maximum flood), and then the mean annual flood. Finally, we computed the ratios of the RCP4.5 and RCP8.5 mean annual floods to their corresponding values for the historical period. We evaluate spatial patterns in the results. For snow-dominated watersheds, the changes are dominated by reductions in flood frequency in basins that currently have spring-dominant floods, and increases in snow-affected basins with fall-dominant floods. In low-elevation basins west of the Cascades, changes in flooding are more directly related to changes in precipitation extremes. We further explore the nature of these effects by evaluating the mean Julian day of the annual maximum flood for each HUC-08 and how this changes between the historical and RCP4.5 and RCP8.5 scenarios.
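The ratio computation described above (the mean of annual-maximum daily runoff under an RCP scenario divided by its historical counterpart) can be sketched for a single basin as follows; the function names are illustrative, not the Integrated Scenarios code:

```python
import numpy as np

def mean_annual_flood(daily_runoff, years):
    """Mean of the annual maxima of a daily runoff series, where `years`
    labels the calendar year of each daily value."""
    daily_runoff = np.asarray(daily_runoff, dtype=float)
    years = np.asarray(years)
    maxima = [daily_runoff[years == y].max() for y in np.unique(years)]
    return float(np.mean(maxima))

def flood_change_ratio(hist_runoff, hist_years, rcp_runoff, rcp_years):
    """Ratio of future (RCP) to historical mean annual flood for one basin;
    a value above 1 indicates an increase in the mean annual flood."""
    return (mean_annual_flood(rcp_runoff, rcp_years)
            / mean_annual_flood(hist_runoff, hist_years))
```

In the study this ratio is mapped across all 238 HUC-08 areas to reveal the spatial patterns discussed in the abstract.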
40 CFR 141.66 - Maximum contaminant levels for radionuclides.
Code of Federal Regulations, 2012 CFR
2012-07-01
... 40 Protection of Environment 24 2012-07-01 2012-07-01 false Maximum contaminant levels for... Regulations: Maximum Contaminant Levels and Maximum Residual Disinfectant Levels § 141.66 Maximum contaminant levels for radionuclides. (a) [Reserved] (b) MCL for combined radium-226 and -228. The maximum...
40 CFR 141.66 - Maximum contaminant levels for radionuclides.
Code of Federal Regulations, 2014 CFR
2014-07-01
... 40 Protection of Environment 23 2014-07-01 2014-07-01 false Maximum contaminant levels for... Regulations: Maximum Contaminant Levels and Maximum Residual Disinfectant Levels § 141.66 Maximum contaminant levels for radionuclides. (a) [Reserved] (b) MCL for combined radium-226 and -228. The maximum...
40 CFR 141.66 - Maximum contaminant levels for radionuclides.
Code of Federal Regulations, 2013 CFR
2013-07-01
... 40 Protection of Environment 24 2013-07-01 2013-07-01 false Maximum contaminant levels for... Regulations: Maximum Contaminant Levels and Maximum Residual Disinfectant Levels § 141.66 Maximum contaminant levels for radionuclides. (a) [Reserved] (b) MCL for combined radium-226 and -228. The maximum...
40 CFR 141.66 - Maximum contaminant levels for radionuclides.
Code of Federal Regulations, 2011 CFR
2011-07-01
... 40 Protection of Environment 23 2011-07-01 2011-07-01 false Maximum contaminant levels for... Regulations: Maximum Contaminant Levels and Maximum Residual Disinfectant Levels § 141.66 Maximum contaminant levels for radionuclides. (a) [Reserved] (b) MCL for combined radium-226 and -228. The maximum...
de Oliveira Bünger, Mariana; Fernanda Mazine, Fiorella; Forest, Félix; Leandro Bueno, Marcelo; Renato Stehmann, João; Lucas, Eve J
2016-12-01
Eugenia sect. Phyllocalyx Nied. includes 14 species endemic to the Neotropics, mostly distributed in the Atlantic coastal forests of Brazil. Here the first comprehensive phylogenetic study of this group is presented, and this phylogeny is used as the basis to evaluate the recent infrageneric classification of Eugenia sensu lato (s.l.), to trace the evolution of traits in the group, and to test hypotheses associated with the history of this clade. A total of 42 taxa, of which 14 belonged to Eugenia sect. Phyllocalyx, were sampled for one nuclear marker (ribosomal internal transcribed spacer) and four plastid markers (psbA-trnH, rpl16, trnL-rpl32 and trnQ-rps16). Relationships were reconstructed using Bayesian analysis and maximum likelihood. Additionally, ancestral area analysis and modelling methods were used to estimate species dispersal, comparing historically climatically stable areas (refuges) and unstable areas. Maximum likelihood and Bayesian inferences indicate that Eugenia sect. Phyllocalyx is paraphyletic, and the two clades recovered are characterized by combinations of morphological characters. Phylogenetic relationships support a link between Cerrado and south-eastern species and a difference in the composition of species from north-eastern and south-eastern Atlantic forest. Refugia and stable areas identified within unstable areas suggest that these areas were important in maintaining diversity in the Atlantic forest biodiversity hotspot. This study provides a robust phylogenetic framework to address important historical questions for Eugenia s.l. within an evolutionary context, supporting the need for better taxonomic study of one of the largest genera in the Neotropics. Furthermore, valuable insight is offered into diversification and biome shifts of plant species in the highly environmentally impacted Atlantic forest of South America.
Evidence is presented that climate stability in the south-eastern Atlantic forest during the Quaternary contributed to the highest levels of plant diversity being found in this region, which acted as a refugium.
Historical biogeography of the fern genus Deparia (Athyriaceae) and its relation with polyploidy.
Kuo, Li-Yaung; Ebihara, Atsushi; Shinohara, Wataru; Rouhan, Germinal; Wood, Kenneth R; Wang, Chun-Neng; Chiou, Wen-Liang
2016-11-01
The wide geographical distribution of many fern species is related to their high dispersal ability. However, very few studies have surveyed biological traits that could contribute to colonization success after dispersal. In this study, we applied phylogenetic approaches to infer the historical biogeography of the fern genus Deparia (Athyriaceae, Eupolypods II). Because polyploids are suggested to have better colonization abilities and are abundant in Deparia, we also examined whether polyploidy could be correlated with long-distance dispersal events and whether polyploidy could play a role in these dispersals/establishment and range expansion. Maximum likelihood and Bayesian phylogenetic reconstructions were based on a four-region combined cpDNA dataset (rps16-matK IGS, trnL-L-F, matK and rbcL; a total of 4252 characters) generated from 50 ingroup (ca. 80% of the species diversity) and 13 outgroup taxa. Using the same sequence alignment and maximum likelihood trees, we carried out molecular dating analyses. The resulting chronogram was used to reconstruct ancestral distribution using the DEC model and ancestral ploidy level using ChromEvol. We found that Deparia originated around 27.7 Ma in continental Asia/East Asia. Vicariant speciation might account for the disjunct distribution between East Asia and northeast North America. There were multiple independent long-distance dispersals to Africa/Madagascar (at least once), Southeast Asia (at least once), south Pacific islands (at least twice), Australia/New Guinea/New Zealand (at least once), and the Hawaiian Islands (at least once). In particular, the long-distance dispersal to the Hawaiian Islands was associated with polyploidization, and the dispersal rate was slightly higher in polyploids than in diploids. Moreover, we found five species showing recent infraspecific range expansions, all of which took place concurrently with polyploidization.
In conclusion, our study is the first to use phylogenetic and biogeographic analyses to explore the link between historical biogeography and ploidy evolution in a fern genus, and our results imply that polyploids might be better colonizers than diploids.
Trends and Variability of Global Fire Emissions Due To Historical Anthropogenic Activities
NASA Astrophysics Data System (ADS)
Ward, Daniel S.; Shevliakova, Elena; Malyshev, Sergey; Rabin, Sam
2018-01-01
Globally, fires are a major source of carbon from the terrestrial biosphere to the atmosphere, occurring on a seasonal cycle and with substantial interannual variability. To understand past trends and variability in sources and sinks of terrestrial carbon, we need quantitative estimates of global fire distributions. Here we introduce an updated version of the Fire Including Natural and Agricultural Lands model, version 2 (FINAL.2), modified to include multiday burning and enhanced fire spread rate in forest crowns. We demonstrate that the improved model reproduces the interannual variability and spatial distribution of fire emissions reported in present-day remotely sensed inventories. We use FINAL.2 to simulate historical (post-1700) fires and attribute past fire trends and variability to individual drivers: land use and land cover change, population growth, and lightning variability. Global fire emissions of carbon increase by about 10% between 1700 and 1900, reaching a maximum of 3.4 Pg C yr⁻¹ in the 1910s, followed by a decrease to about 5% below year-1700 levels by 2010. The decrease in emissions from the 1910s to the present day is driven mainly by land use change, with a smaller contribution from increased fire suppression due to increased human population, and is largest in Sub-Saharan Africa and South Asia. Interannual variability of global fire emissions is similar in the present day to that in the early historical period, but present-day wildfires would be more variable in the absence of land use change.
Long-term archives reveal shifting extinction selectivity in China's postglacial mammal fauna
Crees, Jennifer J.; Li, Zhipeng; Bielby, Jon; Yuan, Jing
2017-01-01
Ecosystems have been modified by human activities for millennia, and insights about ecology and extinction risk based only on recent data are likely to be both incomplete and biased. We synthesize multiple long-term archives (over 250 archaeological and palaeontological sites dating from the early Holocene to the Ming Dynasty and over 4400 historical records) to reconstruct the spatio-temporal dynamics of Holocene–modern range change across China, a megadiverse country experiencing extensive current-day biodiversity loss, for 34 mammal species over three successive postglacial time intervals. Our combined zooarchaeological, palaeontological, historical and current-day datasets reveal that both phylogenetic and spatial patterns of extinction selectivity have varied through time in China, probably in response both to cumulative anthropogenic impacts (an ‘extinction filter’ associated with vulnerable species and accessible landscapes being affected earlier by human activities) and also to quantitative and qualitative changes in regional pressures. China has experienced few postglacial global species-level mammal extinctions, and most species retain over 50% of their maximum estimated Holocene range despite millennia of increasing regional human pressures, suggesting that the potential still exists for successful species conservation and ecosystem restoration. Data from long-term archives also demonstrate that herbivores have experienced more historical extinctions in China, and carnivores have until recently displayed greater resilience. Accurate assessment of patterns of biodiversity loss and the likely predictive power of current-day correlates of faunal vulnerability and resilience is dependent upon novel perspectives provided by long-term archives. PMID:29167363
Camizuli, Estelle; Scheifler, Renaud; Garnier, Stéphane; Monna, Fabrice; Losno, Rémi; Gourault, Claude; Hamm, Gilles; Lachiche, Caroline; Delivet, Guillaume; Chateau, Carmela; Alibert, Paul
2018-02-21
Throughout history, ancient human societies exploited mineral resources all over the world, even in areas that are now protected and considered to be relatively pristine. Here, we show that past mining still has an impact on wildlife in some French protected areas. We measured cadmium, copper, lead, and zinc concentrations in topsoils and wood mouse kidneys from sites located in the Cévennes and the Morvan. The maximum levels of metals in these topsoils are one or two orders of magnitude greater than their commonly reported mean values in European topsoils. The transfer to biota was effective, as the lead concentration (and to a lesser extent, cadmium) in wood mouse kidneys increased with soil concentration, unlike copper and zinc, providing direct evidence that lead emitted in the environment several centuries ago is still bioavailable to free-ranging mammals. The negative correlation between kidney lead concentration and animal body condition suggests that historical mining activity may continue to play a role in the complex relationships between trace metal pollution and body indices. Ancient mining sites could therefore be used to assess the long-term fate of trace metals in soils and the subsequent risks to human health and the environment.
A geochemical record of the mining history of the Erme Estuary, south Devon, UK.
Price, Gregory D; Winkle, Karen; Gehrels, W Roland
2005-12-01
The concentration of selected trace metals (Cu, Pb and Zn) in salt-marsh sediments from within the Erme Estuary has been measured in order to assess possible historical sources of pollution. The Erme Estuary, south Devon, UK is an Area of Outstanding Natural Beauty and has remained largely unaffected by industrialisation, although a number of small silver-lead mines were in operation in the 1800s. Five cores reveal comparable geochemical profiles. An increase of lead at approximately 40 cm depth is observed, reaching maximum values of 427 ppm. Less distinct trends are revealed by zinc and copper, probably reflecting the lack of widespread mining for ores of these elements within the catchment and possible post-depositional mobility rendering the metal concentrations non-contemporaneous with the chemostratigraphy of lead. The geochemical analysis of the salt-marsh sediments provides a fairly robust chemostratigraphic scheme, and the likely sources of mine waste can be pinpointed within the catchment. Based upon the historical record of these mines, chemostratigraphic dating of the sediments can be achieved, providing an estimate of salt-marsh accretion rates and sea-level rise.
Linking the historic 2011 Mississippi River flood to coastal wetland sedimentation
Falcini, Federico; Khan, Nicole S.; Macelloni, Leonardo; Horton, Benjamin P.; Lutken, Carol B.; McKee, Karen L.; Santoleri, Rosalia; Colella, Simone; Li, Chunyan; Volpe, Gianluca; D’Emidio, Marco; Salusti, Alessandro; Jerolmack, Douglas J.
2012-01-01
Wetlands in the Mississippi River deltaic plain are deteriorating in part because levees and control structures starve them of sediment. In spring 2011, a record-breaking flood brought discharge on the lower Mississippi River to dangerous levels, forcing managers to divert up to 3500 m3/s of water to the Atchafalaya River Basin. Here we quantify differences between the Mississippi and Atchafalaya River inundation and sediment-plume patterns using field-calibrated satellite data, and assess the impact these outflows had on wetland sedimentation. We characterize hydrodynamics and suspended sediment patterns of the Mississippi River plume using in-situ data collected during the historic flood. We show that the focused, high-momentum jet from the leveed Mississippi delivered sediment far offshore. In contrast, the plume from the Atchafalaya was more diffuse; diverted water inundated a large area; and sediment was trapped within the coastal current. Maximum sedimentation (up to several centimetres) occurred in the Atchafalaya Basin despite the larger sediment load carried by the Mississippi. Minimum accumulation occurred along the shoreline between these river sources. Our findings provide a mechanistic link between river-mouth dynamics and wetland sedimentation patterns that is relevant for plans to restore deltaic wetlands using artificial diversions.
Xia, Jianyang; McGuire, A. David; Lawrence, David; ...
2017-01-26
Realistic projection of future climate-carbon (C) cycle feedbacks requires better understanding and an improved representation of the C cycle in permafrost regions in the current generation of Earth system models. Here we evaluated 10 terrestrial ecosystem models for their estimates of net primary productivity (NPP) and responses to historical climate change in permafrost regions in the Northern Hemisphere. In comparison with the satellite estimate from the Moderate Resolution Imaging Spectroradiometer (MODIS; 246 ± 6 g C m−2 yr−1), most models produced higher NPP (309 ± 12 g C m−2 yr−1) over the permafrost region during 2000–2009. By comparing the simulated gross primary productivity (GPP) with a flux tower-based database, we found that although mean GPP among the models was only overestimated by 10% over 1982–2009, there was a twofold discrepancy among models (380 to 800 g C m−2 yr−1), which mainly resulted from differences in simulated maximum monthly GPP (GPPmax). Most models overestimated C use efficiency (CUE) as compared to observations at both regional and site levels. Further analysis shows that model variability of GPP and CUE are nonlinearly correlated to variability in specific leaf area and the maximum rate of carboxylation by the enzyme Rubisco at 25°C (Vcmax_25), respectively. The models also varied in their sensitivities of NPP, GPP, and CUE to historical changes in climate and atmospheric CO2 concentration. In conclusion, these results indicate that model predictive ability of the C cycle in permafrost regions can be improved by better representation of the processes controlling CUE and GPPmax as well as their sensitivity to climate change.
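The evaluation quantities in this abstract (mean NPP bias against the MODIS benchmark, and carbon use efficiency, conventionally CUE = NPP/GPP) can be sketched with toy numbers. Only the MODIS value below is from the text; the per-model fluxes are invented for illustration:

```python
import numpy as np

# Illustrative annual fluxes (g C m-2 yr-1). The MODIS benchmark NPP is the
# value quoted in the abstract; the per-model numbers are invented.
modis_npp = 246.0
model_npp = np.array([290.0, 310.0, 330.0])
model_gpp = np.array([520.0, 600.0, 700.0])

mean_npp_bias = model_npp.mean() - modis_npp   # mean overestimate vs MODIS
cue = model_npp / model_gpp                    # carbon use efficiency, NPP/GPP
print(mean_npp_bias, cue.round(2))             # 64.0 [0.56 0.52 0.47]
```

A spread in simulated GPP with similar NPP, as in this toy set, directly produces the kind of inter-model CUE spread the abstract describes.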
NASA Astrophysics Data System (ADS)
Xia, Jianyang; McGuire, A. David; Lawrence, David; Burke, Eleanor; Chen, Guangsheng; Chen, Xiaodong; Delire, Christine; Koven, Charles; MacDougall, Andrew; Peng, Shushi; Rinke, Annette; Saito, Kazuyuki; Zhang, Wenxin; Alkama, Ramdane; Bohn, Theodore J.; Ciais, Philippe; Decharme, Bertrand; Gouttevin, Isabelle; Hajima, Tomohiro; Hayes, Daniel J.; Huang, Kun; Ji, Duoying; Krinner, Gerhard; Lettenmaier, Dennis P.; Miller, Paul A.; Moore, John C.; Smith, Benjamin; Sueyoshi, Tetsuo; Shi, Zheng; Yan, Liming; Liang, Junyi; Jiang, Lifen; Zhang, Qian; Luo, Yiqi
2017-02-01
Realistic projection of future climate-carbon (C) cycle feedbacks requires better understanding and an improved representation of the C cycle in permafrost regions in the current generation of Earth system models. Here we evaluated 10 terrestrial ecosystem models for their estimates of net primary productivity (NPP) and responses to historical climate change in permafrost regions in the Northern Hemisphere. In comparison with the satellite estimate from the Moderate Resolution Imaging Spectroradiometer (MODIS; 246 ± 6 g C m-2 yr-1), most models produced higher NPP (309 ± 12 g C m-2 yr-1) over the permafrost region during 2000-2009. By comparing the simulated gross primary productivity (GPP) with a flux tower-based database, we found that although mean GPP among the models was only overestimated by 10% over 1982-2009, there was a twofold discrepancy among models (380 to 800 g C m-2 yr-1), which mainly resulted from differences in simulated maximum monthly GPP (GPPmax). Most models overestimated C use efficiency (CUE) as compared to observations at both regional and site levels. Further analysis shows that model variability of GPP and CUE are nonlinearly correlated to variability in specific leaf area and the maximum rate of carboxylation by the enzyme Rubisco at 25°C (Vcmax_25), respectively. The models also varied in their sensitivities of NPP, GPP, and CUE to historical changes in climate and atmospheric CO2 concentration. These results indicate that model predictive ability of the C cycle in permafrost regions can be improved by better representation of the processes controlling CUE and GPPmax as well as their sensitivity to climate change.
NASA Astrophysics Data System (ADS)
Nasim, Wajid; Amin, Asad; Fahad, Shah; Awais, Muhammad; Khan, Naeem; Mubeen, Muhammad; Wahid, Abdul; Turan, Veysel; Rehman, Muhammad Habibur; Ihsan, Muhammad Zahid; Ahmad, Shakeel; Hussain, Sajjad; Mian, Ishaq Ahmad; Khan, Bushra; Jamal, Yousaf
2018-06-01
Climate change has adverse effects at the global, regional and local level, and heat wave events, linked to global warming, are a serious natural hazard in Pakistan. Historical (1997-2015) heat waves were analyzed over different provinces of Pakistan (Punjab, Sindh and Baluchistan) to identify the maximum temperature trend. Heat accumulation in Pakistan was simulated by a General Circulation Model (GCM) combined with three GHG (greenhouse gas) Representative Concentration Pathways (RCPs) (RCP-4.5, 6.0, and 8.5) using the SimCLIM model (a statistical downscaling model for future trend projections). Heat accumulation was projected for the years 2030, 2060, and 2090 for seasonal and annual analysis in Pakistan, and the projected increase relative to the baseline year (1995) was expressed as a percentage change. The projections show that Sindh and southern Punjab are most affected by heat accumulation. This study identified a rising trend of heat waves over the period 1997-2015 for Punjab, Sindh and Baluchistan, and found that most of the meteorological stations in Punjab and Sindh are highly prone to heat waves. According to the model projections, annual heat accumulation increases by 17%, 26%, and 32% in 2030, by 54%, 49%, and 86% in 2060, and by up to 62%, 75%, and 140% in 2090 for RCP-4.5, RCP-6.0, and RCP-8.5, respectively. Seasonally, heat accumulation is projected to reach maximum values in the monsoon, followed by the pre-monsoon and post-monsoon seasons. Heat accumulation in the monsoon may affect agricultural activities in the region under study.
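The projected increases above are expressed as percentage change from the 1995 baseline; a minimal sketch of that bookkeeping (the helper name and index values are ours, chosen so the RCP-4.5 figure for 2030 matches the +17% reported in the abstract):

```python
def percent_change(projected, baseline):
    """Percentage change of a projected index relative to a baseline value."""
    return 100.0 * (projected - baseline) / baseline

# Hypothetical annual heat-accumulation indices (arbitrary units).
baseline_1995 = 100.0
rcp45_2030 = 117.0
print(percent_change(rcp45_2030, baseline_1995))  # 17.0
```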
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Xiaodong; Hossain, Faisal; Leung, L. Ruby
The safety of large and aging water infrastructures is gaining attention in water management given the accelerated rate of change in landscape, climate and society. In current engineering practice, such safety is ensured by designing infrastructure for the Probable Maximum Precipitation (PMP). Recently, several physics-based numerical modeling approaches have been proposed to modernize the conventional and ad hoc PMP estimation approach. However, the underlying physics has not been investigated, and thus differing PMP estimates are obtained without clarity on their interpretation. In this study, we present a hybrid approach that takes advantage of both traditional engineering wisdom and modern climate science to estimate PMP for current and future climate conditions. The traditional PMP approach is improved and applied to outputs from an ensemble of five CMIP5 models. This hybrid approach is applied in the Pacific Northwest (PNW) to produce ensemble PMP estimates for the historical (1970-2016) and future (2050-2099) time periods. The new historical PMP estimates are verified by comparing them with the traditional estimates. PMP in the PNW is projected to increase by 50% over the current level by 2099 under the RCP8.5 scenario. Most of the increase is caused by warming, which mainly affects moisture availability, with minor contributions from changes in storm efficiency in the future. Changes in moisture tracks tend to reduce the future PMP. Compared with extreme precipitation, ensemble PMP exhibits higher internal variation; thus, high-quality data for both precipitation and related meteorological fields (temperature, wind fields) are required to reduce uncertainties in the ensemble PMP estimates.
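The traditional engineering step that such hybrid approaches build on is commonly moisture maximization, in which an observed storm's precipitation is scaled by the ratio of climatological maximum precipitable water to the storm's precipitable water. The abstract does not detail the procedure, so this is a generic sketch under that assumption, with invented storm values:

```python
def moisture_maximized_precip(storm_precip_mm, storm_pw_mm, max_pw_mm):
    """Scale an observed storm's precipitation by the ratio of the
    climatological maximum precipitable water to the storm's value."""
    return storm_precip_mm * (max_pw_mm / storm_pw_mm)

# Hypothetical 72-h storm: 300 mm of rain under 40 mm of precipitable water,
# maximized against a climatological maximum of 60 mm.
print(moisture_maximized_precip(300.0, 40.0, 60.0))  # 450.0
```

A PMP estimate is then typically taken as the envelope of such maximized values over many storms; applying the same scaling to warmer-climate moisture fields is one way the approach extends to future conditions.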
MEAN MAXIMUM TEMPERATURE DATA - U.S. HISTORICAL CLIMATOLOGY NETWORK (HCN)
The Carbon Dioxide Information Analysis Center, which includes the World Data Center-A for Atmospheric Trace Gases, is the primary global-change data and information analysis center of the U.S. Department of Energy (DOE). CDIAC's scope includes potentially anything and everything...
Metzger, Loren F.; Fio, John L.
1997-01-01
The installation of at least 100 residential wells in the town of Atherton, California, during the 1987-92 drought has raised concerns about the increased potential for land subsidence and salt water intrusion. Data were collected and monitoring networks were established to assess current processes and to monitor future conditions affecting these processes. Data include recorded pumpage, recorded operation time, and measured pumpage rates from 38 wells; water levels from 49 wells; water chemistry samples from 20 wells; and land-surface elevation data from 22 survey sites, including one National Geodetic Survey established benchmark. Geologic, lithologic, climatologic, well construction, well location, and historical information obtained from available reports and local, state, and Federal agencies were used in this assessment. Estimates of annual residential pumpage from 269 assumed active residential wells in the study area indicate that the average annual total pumping rate is between 395 and 570 acre-feet per year. The nine assumed active institutional wells are estimated to pump a total of about 200 acre-feet per year, or 35 to 50 percent of the total residential pumpage. Assuming that 510 acre-feet per year is the best estimate of annual residential pumpage, total pumpage of 710 acre-feet per year would represent about 19 percent of the study area's total water supply, as estimated. Depth-to-water-level measurements in wells during April 1993 through September 1995 typically ranged from less than 20 feet below land surface nearest to San Francisco Bay to more than 70 feet below land surface in upslope areas near exposed bedrock, depending on the season. This range, which is relatively high historically, is attributed to above normal rainfall between 1993 and 1995. Water levels expressed as hydraulic heads indicate the presence of three different hydrologic subareas on the basis of hydraulic-head contour configurations and flow direction.
That all measured hydraulic heads in the study area from April 1993 through September 1995 were above sea level indicates that saltwater intrusion was unlikely during this period. The chemistry of 20 well-water samples is characterized as a calcium magnesium carbonate bicarbonate type water. There is no evidence of saltwater intrusion from San Francisco Bay; however, water samples from wells nearest the bay and bedrock assemblages indicate a greater concentration of dissolved constituents and salinity. Dissolved-solids concentrations of water samples from wells in these areas exceeded 1,000 milligrams per liter, and several samples contained a substantial fraction of sodium and chloride. Water hardness for the 20 wells sampled averaged 471 milligrams per liter as calcium carbonate, which is classified as very hard. One well sample exceeded the primary maximum contaminant level for nitrate in drinking water, several wells exceeded the secondary maximum contaminant level for chloride and sulfate, and all wells sampled exceeded the secondary maximum contaminant level for total dissolved solids. Land subsidence and the resultant damage because of excessive ground-water pumping, in combination with periodic drought, have a well-documented history in the south San Francisco Bay area. Land-elevation surveying data from 1934 to 1967 indicate that subsidence ranged from 0.1 to approximately 0.5 foot in the vicinity of the study area. It could not be determined from land-surface elevation surveying data from 1993 whether subsidence is currently occurring in the study area.
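The hardness figure and "very hard" label above follow a standard conversion from dissolved calcium and magnesium to equivalent calcium carbonate; a sketch of that calculation (the conversion factors and class bounds are the commonly used USGS-style ones; the ion concentrations are invented to land near the reported 471 mg/L average):

```python
def hardness_as_caco3(ca_mg_per_l, mg_mg_per_l):
    """Total hardness (mg/L as CaCO3) from dissolved Ca and Mg, using the
    standard equivalent-weight conversion factors."""
    return 2.497 * ca_mg_per_l + 4.118 * mg_mg_per_l

def hardness_class(h):
    """Commonly used hardness categories (mg/L as CaCO3)."""
    if h <= 60:
        return "soft"
    if h <= 120:
        return "moderately hard"
    if h <= 180:
        return "hard"
    return "very hard"

# Hypothetical concentrations: 120 mg/L Ca and 41 mg/L Mg.
h = hardness_as_caco3(120.0, 41.0)
print(round(h), hardness_class(h))  # 468 very hard
```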
NASA Astrophysics Data System (ADS)
Casati, Michele; Straser, Valentino; Feron, Alessandro
2017-04-01
The purpose of this study is to verify a possible relationship between solar activity transitions (minimum and maximum), seismic activity and atmospheric circulation in a particular area. The hypothesis has already been advanced by other authors and is found in studies such as: [Sytinsky A.D., 1980, 1987, 1997][Mazzarella, Palumbo, 1989][Odintsov, et al., 2006][Khachikyan, Inchin, Lozbin, 2012][Czymzik, Markus, 2013][Nedeljko, Vujović, 2014]. The geographical area studied covers approximately 8 × 13 km and includes villages such as Fivizzano and Equi Terme, in north-west Tuscany, Italy, on the Lunigiana/Garfagnana border. The "North Apuan Fault Zone" (NAFZ) is found in the area of study, and major historical earthquakes have occurred in this area [Di Naccio Deborah, et al., 2013]. In this research, we compared the local seismicity with heavy rainfall that occurred in a short time frame in this area (measured by the daily rain gauge accumulations). These events occurred during the numerous floods from 2009 to 2013 (the transition between solar cycles SC23 and SC24 and the rise of solar cycle SC24). The data were provided by the hydrological sector of the Tuscan Region Hydrological Service (SIR) and the LaMMA consortium. In this study we hypothesize a slow and continuous destabilizing action on local geological structures, due to the multiple and violent atmospheric disturbances (V-shaped storms, flash floods, squall lines, etc.), a destabilization that led to an earthquake of magnitude Mw 5.36 on 21 June 2013.
Comparing the SIDC sunspot count with: a) the historical catalogue of local seismic events of magnitude M4.5+ (CPTI15, the 2015 version of the Parametric Catalogue of Italian Earthquakes); b) the recent earthquakes of magnitude M2.5+ that have occurred since 1984 (ISIDe working group (2016) version 1.0); and c) the reconstructed maximum annual flows of the Serchio river since 1750 and the daily maximum annual flows of the Magra river since 1939 (data provided by the Serchio River Authority and the Magra Interregional River Authority), we observe that floods and/or local seismic events occur more frequently around solar maxima and solar minima.
Comparative analysis of rainfall and landslide damage for landslide susceptibility zonation
NASA Astrophysics Data System (ADS)
Petrucci, O.; Pasqua, A. A.
2009-04-01
In the present work we applied a methodology tested in previous works to a regional sector of Calabria (Southern Italy), aiming to obtain a zonation of this area according to its susceptibility to landslides, as inferred from the combined analysis of past landslide events and the cumulative rainfall that triggered them. The complete series of both historical landslides and daily rainfall have been organised into two databases. For each landslide event, damage, mainly defined in relation to the reimbursement requests sent to the Department of Public Works, has been quantified using a procedure based on a Local Damage Index. Rainfall has been described by the maximum return period of the cumulative rainfall recorded during the landslide events. The damage index and population density, presumed to represent the location of vulnerable elements, have been referred to Thiessen polygons associated with the rain gauges operating at the time of each event. The procedure allowed us to classify the polygons composing the study area according to their susceptibility to damage during DHEs. In high-susceptibility polygons, severe damage occurs during rainfall characterised by low return periods; in medium-susceptibility polygons, maximum return period rainfall and induced damage show equal levels of exceptionality; in low-susceptibility polygons, high return period rainfall induces a low level of damage. The results can prove useful in establishing civil defence plans, managing emergencies, and prioritizing hazard mitigation measures.
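The three susceptibility classes amount to comparing the exceptionality of the triggering rainfall with that of the induced damage. The rule below is our illustrative reading of that classification, using ordinal ranks, not the authors' exact procedure:

```python
def susceptibility_class(rain_rank, damage_rank):
    """Classify a rain-gauge polygon by comparing the exceptionality of the
    triggering rainfall with that of the induced damage.

    Both arguments are ordinal ranks: 1 = low, 2 = medium, 3 = high.
    """
    if damage_rank > rain_rank:
        return "high"    # severe damage triggered by unexceptional rainfall
    if damage_rank < rain_rank:
        return "low"     # exceptional rainfall producing little damage
    return "medium"      # rainfall and damage equally exceptional

# Severe damage after rainfall with a low return period -> high susceptibility.
print(susceptibility_class(1, 3))  # high
```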
Effect of Tide Elevation on Extratropical Storm Surge in Northwest Europe
NASA Astrophysics Data System (ADS)
Keshtpoor, M.; Carnacina, I.; Yablonsky, R. M.
2016-12-01
Extratropical cyclones (ETCs) are the major storm surge-generating meteorological events in northwest Europe. The total water level increase induced by these ETCs is significantly influenced by the local tidal range, which exceeds 8 meters along the southwestern UK coastline. In particular, a surge-generating ETC during high tide may put coastal assets and infrastructure at risk, while during low tide the risk of surge induced by extreme ETC events is diminished. Here, the effect of tidal elevation on storm surge is investigated at 196 tide gauges in northwest Europe. A numerical, hydrodynamic model was developed using the Delft3D-FM framework to simulate the coastal hydrodynamics during ETCs. Then, 1750 historical events were simulated to investigate the pattern of coastal inundation. Results suggest that in areas with a large tidal range (~8 meters) and during the time period surrounding high or low tide, the pattern of coastal hydrodynamics is governed by tide and not storm surge. This result is most evident near the English Channel and Bristol Channel, where low-frequency maximum water levels are observed when storm surge is combined with high tide. In contrast, near the tidal phase reversal, coastal hydrodynamics respond primarily to the storm surge, and the low-frequency maximum water elevation largely depends on the surge. In areas with a small tidal range, ETC strength determines the pattern of coastal inundation.
Jager, Justin; Keyes, Katherine M.; Schulenberg, John E.
2015-01-01
This study examines historical variation in age 18–26 binge drinking trajectories, focusing on differences in both level of use and rates of change (growth) across cohorts of young adults over three decades. As part of the national Monitoring the Future Study, over 64,000 youths from the high school classes of 1976–2004 were surveyed at biennial intervals between ages 18 and 26. We found that, relative to past cohorts, recent cohorts both enter the age 18–26 age band engaging in lower levels and exit the age 18–26 age band engaging in higher levels of binge drinking. The reason for this reversal is that, relative to past cohorts, binge drinking among recent cohorts accelerates more quickly across ages 18–22 and decelerates more slowly across ages 22–26. Moreover, we found that historical increases in minimum legal drinking age account for a portion of the historical decline in age 18 level, while historical variation in social role acquisition (e.g., marriage, parenthood, and employment) accounts for a portion of the historical acceleration in age 18–22 growth. We also found that historical variation in the age 18–22 and age 22–26 growth rates was strongly and positively connected, suggesting common mechanism(s) underlie historical variation of both growth rates. Findings were generally consistent across gender and indicate that historical time is an important source of individual differences in young adult binge drinking trajectories. Beyond binge drinking, historical time may also inform the developmental course of other young adult risk behaviors, highlighting the interplay of epidemiology and etiology. PMID:26010381
NASA Astrophysics Data System (ADS)
Osman, Marisol; Vera, C. S.
2017-10-01
This work presents an assessment of the predictability and skill of climate anomalies over South America. The study considered a multi-model ensemble of seasonal forecasts of surface air temperature, precipitation and regional circulation from coupled global circulation models included in the Climate Historical Forecast Project. Predictability was evaluated through the estimation of the signal-to-total variance ratio, while prediction skill was assessed by computing anomaly correlation coefficients. Over the continent, both indicators present higher values in the tropics than in the extratropics for both surface air temperature and precipitation. Moreover, predictability and prediction skill for temperature are slightly higher in DJF than in JJA, while for precipitation they exhibit similar levels in both seasons. The largest values of predictability and skill for both variables and seasons are found over northwestern South America, with modest but still significant values for extratropical precipitation at southeastern South America and the extratropical Andes. The predictability levels of both variables in ENSO years are slightly higher, although with the same spatial distribution, than those obtained considering all years. Nevertheless, predictability at the tropics for both variables and seasons diminishes in both warm and cold ENSO years with respect to that in all years; the latter can be attributed to changes in the signal rather than in the noise. Predictability and prediction skill for low-level winds and upper-level zonal winds over South America were also assessed. Maximum levels of predictability for low-level winds were found where the maximum mean values are observed, i.e., in the regions associated with the equatorial trade winds, the midlatitude westerlies and the South American Low-Level Jet. Predictability maxima for upper-level zonal winds are located where the subtropical jet peaks. Seasonal changes in wind predictability are observed that seem to be related to seasonal changes in the signal, especially in the extratropics.
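The two verification measures named above (signal-to-total variance ratio and anomaly correlation coefficient) can be sketched for a toy ensemble hindcast. The array shapes, names, and random data are ours; this is a generic sketch of the measures, not the project's verification code:

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy hindcast: 30 forecast years x 10 ensemble members of seasonal
# anomalies, plus the corresponding observed anomalies.
fcst = rng.normal(size=(30, 10))
obs = rng.normal(size=30)

# Signal-to-total variance ratio: variance of the ensemble-mean "signal"
# divided by the total variance (signal plus ensemble noise).
signal_var = fcst.mean(axis=1).var()
total_var = fcst.var()
s2t = signal_var / total_var

# Anomaly correlation coefficient between the ensemble mean and observations.
acc = np.corrcoef(fcst.mean(axis=1), obs)[0, 1]
print(0.0 <= s2t <= 1.0, -1.0 <= acc <= 1.0)  # True True
```

By the law of total variance, the ratio lies between 0 and 1: values near 1 indicate the ensemble members agree (high potential predictability), values near 0 indicate the spread dominates.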
Vaccaro, John J.
1992-01-01
The sensitivity of groundwater recharge estimates to historic and projected climatic regimes was investigated for the semiarid Ellensburg basin, located on the Columbia Plateau, Washington. Recharge was estimated for predevelopment and current (1980s) land use conditions using a daily energy-soil-water balance model. A synthetic daily weather generator was used to simulate lengthy sequences, with parameters estimated from subsets of the historical record that were unusually wet and unusually dry. Comparison of recharge estimates corresponding to relatively wet and dry periods showed that recharge for predevelopment land use varies considerably within the range of climatic conditions observed in the 87-year historical observation period. Recharge variations for present land use conditions were less sensitive to the same range of historical climatic conditions because of irrigation. The estimated recharge based on the 87-year historical climatology was compared with recharge based on adjustments of the historical precipitation and temperature records to reflect CO2-doubling climates as projected by general circulation models (GCMs). Two GCM scenarios were considered: an average of conditions for three different GCMs with CO2 doubling, and a most severe “maximum” case. For the average GCM scenario, predevelopment recharge increased, and current recharge decreased. Also considered was the sensitivity of recharge to the variability of climate within the historical and adjusted historical records. Predevelopment and current recharge were less and more sensitive, respectively, to the climate variability for the average GCM scenario as compared to the variability within the historical record. For the maximum GCM scenario, recharge for both predevelopment and current land use decreased, and the sensitivity to the CO2-related climate change was larger than the sensitivity to the variability in the historical and adjusted historical climate records.
A development perspective on adolescent drug abuse.
Baumrind, D; Moselle, K A
1985-01-01
Adolescent drug use is placed in an historical and developmental perspective. Existing evidence concerning causes and consequences of adolescent drug use is inconclusive. In the absence of conclusive empirical evidence and cogent theories, we present a prima facie case against early adolescent drug use by defending six propositions which posit specific cognitive, conative, and affective negative consequences including impairment of attention and memory; developmental lag imposing categorical limitations on the level of maximum functioning available to the user in cognitive, moral and psychosocial domains; amotivational syndrome; consolidation of diffuse or negative identity; and social alienation and estrangement. We call for a program of research which could provide credible evidence to support or rebut these propositions, and thus address the factual claims underlying the sociomoral concerns of social policy planners.
40 CFR 141.55 - Maximum contaminant level goals for radionuclides.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 40 Protection of Environment 22 2010-07-01 2010-07-01 false Maximum contaminant level goals for... PROGRAMS (CONTINUED) NATIONAL PRIMARY DRINKING WATER REGULATIONS Maximum Contaminant Level Goals and Maximum Residual Disinfectant Level Goals § 141.55 Maximum contaminant level goals for radionuclides...
40 CFR 141.55 - Maximum contaminant level goals for radionuclides.
Code of Federal Regulations, 2011 CFR
2011-07-01
... 40 Protection of Environment 23 2011-07-01 2011-07-01 false Maximum contaminant level goals for... PROGRAMS (CONTINUED) NATIONAL PRIMARY DRINKING WATER REGULATIONS Maximum Contaminant Level Goals and Maximum Residual Disinfectant Level Goals § 141.55 Maximum contaminant level goals for radionuclides...
NASA Astrophysics Data System (ADS)
Cardoso, Rita M.; Soares, Pedro M. M.; Lima, Daniela C. A.; Miranda, Pedro M. A.
2018-02-01
Large temperature spatio-temporal gradients are a common feature of Mediterranean climates. Portugal's complex topography and coastline enhance such features, and within a small region large temperature gradients with high interannual variability are detected. In this study, the EURO-CORDEX high-resolution regional climate simulations (0.11° and 0.44° resolutions) are used to investigate the maximum and minimum temperature projections across the twenty-first century according to RCP4.5 and RCP8.5. An additional WRF simulation with even higher resolution (9 km) for the RCP8.5 scenario is also examined. All simulations for the historical period (1971-2000) are evaluated against the available station observations, and the EURO-CORDEX model results are ranked in order to build multi-model ensembles. In the present climate, the models are able to reproduce the main topography- and coast-related temperature gradients. Although there are discernible differences between models, most present a cold bias. The multi-model ensembles improve the overall representation of temperature. The ensembles project a significant increase of the maximum and minimum temperatures in all seasons and scenarios. Maximum increments of 8 °C in summer and autumn and between 2 and 4 °C in winter and spring are projected in RCP8.5. The temperature distributions for all models show a significant increase in the upper tails of the PDFs. In RCP8.5 more than half of the extended summer (MJJAS) has maximum temperatures exceeding the historical 90th percentile and, on average, 60 tropical nights are projected for the end of the century, whereas there are only 7 tropical nights in the historical period. Conversely, cold days almost disappear. The yearly average number of heat waves increases seven- to ninefold by 2100, and the most frequent length rises from 5 to 22 days throughout the twenty-first century. Five percent of the longest events will last for more than one month.
The amplitude is overwhelmingly larger, reaching values not observed in the historical period. More than half of the heat waves will be stronger than the extreme heat wave of 2003 by the end of the century. Future heat waves will also cover larger areas: approximately 100 events in the 2071-2100 period (more than 3 per year) will cover the whole country. The RCP4.5 scenario shows generally smaller magnitudes.
NASA Astrophysics Data System (ADS)
Gaál, Ladislav; Szolgay, Ján; Kohnová, Silvia; Hlavčová, Kamila; Viglione, Alberto
2010-01-01
The paper deals with at-site flood frequency estimation in the case when information on hydrological events of extraordinary magnitude from the past is also available. For the joint frequency analysis of systematic observations and historical data, the Bayesian framework is chosen, which, through adequately defined likelihood functions, allows for incorporation of different sources of hydrological information, e.g., maximum annual flood peaks, historical events, and measurement errors. The distribution of the parameters of the fitted distribution function and the confidence intervals of the flood quantiles are derived by means of the Markov chain Monte Carlo (MCMC) simulation technique. The paper presents a sensitivity analysis related to the choice of the most influential parameters of the statistical model, which are the length of the historical period
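The joint use of systematic and historical flood records can be sketched with a minimal Metropolis sampler. This is an illustration only: the Gumbel distribution, the perception threshold, and all synthetic numbers are assumptions for the sketch, not values from the paper. Historical peaks known to have exceeded a threshold enter the likelihood through their density, while the remaining historical years are treated as censored below the threshold.

```python
import numpy as np

rng = np.random.default_rng(0)

def gumbel_logpdf(x, mu, beta):
    z = (x - mu) / beta
    return -np.log(beta) - z - np.exp(-z)

def gumbel_logcdf(x, mu, beta):
    # log of exp(-exp(-z))
    return -np.exp(-(x - mu) / beta)

def log_likelihood(mu, beta, systematic, historical_peaks, h_years, threshold):
    if beta <= 0:
        return -np.inf
    ll = gumbel_logpdf(systematic, mu, beta).sum()
    # k historical peaks exceeded the perception threshold; the remaining
    # h_years - k years are censored below it
    k = len(historical_peaks)
    ll += gumbel_logpdf(historical_peaks, mu, beta).sum()
    ll += (h_years - k) * gumbel_logcdf(threshold, mu, beta)
    return ll

def metropolis(systematic, historical_peaks, h_years, threshold,
               n_iter=5000, step=(5.0, 2.0)):
    mu, beta = np.mean(systematic), np.std(systematic)
    ll = log_likelihood(mu, beta, systematic, historical_peaks, h_years, threshold)
    chain = []
    for _ in range(n_iter):
        mu_p = mu + rng.normal(0, step[0])
        beta_p = beta + rng.normal(0, step[1])
        ll_p = log_likelihood(mu_p, beta_p, systematic,
                              historical_peaks, h_years, threshold)
        if np.log(rng.uniform()) < ll_p - ll:   # Metropolis accept/reject
            mu, beta, ll = mu_p, beta_p, ll_p
        chain.append((mu, beta))
    return np.array(chain)

# Synthetic example: 40 years of systematic peaks plus 2 historical floods
# known to have exceeded a 300 m^3/s perception threshold over 150 years.
systematic = rng.gumbel(100.0, 30.0, size=40)
chain = metropolis(systematic, np.array([320.0, 350.0]), 150, 300.0)
mu_hat, beta_hat = chain[2500:].mean(axis=0)        # discard burn-in
q100 = mu_hat - beta_hat * np.log(-np.log(1 - 1 / 100))  # 100-year quantile
```

Quantiles of the posterior chain, rather than the point estimates above, would give the confidence intervals the abstract mentions.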
Cristiano, Maykon Passos; Clemes Cardoso, Danon; Fernandes-Salomão, Tânia Maria; Heinze, Jürgen
2016-01-01
Past climate changes often have influenced the present distribution and intraspecific genetic diversity of organisms. The objective of this study was to investigate the phylogeography and historical demography of populations of Acromyrmex striatus (Roger, 1863), a leaf-cutting ant species restricted to the open plains of South America. Additionally, we modeled the distribution of this species to predict its contemporary and historic habitat. From the partial sequences of the mitochondrial gene cytochrome oxidase I of 128 A. striatus workers from 38 locations we estimated genetic diversity and inferred historical demography, divergence time, and population structure. The potential distribution areas of A. striatus for current and Quaternary climate conditions were modeled using the maximum entropy algorithm. We identified a total of 58 haplotypes, divided into five main haplogroups. The analysis of molecular variance (AMOVA) revealed that the largest proportion of genetic variation is found among the groups of populations. Paleodistribution models suggest that the potential habitat of A. striatus may have decreased during the Last Interglacial Period (LIG) and expanded during the Last Glacial Maximum (LGM). Overall, the past potential distribution recovered by the model comprises the current potential distribution of the species. The general structuring pattern observed was consistent with isolation by distance, suggesting a balance between gene flow and drift. Analysis of historical demography showed that populations of A. striatus had remained constant throughout its evolutionary history. Although fluctuations in the area of their potential historic habitat occurred during Quaternary climate changes, populations of A. striatus are strongly structured geographically. However, explicit barriers to gene flow have not been identified. These findings closely match those in Mycetophylax simplex, another ant species that in some areas occurs in sympatry with A. striatus.
Ecophysiological traits of this species and isolation by distance may together have shaped the phylogeographic pattern. PMID:26734939
Campbell, M A; Lopéz, J A
2014-02-01
Mitochondrial genetic variability among populations of the blackfish genus Dallia (Esociformes) across Beringia was examined. Levels of divergence and patterns of geographic distribution of mitochondrial DNA lineages were characterized using phylogenetic inference, median-joining haplotype networks, Bayesian skyline plots, mismatch analysis and spatial analysis of molecular variance (SAMOVA) to infer genealogical relationships and to assess patterns of phylogeography among extant mitochondrial lineages in populations of species of Dallia. The observed variation includes extensive standing mitochondrial genetic diversity and patterns of distinct spatial segregation corresponding to historical and contemporary barriers with minimal or no mixing of mitochondrial haplotypes between geographic areas. Mitochondrial diversity is highest in the common delta formed by the Yukon and Kuskokwim Rivers where they meet the Bering Sea. Other regions sampled in this study host comparatively low levels of mitochondrial diversity. The observed levels of mitochondrial diversity and the spatial distribution of that diversity are consistent with persistence of mitochondrial lineages in multiple refugia through the last glacial maximum. © 2014 The Fisheries Society of the British Isles.
Increased flood risks in the Sacramento-San Joaquin Valleys, CA, under climate change
NASA Astrophysics Data System (ADS)
Das, T.; Hidalgo-Leon, H.; Dettinger, M.; Cayan, D.
2008-12-01
Natural calamities like floods cause immense damage to human society globally, and California is no exception. A simulation analysis of flood generation in the western Sierra Nevada of California was carried out using flows simulated by the Variable Infiltration Capacity (VIC) hydrologic model under prescribed changes in precipitation (+10 percent) and temperature (+3°C and +5°C) to evaluate likely changes in 3-day flood-frequency curves under climate change. An additional experiment was carried out in which snow production was artificially turned off in VIC. All these experiments showed larger flood magnitudes in California's Northern Sierra Nevada (NSN) and Southern Sierra Nevada (SSN), but the changes (for floods larger than the historical 20-year floods) were significant (at the 90 percent confidence level) only in the SSN for the severe warming cases. Another analysis, using downscaled daily precipitation and temperature projections from three general circulation models (CNRM CM3, GFDL CM2.1 and NCAR PCM1) under emission scenario A2 as input to VIC, yielded a general increase in the 3-day annual maximum flows under climate change. The increases are significant (at the 90 percent confidence level) in the SSN for the period 2051-2099 with all three climate models analyzed. In the NSN the increases are significant only with the CNRM CM3 model. In general, the frequency of floods increased or stayed the same under the projected future climates, and some of the projected floods were unprecedentedly large when compared to historical simulations.
Compilation and analysis of multiple groundwater-quality datasets for Idaho
Hundt, Stephen A.; Hopkins, Candice B.
2018-05-09
Groundwater is an important source of drinking and irrigation water throughout Idaho, and groundwater quality is monitored by various Federal, State, and local agencies. The historical, multi-agency records of groundwater quality include a valuable dataset that has yet to be compiled or analyzed on a statewide level. The purpose of this study is to combine groundwater-quality data from multiple sources into a single database, to summarize this dataset, and to perform bulk analyses to reveal spatial and temporal patterns of water quality throughout Idaho. Data were retrieved from the Water Quality Portal (https://www.waterqualitydata.us/), the Idaho Department of Environmental Quality, and the Idaho Department of Water Resources. Analyses included counting the number of times a sample location had concentrations above Maximum Contaminant Levels (MCL), performing trends tests, and calculating correlations between water-quality analytes. The water-quality database and the analysis results are available through USGS ScienceBase (https://doi.org/10.5066/F72V2FBG).
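The exceedance-counting step described above can be sketched in plain Python. The field names, analytes, and MCL values below are illustrative assumptions; the compiled database has its own schema, and the regulatory MCLs apply per analyte and unit.

```python
from collections import Counter

# Illustrative MCLs (units assumed: ug/L for arsenic, mg/L as N for nitrate).
MCLS = {"arsenic": 10.0, "nitrate": 10.0}

def exceedance_counts(records, mcls):
    """Count, per (site, analyte), how many sample results exceeded the MCL."""
    counts = Counter()
    for rec in records:
        mcl = mcls.get(rec["analyte"])
        if mcl is not None and rec["value"] > mcl:
            counts[(rec["site_id"], rec["analyte"])] += 1
    return counts

# Hypothetical sample results for two locations
records = [
    {"site_id": "A", "analyte": "arsenic", "value": 12.0},
    {"site_id": "A", "analyte": "arsenic", "value": 8.0},
    {"site_id": "B", "analyte": "arsenic", "value": 15.0},
    {"site_id": "B", "analyte": "nitrate", "value": 9.5},
    {"site_id": "B", "analyte": "nitrate", "value": 11.0},
]
counts = exceedance_counts(records, MCLS)
```

In this toy input, each site exceeds the arsenic MCL once, and site B exceeds the nitrate MCL once; the trend tests and analyte correlations in the study would run on the same compiled records.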
Land subsidence in the San Joaquin Valley, California, USA, 2007-14
Sneed, Michelle; Brandt, Justin
2015-01-01
Rapid land subsidence was recently measured using multiple methods in two areas of the San Joaquin Valley (SJV): between Merced and Fresno (El Nido), and between Fresno and Bakersfield (Pixley). Recent land-use changes and diminished surface-water availability have led to increased groundwater pumping, groundwater-level declines, and land subsidence. Differential land subsidence has reduced the flow capacity of water-conveyance systems in these areas, exacerbating flood hazards and affecting the delivery of irrigation water. Vertical land-surface changes during 2007–2014 were determined by using Interferometric Synthetic Aperture Radar (InSAR), Continuous Global Positioning System (CGPS), and extensometer data. Results of the InSAR analysis indicate that about 7600 km2 subsided 50–540 mm during 2008–2010; CGPS and extensometer data indicate that these rates continued or accelerated through December 2014. The maximum InSAR-measured rate of 270 mm yr−1 occurred in the El Nido area, and is among the largest rates ever measured in the SJV. In the Pixley area, the maximum InSAR-measured rate during 2008–2010 was 90 mm yr−1. Groundwater was an important part of the water supply in both areas, and pumping increased when land use changed or when surface water was less available. This increased pumping caused groundwater-level declines to near or below historical lows during the drought periods 2007–2009 and 2012–present. Long-term groundwater-level and land-subsidence monitoring in the SJV is critical for understanding the interconnection of land use, groundwater levels, and subsidence, and evaluating management strategies that help mitigate subsidence hazards to infrastructure while optimizing water supplies.
Venkatesan, M.I.; De Leon, R. P.; VanGeen, A.; Luoma, S.N.
1999-01-01
Sediment cores of known chronology from Richardson and San Pablo Bays in San Francisco Bay, CA, were analyzed for a suite of chlorinated hydrocarbon pesticides and polychlorinated biphenyls to reconstruct a historic record of inputs. Total DDTs (DDT = 2,4'- and 4,4'-dichlorodiphenyltrichloroethane and the metabolites, 2,4'- and 4,4'-DDE, -DDD) range in concentration from 4-21 ng/g and constitute a major fraction (> 84%) of the total pesticides in the top 70 cm of Richardson Bay sediment. A subsurface maximum corresponds to a peak deposition date of 1969-1974. The first measurable DDT levels are found in sediment deposited in the late 1930s. The higher DDT inventory in the San Pablo relative to the Richardson Bay core probably reflects the greater proximity of San Pablo Bay to agricultural activities in the watershed of the Sacramento and San Joaquin rivers. Total polychlorinated biphenyls (PCBs) occur at comparable levels in the two bays (< 1-34 ng/g). PCBs are first detected in sediment deposited during the 1930s in Richardson Bay, about a decade earlier than the onset of detectable levels of DDTs. PCB inventories in San Pablo Bay are about a factor of four higher in the last four decades than in Richardson Bay, suggesting a distribution of inputs not as strongly weighted towards the upper reaches of the estuary as DDTs. The shallower subsurface maximum in PCBs compared to DDT in the San Pablo Bay core is consistent with the imposition of drastic source control measures for these constituents in 1970 and 1977, respectively. The observed decline in DDT and PCB levels towards the surface of both cores is consistent with a dramatic drop in the input of these pollutants once the effect of sediment resuspension and mixing is taken into account.
Water quality and cyanobacteria densities from 1989-2015 were compiled for 20 Midwestern USA reservoirs. Maximum summer cyanobacteria densities increased over the last 7-15 years of the record, with greatest increases typically observed in reservoirs with low watershed forest cov...
USDA-ARS?s Scientific Manuscript database
DayCent is a biogeochemical model of intermediate complexity used to simulate carbon, nutrient, and greenhouse gas fluxes for crop, grassland, forest, and savanna ecosystems. Model inputs include: soil texture and hydraulic properties, current and historical land use, vegetation cover, daily maximum...
NASA Astrophysics Data System (ADS)
Bolles, K.; Forman, S. L.
2017-12-01
Understanding the spatiotemporal dynamics of dust sources is essential to accurately quantify the various impacts of dust on the Earth system; however, a persistent deficiency in modeling dust emission is detailed knowledge of surface texture, geomorphology, and location of dust emissive surfaces, which strongly influence the effects of wind erosion. Particle emission is closely linked to both climatic and physical surface factors - interdependent variables that respond to climate nonlinearly and are mitigated by variability in land use or management practice. Recent efforts have focused on development of a preferential dust source (PDS) identification scheme to improve global dust-cycle models, which posits certain surfaces are more likely to emit dust than others, dependent upon associated sediment texture and geomorphological limitations which constrain sediment supply and availability. In this study, we outline an approach to identify and verify the physical properties and distribution of dust emissive surfaces in the U.S. Great Plains from historical aerial imagery in order to establish baseline records of dust sources, associated erodibility, and spatiotemporal variability, prior to the satellite era. We employ a multi-criteria, spatially-explicit model to identify counties that are "representative" of the broader landscape on the Great Plains during the 1930s. Parameters include: percentage of county cultivated and uncultivated per the 1935 Agricultural Census, average soil sand content, mean annual Palmer Drought Severity Index (PDSI), maximum annual temperature and percent difference to the 30-year normal maximum temperature, and annual precipitation and percent difference to the 30-year normal precipitation level. Within these areas we generate random points to select areas for photo reproduction. Selected frames are photogrammetrically scanned at 1200 dpi, radiometrically corrected, mosaicked and georectified to create an IKONOS-equivalent image. 
Gray-level co-occurrence matrices are calculated in a 3x3 moving window to determine textural properties of the mosaic and delineate bare surfaces of different sedimentological properties. Field stratigraphic assessments and spatially-referenced historical data are integrated within ArcGIS to ground-truth imagery.
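The gray-level co-occurrence computation above can be sketched for a single image patch. The study applies it in a 3x3 moving window over the georectified mosaics; this standalone version, with an assumed 8-level quantization and a horizontal pixel offset, only shows the mechanics of the matrix and one derived texture measure.

```python
import numpy as np

def glcm(patch, levels=8, offset=(0, 1)):
    """Gray-level co-occurrence matrix for one quantized patch.

    Counts how often gray level i occurs adjacent (at `offset`) to level j,
    then normalizes the counts into a joint probability matrix.
    """
    dr, dc = offset
    m = np.zeros((levels, levels))
    rows, cols = patch.shape
    for r in range(rows - dr):
        for c in range(cols - dc):
            m[patch[r, c], patch[r + dr, c + dc]] += 1
    return m / m.sum()

def contrast(p):
    """GLCM contrast, sum of p(i,j) * (i-j)^2; low for smooth bare surfaces."""
    i, j = np.indices(p.shape)
    return np.sum(p * (i - j) ** 2)

# A uniform (bare) patch has zero contrast; a checkerboard of levels 0 and 7
# has maximal horizontal contrast.
bare = np.zeros((8, 8), dtype=int)
checker = np.indices((8, 8)).sum(axis=0) % 2 * 7
```

Windowed texture maps like those in the study would evaluate `contrast` (and companions such as homogeneity or entropy) on the GLCM of each local window rather than on whole patches.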
NASA Astrophysics Data System (ADS)
Keshtpoor, M.; Carnacina, I.; Yablonsky, R. M.
2016-12-01
Extratropical cyclones (ETCs) are the primary driver of storm surge events along the UK and northwest mainland Europe coastlines. In an effort to evaluate the storm surge risk in coastal communities in this region, a stochastic catalog is developed by perturbing the historical storm seeds of European ETCs to account for 10,000 years of possible ETCs. Numerical simulation of the storm surge generated by the full 10,000-year stochastic catalog, however, is computationally expensive and may take several months to complete with available computational resources. A new statistical regression model is developed to select the major surge-generating events from the stochastic ETC catalog. This regression model is based on the maximum storm surge, obtained via numerical simulations using a calibrated version of the Delft3D-FM hydrodynamic model with a relatively coarse mesh, of 1750 historical ETC events that occurred over the past 38 years in Europe. These numerically-simulated surge values were regressed to the local sea level pressure and the U and V components of the wind field at the location of 196 tide gauge stations near the UK and northwest mainland Europe coastal areas. The regression model suggests that storm surge values in the area of interest are highly correlated to the U- and V-component of wind speed, as well as the sea level pressure. Based on these correlations, the regression model was then used to select surge-generating storms from the 10,000-year stochastic catalog. Results suggest that roughly 105,000 events out of 480,000 stochastic storms are surge-generating events and need to be considered for numerical simulation using a hydrodynamic model. The selected stochastic storms were then simulated in Delft3D-FM, and the final refinement of the storm population was performed based on return period analysis of the 1750 historical event simulations at each of the 196 tide gauges in preparation for Delft3D-FM fine mesh simulations.
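The screening regression can be illustrated with ordinary least squares on synthetic storm predictors. Every coefficient, threshold, and sample size below is invented for the sketch; the actual model was fit to the 1750 coarse-mesh Delft3D-FM simulations at 196 gauges.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-in for the historical training set: sea level pressure
# anomaly (hPa) and U/V wind components (m/s) at one gauge, with a "true"
# linear surge response the regression should recover.
n = 500
slp = rng.normal(-20, 8, n)          # deeper lows -> larger surge
u = rng.normal(10, 6, n)
v = rng.normal(8, 6, n)
surge = 0.02 * (-slp) + 0.05 * u + 0.08 * v + rng.normal(0, 0.05, n)

# Fit surge ~ a*slp + b*u + c*v + d by ordinary least squares
X = np.column_stack([slp, u, v, np.ones(n)])
coef, *_ = np.linalg.lstsq(X, surge, rcond=None)

def predicted_surge(slp, u, v):
    return coef[0] * slp + coef[1] * u + coef[2] * v + coef[3]

# Screen a stochastic catalog: keep only storms whose predicted surge
# exceeds a threshold, and pass those to the hydrodynamic model.
threshold = 1.0  # metres, illustrative
catalog = np.column_stack([rng.normal(-20, 8, 1000),
                           rng.normal(10, 6, 1000),
                           rng.normal(8, 6, 1000)])
keep = predicted_surge(catalog[:, 0], catalog[:, 1], catalog[:, 2]) > threshold
```

The payoff of such a screen is exactly the reduction the abstract reports: only the flagged storms (about 105,000 of 480,000 in the study) need full hydrodynamic simulation.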
NASA Astrophysics Data System (ADS)
Reppert, Michael; Tokmakoff, Andrei
The structural characterization of intrinsically disordered peptides (IDPs) presents a challenging biophysical problem. Extreme heterogeneity and rapid conformational interconversion make traditional methods difficult to interpret. Due to its ultrafast (ps) shutter speed, Amide I vibrational spectroscopy has received considerable interest as a novel technique to probe IDP structure and dynamics. Historically, Amide I spectroscopy has been limited to delivering global secondary structural information. More recently, however, the method has been adapted to study structure at the local level through incorporation of isotope labels into the protein backbone at specific amide bonds. Thanks to the acute sensitivity of Amide I frequencies to local electrostatic interactions-particularly hydrogen bonds-spectroscopic data on isotope labeled residues directly reports on local peptide conformation. Quantitative information can be extracted using electrostatic frequency maps which translate molecular dynamics trajectories into Amide I spectra for comparison with experiment. Here we present our recent efforts in the development of a rigorous approach to incorporating Amide I spectroscopic restraints into refined molecular dynamics structural ensembles using maximum entropy and related approaches. By combining force field predictions with experimental spectroscopic data, we construct refined structural ensembles for a family of short, strongly disordered, elastin-like peptides in aqueous solution.
Ma, Jian-Zhang
2017-01-01
Extant Feliformia species are one of the most diverse radiations of Carnivora (~123 species). Despite substantial recent interest in their conservation, diversification, and systematic study, no previous phylogeny contains a comprehensive species set, and no biogeography of this group is available. Here, we present a phylogenetic estimate for Feliformia with a comprehensive species set and establish a historical biogeography based on mitochondrial DNA. Both the Bayesian and maximum likelihood phylogenies for Feliformia are elucidated in our analyses and are strongly consistent with many groups recognized in previous studies. The mitochondrial phylogenetic relationships of Felidae were successfully reconstructed for the first time in our analyses, with strong support. When divergence times and dispersal/vicariance histories were compared with historical sea level changes, four dispersal and six vicariance events were identified. These vicariance events were closely related to global sea level changes. The transgression of the sea into the lowland plains between Eurasia and Africa may have caused the vicariance in these regions. A fall in sea level during the late Miocene to Pliocene produced the Bering Strait land bridge, which assisted the migration of American Feliformia ancestors from Asia to North America. In contrast with the ‘sweepstakes hypothesis’, our results suggest that the climate cooling during 30–27 Ma assisted Feliformia migration from the African mainland to Madagascar by creating a short-lived ice bridge across the Mozambique Channel. Lineages-through-time plots revealed a large increase in lineages since the Mid-Miocene. During the Mid-Miocene Climatic Optimum, the ecosystems and populations of Feliformia rapidly expanded. Subsequent climate cooling catalyzed immigration, speciation, and the extinction of Feliformia. PMID:28358848
NASA Astrophysics Data System (ADS)
Wu, Tso-Ren; Wu, Han; Tsai, Yu-Lin
2016-04-01
In 1661, the Chinese navy, led by General Zheng Chenggong at the end of the Ming Dynasty, fought a naval battle against the Netherlands. This battle was not only the first official sea warfare in which China confronted the Western world, but also the only naval battle won by the Chinese navy so far. The event was important because it changed the fate of Taiwan until today. One of the critical reasons General Zheng won the battle was entering Luermen Bay unexpectedly. Luermen Bay was and is an extremely shallow bay, with a 2.1 m maximum water depth during high tide, which should not have been passable for a fleet of 20,000 marines. Therefore, no defense was deployed on the Netherlands' side. However, many historical accounts mention a strange phenomenon that helped the Chinese warships enter Luermen Bay: a rise of the water level. In this study, we discuss the possible causes that might have raised the water level, e.g., tsunami, storm surge, and high tide, analyzed from the standpoint of hydrodynamics. We performed the newly developed Impact Intensity Analysis (IIA) to find potential tsunami sources, and the COMCOT tsunami model was adopted for the nonlinear scenario simulations, together with high-resolution bathymetry data. Both earthquake and mudslide tsunamis were inspected. In addition, we collected tide and weather information to identify the effects of high tide and storm surge. After this thorough study, the scenario that best satisfies the descriptions in the historical accounts will be presented. The results will explain the cause of the mysterious event that changed the destiny of Taiwan.
The First Historical Eruption of Kambalny Volcano in 2017 .
NASA Astrophysics Data System (ADS)
Gordeev, E.
2017-12-01
The first historical eruption at Kambalny volcano began about 21:20 UTC on March 24, 2017, with powerful ash emissions up to 6 km above sea level from the pre-summit crater. According to tephrochronological data, it is assumed that strong eruptions of the volcano occurred 200 (?) and 600 years ago. KVERT (Kamchatka Volcanic Eruption Response Team) of the Institute of Volcanology and Seismology FEB RAS has been monitoring Kambalny volcano since 2002. KVERT worked closely with AMC Elizovo and the Tokyo VAAC during the eruption at Kambalny volcano in 2017. The maximum intensity of ash emissions occurred on 25-26 March: a continuous plume laden with ash particles spread over several thousand kilometers, changing its direction of propagation from the volcano from the south-west to the south and south-east. On 27-29 March, the ash plume extended to the west, and on 30 March to the southeast of the volcano. On March 31 and April 01, the volcano was relatively quiet. The resumption of volcanic activity after two days of rest took the form of powerful ash emissions up to 7 km above sea level. Gas-steam plumes containing some ash were noted on 02-05 April, and powerful ash emissions up to 7 km above sea level occurred on 09 April. The explosive activity at the volcano ended on 11 April. The area of ash deposits was about 1500 km2; the total area covered by ash falls, for example, on 25 March, was about 650 thousand km2. To monitor and study the Kambalny volcano eruption, we mainly used satellite images of medium resolution available in the information system "Monitoring volcanic activity in Kamchatka and Kurile Islands" (VolSatView). This work was supported by the Russian Science Foundation, project No. 16-17-00042.
Xue, Dong-Xiu; Wang, Hai-Yan; Zhang, Tao; Liu, Jin-Xian
2014-01-01
The pen shell, Atrina pectinata, is one of the commercial bivalves in East Asia and thought to be recently affected by anthropogenic pressure (habitat destruction and/or fishing pressure). Information on its population genetic structure is crucial for the conservation of A. pectinata. Considering its long pelagic larval duration and iteroparity with high fecundity, the genetic structure for A. pectinata could be expected to be weak at a fine scale. However, the unusual oceanography in the coasts of China and Korea suggests potential for restricted dispersal of pelagic larvae and geographical differentiation. In addition, environmental changes associated with Pleistocene sea level fluctuations on the East China Sea continental shelf may also have strongly influenced historical population demography and genetic diversity of marine organisms. Here, partial sequences of the mitochondrial Cytochrome c oxidase subunit I (COI) gene and seven microsatellite loci were used to estimate population genetic structure and demographic history of seven samples from Northern China coast and one sample from North Korea coast. Despite high levels of genetic diversity within samples, there was no genetic differentiation among samples from Northern China coast and low but significant genetic differentiation between some of the Chinese samples and the North Korean sample. A late Pleistocene population expansion, probably after the Last Glacial Maximum, was also demonstrated for A. pectinata samples. No recent genetic bottleneck was detected in any of the eight samples. We concluded that both historical recolonization (through population range expansion and demographic expansion in the late Pleistocene) and current gene flow (through larval dispersal) were responsible for the weak level of genetic structure detected in A. pectinata. PMID:24789175
Relationship Between Maximum Tsunami Amplitude and Duration of Signal
NASA Astrophysics Data System (ADS)
Kim, Yoo Yin; Whitmore, Paul M.
2014-12-01
All available tsunami observations at tide gauges situated along the North American coast were examined to determine if there is any clear relationship between maximum amplitude and signal duration. In total, 89 historical tsunami recordings generated by 13 major earthquakes between 1952 and 2011 were investigated. Tidal variations were filtered out of the signal, and the duration between the arrival time and the time at which the signal drops and stays below 0.3 m amplitude was computed. The processed tsunami time series were evaluated, and a linear least-squares fit with a 95 % confidence interval was examined to compare tsunami durations with maximum tsunami amplitudes in the study region. The confidence interval is roughly 20 h over the range of maximum tsunami amplitudes in which we are interested. This relatively large confidence interval likely results from variations in local resonance effects, late-arriving reflections, and other effects.
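The fit described above amounts to a straight line with an uncertainty band. A sketch on synthetic data follows; the amplitudes, slope, and scatter are invented, and only the 89-record sample size is taken from the text.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic stand-in for the 89 tide-gauge records: maximum amplitude (m)
# vs. duration above the 0.3 m cutoff (hours), with large scatter.
amp = rng.uniform(0.3, 3.0, 89)
dur = 10.0 + 12.0 * amp + rng.normal(0, 8.0, 89)

# Least-squares line and a 95% confidence interval on the slope
A = np.column_stack([amp, np.ones_like(amp)])
(slope, intercept), res, *_ = np.linalg.lstsq(A, dur, rcond=None)
n = len(amp)
s2 = res[0] / (n - 2)                         # residual variance
se_slope = np.sqrt(s2 / np.sum((amp - amp.mean()) ** 2))
ci95 = 1.99 * se_slope                        # t-value for ~87 dof is about 1.99
```

Large residual scatter, as in the study, widens `ci95` directly; that is the mechanism behind the roughly 20 h band the abstract reports for predicted durations.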
Lanza, Heather A; Cochran, Rebecca S; Mudge, Joseph F; Olson, Adric D; Blackwell, Brett R; Maul, Jonathan D; Salice, Christopher J; Anderson, Todd A
2017-08-01
Perfluoroalkyl substances (PFAS) have recently received increased research attention, particularly concerning aquatic organisms and in regions of exposure to aqueous film-forming foams (AFFFs). Air Force bases historically applied AFFFs in fire training exercises and have since expressed concern about PFAS contamination in biota from water bodies surrounding former fire training areas. Six PFAS, including perfluorooctane sulfonate (PFOS), were monitored in aquatic species from 8 bayou locations at Barksdale Air Force Base in Bossier City, Louisiana (USA) over the course of 1 yr. The focus was to evaluate temporal and spatial variability in PFAS concentrations from historic use of AFFF. The PFOS concentrations in fish peaked in early summer and also increased significantly downstream of former fire training areas. Benthic organisms had lower PFOS concentrations than pelagic species, contrary to previous literature observations. Bioconcentration factors varied with time but were reduced compared with previously reported literature values. The highest concentration of PFOS in whole fish was 9349 ng/g dry weight, with 15% of samples exceeding what is believed to be the maximum whole-fish concentration reported to date of 1500 ng/g wet weight. Further studies are ongoing to measure PFAS in larger fish and to obtain tissue-specific partitioning data for comparison with the current whole-fish values. The high concentrations presently observed could have effects on higher-trophic-level organisms in this system or pose a potential risk to humans consuming contaminated fish. Environ Toxicol Chem 2017;36:2022-2029. © 2016 SETAC.
NASA Astrophysics Data System (ADS)
Wichansky, Paul Stuart
The 19th-century agrarian landscape of New Jersey (NJ) and the surrounding region has been extensively transformed to the present-day land cover by urbanization, reforestation, and localized areas of deforestation. This study used a mesoscale atmospheric numerical model to investigate the sensitivity of the warm season climate of NJ to these land cover changes. Reconstructed 1880s-era and present-day land cover datasets were used as surface boundary conditions for a set of simulations performed with the Regional Atmospheric Modeling System (RAMS). Three-member ensembles with historical and present-day land cover were compared to examine the sensitivity of surface air and dewpoint temperatures, rainfall, the individual components of the surface energy budget, horizontal and vertical winds, and the vertical profiles of temperature and humidity to these land cover changes. Mean temperatures for the present-day landscape were 0.3-0.6°C warmer than for the historical landscape over a considerable portion of NJ and the surrounding region, with daily maximum temperatures at least 1.0°C warmer over some of the highly urbanized locations. Reforested regions in the present-day landscape, however, showed a slight cooling. Surface warming was generally associated with repartitioning of net radiation from latent to sensible heat flux, and conversely for cooling. Reduced evapotranspiration from much of the present-day land surface led to dewpoint temperature decreases of 0.3-0.6°C. While urbanization was accompanied by strong surface albedo decreases and increases in net shortwave radiation, reforestation and potential changes in forest composition have generally increased albedos and also enhanced landscape heterogeneity. The increased deciduousness of forests may have further reduced net downward longwave radiation. 
These land cover changes have modified boundary-layer dynamics by increasing low-level convergence and upper-level divergence in the interior of NJ, especially where sensible heat fluxes have increased for the present-day landscape, hence enhancing uplift in the mid-troposphere. The mesoscale circulations that developed in the present-day ensemble were also more effective at lifting available moisture to higher levels of the boundary layer, lowering dewpoints near the surface but increasing them aloft. Likewise, the sea breeze in coastal areas of NJ in the present-day ensemble had stronger uplift during the afternoon and enhanced moisture transport to higher levels.
Anomalous Variability in Antarctic Sea Ice Extents During the 1960s With the Use of Nimbus Data
NASA Technical Reports Server (NTRS)
Gallaher, David W.; Campbell, G. Garrett; Meier, Walter N.
2013-01-01
The Nimbus I, II, and III satellites provide a new opportunity for climate studies of the 1960s. The rescue of the visible and infrared imager data allowed the early Nimbus data to be used to determine sea ice extent. A qualitative analysis of the early NASA Nimbus missions has revealed Antarctic sea ice extents that are significantly larger and smaller than the historic 1979-2012 passive microwave record. The September 1964 mean ice area is 19.7x10(exp 6) sq. km +/- 0.3x10(exp 6) sq. km. This is more than 250,000 sq. km greater than the 19.44x10(exp 6) sq. km seen in the new 2012 historic maximum. However, in August 1966 the maximum sea ice extent fell to 15.9x10(exp 6) sq. km +/- 0.3x10(exp 6) sq. km. This is more than 1.5x10(exp 6) sq. km below the passive microwave record of 17.5x10(exp 6) sq. km set in September of 1986. This variation between 1964 and 1966 represents a change of maximum sea ice of over 3x10(exp 6) sq. km in just two years. These inter-annual variations, while large, are small when compared to the Antarctic seasonal cycle.
NASA Astrophysics Data System (ADS)
Aghakhani Afshar, A.; Hasanzadeh, Y.; Besalatpour, A. A.; Pourreza-Bilondi, M.
2017-07-01
The hydrologic cycle of river basins and the available water resources in arid and semi-arid regions are highly affected by climate change. In recent years, rising temperatures due to increased emission of greenhouse gases have led to abnormalities in the Earth's climate system. The main objective of this study is to survey future climate changes in one of the biggest mountainous watersheds in northeastern Iran (i.e., Kashafrood). Considering precipitation and temperature as two important climatic parameters in watersheds, 14 general circulation models (GCMs) of the newest generation, from the Coupled Model Intercomparison Project Phase 5 (CMIP5), were used to forecast future climate changes in the study area. For the historical period of 1992-2005, four evaluation criteria, including Nash-Sutcliffe efficiency (NS), percent bias (PBIAS), coefficient of determination (R2), and the ratio of the root-mean-square error to the standard deviation of the measured data (RSR), were used to compare the simulated and observed data and assess the goodness-of-fit of the models. Based on these criteria, four climate models, namely GFDL-ESM2G, IPSL-CM5A-MR, MIROC-ESM, and NorESM1-M, were selected from the 14 models for their higher prediction accuracy. Thereafter, climate changes in the future periods (near-century, 2006-2037; mid-century, 2037-2070; and late-century, 2070-2100) were investigated and compared under four representative concentration pathways (RCPs) of the new emission scenarios: RCP2.6, RCP4.5, RCP6.0, and RCP8.5. To assess the trend of annual and seasonal changes in climatic components, the non-parametric Mann-Kendall (MK) test was also employed. The results of the Mann-Kendall test revealed that precipitation has significant variable trends, with both positive and negative alterations.
Furthermore, the mean, maximum, and minimum temperature values had significant positive trends at the 90, 99, and 99.9 % confidence levels. In all parts of the Kashafrood Watershed (KW), the average temperature will increase by 0.56-3.3 °C and the mean precipitation will decrease by up to 10.7 % by the end of the twenty-first century compared to the historical baselines. On the seasonal scale, the maximum and minimum precipitation will occur in spring and summer, respectively, and the mean temperature is higher than the historical baseline in all seasons. The maximum and minimum values of the mean temperature will occur in summer and winter, respectively, and the seasonal precipitation in these seasons will be reduced.
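The four evaluation criteria used to rank the GCMs have standard definitions. A minimal sketch follows; the observation and simulation vectors are invented for illustration:

```python
import numpy as np

def gof_metrics(obs, sim):
    """Goodness-of-fit criteria for comparing simulated and observed series:
    Nash-Sutcliffe efficiency (NS), percent bias (PBIAS), coefficient of
    determination (R2), and RMSE-to-observation-std ratio (RSR)."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    resid = obs - sim
    ns = 1 - np.sum(resid**2) / np.sum((obs - obs.mean())**2)
    pbias = 100 * np.sum(resid) / np.sum(obs)   # positive => under-prediction
    r2 = np.corrcoef(obs, sim)[0, 1] ** 2
    rsr = np.sqrt(np.mean(resid**2)) / obs.std()
    return {"NS": ns, "PBIAS": pbias, "R2": r2, "RSR": rsr}

# A perfect simulation scores NS = 1, PBIAS = 0, R2 = 1, RSR = 0
obs = [2.0, 3.5, 1.0, 4.2, 2.8]
scores = gof_metrics(obs, obs)
```

A GCM whose NS is close to 1 and whose RSR and |PBIAS| are close to 0 would rank well under such criteria.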
NASA Technical Reports Server (NTRS)
Allison, L. J.
1972-01-01
A complete documentation of Nimbus 2 High Resolution Infrared Radiometer (HRIR) data and ESSA-1 and -3 television photographs is presented for the lifetime of Hurricane Inez, 1966. Ten computer-produced radiation charts were analyzed in order to delineate the three-dimensional cloud structure during the formative, mature, and dissipating stages of this tropical cyclone. Time sections were drawn throughout the storm's life cycle to relate the warm-core development and upper-level outflow of the storm to their respective cloud canopies, as shown by the radiation data. Aerial reconnaissance weather reports, radar photographs, and conventional weather analyses were used to complement the satellite data. A computer program was used to accept Nimbus 2 HRIR equivalent blackbody temperatures within historical maximum and minimum sea surface temperature limits over the tropical Atlantic Ocean.
USDA-ARS?s Scientific Manuscript database
DayCent (Daily Century) is a biogeochemical model of intermediate complexity used to simulate flows of carbon and nutrients for crop, grassland, forest, and savanna ecosystems. Required model inputs are: soil texture, current and historical land use, vegetation cover, and daily maximum/minimum tempe...
Using "Number the Stars" as a Springboard for Doing Social Studies
ERIC Educational Resources Information Center
Putman, Errol
2003-01-01
The Danish experience during the German occupation, presented through the experiences of the Rosen and Johansen families, provides the literary and historical background for the activities the author presents in this article. He designed the four activities for maximum student involvement, with each requiring students to respond to Lois Lowry's…
Five Perspectives for Teaching the Holocaust
ERIC Educational Resources Information Center
Lindquist, David H.
2008-01-01
Studying the Holocaust provides an opportunity to explore a fascinating historical topic whose impact on the contemporary world cannot be overstated. As such, the topic is now an accepted part of the American secondary school curriculum. For such curricula to be of maximum benefit to students, clearly defined perspectives that direct the students'…
What Is the Maximum Credible Event for Hazard Division 1.6 Explosive Articles?
2010-07-01
involving SCGs D & E explosives, there is no data available for SCG N explosives since there has never been an accident involving HD 1.6 explosives that...resulted in a violent response. As the historical data provided in Technical Paper 14 indicates, many SCG D & E explosives are sensitive to
Heterogeneous coupling along Makran subduction zone
NASA Astrophysics Data System (ADS)
Zarifi, Z.; Raeesi, M.
2010-12-01
The Makran subduction zone, located in southeastern Iran and southern Pakistan, extends for almost 900 km along the Eurasian-Arabian plate boundary. Seismic activity in the eastern and western Makran exhibits very different patterns. The eastern Makran is characterized by infrequent large earthquakes and a low level of seismicity. The only large instrumentally recorded earthquake in the eastern Makran, the 27 Nov. 1945 (Mw=8.1) earthquake, was followed by tsunami waves with a maximum run-up height of 13 m and disastrous effects in Pakistan, India, Iran, and Oman. The western Makran, however, is apparently quiescent, with no strong evidence of large earthquakes in historical times, which makes it difficult to ascertain whether the slab subducts aseismically or experiences large earthquakes separated by long periods exceeding the historical records. We used seismicity and Trench-Parallel Free-air and Bouguer Anomalies (TPGA and TPBA) to study the variation in coupling along the slab interface. Using a 3D mechanical finite element (FE) model, we show how heterogeneous coupling can influence the rate of deformation in the overriding lithosphere and the state of stress in the outer rise, overriding, and subducting plates within the shortest expected earthquake cycle. We test the results of the FE model against the observed focal mechanisms of earthquakes and available GPS measurements in the Makran subduction zone.
Den-site characteristics of black bears in Rocky Mountain National Park, Colorado
Baldwin, R.A.; Bender, L.C.
2008-01-01
We compared historic (1985-1992) and contemporary (2003-2006) black bear (Ursus americanus) den locations in Rocky Mountain National Park (RMNP), Colorado, USA, for habitat and physiographic attributes of den sites and used maximum entropy modeling to determine which factors were most influential in predicting den-site locations. We observed variability in the relationship between den locations and distance to trails and elevation over time. Locations of historic den sites were most associated with slope, elevation, and cover type, whereas contemporary sites were associated with slope, distance to roads, aspect, and canopy height. Although relationships to covariates differed between the historic and contemporary periods, preferred den-site characteristics consistently included steep slopes and factors associated with greater snow depth. The distribution of den locations shifted toward areas closer to human developments, indicating little negative influence of this factor on den-site selection by black bears in RMNP.
Analysis and selection of magnitude relations for the Working Group on Utah Earthquake Probabilities
Duross, Christopher; Olig, Susan; Schwartz, David
2015-01-01
Prior to calculating time-independent and -dependent earthquake probabilities for faults in the Wasatch Front region, the Working Group on Utah Earthquake Probabilities (WGUEP) updated a seismic-source model for the region (Wong and others, 2014) and evaluated 19 historical regressions on earthquake magnitude (M). These regressions relate M to fault parameters for historical surface-faulting earthquakes, including linear fault length (e.g., surface-rupture length [SRL] or segment length), average displacement, maximum displacement, rupture area, seismic moment (Mo ), and slip rate. These regressions show that significant epistemic uncertainties complicate the determination of characteristic magnitude for fault sources in the Basin and Range Province (BRP). For example, we found that M estimates (as a function of SRL) span about 0.3–0.4 units (figure 1) owing to differences in the fault parameter used; age, quality, and size of historical earthquake databases; and fault type and region considered.
Astley, H C; Abbott, E M; Azizi, E; Marsh, R L; Roberts, T J
2013-11-01
Maximal performance is an essential metric for understanding many aspects of an organism's biology, but it can be difficult to determine because a measured maximum may reflect only a peak level of effort, not a physiological limit. We used a unique opportunity provided by a frog jumping contest to evaluate the validity of existing laboratory estimates of maximum jumping performance in bullfrogs (Rana catesbeiana). We recorded video of 3124 bullfrog jumps over the course of the 4-day contest at the Calaveras County Jumping Frog Jubilee, and determined jump distance from these images and a calibration of the jump arena. Frogs were divided into two groups: 'rental' frogs collected by fair organizers and jumped by the general public, and frogs collected and jumped by experienced, 'professional' teams. A total of 58% of recorded jumps surpassed the maximum jump distance in the literature (1.295 m), and the longest jump was 2.2 m. Compared with rental frogs, professionally jumped frogs jumped farther, and the distribution of jump distances for this group was skewed towards long jumps. Calculated muscular work, historical records and the skewed distribution of jump distances all suggest that the longest jumps represent the true performance limit for this species. Using resampling, we estimated the probability of observing a given jump distance for various sample sizes, showing that large sample sizes are required to detect rare maximal jumps. These results show the importance of sample size, animal motivation and physiological conditions for accurate maximal performance estimates.
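The resampling step described above can be illustrated with a toy bootstrap. The jump-distance distribution below is synthetic, not the contest data; the point is only that rare maximal jumps need large samples to be observed:

```python
import numpy as np

def prob_observe_extreme(jumps, threshold, sample_size, n_boot=10_000, seed=1):
    """Bootstrap estimate of the probability that a random sample of
    `sample_size` jumps contains at least one jump >= threshold."""
    rng = np.random.default_rng(seed)
    jumps = np.asarray(jumps, float)
    hits = 0
    for _ in range(n_boot):
        sample = rng.choice(jumps, size=sample_size, replace=True)
        hits += sample.max() >= threshold
    return hits / n_boot

# Toy distribution: mostly modest jumps, with rare long ones near 2.2 m
rng = np.random.default_rng(0)
jumps = np.clip(rng.normal(1.3, 0.35, 3000), 0.2, 2.2)
p_small = prob_observe_extreme(jumps, 2.0, sample_size=10)
p_large = prob_observe_extreme(jumps, 2.0, sample_size=500)
```

A small sample rarely captures a near-record jump, while a sample of hundreds almost always does, mirroring the paper's argument about sample size and maximal-performance estimates.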
Calibration of Crustal Historical Earthquakes from Intra-Carpathian Region of Romania
NASA Astrophysics Data System (ADS)
Oros, Eugen; Popa, Mihaela; Rogozea, Maria
2017-12-01
The main task of this study is to elaborate a set of mutual conversion relations between macroseismic intensity and magnitude, necessary for the calibration of the historical crustal earthquakes that occurred in the Intra-Carpathian region of Romania, as a prerequisite for the homogenization of the parametric catalogue of Romanian earthquakes. To achieve this goal, we selected a set of earthquakes for which we have quality macroseismic data and an instrumentally obtained moment magnitude Mw. These seismic events were used to determine the relations between Mw and the peak/epicentral intensity and the isoseismal areas for I=3, I=4, and I=5: Mw = f (Imax / Io), Mw = f (Imax / Io, h), Mw = f (A3, A4, A5). We investigated several variants of such relationships and combinations, taking into account that the macroseismic data necessary for the re-evaluation of historical earthquakes in the investigated region are available in several forms. Thus, a number of investigations provided various information resulting from the revision of the initial historical data: 1) intensity data points (IDPs) assimilated or not with the epicentral intensity after analysis of their correlation with recent seismicity data and/or active tectonics/seismotectonics; 2) sets of intensities obtained in several localities (IDPs) with variable values whose maxima can be considered equal to the epicentral intensity (Io); 3) sets of intensities obtained in several localities (IDPs) but without obvious maximum values assimilable with the epicentral intensity; 4) maps with isoseismals; 5) information on the areas in which the investigated earthquake was felt, the area of perceptibility (e.g. I = 3 EMS during the day and I = 4 EMS at night), or the surfaces corresponding to a certain well-defined degree of intensity. The obtained relationships were validated using a set of earthquakes with instrumental source parameters (localization, depth, Mw).
These relationships yield redundant results that are meaningful for assessing the quality and credibility of the primary data used (e.g. IDPs, isoseismals) and for the correct determination of Mw.
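As a simple illustration of one such conversion relation, a least-squares fit of the form Mw = a + b * Io can be sketched as below. The calibration values are hypothetical, invented for the example, and do not come from the study's dataset:

```python
import numpy as np

def fit_intensity_magnitude(io, mw):
    """Least-squares fit of the conversion Mw = a + b * Io, using calibration
    earthquakes with both instrumental Mw and epicentral intensity Io."""
    io = np.asarray(io, float)
    A = np.column_stack([np.ones_like(io), io])   # design matrix [1, Io]
    (a, b), *_ = np.linalg.lstsq(A, np.asarray(mw, float), rcond=None)
    return a, b

# Hypothetical calibration set (Io in EMS degrees, instrumental Mw)
io = [5.0, 6.0, 6.5, 7.0, 7.5, 8.0]
mw = [4.1, 4.7, 5.0, 5.4, 5.7, 6.1]
a, b = fit_intensity_magnitude(io, mw)
mw_hist = a + b * 7.0   # Mw estimate for a historical event with Io = 7
```

In practice each proposed relation (with or without depth h, or based on isoseismal areas A3-A5) would be fitted and cross-validated the same way against the instrumental calibration events.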
Millar, Melissa A; Coates, David J; Byrne, Margaret
2014-10-01
Understanding patterns of pollen dispersal and variation in mating systems provides insights into the evolutionary potential of plant species and how historically rare species with small disjunct populations persist over long time frames. This study aims to quantify the role of pollen dispersal and the mating system in maintaining contemporary levels of connectivity and facilitating persistence of small populations of the historically rare Acacia woodmaniorum. Progeny arrays of A. woodmaniorum were genotyped with nine polymorphic microsatellite markers. A low number of fathers contributed to seed within single pods; therefore, sampling to remove bias of correlated paternity was implemented for further analysis. Pollen immigration and mating system parameters were then assessed in eight populations of varying size and degree of isolation. Pollen immigration into small disjunct populations was extensive (mean minimum estimate 40 % and mean maximum estimate 57 % of progeny) and dispersal occurred over large distances (≤1870 m). Pollen immigration resulted in large effective population sizes and was sufficient to ensure adaptive and inbreeding connectivity in small disjunct populations. High outcrossing (mean tm = 0.975) and a lack of apparent inbreeding suggested that a self-incompatibility mechanism is operating. Population parameters, including size and degree of geographic disjunction, were not useful predictors of pollen dispersal or components of the mating system. Extensive long-distance pollen dispersal and a highly outcrossed mating system are likely to play a key role in maintaining genetic diversity and limiting negative genetic effects of inbreeding and drift in small disjunct populations of A. woodmaniorum.
It is proposed that maintenance of genetic connectivity through habitat and pollinator conservation will be a key factor in the persistence of this and other historically rare species with similar extensive long-distance pollen dispersal and highly outcrossed mating systems. © The Author 2014. Published by Oxford University Press on behalf of the Annals of Botany Company.
NASA Astrophysics Data System (ADS)
Naruhashi, R.; Satake, K.; Heidarzadeh, M.; Harada, T.
2014-12-01
Gokasho Bay is an enclosed inner bay with typical ria coasts and drowned valleys. It is located on the central Kii Peninsula and faces the Nankai Trough subduction zone. This Kumano-nada coastal area has been repeatedly struck by great historical tsunamis. For the 1854 Ansei-Tokai earthquake and its tsunami, there are comparatively many historical records, including historical documents and oral traditions of tsunami behavior and damage along the coast. Based on these records, a total of 42 tsunami heights were measured using a laser range finder and a hand level, on the basis of spot elevations given by 1/2500 topographical maps. The average inundation height over the whole bay area was approximately 4-5 m. On the whole, large values were obtained in the closed-off section of the bay. For example, the average value in the Gokasho-ura town area was 4 m, and the maximum run-up height along the Gokasho river was 6.8 m. In Konsa, located in the most closed-off section of the bay, tsunami heights ranged between 4 and 11 m and were higher than those in other districts. Heights were also comparatively high along the eastern coast and eastern bay mouth. We simulate the distribution of tsunami wave heights using numerical modeling and compare the simulation results with the above-mentioned historical data and the results of our field survey. The tsunami simulation was performed based on the fault models of Ando (1975), Aida (1981), and Annaka et al. (2003). After comparing the results of the three fault models, the wave heights based on the model of Annaka et al. (2003) were found to be in better agreement with observations. Moreover, the wave height values in the closed-off section of the bay and at the eastern bay mouth are high, consistent with our survey data.
Kashyap, Ridhi; Villavicencio, Francisco
2016-10-01
We present a micro-founded simulation model that formalizes the "ready, willing, and able" framework, originally used to explain historical fertility decline, to the practice of prenatal sex selection. The model generates sex ratio at birth (SRB) distortions from the bottom up and attempts to quantify plausible levels, trends, and interactions of son preference, technology diffusion, and fertility decline that underpin SRB trajectories at the macro level. Calibrating our model for South Korea, we show how even as the proportion with a preference for sons was declining, SRB distortions emerged due to rapid diffusion of prenatal sex determination technology combined with small but growing propensities to abort at low birth parities. Simulations reveal that relatively low levels of son preference (about 20 % to 30 % wanting one son) can result in skewed SRB levels if technology diffuses early and steadily, and if fertility falls rapidly to encourage sex-selective abortion at low parities. Model sensitivity analysis highlights how the shape of sex ratio trajectories is particularly sensitive to the timing and speed of prenatal sex-determination technology diffusion. The maximum SRB levels reached in a population are influenced by how the readiness to abort rises as a function of the fertility decline.
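A heavily simplified sketch of the "ready, willing, and able" mechanism follows. All probabilities are invented for illustration, and the single redraw after a sex-selective abortion is a deliberate simplification of the paper's micro-simulation:

```python
import numpy as np

def simulate_srb(n_births, p_son_pref, p_tech, p_abort, p_boy=0.512, seed=0):
    """Toy sketch: a conception carrying a girl is aborted and redrawn once
    when the parents prefer a son (willing), can access prenatal sex
    determination (able), and are prepared to act on it (ready)."""
    rng = np.random.default_rng(seed)
    boys = girls = 0
    for _ in range(n_births):
        select = (rng.random() < p_son_pref and rng.random() < p_tech
                  and rng.random() < p_abort)
        boy = rng.random() < p_boy
        if not boy and select:
            boy = rng.random() < p_boy   # single redraw after abortion
        boys += boy
        girls += not boy
    return 100 * boys / girls            # SRB, boys per 100 girls

# Same son preference; only technology diffusion differs between runs
srb_baseline = simulate_srb(100_000, p_son_pref=0.25, p_tech=0.0, p_abort=0.3)
srb_skewed   = simulate_srb(100_000, p_son_pref=0.25, p_tech=0.9, p_abort=0.3)
```

Even with a modest 25 % son preference, widespread technology access pushes the simulated SRB well above the biological baseline of roughly 105, echoing the macro-level distortions the model generates from the bottom up.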
Disaggregating from daily to sub-daily rainfall under a future climate
NASA Astrophysics Data System (ADS)
Westra, S.; Evans, J.; Mehrotra, R.; Sharma, A.
2012-04-01
We describe an algorithm for disaggregating daily rainfall into sub-daily rainfall 'fragments' (continuous fine-resolution rainfall sequences whose total depth sums to the daily rainfall amount) under a future, warmer climate. The basis of the algorithm is to re-sample sub-daily fragments from the historical record conditional on the total daily rainfall amount and a range of atmospheric predictors representative of the future climate. The logic is that as the atmosphere warms, future rainfall patterns will be more reflective of historical rainfall patterns which occurred on warmer days at the same location, or at locations with an atmospheric profile more reflective of expected future conditions. When examining the scaling from daily to sub-daily rainfall over the historical record, it was found that the relationship varied significantly by season and by location, with rainfall patterns in warmer seasons or at warmer locations typically showing more intense rain falling over shorter periods compared with cooler seasons and stations. Importantly, by regressing against atmospheric covariates such as temperature, this effect was almost entirely eliminated, providing a basis for suggesting the approach may be valid when extrapolating sub-daily sequences to a future climate. The method-of-fragments algorithm was then applied to nine stations around Australia and showed that, holding the total daily rainfall constant, maximum intensity increased per degree change in temperature by between 4.1% and 13.4% for the maximum six-minute burst and between 3.1% and 6.8% for the maximum one-hour burst, and the fraction of the day with no rainfall increased by between 1.5% and 3.5%. This highlights that a large proportion of the change to the distribution of precipitation in the future is likely to occur at sub-daily timescales, with significant implications for many hydrological systems.
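The re-sampling step of the method of fragments can be sketched as follows. The nearest-neighbour selection on (daily total, temperature) and the synthetic history are illustrative assumptions, not the paper's exact conditioning scheme:

```python
import numpy as np

def resample_fragments(daily_total, temp, history, k=10, seed=0):
    """Method-of-fragments sketch: pick a historical day with a similar daily
    total and temperature, then rescale its sub-daily pattern so its depths
    sum to the target daily total. `history` holds tuples of
    (daily_total, temperature, sub_daily_depths)."""
    rng = np.random.default_rng(seed)
    totals = np.array([h[0] for h in history])
    temps = np.array([h[1] for h in history])
    # squared distance in standardized (total, temperature) space
    d = (((totals - daily_total) / totals.std()) ** 2
         + ((temps - temp) / temps.std()) ** 2)
    idx = rng.choice(np.argsort(d)[:k])     # random pick among k nearest days
    pattern = history[idx][2]
    fragments = pattern / pattern.sum()     # fragments sum to unit depth
    return daily_total * fragments

# Hypothetical history: 100 days, each with 24 hourly depths
rng = np.random.default_rng(1)
history = [(rng.uniform(1, 50), rng.uniform(5, 35), rng.random(24))
           for _ in range(100)]
sub_daily = resample_fragments(20.0, temp=30.0, history=history)
```

Conditioning the neighbour search on a warmer temperature biases the selection toward days whose sub-daily pattern concentrates rain into shorter bursts, which is the mechanism the paper exploits for a future climate.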
New seismic sources parameterization in El Salvador. Implications to seismic hazard.
NASA Astrophysics Data System (ADS)
Alonso-Henar, Jorge; Staller, Alejandra; Jesús Martínez-Díaz, José; Benito, Belén; Álvarez-Gómez, José Antonio; Canora, Carolina
2014-05-01
El Salvador is located on the Pacific active margin of Central America, where the subduction of the Cocos Plate under the Caribbean Plate at a rate of ~80 mm/yr is the main seismic source, although the seismic sources located in the Central American Volcanic Arc have been responsible for some of the most damaging earthquakes in El Salvador. The El Salvador Fault Zone (ESFZ) is the main geological structure in El Salvador and accommodates 14 mm/yr of horizontal displacement between the Caribbean Plate and the forearc sliver. The ESFZ is a right-lateral strike-slip fault zone c. 150 km long and 20 km wide. This shear band distributes the deformation among strike-slip faults trending N90º-100ºE and secondary normal faults trending N120º-N170º. The ESFZ is relayed westward by the Jalpatagua Fault and becomes less clear eastward, disappearing at the Golfo de Fonseca. Five sections have been proposed for the whole fault zone, from west to east: the ESFZ Western Section, San Vicente Section, Lempa Section, Berlin Section, and San Miguel Section. Paleoseismic studies carried out in the Berlin and San Vicente segments reveal an important amount of Quaternary deformation and paleoearthquakes up to Mw 7.6. In this study we present 45 capable seismic sources in El Salvador and their preliminary slip rates from geological and GPS data; the detailed GPS results are presented by Staller et al. (2014) in a complementary communication. The calculated preliminary slip rates range from 0.5 to 8 mm/yr for individual faults within the ESFZ. We calculated maximum magnitudes from the mapped lengths and paleoseismic observations. We propose different earthquake scenarios, including the potential combined rupture of different fault sections of the ESFZ, resulting in maximum earthquake magnitudes of Mw 7.6. We used deterministic models to calculate the acceleration distribution related to the maximum earthquakes of the different proposed scenarios.
The spatial distributions of seismic accelerations are compared and calibrated using the February 13, 2001 earthquake as a control earthquake. To explore the sources of historical earthquakes, we compare synthetic acceleration maps with the historical earthquakes of March 6, 1719 and June 8, 1917.
Applicability of AgMERRA Forcing Dataset to Fill Gaps in Historical in-situ Meteorological Data
NASA Astrophysics Data System (ADS)
Bannayan, M.; Lashkari, A.; Zare, H.; Asadi, S.; Salehnia, N.
2015-12-01
Integrated assessment studies of food production systems use crop models to simulate the effects of climate and socio-economic changes on food security. Climate forcing data is one of the key inputs of crop models. This study evaluated the performance of the AgMERRA climate forcing dataset in filling gaps in historical in-situ meteorological data for different climatic regions of Iran. The AgMERRA dataset was intercompared with an in-situ observational dataset of daily maximum and minimum temperature and precipitation over the 1980-2010 period via the Root Mean Square Error (RMSE), Mean Absolute Error (MAE), and Mean Bias Error (MBE) for 17 stations in four climatic regions: humid and moderate; cold; dry and arid; and hot and humid. Moreover, the probability distribution function and cumulative distribution function were compared between the model and observed data. The measures of agreement between the AgMERRA data and the observed data demonstrated small errors in the model data for all stations. Except at stations located in cold regions, the model data showed under-prediction of daily maximum temperature and precipitation; however, the bias was not significant. In addition, the probability distribution function and cumulative distribution function showed the same trend for all stations between the model and observed data. Therefore, the AgMERRA dataset is sufficiently reliable to fill gaps in historical observations in the different climatic regions of Iran, and it could also serve as a basis for future climate scenarios.
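The three agreement measures used in the intercomparison are standard. A minimal sketch with invented station values follows:

```python
import numpy as np

def rmse_mae_mbe(obs, model):
    """Agreement measures for comparing a forcing dataset with station
    observations: RMSE, MAE, and mean bias error (model minus observed;
    a negative MBE indicates under-prediction)."""
    obs, model = np.asarray(obs, float), np.asarray(model, float)
    diff = model - obs
    return (np.sqrt(np.mean(diff**2)),   # RMSE
            np.mean(np.abs(diff)),       # MAE
            np.mean(diff))               # MBE

# Hypothetical daily maximum temperatures (°C): model slightly under-predicts
obs   = [30.1, 28.4, 31.0, 29.5]
model = [29.8, 28.0, 30.6, 29.4]
rmse, mae, mbe = rmse_mae_mbe(obs, model)
```

Applied per station and per variable, a small RMSE/MAE with an MBE near zero would support the gap-filling conclusion reported above.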
A forensic re-analysis of one of the deadliest European tornadoes
NASA Astrophysics Data System (ADS)
Holzer, Alois M.; Schreiner, Thomas M. E.; Púčik, Tomáš
2018-06-01
Extremely rare events with high potential impact, such as violent tornadoes, are of strong interest for climatology and risk assessment. In order to obtain more knowledge about the most extreme events, it is vital to study historical cases. The purpose of this paper is twofold: (1) to demonstrate how a windstorm catastrophe that happened 100 years ago, such as the Wiener Neustadt, Lower Austria, tornado on 10 July 1916, can be successfully re-analyzed using a forensic approach, and (2) to propose a repeatable working method for assessing damage and reconstructing the path and magnitude of local windstorm and tornado cases with sufficient historical sources. Based on the results of the forensic re-analyses, a chronology of the tornado impact is presented, followed by a description of the key tornado characteristics: a maximum intensity rating of F4, a damage path length of 20 km and a maximum visible tornado diameter of 1 km. Compared to a historical scientific study published soon after the event, additional new findings are presented, namely the existence of two predecessor tornadoes and a higher number of fatalities: at least 34 instead of 32. While the storm-scale meteorology could not be reconstructed, rich damage data sources for the urban area of Wiener Neustadt facilitated a detailed analysis of damage tracks and wind intensities within the tornado. The authors postulate the requirement for an International Fujita Scale to rate tornadoes globally in a consistent way, based on comparable damage indicators.
Stirling, M.; Petersen, M.
2006-01-01
We compare the historical record of earthquake hazard experienced at 78 towns and cities (sites) distributed across New Zealand and the continental United States with the hazard estimated from the national probabilistic seismic-hazard (PSH) models for the two countries. The two PSH models are constructed with similar methodologies and data. Our comparisons show a tendency for the PSH models to slightly exceed the historical hazard in New Zealand and westernmost continental United States interplate regions, but show lower hazard than that of the historical record in the continental United States intraplate region. Factors such as non-Poissonian behavior, parameterization of active fault data in the PSH calculations, and uncertainties in estimation of ground-motion levels from historical felt intensity data for the interplate regions may have led to the higher-than-historical levels of hazard at the interplate sites. In contrast, the less-than-historical hazard for the remaining continental United States (intraplate) sites may be largely due to site conditions not having been considered at the intraplate sites, and uncertainties in correlating ground-motion levels to historical felt intensities. The study also highlights the importance of evaluating PSH models at more than one region, because the conclusions reached on the basis of a solely interplate or intraplate study would be very different.
40 CFR 141.11 - Maximum contaminant levels for inorganic chemicals.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 40 Protection of Environment 22 2010-07-01 2010-07-01 false Maximum contaminant levels for inorganic chemicals. 141.11 Section 141.11 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY... § 141.11 Maximum contaminant levels for inorganic chemicals. (a) The maximum contaminant level for...
40 CFR 141.11 - Maximum contaminant levels for inorganic chemicals.
Code of Federal Regulations, 2014 CFR
2014-07-01
... 40 Protection of Environment 23 2014-07-01 2014-07-01 false Maximum contaminant levels for inorganic chemicals. 141.11 Section 141.11 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY... § 141.11 Maximum contaminant levels for inorganic chemicals. (a) The maximum contaminant level for...
40 CFR 141.11 - Maximum contaminant levels for inorganic chemicals.
Code of Federal Regulations, 2012 CFR
2012-07-01
... 40 Protection of Environment 24 2012-07-01 2012-07-01 false Maximum contaminant levels for inorganic chemicals. 141.11 Section 141.11 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY... § 141.11 Maximum contaminant levels for inorganic chemicals. (a) The maximum contaminant level for...
Harkisoen, S; Arends, J E; van den Hoek, J A R; Whelan, J; van Erpecum, K J; Boland, G J; Hoepelman, A I M
2014-12-01
Some studies done in Asian patients have shown that serum levels of hepatitis B virus (HBV) DNA predict the development of cirrhosis. However, it is unclear whether this also applies to non-Asian patients. This study investigated historic and current HBV DNA and quantitative hepatitis B surface antigen (HBsAg) levels as predictors of cirrhosis in non-Asian women with chronic HBV. A retrospective cohort study of non-Asian women with chronic HBV was performed. Among other variables, HBV DNA and quantitative HBsAg levels were measured in stored historic serum samples obtained during pregnancy (period 1990-2004) and current serum samples (period 2011-2012) to determine any association with liver cirrhosis by liver stiffness measurement (LSM). One hundred and nineteen asymptomatic, treatment-naïve non-Asian women were included; the median number of years between the historic sample and the current sample was 17 (interquartile range (IQR) 13-20). The median historic log HBV DNA and quantitative log HBsAg levels were 2.5 (IQR 1.9-3.4) IU/ml and 4.2 (IQR 3.6-4.5) IU/ml, respectively. LSM diagnosed 14 patients (12%) with F3-F4 fibrosis, i.e. stiffness >8.1 kPa. No association with cirrhosis was found for historic HBV DNA (relative risk (RR) 0.34, 95% confidence interval (CI) 0.05-2.44) or for the quantitative HBsAg level (HBsAg level >1000 IU/ml, RR 0.35, 95% CI 0.11-1.11). Multivariable analysis identified alcohol consumption (odds ratio (OR) 6.4, 95% CI 1.3-30.1), aspartate aminotransferase >0.5 times the upper limit of normal (OR 15.4, 95% CI 1.9-122.6), and prothrombin time (OR 12.0, 95% CI 1.2-120.4), but not HBV DNA or quantitative HBsAg level, to be independent predictors of the presence of cirrhosis. Neither historic nor current HBV DNA nor the quantitative HBsAg level is associated with the development of HBV-related cirrhosis in non-Asian women. Copyright © 2014 The Authors. Published by Elsevier Ltd. All rights reserved.
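The relative risks and 95% confidence intervals quoted above can be reproduced from a 2×2 table; a minimal sketch using the standard Katz log method, with hypothetical counts rather than the study's data:

```python
import math

def relative_risk(a, b, c, d):
    """Relative risk of an outcome for exposed vs. unexposed groups,
    with a Katz-log 95% confidence interval.

    2x2 table:           outcome+   outcome-
        exposed            a          b
        unexposed          c          d
    """
    p_exposed = a / (a + b)
    p_unexposed = c / (c + d)
    rr = p_exposed / p_unexposed
    # Standard error of log(RR), Katz method
    se = math.sqrt(1 / a - 1 / (a + b) + 1 / c - 1 / (c + d))
    lo = math.exp(math.log(rr) - 1.96 * se)
    hi = math.exp(math.log(rr) + 1.96 * se)
    return rr, (lo, hi)
```

As in the abstract, an RR whose interval spans 1 (e.g. 0.34 with CI 0.05-2.44) is read as no significant association.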
Hedefalk, Finn; Svensson, Patrick; Harrie, Lars
2017-01-01
This paper presents datasets that enable historical longitudinal studies of micro-level geographic factors in a rural setting. These types of datasets are new, as historical demography studies have generally failed to properly include micro-level geographic factors. Our datasets describe the geography of five Swedish rural parishes, and by linking them to a longitudinal demographic database, we obtain a geocoded population (at the property unit level) for this area for the period 1813–1914. The population is a subset of the Scanian Economic Demographic Database (SEDD). The geographic information includes the following feature types: property units, wetlands, buildings, roads and railroads. The property units and wetlands are stored in object-lifeline time representations (information about the creation, changes and end of each object is recorded over time), whereas the other feature types are stored as snapshots in time. Thus, the datasets present one of the first opportunities to study historical spatio-temporal patterns at the micro-level. PMID:28398288
Supporting document for the historical tank content estimate for AY-tank farm
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brevick, C H; Stroup, J L; Funk, J. W.
1997-03-12
This Supporting Document provides historical in-depth characterization information on AY-Tank Farm, such as historical waste transfer and level data, tank physical information, temperature plots, liquid observation well plots, chemical analyte and radionuclide inventories for the Historical Tank Content Estimate Report for the Southeast Quadrant of the Hanford 200 Areas.
40 CFR 142.65 - Variances and exemptions from the maximum contaminant levels for radionuclides.
Code of Federal Regulations, 2013 CFR
2013-07-01
... maximum contaminant levels for radionuclides. 142.65 Section 142.65 Protection of Environment... Available § 142.65 Variances and exemptions from the maximum contaminant levels for radionuclides. (a)(1) Variances and exemptions from the maximum contaminant levels for combined radium-226 and radium-228, uranium...
40 CFR 142.65 - Variances and exemptions from the maximum contaminant levels for radionuclides.
Code of Federal Regulations, 2012 CFR
2012-07-01
... maximum contaminant levels for radionuclides. 142.65 Section 142.65 Protection of Environment... Available § 142.65 Variances and exemptions from the maximum contaminant levels for radionuclides. (a)(1) Variances and exemptions from the maximum contaminant levels for combined radium-226 and radium-228, uranium...
40 CFR 142.65 - Variances and exemptions from the maximum contaminant levels for radionuclides.
Code of Federal Regulations, 2014 CFR
2014-07-01
... maximum contaminant levels for radionuclides. 142.65 Section 142.65 Protection of Environment... Available § 142.65 Variances and exemptions from the maximum contaminant levels for radionuclides. (a)(1) Variances and exemptions from the maximum contaminant levels for combined radium-226 and radium-228, uranium...
40 CFR 142.65 - Variances and exemptions from the maximum contaminant levels for radionuclides.
Code of Federal Regulations, 2011 CFR
2011-07-01
... maximum contaminant levels for radionuclides. 142.65 Section 142.65 Protection of Environment... Available § 142.65 Variances and exemptions from the maximum contaminant levels for radionuclides. (a)(1) Variances and exemptions from the maximum contaminant levels for combined radium-226 and radium-228, uranium...
40 CFR Appendix I to Part 257 - Maximum Contaminant Levels (MCLs)
Code of Federal Regulations, 2010 CFR
2010-07-01
... 40 Protection of Environment 24 2010-07-01 2010-07-01 false Maximum Contaminant Levels (MCLs) I Appendix I to Part 257 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) SOLID WASTES... Part 257—Maximum Contaminant Levels (MCLs) Maximum Contaminant Levels (MCLs) Promulgated Under the Safe...
Chuck Harrell; Shep Zedaker
2010-01-01
More than a century of fire suppression has resulted in the increased abundance of Rosebay Rhododendron (Rhododendron maximum L.) throughout the Appalachian Mountains. Rhododendron has historically been most frequently associated with mesic sites, but can now be found proliferating toward drier midslope and ridgetop areas. The increased presence of...
Federal Register 2010, 2011, 2012, 2013, 2014
2010-08-18
... show virtual nontoxicity for all routes of exposure and it can be concluded that any dietary risks... thymic remnants; however, the increase was within the range of the performing laboratories historical... water are well below (6 to 7 orders of magnitude) the maximum doses used in laboratory testing, where no...
Ethnic Violence in Southern Thailand: The Anomaly of Satun
2012-06-01
This research uses a historical comparative analysis to investigate the differences...
40 CFR 51.858 - Criteria for determining conformity of general Federal actions.
Code of Federal Regulations, 2010 CFR
2010-07-01
... reflect the historical activity levels that occurred in the geographic area affected by the proposed... future years (described in § 51.859(d)) using the historic activity levels (described in paragraph (a)(5... result in a level of emissions which, together with all other emissions in the nonattainment (or...
40 CFR 93.158 - Criteria for determining conformity of general Federal actions.
Code of Federal Regulations, 2011 CFR
2011-07-01
... reflect the historical activity levels that occurred in the geographic area affected by the proposed...(d)) using the historic activity levels (described in paragraph (a)(5)(iv)(A) of this section) and... responsible for the applicable SIP to result in a level of emissions which, together with all other emissions...
Delamater, Paul L; Finley, Andrew O; Banerjee, Sudipto
2012-05-15
There is now a large body of literature supporting a linkage between exposure to air pollutants and asthma morbidity. However, the extent and significance of this relationship varies considerably between pollutants, location, scale of analysis, and analysis methods. Our primary goal is to evaluate the relationship between asthma hospitalizations, levels of ambient air pollution, and weather conditions in Los Angeles (LA) County, California, an area with a historical record of heavy air pollution. County-wide measures of carbon monoxide (CO), nitrogen dioxide (NO2), ozone (O3), particulate matter <10 μm (PM10), particulate matter <2.5 μm (PM2.5), maximum temperature, and relative humidity were collected for all months from 2001 to 2008. We then related these variables to monthly asthma hospitalization rates using Bayesian regression models with temporal random effects. We evaluated model performance using a goodness-of-fit criterion and predictive ability. Asthma hospitalization rates in LA County decreased between 2001 and 2008. Traffic-related pollutants, CO and NO2, were significantly and positively correlated with asthma hospitalizations. PM2.5 also had a positive, significant association with asthma hospitalizations. PM10, relative humidity, and maximum temperature produced mixed results, whereas O3 was non-significant in all models. Inclusion of temporal random effects satisfies statistical model assumptions, improves model fit, and yields increased predictive accuracy and precision compared to their non-temporal counterparts. Generally, pollution levels and asthma hospitalizations decreased during the 9-year study period. Our findings also indicate that after accounting for seasonality in the data, the asthma hospitalization rate has a significant positive relationship with ambient levels of CO, NO2, and PM2.5. Copyright © 2012 Elsevier B.V. All rights reserved.
Should legislation regarding maximum Pb and Cd levels in human food also cover large game meat?
Taggart, Mark A; Reglero, Manuel M; Camarero, Pablo R; Mateo, Rafael
2011-01-01
Game meat may be contaminated with metals and metalloids if animals reside in anthropogenically polluted areas, or if ammunition used to kill the game contaminates the meat. Muscle tissue from red deer and wild boar shot in Ciudad Real province (Spain) in 2005-06 was analysed for As, Pb, Cu, Zn, Se and Cd. Samples were collected from hunting estates within and outside an area that has historically been used for mining, smelting and refining various metals and metalloids. Meat destined for human consumption contained more Pb, As and Se (red deer) and Pb (boar) when harvested from animals that had resided in mined areas. Age-related accumulation of Cd, Zn and As (in deer) and Cd, Cu and Se (in boar) was also observed. Two boar meat samples contained high Pb, at 352 and 2408 μg/g d.w., and these were likely to have been contaminated by Pb ammunition. Likewise, 19-84% of all samples (depending on species and sampling area) had Pb levels > 0.1 μg/g w.w., the EU maximum residue level (MRL) for farm-reared meat. Between 9 and 43% of samples exceeded comparable Cd limits. Such data highlight a discrepancy between what is considered safe for human consumption in popular farmed meat (chicken, beef, lamb) and what may often exist in game. A risk assessment is presented that describes the number of meals required to exceed current provisional tolerable weekly intakes (PTWIs) for Pb and Cd, and the potential contribution of large game consumption to such intake limit criteria. Copyright © 2010 Elsevier Ltd. All rights reserved.
Plasma Cytokine Concentrations in Workers Exposed to 2,3,7,8-tetrachlorodibenzo-p-dioxin (TCDD)
Saberi Hosnijeh, Fatemeh; Boers, Daisy; Portengen, Lützen; Bueno-de-Mesquita, H. Bas; Heederik, Dick; Vermeulen, Roel
2012-01-01
Objectives: Few epidemiological studies have studied the effect of 2,3,7,8-tetrachlorodibenzo-p-dioxin (TCDD) on blood cytokine levels. In this study we investigated changes in plasma levels of a large panel of cytokines, chemokines, and growth factors among workers from a Dutch historical cohort occupationally exposed to chlorophenoxy herbicides and contaminants including TCDD. Methods: Eighty-five workers who had been exposed to either high (n = 47) or low (n = 38) TCDD levels more than 30 years before serum collection were included in the current investigation. Plasma levels of 16 cytokines, 10 chemokines, and 6 growth factors were measured. Current plasma levels of TCDD (TCDDcurrent) were determined by high-resolution gas chromatography/isotope-dilution high-resolution mass spectrometry. TCDD blood levels at the time of last exposure (TCDDmax) were estimated using a one-compartment first-order kinetic model. Results: Blood levels of most analytes had a negative association with current and estimated past maximum TCDD levels. These decreases reached formal statistical significance for fractalkine, transforming growth factor alpha (TGF-α), and fibroblast growth factor 2 (FGF2) with increasing TCDD levels. Conclusion: Our study showed a general reduction in most analyte levels, with the strongest effects for fractalkine, FGF2, and TGF-α. These findings suggest that TCDD exposure could suppress the immune system and that chemokine- and growth factor-dependent cellular pathway changes caused by TCDD may play a role in TCDD toxicity and associated health effects. PMID:22655272
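Back-extrapolating TCDDmax from TCDDcurrent with a one-compartment first-order kinetic model reduces to reversing exponential decay; a minimal sketch, in which the default half-life is an illustrative assumption (a figure of roughly 7 years is often cited for TCDD in serum, but it is not a value taken from this study):

```python
import math

def back_extrapolate(c_current, years_elapsed, half_life_years=7.1):
    """Estimate the concentration at the time of last exposure from the
    currently measured level, assuming one-compartment first-order
    elimination:
        C(t) = C0 * exp(-k * t),  with k = ln(2) / half-life
    so the back-extrapolated initial level is C0 = C(t) * exp(k * t).
    half_life_years=7.1 is an illustrative assumption, not a study value.
    """
    k = math.log(2) / half_life_years
    return c_current * math.exp(k * years_elapsed)
```

For example, a level measured one half-life after last exposure back-extrapolates to exactly twice the measured value.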
Winner, M.D.; Lyke, W.L.
1986-01-01
Historical ground-water withdrawals and a general water-level decline in the Black Creek and upper Cape Fear aquifers of the central Coastal Plain of North Carolina are documented. Total municipal and industrial pumpage from these aquifers has increased from approximately 120,000 gal/day (gpd) in 1910 to >21 million gpd in 1980. Major pumpage, > 10,000 gpd, began around 1900. Since that time, per capita water use in the central Coastal Plain area has ranged from 17 to 172 gpd/person. The higher values partially represent the increasing availability and use of modern conveniences since the World War II era. The range of per capita water use can be subdivided according to general water-use and population characteristics for both urban and rural areas. The pumpage of ground water from the Black Creek and upper Cape Fear aquifers has created water-level declines from 0.5 to 4.9 ft/year since 1900. Approximately a third of the study area has experienced a decline > 50 ft up to the period 1979-1981, with 148 ft being the maximum.
Ford, Karl L; Beyer, W Nelson
2014-03-01
Thousands of hard rock mines exist in the western USA and in other parts of the world as a result of historic and current gold, silver, lead, and mercury mining. Many of these sites in the USA are on public lands. Typical mine waste associated with these sites are tailings and waste rock dumps that may be used by wildlife and open-range livestock. This report provides wildlife screening criteria levels for metals in soil and mine waste to evaluate risk and to determine the need for site-specific risk assessment, remediation, or a change in management practices. The screening levels are calculated from toxicity reference values based on maximum tolerable levels of metals in feed, on soil and plant ingestion rates, and on soil to plant uptake factors for a variety of receptors. The metals chosen for this report are common toxic metals found at mining sites: arsenic, cadmium, copper, lead, mercury, and zinc. The resulting soil screening values are well above those developed by the US Environmental Protection Agency. The difference in values was mainly a result of using toxicity reference values that were more specific to the receptors addressed rather than the most sensitive receptor.
Regional influences on reconstructed global mean sea level
NASA Astrophysics Data System (ADS)
Natarov, Svetlana I.; Merrifield, Mark A.; Becker, Janet M.; Thompson, Phillip R.
2017-04-01
Reconstructions of global mean sea level (GMSL) based on tide gauge measurements tend to exhibit common multidecadal rate fluctuations over the twentieth century. GMSL rate changes may result from physical drivers, such as changes in radiative forcing or land water storage. Alternatively, these fluctuations may represent artifacts due to sampling limitations inherent in the historical tide gauge network. In particular, a high percentage of tide gauges used in reconstructions, especially prior to the 1950s, are from Europe and North America in the North Atlantic region. Here a GMSL reconstruction based on the reduced space optimal interpolation algorithm is deconstructed, with the contributions of individual tide gauge stations quantified and assessed regionally. It is demonstrated that the North Atlantic region has a disproportionate influence on reconstructed GMSL rate fluctuations prior to the 1950s, notably accounting for a rate minimum in the 1920s and contributing to a rate maximum in the 1950s. North Atlantic coastal sea level fluctuations related to wind-driven ocean volume redistribution likely contribute to these estimated GMSL rate inflections. The findings support previous claims that multidecadal rate changes in GMSL reconstructions are likely related to the geographic distribution of tide gauge stations within a sparse global network.
Historical Thinking: An Evaluation of Student and Teacher Ability to Analyze Sources
ERIC Educational Resources Information Center
Cowgill, Daniel Armond, II; Waring, Scott M.
2017-01-01
The purpose of this study was to partially replicate the "Historical Problem Solving: A Study of the Cognitive Process Using Historical Evidence" study conducted by Sam Wineburg in 1991. The Historical Problem Solving study conducted by Wineburg (1991) sought to compare the abilities of historians and top-level students as they analyzed…
Supporting document for the historical tank content estimate for AX-tank farm
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brevick, C.H., Westinghouse Hanford
This Supporting Document provides historical in-depth characterization information on AX-Tank Farm, such as historical waste transfer and level data, tank physical information, temperature plots, liquid observation well plots, chemical analyte and radionuclide inventories for the Historical Tank Content Estimate Report for the Northeast Quadrant of the Hanford 200 East Area.
Synopsis of timing measurement techniques used in telecommunications
NASA Technical Reports Server (NTRS)
Zampetti, George
1993-01-01
Historically, Maximum Time Interval Error (MTIE) and Maximum Relative Time Interval Error (MRTIE) have been the main measurement techniques used to characterize timing performance in telecommunications networks. Recently, a new measurement technique, Time Variance (TVAR) has gained acceptance in the North American (ANSI) standards body. TVAR was developed in concurrence with NIST to address certain inadequacies in the MTIE approach. The advantages and disadvantages of each of these approaches are described. Real measurement examples are presented to illustrate the critical issues in actual telecommunication applications. Finally, a new MTIE measurement is proposed (ZTIE) that complements TVAR. Together, TVAR and ZTIE provide a very good characterization of network timing.
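MTIE, as characterized above, is the largest peak-to-peak excursion of the time-error sequence within any observation window of a given length; a brute-force sketch (production analyzers use faster incremental algorithms):

```python
def mtie(x, window):
    """Maximum Time Interval Error of a time-error sample sequence x:
    the largest peak-to-peak excursion (max - min) found within any
    window of `window` consecutive samples."""
    assert 1 <= window <= len(x), "window must fit within the sequence"
    return max(
        max(x[i:i + window]) - min(x[i:i + window])
        for i in range(len(x) - window + 1)
    )
```

MRTIE follows the same computation applied to the time error of one clock measured relative to another.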
NASA Astrophysics Data System (ADS)
Naderi Beni, A.; Lahijani, H.; Mousavi Harami, R.; Arpe, K.; Leroy, S. A. G.; Marriner, N.; Berberian, M.; Andrieu-Ponel, V.; Djamali, M.; Mahboubi, A.; Reimer, P. J.
2013-07-01
Historical literature may constitute a valuable source of information to reconstruct sea-level changes. Here, historical documents and geological records have been combined to reconstruct Caspian sea-level (CSL) changes during the last millennium. In addition to a comprehensive literature review, new data from two short sediment cores were obtained from the south-eastern Caspian coast to identify coastal change driven by water-level changes and to compare the results with other geological and historical findings. The overall results indicate a high-stand during the Little Ice Age, up to -21 m (with extra rises due to man-made river avulsion), and a -28 m low-stand during the Medieval Climate Anomaly, while presently the CSL stands at -26.5 m. A comparison of the CSL curve with other lake systems and proxy records suggests that the main sea-level oscillations are essentially paced by solar irradiance. Although the major control on long-term CSL changes is climatological, the seismicity of the basin creates local changes in base level. These local base-level changes should be considered in any CSL reconstruction.
NASA Astrophysics Data System (ADS)
Naderi Beni, A.; Lahijani, H.; Mousavi Harami, R.; Arpe, K.; Leroy, S. A. G.; Marriner, N.; Berberian, M.; Andrieu-Ponel, V.; Djamali, M.; Mahboubi, A.
2013-03-01
Historical literature may constitute a valuable source of information to reconstruct sea-level changes. Here, historical documents and geological records have been combined to reconstruct Caspian sea-level (CSL) changes during the last millennium. In addition to a literature survey, new data from two short sediment cores were obtained from the south-eastern Caspian coast to identify coastal change driven by water-level changes. Two articulated bivalve shells from the marine facies were radiocarbon dated and calibrated to establish a chronology and to compare them with historical findings. The overall results indicate a high-stand during the Little Ice Age, up to -19 m, with a -28 m low-stand during the Medieval Climate Anomaly, while presently the CSL stands at -26.5 m. A comparison of the CSL curve with other lake systems and proxy records suggests that the main sea-level oscillations are essentially paced by solar irradiance. Although the major control on long-term CSL changes is climatological, the seismicity of the basin could create local changes in base level. These local base-level changes should be considered in any CSL reconstruction.
Grunewald, E.D.; Stein, R.S.
2006-01-01
In order to assess the long-term character of seismicity near Tokyo, we construct an intensity-based catalog of damaging earthquakes that struck the greater Tokyo area between 1649 and 1884. Models for 15 historical earthquakes are developed using calibrated intensity attenuation relations that quantitatively convey uncertainties in event location and magnitude, as well as their covariance. The historical catalog is most likely complete for earthquakes M ≥ 6.7; the largest earthquake in the catalog is the 1703 M ≈ 8.2 Genroku event. Seismicity rates from 80 years of instrumental records, which include the 1923 M = 7.9 Kanto shock, as well as interevent times estimated from the past ~7000 years of paleoseismic data, are combined with the historical catalog to define a frequency-magnitude distribution for 4.5 ≤ M ≤ 8.2, which is well described by a truncated Gutenberg-Richter relation with a b value of 0.96 and a maximum magnitude of 8.4. Large uncertainties associated with the intensity-based catalog are propagated by a Monte Carlo simulation to estimations of the scalar moment rate. The resulting best estimate of moment rate during 1649-2003 is 1.35 × 10^26 dyn cm yr^-1 with considerable uncertainty at the 1σ level: (-0.11, +0.20) × 10^26 dyn cm yr^-1. Comparison with geodetic models of the interseismic deformation indicates that the geodetic moment accumulation and likely moment release rate are roughly balanced over the catalog period. This balance suggests that the extended catalog is representative of long-term seismic processes near Tokyo and so can be used to assess earthquake probabilities. The resulting Poisson (or time-averaged) 30-year probability for M ≥ 7.9 earthquakes is 7-11%.
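The time-averaged probability follows from the Poisson assumption, and the frequency-magnitude distribution from a truncated Gutenberg-Richter relation; a minimal sketch in which the productivity constant `a` and the annual rate used in the example are illustrative assumptions (only the b value of 0.96 and the maximum magnitude of 8.4 come from the abstract):

```python
import math

def poisson_prob(annual_rate, years):
    """Probability of at least one event in `years`, assuming a Poisson
    process with the given time-averaged annual rate."""
    return 1.0 - math.exp(-annual_rate * years)

def gr_cumulative_rate(m, a, b=0.96, m_max=8.4):
    """Annual rate of earthquakes with magnitude >= m under a
    Gutenberg-Richter relation log10 N(m) = a - b*m, sharply cut off at
    m_max (a simplification of the truncated form in the abstract;
    `a` is a hypothetical productivity constant, not a catalog value)."""
    if m > m_max:
        return 0.0
    return 10.0 ** (a - b * m)
```

With an assumed annual rate of about 0.003/yr for M ≥ 7.9, the 30-year Poisson probability is 1 - exp(-0.09), roughly 9%, consistent with the 7-11% range quoted above.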
Preliminary Assessment of Legacy and Current-Use Pesticides in Franciscana Dolphins from Argentina.
Romero, M B; Polizzi, P; Chiodi, L; Medici, S; Blando, M; Gerpe, M
2018-06-01
The change towards intensive agriculture has led to an increase in the use of pesticides. In addition, legacy pesticides such as organochlorines are still present in the environment. Ten Franciscana dolphins were accidentally killed in nets in a coastal area of Buenos Aires province, Argentina. From these animals, organochlorine, organophosphate and pyrethroid pesticides were analyzed in liver, blubber and melon tissues. The concentrations of Σendosulfan ranged from non-detectable values (nd) to 3539 ng g⁻¹ lw, with the maximum level in melon tissue. DDE was present in 60% of all samples at concentrations from nd to 6672 ng g⁻¹ lw, indicating historical DDT contamination. The presence of endosulfan and heptachlor in a nursling calf indicated transfer of these pesticides through lactational and placental transport. The concentrations of organophosphates and pyrethroids were below the limit of detection, reflecting the low persistence of these compounds.
NASA Astrophysics Data System (ADS)
Wenfeng, Liu; Zhaomeng, Wang; Hongmei, Hou
2018-05-01
The dilemma of the “building waste besieged city” has gradually become a national problem in China. International experience shows that establishing a systematic and complete legal framework is an effective way to ensure the comprehensive recycling of construction waste. Starting from domestic conditions, and drawing on the legislative experience of Chinese and foreign laws and regulations on construction waste recycling, the legal system should be designed across multiple fields, angles, and levels to achieve the maximum environmental, social, and economic benefit. This article summarizes the characteristics and notable experience of foreign legislation on the comprehensive utilization of construction waste as a resource, as well as the shortcomings of the relevant Chinese legal regulations, and provides a reference for future research and the implementation of related legislation.
Impact of historical science short stories on students' attitudes and NOS understanding
NASA Astrophysics Data System (ADS)
Hall, Garrett
This study examines the impact of historical short stories on upper- and lower-level high school chemistry students in the second semester of a two-semester course at a large Midwestern suburban school. Research focused on improved understanding of six fundamental nature of science (NOS) concepts made explicit in the stories; recollection of historical examples from the stories that supported student NOS thinking; and student attitudes toward historical stories in comparison to traditional textbook readings, as well as student attitudes regarding scientists and the development of science ideas. Data collection included surveys over the six NOS concepts, attitudes towards science and reading, and semi-structured interviews. Analysis of the data collected in this study indicated significant increases in understanding for three of the six NOS concepts among upper-level students and one of the six concepts among lower-level students. Students were able to draw upon examples from the stories to defend their NOS views, but did so more frequently when responding verbally than in written responses on the surveys. The analysis also showed that students at both levels would rather use historical short stories than a traditional textbook, and found value in learning about scientists and how scientific ideas are developed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Linard, Joshua; Hall, Steve
2016-08-01
This biennial event includes sampling five groundwater locations (four monitoring wells and one domestic well) at the Lakeview, Oregon, Processing Site. For this event, the domestic well (location 0543) could not be sampled because no one was in residence during the sampling event (note: notification was provided to the resident prior to the event). Per Appendix A of the Groundwater Compliance Action Plan, sampling is conducted to monitor groundwater quality on a voluntary basis. Sampling and analyses were conducted as specified in the Sampling and Analysis Plan for U.S. Department of Energy Office of Legacy Management Sites (LMS/PRO/S04351, continually updated). One duplicate sample was collected from location 0505. Water levels were measured at each sampled monitoring well. The constituents monitored at the Lakeview site are manganese and sulfate. Monitoring locations that exceeded the U.S. Environmental Protection Agency (EPA) Secondary Maximum Contaminant Levels for these constituents are listed in Table 1. Review of the time-concentration graphs included in this report indicates that manganese and sulfate concentrations are consistent with historical measurements.
Assessing Field-Specific Risk of Soybean Sudden Death Syndrome Using Satellite Imagery in Iowa.
Yang, S; Li, X; Chen, C; Kyveryga, P; Yang, X B
2016-08-01
Moderate resolution imaging spectroradiometer (MODIS) satellite imagery from 2004 to 2013 was used to assess the field-specific risk of soybean sudden death syndrome (SDS), caused by Fusarium virguliforme, in Iowa. Fields with a high frequency of significant decreases (>10%) in the normalized difference vegetation index (NDVI) observed from late July to mid-August in historical imagery were hypothesized to be at high SDS risk. These high-risk fields had steeper slopes and shorter distances to flowlines, e.g., creeks and drainages, particularly in the Des Moines lobe. Field data in 2014 showed a significantly higher SDS level in the high-risk fields than in fields selected without considering NDVI information. On average, low-risk fields had 10 times lower F. virguliforme soil density, determined by quantitative polymerase chain reaction, compared with the other surveyed fields. Ordinal logistic regression identified positive correlations between SDS and slope, June NDVI, and May maximum temperature, whereas high June maximum temperature hindered SDS. A modeled SDS risk map showed a clear trend in potential disease occurrence across Iowa. Landsat imagery was analyzed similarly to assess the utility of higher-spatial-resolution data. The results demonstrate the strong potential of both MODIS and Landsat imagery for field-specific SDS risk assessment.
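The NDVI and the >10% decrease criterion described above can be sketched as follows; `significant_decrease` and its relative-threshold reading of the criterion are illustrative assumptions, not the study's exact procedure:

```python
def ndvi(nir, red):
    """Normalized Difference Vegetation Index from near-infrared and
    red surface reflectance values."""
    return (nir - red) / (nir + red)

def significant_decrease(ndvi_now, ndvi_ref, threshold=0.10):
    """Flag a relative NDVI drop greater than `threshold` (10% by
    default) against a reference value -- a hypothetical reading of the
    abstract's criterion for marking a field as high SDS risk."""
    return (ndvi_ref - ndvi_now) / ndvi_ref > threshold
```

Applied per pixel and per year, the frequency of flagged drops over a field would then serve as the historical risk indicator.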
User’s manual to update the National Wildlife Refuge System Water Quality Information System (WQIS)
Chojnacki, Kimberly A.; Vishy, Chad J.; Hinck, Jo Ellen; Finger, Susan E.; Higgins, Michael J.; Kilbride, Kevin
2013-01-01
National Wildlife Refuges may have impaired water quality resulting from historic and current land uses, upstream sources, and aerial pollutant deposition. National Wildlife Refuge staff have limited time available to identify and evaluate potential water quality issues. As a result, water quality–related issues may not be resolved until a problem has already arisen. The National Wildlife Refuge System Water Quality Information System (WQIS) is a relational database developed for use by U.S. Fish and Wildlife Service staff to identify existing water quality issues on refuges in the United States. The WQIS database relies on a geospatial overlay analysis of data layers for ownership, streams and water quality. The WQIS provides summary statistics of 303(d) impaired waters and total maximum daily loads for the National Wildlife Refuge System at the national, regional, and refuge level. The WQIS allows U.S. Fish and Wildlife Service staff to be proactive in addressing water quality issues by identifying and understanding the current extent and nature of 303(d) impaired waters and subsequent total maximum daily loads. Water quality data are updated bi-annually, making it necessary to refresh the WQIS to maintain up-to-date information. This manual outlines the steps necessary to update the data and reports in the WQIS.
Barthe, Stéphanie; Binelli, Giorgio; Hérault, Bruno; Scotti-Saintagne, Caroline; Sabatier, Daniel; Scotti, Ivan
2017-02-01
How Quaternary climatic and geological disturbances influenced the composition of Neotropical forests is hotly debated. Rainfall and temperature changes during and/or immediately after the last glacial maximum (LGM) are thought to have strongly affected the geographical distribution and local abundance of tree species. The paucity of the fossil record in Neotropical forests prevents a direct reconstruction of such processes. To describe community-level historical trends in forest composition, we therefore turned to inferential methods based on the reconstruction of past demographic changes. In particular, we modelled the history of rainforests in the eastern Guiana Shield over a timescale of several thousand generations, through the application of approximate Bayesian computation and maximum-likelihood methods to diversity data at nuclear and chloroplast loci in eight species or subspecies of rainforest trees. Depending on the species and on the method applied, we detected population contraction, expansion or stability, with a general trend in favour of stability or expansion, with changes presumably having occurred during or after the LGM. These findings suggest that Guiana Shield rainforests have globally persisted, while expanding, through the Quaternary, but that different species have experienced different demographic events, with a trend towards an increase in the frequency of light-demanding, disturbance-associated species. © 2016 John Wiley & Sons Ltd.
How Historical Information Can Improve Extreme Value Analysis of Coastal Water Levels
NASA Astrophysics Data System (ADS)
Le Cozannet, G.; Bulteau, T.; Idier, D.; Lambert, J.; Garcin, M.
2016-12-01
The knowledge of extreme coastal water levels is useful for coastal flooding studies and the design of coastal defences. When deriving such extremes with standard analyses of tide gauge measurements, one often must deal with a limited effective duration of observation, which can result in large statistical uncertainties. This is even truer when one faces outliers, those particularly extreme values distant from the others. In a recent work (Bulteau et al., 2015), we investigated how historical information on past events reported in archives can reduce statistical uncertainties and put such outlying observations in context. We adapted a Bayesian Markov Chain Monte Carlo method, initially developed in the hydrology field (Reis and Stedinger, 2005), to the specific case of coastal water levels. We applied this method to the site of La Rochelle (France), where the storm Xynthia in 2010 generated a water level considered so far as an outlier. Based on 30 years of tide gauge measurements and 8 historical events since 1890, the results showed a significant decrease in statistical uncertainties on return levels when historical information is used. Also, Xynthia's water level no longer appeared as an outlier, and its annual exceedance probability could reasonably have been predicted beforehand (the predictive probability for 2010, based on data until the end of 2009, was of the same order of magnitude as the standard estimate using data until the end of 2010). Such results illustrate the usefulness of historical information in extreme value analyses of coastal water levels, as well as the relevance of the proposed method for integrating heterogeneous data in such analyses.
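For readers unfamiliar with the underlying idea, a deliberately simplified sketch follows: it fits a Gumbel (EV1) distribution by the method of moments to synthetic annual maxima and shows how augmenting a 30-year record changes a 100-year return level. The Bayesian MCMC treatment of historical events used in the study is more sophisticated and is not reproduced here; all numbers below are invented.

```python
import numpy as np

def gumbel_return_level(annual_maxima, T=100):
    """Moment estimates of the Gumbel parameters, then the T-year level:
    z_T = loc - scale * ln(-ln(1 - 1/T))."""
    x = np.asarray(annual_maxima, dtype=float)
    scale = x.std(ddof=1) * np.sqrt(6) / np.pi
    loc = x.mean() - 0.5772 * scale          # Euler-Mascheroni correction
    return loc - scale * np.log(-np.log(1 - 1 / T))

rng = np.random.default_rng(0)
short = rng.gumbel(loc=3.0, scale=0.3, size=30)    # ~30 yr of gauge maxima (m)
extra = rng.gumbel(loc=3.0, scale=0.3, size=100)   # stand-in for added information

print(gumbel_return_level(short))                            # short record
print(gumbel_return_level(np.concatenate([short, extra])))   # augmented record
```

The point of the illustration is only that a longer effective record stabilises the estimate; properly weighting censored historical observations requires the likelihood-based machinery cited in the abstract.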
Beckwith, Michael A.
2003-01-01
Water-quality samples were collected at 10 sites in the Clark Fork-Pend Oreille and Spokane River Basins in water years 1999 – 2001 as part of the Northern Rockies Intermontane Basins (NROK) National Water-Quality Assessment (NAWQA) Program. Sampling sites were located in varied environments ranging from small streams and rivers in forested, mountainous headwater areas to large rivers draining diverse landscapes. Two sampling sites were located immediately downstream from the large lakes; five sites were located downstream from large-scale historical mining and oreprocessing areas, which are now the two largest “Superfund” (environmental remediation) sites in the Nation. Samples were collected during a wide range of streamflow conditions, more frequently during increasing and high streamflow and less frequently during receding and base-flow conditions. Sample analyses emphasized major ions, nutrients, and selected trace elements. Streamflow during the study ranged from more than 130 percent of the long-term average in 1999 at some sites to 40 percent of the long-term average in 2001. River and stream water in the study area exhibited small values for specific conductance, hardness, alkalinity, and dissolved solids. Dissolved oxygen concentrations in almost all samples were near saturation. Median total nitrogen and total phosphorus concentrations in samples from most sites were smaller than median concentrations reported for many national programs and other NAWQA Program study areas. The only exceptions were two sites downstream from large wastewater-treatment facilities, where median concentrations of total nitrogen exceeded the national median. Maximum concentrations of total phosphorus in samples from six sites exceeded the 0.1 milligram per liter threshold recommended for limiting nuisance aquatic growth. 
Concentrations of arsenic, cadmium, copper, lead, mercury, and zinc were largest in samples from sites downstream from historical mining and ore-processing areas in the upper Clark Fork in Montana and the South Fork Coeur d’Alene River in Idaho. Concentrations of dissolved lead in all 32 samples from the South Fork Coeur d’Alene River exceeded the Idaho chronic criterion for the protection of aquatic life at the median hardness level measured during the study. Concentrations of dissolved zinc in all samples collected at this site exceeded both the chronic and acute criteria at all hardness levels measured. When all data from all NROK sites were combined, median concentrations of dissolved arsenic, dissolved and total recoverable copper, total recoverable lead, and total recoverable zinc in the NROK study area appeared to be similar to or slightly smaller than median concentrations at sites in other NAWQA Program study areas in the Western United States affected by historical mining activities. Although the NROK median total recoverable lead concentration was the smallest among the three Western study areas compared, concentrations in several NROK samples were an order of magnitude larger than the maximum concentrations measured in the Upper Colorado River and Great Salt Lake Basins. Dissolved cadmium, dissolved lead, and total recoverable zinc concentrations at NROK sites were more variable than in the other study areas; concentrations ranged over almost three orders of magnitude between minimum and maximum values; the range of dissolved zinc concentrations in the NROK study area exceeded three orders of magnitude.
Charlton, Bruce G
2007-01-01
The four science Nobel prizes (physics, chemistry, medicine/physiology and economics) have performed extremely well as a method of recognizing the highest level of achievement. The prizes exist primarily to honour individuals but also have a very important function in science generally. In particular, the institutions and nations which have educated, nurtured or supported many Nobel laureates can be identified as elite in world science. However, the limited range of subjects and a maximum of 12 laureates per year mean that many major scientific achievements remain unrecognized; and relatively few universities can gather sufficient Nobel-credits to enable a precise estimate of their different levels of quality. I advocate that the Nobel committee should expand the number of Nobel laureates and Prize categories as a service to world science. (1) There is a large surplus of high quality prize candidates deserving of recognition. (2) There has been a vast expansion of research with a proliferation of major sub-disciplines in the existing categories. (3) Especially, the massive growth of the bio-medical sciences has created a shortage of Nobel recognition in this area. (4) Whole new fields of major science have emerged. I therefore suggest that the maximum of three laureates per year should always be awarded in the categories of physics, chemistry and economics, even when these prizes are for diverse and unrelated achievements; that the number of laureates in the 'biology' category of physiology or medicine should be increased to six or preferably nine per year; and that two new Prize categories should be introduced to recognize achievements in mathematics and computing science. Together, these measures could increase the science laureates from a maximum of 12 to a minimum of 24, and increase the range of scientific coverage.
In future, the Nobel committee should also officially allocate proportionate credit to institutions for each laureate, and a historical task force could also award institutional credit for past prizes.
Assessing the ability of plants to respond to climatic change through distribution shifts
Mark W. Schwartz
1996-01-01
Predictions of future global warming suggest northward shifts of up to 800 km in the equilibrium distributions of plant species. Historical data estimating the maximum rate of tree distribution shifts (migration) suggest that most species will not keep pace with future rates of human-induced climatic change. Previous plant migrations have occurred at rates typically...
1984–2010 trends in fire burn severity and area for the conterminous US
Picotte, Joshua J.; Peterson, Birgit E.; Meier, Gretchen; Howard, Stephen M.
2016-01-01
Burn severity products created by the Monitoring Trends in Burn Severity (MTBS) project were used to analyse historical trends in burn severity. Using a severity metric calculated by modelling the cumulative distribution of differenced Normalized Burn Ratio (dNBR) and Relativized dNBR (RdNBR) data, we examined burn area and burn severity of 4893 historical fires (1984–2010) distributed across the conterminous US (CONUS) and mapped by MTBS. Yearly mean burn severity values (weighted by area), maximum burn severity metric values, mean area of burn, maximum burn area and total burn area were evaluated within 27 US National Vegetation Classification macrogroups. Time series assessments of burned area and severity were performed using Mann–Kendall tests. Burned area and severity varied by vegetation classification, but most vegetation groups showed no detectable change during the 1984–2010 period. Of the 27 analysed vegetation groups, trend analysis revealed burned area increased in eight, and burn severity has increased in seven. This study suggests that burned area and severity, as measured by the severity metric based on dNBR or RdNBR, have not changed substantially for most vegetation groups evaluated within CONUS.
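The two indices named above are simple band arithmetic. The sketch below assumes decimal (unscaled) NBR values and the Miller & Thode (2007) form of RdNBR; MTBS applies additional scaling and thematic classification not shown, and the reflectance values are invented.

```python
import numpy as np

def nbr(nir, swir):
    """Normalized Burn Ratio from near-infrared and shortwave-infrared reflectance."""
    return (nir - swir) / (nir + swir)

def dnbr(pre, post):
    """Differenced NBR: pre-fire minus post-fire."""
    return pre - post

def rdnbr(pre, post):
    """Relativized dNBR: dNBR scaled by the square root of |pre-fire NBR|,
    reducing the dependence of severity on pre-fire vegetation density."""
    return dnbr(pre, post) / np.sqrt(np.abs(pre))

pre = nbr(nir=0.45, swir=0.15)    # healthy vegetation: high NIR, low SWIR
post = nbr(nir=0.25, swir=0.35)   # burned surface: NIR drops, SWIR rises
print(round(dnbr(pre, post), 3), round(rdnbr(pre, post), 3))  # prints: 0.667 0.943
```

Higher dNBR/RdNBR values correspond to more severe burns; the MTBS severity metric discussed in the abstract is derived from the cumulative distribution of these quantities.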
Hough, Susan E.
2013-01-01
The occurrence of three earthquakes with moment magnitude (Mw) greater than 8.8 and six earthquakes larger than Mw 8.5, since 2004, has raised interest in the long-term global rate of great earthquakes. Past studies have focused on the analysis of earthquakes since 1900, which roughly marks the start of the instrumental era in seismology. Before this time, the catalog is less complete and magnitude estimates are more uncertain. Yet substantial information is available for earthquakes before 1900, and the catalog of historical events is being used increasingly to improve hazard assessment. Here I consider the catalog of historical earthquakes and show that approximately half of all Mw ≥ 8.5 earthquakes are likely missing or underestimated in the 19th century. I further present a reconsideration of the felt effects of the 8 February 1843, Lesser Antilles earthquake, including a first thorough assessment of felt reports from the United States, and show it is an example of a known historical earthquake that was significantly larger than initially estimated. The results suggest that incorporation of best available catalogs of historical earthquakes will likely lead to a significant underestimation of seismic hazard and/or the maximum possible magnitude in many regions, including parts of the Caribbean.
Evaluating wind extremes in CMIP5 climate models
NASA Astrophysics Data System (ADS)
Kumar, Devashish; Mishra, Vimal; Ganguly, Auroop R.
2015-07-01
Wind extremes have consequences for renewable energy sectors, critical infrastructures, coastal ecosystems, and the insurance industry. Considerable debate remains regarding the impacts of climate change on wind extremes. While climate models have occasionally shown increases in regional wind extremes, a decline in the magnitude of mean and extreme near-surface wind speeds has recently been reported over most regions of the Northern Hemisphere using observed data. Previous studies of wind extremes under climate change have focused on selected regions and employed outputs from regional climate models (RCMs). However, RCMs ultimately rely on the outputs of global circulation models (GCMs), and the value-addition of the former over the latter has been questioned. Regional model runs rarely employ the full suite of GCM ensembles, and hence may not encapsulate the most likely projections or their variability. Here we evaluate the performance of the latest generation of GCMs, the Coupled Model Intercomparison Project phase 5 (CMIP5), in simulating extreme winds. We find that the multimodel ensemble (MME) mean captures the spatial variability of annual maximum wind speeds over most regions except mountainous terrain. However, the historical temporal trends in annual maximum wind speeds in the reanalysis data, ERA-Interim, are not well represented in the GCMs. The historical trends in extreme winds from GCMs are statistically insignificant over most regions. The MME mean reproduces the spatial patterns of extreme winds for 25-100-year return periods. The projected extreme winds from GCMs exhibit statistically less significant trends compared to the historical reference period.
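As a minimal illustration of working with annual maximum wind speeds, the sketch below extracts annual maxima from a synthetic daily series and assigns empirical return periods via the Weibull plotting position T = (n + 1)/rank. It is a generic illustration, not the CMIP5 analysis itself; the daily-speed model is invented.

```python
import numpy as np

rng = np.random.default_rng(42)
years = 50
# Invented daily wind speeds (m/s): Weibull-shaped, a common rough model
daily = rng.weibull(2.0, size=(years, 365)) * 8.0
annual_max = daily.max(axis=1)                 # block (annual) maxima

order = np.argsort(annual_max)[::-1]           # indices, largest value first
ranks = np.empty(years, dtype=int)
ranks[order] = np.arange(1, years + 1)         # rank 1 = largest maximum
return_period = (years + 1) / ranks            # Weibull plotting position, years

# The largest observed annual maximum gets T = (n + 1) / 1 = 51 years
print(annual_max.max(), return_period.max())
```

Parametric fits (e.g. GEV) extend such empirical estimates to return periods beyond the record length, which is how 25-100-year levels are obtained from finite model runs.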
Earthquakes Versus Surface Deformation: Qualitative and Quantitative Relationships From The Aegean
NASA Astrophysics Data System (ADS)
Pavlides, S.; Caputo, R.
Historical seismicity of the Aegean Region has been revised in order to associate major earthquakes with specific seismogenic structures. Only earthquakes associated with normal faulting have been considered. All available historical and seismotectonic data relative to co-seismic surface faulting have been collected in order to evaluate the surface rupture length (SRL) and the maximum displacement (MD). In order to perform seismic hazard analyses, empirical relationships between these parameters and the magnitude have been inferred and the best-fitting regression functions have been calculated. Both co-seismic fault rupture lengths and maximum displacements show logarithmic relationships, but our data from the Aegean Region have systematically lower values than the same parameters world-wide, though they are similar to those of the Eastern Mediterranean-Middle East region. The upper envelopes of our diagrams (SRL vs Mw and MD vs Mw) have also been estimated and discussed, because they give useful information on worst-case scenarios. Furthermore, geological and morphological criteria have been used to recognise the tectonic structures along which historical earthquakes occurred in order to define the geological fault length (GFL). Accordingly, the SRL/GFL ratio seems to have a bimodal distribution, with a major peak at about 0.8-1.0, indicating that several earthquakes break through almost the entire geological fault length, and a second peak around 0.5, related to the possible segmentation of these major neotectonic faults. In contrast, no relationship can be discerned between the SRL/GFL ratio and the magnitude of the corresponding events.
Tropical Africa: Land use, biomass, and carbon estimates for 1980
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brown, S.; Gaston, G.; Daniels, R.C.
1996-06-01
This document describes the contents of a digital database containing maximum potential aboveground biomass, land use, and estimated biomass and carbon data for 1980 and describes a methodology that may be used to extend this data set to 1990 and beyond based on population and land cover data. The biomass data and carbon estimates are for woody vegetation in Tropical Africa. These data were collected to reduce the uncertainty associated with the possible magnitude of historical releases of carbon from land use change. Tropical Africa is defined here as encompassing 22.7 x 10^6 km^2 of the earth's land surface and includes those countries that for the most part are located in Tropical Africa. Countries bordering the Mediterranean Sea and in southern Africa (i.e., Egypt, Libya, Tunisia, Algeria, Morocco, South Africa, Lesotho, Swaziland, and Western Sahara) have maximum potential biomass and land cover information but do not have biomass or carbon estimates. The database was developed using the GRID module in the ARC/INFO geographic information system. Source data were obtained from the Food and Agriculture Organization (FAO), the U.S. National Geophysical Data Center, and a limited number of biomass-carbon density case studies. These data were used to derive the maximum potential and actual (ca. 1980) aboveground biomass-carbon values at regional and country levels. The land-use data provided were derived from a vegetation map originally produced for the FAO by the International Institute of Vegetation Mapping, Toulouse, France.
NASA Astrophysics Data System (ADS)
Amores, Angel; Melnichenko, Oleg; Maximenko, Nikolai
2017-01-01
The mean vertical structure and transport properties of mesoscale eddies are investigated in the North Atlantic subtropical gyre by combining historical records of Argo temperature/salinity profiles and satellite sea level anomaly data in the framework of the eddy tracking technique. The study area is characterized by low eddy kinetic energy and a sea surface salinity maximum. Although eddies have a relatively weak signal at the surface (amplitudes around 3-7 cm), the eddy composites reveal a clear deep signal that penetrates down to at least 1200 m depth. The analysis also reveals that the vertical structure of the eddy composites is strongly affected by the background stratification. The horizontal patterns of temperature/salinity anomalies can be reconstructed by a linear combination of a monopole, related to the elevation/depression of the isopycnals in the eddy core, and a dipole, associated with the horizontal advection of the background gradient by the eddy rotation. A common feature of all the eddy composites reconstructed is the phase coherence between the eddy temperature/salinity and velocity anomalies in the upper ~300 m layer, resulting in transient eddy transports of heat and salt. As an application, a box model of the near-surface layer is used to estimate the role of mesoscale eddies in maintaining a quasi-steady state distribution of salinity in the North Atlantic subtropical salinity maximum. The results show that mesoscale eddies are able to provide between 4 and 21% of the salt flux out of the area required to compensate for the local excess of evaporation over precipitation.
Reconstructing the spatial pattern of historical forest land in China in the past 300 years
NASA Astrophysics Data System (ADS)
Yang, Xuhong; Jin, Xiaobin; Xiang, Xiaomin; Fan, Yeting; Shan, Wei; Zhou, Yinkang
2018-06-01
The reconstruction of the historical spatial distribution of forest land is of great significance for understanding land surface cover in historical periods as well as its climatic and ecological effects. Based on the maximum extent of historical forest land before human intervention and the characteristics of human behaviour in farmland reclamation and in deforestation for heating and timber, we create a spatial evolution model to reconstruct the spatial pattern of historical forest land. The model integrates the land suitability for reclamation, the difficulty of deforestation, the attractiveness of timber trading markets and the abundance of forest resources to calibrate the potential extent of historical forest land, on the rationale that the higher the probability of deforestation for reclamation and wood, the greater the likelihood that the forest land will be deforested. Compared to the satellite-based forest land distribution in 2000, about 78.5% of our reconstructed historical forest grids have absolute errors between -25% and 25%, and as many as 95.85% have absolute errors between -50% and 50%, which indirectly validates the reconstruction model. We then simulate the spatial distribution of forest land in China in 1661, 1724, 1820, 1887, 1933 and 1952 at a grid resolution of 1 km × 1 km. Our results show that (1) the reconstructed historical forest land in China in the past 300 years is concentrated in DaXingAnLing, XiaoXingAnLing, ChangBaiShan, HengDuanShan, DaBaShan, WuYiShan, DaBieShan, XueFengShan, etc.; (2) in terms of spatial evolution, historical forest land shrank gradually in the LiaoHe plains, SongHuaJiang-NenJiang plains and SanJiang plains of northeast China, the Sichuan basins and the YunNan-GuiZhou Plateau; and (3) these observations are consistent with the progress of agricultural reclamation in China over the past 300 years towards Northeast China and Southwest China.
36 CFR 800.4 - Identification of historic properties.
Code of Federal Regulations, 2010 CFR
2010-07-01
... the steps necessary to identify historic properties within the area of potential effects. (1) Level of..., research and studies, the magnitude and nature of the undertaking and the degree of Federal involvement... research, consultation and an appropriate level of field investigation, taking into account the number of...
A historical perspective of Global Warming Potential from Municipal Solid Waste Management.
Habib, Komal; Schmidt, Jannick H; Christensen, Per
2013-09-01
The Municipal Solid Waste Management (MSWM) sector has developed considerably during the past century, paving the way for maximum resource (materials and energy) recovery and minimising associated environmental impacts such as global warming. The current study assesses the historical development of MSWM in the municipality of Aalborg, Denmark over the period 1970 to 2010, and its implications for Global Warming Potential (GWP100), using the Life Cycle Assessment (LCA) approach. Historical data on MSW composition and different treatment technologies such as incineration, recycling and composting were used to perform the analysis. The LCA results show a continuous improvement in the environmental performance of MSWM from 1970 to 2010, mainly due to changes in treatment options, improved efficiency of various treatment technologies and an increasing focus on recycling, resulting in a shift from a net emission of 618 kg CO2-eq. per tonne to a net saving of 670 kg CO2-eq. per tonne of MSW managed. Copyright © 2013 Elsevier Ltd. All rights reserved.
Stochastic differential equation (SDE) model of opening gold share price of Bursa Saham Malaysia
NASA Astrophysics Data System (ADS)
Hussin, F. N.; Rahman, H. A.; Bahar, A.
2017-09-01
The Black-Scholes option pricing model is one of the most recognized stochastic differential equation models in mathematical finance. Two parameter estimation methods have been utilized for the Geometric Brownian Motion (GBM) model: the historical method and the discrete method. The historical method is a statistical approach that uses the independence and normality of logarithmic returns, giving the simplest parameter estimates. The discrete method, in contrast, uses the transition density function of the lognormal diffusion process, with parameters derived by maximum likelihood. These two methods are used to estimate parameters from Malaysian gold share price data: the Financial Times and Stock Exchange (FTSE) Bursa Malaysia Emas and FTSE Bursa Malaysia Emas Shariah indices. Modelling of gold share prices is essential since fluctuations in gold affect the worldwide economy, including Malaysia's. It is found that the discrete method gives better parameter estimates than the historical method, owing to its smaller Root Mean Square Error (RMSE).
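The historical estimator described above is straightforward to sketch: mu and sigma are recovered from the sample mean and standard deviation of logarithmic returns. The series below is simulated, not the FTSE Bursa Malaysia data, so this only illustrates the estimator's mechanics.

```python
import numpy as np

rng = np.random.default_rng(1)
dt = 1 / 252                          # daily step, in years
mu_true, sigma_true = 0.08, 0.2
n = 5000

# Simulate a GBM path: S_{t+1} = S_t * exp((mu - sigma^2/2) dt + sigma sqrt(dt) Z)
z = rng.standard_normal(n)
log_ret = (mu_true - 0.5 * sigma_true**2) * dt + sigma_true * np.sqrt(dt) * z
prices = 100 * np.exp(np.cumsum(log_ret))

# Historical method: moments of log returns recovered from the price series
r = np.diff(np.log(prices))
sigma_hat = r.std(ddof=1) / np.sqrt(dt)
mu_hat = r.mean() / dt + 0.5 * sigma_hat**2

print(mu_hat, sigma_hat)              # sigma_hat near 0.2; mu_hat is noisier
```

The drift estimate converges slowly (its standard error shrinks with the total time span, not the number of observations), which is one reason the likelihood-based discrete method can perform better in practice.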
Vicente-Beckett, Victoria A; McCauley, Gaylene J Taylor; Duivenvoorden, Leo J
2016-01-01
Acid-mine drainage (AMD) into the Dee River from the historic gold and copper mine in Mount Morgan, Queensland (Australia) has been of concern to farmers in the area since 1925. This study sought to determine the levels of AMD-related metals and sulfur in agricultural produce grown near the mine-impacted Dee River, compare these with similar produce grown in reference fields (which had no known AMD influence), and assess any potential health risk using relevant Australian or US guidelines. Analyses of lucerne (Medicago sativa; also known as alfalfa) from five Dee fields showed the following average concentrations (mg/kg dry basis): Cd < 1, Cu 11, Fe 106, Mn 52, Pb < 5, Zn 25 and S 3934; similar levels were found in lucerne hay (used as cattle feed) from two Dee fields. All lucerne and lucerne hay data were generally comparable with levels found in the lucerne reference fields, suggesting no AMD influence; the levels were within the US National Research Council (US NRC) guidelines for maximum tolerable cattle dietary intake. Pasture grass (also cattle feed) from two fields in the Dee River floodplains gave mean concentrations (mg/kg dry) of Cd 0.14, Cu 12, Fe 313, Mn 111, Pb 1.4, Zn 86 and S 2450. All metal levels from the Dee and from reference sites were below the US NRC guidelines for maximum tolerable cattle dietary intake; however, the average Cd, Cu and Fe levels in Dee samples were significantly greater than the corresponding levels in the pasture grass reference sites, suggesting AMD influence in the Dee samples. The average levels in the edible portions of mandarin oranges (Citrus reticulata) from Dee sites (mg/kg wet weight) were Cd 0.011, Cu 0.59, Fe 2.2, Mn 0.56, Pb 0.18, S 91 and Zn 0.96. Cd and Zn were less than or close to, average Fe and Mn levels were at most twice, Cd 1.8 or 6.5 times, and Pb 8.5 or 72 times the maximum levels in raw oranges reported in the US total diet study (TDS) or the Australian TDS, respectively. 
Average Cd, Fe, Mn, Pb and Zn levels in the citrus reference samples were found to exceed the maximum reported in one or both TDS surveys. Cu, Fe, Mn, Pb and Zn plant-soil transfer factor (TF) values were < 1 for all agricultural samples from both Dee and reference sites, suggesting relatively poor transfer of these metals from soil to plant. In the case of Cd, TF values for Dee pasture grass and citrus fruit samples were 0.14 and 0.73, respectively; lucerne and lucerne hay from both Dee and reference sites gave TF = 10, suggesting some potential risk to cattle, although this conclusion is tentative because Cd levels were close to or less than the detection limit. TF values for S in lucerne, lucerne hay, pasture grass and mandarin oranges from Dee sites were 18, 14, 3 and 3.6, respectively, indicating that S in soil was readily available to plant or fruit. Sulfur in pasture grass and citrus fruit (TF = 11 for both) was apparently more bioavailable at the reference sites than at the Dee sites (TF = 3.0 for pasture grass; TF = 3.6 for citrus fruit).
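The transfer factor used throughout the abstract above is a simple ratio of plant concentration to soil concentration (dry weight). In the sketch below, the plant value is the reported lucerne sulfur level, while the soil concentration is an invented figure chosen only to reproduce a TF of about 18; the study's actual soil data are not given here.

```python
def transfer_factor(plant_mg_per_kg, soil_mg_per_kg):
    """Plant-soil transfer factor: TF = concentration in plant (dry wt.)
    divided by concentration in soil. TF > 1 suggests ready uptake."""
    return plant_mg_per_kg / soil_mg_per_kg

# Lucerne S at 3934 mg/kg (reported) over an assumed soil S of 219 mg/kg
print(round(transfer_factor(3934, 219), 1))  # prints: 18.0
```

The same ratio underlies the metal TF values (< 1) quoted for Cu, Fe, Mn, Pb and Zn, indicating comparatively poor soil-to-plant transfer of those elements.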
Gamiño-Gutiérrez, Sandra P; González-Pérez, C Ivonne; Gonsebatt, María E; Monroy-Fernández, Marcos G
2013-02-01
Environmental geochemical and health studies were carried out in urban areas of Villa de la Paz, S.L.P. (Mexico), where mining activities have been conducted for more than 200 years, leading to the pollution of surface soil by arsenic and heavy metals (Pb, Cd, Cu, Zn). Analysis of urban soils for total and bioaccessible concentrations of As and Pb demonstrated a combined contribution of natural and anthropogenic concentrations at the site, at levels higher than environmental guideline values, posing a human health risk. Contour soil mapping confirmed that historical mine waste deposits without environmental control measures are the main source of soil pollution by As and Pb at the site. Exposure (Pb in blood and As in urine) and effect (micronucleated exfoliated cells assay) biological monitoring were then carried out in the child population at the site and at a control site. The exposure biomonitoring demonstrated that at least 20-30% of children presented Pb and As exposure values higher than national and international maximum intervention values. The effect biomonitoring by MEC assay confirmed genotoxic damage in the local child population that could be associated with arsenic exposure at the site.
Huang, Jingyu; Amuzu-Sefordzi, Basil; Li, Ming
2015-05-01
The Pearl River Delta is one of the biggest electronics manufacturing regions in the world. Given the presence of abandoned industrial sites and the proliferation of large-scale electronics companies over the past four decades, it is imperative to investigate the extent of heavy metal and polychlorinated biphenyl (PCB) contamination in the region. The spatial and temporal distributions of heavy metals (Cr, Cu, Ni, Pb, and Zn) and PCBs (PCB28, PCB52, PCB101, PCB118, PCB138, PCB153, and PCB180) in the Lianhua Mountain reservoir in the Pearl River Delta, Dongguan City, China were examined based on a sedimentary profile analysis. Higher concentrations of the detected heavy metals were recorded in bottom sediments, whereas 70% of the detected PCBs reached maximum concentrations in top sediments. The geo-accumulation indices (Igeo) indicate that the study area is uncontaminated to moderately contaminated. Also, the integrated pollution indices (IPI) were above 1, except for Pb, showing that the study area is contaminated with heavy metals from anthropogenic sources. The concentrations of individual heavy metals and PCBs over a period of 60 years were also analyzed to establish a historical trend of pollution in the study area. This study provides baseline information on the levels and historical trends of heavy metal and PCB pollution in the study area.
Rader, B.R.; Nimmo, D.W.R.; Chapman, P.L.
1997-01-01
Concentrations of metals in sediments and soils deposited along the floodplain of the Clark Fork River, within the Grant-Kohrs Ranch National Historic Site, Deer Lodge, Montana, USA, have exceeded maximum background concentrations in the United States for most metals tested. As a result of mining and smelting activities, portions of the Deer Lodge Valley, including the Grant-Kohrs Ranch, have received National Priority List Designation under the Comprehensive Environmental Response, Compensation and Liability Act. Using a series of plant germination tests, pH measurements, and metal analyses, this study investigated the toxicity of soils from floodplain 'slicken' areas, bare spots devoid of vegetation, along the Clark Fork River. The slicken soils collected from the Grant-Kohrs Ranch were toxic to all four plant species tested. The most sensitive endpoint in the germination tests was root length and the least sensitive was emergence. Considering emergence, the most sensitive species was the resident grass species Agrostis gigantea. The sensitivities were reversed when root lengths were examined, with Echinochloa crusgalli showing the greatest sensitivity. Both elevated concentrations of metals and low pH were necessary to produce an acutely phytotoxic response in laboratory seed germination tests using slicken soils. Moreover, pH values on the Grant-Kohrs Ranch appear to be a better predictor of acutely phytotoxic conditions than total metal levels.
Robalo, Joana I.; Pereira, Ana M.; Branco, Paulo; Santos, José Maria; Ferreira, Maria Teresa; Sousa, Mónica; Doadrio, Ignacio
2016-01-01
Background. Worldwide predictions suggest that up to 75% of the freshwater fish species occurring in rivers with reduced discharge could be extinct by 2070 due to the combined effect of climate change and water abstraction. The Mediterranean region is considered to be a hotspot of freshwater fish diversity but also one of the regions where the effects of climate change will be more severe. Iberian cyprinids are currently highly endangered, with over 68% of the species raising some level of conservation concern. Methods. During the FISHATLAS project, the Portuguese hydrographical network was extensively covered (all the 34 river basins and 47 sub-basins) in order to contribute with valuable data on the genetic diversity distribution patterns of native cyprinid species. A total of 188 populations belonging to 16 cyprinid species of Squalius, Luciobarbus, Achondrostoma, Iberochondrostoma, Anaecypris and Pseudochondrostoma were characterized, for a total of 3,678 cytochrome b gene sequences. Results. When the genetic diversity of these populations was mapped, it highlighted differences among populations from the same species and between species with identical distribution areas. Factors shaping the contemporary patterns of genetic diversity were explored and the results revealed the role of latitude, inter-basin connectivity, migratory behaviour, species maximum size, species range and other species intrinsic traits in determining the genetic diversity of sampled populations. Contrastingly, drainage area and hydrological regime (permanent vs. temporary) seem to have no significant effect on genetic diversity. Species intrinsic traits, maximum size attained, inter-basin connectivity and latitude explained over 30% of the haplotype diversity variance and, generally, the levels of diversity were significantly higher for smaller sized species, from connected and southerly river basins. Discussion. 
Targeting multiple co-distributed species of primary freshwater fish allowed us to assess the relative role of historical versus contemporary factors affecting genetic diversity. Since different patterns were detected for species with identical distribution areas we postulate that contemporary determinants of genetic diversity (species’ intrinsic traits and landscape features) must have played a more significant role than historical factors. Implications for conservation in a context of climate change and highly disturbed habitats are detailed, namely the need to focus management and conservation actions on intraspecific genetic data and to frequently conduct combined genetic and demographic surveys. PMID:26966653
Gridded climate data from 5 GCMs of the Last Glacial Maximum downscaled to 30 arc s for Europe
NASA Astrophysics Data System (ADS)
Schmatz, D. R.; Luterbacher, J.; Zimmermann, N. E.; Pearman, P. B.
2015-06-01
Studies of the impacts of historical, current and future global change require very high-resolution climate data (≤ 1 km) as a basis for modelled responses, meaning that data from digital climate models generally require substantial rescaling. Another shortcoming of available datasets on past climate is that the effects of sea level rise and fall are not considered. Without such information, the study of glacial refugia or early Holocene plant and animal migration are incomplete if not impossible. Sea level at the last glacial maximum (LGM) was approximately 125 m lower, creating substantial additional terrestrial area for which no current baseline data exist. Here, we introduce the development of a novel, gridded climate dataset for LGM that is both very high resolution (1 km) and extends to the LGM sea and land mask. We developed two methods to extend current terrestrial precipitation and temperature data to areas between the current and LGM coastlines. The absolute interpolation error is less than 1 and 0.5 °C for 98.9 and 87.8 %, respectively, of all pixels within two arc degrees of the current coastline. We use the change factor method with these newly assembled baseline data to downscale five global circulation models of LGM climate to a resolution of 1 km for Europe. As additional variables we calculate 19 "bioclimatic" variables, which are often used in climate change impact studies on biological diversity. The new LGM climate maps are well suited for analysing refugia and migration during Holocene warming following the LGM.
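The change factor (delta) method named above can be sketched in a few lines. This sketch assumes an additive anomaly for temperature and a multiplicative one for precipitation (the usual conventions), and glosses over the interpolation of the coarse GCM anomaly onto the 1 km grid; all variable names and values are illustrative, not from the dataset:

```python
def delta_downscale_temperature(baseline, gcm_paleo, gcm_modern):
    """Additive change factor: high-res LGM temperature =
    high-res modern baseline + (coarse GCM LGM - coarse GCM modern).
    All inputs are assumed already co-registered on the fine grid."""
    return [b + (p - m) for b, p, m in zip(baseline, gcm_paleo, gcm_modern)]

def delta_downscale_precipitation(baseline, gcm_paleo, gcm_modern):
    """Multiplicative change factor, the usual choice for precipitation
    so downscaled values stay non-negative."""
    return [b * (p / m) for b, p, m in zip(baseline, gcm_paleo, gcm_modern)]

# Tiny illustrative grid (3 cells): modern baseline 8, 9, 10 degrees C;
# the GCM says the LGM was 12 degrees colder everywhere.
lgm_temp = delta_downscale_temperature([8.0, 9.0, 10.0],
                                       [-5.0, -4.0, -3.0],
                                       [7.0, 8.0, 9.0])
print(lgm_temp)  # prints [-4.0, -3.0, -2.0]
```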
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brachman, David G., E-mail: david.brachman@dignityhealth.org; Barrow Neurological Institute, St. Joseph's Hospital and Medical Center, Phoenix, Arizona; Pugh, Stephanie L.
Purpose: The purpose of phase 1 was to determine the maximum tolerated dose (MTD) of motexafin gadolinium (MGd) given concurrently with temozolomide (TMZ) and radiation therapy (RT) in patients with newly diagnosed supratentorial glioblastoma multiforme (GBM). Phase 2 determined whether this combination improved overall survival (OS) and progression-free survival (PFS) in GBM recursive partitioning analysis class III to V patients compared to therapies for recently published historical controls. Methods and Materials: Dose escalation in phase 1 progressed through 3 cohorts until 2 of 6 patients experienced dose-limiting toxicity or a dose of 5 mg/kg was reached. Once the MTD was established, a 1-sided 1-sample log-rank test at a significance level of .1 had 85% power to detect a median survival difference (13.69 vs 18.48 months) with 60 deaths over a 12-month accrual period and an additional 18 months of follow-up. OS and PFS were estimated using the Kaplan-Meier method. Results: In phase 1, 24 patients were enrolled. The MTD established was 5 mg/kg, given intravenously 5 days a week for the first 10 RT fractions, then 3 times a week for the duration of RT. The 7 patients enrolled in the third dose level and the 94 enrolled in phase 2 received this dose. Of these 101 patients, 87 were eligible and evaluable. Median survival time was 15.6 months (95% confidence interval [CI]: 12.9-17.6 months), not significantly different from that of the historical control (P=.36). Median PFS was 7.6 months (95% CI: 5.7-9.6 months). One patient (1%) experienced a grade 5 adverse event possibly related to therapy during the concurrent phase, and none experienced toxicity during adjuvant TMZ therapy. Conclusions: Treatment was well tolerated, but median OS did not reach the improvement specified by the protocol compared to the historical control, indicating that the combination of standard RT with TMZ and MGd did not achieve a significant survival advantage.
Atlantic Bluefin Tuna: A Novel Multistock Spatial Model for Assessing Population Biomass
Taylor, Nathan G.; McAllister, Murdoch K.; Lawson, Gareth L.; Carruthers, Tom; Block, Barbara A.
2011-01-01
Atlantic bluefin tuna (Thunnus thynnus) is considered to be overfished, but the status of its populations has been debated, partly because of uncertainties regarding the effects of mixing on fishing grounds. A better understanding of spatial structure and mixing may help fisheries managers to successfully rebuild populations to sustainable levels while maximizing catches. We formulate a new seasonally and spatially explicit fisheries model that is fitted to conventional and electronic tag data, historic catch-at-age reconstructions, and otolith microchemistry stock-composition data to improve the capacity to assess past, current, and future population sizes of Atlantic bluefin tuna. We apply the model to estimate spatial and temporal mixing of the eastern (Mediterranean) and western (Gulf of Mexico) populations, and to reconstruct abundances from 1950 to 2008. We show that western and eastern populations have been reduced to 17% and 33%, respectively, of 1950 spawning stock biomass levels. Overfishing to below the biomass that produces maximum sustainable yield occurred in the 1960s and the late 1990s for western and eastern populations, respectively. The model predicts that mixing depends on season, ontogeny, and location, and is highest in the western Atlantic. Assuming that future catches are zero, western and eastern populations are predicted to recover to levels at maximum sustainable yield by 2025 and 2015, respectively. However, the western population will not recover with catches of 1750 and 12,900 tonnes (the “rebuilding quotas”) in the western and eastern Atlantic, respectively, with or without closures in the Gulf of Mexico. If future catches are double the rebuilding quotas, then rebuilding of both populations will be compromised. If fishing were to continue in the eastern Atlantic at the unregulated levels of 2007, both stocks would continue to decline. 
Since populations mix on North Atlantic foraging grounds, successful rebuilding policies will benefit from trans-Atlantic cooperation. PMID:22174745
Flood frequency analysis - the challenge of using historical data
NASA Astrophysics Data System (ADS)
Engeland, Kolbjorn
2015-04-01
Estimates of high flood quantiles are needed for many applications; e.g., dam safety assessments are based on the 1000-year flood, whereas the dimensioning of important infrastructure requires estimates of the 200-year flood. The flood quantiles are estimated by fitting a parametric distribution to a dataset of high flows comprising either annual maximum values or peaks over a selected threshold. Since the record length of the data is short compared to the desired flood quantile, the estimated flood magnitudes involve a high degree of extrapolation. For example, the longest time series available in Norway are around 120 years, so any estimation of a 1000-year flood will require extrapolation. One solution is to extend the temporal dimension of a data series by including information about historical floods that occurred before the streamflow was systematically gauged. Such information could be flood marks or written documentation about flood events. The aim of this study was to evaluate the added value of using historical flood data for at-site flood frequency estimation. The historical floods were included in two ways, by assuming that: (1) the size of (all) floods above a high threshold within a time interval is known; and (2) the number of floods above a high threshold for a time interval is known. We used a Bayesian model formulation, with MCMC used for model estimation. This estimation procedure allowed us to estimate the predictive uncertainty of flood quantiles (i.e., both sampling and parameter uncertainty are accounted for). We tested the methods using 123 years of systematic data from Bulken in western Norway. In 2014 the largest flood in the systematic record was observed. From written documentation and flood marks we had information on three severe floods in the 18th century that were likely to exceed the 2014 flood. We evaluated the added value in two ways.
First, we used the 123-year-long streamflow time series and investigated the effect of having several shorter series that could be supplemented with a limited number of known large flood events. Then we used the three historical floods from the 18th century combined with the whole of, and subsets of, the 123 years of systematic observations. In the latter case several challenges were identified: (i) the difficulty of converting historical water levels to river streamflows, owing to man-made changes in the river profile; (ii) the stationarity of the data might be questioned, since the three largest historical floods occurred during the "Little Ice Age", with climatic conditions different from today's.
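The second way of including historical information (a known count of threshold exceedances) enters the likelihood as a binomial term alongside the density of the systematic annual maxima. The sketch below uses a Gumbel distribution purely for illustration (the abstract does not name the fitted distribution) and omits the MCMC step; all parameter values and data are hypothetical:

```python
import math

def gumbel_logpdf(x, mu, beta):
    """Log-density of the Gumbel distribution (location mu, scale beta)."""
    z = (x - mu) / beta
    return -math.log(beta) - z - math.exp(-z)

def gumbel_logcdf(x, mu, beta):
    """Log of the Gumbel CDF: log F(x) = -exp(-(x - mu)/beta)."""
    return -math.exp(-(x - mu) / beta)

def log_likelihood(systematic, threshold, n_exceed, hist_years, mu, beta):
    """Systematic annual maxima contribute their full density; the
    historical period contributes only a count of threshold exceedances,
    entering as a binomial term (case 2 in the text)."""
    ll = sum(gumbel_logpdf(x, mu, beta) for x in systematic)
    p = 1.0 - math.exp(gumbel_logcdf(threshold, mu, beta))  # P(annual max > threshold)
    ll += n_exceed * math.log(p)
    ll += (hist_years - n_exceed) * math.log(1.0 - p)
    return ll

# Hypothetical values: 3 known exceedances of a 150 m^3/s threshold in
# 80 historical years, plus a short systematic record.
ll = log_likelihood([90.0, 110.0, 105.0], 150.0, 3, 80, 100.0, 20.0)
```

In a Bayesian setting this log-likelihood would be combined with priors on mu and beta and sampled with MCMC to obtain the predictive distribution of flood quantiles.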
Dudley, Robert W.; Schalk, Charles W.; Stasulis, Nicholas W.; Trial, Joan G.
2011-01-01
In 2009, the U.S. Geological Survey entered into a cooperative agreement with the International Joint Commission, St. Croix River Board to do an analysis of historical smallmouth bass habitat as a function of lake level for Spednic Lake in an effort to quantify the effects, if any, of historical lake-level management and meteorological conditions (from 1970 to 2009) on smallmouth bass year-class failure. The analysis requires estimating habitat availability as a function of lake level during spawning periods from 1970 to 2009, which is documented in this report. Field work was done from October 19 to 23, and from November 2 to 10, 2009, to acquire acoustic bathymetric (depth) data and acoustic data indicating the character of the surficial lake-bottom sediments. Historical lake-level data during smallmouth bass spawning (May-June) were applied to the bathymetric and surficial-sediment type data sets to produce annual historic estimates of smallmouth-bass-spawning-habitat area. Results show that minimum lake level during the spawning period explained most of the variability (R2 = 0.89) in available spawning habitat for nearshore areas of shallow slope (less than 10 degrees) on the basis of linear correlation. The change in lake level during the spawning period explained most of the variability (R2 = 0.90) in available spawning habitat for areas of steeper slopes (10 to 40 degrees) on the basis of linear correlation. The next step in modeling historic smallmouth bass year-class persistence is to combine this analysis of the effects of lake-level management on habitat availability with meteorological conditions.
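The R² values quoted above (0.89 and 0.90) come from simple linear correlation, which can be reproduced for any pair of series with a few lines of arithmetic. The lake levels and habitat areas below are made-up numbers for illustration, not data from the Spednic Lake survey:

```python
def r_squared(x, y):
    """Coefficient of determination of the simple linear fit y ~ a + b*x,
    computed as the squared Pearson correlation."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    syy = sum((yi - my) ** 2 for yi in y)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    return sxy * sxy / (sxx * syy)

# Hypothetical minimum lake levels (m) vs. spawning-habitat area (ha)
levels = [1.0, 1.2, 1.5, 1.8, 2.0]
habitat = [40.0, 50.0, 66.0, 80.0, 88.0]
print(round(r_squared(levels, habitat), 3))
```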
Tillman, Fred D.; Gangopadhyay, Subhrendu; Pruitt, Tom
2017-01-01
In evaluating potential impacts of climate change on water resources, water managers seek to understand how future conditions may differ from the recent past. Studies of climate impacts on groundwater recharge often compare simulated recharge from future and historical time periods on an average monthly or overall average annual basis, or compare average recharge from future decades to that from a single recent decade. Baseline historical recharge estimates, which are compared with future conditions, are often from simulations using observed historical climate data. Comparison of average monthly results, average annual results, or even averaging over selected historical decades, may mask the true variability in historical results and lead to misinterpretation of future conditions. Comparison of future recharge results simulated using general circulation model (GCM) climate data to recharge results simulated using actual historical climate data may also result in an incomplete understanding of the likelihood of future changes. In this study, groundwater recharge is estimated in the upper Colorado River basin, USA, using a distributed-parameter soil-water balance groundwater recharge model for the period 1951–2010. Recharge simulations are performed using precipitation, maximum temperature, and minimum temperature data from observed climate data and from 97 CMIP5 (Coupled Model Intercomparison Project, phase 5) projections. Results indicate that average monthly and average annual simulated recharge are similar using observed and GCM climate data. However, 10-year moving-average recharge results show substantial differences between observed and simulated climate data, particularly during period 1970–2000, with much greater variability seen for results using observed climate data.
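The 10-year moving average used above to reveal the variability that monthly and annual means mask is straightforward to compute; this is a generic trailing-window sketch, not code from the study, and the recharge values are hypothetical:

```python
def moving_average(series, window=10):
    """Trailing moving average over `window` samples; positions without a
    full window yet yield None, so the smoothed curve starts once the
    first full window is available."""
    out = []
    for i in range(len(series)):
        if i + 1 < window:
            out.append(None)
        else:
            out.append(sum(series[i + 1 - window : i + 1]) / window)
    return out

# Hypothetical annual recharge values (mm/yr); a 3-year window keeps the
# example short, whereas the study uses a 10-year window.
smoothed = moving_average([12.0, 18.0, 15.0, 21.0], window=3)
print(smoothed)  # prints [None, None, 15.0, 18.0]
```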
Can we use GIS as a historic city's heritage management system? The case study of Hermoupolis-Syros
NASA Astrophysics Data System (ADS)
Chatzigrigoriou, Pavlos
2016-08-01
Because of the severe economic crisis, Greek historic heritage is at risk. Historic cities such as Hermoupolis were dealing with this risk years before the crisis. The current situation demands drastic action with innovative, low-cost ideas. The historic building stock in Hermoupolis counts more than 1,200 buildings. By recording their pathology, the GIS and the DBMS "HERMeS", with appropriate algorithms, identify the historic buildings at risk. In the first application of the system those buildings numbered 160, with a rate of 2.4 historic buildings collapsing every year. Prioritizing interventions in these buildings is critical, as it is not possible to lower the collapse risk in 160 buildings simultaneously, and interventions cannot be judged solely by the reactions of local residents. Given that, under the current economic conditions, one has to make the best use of the funds available for this purpose, the decision requires a multi-criteria analysis method for prioritizing interventions. Specifically, the analysis takes into account the risk of collapse of each building in connection with a series of other variables, such as the building's role in Hermoupolis, its position in the city, its influence on other areas of interest, its social impact, etc. The final result is a catalogue of historic buildings with a point system that reflects the risk of losing each building. The point system leads to a Conservation Plan for the city of Hermoupolis, giving the hierarchy of interventions that must be carried out to save the maximum architectural heritage with the minimum funds, postponing the risk of collapse. In 2015, the EU and EUROPA NOSTRA awarded the above-mentioned project in the category of "Research and Digitization".
DOE Office of Scientific and Technical Information (OSTI.GOV)
Duffey, R.B.; Rohatgi, U.S.
Maximum power limits for hypothetical designs of natural circulation plants can be described analytically. The thermal-hydraulic design parameters are those which limit the flow: the elevations, flow areas, and loss coefficients. We have found some simple "design" equations for the natural circulation flow-to-power ratio, and for the stability limit. Analysis of historical and available data for maximum capacity factor estimation shows 80% to be reasonable and achievable. The least cost is obtained by optimizing both hypothetical plant performance for a given output, and the plant layout and design. There is also scope to increase output and reduce cost by considering design variations of primary and secondary pressure, and by optimizing component elevations and loss coefficients. The design limits for each are set by stability and maximum flow considerations, which deserve close and careful evaluation.
40 CFR 143.3 - Secondary maximum contaminant levels.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 40 Protection of Environment 22 2010-07-01 2010-07-01 false Secondary maximum contaminant levels... levels. The secondary maximum contaminant levels for public water systems are as follows: Contaminant Level Aluminum 0.05 to 0.2 mg/l. Chloride 250 mg/l. Color 15 color units. Copper 1.0 mg/l. Corrosivity...
Thermal effects of dams in the Willamette River basin, Oregon
Rounds, Stewart A.
2010-01-01
Methods were developed to assess the effects of dams on streamflow and water temperature in the Willamette River and its major tributaries. These methods were used to estimate the flows and temperatures that would occur at 14 dam sites in the absence of upstream dams, and river models were applied to simulate downstream flows and temperatures under a no-dams scenario. The dams selected for this study include 13 dams built and operated by the U.S. Army Corps of Engineers (USACE) as part of the Willamette Project, and 1 dam on the Clackamas River owned and operated by Portland General Electric (PGE). Streamflows in the absence of upstream dams for 2001-02 were estimated for USACE sites on the basis of measured releases, changes in reservoir storage, a correction for evaporative losses, and an accounting of flow effects from upstream dams. For the PGE dam, no-project streamflows were derived from a previous modeling effort that was part of a dam-relicensing process. Without-dam streamflows were characterized by higher peak flows in winter and spring and much lower flows in late summer, as compared to with-dam measured flows. Without-dam water temperatures were estimated from measured temperatures upstream of the reservoirs (the USACE sites) or derived from no-project model results (the PGE site). When using upstream data to estimate without-dam temperatures at dam sites, a typical downstream warming rate based on historical data and downstream river models was applied over the distance from the measurement point to the dam site, but only for conditions when the temperature data indicated that warming might be expected. Regressions with measured temperatures from nearby or similar sites were used to extend the without-dam temperature estimates to the entire 2001-02 time period. 
Without-dam temperature estimates were characterized by a more natural seasonal pattern, with a maximum in July or August, in contrast to the measured patterns at many of the tall dam sites where the annual maximum temperature typically occurred in September or October. Without-dam temperatures also tended to have more daily variation than with-dam temperatures. Examination of the without-dam temperature estimates indicated that dam sites could be grouped according to the amount of streamflow derived from high-elevation, spring-fed, and snowmelt-driven areas high in the Cascade Mountains (Cougar, Big Cliff/Detroit, River Mill, and Hills Creek Dams: Group A), as opposed to flow primarily derived from lower-elevation rainfall-driven drainages (Group B). Annual maximum temperatures for Group A ranged from 15 to 20 degrees C, expressed as the 7-day average of the daily maximum (7dADM), whereas annual maximum 7dADM temperatures for Group B ranged from 21 to 25 degrees C. Because summertime stream temperature is at least somewhat dependent on the upstream water source, it was important when estimating without-dam temperatures to use correlations to sites with similar upstream characteristics. For that reason, it also is important to maintain long-term, year-round temperature measurement stations at representative sites in each of the Willamette River basin's physiographic regions. Streamflow and temperature estimates downstream of the major dam sites and throughout the Willamette River were generated using existing CE-QUAL-W2 flow and temperature models. These models, originally developed for the Willamette River water-temperature Total Maximum Daily Load process, required only a few modifications to allow them to run under the greatly reduced without-dam flow conditions. Model scenarios both with and without upstream dams were run.
Results showed that Willamette River streamflow without upstream dams was reduced to levels much closer to historical pre-dam conditions, with annual minimum streamflows approximately one-half or less of dam-augmented levels. Thermal effects of the dams varied according to the time of year, from cooling in mid-summer to warm
The Development of IIA Method and the Application on the 1661 Luermen event
NASA Astrophysics Data System (ADS)
Wu, T. R.; Wu, H.; Hu, S. K.; Tsai, Y. L.
2016-12-01
In 1661, the Chinese navy led by General Koxinga at the end of the Ming Dynasty fought a naval battle against the Netherlands. This battle was not only the first official sea warfare in which China confronted the Western world, but also the only naval battle won by the Chinese Navy so far. The case is significant because it altered the fate of Taiwan until today. One of the critical points that allowed General Koxinga to win the battle was entering Lakjemuyse bay unexpectedly. Luermen bay was and is an extremely shallow bay with a 2.1 m maximum water depth during high tide, which should have made it impossible for a fleet carrying 20,000 marines to cross. Hence, no defense was deployed on the Netherlands' side. However, many historical accounts mention a strange phenomenon that helped the Chinese warships enter Luermen bay: a rise in water level. In this study, we discuss the possible causes that might have raised the water level, e.g., a tsunami, a storm surge, or a high tide, and analyze them based on the knowledge of hydrodynamics. We performed the newly developed Impact Intensity Analysis (IIA) to find the potential tsunami sources, and the COMCOT tsunami model was adopted for the nonlinear scenario simulations, together with high-resolution bathymetry data. Both earthquake and mudslide tsunamis were inspected. In addition, we collected tide and weather information to identify the effects of high tide and storm surge. After this thorough study, a scenario that satisfies most of the descriptions in the historical literature is presented. The results explain the cause of the mysterious event that changed the destiny of Taiwan.
NASA Astrophysics Data System (ADS)
Luo, W.; Zhang, J.; Wu, Q.; Chen, J.; Huo, X.; Zhang, J.; Zhang, Y.; Wang, T.
2017-08-01
In China, historical and cultural heritage resources include historically and culturally famous cities, towns, villages, blocks, immovable cultural relics, and scenic spots with cultural connotations. The spatial distribution of these resources is closely connected to regional physical geography, historical development, and historical traffic geography, and has high research value. Meanwhile, the exhibition and use of these resources are greatly influenced by traffic, tourism, and other plans at the provincial level, so it is of great practical significance to offer proposals on traffic and related matters that benefit the exhibition of heritage resources, based on research into province-wide distribution patterns. This paper takes the spatial analysis capabilities of Geographic Information Systems (GIS) as its basic technological means and all historical and cultural resources in China's Zhejiang Province as its research objects, and identifies the spatial accumulation areas and accumulation belts of Zhejiang Province's historic cities and cultural resources through overlay analysis, density analysis, and related techniques. It then discusses the reasons for the formation of these accumulation areas and belts in combination with analyses of physical and historical geography, and finally, linking the provincial-level tourism and traffic planning, provides suggestions on the exhibition and use of the accumulation areas and belts of historic cities and cultural resources.
Educational preparation of black nurses: a historical perspective.
Carnegie, M Elizabeth
2005-01-01
To know where minority nursing needs to proceed, the minority nursing community must understand where it has been. This historical perspective traces our roots through every level of nursing education. Parallels are drawn between the educational evolution of minority nurses and the historical events occurring in the greater society in the United States.
NASA Astrophysics Data System (ADS)
Dessens, J.; Bücher, A.
In an attempt to contribute to the investigation on a global climate change, a historical series of minimum and maximum temperature data at the Pic du Midi, a mountain observatory at 2862 m a.s.l. in the French Pyrenees, is updated after correction of a systematic deviation due to a relocation of the station in 1971. These data, which now cover the 1882-1984 period, are examined in parallel with humidity and cloud cover data for the same period. From the beginning to the end of this period, observations show that the mean night-time temperature has increased by 2.39° C/100 yr while the mean daytime temperature has decreased by 0.50° C/100 yr. In consequence, the mean annual diurnal temperature range has dropped by 36%/100 yr. The maximum seasonal decrease is 46%/100 yr in spring. Season-to-season and year-to-year inter-relationships between minimum temperature, maximum temperature, relative humidity and cloud cover suggest that the decrease in maximum temperature is related to a concomitant increase of 15%/100 yr in both relative humidity and cloud cover.
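Trends expressed in °C/100 yr, as in the figures above, are simply least-squares slopes scaled by a century. The sketch below shows the computation on a synthetic series, not the Pic du Midi data:

```python
def trend_per_century(years, values):
    """Ordinary least-squares slope of values vs. years, scaled to
    change per 100 years (the unit used in the abstract)."""
    n = len(years)
    my = sum(years) / n
    mv = sum(values) / n
    num = sum((y - my) * (v - mv) for y, v in zip(years, values))
    den = sum((y - my) ** 2 for y in years)
    return 100.0 * num / den

# Synthetic series warming by exactly 0.01 degrees C per year,
# i.e. a trend of 1.0 degrees C / 100 yr.
years = list(range(1900, 1950))
temps = [10.0 + 0.01 * (y - 1900) for y in years]
print(round(trend_per_century(years, temps), 2))  # prints 1.0
```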
Final Work Plan: Phase I Investigation at Bladen, Nebraska
DOE Office of Scientific and Technical Information (OSTI.GOV)
LaFreniere, Lorraine M.; Yan, Eugene
The village of Bladen is a town of approximately 237 residents in the northwest part of Webster County, Nebraska, 30 mi southwest of Hastings and 140 mi southwest of Lincoln, Nebraska. In 2000, the fumigant-related compound carbon tetrachloride was detected in public water supply well PWS 68-1, at a trace level. Low-level contamination, below the maximum contaminant level (MCL) of 5.0 μg/L, has been detected intermittently in well PWS 68-1 since 2000, including in the last sample taken in July 2013. In 2006, the village installed a new well, PWS 2006-1, that remains free of contamination. Because the carbon tetrachloride found in well PWS 68-1 might be linked to historical use of fumigants containing carbon tetrachloride at grain storage facilities, including its former facility in Bladen, the CCC/USDA is proposing an investigation to (1) delineate the source and extent of the carbon tetrachloride contamination potentially associated with its former facility, (2) characterize pathways and controlling factors for contaminant migration in the subsurface, and (3) establish a basis for estimating potential health and environmental risks. The work will be performed in accordance with the Intergovernmental Agreement established between the NDEQ and the Farm Service Agency of the USDA. The site investigation at Bladen will be implemented in phases, so that data collected and interpretations developed during each phase can be evaluated to determine if a subsequent phase of investigation is warranted and, if warranted, to provide effective guidance for the subsequent investigation activities. This Work Plan identifies the specific technical objectives and defines the scope of work proposed for the Phase I investigation by compiling and evaluating historical data. The proposed investigation activities will be performed on behalf of the CCC/USDA by the Environmental Science Division of Argonne National Laboratory.
Argonne is a nonprofit, multidisciplinary research institute operated by UChicago Argonne, LLC, for the U.S. Department of Energy.
NASA Astrophysics Data System (ADS)
Naren, A.; Maity, Rajib
2017-12-01
Sea level rise is one of the manifestations of climate change and may pose a threat to coastal regions. Estimates from global circulation models (GCMs) are either not available at coastal locations, due to their coarse spatial resolution, or not reliable, since interpolated GCM estimates at coastal locations differ significantly from actual observations over the historical period. We propose a semi-empirical framework to model local sea level rise (SLR) using the possibly existing relationship between local SLR and regional atmospheric/oceanic variables. The set of input variables, selected mostly on the basis of the literature, bears the signature of both atmospheric and oceanic variables that may affect SLR. The proposed approach offers a method to extract the combined information hidden in the regional fields of atmospheric/oceanic variables for a specific target coastal location. The generality of the approach allows the inclusion of more variables in the input set, depending on the geographical location of the coastal station. For demonstration, 14 coastal locations along the Indian coast and islands are considered, together with a set of regional atmospheric and oceanic variables. After development and validation of the model at each coastal location with the historical data, the model is used for future projection of local SLR up to the year 2100 for three future emission scenarios represented by representative concentration pathways (RCPs): RCP2.6, RCP4.5, and RCP8.5. The maximum projected SLR is found to vary from 260.65 to 393.16 mm (RCP8.5) by the end of 2100 among the locations considered. The outcome of the proposed approach is expected to be useful in regional coastal management and in developing mitigation strategies in a changing climate.
Flood Forecast Accuracy and Decision Support System Approach: the Venice Case
NASA Astrophysics Data System (ADS)
Canestrelli, A.; Di Donato, M.
2016-02-01
In recent years, numerical weather prediction models have advanced continuously, and all disciplines that make use of weather forecasts have made significant steps forward as a result. In the case of the safeguarding of Venice, a large effort has been made to improve the forecast of tidal levels. In this context, the Istituzione Centro Previsioni e Segnalazioni Maree (ICPSM) of the Venice Municipality has developed and tested many different forecast models, both statistical and deterministic, and has been shown to produce very accurate forecasts. For Venice, the maximum admissible forecast error should ideally be on the order of ten centimeters at 24 hours. The magnitude of the forecast error clearly affects the decision process, which mainly consists of alerting the population, activating the movable barriers installed at the three tidal inlets, and contacting the port authority. This process becomes more challenging whenever the weather predictions, and therefore the water level forecasts, suddenly change: the new forecasts have to be quickly transformed into operational tasks. It is therefore of the utmost importance to set up scheduled alerts and emergency plans by means of easy-to-follow procedures. To this end, Technital has set up a Decision Support System based on expert procedures that minimizes human error and, as a consequence, reduces the risk of flooding of the historical center. Moreover, the Decision Support System can communicate predefined alerts to all interested parties. The System uses the water level forecasts produced by the ICPSM, taking into account their accuracy at different lead times. The Decision Support System has been successfully tested with 8 years of data, 6 of them in real time.
The Venice experience shows that the Decision Support System is an essential tool that assesses the risks associated with a particular event, provides clear operational procedures, and minimizes the impact of natural floods on human lives, private property and historical monuments.
40 CFR 142.61 - Variances from the maximum contaminant level for fluoride.
Code of Federal Regulations, 2010 CFR
2010-07-01
... level for fluoride. 142.61 Section 142.61 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY... from the maximum contaminant level for fluoride. (a) The Administrator, pursuant to section 1415(a)(1... means generally available for achieving compliance with the Maximum Contaminant Level for fluoride. (1...
40 CFR 142.61 - Variances from the maximum contaminant level for fluoride.
Code of Federal Regulations, 2011 CFR
2011-07-01
... level for fluoride. 142.61 Section 142.61 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY... from the maximum contaminant level for fluoride. (a) The Administrator, pursuant to section 1415(a)(1... means generally available for achieving compliance with the Maximum Contaminant Level for fluoride. (1...
NASA Astrophysics Data System (ADS)
Armas, I.; Dumitrascu, S.
2009-04-01
Urban environments often face the deterioration of the built space and of environmental quality, which in general terms means an unsatisfactory quality of life. Given the complexity of the urban environment and the strong human impact, cities are an ideal setting for a wide range of risks, favoured by external interventions and by the often unexpected dynamics of internal change in the urban system. In this context, historic centre areas are even more vulnerable because of the age of their buildings and their socio-cultural value. The present study focuses on the development of a rapid assessment system for urban risks, with an emphasis on earthquakes. The importance of the study lies in the high vulnerability of urban settlements, which can be regarded as socio-ecological systems characterized by a maximum risk level. In general, cities are highly susceptible areas because of their compactness and elevated degree of land occupancy, and the Bucharest municipality is no exception. The street and sewerage networks have disrupted the natural system that resulted from the evolution of the lake-river system in the Upper Pleistocene-Holocene, and the intense construction activity represents a pressure that has not been measured and that calls for an interdisciplinary methodological approach. In particular, Bucharest's situation is defined by seismic risk compounded by explosive urban growth and the advanced state of degradation of its buildings. In this context, the Lipscani sector of the capital's historic centre is an area of maximum seismic vulnerability, a result of its location in the Dâmbovita River meadow, on the brow of the 80 m terrace, but above all of the degradation of buildings that have accumulated the effects of repeated earthquakes.
The historic centre of Bucharest not only has a cultural function but is also a densely populated area, factors that favour a high susceptibility level. In addition, the majority of the buildings fall into the first and second categories of seismic risk, having been built between 1875 and 1940, an age that entails increased vulnerability to natural hazards. The methodology was developed through the contribution of three partner universities from Bucharest: the University of Bucharest, the Academy for Economic Studies and the Technical University of Constructions. The method was based on the analysis and processing of digital and statistical spatial information derived from 1:500 topographical plans, satellite imagery, archives and historical maps used to identify the age of the buildings. An important stage was the field investigations, which provided the data used in the assessment of the buildings: year of construction, location and vicinity, height, number of floors, state and function of the building, equipment and construction type. The information collected in the field, together with the data resulting from the digitization of the orthophotoplans, was inserted into ArcGIS to compile the database. Furthermore, the team from the Cybernetics Faculty developed a dedicated software package in Visual Studio and SQL Server to insert the survey sheets into the GIS so that they could be statistically processed. The final product of the study is a program whose main functions include editing, analysis based on selected factors (individual or grouped), and the viewing of building information in the form of maps or 3D visualizations.
The strengths of the resulting information system are its extended range of applicability, short processing time, accessibility, and capacity to support a large amount of information, making it an adequate instrument to meet the needs of a susceptible population.
40 CFR 143.3 - Secondary maximum contaminant levels.
Code of Federal Regulations, 2013 CFR
2013-07-01
.... 143.3 Section 143.3 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) WATER PROGRAMS (CONTINUED) NATIONAL SECONDARY DRINKING WATER REGULATIONS § 143.3 Secondary maximum contaminant levels. The secondary maximum contaminant levels for public water systems are as follows: Contaminant...
40 CFR 143.3 - Secondary maximum contaminant levels.
Code of Federal Regulations, 2011 CFR
2011-07-01
.... 143.3 Section 143.3 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) WATER PROGRAMS (CONTINUED) NATIONAL SECONDARY DRINKING WATER REGULATIONS § 143.3 Secondary maximum contaminant levels. The secondary maximum contaminant levels for public water systems are as follows: Contaminant...
40 CFR 143.3 - Secondary maximum contaminant levels.
Code of Federal Regulations, 2014 CFR
2014-07-01
.... 143.3 Section 143.3 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) WATER PROGRAMS (CONTINUED) NATIONAL SECONDARY DRINKING WATER REGULATIONS § 143.3 Secondary maximum contaminant levels. The secondary maximum contaminant levels for public water systems are as follows: Contaminant...
40 CFR 143.3 - Secondary maximum contaminant levels.
Code of Federal Regulations, 2012 CFR
2012-07-01
.... 143.3 Section 143.3 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) WATER PROGRAMS (CONTINUED) NATIONAL SECONDARY DRINKING WATER REGULATIONS § 143.3 Secondary maximum contaminant levels. The secondary maximum contaminant levels for public water systems are as follows: Contaminant...
NASA Astrophysics Data System (ADS)
Riquelme, S.; Ruiz, S.; Yamazaki, Y.; Campos, J.
2012-04-01
The megathrust zone of southern Peru and northern Chile is recognized as a tsunamigenic zone, yet large earthquakes have not occurred there in the last 130 years. The 1868 and 1877 events were the last earthquakes with ruptures larger than 400 km. The fault parameters and slip distributions of these earthquakes are not well understood, because only a few tide gauges recorded these events at far-field distances. We studied simultaneously the near-field effects, run-up, isoseismals, coseismic historical descriptions and far-field tide gauges in the Pacific Ocean. We define several rupture scenarios, which are numerically modeled using the NEOWAVE program to obtain the tsunami propagation and coseismic deformation. New coupling models are used to construct the scenarios. These results are compared with historical near-field and far-field observations; our preferred scenario fits these records well and agrees with the proposed isoseismals. For the 1868 southern Peru earthquake, our preferred scenario has a seismic rupture starting at the southern end of the 2001 Camaná, Peru earthquake, extending from 16.8°S to 19.3°S through the Arica bend at 18°S, with a rupture length of 350-400 km, a maximum slip of 15 meters and a magnitude of Mw ~8.7-8.9. For the 1877 earthquake, our preferred scenario has a length of 400 kilometers from 23°S to 19.3°S, a maximum slip of 25 meters and a magnitude of Mw ~8.8. In both earthquakes the dip (10°-20°) is controlled by the geometry of the subducting Nazca plate, and the larger slip concentrations are located in the shallow part of the contact, from the trench to 30 km depth. Finally, the strong slip in the shallow seismic contact for these historical mega-earthquakes could explain the apparent dual behavior between mega-earthquakes of Mw > 8.5 and moderate-magnitude earthquakes of Mw ~ 8.0, which apparently have occurred only in the deeper zone of the contact, i.e., the earthquakes of 1967 (Mw 6.7) and 2007 (Mw 7.7) in Tocopilla.
However, more detailed studies are required to locate all historical Mw ~ 8.0 earthquakes in the deeper contact zone.
Novel characterization of landscape-level variability in historical vegetation structure
Brandon M. Collins; Jamie M. Lydersen; Richard G. Everett; Danny L. Fry; Scott L. Stephens
2015-01-01
We analyzed historical timber inventory data collected systematically across a large mixed-conifer-dominated landscape to gain insight into the interaction between disturbances and vegetation structure and composition prior to 20th century land management practices. Using records from over 20 000 trees, we quantified historical vegetation structure and composition for...
Li, Zijian; Jennings, Aaron A.
2017-01-01
Jurisdictions worldwide are making efforts to regulate pesticide standard values in residential soil, drinking water, air, and agricultural commodities to lower the risk of pesticide impacts on human health. Because humans may be exposed to pesticides in many ways, such as ingestion, inhalation, and dermal contact, it is important to examine pesticide standards by considering all major exposure pathways. This study analyzed the implied maximum dose limits for commonly used historical and current pesticides to examine whether worldwide pesticide standard values are sufficient to prevent human health impacts. The analysis shows that only the U.S. has regulated pesticide standards in air, and that only 4% of the total number of implied maximum dose limits is based on all three major exposure pathways. For chlorpyrifos, at least 77.5% of the implied maximum dose limits exceed the acceptable daily intake. Most jurisdictions have not yet provided pesticide standards for all major exposure pathways, and some of the standards are not sufficient to protect human health. PMID:29546224
Love, Jeffrey J.; Rigler, E. Joshua; Pulkkinen, Antti; Riley, Pete
2015-01-01
An examination is made of the hypothesis that the statistics of magnetic-storm-maximum intensities are the realization of a log-normal stochastic process. Weighted least-squares and maximum-likelihood methods are used to fit log-normal functions to −Dst storm-time maxima for the years 1957-2012; bootstrap analysis is used to establish confidence limits on forecasts. Both methods provide fits that are reasonably consistent with the data, and both provide fits superior to those that can be made with a power-law function. In general, the maximum-likelihood method provides forecasts with tighter confidence intervals than those provided by weighted least-squares. From extrapolation of the maximum-likelihood fits: a magnetic storm with intensity exceeding that of the 1859 Carrington event, −Dst ≥ 850 nT, occurs about 1.13 times per century, with a wide 95% confidence interval of [0.42, 2.41] times per century; a 100-yr magnetic storm is identified as having −Dst ≥ 880 nT (greater than Carrington), but with a wide 95% confidence interval of [490, 1187] nT.
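A minimal sketch of the maximum-likelihood step described above, on a synthetic catalog of −Dst storm maxima: for a log-normal, the MLE is simply the mean and standard deviation of the logged data, and the exceedance rate follows from the fitted survival function. The catalog size, parameters, and 56-year span below are illustrative stand-ins, not the paper's fitted values.

```python
import numpy as np
from math import erfc, sqrt

# Hypothetical catalog of -Dst storm maxima (nT); values are synthetic.
rng = np.random.default_rng(1)
dst_maxima = rng.lognormal(mean=5.0, sigma=0.55, size=380)

# MLE for a log-normal: sample mean and std of the logarithms.
mu_hat = np.log(dst_maxima).mean()
sigma_hat = np.log(dst_maxima).std(ddof=0)

def exceedance_prob(x, mu, sigma):
    """P(X >= x) for a log-normal distribution."""
    return 0.5 * erfc((np.log(x) - mu) / (sigma * sqrt(2.0)))

# Expected Carrington-class (-Dst >= 850 nT) events per century,
# assuming the catalog spans ~56 years (1957-2012, as in the study).
events_per_year = len(dst_maxima) / 56.0
rate_per_century = 100.0 * events_per_year * exceedance_prob(850.0, mu_hat, sigma_hat)
print(f"mu={mu_hat:.2f}, sigma={sigma_hat:.2f}, "
      f"Carrington-class rate ~ {rate_per_century:.2f} per century")
```

Bootstrap confidence limits, as used in the study, would come from refitting resampled catalogs and taking percentiles of the resulting rates.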
La Camera, Richard J.; Westenburg, Craig L.
1994-01-01
The U.S. Geological Survey, in support of the U.S. Department of Energy, Yucca Mountain Site-Characterization Project, collects, compiles, and summarizes water-resource data in the Yucca Mountain region. The data are collected to document the historical and current condition of ground-water resources, to detect and document changes in those resources through time, and to allow assessments of ground-water resources during investigations to determine the potential suitability of Yucca Mountain for storing high-level nuclear waste. Data on ground-water levels at 36 sites, ground-water discharge at 6 sites, ground-water quality at 19 sites, and ground-water withdrawals within Crater Flat, Jackass Flats, Mercury Valley, and the Amargosa Desert are presented. Data on ground-water levels, discharges, and withdrawals collected by other agencies or as part of other programs are included to further indicate variations through time. A statistical summary of ground-water levels and median annual ground-water withdrawals in Jackass Flats is presented. The statistical summary includes the number of measurements; the maximum, minimum, and median water-level altitudes; and the average deviation of all water-level altitudes for selected baseline periods and for calendar year 1992. Data on ground-water quality are compared to established, proposed, or tentative primary and secondary drinking-water standards, and measurements that exceeded those standards are listed for 18 sites. Detected organic compounds for which established, proposed, or tentative drinking-water standards exist are also listed.
O'Reilly, Andrew M.; Roehl, Edwin A.; Conrads, Paul; Daamen, Ruby C.; Petkewich, Matthew D.
2014-01-01
The urbanization of central Florida has progressed substantially in recent decades, and the total population in Lake, Orange, Osceola, Polk, and Seminole Counties more than quadrupled from 1960 to 2010. The Floridan aquifer system is the primary source of water for potable, industrial, and agricultural purposes in central Florida. Despite increases in groundwater withdrawals to meet the demand of population growth, recharge derived by infiltration of rainfall in the well-drained karst terrain of central Florida is the largest component of the long-term water balance of the Floridan aquifer system. To complement existing physics-based groundwater flow models, artificial neural networks and other data-mining techniques were used to simulate historical lake water level, groundwater level, and spring flow at sites throughout the area. Historical data were examined using descriptive statistics, cluster analysis, and other exploratory analysis techniques to assess their suitability for more intensive data-mining analysis. Linear trend analyses of meteorological data collected by the National Oceanic and Atmospheric Administration at 21 sites indicate 67 percent of sites exhibited upward trends in air temperature over at least a 45-year period of record, whereas 76 percent exhibited downward trends in rainfall over at least a 95-year period of record. Likewise, linear trend analyses of hydrologic response data, which have varied periods of record ranging in length from 10 to 79 years, indicate that water levels in lakes (307 sites) were about evenly split between upward and downward trends, whereas water levels in 69 percent of wells (out of 455 sites) and flows in 68 percent of springs (out of 19 sites) exhibited downward trends. 
Total groundwater use in the study area increased from about 250 million gallons per day (Mgal/d) in 1958 to about 590 Mgal/d in 1980 and remained relatively stable from 1981 to 2008, with a minimum of 559 Mgal/d in 1994 and a maximum of 773 Mgal/d in 2000. The change in groundwater-use trend in the early 1980s and the following period of relatively slight trend is attributable to the concomitant effects of increasing public-supply withdrawals and decreasing use of water by the phosphate industry and agriculture. On the basis of available historical data and exploratory analyses, empirical lake water-level, groundwater-level, and spring-flow models were developed for 22 lakes, 23 wells, and 6 springs. Input time series consisting of various frequencies and frequency-band components of daily rainfall (1942 to 2008) and monthly total groundwater use (1957 to 2008) resulted in hybrid signal-decomposition artificial neural network models. The final models explained much of the variability in observed hydrologic data, with 43 of the 51 sites having coefficients of determination exceeding 0.6, and the models matched the magnitude of the observed data reasonably well, such that models for 32 of the 51 sites had root-mean-square errors less than 10 percent of the measured range of the data. The Central Florida Artificial Neural Network Decision Support System was developed to integrate historical databases and the 102 site-specific artificial neural network models, model controls, and model output into a spreadsheet application with a graphical user interface that allows the user to simulate scenarios of interest. Overall, the data-mining analyses indicate that the Floridan aquifer system in central Florida is a highly conductive, dynamic, open system that is strongly influenced by external forcing. 
The most important external forcing appears to be rainfall, which explains much of the multiyear cyclic variability and long-term downward trends observed in lake water levels, groundwater levels, and spring flows. For most sites, groundwater use explains less of the observed variability in water levels and flows than rainfall. Relative groundwater-use impacts are greater during droughts, however, and long-term trends in water levels and flows were identified that are consistent with historical groundwater-use patterns. The sensitivity of the hydrologic system to rainfall is expected, owing to the well-drained karst terrain and relatively thin confinement of the Floridan aquifer system in much of central Florida. These characteristics facilitate the relatively rapid transmission of infiltrating water from rainfall to the water table and contribute to downward leakage of water to the Floridan aquifer system. The areally distributed nature of rainfall, as opposed to the site-specific nature of groundwater use, and the generally high transmissivity and low storativity properties of the semiconfined Floridan aquifer system contribute to the prevalence of water-level and flow patterns that mimic rainfall patterns. In general, the data-mining analyses demonstrate that the hydrologic system in central Florida is affected by groundwater use differently during wet periods, when little or no system storage is available (high water levels), compared to dry periods, when there is excess system storage (low water levels). Thus, by driving the overall behavior of the system, rainfall indirectly influences the degree to which groundwater use will effect persistent trends in water levels and flows, with groundwater-use impacts more prevalent during periods of low water levels and spring flows caused by low rainfall and less prevalent during periods of high water levels and spring flows caused by high rainfall. 
Differences in the magnitudes of rainfall and groundwater use during wet and dry periods also are important determinants of hydrologic response. An important implication of the data-mining analyses is that rainfall variability at subannual to multidecadal timescales must be considered in combination with groundwater use to provide robust system-response predictions that enhance sustainable resource management in an open karst aquifer system. The data-driven approach was limited, however, by the confounding effects of correlation between rainfall and groundwater use, the quality and completeness of the historical databases, and the spatial variations in groundwater use. The data-mining analyses indicate that available historical data when used alone do not contain sufficient information to definitively quantify the related individual effects of rainfall and groundwater use on hydrologic response. The knowledge gained from data-driven modeling and the results from physics-based modeling, when compared and used in combination, can yield a more comprehensive assessment and a more robust understanding of the hydrologic system than either of the approaches used separately.
Flood Frequency Curves - Use of information on the likelihood of extreme floods
NASA Astrophysics Data System (ADS)
Faber, B.
2011-12-01
Investment in the infrastructure that reduces flood risk for flood-prone communities must incorporate information on the magnitude and frequency of flooding in that area. Traditionally, that information has been a probability distribution of annual maximum streamflows developed from the historical gaged record at a stream site. Practice in the United States fits a Log-Pearson Type III distribution to the annual maximum flows of an unimpaired streamflow record, using the method of moments to estimate distribution parameters. The procedure assumes that annual peak streamflow events are (1) independent, (2) identically distributed, and (3) form a representative sample of the overall probability distribution. Each of these assumptions can be challenged. We rarely have enough data to form a representative sample, and therefore must compute and display the uncertainty in the estimated flood distribution. But is there a wet/dry cycle that makes precipitation less than independent between successive years? Are the peak flows caused by different types of events from different statistical populations? How does the watershed or climate changing over time (non-stationarity) affect the probability distribution of floods? Potential approaches to avoiding these assumptions range from estimating trend and shift and removing them from early data (thereby forming a homogeneous data set) to methods that estimate statistical parameters that vary with time. A further issue in estimating a probability distribution of flood magnitude (the flood frequency curve) is whether a purely statistical approach can accurately capture the range and frequency of floods that are of interest. A meteorologically based analysis produces a "probable maximum precipitation" (PMP) and subsequently a "probable maximum flood" (PMF) that attempts to describe an upper bound on flood magnitude in a particular watershed.
This analysis can help constrain the upper tail of the probability distribution, well beyond the range of gaged data or even historical or paleo-flood data, which can be very important in risk analyses performed for flood risk management and dam and levee safety studies.
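The Log-Pearson Type III fit by the method of moments described above can be sketched as follows: take logarithms of the annual maxima, compute their mean, standard deviation, and skew, and convert a chosen exceedance probability into a frequency factor. The data here are synthetic, and the frequency factor uses the Wilson-Hilferty approximation rather than exact Pearson III tables, so this is an illustrative sketch only.

```python
import numpy as np
from statistics import NormalDist

# Synthetic annual peak flows (hypothetical units); not real gage data.
rng = np.random.default_rng(2)
peaks = rng.lognormal(mean=6.0, sigma=0.4, size=80)

# Method of moments on the base-10 logs of the annual maxima.
logs = np.log10(peaks)
m, s = logs.mean(), logs.std(ddof=1)
n = len(logs)
# Bias-adjusted sample skew of the logs.
g = (n / ((n - 1) * (n - 2))) * np.sum(((logs - m) / s) ** 3)

def lp3_quantile(p_exceed):
    """Flow with annual exceedance probability p_exceed, using the
    Wilson-Hilferty approximation for the Pearson III frequency factor."""
    z = NormalDist().inv_cdf(1.0 - p_exceed)
    if abs(g) < 1e-6:
        k = z  # zero skew reduces to the normal quantile
    else:
        k = (2.0 / g) * ((1.0 + g * z / 6.0 - g ** 2 / 36.0) ** 3 - 1.0)
    return 10.0 ** (m + k * s)

q100 = lp3_quantile(0.01)  # "100-year" flood estimate
print(f"mean(log)={m:.3f}, skew={g:.2f}, Q100 ~ {q100:.0f}")
```

A full Bulletin 17-style analysis would add regional skew weighting, outlier tests, and confidence limits, which are omitted here.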
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bosarge, Christina L., E-mail: cbosarge@umail.iu.edu; Ewing, Marvene M.; DesRosiers, Colleen M.
To demonstrate the dosimetric advantages and disadvantages of standard anteroposterior-posteroanterior (S-AP/PA{sub AAA}), inverse-planned AP/PA (IP-AP/PA) and volumetric-modulated arc (VMAT) radiotherapies in the treatment of children undergoing whole-lung irradiation. Each technique was evaluated by means of target coverage and normal tissue sparing, including data regarding low doses. A historical approach with and without tissue heterogeneity corrections is also demonstrated. Computed tomography (CT) scans of 10 children scanned from the neck to the reproductive organs were used. For each scan, 6 plans were created: (1) S-AP/PA{sub AAA} using the anisotropic analytical algorithm (AAA), (2) IP-AP/PA, (3) VMAT, (4) S-AP/PA{sub NONE} without heterogeneity corrections, (5) S-AP/PA{sub PB} using the Pencil-Beam algorithm and enforcing monitor units from technique 4, and (6) S-AP/PA{sub AAA[FM]} using AAA and forcing fixed monitor units. The first 3 plans compare modern methods and were evaluated based on target coverage and normal tissue sparing. Body maximum and lower body doses (50% and 30%) were also analyzed. Plans 4 to 6 provide a historical view of the progression of heterogeneity algorithms and elucidate what was actually delivered in the past. Averages of each comparison parameter were calculated for all techniques. The S-AP/PA{sub AAA} technique resulted in superior target coverage but had the highest maximum dose to every normal tissue structure. The IP-AP/PA technique provided the lowest doses to the esophagus and stomach and the lowest lower-body doses. VMAT excelled at body maximum dose and maximum doses to the heart, spine, and spleen, but resulted in the highest dose in the 30% body range; it was, however, superior to the S-AP/PA{sub AAA} approach in the 50% range. Each approach thus has associated strengths and weaknesses. Techniques may be selected on a case-by-case basis and by physician preference for target coverage vs normal tissue sparing.
NASA Astrophysics Data System (ADS)
Ishida, K.; Ohara, N.; Kavvas, M. L.; Chen, Z. Q.; Anderson, M. L.
2018-01-01
The impact of air temperature on Maximum Precipitation (MP) estimation, through changes in the moisture-holding capacity of air, was investigated. A series of previous studies estimated the MP of 72-h basin-average precipitation over the American River watershed (ARW) in Northern California by means of the MP estimation approach, which utilizes a physically based regional atmospheric model. For the MP estimation, those studies selected 61 severe storm events for the ARW and maximized them by means of the atmospheric boundary condition shifting (ABCS) and relative humidity maximization (RHM) methods. The present study conducted two types of numerical experiments in addition to the MP estimation of the previous studies. First, the air temperature on the entire lateral boundaries of the outer model domain was increased uniformly by 0.0-8.0 °C, in 0.5 °C increments, for the two most severe maximized historical storm events, in addition to application of the ABCS + RHM method, to investigate the sensitivity of the basin-average precipitation over the ARW to air temperature rise. A monotonic increase in the maximum 72-h basin-average precipitation over the ARW with air temperature rise was found for both storm events. The second numerical experiment used specific amounts of air temperature rise assumed to occur under future climate change conditions: air temperature was increased by those specified amounts uniformly on the entire lateral boundaries, in addition to application of the ABCS + RHM method, to investigate the impact of air temperature on the MP estimate over the ARW under a changing climate. The results of the second numerical experiment show that temperature increases in the future climate may amplify the MP estimate over the ARW: the MP estimate may increase by 14.6% by the middle of the 21st century and by 27.3% by the end of the 21st century, compared to the historical period.
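The physical intuition behind the temperature sensitivity above is that the saturation vapor pressure of air, and thus its moisture-holding capacity, grows roughly 6-7% per degree Celsius (the Clausius-Clapeyron relation). The back-of-envelope check below uses the standard Magnus formula; the reference temperature is an arbitrary assumption, and this is not the study's regional atmospheric model.

```python
import numpy as np

# Illustrative Clausius-Clapeyron-style check, not the study's model.
def e_sat(t_c):
    """Saturation vapor pressure (hPa) via the Magnus formula."""
    return 6.112 * np.exp(17.62 * t_c / (243.12 + t_c))

t0 = 15.0  # hypothetical reference near-surface temperature (deg C)
for dt in (0.5, 2.0, 4.0, 8.0):
    gain = e_sat(t0 + dt) / e_sat(t0) - 1.0
    print(f"+{dt:.1f} C -> moisture-holding capacity up {100 * gain:.1f}%")
```

This simple scaling already suggests double-digit percentage increases in available moisture for warming of a few degrees, consistent in spirit with the amplified MP estimates reported above.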
40 CFR 141.13 - Maximum contaminant levels for turbidity.
Code of Federal Regulations, 2010 CFR
2010-07-01
... turbidity. 141.13 Section 141.13 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) WATER PROGRAMS (CONTINUED) NATIONAL PRIMARY DRINKING WATER REGULATIONS Maximum Contaminant Levels § 141.13... part. The maximum contaminant levels for turbidity in drinking water, measured at a representative...
Undersampling power-law size distributions: effect on the assessment of extreme natural hazards
Geist, Eric L.; Parsons, Thomas E.
2014-01-01
The effect of undersampling on estimating the size of extreme natural hazards from historical data is examined. Tests using synthetic catalogs indicate that the tail of an empirical size distribution sampled from a pure Pareto probability distribution can range from having one to several unusually large events to appearing depleted, relative to the parent distribution. Both of these effects are artifacts caused by limited catalog length. It is more difficult to diagnose the artificially depleted empirical distributions, since one expects that a pure Pareto distribution is physically limited in some way. Using maximum-likelihood methods and the method of moments, we estimate the power-law exponent and the corner size parameter of tapered Pareto distributions for several natural hazard examples: tsunamis, floods, and earthquakes. Each of these examples has a different catalog length and measurement threshold, relative to the largest event sizes. In many cases where there are only several orders of magnitude between the measurement threshold and the largest events, joint two-parameter estimation techniques are necessary to account for estimation dependence between the power-law scaling exponent and the corner size parameter. Results indicate that whereas the corner size parameter of a tapered Pareto distribution can be estimated, its upper confidence bound cannot be determined, and the estimate itself is often unstable over time. Correspondingly, one cannot statistically reject a pure Pareto null hypothesis using natural hazard catalog data. Although physical limits on the hazard source size, and attenuation mechanisms from source to site, constrain the maximum hazard size, historical data alone often cannot reliably determine the corner size parameter. Probabilistic assessments incorporating theoretical constraints on source size and propagation effects are preferred over deterministic assessments of extreme natural hazards based on historical data.
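For the pure Pareto null hypothesis discussed above, the maximum-likelihood estimate of the power-law exponent has a closed form (the Hill estimator). The sketch below applies it to a synthetic catalog; the exponent, threshold, and catalog size are illustrative assumptions, and the two-parameter tapered Pareto fit from the study is not reproduced here.

```python
import numpy as np

# Synthetic Pareto "catalog" via inverse-CDF sampling: F(x) = 1 - (x_min/x)^beta.
rng = np.random.default_rng(3)
beta_true, x_min = 1.0, 1.0  # hypothetical scaling exponent and threshold
sizes = x_min * (1.0 - rng.random(500)) ** (-1.0 / beta_true)

# Maximum-likelihood (Hill) estimator of the Pareto exponent above x_min.
beta_hat = len(sizes) / np.sum(np.log(sizes / x_min))
# Asymptotic standard error of the MLE.
se = beta_hat / np.sqrt(len(sizes))
print(f"beta_hat = {beta_hat:.3f} +/- {se:.3f}")
```

With realistic catalog lengths of tens to hundreds of events, the standard error shows why the tail behavior, and especially a corner size parameter, is so poorly constrained by historical data alone.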
Wu, Zhiwei; He, Hong S; Liu, Zhihua; Liang, Yu
2013-06-01
Fuel load is often used to prioritize stands for fuel reduction treatments. However, wildfire size and intensity are related not only to fuel loads but also to a wide range of other spatially related factors such as topography, weather and human activity. In prioritizing fuel reduction treatments, we propose using burn probability to account for the effects of spatially related factors that can affect wildfire size and intensity. Our burn probability incorporated fuel load, ignition probability, and spread probability (spatial controls on wildfire) at a particular location across a landscape. Our goal was to assess differences in reducing wildfire size and intensity between fuel-load-based and burn-probability-based treatment prioritization approaches. Our study was conducted in a boreal forest in northeastern China. We derived a fuel load map from a stand map, and a burn probability map based on historical fire records and potential wildfire spread patterns. The burn probability map was validated using historical records of burned patches. We then simulated 100 ignitions and six fuel reduction treatments to compare fire size and intensity under the two approaches of fuel treatment prioritization, calibrating and validating simulated wildfires against historical wildfire data. Our results showed that fuel reduction treatments based on burn probability were more effective at reducing simulated wildfire size, mean and maximum rate of spread, and mean fire intensity, but less effective at reducing maximum fire intensity across the burned landscape, than treatments based on fuel load. Thus, contributions from both fuels and spatially related factors should be considered for each fuel reduction treatment. Published by Elsevier B.V.
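The prioritization contrast described above can be sketched on a toy grid: rank cells by a burn probability that combines fuel load with ignition and spread probability, rather than by fuel load alone. All maps and the multiplicative combination below are made-up assumptions for illustration, not the study's landscape model.

```python
import numpy as np

# Toy 4x4 landscape with made-up per-cell values in [0, 1].
rng = np.random.default_rng(4)
fuel = rng.uniform(0.0, 1.0, (4, 4))      # relative fuel load
ignition = rng.uniform(0.0, 1.0, (4, 4))  # ignition probability
spread = rng.uniform(0.0, 1.0, (4, 4))    # spread probability

# Simple multiplicative combination (an assumption, not the paper's formula).
burn_prob = fuel * ignition * spread

k = 4  # number of cells the treatment budget can cover
top_by_fuel = set(np.argsort(fuel, axis=None)[::-1][:k])
top_by_bp = set(np.argsort(burn_prob, axis=None)[::-1][:k])
print("treated under fuel-only ranking:", sorted(top_by_fuel))
print("treated under burn-probability ranking:", sorted(top_by_bp))
print("cells reprioritized:", sorted(top_by_fuel ^ top_by_bp))
```

Cells with high fuel but low ignition or spread probability drop out of the treatment set, which is the behavioral difference the two prioritization approaches are compared on.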
NASA Astrophysics Data System (ADS)
Belferman, Mariana; Katsman, Regina; Agnon, Amotz; Ben-Avraham, Zvi
2017-04-01
Despite the global, social and scientific impact of earthquakes, their triggering mechanisms often remain poorly defined. We suggest that dynamic changes in the levels of the historic water bodies occupying tectonic depressions at the Dead Sea Rift cause significant variations in the shallow crustal stress field and affect local fault systems in a way that may promote or suppress earthquakes. This mechanism and its spatial and temporal scales differ from those of tectonically driven deformations. We use analytical and numerical poroelastic models to simulate immediate and delayed seismic responses resulting from the observed historic water level changes. The role of variability in the poroelastic and elastic properties of the rocks composing the upper crust in inducing or retarding deformations under a strike-slip faulting regime is studied. The solution allows estimating a possible reduction in the seismic recurrence interval. Considering the historic water level fluctuations, our preliminary simulations show promising agreement with paleoseismic rates identified in the field.
ERIC Educational Resources Information Center
Watson, Robert Stephen
2010-01-01
This dissertation illuminates relationships between micro-level practices of schools and macro-level structures of society through the socio-historical lens of New York State Regents mathematics examinations, which were administered to public school students throughout the State of New York between 1866 and 2009, inclusive. Fundamental research…
40 CFR 141.50 - Maximum contaminant level goals for organic contaminants.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 40 Protection of Environment 22 2010-07-01 2010-07-01 false Maximum contaminant level goals for organic contaminants. 141.50 Section 141.50 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) WATER PROGRAMS (CONTINUED) NATIONAL PRIMARY DRINKING WATER REGULATIONS Maximum Contaminant Level...
ERIC Educational Resources Information Center
Osunkunle, Oluyinka Oludolapo
2006-01-01
It has become commonplace for students in historically white universities (HWUs) in South Africa to have 24-hour access to computers, the Internet and e-learning facilities, to check results online and even to register online. However, historically black universities (HBUs) are still battling to gain access to these facilities. On a macro level, the issue…
Source of high tsunamis along the southernmost Ryukyu trench inferred from tsunami stratigraphy
NASA Astrophysics Data System (ADS)
Ando, Masataka; Kitamura, Akihisa; Tu, Yoko; Ohashi, Yoko; Imai, Takafumi; Nakamura, Mamoru; Ikuta, Ryoya; Miyairi, Yosuke; Yokoyama, Yusuke; Shishikura, Masanobu
2018-01-01
Four paleotsunami deposits are exposed in a trench on the coastal lowland north of the southern Ryukyu subduction zone trench. Radiocarbon ages on coral and bivalve shells show that the four deposits record tsunamis dating from the last 2000 yrs, including a historical tsunami with a maximum run-up of 30 m in 1771, for an average recurrence interval of approximately 600 yrs. Ground fissures in a soil beneath the 1771 tsunami deposit may have been generated by stronger shaking than recorded in historical documents. The repeated occurrence of the paleotsunami deposits supports a tectonic source model on the plate boundary rather than a nontectonic source model, such as submarine landslides. Assuming a thrust model at the subduction zone, the seismic coupling ratio may be as low as 20%.
Floods of January-February 1959 in Indiana
Hale, Malcolm D.; Hoggatt, Richard Earl
1961-01-01
Previous maximum stages during the period of record were exceeded at 26 gaging stations. The peak discharge of Big Indian Creek near Corydon, and peak stages of Laughery Creek near Farmers Retreat and Vernon Fork at Vernon on January 21, were greater than any since at least 1897. The peak stage of Wabash River at Huntington on February 10 exceeded that of the historical 1913 flood by 0.5 foot.
Locke, Glenn L.; La Camera, Richard J.
2003-01-01
The U.S. Geological Survey, in support of the U.S. Department of Energy, Yucca Mountain Project, collects, compiles, and summarizes hydrologic data in the Yucca Mountain region. The data are collected to allow assessments of ground-water resources during activities to determine the potential suitability or development of Yucca Mountain for storing high-level nuclear waste. Data on ground-water levels at 35 wells and a fissure (Devils Hole), ground-water discharge at 5 springs and a flowing well, and total reported ground-water withdrawals within Crater Flat, Jackass Flats, Mercury Valley, and the Amargosa Desert are tabulated from January 2000 through December 2002. Historical data on water levels, discharges, and withdrawals are graphically presented to indicate variations through time. A statistical summary of ground-water levels at seven wells in Jackass Flats is presented for 1992-2002 to indicate potential effects of ground-water withdrawals associated with U.S. Department of Energy activities near Yucca Mountain. The statistical summary includes the annual number of measurements; maximum, minimum, and median water-level altitudes; and the average deviation of measured water-level altitudes compared to selected baseline periods. Baseline periods varied within 1985-93. At six of the seven wells in Jackass Flats, the median water levels for 2002 were slightly higher (0.3-2.4 feet) than for their respective baseline periods. At the remaining well, data for 2002 were not summarized statistically, but the median water-level altitude in 2001 was 0.7 foot higher than that in its baseline period.
2. Historic American Buildings Survey Lanny Miyamoto, Photographer October 1958 ...
2. Historic American Buildings Survey Lanny Miyamoto, Photographer October 1958 INTERIOR, FROM FLOOR LEVEL, TOWARDS CHANCEL - Roman Catholic Cathedral of Baltimore, Cathedral Street, Baltimore, Independent City, MD
3. Historic American Buildings Survey Lanny Miyamoto, Photographer October 1958 ...
3. Historic American Buildings Survey Lanny Miyamoto, Photographer October 1958 INTERIOR, FROM BALCONY LEVEL, TOWARDS CHANCEL - Roman Catholic Cathedral of Baltimore, Cathedral Street, Baltimore, Independent City, MD
La Camera, Richard J.; Locke, Glenn L.; Habte, Aron M.; Darnell, Jon G.
2006-01-01
The U.S. Geological Survey, in support of the U.S. Department of Energy, Office of Repository Development, collects, compiles, and summarizes hydrologic data in the Yucca Mountain region of southern Nevada and eastern California. These data are collected to allow assessments of ground-water resources during activities to determine the potential suitability or development of Yucca Mountain for storing high-level nuclear waste. Data on ground-water levels at 35 boreholes and 1 fissure (Devils Hole), ground-water discharge at 5 springs, both ground-water levels and discharge at 1 flowing borehole, and total reported ground-water withdrawals within Crater Flat, Jackass Flats, Mercury Valley, and the Amargosa Desert are tabulated from January through December 2004. Also tabulated are ground-water levels, discharges, and withdrawals collected by other agencies (or collected as part of other programs) and data revised from those previously published at monitoring sites. Historical data on water levels, discharges, and withdrawals are presented graphically to indicate variations through time. A statistical summary of ground-water levels at seven boreholes in Jackass Flats is presented for the period 1992-2004 to indicate potential effects of ground-water withdrawals associated with U.S. Department of Energy activities near Yucca Mountain. The statistical summary includes the annual number of measurements, maximum, minimum, and median water-level altitudes, and average deviation of measured water-level altitudes compared to the 1992-93 baseline period. At six boreholes in Jackass Flats, median water levels for 2004 were slightly higher (0.3-2.7 feet) than their median water levels for 1992-93. At one borehole in Jackass Flats, median water level for 2004 equaled the median water level for 1992-93.
Human population dynamics in Europe over the Last Glacial Maximum.
Tallavaara, Miikka; Luoto, Miska; Korhonen, Natalia; Järvinen, Heikki; Seppä, Heikki
2015-07-07
The severe cooling and the expansion of the ice sheets during the Last Glacial Maximum (LGM), 27,000-19,000 y ago (27-19 ky ago) had a major impact on plant and animal populations, including humans. Changes in human population size and range have affected our genetic evolution, and recent modeling efforts have reaffirmed the importance of population dynamics in cultural and linguistic evolution, as well. However, in the absence of historical records, estimating past population levels has remained difficult. Here we show that it is possible to model spatially explicit human population dynamics from the pre-LGM at 30 ky ago through the LGM to the Late Glacial in Europe by using climate envelope modeling tools and modern ethnographic datasets to construct a population calibration model. The simulated range and size of the human population correspond significantly with spatiotemporal patterns in the archaeological data, suggesting that climate was a major driver of population dynamics 30-13 ky ago. The simulated population size declined from about 330,000 people at 30 ky ago to a minimum of 130,000 people at 23 ky ago. The Late Glacial population growth was fastest during Greenland interstadial 1, and by 13 ky ago, there were almost 410,000 people in Europe. Even during the coldest part of the LGM, the climatically suitable area for human habitation remained unfragmented and covered 36% of Europe.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brown, S.
This document describes the contents of a digital database containing maximum potential aboveground biomass, land use, and estimated biomass and carbon data for 1980. The biomass data and carbon estimates are associated with woody vegetation in Tropical Africa. These data were collected to reduce the uncertainty associated with estimating historical releases of carbon from land use change. Tropical Africa is defined here as encompassing 22.7 x 10^6 km^2 of the earth's land surface and is comprised of countries that are located in tropical Africa (Angola, Botswana, Burundi, Cameroon, Cape Verde, Central African Republic, Chad, Congo, Benin, Equatorial Guinea, Ethiopia, Djibouti, Gabon, Gambia, Ghana, Guinea, Ivory Coast, Kenya, Liberia, Madagascar, Malawi, Mali, Mauritania, Mozambique, Namibia, Niger, Nigeria, Guinea-Bissau, Zimbabwe (Rhodesia), Rwanda, Senegal, Sierra Leone, Somalia, Sudan, Tanzania, Togo, Uganda, Burkina Faso (Upper Volta), Zaire, and Zambia). The database was developed using the GRID module in the ARC/INFO geographic information system. Source data were obtained from the Food and Agriculture Organization (FAO), the U.S. National Geophysical Data Center, and a limited number of biomass-carbon density case studies. These data were used to derive the maximum potential and actual (ca. 1980) aboveground biomass values at regional and country levels. The land-use data provided were derived from a vegetation map originally produced for the FAO by the International Institute of Vegetation Mapping, Toulouse, France.
Jeznach, Lillian C; Hagemann, Mark; Park, Mi-Hyun; Tobiason, John E
2017-10-01
Extreme precipitation events are of concern to managers of drinking water sources because these occurrences can affect both water supply quantity and quality. However, little is known about how these low-probability events impact organic matter and nutrient loads to surface water sources and how these loads may impact raw water quality. This study describes a method for evaluating the sensitivity of a water body of interest from watershed input simulations under extreme precipitation events. An example application of the method is illustrated using the Wachusett Reservoir, an oligo-mesotrophic surface water reservoir in central Massachusetts and a major drinking water supply to metropolitan Boston. Extreme precipitation event simulations during the spring and summer resulted in total organic carbon, UV-254 (a surrogate measurement for reactive organic matter), and total algae concentrations at the drinking water intake that exceeded recorded maximums. Nutrient concentrations after storm events were less likely to exceed recorded historical maximums. For this particular reservoir, increasing inter-reservoir transfers of water with lower organic matter content after a large precipitation event has been shown, in practice and in model simulations, to decrease organic matter levels at the drinking water intake, thereby decreasing treatment-associated oxidant demand, energy for UV disinfection, and the potential for formation of disinfection byproducts.
The Effects of Solar Maximum on the Earth's Satellite Population and Space Situational Awareness
NASA Technical Reports Server (NTRS)
Johnson, Nicholas L.
2012-01-01
The rapidly approaching maximum of Solar Cycle 24 will have wide-ranging effects not only on the number and distribution of resident space objects, but also on vital aspects of space situational awareness, including conjunction assessment processes. The best known consequence of high solar activity is an increase in the density of the thermosphere, which, in turn, increases drag on the vast majority of objects in low Earth orbit. The most prominent evidence of this is seen in a dramatic increase in space object reentries. Due to the massive amounts of new debris created by the fragmentations of Fengyun-1C, Cosmos 2251, and Iridium 33 during the recent period of solar minimum, this effect might reach epic levels. However, space surveillance systems are also affected, both directly and indirectly, historically leading to an increase in the number of lost satellites and a degradation in the routine accuracy of their calculated orbits. Thus, at a time when more objects are drifting through regions containing exceptionally high-value assets, such as the International Space Station and remote sensing satellites, their position uncertainties increase. In other words, as the possibility of damaging and catastrophic collisions increases, our ability to protect space systems is degraded. Potential countermeasures include adjustments to space surveillance techniques and the resetting of collision avoidance maneuver thresholds.
Revisiting Robinson: The perils of individualistic and ecologic fallacy
Subramanian, S V; Jones, Kelvyn; Kaddour, Afamia; Krieger, Nancy
2009-01-01
Background W S Robinson made a seminal contribution by demonstrating that correlations for the same two variables can be different at the individual and ecologic level. This study reanalyzes and historically situates Robinson's influential study that laid the foundation for the primacy of analyzing data at only the individual level. Methods We applied a binomial multilevel logistic model to analyse variation in illiteracy as enumerated by the 1930 U.S. Census (the same data as used by Robinson). The outcome was log odds of being illiterate, while predictors were race/nativity ('native whites', 'foreign-born whites' and 'negroes') at the individual level, and presence of Jim Crow segregation laws for education at the state level. We conducted historical research to identify the social and scientific context within which Robinson's study was produced and favourably received. Results Empirically, the substantial state variations in illiteracy could not be accounted for by the states' race/nativity composition. Different approaches to modelling state effects yielded considerably attenuated associations at the individual level between illiteracy and race/nativity. Furthermore, state variation in illiteracy differed across the race/nativity groups, with state variation being largest for whites and smallest for foreign-born whites. Strong effects of Jim Crow education laws on illiteracy were observed, with the effect being strongest for blacks. Historically, Robinson's study was consonant with the post-World War II ascendancy of methodological individualism. Conclusion Applying a historically informed multilevel perspective to Robinson's profoundly influential study, we demonstrate that meaningful analysis of individual-level relationships requires attention to substantial heterogeneity in state characteristics. The implication is that perils are posed by not only ecological fallacy but also individualistic fallacy.
Multilevel thinking, grounded in historical and spatiotemporal context, is thus a necessity, not an option. PMID:19179348
Historical Climate Change Impacts on the Hydrological Processes of the Ponto-Caspian Basin
NASA Astrophysics Data System (ADS)
Koriche, Sifan A.; Singarayer, Joy S.; Coe, Michael T.; Nandini, Sri; Prange, Matthias; Cloke, Hannah; Lunt, Dan
2017-04-01
The Ponto-Caspian basin is one of the largest basins globally, composed of a closed basin (the Caspian Sea) and open basins connecting to the global ocean (the Black and Azov Seas). Over the historical period (1850-present), Caspian Sea levels have varied between -25 and -29 m relative to mean sea level (Arpe et al., 2012), resulting in considerable changes to the area of the lake (currently 371,000 km2). Given projections of future climate change and the importance of the Caspian Sea for fisheries, agriculture, and industry, it is vital to understand how sea levels may vary in the future. Hydrological models can be used to assess the impacts of climate change on hydrological processes for future forecasts. However, it is critical to first evaluate such models using observational data for the present and recent past, and to understand the key hydrological processes driving past changes in sea level. In this study, the Terrestrial Hydrological Model (THMB) (Coe, 2000, 2002) is applied and evaluated to investigate the hydrological processes of the Ponto-Caspian basin for the historical period 1900 to 2000. The model has been forced with observational reanalysis datasets (ERA-Interim, ERA-20) and historical climate model outputs (from the CESM and HadCM3 models) to investigate the variability in the Caspian Sea level and the major river discharges. We examine the differences produced by driving the hydrological model with reanalysis data or climate models, and evaluate model performance against observational discharge measurements and Caspian Sea level data. Second, we investigate the sensitivity of historical Caspian Sea level variations to different aspects of climate change to identify the most important processes involved over this time period.
H. B. Reitlinger and the origins of the efficiency at maximum power formula for heat engines
NASA Astrophysics Data System (ADS)
Vaudrey, Alexandre; Lanzetta, François; Feidt, Michel
2014-12-01
Although not especially ancient, the history of the expression for heat-engine efficiency at maximum power has nevertheless been turbulent. More than a decade after the publication of the seminal article by Curzon and Ahlborn in 1975, two older works by Chambadal and Novikov were rediscovered, both dating from 1957. Then, some years ago, the name of Yvon arose from a textual reference to this famous relation in a conference article published in 1955. Thanks to a historical study of French-language books long out of print and never translated into other languages, we show in this paper that this relation was actually first proposed by Henri B. Reitlinger in 1929.
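The relation whose lineage is traced above is commonly written as the efficiency of an endoreversible heat engine at maximum power output, eta = 1 - sqrt(Tc/Th), which always lies below the Carnot bound 1 - Tc/Th. A minimal sketch of both quantities (the reservoir temperatures below are arbitrary illustrative values, not from the paper):

```python
from math import sqrt

def carnot_efficiency(t_hot, t_cold):
    """Reversible (Carnot) upper bound on heat-engine efficiency."""
    return 1.0 - t_cold / t_hot

def efficiency_at_max_power(t_hot, t_cold):
    """Reitlinger / Chambadal-Novikov / Curzon-Ahlborn relation:
    efficiency of an endoreversible engine operated at maximum power."""
    return 1.0 - sqrt(t_cold / t_hot)

# Illustrative example: 600 K hot reservoir, 300 K cold reservoir.
t_hot, t_cold = 600.0, 300.0
eta_carnot = carnot_efficiency(t_hot, t_cold)      # 0.5
eta_maxp = efficiency_at_max_power(t_hot, t_cold)  # ~0.293
```

Real power plants tend to operate much closer to the maximum-power value than to the Carnot bound, which is why the relation attracted so much historical attention.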
Wheeler, Russell L.
2014-01-01
Computation of probabilistic earthquake hazard requires an estimate of Mmax, the maximum earthquake magnitude thought to be possible within a specified geographic region. This report is Part A of an Open-File Report that describes the construction of a global catalog of moderate to large earthquakes, from which one can estimate Mmax for most of the Central and Eastern United States and adjacent Canada. The catalog and Mmax estimates derived from it were used in the 2014 edition of the U.S. Geological Survey national seismic-hazard maps. This Part A discusses prehistoric earthquakes that occurred in eastern North America, northwestern Europe, and Australia, whereas a separate Part B deals with historical events.
Surface faulting. A preliminary view
Sharp, R.V.
1989-01-01
This description of surface faulting near Spitak, Armenia, is based on a field inspection made December 22-26, 1988. The surface rupture west of Spitak, displacement of the ground surface, pre-earthquake surface expressions of the fault, and photolineaments in landsat images are described and surface faulting is compared to aftershocks. It is concluded that the 2 meters of maximum surface displacement fits well within the range of reliably measured maximum surface offsets for historic reverse and oblique-reverse faulting events throughout the world. By contrast, the presently known length of surface rupture near Spitak, between 8 and 13 km, is shorter than any other reverse or oblique-reverse event of magnitude greater than 6.0. This may be a reason to suppose that additional surface rupture might remain unmapped.
Fuzzifying historical peak water levels: case study of the river Rhine at Basel
NASA Astrophysics Data System (ADS)
Salinas, Jose Luis; Kiss, Andrea; Blöschl, Günter
2016-04-01
Hydrological information comes from a variety of sources, which in some cases may be imprecise. In particular, this is an important issue for the available information on water stages during historical floods. An accurate estimate of the water level profile, together with an elevation model of the riverbed and floodplain areas, is fundamental for the hydraulic reconstruction of historical flood events, allowing the back-calculation of flood peak discharges, velocity and erosion fields, and damages, among others. For the greatest floods during the last 700 years, Wetter et al. (2011) reconstructed the water levels and historical discharges at different locations in the old city centre from a variety of historical sources (stone marks, official documents, paintings, etc.). This work presents a model for the inherent imprecision of these historical water levels, described, using the arithmetic of fuzzy numbers, by membership functions, much as a probability density function describes the uncertainty of a random variable. In addition to the water stages collected in situ from flood marks, other documentary evidence (e.g. preserved in narratives and newspaper flood reports) also lends itself to fuzzy modelling. This study presents the use of fuzzy logic to transform historical information from different sources, in this case flood water stages, into membership functions. These values can then be introduced into the mathematical framework of fuzzy Bayesian inference to perform the statistical analyses with the rules of fuzzy-number algebra. The results of this flood frequency analysis, as in the traditional non-fuzzy approach, link discharges with exceedance probabilities or return periods. The main difference is that the modelled discharge quantiles are not precise values but fuzzy numbers, represented by their membership functions, which explicitly include the imprecision of the historical information used.
Wetter, O., Pfister, C., Weingartner, R., Luterbacher, J., Reist, T., & Trösch, J. (2011) The largest floods in the High Rhine basin since 1268 assessed from documentary and instrumental evidence. Hydrol. Sci. J. 56(5), 733-758.
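As an illustration of the fuzzification described above, a historical flood mark can be represented as a triangular fuzzy number: membership is 1 at the best-estimate stage and falls linearly to 0 at the plausible bounds. This is a generic sketch of the idea; the stage values below are invented for illustration, not Basel data.

```python
def triangular_membership(x, low, peak, high):
    """Membership grade of x in the triangular fuzzy number (low, peak, high):
    0 outside [low, high], 1 at peak, linear in between."""
    if x <= low or x >= high:
        return 0.0
    if x <= peak:
        return (x - low) / (peak - low)
    return (high - x) / (high - peak)

# A flood mark read as "about 9.2 m, certainly between 8.8 and 9.6 m".
stage = (8.8, 9.2, 9.6)
grades = {x: triangular_membership(x, *stage) for x in (8.8, 9.0, 9.2, 9.4)}
```

Arithmetic on such fuzzy numbers (e.g. propagating a fuzzy stage through a rating curve) then yields fuzzy discharge quantiles in place of crisp ones.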
NASA Astrophysics Data System (ADS)
Lenhard, R. J.; Rayner, J. L.; Davis, G. B.
2017-10-01
A model is presented to account for elevation-dependent residual and entrapped LNAPL above and below, respectively, the water-saturated zone when predicting subsurface LNAPL specific volume (fluid volume per unit area) and transmissivity from current and historic fluid levels in wells. Physically based free, residual, and entrapped LNAPL saturation distributions and LNAPL relative permeabilities are integrated over a vertical slice of the subsurface to yield the LNAPL specific volumes and transmissivity. The model accounts for the effects of fluctuating water tables. Hypothetical predictions are given for different porous media (loamy sand and clay loam), fluid levels in wells, and historic water-table fluctuations. It is shown that the elevation range from the LNAPL-water interface in a well to the upper elevation where the free LNAPL saturation approaches zero is the same for a given LNAPL thickness in a well, regardless of porous media type. Further, the LNAPL transmissivity is largely dependent on current fluid levels in wells and not historic levels. Results from the model can aid in developing successful LNAPL remediation strategies and in improving the design and operation of remedial activities. Results of the model can also aid in assessing the LNAPL recovery technology endpoint, based on the predicted transmissivity.
NASA Technical Reports Server (NTRS)
Piersol, Allan G.
1991-01-01
Analytical expressions have been derived to describe the mean square error in the estimation of the maximum rms value computed from a step-wise (or running) time average of a nonstationary random signal. These analytical expressions have been applied to the problem of selecting the optimum averaging times that will minimize the total mean square errors in estimates of the maximum sound pressure levels measured inside the Titan IV payload fairing (PLF) and the Space Shuttle payload bay (PLB) during lift-off. Based on evaluations of typical Titan IV and Space Shuttle launch data, it has been determined that the optimum averaging times for computing the maximum levels are (1) T_o = 1.14 sec for the maximum overall level and T_oi = 4.88 f_i^(-0.2) sec for the maximum 1/3 octave band levels inside the Titan IV PLF, and (2) T_o = 1.65 sec for the maximum overall level and T_oi = 7.10 f_i^(-0.2) sec for the maximum 1/3 octave band levels inside the Space Shuttle PLB, where f_i is the 1/3 octave band center frequency. However, the results for both vehicles indicate that the total rms error in the maximum level estimates will be within 25 percent of the minimum error for all averaging times within plus or minus 50 percent of the optimum averaging time, so a precise selection of the exact optimum averaging time is not critical. Based on these results, linear averaging times (T) are recommended for computing the maximum sound pressure level during lift-off.
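The frequency-dependent averaging-time expressions quoted above can be evaluated directly. The sketch below is my own wrapper around the two published coefficients (4.88 for the Titan IV PLF, 7.10 for the Shuttle PLB), evaluated over an illustrative subset of standard 1/3-octave-band center frequencies:

```python
def optimum_averaging_time(f_center_hz, coefficient):
    """Optimum 1/3-octave-band averaging time T_oi = C * f_i**-0.2 (seconds),
    with C = 4.88 for the Titan IV PLF and C = 7.10 for the Shuttle PLB."""
    return coefficient * f_center_hz ** -0.2

# Illustrative subset of standard 1/3 octave band center frequencies (Hz).
bands = [31.5, 63, 125, 250, 500, 1000, 2000]
titan = {f: optimum_averaging_time(f, 4.88) for f in bands}
shuttle = {f: optimum_averaging_time(f, 7.10) for f in bands}
```

Because the exponent is only -0.2, the optimum averaging time varies weakly with frequency, consistent with the finding above that a precise choice of averaging time is not critical.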
Historical earthquakes studies in Eastern Siberia: State-of-the-art and plans for future
NASA Astrophysics Data System (ADS)
Radziminovich, Ya. B.; Shchetnikov, A. A.
2013-01-01
Many problems in investigating the historical seismicity of East Siberia remain unsolved. These problems relate particularly to the quality and reliability of data sources, the completeness of parametric earthquake catalogues, and the precision and transparency of estimates of the main parameters of historical earthquakes. The main purpose of this paper is to highlight the current status of studies of historical seismicity in Eastern Siberia, together with an analysis of existing macroseismic and parametric earthquake catalogues. We also attempt to identify the main shortcomings of existing catalogues and to clarify the reasons for their appearance in the light of the history of seismic observations in Eastern Siberia. Contentious issues in the earthquake catalogues are considered using the example of three strong historical earthquakes that are important for assessing seismic hazard in the region. In particular, it was found that, owing to a technical error, the parameters of the large M = 7.7 earthquake of 1742 were transferred from the regional catalogue to the worldwide database with incorrect epicenter coordinates. How stereotypes concerning active tectonics influence the localization of an epicenter is shown by the example of a strong M = 6.4 earthquake of 1814. The effect of insufficient use of primary data sources on the completeness of earthquake catalogues is illustrated by the example of a strong M = 7.0 event of 1859. 
Our analysis of the state of the art of historical earthquake studies in Eastern Siberia leads us to propose the following activities for the near future: (1) compilation of a database including initial descriptions of macroseismic effects with reference to their place and time of occurrence; (2) parameterization of the maximum possible (magnitude-unlimited) number of historical earthquakes on the basis of all available data; and (3) compilation of an improved version of the parametric historical earthquake catalogue for East Siberia, with detailed consideration of each event and distinct logic schemes for data interpretation. We therefore conclude that a large-scale revision of the historical earthquake catalogues for the study area is necessary.
The Climate Science Special Report: Rising Seas and Changing Oceans
NASA Astrophysics Data System (ADS)
Kopp, R. E.
2017-12-01
GMSL has risen by about 16-21 cm since 1900. Ocean heat content has increased at all depths since the 1960s, and global mean sea-surface temperature increased by 0.7°C/century between 1900 and 2016. Human activity contributed substantially to generating a rate of GMSL rise since 1900 faster than during any preceding century in at least 2800 years. A new set of six sea-level rise scenarios, spanning a range from 30 cm to 250 cm of 21st-century GMSL rise, was developed for the CSSR. The lowest scenario is based on linearly extrapolating the past two decades' rate of rise. The highest scenario is informed by literature estimates of maximum physically plausible values, observations indicating the onset of marine ice sheet instability in parts of West Antarctica, and modeling of ice-cliff and ice-shelf instability mechanisms. The new scenarios include localized projections along US coastlines. There is significant variability around the US, with rates of rise likely greater than GMSL rise in the US Northeast and the western Gulf of Mexico. Under scenarios involving extreme Antarctic contributions, regional rise would be greater than GMSL rise along almost all US coastlines. Historical sea-level rise has already driven a 5- to 10-fold increase in minor tidal flooding in several US coastal cities since the 1960s. Under the CSSR's Intermediate sea-level rise scenario (1.0 m of GMSL rise in 2100), a majority of NOAA tide gauge locations will by 2040 experience the historical 5-year coastal flood about 5 times per year. Ocean changes are not limited to rising sea levels. Ocean pH is decreasing at a rate that may be unparalleled in the last 66 million years. Along coastlines, ocean acidification can be enhanced by changes in upwelling (particularly along the US Pacific Coast); by episodic, climate change-enhanced increases in freshwater input (particularly along the US Atlantic Coast); and by the enhancement of biological respiration by nutrient runoff.
Climate models project a slowdown in the Atlantic Meridional Overturning Circulation (AMOC) under high-emissions scenarios. Any slowdown will reduce ocean heat and carbon absorption and raise sea levels off the northeastern US. A full AMOC collapse, improbable in the current century, would lead to an additional 0.5 m of sea-level rise and offset 0-2°C of warming over the US.
Unrest episodes at Campi Flegrei: A reconstruction of vertical ground movements during 1905-2009
NASA Astrophysics Data System (ADS)
Del Gaudio, C.; Aquino, I.; Ricciardi, G. P.; Ricco, C.; Scandone, R.
2010-08-01
Geodetic observations at the Campi Flegrei caldera began in 1905. Historical observations and the few measurements made before 1970 suggested a deflationary trend. Since 1969, the ground has inflated during two major uplift episodes, in 1969-72 and 1982-85. We collected and reanalyzed all available point observations of vertical ground displacement taken in the period 1905-2009, with special attention to the period before 1969, to reconstruct in greater detail the deformation history of the caldera. We make use of the many photographs of the sea level at a Roman ruin (the Serapeum market) taken between 1905 and 1969 to infer with greater accuracy its height relative to sea level. We identify a previously disregarded major episode of ground uplift that occurred between 1950 and 1952, with a maximum uplift of about 73 cm. This finding suggests that Campi Flegrei is experiencing a period of unrest more prolonged than previously thought. The higher seismicity associated with the later episodes of unrest suggests that the volcano has approached an instability threshold, which may eventually result in a volcanic eruption.
Mercury in soil and perennial plants in a mining-affected urban area from Northwestern Romania.
Senilă, Marin; Levei, Erika A; Senilă, Lăcrimioara R; Oprea, Gabriela M; Roman, Cecilia M
2012-01-01
Mercury (Hg) concentrations were evaluated in soils and perennial plants sampled in four districts of Baia Mare city, a historical mining and ore-processing center in Northwestern Romania. The results showed that the Hg concentration exceeded the guideline value of 1.0 mg kg(-1) dry weight (dw) established by Romanian legislation in 24% of the analyzed soil samples, while the median Hg concentration (0.70 mg kg(-1) dw) was lower than the guideline value. However, the Hg content in soil was generally higher than typical values in soils from residential and agricultural areas of cities around the world. The median Hg concentration in the perennial plants was 0.22 mg kg(-1) dw, exceeding the maximum Hg level (0.10 mg kg(-1)) established by European Directive 2002/32/EC for plants used in animal feed to prevent its transfer and further accumulation at higher levels of the food chain. No significant correlations were found between soil Hg and the other analyzed metals (Cd, Cu, Pb, Zn) resulting from non-ferrous smelting activities, probably because of different physicochemical properties that led to different dispersion patterns.
NASA Astrophysics Data System (ADS)
Jefferson, M.; Curran, B.; Routhier, M.; Mulukutla, G. K.; Hall, C. L.
2011-12-01
Climate change is now widely researched around the world; one prominent exception is the discipline of Historic Preservation. With climate change likely to cause sea levels to rise over the decades to come, historical preservationists are now looking for data and information to help them mitigate potential threats to our cultural heritage along the sea coasts. Information that can help in understanding these threats includes geographic data such as the locations of artifacts, fossils, and historic structures, as well as their elevation above mean sea level. In an effort to build a set of protocols to help preservationists study these threats, our work currently focuses on a living history museum site known as Strawbery Banke in Portsmouth, New Hampshire. This poster features a subset of this work that was completed through undergraduate student internships funded by the Joan and James Leitzel Center at the University of New Hampshire. This subset focused on the creation of a 3D model of the study site. Creating this model involved completing a topographic ground survey and 3D digital mapping of the site. The ground survey was completed using standard surveying techniques and tools, and the 3D digital mapping was completed using ArcScene, software that is part of the ArcGIS suite. This work was completed in conjunction with a larger study funded by the National Geographic Society to better understand how sea level rise and the effects of storm surges are putting the historic structures at Strawbery Banke at risk.
van Asten, Liselotte; van der Lubben, Mariken; van den Wijngaard, Cees; van Pelt, Wilfrid; Verheij, Robert; Jacobi, Andre; Overduin, Pieter; Meijer, Adam; Luijt, Dirk; Claas, Eric; Hermans, Mirjam; Melchers, Willem; Rossen, John; Schuurman, Rob; Wolffs, Petra; Boucher, Charles; Bouchier, Charles; Schirm, Jurjen; Kroes, Louis; Leenders, Sander; Galama, Joep; Peeters, Marcel; van Loon, Anton; Stobberingh, Ellen; Schutten, Martin; Koopmans, Marion
2009-07-01
Experience with a highly pathogenic avian influenza outbreak in the Netherlands (2003) illustrated that the diagnostic demand for respiratory viruses at different biosafety levels (including BSL3) can increase unexpectedly and dramatically. We describe the measures taken since, aimed at strengthening national laboratory surge capacity and improving preparedness for dealing with diagnostic demand during outbreaks of (emerging) respiratory virus infections, including pandemic influenza virus. Academic and peripheral medical-microbiological laboratories collaborated to determine minimal laboratory requirements for the identification of viruses in the early stages of a pandemic or a large outbreak of avian influenza virus. Next, an enhanced collaborative national network of outbreak assistance laboratories (OAL) was set up. An inventory was made of the maximum diagnostic throughput that this network can deliver in a period of intensified demand. To estimate the potential magnitude of this surge demand, historical counts were calculated from hospital- and physician-based registries of patients presenting with respiratory symptoms. The number of respiratory physician visits ranged from 140,000 to 615,000 per month, and hospitalizations ranged from 3,000 to 11,500 per month. The established OAL network provides rapid diagnostic response with agreed quality requirements and a maximum throughput capacity of 1,275 samples/day (38,000 per month), assuming other routine diagnostic work needs to be maintained. Thus, surge demand for diagnostics for hospitalized cases (if not distinguishable from other respiratory illness) could be handled by the OAL network. Assessing the etiology of community-acquired acute respiratory infection, however, may rapidly exceed the capacity of the network. Therefore, algorithms are needed for triaging laboratory diagnostics; currently, this is not addressed in pandemic preparedness plans.
Water-Balance Model to Simulate Historical Lake Levels for Lake Merced, California
NASA Astrophysics Data System (ADS)
Maley, M. P.; Onsoy, S.; Debroux, J.; Eagon, B.
2009-12-01
Lake Merced is a freshwater lake in southwestern San Francisco, California. In the late 1980s and early 1990s, an extended, severe drought caused significant declines in Lake Merced lake levels, raising concerns about the long-term health of the lake. In response, the Lake Merced Water Level Restoration Project was developed to evaluate an engineered solution to increase and maintain lake levels. The Lake Merced Lake-Level Model was developed to support the conceptual engineering design to restore lake levels. It is a spreadsheet-based water-balance model that performs monthly water-balance calculations based on the hydrological conceptual model. The model independently calculates each water-balance component from available climate and hydrological data. The model objective was to develop a practical, rule-based approach for the water balance and to calibrate the model results to measured lake levels. The advantage of a rule-based approach is that, once the rules are defined, the model can readily be adapted for future-case simulations. The model was calibrated to historical lake levels over a 70-year period from 1939 to 2009. Calibrating the model over this long historical range tested it over a variety of hydrological conditions, including wet, normal, and dry precipitation years, flood events, and periods of high and low lake levels. The historical lake-level range was over 16 feet. The model calibration of historical to simulated lake levels had a residual mean of 0.02 feet and an absolute residual mean of 0.42 feet. More importantly, the model demonstrated the ability to simulate both long-term and short-term trends, with a strong correlation in the magnitude of both annual and seasonal fluctuations in lake levels.
The calibration results demonstrate an improved conceptual understanding of the key hydrological factors that control lake levels, reduce uncertainty in the hydrological conceptual model, and increase confidence in the model’s ability to forecast future lake conditions. The Lake Merced Lake-Level Model will help decision-makers with a straightforward, practical analysis of the major contributions to lake-level declines that can be used to support engineering, environmental and other decisions.
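The monthly bookkeeping such a spreadsheet model performs can be sketched in a few lines. This is a minimal illustration under our own assumptions: the component names and units are hypothetical stand-ins, whereas the actual model derives each term from climate and hydrological data and applies calibrated rules:

```python
def step_lake_storage(storage, precip, runoff_in, evap, outflow):
    """Advance lake storage by one month (all terms in the same volume
    units, e.g. acre-ft); storage is not allowed to go negative."""
    return max(0.0, storage + precip + runoff_in - evap - outflow)

def simulate(storage0, monthly_terms):
    """Return the storage trajectory for a list of per-month
    (precip, runoff_in, evap, outflow) tuples."""
    trajectory = [storage0]
    for p, r, e, o in monthly_terms:
        trajectory.append(step_lake_storage(trajectory[-1], p, r, e, o))
    return trajectory

# Two hypothetical months: a wet month followed by a dry one
print(simulate(100.0, [(10, 5, 8, 2), (0, 0, 12, 1)]))
```

A stage-storage relationship (not shown) would then convert each storage value to a lake level for comparison against measured levels.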
Henriques, Isabel; Araújo, Susana; Pereira, Anabela; Menezes-Oliveira, Vanessa B; Correia, António; Soares, Amadeu M V M; Scott-Fordsmand, Janeck J; Amorim, Mónica J B
2015-01-01
The aim of this study was to assess the combined effects of temperature and copper (Cu) contamination on the structure of the soil bacterial community. Contaminated (or spiked) and control soils from two different geographic origins (PT, Portugal and DK, Denmark) were used. The DK soil was from a historically contaminated study field, representing long-term exposure to Cu, while the PT soil was from a clean site and freshly spiked with Cu. Soil bacterial communities were exposed in mesocosms for 84 days to three different temperatures based on values typically found in each geographic region and on conditions simulating a warming scenario. The results indicate that Cu stress alters the structure of the bacterial community and that this effect is, to some extent, temperature-dependent. Effects on bacterial diversity were also observed for both soils. Differences in the responses of the DK and PT communities were apparent, with the community from the historically contaminated soil being more resilient to temperature fluctuations. This study presents evidence supporting the hypothesis that temperature alters the effect of metals on soils. Further, our results suggest that soil quality criteria should be based on studies performed at temperatures selected for the specific geographic region. Studies taking temperature changes into account are needed to model and predict risks; this is important, for example, for future adjustments of the maximum permissible levels for soil metal contamination.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rader, B.R.; Nimmo, D.W.R.; Chapman, P.L.
1997-07-01
Concentrations of metals in sediments and soils deposited along the floodplain of the Clark Fork River, within the Grant-Kohrs Ranch National Historic Site, Deer Lodge, Montana, USA, have exceeded maximum background concentrations in the United States for most metals tested. As a result of mining and smelting activities, portions of the Deer Lodge Valley, including the Grant-Kohrs Ranch, have received National Priority List designation under the Comprehensive Environmental Response, Compensation and Liability Act. Using a series of plant germination tests, pH measurements, and metal analyses, this study investigated the toxicity of soils from floodplain slicken areas, bare spots devoid of vegetation, along the Clark Fork River. The slicken soils collected from the Grant-Kohrs Ranch were toxic to all four plant species tested. The most sensitive endpoint in the germination tests was root length, and the least sensitive was emergence. Considering emergence, the most sensitive species was the resident grass species Agrostis gigantea. The sensitivities were reversed when root lengths were examined, with Echinochloa crusgalli showing the greatest sensitivity. Both elevated concentrations of metals and low pH were necessary to produce an acutely phytotoxic response in laboratory seed germination tests using slicken soils. Moreover, pH values on the Grant-Kohrs Ranch appear to be a better predictor of acutely phytotoxic conditions than total metal levels.
Hoffmann, Jörn; Zebker, Howard A.; Galloway, Devin L.; Amelung, Falk
2001-01-01
Analyses of areal variations in the subsidence and rebound occurring over stressed aquifer systems, in conjunction with measurements of the hydraulic head fluctuations causing these displacements, can yield valuable information about the compressibility and storage properties of the aquifer system. Historically, stress‐strain relationships have been derived from paired extensometer/piezometer installations, which provide only point source data. Because of the general unavailability of spatially detailed deformation data, areal stress‐strain relations and their variability are not commonly considered in constraining conceptual and numerical models of aquifer systems. Interferometric synthetic aperture radar (InSAR) techniques can map ground displacements at a spatial scale of tens of meters over 100 km wide swaths. InSAR has been used previously to characterize larger magnitude, generally permanent aquifer system compaction and land subsidence at yearly and longer timescales, caused by sustained drawdown of groundwater levels that produces intergranular stresses consistently greater than the maximum historical stress. We present InSAR measurements of the typically small‐magnitude, generally recoverable deformations of the Las Vegas Valley aquifer system occurring at seasonal timescales. From these we derive estimates of the elastic storage coefficient for the aquifer system at several locations in Las Vegas Valley. These high‐resolution measurements offer great potential for future investigations into the mechanics of aquifer systems and the spatial heterogeneity of aquifer system structure and material properties as well as for monitoring ongoing aquifer system compaction and land subsidence.
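The storage-property estimate described above can be sketched in its simplest form: the elastic (recoverable) storage coefficient approximated as the ratio of seasonal vertical displacement to the head fluctuation driving it. This is our illustrative reduction of the approach, not the study's full analysis, and the numbers are invented:

```python
def elastic_storage_coefficient(displacement_m, head_change_m):
    """Dimensionless elastic storage coefficient approximated as
    recoverable vertical displacement divided by the head change
    that caused it (S_ke ~ delta_b / delta_h)."""
    if head_change_m == 0:
        raise ValueError("head change must be nonzero")
    return displacement_m / head_change_m

# Hypothetical example: 1 cm of recoverable seasonal deformation
# driven by a 10 m seasonal head fluctuation
s_ke = elastic_storage_coefficient(0.01, 10.0)
print(f"S_ke = {s_ke:.4f}")
```

In practice, InSAR supplies the displacement field at tens-of-meters resolution, so such ratios can be mapped areally rather than only at paired extensometer/piezometer sites.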
Santonastaso, Trent; Lighten, Jackie; van Oosterhout, Cock; Jones, Kenneth L; Foufopoulos, Johannes; Anthony, Nicola M
2017-07-01
The major histocompatibility complex (MHC) plays a key role in disease resistance and is the most polymorphic gene region in vertebrates. Although habitat fragmentation is predicted to lead to a loss in MHC variation through drift, the impact of other evolutionary forces may counter this effect. Here we assess the impact of selection, drift, migration, and recombination on MHC class II and microsatellite variability in 14 island populations of the Aegean wall lizard Podarcis erhardii. Lizards were sampled from islands within the Cyclades (Greece) formed by rising sea levels since the last glacial maximum, approximately 20,000 years before present. Bathymetric data were used to determine the area and age of each island, allowing us to infer the corresponding magnitude and timing of genetic bottlenecks associated with island formation. Both MHC and microsatellite variation were positively associated with island area, supporting the hypothesis that drift governs neutral and adaptive variation in this system. However, MHC but not microsatellite variability declined significantly with island age. This discrepancy is likely due to the fact that microsatellites attain mutation-drift equilibrium more rapidly than MHC. Although we detected signals of balancing selection, recombination and migration, the effects of these evolutionary processes appeared negligible relative to drift. This study demonstrates how land bridge islands can provide novel insights into the impact of historical fragmentation on genetic diversity as well as help disentangle the effects of different evolutionary forces on neutral and adaptive diversity.
NASA Astrophysics Data System (ADS)
Tapponnier, P.; Elias, A.; Singh, S.; King, G.; Briais, A.; Daeron, M.; Carton, H.; Sursock, A.; Jacques, E.; Jomaa, R.; Klinger, Y.
2007-12-01
On July 9, AD 551, a large earthquake followed by a tsunami destroyed most of the coastal cities of Phoenicia (modern-day Lebanon). This was arguably one of the most devastating historical submarine earthquakes in the eastern Mediterranean. Geophysical data from the Shalimar survey unveil the source of this Mw = 7.5 event: rupture of the offshore, hitherto unknown, 100-150 km long, active, east-dipping Mount Lebanon Thrust (MLT). Deep-towed sonar swaths along the base of prominent bathymetric escarpments reveal fresh, west-facing seismic scarps that cut the sediment-smoothed seafloor. The MLT trace comes closest (~8 km) to the coast between Beirut and Enfeh, where, as 13 radiocarbon-calibrated ages indicate, a shoreline-fringing vermetid bench suddenly emerged by ~80 cm in the 6th century AD. At Tabarja, the regular vertical separation (~1 m) of higher fossil benches suggests uplift by three more comparable-size earthquakes since the Holocene sea level reached a maximum ca. 7-6 ka, implying a 1500-1750 yr recurrence time. Unabated thrusting on the MLT likely orchestrated the growth of Mt. Lebanon since the late Miocene. The newly discovered MLT has been the missing piece in the Dead Sea Transform and eastern Mediterranean tectonic scheme. Identifying the source of the AD 551 event thus completes a reassessment of the sources of the major historical earthquakes on the various faults of the Lebanese Restraining Bend of the Levant Fault System (or Dead Sea Transform).
Assigning historic responsibility for extreme weather events
NASA Astrophysics Data System (ADS)
Otto, Friederike E. L.; Skeie, Ragnhild B.; Fuglestvedt, Jan S.; Berntsen, Terje; Allen, Myles R.
2017-11-01
Recent scientific advances make it possible to assign extreme events to human-induced climate change and historical emissions. These developments allow losses and damage associated with such events to be assigned country-level responsibility.
6. Historic American Buildings Survey, Cervin Robinson, Photographer August, 1958 INTERIOR OF SOUTH AND EAST WALLS IN LOWER LEVEL MILL RACE - Le Van Mill, Kutztown Road vicinity, Kutztown, Berks County, PA
5. Historic American Buildings Survey, Cervin Robinson, Photographer August, 1958 INTERIOR OF NORTH AND WEST WALLS IN LOWER LEVEL MILL RACE - Le Van Mill, Kutztown Road vicinity, Kutztown, Berks County, PA
Tsunami hazard assessment in the Hudson River Estuary based on dynamic tsunami-tide simulations
NASA Astrophysics Data System (ADS)
Shelby, Michael; Grilli, Stéphan T.; Grilli, Annette R.
2016-12-01
This work is part of a tsunami inundation mapping activity carried out along the US East Coast since 2010 under the auspices of the National Tsunami Hazard Mitigation Program (NTHMP). The US East Coast features two main estuaries with significant tidal forcing that are bordered by numerous critical facilities (power plants, major harbors, ...) as well as densely built low-lying areas: Chesapeake Bay and the Hudson River Estuary (HRE). The HRE is the focus of this work, specifically the assessment of tsunami hazard in Manhattan and the Hudson and East River areas. In the NTHMP work, inundation maps are computed as envelopes of maximum surface elevation along the coast and inland, by simulating the impact of selected probable maximum tsunamis (PMTs) in the Atlantic Ocean margin and basin. At present, such simulations assume a static reference level near shore equal to the local mean high water (MHW) level. Here, instead, we simulate maximum inundation in the HRE resulting from dynamic interactions between the incident PMTs and a tide, which is calibrated to achieve MHW at its maximum level. To identify the conditions leading to maximum tsunami inundation, each PMT is simulated for four different phases of the tide, and results are compared to those obtained for a static reference level. We first separately simulate the tide and the three PMTs found to be most significant for the HRE. These are caused by: (1) a flank collapse of the Cumbre Vieja Volcano (CVV) in the Canary Islands (with an 80 km3 volume representing the most likely extreme scenario); (2) an M9 coseismic source in the Puerto Rico Trench (PRT); and (3) a large submarine mass failure (SMF) in the Hudson River canyon with parameters similar to the 165 km3 historical Currituck slide, which is used as a local proxy for the maximum possible SMF.
Simulations are performed with the nonlinear and dispersive long-wave model FUNWAVE-TVD in a series of nested grids of increasing resolution towards the coast, with one-way coupling. Four levels of nested grids are used, from a 1 arc-min spherical-coordinate grid in the deep ocean down to a 39-m Cartesian grid in the HRE. Bottom friction coefficients in the finer grids are calibrated for the tide to achieve the local spatially averaged MHW level at high tide in the HRE. Combined tsunami-tide simulations are then performed for four phases of the tide, corresponding to each tsunami arriving at Sandy Hook (NJ): 1.5 h ahead of, concurrent with, 1.5 h after, and 3 h after the local high tide. These simulations are forced along the offshore boundary of the third-level grid by linearly superposing time series of surface elevation and horizontal currents of the calibrated tide and each tsunami wave train; this is done in deep enough water for a linear superposition to be accurate. Combined tsunami-tide simulations are then performed with FUNWAVE-TVD in this and the finest nested grids. Results show that, for the three PMTs and depending on the tide phase, the dynamic simulations lead to no increase or to a slightly increased inundation in the HRE (by up to 0.15 m depending on location), and to larger currents than the simulations over a static level; the Currituck SMF proxy tsunami is the PMT leading to maximum inundation in the HRE. For all tide phases, nonlinear interactions between tide and tsunami currents modify the elevation, current, and celerity of tsunami wave trains, mostly in the shallower areas of the HRE where bottom friction dominates, as compared to a linear superposition of wave elevations and currents.
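The boundary-forcing step, linearly superposing tide and tsunami signals, amounts to a pointwise sum of two equally sampled time series. A minimal sketch of that step only (the full workflow applies it to both elevation and current series, and only in deep water where linearity holds):

```python
def superpose(tide_series, tsunami_series):
    """Pointwise sum of two equally sampled time series, e.g. surface
    elevations of the calibrated tide and a tsunami wave train at the
    offshore boundary of a nested grid."""
    if len(tide_series) != len(tsunami_series):
        raise ValueError("series must share the same time base")
    return [a + b for a, b in zip(tide_series, tsunami_series)]
```

Shifting the tsunami series in time relative to the tide series before summing is how the four tide phases (arrival 1.5 h before, at, 1.5 h after, and 3 h after high tide) would be realized.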
We note that, while the dynamic simulations predict a slight increase in inundation, this increase may be on the same order as, or even smaller than, sources of uncertainty in the modeling of tsunami sources, such as their initial water elevation, and in the bottom friction and bathymetry used in the tsunami grids. Nevertheless, the results in this paper provide insight into the magnitude and spatial variability of tsunami propagation and impact in the complex inland waterways surrounding New York City, and into their modification by dynamic tidal effects. We conclude that changes in inundation resulting from the inclusion of a dynamic tide in the specific case of the HRE, although of scientific interest, are not significant for tsunami hazard assessment, and that the standard approach of specifying a static reference level equal to MHW is conservative. However, in other estuaries with similarly complex bathymetry/topography and stronger tidal currents, a simplified static approach might not be appropriate.
NASA Astrophysics Data System (ADS)
Croft, Michael; de Berg, Kevin
2014-09-01
This paper selects six key alternative conceptions identified in the literature on student understandings of chemical bonding and illustrates how a historical analysis and a textbook analysis can inform these conceptions and lead to recommendations for improving the teaching and learning of chemical bonding at the secondary school level. The historical analysis and the textbook analysis focus on the concepts of charge, octet, electron pair, and ionic, covalent, and metallic bonding. Finally, a table of recommendations is made for teachers and students, in light of four fundamental questions and the six alternative conceptions, to enhance the quality of the curriculum resources available and the level of student engagement.
The Influence of Mean Trophic Level on Biomass and Production in Marine Ecosystems
NASA Astrophysics Data System (ADS)
Woodson, C. B.; Schramski, J.
2016-02-01
The oceans have faced rapid removal of top predators, causing a reduction in the mean trophic level of many marine ecosystems through fishing down the food web. However, estimating the pre-exploitation biomass of the ocean has been difficult. Historical population sizes have been estimated using population dynamics models, archaeological or historical records, fisheries data, living memory, ecological monitoring data, genetics, and metabolic theory. In this talk, we expand on the use of metabolic theory by including complex trophic webs to estimate pre-exploitation levels of marine biomass. Our results suggest that historical marine biomass could be as much as 10 times higher than current estimates and that the total carrying capacity of the ocean is sensitive to mean trophic level and trophic web complexity. We further show that the production levels needed to support the added biomass are possible due to biomass accumulation and predator-prey overlap in regions such as fronts. These results have important implications for marine biogeochemical cycling, fisheries management, and conservation efforts.
Christensen, Allen H.
2005-01-01
Historically, U.S. Air Force Plant 42 has relied on groundwater as its primary source of water owing, in large part, to the scarcity of surface water in the region. Groundwater withdrawal for municipal, industrial, and agricultural use has affected groundwater levels at U.S. Air Force Plant 42 and vicinity. A study to document changes in groundwater gradients and to present historical water-level data was completed by the U.S. Geological Survey in cooperation with the U.S. Air Force. This report presents historical water-level data, hydrographs, and generalized seasonal water-level contours for September-October 2000 and March-April 2001. The collection and interpretation of groundwater data help local water districts, military bases, and private citizens gain a better understanding of the groundwater flow systems and, consequently, water availability. During September-October 2000 and March-April 2001, the U.S. Geological Survey and other agencies made a total of 102 water-level measurements: 46 during September-October 2000 and 56 during March-April 2001. These data document recent conditions and, when compared with historical data, document changes in groundwater levels. Two water-level contour maps were drawn: the first depicts water-level conditions for September-October 2000 and the second for March-April 2001. In general, the water-level contour maps show water-level depressions formed as a result of groundwater withdrawal. One hundred sixteen long-term hydrographs, using water-level data from 1915 through 2000, were constructed to show water-level trends in the area. The hydrographs indicate that water-level declines occurred throughout the study area, with the greatest declines south of U.S. Air Force Plant 42.
Gustman, Alan L; Steinmeier, Thomas L; Tabatabai, Nahid
2012-01-01
Analysts have proposed raising the maximum level of earnings subject to the Social Security payroll tax (the "tax max") to improve long-term Social Security Trust Fund solvency. This article investigates how raising the tax max leads to the "leakage" of portions of the additional revenue into higher benefit payments. Using Health and Retirement Study data matched to Social Security earnings records, we compare historical payroll tax payments and benefit amounts for Early Boomers (born 1948-1953) with tax and benefit simulations had they been subject to the tax max (adjusted for wage growth) faced by cohorts 12 and 24 years older. We find that 43.2 percent of the additional payroll tax revenue attributable to tax max increases affecting Early Boomers relative to taxes paid by the cohort 12 years older leaked into higher benefits. For Early Boomers relative to those 24 years older, we find 53.5 percent leakage.
Observations of Highly Variable Deuterium in the Martian Upper Atmosphere
NASA Astrophysics Data System (ADS)
Clarke, John T.; Mayyasi-Matta, Majd A.; Bhattacharyya, Dolon; Chaufray, Jean-Yves; Chaffin, Michael S.; Deighan, Justin; Schneider, Nicholas M.; Jain, Sonal; Jakosky, Bruce
2017-10-01
One of the key pieces of evidence for historic high levels of water on Mars is the present elevated ratio of deuterium/hydrogen (D/H) in near-surface water. This can be explained by the loss of large amounts of water into space, with the lighter H atoms escaping faster than D atoms. Understanding the specific physical processes and controlling factors behind the present escape of H and D is the key objective of the MAVEN IUVS echelle channel. This knowledge can then be applied to an accurate extrapolation back in time to understand the water history of Mars. Observations of D in the martian upper atmosphere over the first martian year of the MAVEN mission have shown highly variable amounts of D, with a short-lived maximum just after perihelion and during southern summer. The timing and nature of this increase provide constraints on its possible origin. These results will be presented and compared with other measurements of the upper atmosphere of Mars.
Drivers of vegetative dormancy across herbaceous perennial plant species.
Shefferson, Richard P; Kull, Tiiu; Hutchings, Michael J; Selosse, Marc-André; Jacquemyn, Hans; Kellett, Kimberly M; Menges, Eric S; Primack, Richard B; Tuomi, Juha; Alahuhta, Kirsi; Hurskainen, Sonja; Alexander, Helen M; Anderson, Derek S; Brys, Rein; Brzosko, Emilia; Dostálik, Slavomir; Gregg, Katharine; Ipser, Zdeněk; Jäkäläniemi, Anne; Jersáková, Jana; Dean Kettle, W; McCormick, Melissa K; Mendoza, Ana; Miller, Michael T; Moen, Asbjørn; Øien, Dag-Inge; Püttsepp, Ülle; Roy, Mélanie; Sather, Nancy; Sletvold, Nina; Štípková, Zuzana; Tali, Kadri; Warren, Robert J; Whigham, Dennis F
2018-05-01
Vegetative dormancy, that is, the temporary absence of aboveground growth for ≥ 1 year, is paradoxical, because plants cannot photosynthesise or flower during dormant periods. We test ecological and evolutionary hypotheses for its widespread persistence. We show that dormancy has evolved numerous times. Most species displaying dormancy exhibit life-history costs of sprouting and of dormancy. Short-lived and mycoheterotrophic species have higher proportions of dormant plants than long-lived species and species with other nutritional modes. Foliage loss is associated with higher future dormancy levels, suggesting that carbon limitation promotes dormancy. Maximum dormancy duration is shorter under higher precipitation and at higher latitudes, the latter suggesting an important role for competition or herbivory. Study length affects estimates of some demographic parameters. Our results identify life-historical and environmental drivers of dormancy. We also highlight the evolutionary importance of the little-understood costs of sprouting and growth, latitudinal stress gradients and mixed nutritional modes. © 2018 John Wiley & Sons Ltd/CNRS.
Stratified coastal ocean interactions with tropical cyclones
Glenn, S. M.; Miles, T. N.; Seroka, G. N.; Xu, Y.; Forney, R. K.; Yu, F.; Roarty, H.; Schofield, O.; Kohut, J.
2016-01-01
Hurricane-intensity forecast improvements currently lag the progress achieved for hurricane tracks. Integrated ocean observations and simulations during hurricane Irene (2011) reveal that the wind-forced two-layer circulation of the stratified coastal ocean, and resultant shear-induced mixing, led to significant and rapid ahead-of-eye-centre cooling (at least 6 °C and up to 11 °C) over a wide swath of the continental shelf. Atmospheric simulations establish this cooling as the missing contribution required to reproduce Irene's accelerated intensity reduction. Historical buoys from 1985 to 2015 show that ahead-of-eye-centre cooling occurred beneath all 11 tropical cyclones that traversed the Mid-Atlantic Bight continental shelf during stratified summer conditions. A Yellow Sea buoy similarly revealed significant and rapid ahead-of-eye-centre cooling during Typhoon Muifa (2011). These findings establish that including realistic coastal baroclinic processes in forecasts of storm intensity and impacts will be increasingly critical to mid-latitude population centres as sea levels rise and tropical cyclone maximum intensities migrate poleward. PMID:26953963
Phylogeographic population structure of Red-winged Blackbirds assessed by mitochondrial DNA
Ball, R. Martin; Freeman, Scott; James, Frances C.; Bermingham, Eldredge; Avise, John C.
1988-01-01
A continent-wide survey of restriction-site variation in mitochondrial DNA (mtDNA) of the Red-winged Blackbird (Agelaius phoeniceus) was conducted to assess the magnitude of phylogeographic population structure in an avian species. A total of 34 mtDNA genotypes was observed among the 127 specimens assayed by 18 restriction endonucleases. Nonetheless, population differentiation was minor, as indicated by (i) small genetic distances in terms of base substitutions per nucleotide site between mtDNA genotypes (maximum P ≈ 0.008) and by (ii) the widespread geographic distributions of particular mtDNA clones and phylogenetic arrays of clones. Extensive morphological differentiation among redwing populations apparently has occurred in the context of relatively little phylogenetic separation. A comparison between mtDNA data sets for Red-winged Blackbirds and deermice (Peromyscus maniculatus) also sampled from across North America shows that intraspecific population structures of these two species differ dramatically. The lower phylogeographic differentiation in redwings is probably due to historically higher levels of gene flow. PMID:16593914
a Framework for Architectural Heritage Hbim Semantization and Development
NASA Astrophysics Data System (ADS)
Brusaporci, S.; Maiezza, P.; Tata, A.
2018-05-01
Despite the recognized advantages of the use of BIM in the field of architecture and engineering, extending this procedure to architectural heritage is neither immediate nor straightforward. The uniqueness and irregularity of historical architecture, on the one hand, and the great quantity of information necessary for the knowledge of architectural heritage, on the other, require appropriate reflection. The aim of this paper is to define a general framework for the use of BIM procedures for architectural heritage. The proposed methodology distinguishes three Levels of Development (LoDs), depending on the characteristics of the building and the objectives of the study: a simplified model with low geometric accuracy and a minimum quantity of information (LoD 200); a model closer to reality but still with a high deviation between the virtual and the real building (LoD 300); and a detailed BIM model that reproduces the geometric irregularities of the building as closely as possible and is enriched with the maximum quantity of information available (LoD 400).
The impact of age on lamotrigine and oxcarbazepine kinetics: a historical cohort study.
Wegner, Ilse; Wilhelm, Abraham J; Sander, Josemir W; Lindhout, Dick
2013-10-01
Age as well as estrogen levels may have an impact on the pharmacokinetics of lamotrigine (LTG) and monohydroxycarbazepine (MHD), the active metabolite of oxcarbazepine (OXC). To assess the effects of age and menopause, we retrospectively evaluated a therapeutic drug-monitoring database. Samples from 507 women and 302 men taking LTG and 464 women and 319 men taking OXC were used to develop a population pharmacokinetic model. Data were analyzed using NONMEM software and were compared with a population pharmacokinetic model based on samples of 1705 women and 1771 men taking carbamazepine (CBZ). Age was a significant factor contributing to pharmacokinetic variability in individuals using LTG, OXC, and CBZ, with apparent clearance (clearance relative to bioavailability, Cl/F) increasing from age 18, a maximum Cl/F at 33 years (CBZ) and 36 years (LTG and OXC), and a gradual decrease of Cl/F towards older age. We found no effect of perimenopausal age range on LTG and MHD clearance. © 2013.
Assessment of exposure to EMF in a Danish case-control study of childhood cancer.
Jensen, J K; Olsen, J H; Folkersen, E
1994-01-01
In Denmark it is permitted to draw overhead lines across residential areas. In connection with a Danish case-control study we developed a method for estimating the historical values of magnetic fields at residences. The study included 1,707 cases with childhood cancer and 4,788 matched population controls. A total of 16,082 different addresses had been occupied by the families from the time of conception until the date of diagnosis. The values of the extreme, maximum, middle and minimum 50 Hz magnetic field strengths originating from a 50-400 kV high-voltage installation were estimated for each of the dwellings included in a potential exposure area. 30 children were exposed to an average level of magnetic fields of 0.1 microT or more. The evaluated Danish method of exposure assessment was compared with the method for residential wiring codes developed by Wertheimer and Leeper /1/. We concluded that the US wiring codes are inappropriate for use in connection with the Danish electricity transmission system.
Pediatric intensive care unit admission tool: a colorful approach.
Biddle, Amy
2007-12-01
This article discusses the development, implementation, and utilization of our institution's Pediatric Intensive Care Unit (PICU) Color-Coded Admission Status Tool. Rather than the historical method of identifying a maximum number of staffed beds, a tool was developed to color code the PICU's admission status. Previous methods had been ineffective and led to confusion between the PICU leadership team and the administration. The tool includes the previously missing components of staffing and acuity, which are essential in determining admission capability. The PICU tool has three colored levels: green indicates open for admissions; yellow, admission alert resulting from available beds or because staffing is not equal to the projected patient numbers or required acuity; and red, admissions on hold because only one trauma or arrest bed is available or staffing is not equal to the projected acuity. Yellow and red designations require specific actions and the medical director's approval. The tool has been highly successful and significantly impacted nursing with the inclusion of the essential component of nurse staffing necessary in determining bed availability.
Dudley, Robert W.; Hodgkins, Glenn A.
2013-01-01
Water-level trends spanning 20, 30, 40, and 50 years were tested using month-end groundwater levels in 26, 12, 10, and 3 wells in northern New England (Maine, New Hampshire, and Vermont), respectively. Groundwater levels for 77 wells were used in interannual correlations with meteorological and hydrologic variables related to groundwater. Trends in the contemporary groundwater record (20 and 30 years) indicate increases (rises) or no substantial change in groundwater levels in all months for most wells throughout northern New England. The highest percentage of increasing 20-year trends was in February through March, May through August, and October through November. Forty-year trend results were mixed, whereas 50-year trends indicated increasing groundwater levels. Whereas most monthly groundwater levels correlate strongly with the previous month's level, monthly levels also correlate strongly with monthly streamflows in the same month; correlations of levels with monthly precipitation are less frequent and weaker than those with streamflow. Groundwater levels in May through August correlate strongly with annual (water year) streamflow. Correlations of groundwater levels with streamflow data and the relative richness of 50- to 100-year historical streamflow data suggest useful proxies for quantifying historical groundwater levels in light of the relatively short and fragmented groundwater data records presently available.
Changes to Sub-daily Rainfall Patterns in a Future Climate
NASA Astrophysics Data System (ADS)
Westra, S.; Evans, J. P.; Mehrotra, R.; Sharma, A.
2012-12-01
An algorithm is developed for disaggregating daily rainfall into sub-daily rainfall 'fragments' (continuous high temporal-resolution rainfall sequences whose total depth sums to the daily rainfall amount) under a future, warmer climate. The basis of the algorithm is to re-sample sub-daily fragments from the historical record conditional on the total daily rainfall amount and a range of temperature-based atmospheric predictors. The logic is that as the atmosphere warms, future rainfall patterns will be more reflective of historical rainfall patterns which occurred on warmer days at the same location, or at locations which have an atmospheric temperature profile more representative of expected future atmospheric conditions. It was found that the daily to sub-daily scaling relationship varied significantly by season and by location, with rainfall patterns on warmer seasons or at warmer locations typically exhibiting higher rainfall intensity occurring over shorter periods within a day, compared with cooler seasons and locations. Importantly, by regressing against temperature-based atmospheric covariates, this effect was substantially reduced, suggesting that the approach also may be valid when extrapolating to a future climate. An adjusted method of fragments algorithm was then applied to nine stations around Australia, with the results showing that when holding total daily rainfall constant, the maximum intensity of short duration rainfall increased by a median of about 5% per degree for the maximum 6 minute burst, and 3.5% for the maximum one hour burst, whereas the fraction of the day with no rainfall increased by a median of 1.5%. This highlights that a large proportion of the change to the distribution of rainfall is likely to occur at sub-daily timescales, with significant implications for many hydrological systems.
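The resampling step described above can be sketched as follows. This is a deliberately simplified illustration, not the authors' algorithm: the function name `disaggregate` and the nearest-temperature pooling are assumptions, and the actual method also conditions on the daily rainfall total and on several temperature-based atmospheric predictors.

```python
import random

def disaggregate(daily_total, daily_temp, library, k=10):
    """Disaggregate one day's rainfall into sub-daily 'fragments'.

    library: list of (temp, fragments) pairs from the historical record,
    where fragments is a sequence of sub-daily fractions summing to 1.
    The k historical days whose temperature covariate is closest to
    daily_temp form the candidate pool; one day's fragment pattern is
    sampled and rescaled so the fragments sum to the new daily total.
    """
    # rank historical days by similarity of the temperature covariate
    pool = sorted(library, key=lambda rec: abs(rec[0] - daily_temp))[:k]
    _, fragments = random.choice(pool)
    return [f * daily_total for f in fragments]
```

Sampling from warmer analogue days is what lets the scheme project sub-daily patterns under a warmer climate while holding the daily total fixed.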
Mwakalapa, Eliezer Brown; Mmochi, Aviti John; Müller, Mette Helen Bjorge; Mdegela, Robinson Hammerthon; Lyche, Jan Ludvig; Polder, Anuschka
2018-01-01
In 2016, farmed and wild milkfish (Chanos chanos) and mullet (Mugil cephalus) from Tanzania mainland (Mtwara) and Zanzibar islands (Pemba and Unguja) were collected for analyses of persistent organic pollutants (POPs). Fish livers were analysed for organochlorine pesticides (OCPs), polychlorinated biphenyls (PCBs), brominated flame retardants (BFRs). Muscle tissue was used for analyses of perfluoroalkyl substances (PFASs). The major contaminant was p,p'-DDE. The highest p,p'-DDE concentration was found in wild milkfish from Mtwara (715.27 ng/g lipid weight (lw)). This was 572 times higher than the maximum level detected in farmed milkfish from the same area. The ratios of p,p'-DDE/p,p'-DDT in wild milkfish and mullet from Mtwara and Pemba indicate historical use of DDT. In contrast, ratios in farmed milkfish from Unguja and Mtwara, suggest recent use. The levels of HCB, HCHs and trans-nonachlor were low. ∑ 10 PCBs levels were low, ranging from
Zhang, Jinju; Li, Zuozhou; Fritsch, Peter W.; Tian, Hua; Yang, Aihong; Yao, Xiaohong
2015-01-01
Background and Aims The phylogeography of plant species in sub-tropical China remains largely unclear. This study used Tapiscia sinensis, an endemic and endangered tree species widely but disjunctly distributed in sub-tropical China, as a model to reveal the patterns of genetic diversity and phylogeographical history of Tertiary relict plant species in this region. The implications of the results are discussed in relation to its conservation management. Methods Samples were taken from 24 populations covering the natural geographical distribution of T. sinensis. Genetic structure was investigated by analysis of molecular variance (AMOVA) and spatial analysis of molecular variance (SAMOVA). Phylogenetic relationships among haplotypes were constructed with maximum parsimony and haplotype network methods. Historical population expansion events were tested with pairwise mismatch distribution analysis and neutrality tests. Species potential range was deduced by ecological niche modelling (ENM). Key Results A low level of genetic diversity was detected at the population level. A high level of genetic differentiation and a significant phylogeographical structure were revealed. The mean divergence time of the haplotypes was approx. 1·33 million years ago. Recent range expansion in this species is suggested by a star-like haplotype network and by the results from the mismatch distribution analysis and neutrality tests. Conclusions Climatic oscillations during the Pleistocene have had pronounced effects on the extant distribution of Tapiscia relative to the Last Glacial Maximum (LGM). Spatial patterns of molecular variation and ENM suggest that T. sinensis may have retreated in south-western and central China and colonized eastern China prior to the LGM. Multiple montane refugia for T. sinensis existing during the LGM are inferred in central and western China. The populations adjacent to or within these refugia of T. sinensis should be given high priority in the development of conservation policies and management strategies for this endangered species. PMID:26187222
Zhang, Jinju; Li, Zuozhou; Fritsch, Peter W; Tian, Hua; Yang, Aihong; Yao, Xiaohong
2015-10-01
The phylogeography of plant species in sub-tropical China remains largely unclear. This study used Tapiscia sinensis, an endemic and endangered tree species widely but disjunctly distributed in sub-tropical China, as a model to reveal the patterns of genetic diversity and phylogeographical history of Tertiary relict plant species in this region. The implications of the results are discussed in relation to its conservation management. Samples were taken from 24 populations covering the natural geographical distribution of T. sinensis. Genetic structure was investigated by analysis of molecular variance (AMOVA) and spatial analysis of molecular variance (SAMOVA). Phylogenetic relationships among haplotypes were constructed with maximum parsimony and haplotype network methods. Historical population expansion events were tested with pairwise mismatch distribution analysis and neutrality tests. Species potential range was deduced by ecological niche modelling (ENM). A low level of genetic diversity was detected at the population level. A high level of genetic differentiation and a significant phylogeographical structure were revealed. The mean divergence time of the haplotypes was approx. 1·33 million years ago. Recent range expansion in this species is suggested by a star-like haplotype network and by the results from the mismatch distribution analysis and neutrality tests. Climatic oscillations during the Pleistocene have had pronounced effects on the extant distribution of Tapiscia relative to the Last Glacial Maximum (LGM). Spatial patterns of molecular variation and ENM suggest that T. sinensis may have retreated in south-western and central China and colonized eastern China prior to the LGM. Multiple montane refugia for T. sinensis existing during the LGM are inferred in central and western China. The populations adjacent to or within these refugia of T. sinensis should be given high priority in the development of conservation policies and management strategies for this endangered species. © The Author 2015. Published by Oxford University Press on behalf of the Annals of Botany Company. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
NASA Astrophysics Data System (ADS)
Neumann, D. W.; Zagona, E. A.; Rajagopalan, B.
2005-12-01
Warm summer stream temperatures due to low flows and high air temperatures are a critical water quality problem in many western U.S. river basins because they impact threatened fish species' habitat. Releases from storage reservoirs and river diversions are typically driven by human demands such as irrigation, municipal and industrial uses and hydropower production. Historically, fish needs have not been formally incorporated in the operating procedures, which do not supply adequate flows for fish in the warmest, driest periods. One way to address this problem is for local and federal organizations to purchase water rights to be used to increase flows, hence decrease temperatures. A statistical model-predictive technique for efficient and effective use of a limited supply of fish water has been developed and incorporated in a Decision Support System (DSS) that can be used in an operations mode to effectively use water acquired to mitigate warm stream temperatures. The DSS is a rule-based system that uses the empirical, statistical predictive model to predict maximum daily stream temperatures based on flows that meet the non-fish operating criteria, and to compute reservoir releases of allocated fish water when predicted temperatures exceed fish habitat temperature targets with a user specified confidence of the temperature predictions. The empirical model is developed using a step-wise linear regression procedure to select significant predictors, and includes the computation of a prediction confidence interval to quantify the uncertainty of the prediction. The DSS also includes a strategy for managing a limited amount of water throughout the season based on degree-days in which temperatures are allowed to exceed the preferred targets for a limited number of days that can be tolerated by the fish. The DSS is demonstrated by an example application to the Truckee River near Reno, Nevada using historical flows from 1988 through 1994. 
In this case, the statistical model predicts maximum daily Truckee River stream temperatures in June, July, and August using predicted maximum daily air temperature and modeled average daily flow. The empirical relationship was created using a step-wise linear regression selection process using 1993 and 1994 data. The adjusted R2 value for this relationship is 0.91. The model is validated using historic data and demonstrated in a predictive mode with a prediction confidence interval to quantify the uncertainty. Results indicate that the DSS could substantially reduce the number of target temperature violations, i.e., stream temperatures exceeding the target temperature levels detrimental to fish habitat. The results show that large volumes of water are necessary to meet a temperature target with a high degree of certainty and violations may still occur if all of the stored water is depleted. A lower degree of certainty requires less water but there is a higher probability that the temperature targets will be exceeded. Addition of the rules that consider degree-days resulted in a reduction of the number of temperature violations without increasing the amount of water used. This work is described in detail in publications referenced in the URL below.
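The core of such an empirical predictive model, reduced to a single predictor for illustration, can be sketched as follows. The study's actual model is a step-wise multiple regression on predicted maximum air temperature and modeled flow; `fit_ols` and `predict_interval` are hypothetical names, and the fixed t-value is a rough stand-in for a proper Student-t quantile.

```python
import math

def fit_ols(x, y):
    """Ordinary least squares for y = a + b*x.

    Returns (a, b, se, n, xbar, sxx), where se is the residual standard
    error and the last three terms feed the prediction interval.
    """
    n = len(x)
    xbar = sum(x) / n
    ybar = sum(y) / n
    sxx = sum((xi - xbar) ** 2 for xi in x)
    sxy = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y))
    b = sxy / sxx
    a = ybar - b * xbar
    resid = [yi - (a + b * xi) for xi, yi in zip(x, y)]
    se = math.sqrt(sum(r * r for r in resid) / (n - 2))
    return a, b, se, n, xbar, sxx

def predict_interval(model, x0, t_crit=2.0):
    """Point prediction and approximate prediction interval at x0.

    t_crit should be the Student-t critical value for the desired
    confidence level and n-2 degrees of freedom; 2.0 is a rough 95%
    stand-in for moderately large n.
    """
    a, b, se, n, xbar, sxx = model
    yhat = a + b * x0
    half = t_crit * se * math.sqrt(1.0 + 1.0 / n + (x0 - xbar) ** 2 / sxx)
    return yhat, yhat - half, yhat + half
```

In the DSS described above, a release of fish water would be triggered when the upper bound of such an interval, at the user-specified confidence, exceeds the habitat temperature target.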
A case report of historical trauma among American Indians on a rural Northern Plains reservation.
Heckert, Wende; Eisenhauer, Christine
2014-01-01
This case report describes historical trauma on a rural American Indian reservation and outlines participatory action approaches for nurses. The prevalence of historical trauma often goes unnoticed by healthcare professionals because of its multifaceted nature and subsequent lack of provider understanding. Nurses accustomed to looking only for physical and psychosocial signs of trauma may not specifically understand how to align significant historical trauma events with prevention, education, and healthcare delivery. Nursing interventions developed through participatory action and directed at individual, family, and community levels of care are most effective in treating and preventing cumulative effects of historical trauma.
Deterministic Seismic Hazard Assessment of Center-East IRAN (55.5-58.5˚ E, 29-31˚ N)
NASA Astrophysics Data System (ADS)
Askari, M.; Neyestani, B.
2009-04-01
Deterministic seismic hazard assessment has been performed for Center-East Iran, covering Kerman and adjacent regions within 100 km. A catalogue of earthquakes in the region, including both historical and instrumental events, was compiled. A total of 25 potential seismic source zones in the region were delineated as area sources for seismic hazard assessment on the basis of geological, seismological and geophysical information. The minimum distance from each seismic source to the site (Kerman) and the maximum magnitude of each source were then determined. Finally, using the Abrahamson and Litehiser (1989) attenuation relationship, the maximum acceleration is estimated to be 0.38g, associated with movement of a blind fault whose maximum source magnitude is Ms = 5.5.
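The deterministic procedure outlined here (a maximum magnitude and minimum distance per source zone, an attenuation relationship, and selection of the controlling maximum) can be sketched generically. The `toy_attenuation` function below is a purely illustrative placeholder, not the Abrahamson and Litehiser (1989) relation, whose coefficients are not reproduced in this abstract.

```python
import math

def deterministic_pga(sources, attenuation):
    """Controlling peak ground acceleration over a set of source zones.

    sources: list of (max_magnitude, min_distance_km) pairs, one per
    delineated seismic source zone. attenuation(m, r) -> PGA (in g) is
    any ground-motion attenuation relationship. Returns the largest
    predicted PGA and the source zone that controls it.
    """
    best = max(sources, key=lambda s: attenuation(s[0], s[1]))
    return attenuation(best[0], best[1]), best

def toy_attenuation(m, r):
    """Purely illustrative placeholder, NOT a published relation."""
    return math.exp(0.7 * m - 1.5 * math.log(r + 10.0) - 2.0)
```

Note that the controlling source need not be the largest-magnitude one: a moderate event on a nearby blind fault can dominate a larger but more distant source, as in the Kerman result.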
Minear, J. Toby; Wright, Scott A.
2013-01-01
The Merced River in the popular and picturesque eastern-most part of Yosemite Valley in Yosemite National Park, California, USA, has been extensively altered since the park was first conceived in 1864. Historical human trampling of streambanks has been suggested as the cause of substantial increases in stream width, and the construction of undersized stone bridges in the 1920s has been suggested as the major factor leading to an increase in overbank flooding due to deposition of bars and islands between the bridges. In response, the National Park Service at Yosemite National Park (YNP) requested a study of the hydraulic and geomorphic conditions affecting the most-heavily influenced part of the river, a 2.4-km reach in eastern Yosemite Valley extending from above the Tenaya Creek and Merced River confluence to below Housekeeping Bridge. As part of the study, present-day conditions were compared to historical conditions and several possible planning scenarios were investigated, including the removal of an elevated road berm and the removal of three undersized historic stone bridges identified by YNP as potential problems: Sugar Pine, Ahwahnee and Stoneman Bridges. This Open-File Report will be superseded at a later date by a Scientific Investigations Report. A two-dimensional hydrodynamic model, the USGS FaSTMECH (Flow and Sediment Transport with Morphological Evolution of Channels) model, within the USGS International River Interface Cooperative (iRIC) model framework, was used to compare the scenarios over a range of discharges with annual exceedance probabilities of 50-, 20-, 10-, and 5- percent. A variety of topographic and hydraulic data sources were used to create the input conditions to the hydrodynamic model, including aerial LiDAR (Light Detection And Ranging), ground-based LiDAR, total station survey data, and grain size data from pebble counts. 
A digitized version of a historical topographic map created by the USGS in 1919, combined with estimates of grain size, was used to simulate historical conditions, and the planning scenarios were developed by altering the present-day topography. Roughness was estimated independently of measured water-surface elevations by using the mapped grain-size data and the Keulegan relation of grain size to drag coefficient. The FaSTMECH hydrodynamic model was evaluated against measured water levels by using a 130.9 m3 s-1 flow (approximately a 33-percent annual exceedance probability flood) with 36 water-surface elevations measured by YNP personnel on June 8, 2010. This evaluation run had a root mean square error of 0.21 m between the simulated- and observed water-surface elevations (less than 10 percent of depth), though the observed water-surface elevations had relatively high variation due to the strong diurnal stage changes over the course of the 4.4-hour collection period, during which discharge varied by about 15 percent. There are presently no velocity data with which to test the model. A geomorphic assessment was performed that consisted of an estimate of the magnitude and frequency of bedload and suspended-sediment transport at “Tenaya Bar”, an important gravel-cobble bar located near the upstream end of the study site that determines the amount of flow across the floodplain at the Sugar Pine – Ahwahnee bend. An analysis of select repeat cross-sections collected by YNP since the late 1980s was done to investigate changes in channel cross-sectional area near the Tenaya Bar site. The results of the FaSTMECH models indicate that the maximum velocities in the present-day channel within the study reach are associated with Stoneman and Sugar Pine Bridges, at close to 3.0 m s-1 for the 5-percent annual exceedance probability flood. 
The modeled maximum velocities at Ahwahnee Bridge are comparatively low, at between 1.5 and 2.0 m s-1, most likely due to the bridge's orientation parallel to down-valley floodplain flows. The results of the FaSTMECH models for the bridge removal scenarios indicate a reduction in average velocity at the bridge sites for the range of flows by approximately 23-38 percent (Sugar Pine Bridge), 32-42 percent (Ahwahnee Bridge), and 33-39 percent (Stoneman Bridge), though a side channel of concern to YNP management did not appear to be substantially affected by the removal scenarios. In comparison to the historical data, the FaSTMECH results suggest that flows for present-day conditions do not inundate the floodplain until between the 50- and 20-percent annual exceedance probability flood, whereas historically, a large portion of the floodplain was inundated during the 50-percent annual exceedance probability flood. Modeled maximum velocities in the present-day channel commonly exceed 2.0 m s-1, whereas with the historical scenario, modeled maximum in-channel velocities rarely exceeded 2.0 m s-1. The geomorphic analysis of the magnitude-frequency of bedload and suspended-sediment transport suggests that at the important Tenaya Bar site, the majority of bed sediment is mobile during most snowmelt-dominated floods. In contrast to sediment transport capacity, the analysis of repeat cross-sections suggests that bedload sediment supply into the eastern Yosemite Valley may be quite different between rain-on-snow floods and snowmelt-dominated floods, potentially with most sediment supply occurring during rain-on-snow floods, such as the 1997 flood. In contrast, the magnitude-frequency analysis of bedload and suspended-sediment transport suggests that long-term bedload sediment transport is likely dominated by snowmelt floods, and suspended-sediment transport is relatively low compared to bedload transport. 
Obtaining measured velocity data throughout the study reach would aid in model calibration, and thus would improve confidence in model results. Improved confidence in the model velocity results would allow additional substantial analyses of reach-scale effects of the planning scenarios and would enable the development of geomorphic models to evaluate the long-term geomorphic responses of the site. In addition, the collection of watershed sediment-supply data, about which little is presently known, would give planners helpful tools to plan restoration scenarios for this nationally important river.
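The roughness estimate in the study above rests on a Keulegan-type relation between grain size and drag coefficient. One common form of that relation can be sketched as follows; the specific form and the `ks_factor` multiplier used in the actual study are not given in the text, so both are assumptions here.

```python
import math

KAPPA = 0.41  # von Karman constant

def drag_coefficient(depth, d84, ks_factor=3.0):
    """Drag coefficient from relative roughness via a Keulegan-type law.

    One common form for depth-averaged flow is
        U/u* = (1/kappa) * ln(11 * h / ks),
    with roughness height ks taken as a multiple of a coarse grain
    percentile (ks_factor * d84 here). Then
        Cd = (u*/U)^2 = (kappa / ln(11 * h / ks))^2.
    The ks_factor multiplier is an assumption; published values vary.
    """
    ks = ks_factor * d84
    return (KAPPA / math.log(11.0 * depth / ks)) ** 2
```

The appeal of this approach, as in the report, is that roughness follows from mapped grain-size data alone and need not be tuned against measured water-surface elevations.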
History of the Combat Zone Tax Exclusion
2011-09-01
and Accounting Service (DFAS), Military Pay Tables, 1943 and 1945. Note: Minimum and maximum pay values vary within grades due to a member’s years of...Horowitz, Task Leader Log: H 11-001279 Approved for public release; distribution is unlimited. The Institute for Defense Analyses is a non-profit ...instrumental to the functioning of a fair tax system for members of the armed services. Despite its historical ties to wartime finance, the income tax
The Floor in the Solar Wind Magnetic Field Revisited
2012-05-07
index of geomagnetic activity (Svalgaard and Cliver, 2005). This empirical/historical evidence for a lower limit or floor in B was substantiated by...with the model of Fisk and Schwadron (2001) for the reversal of the polar magnetic fields at solar maximum. The Fisk and Schwadron model, based on the...interdiurnal variability [IDV] index of geomagnetic activity (Svalgaard and Cliver, 2005, 2010). DM, for minima preceding cycles 22 – 24, is the absolute
Temporal and Spatial Acoustical Factors for Listeners in the Boxes of Historical Opera Theatres
NASA Astrophysics Data System (ADS)
Sakai, H.; Ando, Y.; Prodi, N.; Pompoli, R.
2002-11-01
Acoustical measurements were conducted in a horseshoe-shaped opera house to clarify the acoustical quality of the sound field for listeners inside the boxes of a historic opera house. In order to investigate the effects of multiple reflections between the walls inside a box and scattering by the heads of people, the location of the receiver and the number of persons in the box were varied. In each configuration, four orthogonal factors and supplementary factors were derived as temporal and spatial factors by analysis of binaural impulse responses. Each factor is compared to that at a typical location in the stalls of the same theatre. An omni-directional sound source was located on the stage to emulate a singer, or in the orchestra pit to reproduce the location of the musicians. Thus, in this paper, temporal and spatial factors in relation to subjective evaluation are characterized against changes in the listening conditions inside a box, and procedures for improvement and design methods for boxes are proposed. The main conclusions reached are as follows. As strong reflections from the lateral walls of a hall are screened by the front or side walls of a box for a receiver seated deeper in the box, the maximum listening level (LL) in the boxes was observed at the front of the box, and the maximum range of LL values for each box was found to be 5 dB. Concerning the initial time delay gap (Δt1), a more uniform listening environment was obtained in boxes further back in the theatre than in those closer to the stage. The subsequent reverberation time (Tsub) lengthens for boxes closer to the stage due to the stage house with its huge volume, and a peak is observed at 1 kHz. For the box at the back, Tsub monotonically decreases with frequency in the same way as in the stalls, and moreover, its values approach those in the stalls.
As the contribution of multiple reflections relatively increases for a receiver deeper in the box, the IACC in such positions decreases in comparison with that seen at the front of the box.
1. Historic American Buildings Survey E. H. Pickering, Photographer October 1936 EXTERIOR VIEW SHOWING SIMILAR BRICK HOUSE ON ADJOINING PROPERTY - 3850 West Chapel Road (Brick House Number 2), Level, Harford County, MD
1. Historic American Buildings Survey Ian McLaughlin Photographer October 27, 1936 DETAIL OF DOORWAY IN WING (FROM NORTHEAST 10:15 a.m.) oldest part - "Level Green", Charles Town, Jefferson County, WV
Assessing the Potential for Inland Migration of a Northeastern Salt Marsh
NASA Astrophysics Data System (ADS)
Farron, S.; FitzGerald, D.; Hughes, Z. J.
2017-12-01
It is often assumed that as sea level rises, salt marshes will expand inland. If the slope of the upland is relatively flat and sufficient sediment is available, marshes should be able to spread horizontally and grow vertically in order to maintain their areal extent. However, in cases where marshes are backed by steeper slopes, or sediment supply is limited, rising sea level will produce minimal gains along the landward edge insufficient to offset potential losses along the seaward edge. This study uses future sea level rise scenarios to project areal losses for the Great Marsh in Massachusetts, the largest continuous salt marsh in New England. Land area covered by salt marsh is defined by surface elevation. Annual sediment input to the system is estimated based on the areal extent of high and low marsh, historical accretion rates for each, and known organic/inorganic ratios. Unlike other studies, sediment availability is considered to be finite, and future accretion rates are limited based on the assumption that the system is presently receiving the maximum sediment input available. The Great Marsh is dominated by high marsh; as sea level rises, it will convert to low marsh, vastly altering the ecological and sedimentological dynamics of the system. If it is assumed that former high marsh areas will build vertically at the increased rate associated with low marsh, then much of the total marsh area will be maintained. However, this may be an unrealistic assumption due to the low levels of suspended sediment within the Great Marsh system. Modeling the evolution of the Great Marsh by assuming that the current accretion rate is the maximum possible for this system reveals much greater losses than models assuming an unlimited sediment supply would predict (17% less marsh by 2115). In addition, uplands surrounding the Great Marsh have been shaped by glaciation, leaving numerous drumlins and other glacial landforms. 
Compared to the flat backbarrier, the surrounding hills offer little opportunity for expansion. Modeling results suggest that sea level rise over the next century will convert 12 km2 of marsh to open water, but only 9 km2 of new marsh will be formed through uplands inundation and sedimentation. These findings suggest that sea level rise presents a particular threat to the Great Marsh, and marshes like it.
NASA Technical Reports Server (NTRS)
Spruce, Joseph P.; Hargrove, William; Gasser, Gerald; Smoot, James; Kuper, Philip D.
2012-01-01
This presentation reviews the development, integration, and testing of the Near Real Time (NRT) MODIS forest % maximum NDVI change products resident in the USDA Forest Service (USFS) ForWarn system. ForWarn is an Early Warning System (EWS) tool for detecting and tracking regionally evident forest change, and includes the U.S. Forest Change Assessment Viewer (FCAV), a publicly available online geospatial data viewer for visualizing and assessing the context of this apparent forest change. NASA Stennis Space Center (SSC) is working collaboratively with the USFS, ORNL, and USGS to contribute MODIS forest change products to ForWarn. These change products compare current NDVI, derived from expedited eMODIS data, to historical NDVI products derived from MODIS MOD13 data. A new suite of forest change products is computed every 8 days and posted to the ForWarn system; this includes three different forest change products computed against three different historical baselines: 1) the previous year; 2) the previous three years; and 3) all previous years in the MODIS record going back to 2000. The change product inputs are maximum-value NDVI composites built across a 24-day interval and refreshed every 8 days, so that the resulting images for the conterminous U.S. are predominantly cloud-free yet still retain temporally relevant, fresh information on changes in forest canopy greenness. These forest change products are computed at the native nominal resolution of the input reflectance bands, 231.66 meters, which equates to approximately 5.4 hectares (13.3 acres) per pixel. The Time Series Product Tool, a MATLAB-based software package developed at NASA SSC, is used to temporally process, fuse, reduce noise in, interpolate data voids in, and re-aggregate the historical NDVI into 24-day composites, and custom MATLAB scripts are then used to temporally process the eMODIS NDVIs so that they are in sync with the historical NDVI products.
Prior to posting, an in-house snow mask classification product is computed for the current compositing period and integrated into the change images to account for snow-related NDVI drops. This supplemental snow classification product is needed because the otherwise available QA cloud/snow masks typically underestimate snow cover. MODIS true and false color composites are also computed from eMODIS reflectance data, and the true color RGBs are posted on ForWarn's FCAV; these are used for assessing occasional apparent quality issues in the change products caused by residual unmasked cloud cover. New forest change products are posted with typical latencies of 1-2 days after the last input eMODIS data collection date for a given 24-day compositing period.
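The change products above compare a current composite against the maximum NDVI in a historical baseline. As a rough per-pixel sketch of that comparison (function name, data layout, and values are illustrative assumptions; the operational ForWarn algorithm and its masking steps are more involved):

```python
def percent_max_ndvi_change(current, baseline_stack):
    """Percent departure of the current 24-day maximum-value NDVI
    composite from the maximum NDVI over a stack of historical
    composites, computed pixel by pixel along one row.

    current: list of current-composite NDVI values (one per pixel)
    baseline_stack: list of historical composites, one list per year
    """
    baseline_max = [max(values) for values in zip(*baseline_stack)]
    # Negative values flag a drop in canopy greenness (possible disturbance).
    return [100.0 * (c - b) / b for c, b in zip(current, baseline_max)]

# Toy two-pixel row: the first pixel dropped from a baseline max of 0.8
# to 0.6 (a 25 percent decline); the second pixel is unchanged.
current = [0.6, 0.8]
baseline = [[0.8, 0.7], [0.75, 0.8]]
change = percent_max_ndvi_change(current, baseline)  # about [-25.0, 0.0]
```

Swapping in the previous-year, three-year, or all-years baseline simply changes which composites go into `baseline_stack`.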
ERIC Educational Resources Information Center
Hall, Angela Renee
2011-01-01
This investigative research focuses on the level of readiness of Science, Technology, Engineering, and Mathematics (STEM) students entering Historically Black Colleges and Universities (HBCU) in the college Calculus sequence. Calculus is a fundamental course for STEM courses. The level of readiness of the students for Calculus can very well play a…
NASA Astrophysics Data System (ADS)
Clinton, J.
2017-12-01
Much of Hawaii's history is recorded in archeological sites. Researchers and cultural practitioners have been studying and reconstructing significant archeological sites for generations. Climate change, and more specifically, sea level rise may threaten these sites. Our research records current sea levels and then projects possible consequences to these cultural monuments due to sea level rise. In this mixed methods study, research scientists, cultural practitioners, and secondary students use plane-table mapping techniques to create maps of coastlines and historic sites. Students compare historical records to these maps, analyze current sea level rise trends, and calculate future sea levels. They also gather data through interviews with community experts and kupuna (elders). If climate change continues at projected rates, some historic sites will be in danger of negative impact due to sea level rise. Knowing projected sea levels at specific sites allows for preventative action and contributes to raised awareness of the impacts of climate change to the Hawaiian Islands. Students will share results with the community and governmental agencies in hopes of inspiring action to minimize climate change. It will take collaboration between scientists and cultural communities to inspire future action on climate change.
Code of Federal Regulations, 2013 CFR
2013-04-01
...) cyclohexanone 1,2-Bis(monobromoacetoxy) ethane [CA Reg. No. 3785-34-0] At a maximum level of 0.10 pound per ton... Methylenebisbutanethiolsulfonate Methylenebisthiocyanate 2-Nitrobutyl bromoacetate [CA Reg. No. 32815-96-6] At a maximum level of 0...)phosphonium sulfate (CAS Reg. No. 55566-30-8) Maximum use level of 84 mg/kg in the pulp slurry. The additive...
Code of Federal Regulations, 2012 CFR
2012-04-01
...) cyclohexanone 1,2-Bis(monobromoacetoxy) ethane [CA Reg. No. 3785-34-0] At a maximum level of 0.10 pound per ton... Methylenebisbutanethiolsulfonate Methylenebisthiocyanate 2-Nitrobutyl bromoacetate [CA Reg. No. 32815-96-6] At a maximum level of 0...)phosphonium sulfate (CAS Reg. No. 55566-30-8) Maximum use level of 84 mg/kg in the pulp slurry. The additive...
Wheeler, Russell L.
2014-01-01
Computation of probabilistic earthquake hazard requires an estimate of Mmax: the moment magnitude of the largest earthquake that is thought to be possible within a specified geographic region. The region specified in this report is the Central and Eastern United States and adjacent Canada. Parts A and B of this report describe the construction of a global catalog of moderate to large earthquakes that occurred worldwide in tectonic analogs of the Central and Eastern United States. Examination of histograms of the magnitudes of these earthquakes allows estimation of Central and Eastern United States Mmax. The catalog and Mmax estimates derived from it are used in the 2014 edition of the U.S. Geological Survey national seismic-hazard maps. Part A deals with prehistoric earthquakes, and this part deals with historical events.
High resolution projections for the western Iberian coastal low level jet in a changing climate
NASA Astrophysics Data System (ADS)
Soares, Pedro M. M.; Lima, Daniela C. A.; Cardoso, Rita M.; Semedo, Alvaro
2017-09-01
The Iberian coastal low-level jet (CLLJ) is one of the least studied boundary layer wind jet features in the Eastern Boundary Currents Systems (EBCS). These regions are amongst the most productive ocean ecosystems, where the atmosphere-land-ocean feedbacks, which include marine boundary layer clouds, coastal jets, upwelling, and inland soil temperature and moisture, play an important role in defining the regional climate along the sub-tropical mid-latitude western coastal areas. Recently, the present-climate western Iberian CLLJ properties were extensively described using a high resolution regional climate hindcast simulation. A summer maximum frequency of occurrence above 30% was found, with mean maximum wind speeds around 15 m/s between 300 and 400 m height (at the jet core). The climate change impact on the EBCS has been studied since the 1990s; nevertheless, some lack of consensus still persists regarding the evolution of upwelling and other components of the climate system in these areas. Recently, however, some authors have shown that changes are to be expected in the timing, intensity, and spatial homogeneity of coastal upwelling in response to future warming, especially at higher latitudes, namely in Iberia and the Canaries. In this study, the first climate change assessment of the western Iberian CLLJ, using a high resolution (9 km) regional climate simulation, is presented. The properties of the CLLJ are studied and compared using two 30-year simulations: one historical simulation for the 1971-2000 period, and one future-climate simulation under the RCP8.5 scenario for the 2071-2100 period.
Robust and consistent changes are found: (1) the hourly frequency of occurrence of the CLLJ is expected to increase in summer along the western Iberian coast, from mean maximum values of around 35% to approximately 50%; (2) the relative increase in CLLJ frequency of occurrence is higher in the north off western Iberia; (3) the CLLJ covers larger areas both latitudinally and longitudinally; (4) the CLLJ season lengthens, extending into May and September; and (5) there are shifts toward higher wind speeds and toward a jet core at greater heights.
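The abstract does not state the jet-detection criteria used to compute these frequencies of occurrence. A common style of criterion for a coastal low-level jet (a wind maximum below roughly 1 km that weakens sufficiently aloft) can be sketched as follows; the thresholds and the profile values are illustrative assumptions, not the study's actual algorithm:

```python
def detect_cllj(heights_m, wind_speed_ms, max_core_height=1000.0, falloff=0.8):
    """Flag a coastal low-level jet in a single vertical wind profile.

    A jet is reported when the wind maximum sits at or below
    `max_core_height` and the flow weakens to at most `falloff` times
    the core speed somewhere above the core.
    Returns (is_jet, core_height, core_speed).
    """
    core_idx = max(
        (i for i, h in enumerate(heights_m) if h <= max_core_height),
        key=lambda i: wind_speed_ms[i],
    )
    core_speed = wind_speed_ms[core_idx]
    weakens_above = any(
        s <= falloff * core_speed for s in wind_speed_ms[core_idx + 1:]
    )
    return weakens_above, heights_m[core_idx], core_speed

# Hypothetical profile with a ~15 m/s core near 350 m, weakening aloft:
z = [50, 150, 350, 600, 1000, 1500, 2000]
v = [9.0, 13.0, 15.0, 13.5, 11.0, 10.5, 10.0]
is_jet, core_height, core_speed = detect_cllj(z, v)
```

Applying such a test to every hourly model profile and counting positives per grid point yields a frequency-of-occurrence field like the one described above.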
ERIC Educational Resources Information Center
Milanovic, Vesna D.; Trivic, Dragica D.
2017-01-01
The aim of this research was to explore the effects of two approaches, designated as the historical and the contemporary one, on the level of students' understanding of the properties and the practical use of gases. Our research hypothesis was that the historical context of the discovery of gases and the study of their properties would deepen…
Ronald J. Glass; Nancy Gustke
1987-01-01
Although the number of visitors to New Hampshire state-owned historic sites has declined during the last 20 years, the results of a survey indicate that most respondents are satisfied with the level of services provided. The majority of visitors indicated that historic sites need not be "self-supporting"; they were willing to pay an entry fee and did not...
Modeled future peak streamflows in four coastal Maine rivers
Hodgkins, Glenn A.; Dudley, Robert W.
2013-01-01
To safely and economically design bridges and culverts, it is necessary to compute the magnitude of peak streamflows that have specified annual exceedance probabilities (AEPs). Annual precipitation and air temperature in the northeastern United States are, in general, projected to increase during the 21st century. It is therefore important for engineers and resource managers to understand how peak flows may change in the future. This report, prepared in cooperation with the Maine Department of Transportation (MaineDOT), presents modeled changes in peak flows at four basins in coastal Maine on the basis of projected changes in air temperature and precipitation. To estimate future peak streamflows at the four basins in this study, historical values for climate (temperature and precipitation) in the basins were adjusted by different amounts and input to a hydrologic model of each study basin. To encompass the projected changes in climate in coastal Maine by the end of the 21st century, air temperatures were adjusted by four different amounts, from -3.6 degrees Fahrenheit (°F) (-2 degrees Celsius (°C)) to +10.8 °F (+6 °C) relative to observed temperatures. Precipitation was adjusted by three different percentage values, from -15 percent to +30 percent of observed precipitation. The resulting 20 combinations of temperature and precipitation changes (including the no-change scenarios) were input to Precipitation-Runoff Modeling System (PRMS) watershed models, and annual daily maximum peak flows were calculated for each combination. Modeled peak flows from the adjusted changes in temperature and precipitation were compared to unadjusted (historical) modeled peak flows. Annual daily maximum peak flows increase or decrease, depending on whether temperature or precipitation is adjusted; increases in air temperature (with no change in precipitation) lead to decreases in peak flows, whereas increases in precipitation (with no change in temperature) lead to increases in peak flows.
As the magnitude of air temperatures increase in the four basins, peak flows decrease by larger amounts. If precipitation is held constant (no change from historical values), 17 to 26 percent decreases in peak flow occur at the four basins when temperature is increased by 7.2°F. If temperature is held constant, 26 to 38 percent increases in peak flow result from a 15-percent increase in precipitation. The largest decreases in peak flows at the four basins result from 15-percent decreases in precipitation combined with temperature increases of 10.8°F. The largest increases in peak flows generally result from 30-percent increases in precipitation combined with 3.6 °F decreases in temperatures. In many cases when temperature and precipitation both increase, small increases or decreases in annual daily maximum peak flows result. For likely changes projected for the northeastern United States for the middle of the 21st century (temperature increase of 3.6 °F and precipitation increases of 0 to 15 percent), peak-flow changes at the four coastal Maine basins in this study are modeled to be evenly distributed between increases and decreases of less than 25 percent. Peak flows with 50-percent and 1-percent AEPs (equivalent to 2-year and 100-year recurrence interval peak flows, respectively) were calculated for the four basins in the study using the PRMS-modeled annual daily maximum peak flows. Modeled peak flows with 50-percent and 1-percent AEPs with adjusted temperatures and precipitation were compared to unadjusted (historical) modeled values. Changes in peak flows with 50-percent AEPs are similar to changes in annual daily maximum peak flow; changes in peak flows with 1-percent AEPs are similar in pattern to changes in annual daily maximum peak flow, but some of the changes associated with increasing precipitation are much larger than changes in annual daily maximum peak flow. 
Substantial decreases in maximum annual winter snowpack water equivalent are modeled to occur with increasing air temperatures at the four basins in the study. (Snowpack is the snow on the ground that accumulates during a winter, and water equivalent is the amount of water in a snowpack if it were melted.) The decrease in modeled peak flows with increasing air temperature, given no change in precipitation amount, is likely caused by these decreases in winter snowpack and resulting decreases in snowmelt runoff. This Scientific Investigations Report, prepared in cooperation with the Maine Department of Transportation, presents a summary of modeled changes in peak flows at four basins in coastal Maine on the basis of projected changes in air temperature and precipitation. The full Fact Sheet (Hodgkins and Dudley, 2013) is available at http://pubs.usgs.gov/fs/2013/3021/.
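The AEP terminology used throughout this record maps directly onto recurrence intervals, as the report's own equivalences (50-percent AEP is the 2-year flood, 1-percent AEP the 100-year flood) indicate. A short sketch of the standard conversions; the design-life risk formula is a textbook result, not a computation from the report:

```python
def recurrence_interval_years(aep_percent):
    """Recurrence interval T (years) for a flood with annual exceedance
    probability (AEP) given in percent: T = 100 / AEP."""
    return 100.0 / aep_percent

def exceedance_risk(aep_percent, n_years):
    """Probability of at least one exceedance during n_years,
    assuming independent years: 1 - (1 - p)^n."""
    p = aep_percent / 100.0
    return 1.0 - (1.0 - p) ** n_years

two_year = recurrence_interval_years(50)     # 50-percent AEP -> 2-year flood
hundred_year = recurrence_interval_years(1)  # 1-percent AEP -> 100-year flood
# Chance the "100-year" flood occurs at least once in a 100-year design life:
risk = exceedance_risk(1, 100)               # about 0.63
```

The last figure is why bridge design works with exceedance probabilities rather than the intuitive reading of "100-year flood": such a flood is more likely than not to occur within a structure's 100-year life.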
NASA Astrophysics Data System (ADS)
Dudley, R. W.; Hodgkins, G. A.; Nielsen, M. G.; Qi, S. L.
2018-07-01
A number of previous studies have examined relations between groundwater levels and hydrologic and meteorological variables over parts of the glacial aquifer system, but systematic analyses across the entire U.S. glacial aquifer system are lacking. We tested correlations between monthly groundwater levels measured at 1043 wells in the U.S. glacial aquifer system considered to be minimally influenced by human disturbance and selected hydrologic and meteorological variables with the goal of extending historical groundwater records where there were strong correlations. Groundwater levels in the East region correlated most strongly with short-term (1 and 3 month) averages of hydrologic and meteorological variables, while those in the Central and West Central regions yielded stronger correlations with hydrologic and meteorological variables averaged over longer time intervals (6-12 months). Variables strongly correlated with high and low annual groundwater levels were identified as candidate records for use in statistical linear models as a means to fill in and extend historical high and low groundwater levels respectively. Overall, 37.4% of study wells meeting data criteria had successful models for high and (or) low groundwater levels; these wells shared characteristics of relatively higher local precipitation, higher local land-surface slope, lower amounts of clay within the surficial sediments, and higher base-flow index. Streamflow and base flow served as explanatory variables in about two thirds of both high- and low-groundwater-level models in all three regions, and generally yielded more and better models compared to precipitation and Palmer Drought Severity Index. The use of variables such as streamflow with substantially longer and more complete records than those of groundwater wells provide a means for placing contemporary groundwater levels in a longer historical context and can support site-specific analyses such as groundwater modeling.
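The statistical linear models described above use long streamflow or base-flow records as explanatory variables to fill in and extend shorter groundwater records. A minimal ordinary-least-squares sketch of that record-extension idea; all variable names and data values here are hypothetical, and the study's actual models involve more careful variable selection and validation:

```python
def fit_line(x, y):
    """Ordinary least squares for y = slope * x + intercept,
    fitted over the period when both records overlap."""
    n = len(x)
    mean_x = sum(x) / n
    mean_y = sum(y) / n
    slope = (sum((xi - mean_x) * (yi - mean_y) for xi, yi in zip(x, y))
             / sum((xi - mean_x) ** 2 for xi in x))
    return slope, mean_y - slope * mean_x

# Hypothetical annual low groundwater levels (feet below land surface)
# and concurrent base-flow index values over the overlap period:
gw_level = [2.1, 2.5, 1.8, 2.9, 2.3]
baseflow = [0.60, 0.48, 0.72, 0.40, 0.55]
slope, intercept = fit_line(baseflow, gw_level)

# Estimate levels for years before the well was monitored, using the
# longer base-flow record (values again hypothetical):
estimated = [slope * x + intercept for x in [0.66, 0.44]]
```

The fitted slope is negative here: higher base flow corresponds to a shallower depth to water, which is the physical relationship that makes streamflow and base flow useful explanatory variables.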
Reed, Thomas B.
2003-01-01
A digital model of the Mississippi River Valley alluvial aquifer in eastern Arkansas was used to simulate ground-water flow for the period from 1918 to 2049. The model results were used to evaluate effects on water levels caused by demand for ground water from the alluvial aquifer, which has increased steadily for the last 40 years. The model results showed that water currently (1998) is being withdrawn from the aquifer at rates greater than can be sustained over the long term. The saturated thickness of the alluvial aquifer has been reduced in some areas, resulting in dry wells, degraded water quality, decreased water availability, increased pumping costs, and lower well yields. The model simulated the aquifer from a line just north of the Arkansas-Missouri border to south of the Arkansas River, and on the east from the Mississippi River westward to the less permeable geologic units of Paleozoic age. The model consists of 2 layers and a grid of 184 rows by 156 columns, and comprises 14,118 active cells, each measuring 1 mile on a side. It simulates the period from 1918 to 1998, along with further periods to 2049 that test different pumping scenarios. Model flux boundary conditions were specified for rivers, general head boundaries along parts of the western side of the model and parts of Crowley's Ridge, and a specified head boundary across the aquifer further north in Missouri. Model calibration was conducted against observed water levels for the years 1972, 1982, 1992, and 1998. The average absolute residual was 4.69 feet and the root-mean-square error was 6.04 feet for the hydraulic head observations for 1998. Hydraulic-conductivity values obtained during the calibration process were 230 feet per day for the upper layer and ranged from 230 to 730 feet per day for the lower layer, with a maximum mean of 480 feet per day for the combined aquifer.
Specific yield values were 0.30 throughout the model, and specific storage values were 0.000001 inverse feet throughout the model. Areally specified recharge rates ranged from 0 to about 30 inches, and total recharge increased from 1972 to 1998 by a factor of about four. Water levels resulting from projected ground-water withdrawals were simulated using the calibrated model. Simulations represented a period of 50 years into the future in three scenarios: unchanged pumpage, pumpage increased by historic trends, or pumpage increased by historic trends except in two areas of the Grand Prairie. If pumping remains at 1997 rates, extreme water-level declines result (areas where model cells have gone dry or where the water level in the aquifer is equal to or less than the original saturated thickness, assuming confined conditions in the aquifer everywhere in the formation in predevelopment times) in two areas of the aquifer (one in the Grand Prairie area between the Arkansas and White Rivers, and the other west of Crowley's Ridge along the Cache River), with about 400 square miles going dry. Increasing the pumping rates to those projected from historic data led to increased extreme water-level declines in both areas, with about 1,300 square miles going dry. Declines in both scenarios generally occurred most rapidly between 2009 and 2019. Reducing the pumping rates to 90 percent of the projected historic rates in areas between the Arkansas and White Rivers relating to two diversion projects of the U.S. Army Corps of Engineers and other agencies did little to decrease the extreme water-level declines. However, these pumpage reductions are small (amounting to about 16 percent of the reductions that could result from implementation of these diversion projects).
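The calibration statistics quoted above (average absolute residual of 4.69 feet, root-mean-square error of 6.04 feet) follow from the standard definitions. A small sketch with hypothetical head values, not the report's data:

```python
import math

def calibration_stats(observed, simulated):
    """Average absolute residual (MAE) and root-mean-square error (RMSE)
    between observed and simulated hydraulic heads; units follow the inputs."""
    residuals = [o - s for o, s in zip(observed, simulated)]
    n = len(residuals)
    mae = sum(abs(r) for r in residuals) / n
    rmse = math.sqrt(sum(r * r for r in residuals) / n)
    return mae, rmse

# Hypothetical head observations vs. model output, in feet:
obs = [120.0, 118.5, 123.2, 119.9]
sim = [121.0, 117.0, 122.0, 120.4]
mae, rmse = calibration_stats(obs, sim)
```

RMSE is never smaller than the average absolute residual and penalizes large misfits more heavily, which is why calibration reports typically quote both.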
Lenhard, R J; Rayner, J L; Davis, G B
2017-10-01
A model is presented to account for elevation-dependent residual and entrapped LNAPL above and below, respectively, the water-saturated zone when predicting subsurface LNAPL specific volume (fluid volume per unit area) and transmissivity from current and historic fluid levels in wells. Physically based free, residual, and entrapped LNAPL saturation distributions and LNAPL relative permeabilities are integrated over a vertical slice of the subsurface to yield the LNAPL specific volumes and transmissivity. The model accounts for the effects of fluctuating water tables. Hypothetical predictions are given for different porous media (loamy sand and clay loam), fluid levels in wells, and historic water-table fluctuations. It is shown that the elevation range from the LNAPL-water interface in a well to the upper elevation where the free LNAPL saturation approaches zero is the same for a given LNAPL thickness in a well regardless of porous media type. Further, the LNAPL transmissivity is largely dependent on current fluid levels in wells, not historic levels. Results from the model can aid in developing successful LNAPL remediation strategies and in improving the design and operation of remedial activities. Results of the model can also aid in assessing the LNAPL recovery technology endpoint, based on the predicted transmissivity.
NASA Astrophysics Data System (ADS)
Elmrabet, T.
2009-04-01
Seismic risk assessment in countries with moderate seismic activity is a delicate undertaking as well as a high-priority one in land use management and urban rehabilitation. The credibility of this assessment rests mainly on the effects of destructive earthquakes, whose maximum intensities become a reference for determining the level of protection. This knowledge requires a detailed study of the damage caused by major earthquakes over a long period of time. In this context, broad-scale research on past earthquakes started in the Geophysics Laboratory, now called the National Geophysics Institute (ING), within the National Center for Scientific and Technical Research in Rabat, Morocco. It supported a thorough study that included critical historical texts scattered across various local and foreign archives. The pursuit of this study, under the project called PAMERAR, launched in the early 1980s with the aim of reducing earthquake risk in the Arab region, led us to trace the origin of these texts in various public and private libraries, as in Spain and France. The scientific interest in the historical seismicity of Morocco was highlighted at the beginning of the twentieth century in several reports and catalogs, but with several gaps in time and space; further research was thus necessary, especially to explore documents from original sources. This interest in historical archives falls within the scope of understanding natural hazards over long periods in order to evaluate the recurrence of earthquakes. This study leads to significant results covering 11 centuries, from AD 846 to the present. We continued to study the effects of recent earthquakes on humans and built heritage, and this work has led to a study of the traces of local seismic culture. Because earthquakes do not recognize the limits of political geography, since 1993 we have extended the research to include the entire Maghreb region.
We then used the available database to study the effects of earthquakes on humans and their social environment. The historical seismic catalog compiled for the Maghreb region is relatively homogeneous and complete, covering the past twelve centuries. The seismic events are plotted on an appropriate map of the Maghreb. We began the study of the acquired seismic culture by identifying clues and traces left in the construction methods and building measures applied to old buildings, which enabled them to withstand the strong earthquakes documented in the Moroccan catalogs in particular and in the western Mediterranean in general.
Reinhold, Lilli; Reinhardt, Katja
2011-05-01
In this presentation, the mycotoxin levels, as analysed by the analytical centre for mycotoxin surveillance of the state food laboratory (LAVES Braunschweig), for approximately 500 food samples are reported. The samples were collected in the year 2009 at retail in the German federal state of Lower Saxony. Aflatoxins and ochratoxin A were analysed in dried fruits, spices, cereals and tree nuts. Ochratoxin A was detected in all samples of dried vine fruits, at levels up to 8.1 μg/kg. Aflatoxins and ochratoxin A were also found in nutmeg and curry powder: the maximum regulatory levels for aflatoxins were exceeded in 25% of the nutmeg samples. Nearly all samples of basmati rice contained aflatoxins, although at levels below the maximum regulatory level in all but one sample. Aflatoxins were also detected in about 50% of hazelnut samples, and the maximum level was exceeded in 20% of the samples (highest value 23.2 μg/kg). In contrast, aflatoxin contents in pistachios were surprisingly low. Fusarium toxins were analysed in cereals and cereal products such as flour, bread, and pasta. Deoxynivalenol (DON) was the predominant toxin found in these samples: DON was found in about 40% of the samples, although the maximum levels were not exceeded (highest value 418 μg/kg). Fumonisins (FBs) and zearalenone (ZEA) were specifically analysed in maize products (snacks, flour and oil). Most of these samples (80%) were positive, but at levels not exceeding the maximum levels. The highest concentrations found were 98 μg/kg (ZEA) and 577 μg/kg (sum of FB1 and FB2). Ergot alkaloids (six major alkaloids) were analysed in rye flour, and approximately 50% of the samples were positive. The highest concentration of ergot alkaloids was 1,063 μg/kg; the predominant alkaloids were ergotamine and ergocristine. In conclusion, the results indicate that continuous and efficient control measures for mycotoxins in a wide range of critical foods are necessary to ensure compliance with maximum levels.
Although the mycotoxin levels in the vast majority of samples were below maximum levels, year-to-year variation and changes in the production of relevant commodities may result in a different picture in the future.
Lead exposure and eclampsia in Britain, 1883-1934.
Troesken, Werner
2006-07-01
Eclampsia refers to coma or seizure activity in a pregnant woman with no prior history of such activity. This paper presents a mix of historical and epidemiological evidence consistent with the hypothesis that chronic lead exposure is a predisposing factor for eclampsia. The historical evidence is based on research conducted by British physicians around 1900 showing that the geographic variation in eclampsia across England and Wales was correlated with lead levels in local drinking water supplies. A formal epidemiological analysis based on a data set of English and Welsh counties observed in 1883 corroborates the evidence presented by historical observers. In particular, the statistical results show that the death rate from eclampsia in counties with high water-lead levels exceeded the death rate in counties with low water-lead levels by a factor of 2.34 (95% CI: 1.54-3.14).
NASA Astrophysics Data System (ADS)
Gaertner, B. A.; Zegre, N.
2015-12-01
Climate change is surfacing as one of the most important environmental and social issues of the 21st century. Over the last 100 years, observations show increasing trends in global temperatures and in the intensity and frequency of precipitation events such as flooding, drought, and extreme storms. Global circulation models (GCMs) show similar trends for historic and future climate indicators, albeit with geographic and topographic variability at regional and local scales. In order to assess the utility of GCM projections for hydrologic modeling, it is important to quantify how well GCM outputs compare to robust historical observations at finer spatial scales. Previous research in the United States has primarily focused on the Western and Northeastern regions due to the dominance of snowmelt for runoff and aquifer recharge, but the impact of climate warming in the mountainous central Appalachian Region is poorly understood. In this research, we assess the performance of GCM-generated historical climate compared to historical observations, primarily in the context of forcing data for macro-scale hydrologic modeling. Our results show significant spatial heterogeneity of modeled climate indices when compared to observational trends at the watershed scale. Observational data show considerable variability in maximum temperature and precipitation trends, with consistent increases in minimum temperature. The complex geographic and topographic gradients throughout the central Appalachian region are likely contributing factors to this temperature and precipitation variability. Variable climate changes are leading to more severe and frequent climate events such as temperature extremes and storm events, which can have significant impacts on drinking water supply, infrastructure, and the health of downstream communities.
Yuan, Zihao; Huang, Wei; Liu, Shikai; Xu, Peng; Dunham, Rex; Liu, Zhanjiang
2018-04-01
The inference of the historical demography of a species is helpful for understanding species differentiation and population dynamics. However, such inference has previously been difficult due to the lack of proper analytical methods and the availability of genetic data. A recently developed method called the Pairwise Sequentially Markovian Coalescent (PSMC) offers the capability to estimate the trajectories of historical populations over considerable time periods using genomic sequences. In this study, we applied this approach to infer the historical demography of the common carp using samples collected from Europe, Asia and the Americas. Comparison between Asian and European common carp populations showed that the last glacial period starting 100 ka BP likely caused a significant decline in the population size of the wild common carp in Europe, while it did not have much of an impact on its counterparts in Asia. This was probably caused by differences in glacial activity in East Asia and Europe, and suggests a separation of the European and Asian clades before the last glacial maximum. The North American clade, which is an invasive population, shared a similar demographic history with those from Europe, consistent with the idea that the North American common carp probably had European ancestral origins. Our analysis represents the first reconstruction of the historical population demography of the common carp, which is important for elucidating the separation of the European and Asian common carp clades during the Quaternary glaciation, as well as the dispersal of the common carp across the world.
Earthquake recovery of historic buildings: exploring cost and time needs.
Al-Nammari, Fatima M; Lindell, Michael K
2009-07-01
Disaster recovery of historic buildings has rarely been investigated even though the available literature indicates that they face special challenges. This study examines buildings' recovery time and cost to determine whether their functions (that is, their use) and their status (historic or non-historic) affect these outcomes. The study uses data from the city of San Francisco after the 1989 Loma Prieta earthquake to examine the recovery of historic buildings owned by public agencies and non-governmental organisations. The results show that recovery cost is affected by damage level, construction type and historic status, whereas recovery time is affected by the same variables and also by building function. The study points to the importance of pre-incident recovery planning, especially for building functions that have shown delayed recovery. Also, the study calls attention to the importance of further investigations into the challenges facing historic building recovery.
Survey of Occupational Noise Exposure in CF Personnel in Selected High-Risk Trades
2003-11-01
Peak, maximum level, minimum level, average sound level, time-weighted average, dose, projected 8-hour dose, and upper limit time were measured for...
NASA Astrophysics Data System (ADS)
Armigliato, A.; Tinti, S.; Pagnoni, G.; Zaniboni, F.; Paparo, M. A.
2015-12-01
The portion of the eastern Sicily coastline (southern Italy), ranging from the southern part of the Catania Gulf (to the north) down to the south-eastern end of the island, represents a very important geographical domain from the industrial, commercial, military, historical and cultural points of view. Here the two major cities of Augusta and Siracusa are found. In particular, the Augusta bay hosts one of the largest petrochemical poles in the Mediterranean, and Siracusa has been listed among the UNESCO World Heritage Sites since 2005. This area was hit by at least seven tsunamis in the approximate time interval from 1600 BC to the present, the most famous being the 365, 1169, 1693 and 1908 tsunamis. The choice of this area as one of the sites for testing innovative methods for tsunami hazard, vulnerability and risk assessment and reduction is therefore fully justified. This work is being carried out in the framework of the EU project ASTARTE - Assessment, STrategy And Risk Reduction for Tsunamis in Europe (Grant 603839, 7th FP, ENV.2013.6.4-3).
We assess the tsunami hazard for the Augusta-Siracusa area through the worst-case credible scenario technique, which can be schematically divided into the following steps: 1) selection of five main source areas, both in the near- and in the far-field (Hyblaean-Malta escarpment, Messina Straits, Ionian subduction zone, Calabria offshore, western Hellenic Trench); 2) choice of potential and credible tsunamigenic faults in each area: 38 faults were selected, with properly assigned magnitude, geometry and focal mechanism; 3) computation of the maximum tsunami wave elevations along the eastern Sicily coast on a coarse grid (by means of the in-house code UBO-TSUFD) and extraction of the 9 scenarios that produce the largest effects in the target areas of Augusta and Siracusa; 4) for each of the 9 scenarios, numerical UBO-TSUFD simulations over a set of five nested grids, with grid cell size decreasing from 3 km in the open Ionian Sea to 40 m in the target areas of Augusta and Siracusa. The simulation results consist of fields of maximum water elevation, maximum water column, maximum velocity and maximum momentum flux. The main findings for each single scenario and for the aggregate scenario are presented and discussed.
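An aggregate worst-case field of this kind is conventionally built by taking the cell-by-cell maximum over the per-scenario fields of maximum water elevation. A minimal sketch of that aggregation step, using small hypothetical grids rather than actual UBO-TSUFD output:

```python
# Sketch: combining per-scenario fields of maximum water elevation into an
# aggregate (worst-case) field by taking the cell-by-cell maximum.
# The scenario fields below are hypothetical 3x3 grids, not ASTARTE output.

def aggregate_max(fields):
    """Cell-by-cell maximum over a list of equally sized 2-D grids."""
    rows, cols = len(fields[0]), len(fields[0][0])
    return [[max(f[r][c] for f in fields) for c in range(cols)]
            for r in range(rows)]

scenario_a = [[0.2, 0.5, 1.1],
              [0.4, 0.9, 1.6],
              [0.1, 0.3, 0.8]]   # max water elevation (m), scenario A
scenario_b = [[0.3, 0.4, 0.9],
              [0.6, 1.2, 1.0],
              [0.2, 0.2, 1.4]]   # max water elevation (m), scenario B

aggregate = aggregate_max([scenario_a, scenario_b])
print(aggregate[1])  # [0.6, 1.2, 1.6]
```

The same elementwise-maximum idea applies unchanged to the other reported fields (maximum velocity, maximum momentum flux).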
The global warming in the North Atlantic Sector and the role of the ocean
NASA Astrophysics Data System (ADS)
Hand, R.; Keenlyside, N. S.; Greatbatch, R. J.; Omrani, N. E.
2014-12-01
This work presents an analysis of North Atlantic ocean-atmosphere interaction in a warming climate, based on a long-term earth system model experiment forced by the RCP 8.5 scenario, the strongest greenhouse gas forcing used in the climate projections for the Fifth Assessment Report of the Intergovernmental Panel on Climate Change. In addition to a global increase in sea surface temperatures (SSTs) as a direct response to the radiative forcing, the model shows a distinct change in the local SST patterns in the Gulf Stream region: the SST front moves northward by several hundred kilometers, likely in response to changes in the wind-driven part of the oceanic surface circulation, and becomes more zonal. As a consequence of a massive slowdown of the Atlantic Meridional Overturning Circulation, the northeast North Atlantic shows only a moderate warming compared to the rest of the ocean. The feedback of these changes on the atmosphere was studied in a set of sensitivity experiments based on the SST climatology of the coupled runs. The set consists of a control run based on the historical run, a run using the full SST from the coupled RCP 8.5 run, and two runs in which the SST signal was deconstructed into a homogeneous mean warming part and a local pattern change. In the region of the precipitation maximum in the historical run, the future scenario shows an increase in absolute SSTs but a significant decrease in local precipitation, low-level convergence and upward motion. Since warmer SSTs usually cause the opposite, this indicates that the local response in that region is connected to the SST gradients, which are weakened with respect to the historical run, rather than to the absolute SST. Consistently, the model shows enhanced precipitation north of this region, where the SST gradients are enhanced. However, the signal is confined to the lower and mid-troposphere and does not reach the higher model levels.
There is little evidence for a large-scale response to the changes in the Gulf Stream region; instead, the large-scale signal is mainly controlled by the warmer background state and the AMOC slowdown, and influenced by tropical SSTs. In a warmer climate, the same change in SST gradient has a stronger effect on precipitation, and the model produces a slightly enhanced North Atlantic storm track.
ERIC Educational Resources Information Center
McManus, Kristen LeToria
2013-01-01
Despite affirmative action, gender inequities persist at institutions of higher learning in the United States. The purpose of this qualitative research study was to explore the perceptions of African American women serving in executive-level leadership positions at historically black colleges and universities in a state in the Southeast. Participants…
ERIC Educational Resources Information Center
Croft, Michael; de Berg, Kevin
2014-01-01
This paper selects six key alternative conceptions identified in the literature on student understandings of chemical bonding and illustrates how a historical analysis and a textbook analysis can inform these conceptions and lead to recommendations for improving the teaching and learning of chemical bonding at the secondary school level. The…
NASA Astrophysics Data System (ADS)
Garcier, R. J.
2007-11-01
As products of both natural and social systems, rivers are highly complex historical objects. We show in this paper that historical analysis works on two different levels: one level, which we call "structural", shows the materiality of the riverine environment as the spatial-temporal product of natural factors and human impacts (bed and course alterations, pollution, etc.). On a second level, which we call "semiotic", we show that river systems are also social constructs and the subjects of ancient and diverse management practices. The quality of a river will be a function of the dialectical interaction between both levels. Historical analysis can uncover the inherited constraints that bear upon current management practices. To help substantiate this analytical framework, we analyse the case of the Moselle river in eastern France by using archival sources and statistical data. Severely impaired by industrial discharges from the iron, coal and salt industries between 1875 and the early 1980s, the waters of the Moselle became the subject of a social consensus between stakeholders that prevented the implementation of efficient pollution management policies until the 1990s. The example urges caution on the pervasiveness of participatory approaches to river management: social consensus does not necessarily benefit the environment.
Colonial National Historical Park 2010 visitor/motorist survey.
DOT National Transportation Integrated Search
2011-05-31
This report presents findings and recommendations from a 2010 survey of visitors not using a seasonal shuttle bus at Colonial National Historical Park. The survey asked visitors for basic demographic information, level of awareness of the shuttle, in...
Use of Laser Scanning Technology to Obtain As-Built Records of Historic Covered Bridges
Robert J. Ross; Brian K. Brashaw; Samuel J. Anderson
2012-01-01
Covered bridges are part of the fabric of American history. Although much effort is expended to preserve these structures, many are lost forever. The National Park Service's Historic American Engineering Record (HAER) has efforts under way to document historic structures. Their Level I documentation is defined in the Secretary of the Interior's Standards and Guidelines...
Tracing past ambient air pollution and its consequences on human health
NASA Astrophysics Data System (ADS)
Pedersen, M. W.; Kjaer, K. H.; Dean, K.; Siggaard-Andersen, M. L.; Petersen, J.; Rasmussen, P.; Kjeldsen, K. K.; Ilsøe, P.; Rivers, A.; Andersen, T.; Schreiber, N.; Bjork, A. A.; Funder, S.; Larsen, N. K.; Ruter, A.; Schomacker, A.; Andresen, C. S.; Hamerlik, L.; Orlando, L.; Hansen, A.; Mollerup, S.; Murray, A. S.; Thomsen, K. J.; Jensen, N.; Bjorck, S.; Bønløkke, J.; Tringe, S. G.; Rubin, E.; Louchouarn, P.; Willerslev, E.
2017-12-01
The onset and magnitude of industrialization and its impact on human health remain debated. This is because information largely comes from historical written records that primarily contain socio-political descriptions and thus do not provide a comprehensive environmental history. Therefore, it is essential to have an independent means for reconstructing pollution and disease levels around the time of industrialization. Here, we demonstrate how heavy metals, black carbon (BC), polycyclic aromatic hydrocarbons (PAHs) and environmental DNA (eDNA) in lake sediments can be used to track pollution and disease levels over the last 360 years in one major European capital city, Copenhagen (Denmark). We find that increased air pollution commenced in the 1760s but decreased by the end of the 1790s; a persistent increase did not occur until the 1850s, supporting the minority view that industrialization in Copenhagen began at this time rather than 20 years later, as commonly thought. Over the following 125 years the pollution levels increased a thousand-fold, reaching a maximum during the 1950s-70s. After this time, clean-air political initiatives reduced emissions for most pollutants, some of which almost returned to pre-industrial levels. The high PAH levels measured between 1900 and 1950 imply that IQ levels of Copenhagen citizens were probably 2-6 points lower during that period than today, based upon the known impact of PAHs on children's cognitive abilities. Changes in eDNA composition reveal the establishment and cultivation of Copenhagen's Botanical Garden in the 1870s as well as the onset of the 1853 cholera epidemic. That epidemic, fuelled by high population density, caused the death of 4,737 Copenhageners. Our study establishes lake sediments as novel archives for tracking pollution levels, environmental changes and epidemics during urban development and understanding the changes associated with urbanisation.
Construct validity of self-reported historical physical activity.
Bowles, Heather R; FitzGerald, Shannon J; Morrow, James R; Jackson, Allen W; Blair, Steven N
2004-08-01
The purpose of this study was to determine the construct-related validity of self-reported historical walking, running, and jogging (WRJ) activity on the basis of data from the Aerobics Center Longitudinal Study (Dallas, Texas). A total of 4,100 men and 963 women underwent at least one medical examination between 1976 and 1985 and completed a follow-up questionnaire in 1986. Levels of glucose, cholesterol, and triglycerides, resting systolic blood pressure, body mass index (weight (kg)/height (m)²), and cardiorespiratory fitness were measured at the time of the medical examination. The follow-up questionnaire assessed WRJ and other strenuous activities for each year from 1976 through 1985. Data analysis included Spearman and partial correlations, analysis of variance, analysis of covariance, and t tests. Results indicated significant correlations between recalled WRJ and treadmill times for each year throughout the 10-year period (r = 0.40-0.61). Participants were classified as historically either sufficiently physically active to receive a health benefit or insufficiently active for a health benefit. Engaging in sufficient levels of historical WRJ was associated with higher treadmill times and lower body mass indices for men and women and lower triglyceride levels for men. Self-reported historical WRJ can be assessed with reasonable validity in comparison with measured treadmill performance, with no decay in accuracy of reporting for up to 10 years in the past.
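The study's validity evidence rests on rank correlations between recalled activity and measured treadmill time. As a sketch of the statistic involved, the following computes a Spearman correlation (the Pearson correlation of the ranks, with tied ranks averaged) in pure Python; the data values are made up for illustration and are not the ACLS records:

```python
# Sketch: Spearman rank correlation, the statistic used to relate recalled
# WRJ activity to treadmill time. All numbers below are hypothetical.

def ranks(xs):
    """Rank values from 1 upward, averaging ranks over tied values."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    rk = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1                      # extend the block of tied values
        avg = (i + j) / 2 + 1           # average rank for the tied block
        for k in range(i, j + 1):
            rk[order[k]] = avg
        i = j + 1
    return rk

def spearman(x, y):
    """Pearson correlation computed on the ranks of x and y."""
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

recalled_wrj = [0, 5, 10, 15, 20, 30]     # hypothetical recalled miles/week
treadmill_min = [12, 14, 15, 19, 18, 23]  # hypothetical treadmill times (min)
print(round(spearman(recalled_wrj, treadmill_min), 2))  # 0.94
```

With real data one would normally reach for `scipy.stats.spearmanr`, which also returns a p-value; the hand-rolled version above just makes the rank-then-correlate logic explicit.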
Paul Sclafani
2011-01-01
The Middle Rio Grande is a 29-mi reach of the Rio Grande River in central New Mexico that extends from downstream of Cochiti Dam to Bernalillo, New Mexico. A series of anthropogenic factors including the construction of flood control levees and Cochiti Dam have altered the historically-braided morphology of the Middle Rio Grande to a more sinuous, degrading reach, with...
Resisting Reflexive Control: A Reassessment of China's Strategy and A2/AD
2015-04-01
spectrum of arguments from a maximum entitlement (historic waters) to a minimalist one (sovereignty): “either the entire area might be Chinese...far its approach has been decidedly neo-mercantilist and competitive.”84 As “easy oil”85 deposits start to become scarcer, the United States can... start to question whether or not the system needs a radical change. America has the ability to aid China in its quest for resources, and America can
Characteristics of Gyeongju earthquake, moment magnitude 5.5 and relative relocations of aftershocks
NASA Astrophysics Data System (ADS)
Cho, ChangSoo; Son, Minkyung
2017-04-01
There is low seismicity in the Korean Peninsula. According to historical records, however, several strong earthquakes have struck the peninsula. In particular, in Gyeongju, the capital city of the Silla dynasty, a few strong earthquakes about 1,300 years ago caused several hundred fatalities, damaged houses, and collapsed castle walls. A moderately strong earthquake of moment magnitude 5.5 hit the city on September 12, 2016. Over 1,000 aftershocks were detected, and the number of aftershock occurrences over time follows Omori's law well. The relative locations of 561 events, obtained by clustering aftershocks through cross-correlation of the P and S waveforms of the events, show a strike of NNE 25-30° and a dip of 68-74° for the causative fault plane, matching the fault plane solution from moment tensor inversion well. The depth range of the events is 11 km to 16 km, and the distribution of event locations is about 5 km wide. The direction of maximum horizontal stress, obtained by stress inversion of the moment solutions of the main event and the large aftershocks, is similar to the known maximum horizontal stress direction of the Korean Peninsula. The relation between moment magnitude and local magnitude of the aftershocks shows that moment magnitude increases slightly faster for events smaller than magnitude 2.0.
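The Omori's law mentioned above is commonly written in its modified form n(t) = K/(t + c)^p for the aftershock rate t days after the mainshock. A hedged sketch of evaluating that decay and the implied expected count, with placeholder parameters rather than values fitted to the Gyeongju sequence:

```python
# Sketch: the modified Omori law n(t) = K / (t + c)**p for aftershock rate,
# and the expected aftershock count between two times by integrating it.
# K, c, p below are illustrative placeholders, not fitted Gyeongju values.
from math import log

def omori_rate(t, K, c, p):
    """Aftershock rate (events/day) t days after the mainshock."""
    return K / (t + c) ** p

def expected_count(t1, t2, K, c, p):
    """Integral of the rate from t1 to t2 (closed form; log form at p = 1)."""
    if p == 1.0:
        return K * (log(t2 + c) - log(t1 + c))
    return K * ((t2 + c) ** (1 - p) - (t1 + c) ** (1 - p)) / (1 - p)

K, c, p = 100.0, 0.1, 1.1   # hypothetical productivity, offset, decay exponent
print(round(omori_rate(1.0, K, c, p), 1))         # rate one day after: 90.0
print(round(expected_count(0.0, 30.0, K, c, p)))  # events in first 30 days: 547
```

In practice K, c and p would be estimated from the observed aftershock times (typically by maximum likelihood) before the formula is used for forecasting.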
The poleward migration of the location of tropical cyclone maximum intensity.
Kossin, James P; Emanuel, Kerry A; Vecchi, Gabriel A
2014-05-15
Temporally inconsistent and potentially unreliable global historical data hinder the detection of trends in tropical cyclone activity. This limits our confidence in evaluating proposed linkages between observed trends in tropical cyclones and in the environment. Here we mitigate this difficulty by focusing on a metric that is comparatively insensitive to past data uncertainty, and identify a pronounced poleward migration in the average latitude at which tropical cyclones have achieved their lifetime-maximum intensity over the past 30 years. The poleward trends are evident in the global historical data in both the Northern and the Southern hemispheres, with rates of 53 and 62 kilometres per decade, respectively, and are statistically significant. When considered together, the trends in each hemisphere depict a global-average migration of tropical cyclone activity away from the tropics at a rate of about one degree of latitude per decade, which lies within the range of estimates of the observed expansion of the tropics over the same period. The global migration remains evident and statistically significant under a formal data homogenization procedure, and is unlikely to be a data artefact. The migration away from the tropics is apparently linked to marked changes in the mean meridional structure of environmental vertical wind shear and potential intensity, and can plausibly be linked to tropical expansion, which is thought to have anthropogenic contributions.
Modern climate challenges and the geological record
Cronin, Thomas M.
2010-01-01
Today's changing climate poses challenges about the influence of human activity, such as greenhouse gas emissions and land use changes, the natural variability of Earth's climate, and complex feedback processes. Ice core and instrumental records show that over the last century, atmospheric carbon dioxide (CO2) concentrations have risen to 390 parts per million by volume (ppmv), about 40% above pre-Industrial Age concentrations of 280 ppmv and nearly twice those of the last glacial maximum about 22,000 years ago. Similar historical increases are recorded in atmospheric methane (CH4) and nitrous oxide (N2O). There is general agreement that human activity is largely responsible for these trends. Substantial evidence also suggests that elevated greenhouse gas concentrations are responsible for much of the recent atmospheric and oceanic warming, rising sea level, declining Arctic sea-ice cover, retreating glaciers and small ice caps, decreased mass balance of the Greenland and parts of the Antarctic ice sheets, and decreasing ocean pH (ocean "acidification"). Elevated CO2 concentrations raise concern not only because of observations of the climate system, but because feedbacks associated with reduced reflectivity from land and sea ice, sea level, and land vegetation respond relatively slowly (centuries or longer) to elevated CO2 levels. This means that additional human-induced climate change is expected even if the rate of CO2 emissions is reduced or concentrations are immediately stabilized.
The analysis and kinetic energy balance of an upper-level wind maximum during intense convection
NASA Technical Reports Server (NTRS)
Fuelberg, H. E.; Jedlovec, G. J.
1982-01-01
The purpose of this paper is to analyze the formation and maintenance of the upper-level wind maximum which formed between 1800 and 2100 GMT, April 10, 1979, during the AVE-SESAME I period, when intense storms and tornadoes were experienced (the Red River Valley tornado outbreak). Radiosonde stations participating in AVE-SESAME I are plotted (centered on Oklahoma). National Meteorological Center radar summaries near the times of maximum convective activity are mapped, and height and isotach plots are given, where the formation of an upper-level wind maximum over Oklahoma is the most significant feature at 300 mb. The energy balance of the storm region is seen to change dramatically as the wind maximum forms. During much of its lifetime, the upper-level wind maximum is maintained by ageostrophic flow that produces cross-contour generation of kinetic energy and by the upward transport of midtropospheric energy. Two possible mechanisms for the ageostrophic flow are considered.
36 CFR 3.15 - What is the maximum noise level for the operation of a vessel?
Code of Federal Regulations, 2011 CFR
2011-07-01
36 CFR Part 3 (National Park Service, Department of the Interior), Boating and Water Use Activities. § 3.15 What is the maximum noise level for the operation of a vessel? (a) A person may not operate a vessel at a noise level exceeding...
Code of Federal Regulations, 2014 CFR
2014-04-01
...(monobromoacetoxy) ethane [CA Reg. No. 3785-34-0] At a maximum level of 0.10 pound per ton of dry weight fiber. Bis... Methylenebisthiocyanate 2-Nitrobutyl bromoacetate [CA Reg. No. 32815-96-6] At a maximum level of 0.15 pound per ton of dry.... No. 55566-30-8) Maximum use level of 84 mg/kg in the pulp slurry. The additive may also be added to...
36 CFR 3.15 - What is the maximum noise level for the operation of a vessel?
Code of Federal Regulations, 2010 CFR
2010-07-01
36 CFR Part 3 (National Park Service, Department of the Interior), Boating and Water Use Activities. § 3.15 What is the maximum noise level for the operation of a vessel? (a) A person may not operate a vessel at a noise level exceeding...
Mapping Hurricane Inland-Storm Tides
NASA Astrophysics Data System (ADS)
Turco, M.; East, J. W.; Dorsey, M. E.; McGee, B. D.; McCallum, B. E.; Pearman, J. L.; Sallenger, A. H.; Holmes, R. R.; Berembrock, C. E.; Turnipseed, D. P.; Mason, R. R.
2008-12-01
Historically, hurricane-induced storm tides were documented through analysis of structural or vegetative damage and high-water marks. However, these sources rarely provided quantitative information about the timing of the flooding, the sequencing of multiple paths by which the storm-surge waters arrived, or the magnitude of waves and wave run-up comprising floodwaters. In response to these deficiencies, the U.S. Geological Survey (USGS) developed and deployed an experimental mobile storm-surge network to provide detailed time-series data for selected hurricane landfalls. The USGS first deployed the network in September 2005 as Hurricane Rita approached the Texas and Louisiana coasts. The network for Rita consisted of 32 water-level and 14 barometric-pressure monitoring sites. Sensors were located at distances ranging from a few hundred feet to approximately 30 miles inland and sampled 4,000 square miles. Deployments have also occurred for Hurricanes Wilma, Gustav, and Ike. For Hurricane Gustav, more than 100 water-level sensors were deployed. Analysis of the water-level data enables construction of maps depicting surge topography through time and space, essentially rendering elements of a 3-dimensional view of the storm-surge dome as it moves onshore, as well as a map of maximum water-level elevations. The USGS also acquired LIDAR topographic data from coasts impacted by hurricanes. These data reveal extreme changes to the beaches and barrier islands that arise from hurricane storm surge and waves. By better understanding where extreme changes occur along our coasts, we will be able to position coastal structures away from hazards.
Choi, Sangjun; Kang, Dongmug; Park, Donguk; Lee, Hyunhee; Choi, Bongkyoo
2017-03-01
The goal of this study is to develop a general population job-exposure matrix (GPJEM) for asbestos to estimate occupational asbestos exposure levels in the Republic of Korea. Three Korean domestic quantitative exposure datasets collected from 1984 to 2008 were used to build the GPJEM. Exposure groups in the collected data were reclassified based on the current Korean Standard Industrial Classification (9th edition) and the Korean Standard Classification of Occupations code (6th edition), which are in accordance with international standards. All of the exposure levels were expressed as weighted arithmetic mean (WAM) and minimum and maximum concentrations. Based on the established GPJEM, the 112 exposure groups could be reclassified into 86 industries and 74 occupations. In the 1980s, the highest exposure levels were estimated in "knitting and weaving machine operators", with a WAM concentration of 7.48 fibers/mL (f/mL); in the 1990s, in "plastic products production machine operators", with 5.12 f/mL; and in the 2000s, in "detergents production machine operators" handling talc containing asbestos, with 2.45 f/mL. Of the 112 exposure groups, 44 had higher WAM concentrations than the Korean occupational exposure limit of 0.1 f/mL. The newly constructed GPJEM, generated from actual domestic quantitative exposure data, could be useful in evaluating historical exposure levels to asbestos and could contribute to improved prediction of asbestos-related diseases among Koreans.
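The WAM summarising each exposure-group cell of such a matrix can be sketched as a sample-count-weighted mean of the survey concentrations, reported alongside the minimum and maximum. The survey values below are hypothetical illustrations, not the Korean datasets:

```python
# Sketch: a weighted arithmetic mean (WAM) of measured fibre concentrations,
# weighting each survey value by its sample count, as a GPJEM cell would
# summarise one industry/occupation group. All numbers are hypothetical.

def weighted_mean(values, weights):
    """Weighted arithmetic mean of exposure measurements (f/mL)."""
    return sum(v * w for v, w in zip(values, weights)) / sum(weights)

# Hypothetical surveys for one exposure group: (mean f/mL, number of samples)
surveys = [(0.8, 10), (0.2, 40), (1.5, 5)]
values, weights = zip(*surveys)

wam = weighted_mean(values, weights)
print(round(wam, 3))             # (8 + 8 + 7.5) / 55 = 0.427
print(min(values), max(values))  # the min/max range reported with the WAM
```

Weighting by sample count keeps a large, low-exposure survey (40 samples at 0.2 f/mL here) from being swamped by a few high readings, which is the point of reporting a WAM rather than a plain average of survey means.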
Hydrology of C-3 watershed, Seney National Wildlife Refuge, Michigan
Sweat, Michael J.
2001-01-01
Proposed changes to watershed management practices near C-3 Pool at Seney National Wildlife Refuge will affect surface-water flow patterns, ground-water levels, and possibly local plant communities. Data were collected between fall 1998 and spring 2000 to document existing conditions and to assess potential changes in hydrology that might occur as a consequence of modifications to water management practices in C-3 watershed. Minimum and maximum measured inflows and outflows for the study period are presented in light of proposed management changes to C-3 watershed. Streamflows ranged from 0 to 8.61 cubic meters per second. Low or zero flow was generally measured in late summer and early fall, and highest flows were measured during spring runoff and winter rain events. Ground-water levels varied by about a half meter, with levels closest to or above the land surface during spring runoff into the early summer, and with levels generally below land surface during late fall into early winter. A series of optional management practices that could conserve and restore habitat of the C-3 watershed is described. Modifications to the existing system of a drainage ditch and control structures are examined, as are the possibilities of reconnecting streams to their historical channels and the construction of additional or larger control structures to further manage the distribution of water in the watershed. The options considered could reduce erosion, restore presettlement streamflow conditions, and modify the ground-water gradient.
NASA Astrophysics Data System (ADS)
Huntley, John Warren; Fürsich, Franz T.; Alberti, Matthias; Hethke, Manja; Liu, Chunlian
2014-12-01
Increasing global temperature and sea-level rise have led to concern about expansions in the distribution and prevalence of complex-lifecycle parasites (CLPs). Indeed, numerous environmental variables can influence the infectivity and reproductive output of many pathogens. Digenean trematodes are CLPs with intermediate invertebrate and definitive vertebrate hosts. Global warming and sea level rise may affect these hosts to varying degrees, and the effect of increasing temperature on parasite prevalence has proven to be nonlinear and difficult to predict. Projecting the response of parasites to anthropogenic climate change is vital for human health, and a longer-term perspective (10⁴ y) offered by the subfossil record is necessary to complement the experimental and historical approaches of shorter temporal duration (10⁻¹ to 10³ y). We demonstrate, using a high-resolution 9,600-y record of trematode parasite traces in bivalve hosts from the Holocene Pearl River Delta, that prevalence was significantly higher during the earliest stages of sea level rise, significantly lower during the maximum transgression, and statistically indistinguishable in the other stages of sea-level rise and delta progradation. This stratigraphic paleobiological pattern represents the only long-term high-resolution record of pathogen response to global change, is consistent with fossil and recent data from other marine basins, and is instructive regarding the future of disease. We predict an increase in trematode prevalence concurrent with anthropogenic warming and marine transgression, with negative implications for estuarine macrobenthos, marine fisheries, and human health.
Norway's historical and projected water balance in TWh
NASA Astrophysics Data System (ADS)
Haddeland, Ingjerd; Holmqvist, Erik
2015-04-01
Hydroelectric power production is closely linked to the water cycle, and variations in power production numbers reflect variations in weather. The expected climate changes will influence electricity supply through changes in annual and seasonal inflow of water to hydropower reservoirs. In Norway, more than 95 percent of electricity production comes from hydroelectric plants, and industry linked to hydropower has been an important part of society for more than a century. Reliable information on historical and future available water resources is hence of crucial importance for both short- and long-term planning and adaptation purposes in the hydropower sector. Traditionally, the Multi-area Power-market Simulator (EMPS) is used for modelling hydropower production in Norway. However, due to the model's high level of detail and computational demand, it is only used for historical analyses and a limited number of climate projections. A method has been developed that transfers water fluxes (mm day⁻¹) and states (mm) into energy units (GWh mm⁻¹), based on hydrological modelling of a limited number of catchments representing reservoir inflow to more than 700 hydropower plants in Norway. The advantages of the conversion-factor method, compared to EMPS, are its simplicity and low computational requirements. The main disadvantages are that it does not take into account flood losses or the time lag between inflow and power production. The method is used operationally for weekly and seasonal energy forecasts, and has proven successful at reproducing historical hydropower production numbers. In hydropower energy units, mean annual precipitation for the period 1981-2010 is estimated at 154 TWh year⁻¹. On average, 24 TWh year⁻¹ is lost through evapotranspiration, meaning runoff equals 130 TWh year⁻¹. There are large interannual variations, and runoff available for power production ranges from 91 to 165 TWh year⁻¹.
The snow pack on average peaks in the middle of April at 54 TWh, ranging from 33 to 84 TWh. Given its simplicity, the conversion-factor method is a time- and computationally efficient way of producing projections of hydropower production potential from an ensemble of climate model simulations. Regional climate model (RCM) projections are obtained from Euro-CORDEX, and precipitation and temperature are bias-corrected to observation-based datasets at 1 km² resolution. Preliminary results, based on an ensemble of 16 members (8 RCMs, RCP4.5 and RCP8.5) and transient hydrological simulations for the period 1981-2100, indicate an increase in hydroelectric power production of up to 10 percent by the end of the century, given the effect of climate change alone. The expected increase in temperature causes a negative trend in the energy potential stored in the annual maximum snow pack. At the end of the century (2071-2100), the maximum snow pack holds 43 TWh and 30 TWh under RCP4.5 and RCP8.5, respectively, compared to 54 TWh in 1981-2010. The substantial decrease in the peak snow pack is reflected in the seasonally more even reservoir inflow expected in the coming decades.
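The conversion-factor idea described above can be illustrated with a toy sketch: daily runoff depths (mm) for representative catchments are multiplied by catchment-specific energy factors (GWh per mm of runoff) and summed into a national inflow series in energy units. Catchment names, factors, and runoff values below are invented for illustration:

```python
# Toy sketch of the conversion-factor method: runoff depth (mm/day)
# times an energy conversion factor (GWh per mm of runoff) per catchment.
catchments = {
    # name: (conversion factor in GWh/mm, daily runoff series in mm)
    "catchment_A": (0.8, [2.0, 1.5, 0.5]),
    "catchment_B": (1.2, [1.0, 1.0, 2.5]),
}

def daily_energy_inflow(catchments):
    """Sum catchment runoff converted to energy units (GWh) for each day."""
    ndays = len(next(iter(catchments.values()))[1])
    return [
        sum(factor * runoff[day] for factor, runoff in catchments.values())
        for day in range(ndays)
    ]

inflow_gwh = daily_energy_inflow(catchments)  # day 0: 0.8*2.0 + 1.2*1.0 = 2.8 GWh
```

The appeal, as the abstract notes, is that this replaces a full power-market simulation with simple multiplications, at the cost of ignoring flood losses and inflow-to-production lags.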
NASA Astrophysics Data System (ADS)
Castedo, Ricardo; de la Vega-Panizo, Rogelio; Fernández-Hernández, Marta; Paredes, Carlos
2015-02-01
A key requirement for effective coastal zone management is good knowledge of historical rates of change and the ability to predict future shoreline evolution, especially for rapidly eroding areas. Historical shoreline recession analysis was used to predict future cliff shoreline positions along a 9 km section between Bridlington and Hornsea, on the northern part of the Holderness Coast, UK. The analysis was based on historical maps and aerial photographs dating from 1852 to 2011, using the Digital Shoreline Analysis System (DSAS) 4.3, an extension of ESRI's ArcGIS 10.x. The prediction of future shorelines was performed for the next 40 years using a variety of techniques, ranging from extrapolation from historical data and geometric approaches such as historical trend analysis, to a process-response numerical model that incorporates physically based equations and geotechnical stability analysis. With climate change and sea-level rise implying that historical rates of change may not be a reliable guide for the future, enhanced visualization of the evolving coastline has the potential to improve awareness of these changing conditions. Following the IPCC (2013) report, two sea-level rise rates, 2 mm/yr and 6 mm/yr, were used to estimate future shoreline conditions. This study illustrated that good predictive models, once their limitations are estimated or at least defined, are available for use by managers, planners, engineers, scientists and the public to make better decisions regarding coastal management, development, and erosion-control strategies.
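The simplest of the techniques mentioned, extrapolation from historical data, amounts to fitting a linear recession rate to dated shoreline positions and projecting it forward, much like DSAS's linear regression rate statistic. This is a generic sketch with invented positions, not the study's data; DSAS itself computes richer statistics (confidence bands, end-point rates, etc.):

```python
def linear_rate_projection(years, positions, target_year):
    """Least-squares linear rate (m/yr) through dated shoreline positions,
    projected to a future year. Positions are distances from a fixed baseline."""
    n = len(years)
    mean_t = sum(years) / n
    mean_x = sum(positions) / n
    slope = sum((t - mean_t) * (x - mean_x) for t, x in zip(years, positions)) \
            / sum((t - mean_t) ** 2 for t in years)
    intercept = mean_x - slope * mean_t
    return slope, intercept + slope * target_year

# Invented example: cliff-edge distance (m) from a baseline, eroding landward
years = [1852, 1910, 1970, 2011]
positions = [120.0, 95.0, 70.0, 52.0]
rate, projected_2051 = linear_rate_projection(years, positions, 2051)
```

As the abstract cautions, such extrapolation assumes the historical rate persists, which accelerating sea-level rise may invalidate; hence the complementary process-response modelling.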
Repurposing historical control clinical trial data to provide safety context.
Bhuyan, Prakash; Desai, Jigar; Louis, Matthew St; Carlsson, Martin; Bowen, Edward; Danielson, Mark; Cantor, Michael N
2016-02-01
Billions of dollars spent, millions of subject-hours of clinical trial experience and an abundance of archived study-level data, yet why are historical data underutilized? We propose that historical data can be aggregated to provide safety, background incidence rate and context to improve the evaluation of new medicinal products. Here, we describe the development and application of the eControls database, which is derived from the control arms of studies of licensed products, and discuss the challenges and potential solutions to the proper application of historical data to help interpret product safety. Copyright © 2015 Elsevier Ltd. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bonardi, M.; Canal, E.; Cavazzoni, S.
1997-12-31
Sedimentological investigations together with archeological and historical information have made it possible to correlate paleoenvironmental and coastline variations in the Lagoon of Venice with climatic changes during the Holocene. In particular, we report the results of a detailed study of Holocene sediments from salt marshes and small islands, taken above and below a level with well-dated archeological findings that gave a good indication of the mean sea level.
Searching for Eustasy in Pliocene Sea-Level Records (Invited)
NASA Astrophysics Data System (ADS)
Raymo, M. E.; Hearty, P. J.; O'Leary, M.; Mitrovica, J.; Deconto, R.; Inglis, J. D.; Robinson, M. M.
2010-12-01
It is widely accepted that greenhouse gas-induced warming over the next few decades to centuries could lead to a rise in sea level due to melting ice caps. Yet despite the enormous social and economic consequences for society, our ability to predict the likelihood and location of future melting is hampered by an insufficient theoretical and historical understanding of ice sheet behavior in the past. Various lines of evidence suggest that CO2 levels in the mid-Pliocene were between 350-450 ppm, similar to today, and it is important that significant effort be made to confirm these estimates, especially in light of policy discussions that seek to determine a “safe” level of atmospheric CO2. Likewise, accurate estimates of mid-Pliocene sea levels are necessary if we are to better constrain Greenland and Antarctic ice sheet stability in a slightly warmer world. Current published estimates of mid-Pliocene sea level (during times of maximum insolation forcing) range from +5 m to >+40 m (relative to present), reflecting a huge range of uncertainty in the sensitivity of polar ice sheets, including the East Antarctic Ice Sheet, to a modest global warming. Accurate determination of the maximum mid-Pliocene sea-level rise is needed if climate and ice sheet modelers are to better assess the robustness of models used to predict the effects of anthropogenic global warming. Pliocene ice volume/highstand estimates fall into two classes: those derived from geologic evidence of past high stands and those derived from geochemical proxies of ice-sensitive changes in ocean chemistry. Both methods have significant errors and uncertainties associated with them. Recent multidisciplinary work along the intra-plate continental margin of Roe Plain (~250 x 30 km) on the southern coastline of Western Australia provides additional constraints on sea level during the mid-Pliocene.
Outcroppings of shore-proximal marine deposits are observed at two distinct elevations across the plain, +28 ± 2 m and +18 ± 2 m. Definitive sedimentary intertidal indicators (e.g., concentrated concave-down bivalves characteristic of a swash zone) and subtidal biofacies including articulated valves are found throughout the deposits and suggest the occurrence of two distinct highstand events. Preliminary Sr-isotope data yield a broad range of mid- to late Pliocene ages. These data will be discussed in light of possible ice volume, dynamic topography, and isostatic effects. Building on these data, we present a strategy for improving the accuracy of mid-Pliocene sea-level estimates.
Graf, Alexandra C; Bauer, Peter
2011-06-30
We calculate the maximum type 1 error rate of the pre-planned conventional fixed-sample-size test for comparing the means of independent normal distributions (with common known variance) that can result when the sample size and the allocation rate to the treatment arms can be modified in an interim analysis. It is thereby assumed that the experimenter fully exploits knowledge of the unblinded interim estimates of the treatment effects in order to maximize the conditional type 1 error rate. The 'worst-case' strategies require knowledge of the unknown common treatment effect under the null hypothesis. Although this is a rather hypothetical scenario, it may be approached in practice when using a standard control treatment for which precise estimates are available from historical data. The maximum inflation of the type 1 error rate is substantially larger than that derived by Proschan and Hunsberger (Biometrics 1995; 51:1315-1324) for design modifications applying balanced samples before and after the interim analysis. Corresponding upper limits for the maximum type 1 error rate are calculated for a number of situations arising from practical considerations (e.g. restricting the maximum sample size, not allowing the sample size to decrease, allowing only an increase in the sample size of the experimental treatment). The application is discussed for a motivating example. Copyright © 2011 John Wiley & Sons, Ltd.
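The mechanism behind the inflation can be illustrated with a toy one-sample, one-sided z-test (nominal alpha = 0.025, known unit variance), which is simpler than the paper's two-arm setting: an adversarial experimenter observes the interim sum, then picks the second-stage sample size that maximizes the conditional probability that the naive fixed-sample test rejects under the null. All design numbers here are invented for illustration:

```python
import math
import random

def phi(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def worst_case_type1(n1=50, n2_options=range(1, 501, 10),
                     z_crit=1.96, sims=20_000, seed=0):
    """Monte Carlo estimate of the type 1 error of the naive fixed-sample
    z-test when, at the interim, n2 is chosen from n2_options to maximize
    the conditional rejection probability under H0 (mean 0, variance 1)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(sims):
        s1 = rng.gauss(0.0, math.sqrt(n1))  # interim sum of n1 observations under H0
        # conditional P(reject | s1) for each allowed n2; the adversary takes the max
        total += max(
            1.0 - phi((z_crit * math.sqrt(n1 + n2) - s1) / math.sqrt(n2))
            for n2 in n2_options
        )
    return total / sims

inflated = worst_case_type1()  # noticeably above the nominal 0.025
```

With a fixed n2 the average of the conditional rejection probabilities is exactly 0.025; letting the interim data choose n2 pushes that average well above the nominal level, which is the inflation the paper quantifies and bounds.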
Brown, Sandra [University of Illinois, Urbana, IL (USA); Winrock International, Arlington, Virginia (USA); Gaston, Greg [University of Illinois, Urbana, IL (USA); Oregon State University; Beaty, T. W. [Carbon Dioxide Information Analysis Center (CDIAC), Oak Ridge National Laboratory, Oak Ridge, TN (USA); Olsen, L. M. [Carbon Dioxide Information Analysis Center (CDIAC), Oak Ridge National Laboratory, Oak Ridge, TN (USA)
2001-01-01
This document describes the contents of a digital database containing maximum potential aboveground biomass, land use, and estimated biomass and carbon data for 1980. The biomass data and carbon estimates are associated with woody vegetation in tropical Africa. These data were collected to reduce the uncertainty associated with estimating historical releases of carbon from land-use change. Tropical Africa is defined here as encompassing 22.7 × 10⁶ km² of the earth's land surface and comprises the countries located in tropical Africa (Angola, Botswana, Burundi, Cameroon, Cape Verde, Central African Republic, Chad, Congo, Benin, Equatorial Guinea, Ethiopia, Djibouti, Gabon, Gambia, Ghana, Guinea, Ivory Coast, Kenya, Liberia, Madagascar, Malawi, Mali, Mauritania, Mozambique, Namibia, Niger, Nigeria, Guinea-Bissau, Zimbabwe (Rhodesia), Rwanda, Senegal, Sierra Leone, Somalia, Sudan, Tanzania, Togo, Uganda, Burkina Faso (Upper Volta), Zaire, and Zambia). The database was developed using the GRID module in the ARC/INFO™ geographic information system. Source data were obtained from the Food and Agriculture Organization (FAO), the U.S. National Geophysical Data Center, and a limited number of biomass-carbon density case studies. These data were used to derive the maximum potential and actual (ca. 1980) aboveground biomass values at regional and country levels. The land-use data provided were derived from a vegetation map originally produced for the FAO by the International Institute of Vegetation Mapping, Toulouse, France.
Simulating the effect of climate extremes on groundwater flow through a lakebed
Virdi, Makhan L.; Lee, Terrie M.; Swancar, Amy; Niswonger, Richard G.
2012-01-01
Groundwater exchanges with lakes resulting from cyclical wet and dry climate extremes maintain lake levels in the environment in ways that are not well understood, in part because they remain difficult to simulate. To better understand the atypical groundwater interactions with lakes caused by climatic extremes, an original conceptual approach is introduced using MODFLOW-2005 and a kinematic-wave approximation to variably saturated flow that allows lake size and position in the basin to change while accurately representing the daily lake volume and three-dimensional variably saturated groundwater flow responses in the basin. Daily groundwater interactions are simulated for a calibrated lake basin in Florida over a decade that included historic wet and dry departures from the average rainfall. The divergent climate extremes subjected nearly 70% of the maximum lakebed area and 75% of the maximum shoreline perimeter to both groundwater inflow and lake leakage. About half of the lakebed area subject to flow reversals also went dry. A flow-through pattern present for 73% of the decade caused net leakage from the lake 80% of the time. Runoff from the saturated lake margin offset the groundwater deficit only about half of that time. A centripetal flow pattern present for 6% of the decade was important for maintaining the lake stage and generated 30% of all net groundwater inflow. Pumping effects superimposed on dry climate extremes induced the least frequent but most cautionary flow pattern with leakage from over 90% of the actual lakebed area.
Trends in the Diversity, Distribution and Life History Strategy of Arctic Hydrozoa (Cnidaria)
Ronowicz, Marta; Kukliński, Piotr; Mapstone, Gillian M.
2015-01-01
This is the first attempt to compile a comprehensive and updated species list for Hydrozoa in the Arctic, encompassing both hydroid and medusa stages and including Siphonophorae. We address the hypothesis that the presence of a pelagic stage (holo- or meroplanktonic) was not necessary to successfully recolonize the Arctic by Hydrozoa after the Last Glacial Maximum. Presence-absence data of Hydrozoa in the Arctic were prepared on the basis of historical and present-day literature. The Arctic was divided into ecoregions. Species were grouped into distributional categories according to their worldwide occurrences. Each species was classified according to life history strategy. The similarity of species composition among regions was calculated with the Bray-Curtis index. Average and variation in taxonomic distinctness were used to measure diversity at the taxonomic level. A total of 268 species were recorded. Arctic-Boreal species were the most common and dominated each studied region. Nineteen percent of species were restricted to the Arctic. There was a predominance of benthic species over holo- and meroplanktonic species. Arctic, Arctic-Boreal and Boreal species were mostly benthic, while widely distributed species more frequently possessed a pelagic stage. Our results support the hypothesis that the presence of a pelagic stage (holo- or meroplanktonic) was not necessary to successfully recolonize the Arctic. The predominance of benthic Hydrozoa suggests that the Arctic could have been colonised after the Last Glacial Maximum by hydroids rafting on floating substrata or recolonising from glacial refugia. PMID:25793294
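On presence-absence data, the Bray-Curtis index used above reduces to counting shared and unshared species between two regions (equivalently, the Sørensen dissimilarity). A generic sketch with invented species lists for two hypothetical ecoregions:

```python
def bray_curtis_dissimilarity(region_a, region_b):
    """Bray-Curtis dissimilarity for presence-absence data:
    (b + c) / (2a + b + c), where a = species shared by both regions
    and b, c = species unique to each. 0 = identical, 1 = no overlap."""
    a = len(region_a & region_b)
    b = len(region_a - region_b)
    c = len(region_b - region_a)
    return (b + c) / (2 * a + b + c)

# Invented species sets for two hypothetical Arctic ecoregions
region_1 = {"Obelia longissima", "Sarsia tubulosa", "Aglantha digitale"}
region_2 = {"Obelia longissima", "Aglantha digitale", "Halitholus cirratus"}
d = bray_curtis_dissimilarity(region_1, region_2)  # (1+1)/(4+1+1) = 1/3
```

Pairwise dissimilarities computed this way across all ecoregion pairs are what feed a cluster analysis or ordination of species composition.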
ERIC Educational Resources Information Center
Jager, Justin; Keyes, Katherine M.; Schulenberg, John E.
2015-01-01
This study examines historical variation in age 18 to 26 binge drinking trajectories, focusing on differences in both levels of use and rates of change (growth) across cohorts of young adults over 3 decades. As part of the national Monitoring the Future Study, over 64,000 youths from the high school classes of 1976 to 2004 were surveyed at…
Estes, Tammara L; Pai, Naresh; Winchell, Michael F
2016-06-01
A key factor in the human health risk assessment process for the registration of pesticides by the US Environmental Protection Agency (EPA) is an estimate of pesticide concentrations in groundwater used for drinking water. From 1997 to 2011, these estimates were obtained from the EPA empirical model SCI-GROW. Since 2012, they have been obtained from the EPA deterministic model PRZM-GW, which has resulted in a significant increase in estimated groundwater concentrations for many pesticides. Historical groundwater monitoring data from the National Ambient Water Quality Assessment (NAWQA) Program (1991-2014) were compared with predicted groundwater concentrations from both SCI-GROW (v.2.3) and PRZM-GW (v.1.07) for 66 different pesticides of varying environmental fate properties. The pesticide environmental fate parameters associated with over- and underprediction of groundwater concentrations by the two models were evaluated. In general, SCI-GROW2.3 predicted groundwater concentrations close to the maximum historically observed groundwater concentrations. However, for pesticides with soil organic carbon sorption coefficient values below 1000 L kg⁻¹ and no simulated hydrolysis, PRZM-GW overpredicted, often by more than 100 ppb. © 2015 Society of Chemical Industry.
Current and Future Urban Stormwater Flooding Scenarios in the Southeast Florida Coasts
NASA Astrophysics Data System (ADS)
Huq, E.; Abdul-Aziz, O. I.
2016-12-01
This study computed rainfall-fed stormwater flooding under historical and future reference scenarios for the Southeast Coasts Basin of Florida. A large-scale, mechanistic rainfall-runoff model was developed using the U.S. EPA Storm Water Management Model (SWMM 5.1). The model parameterized important processes of urban hydrology, groundwater, and sea level, while including hydroclimatological variables and land-use features. The model was calibrated and validated with historical streamflow data. It was then used to estimate the sensitivity of stormwater runoff to reference changes in hydroclimatological variables (rainfall and evapotranspiration) and different land use/land cover features (imperviousness, roughness). Furthermore, historical (1970-2000) and potential 2050s stormwater budgets were estimated for the Florida Southeast Coasts Basin by incorporating climatic projections from different GCMs and RCMs, as well as relevant projections of sea level and land use/cover. Comparative synthesis of the historical and future scenarios, along with the results of the sensitivity analysis, can aid efficient management of stormwater flooding for the southeast Florida coasts and similar urban centers under a changing regime of climate, sea level, land use/cover and hydrology.
Moeini, Reihaneh; Memariani, Zahra; Pasalar, Parvin; Gorji, Narjes
2017-04-04
Pharmacogenomics and pharmacoproteomics are new sciences whose goal is to achieve therapeutics with maximum benefit and minimal side effects for each individual, based on the pattern of his or her genome and proteome. Although they are considered new, high-technology sciences, in the distant past Persian sages such as Avicenna also understood the importance of "personalized medicine" and used specific patterns to detect individual differences in order to select suitable medication. Based on experience and analogy, they divided individuals into different categories according to characteristics such as body color, body temperature, sleep-wake pattern and skeletal structure. They also paid attention to the effect of environmental conditions such as climate, occupation and the change of seasons on the action of medication. Considering the low cost and ease of use of these experiences, researching their opinions may uncover the historical roots of modern pharmacoproteomics and possibly infuse new ideas into this field.
Hull, Joshua M; Girman, Derek J
2005-01-01
DNA sequences of the mitochondrial control region were analysed from 298 individual sharp-shinned hawks (Accipiter striatus velox) sampled at 12 different migration study sites across North America. The control region proved to be an appropriate genetic marker for identification of continental-scale population genetic structure and for determining the historical demography of population units. These data suggest that sharp-shinned hawks sampled at migration sites in North America are divided into distinct eastern and western groups. The eastern group appears to have recently expanded in response to the retreat of glacial ice at the end of the last glacial maximum. The western group appears to have been strongly affected by the Holocene Hypsithermal dry period, with molecular evidence indicating the most recent expansion following this mid-Holocene climatic event 7000-5000 years before present.
Wild brown trout affected by historical mining in the Cévennes National Park, France.
Monna, F; Camizuli, E; Revelli, P; Biville, C; Thomas, C; Losno, R; Scheifler, R; Bruguier, O; Baron, S; Chateau, C; Ploquin, A; Alibert, P
2011-08-15
In the protected area of the Cévennes National Park (Southern France), 114 wild brown trout (Salmo trutta fario) were captured at six locations affected to different extents by historical mining and metallurgy dating from the Iron Age to Modern Times. Cadmium and lead in trout livers and muscles reflect high sediment contamination, although an age-related effect was also detected for hepatic metal concentrations. Lead isotope signatures confirm exposure to drainage from mining and metallurgical waste. Developmental instability, assessed by fluctuating asymmetry, is significantly correlated with cadmium and lead concentrations in trout tissues, suggesting that local contamination may have affected fish development. Nowadays, the area is among the least industrialized in France. However, our results show that 60% of the specimens at one site exceed EU maximum allowed cadmium or lead concentration in foodstuffs. The mining heritage should not be neglected when establishing strategies for long-term environmental management.
Merritts, Dorothy; Walter, Robert; Rahnis, Michael; Hartranft, Jeff; Cox, Scott; Gellis, Allen; Potter, Noel; Hilgartner, William; Langland, Michael; Manion, Lauren; Lippincott, Caitlin; Siddiqui, Sauleh; Rehman, Zain; Scheid, Chris; Kratz, Laura; Shilling, Andrea; Jenschke, Matthew; Datin, Katherine; Cranmer, Elizabeth; Reed, Austin; Matuszewski, Derek; Voli, Mark; Ohlson, Erik; Neugebauer, Ali; Ahamed, Aakash; Neal, Conor; Winter, Allison; Becker, Steven
2011-01-01
Recently, widespread valley-bottom damming for water power was identified as a primary control on valley sedimentation in the mid-Atlantic US during the late seventeenth to early twentieth century. The timing of damming coincided with that of accelerated upland erosion during post-European settlement land-use change. In this paper, we examine the impact of local drops in base level on incision into historic reservoir sediment as thousands of ageing dams breach. Analysis of lidar and field data indicates that historic milldam building led to local base-level rises of 2-5 m (typical milldam height) and reduced valley slopes by half. Subsequent base-level fall with dam breaching led to an approximate doubling in slope, a significant base-level forcing. Case studies in forested, rural as well as agricultural and urban areas demonstrate that a breached dam can lead to stream incision, bank erosion and increased loads of suspended sediment, even with no change in land use. After dam breaching, key predictors of stream bank erosion include number of years since dam breach, proximity to a dam and dam height. One implication of this work is that conceptual models linking channel condition and sediment yield exclusively with modern upland land use are incomplete for valleys impacted by milldams. With no equivalent in the Holocene or late Pleistocene sedimentary record, modern incised stream-channel forms in the mid-Atlantic region represent a transient response to both base-level forcing and major changes in land use beginning centuries ago. Similar channel forms might also exist in other locales where historic milling was prevalent.
Coastal marsh response to historical and future sea-level acceleration
Kirwan, M.; Temmerman, S.
2009-01-01
We consider the response of marshland to accelerations in the rate of sea-level rise using two previously described numerical models of marsh elevation. In a model designed for the Scheldt Estuary (Belgium-SW Netherlands), a feedback between inundation depth and suspended sediment concentrations allows marshes to quickly adjust their elevation to a change in the sea-level rise rate. In a model designed for the North Inlet Estuary (South Carolina), a feedback between inundation and vegetation growth allows similar adjustment. Although the models differ in their approach, we find that they predict surprisingly similar responses to sea-level change. Marsh elevations adjust to a step change in the rate of sea-level rise in about 100 years. In the case of a continuous acceleration in the rate of sea-level rise, modeled accretion rates lag behind sea-level rise rates by about 20 years, and never attain equilibrium. Regardless of the style of acceleration, the models predict approximately 6-14 cm of marsh submergence in response to historical sea-level acceleration, and 3-4 cm of marsh submergence in response to a projected scenario of sea-level rise over the next century. While marshes already low in the tidal frame would be susceptible to these depth changes, our modeling results suggest that factors other than historical sea-level acceleration are more important for observations of degradation in most marshes today.
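The inundation-accretion feedback described here can be caricatured in a few lines: if accretion grows with inundation depth, the marsh surface converges toward an equilibrium depth set by the sea-level rise rate. This is a deliberately minimal toy model, not either of the cited numerical models; the coefficient k and all other numbers are invented:

```python
def marsh_equilibrium_depth(years=300, dt=0.1, k=0.03, slr_rate=0.003):
    """Toy marsh model: accretion rate = k * inundation depth (yr⁻¹),
    sea level rises linearly at slr_rate (m/yr). Returns the final
    inundation depth (m) after the simulated period."""
    elevation, sea_level = 0.0, 0.0
    for _ in range(int(years / dt)):
        depth = max(sea_level - elevation, 0.0)  # inundation depth
        elevation += k * depth * dt              # depth-dependent accretion
        sea_level += slr_rate * dt
    return sea_level - elevation

# At equilibrium, accretion balances sea-level rise: k * depth = slr_rate,
# so depth converges to slr_rate / k = 0.1 m for these invented values.
final_depth = marsh_equilibrium_depth()
```

The time constant of adjustment in this caricature is 1/k (about 33 years here), which is the same qualitative behavior as the roughly 100-year adjustment and 20-year accretion lag the abstract reports for the full models.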
Sylvester, Kenneth M.; Brown, Daniel G.; Leonard, Susan H.; Merchant, Emily; Hutchins, Meghan
2015-01-01
Land-use change in the U.S. Great Plains since agricultural settlement in the second half of the nineteenth century has been well documented. While aggregate historical trends are easily tracked, the decision-making of individual farmers is difficult to reconstruct. We use an agent-based model to tell the history of the settlement of the West by simulating farm-level agricultural decision making based on historical data about prices, yields, farming costs, and environmental conditions. The empirical setting for the model is the period between 1875 and 1940 in two townships in Kansas, one in the shortgrass region and the other in the mixed grass region. Annual historical data on yields and prices determine profitability of various land uses and thereby inform decision-making, in conjunction with the farmer’s previous experience and randomly assigned levels of risk aversion. Results illustrating the level of agreement between model output and unique and detailed household-level records of historical land use and farm size suggest that economic behavior and natural endowments account for land change processes to some degree, but are incomplete. Discrepancies are examined to identify missing processes through model experiments, in which we adjust input and output prices, crop yields, agent memory, and risk aversion. These analyses demonstrate how agent-based modeling can be a useful laboratory for thinking about social and economic behavior in the past. PMID:25729323
Bächler, K; Amico, P; Hönger, G; Bielmann, D; Hopfer, H; Mihatsch, M J; Steiger, J; Schaub, S
2010-05-01
Low-level donor-specific HLA-antibodies (HLA-DSA) (i.e. detectable by single-antigen flow beads, but negative by complement-dependent cytotoxicity crossmatch) represent a risk factor for early allograft rejection. The short-term efficacy of an induction regimen consisting of polyclonal anti-T-lymphocyte globulin (ATG) and intravenous immunoglobulins (IvIg) in patients with low-level HLA-DSA is unknown. In this study, we compared 67 patients with low-level HLA-DSA not having received ATG/IvIg induction (historic control) with 37 patients, who received ATG/IvIg induction. The two groups were equal regarding retransplants, HLA-matches, number and class of HLA-DSA. The overall incidence of clinical/subclinical antibody-mediated rejection (AMR) was lower in the ATG/IvIg than in the historic control group (38% vs. 55%; p = 0.03). This was driven by a significantly lower rate of clinical AMR (11% vs. 46%; p = 0.0002). Clinical T-cell-mediated rejection (TCR) was significantly lower in the ATG/IvIg than in the historic control group (0% vs. 50%; p < 0.0001). Within the first year, allograft loss due to AMR occurred in 7.5% in the historic control and in 0% in the ATG/IvIg group. We conclude that in patients with low-level HLA-DSA, ATG/IvIg induction significantly reduces TCR and the severity of AMR, but the high rate of subclinical AMR suggests an insufficient control of the humoral immune response.
Low-level nocturnal wind maximum over the Central Amazon Basin
NASA Technical Reports Server (NTRS)
Greco, Steven; Ulanski, Stanley; Garstang, Michael; Houston, Samuel
1992-01-01
A low-level nocturnal wind maximum is shown to exist over extensive and nearly undisturbed rainforest near the central Amazon city of Manaus. Meteorological data indicate the presence of this nocturnal wind maximum during both the wet and dry seasons of the Central Amazon Basin. Daytime wind speeds, which are characteristically 3-7 m/s between 300 and 1000 m, increase to 10-15 m/s shortly after sunset. The wind-speed maximum is reached in the early evening, with wind speeds remaining high until several hours after sunrise. The nocturnal wind maximum is closely linked to a strong low-level inversion formed by radiational cooling of the rainforest canopy. Surface and low-level pressure gradients between the undisturbed forest and the large Amazon river system and the city of Manaus are shown to be responsible for much of the nocturnal wind increase. The pressure gradients are interpreted as a function of the thermal differences between undisturbed forest and the river/city. The importance of both the frictional decoupling and the horizontal pressure gradient suggests that the nocturnal wind maximum does not occur uniformly over all Amazonia. The low-level nocturnal winds are thought to be pervasive under clear skies and strong surface cooling, and in many places (i.e., near rivers) local pressure gradients enhance them.
NASA Astrophysics Data System (ADS)
Lyddon, Charlotte; Plater, Andy; Brown, Jenny; Leonardi, Nicoletta
2017-04-01
Coastal zones worldwide are subject to short-term, local variations in sea level, which particularly affect communities and industries developed on estuaries. Astronomical high tides, meteorological storm surges and increased river flow present a combined flood hazard. This can elevate water level at the coast above predicted levels, generating extreme water levels. These contributions can also interact to alter the phase and amplitude of tides and surges, and thus cause significant mismatches between the predicted and observed water level. The combined effect of tide, surge, river flow and their interactions is key to understanding and assessing flood risk in estuarine environments for design purposes. Delft3D-FLOW, a hydrodynamic model which solves the unsteady shallow-water equations, is used to assess spatial variability in extreme water levels for a range of historical events of different severity within the Severn Estuary, southwest England. Long-term tide gauge records from Ilfracombe and Mumbles and river level data from Sandhurst are analysed to generate a series of extreme water level events, representing the 90th, 95th and 99th percentile conditions, to force the model boundaries. To separate out the time-varying contributions of tidal, fluvial and meteorological processes and their interactions, the model is run with different physical forcing. A low-pass filter is applied to "de-tide" the residual water elevation, to separate out the time-varying meteorological residual and the tide-surge interactions within the surge. The filtered surge is recombined with the predicted tide so the peak occurs at different times relative to high water. The resulting time series are used to force the model boundary to identify how the interactive processes influence the timing of extreme water level across the estuarine domain. This methodology is first validated using the most extreme event on record to ensure that modelled extreme water levels can be predicted with confidence.
Changes in maximum water level are observed in areas where nuclear assets are located (Hinkley, Oldbury & Berkeley) and further upstream, e.g., close to the tidal limit of the Severn Estuary at Epney. Change in crest shape (area and duration above the MSHW) are analysed to understand changes to flood hazard around the peak of the tide. The work concludes that changes in maximum water level can be attributed to the change in time of the peak of the surge relative to high water, the surge shape (classified by skew and kurtosis) and severity of the event. The results can be used to understand the spatial variability in extreme water levels relative to a tide gauge location, which can then be applied to other management needs in hypertidal estuaries worldwide.
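The de-tiding and recombination step described above can be sketched with synthetic data. This is a minimal illustration, not the study's method: the centred moving-average filter, the M2-like tide, the Gaussian surge and the 3 h shift are all assumed stand-ins for the real low-pass filter and observed records.

```python
import numpy as np

def moving_average_detide(water_level, window):
    """Crude low-pass filter: a centred moving average suppresses the
    tidal oscillation, leaving the slowly varying surge residual."""
    kernel = np.ones(window) / window
    return np.convolve(water_level, kernel, mode="same")

def shift_surge(surge, lag_steps):
    """Recombine the filtered surge with the predicted tide so the
    surge peak occurs at a different time relative to high water."""
    return np.roll(surge, lag_steps)

# Synthetic hourly series: a semidiurnal tide plus a Gaussian surge.
t = np.arange(0, 240)                         # hours
tide = 3.0 * np.sin(2 * np.pi * t / 12.42)    # M2-like tide, 3 m amplitude
surge = 1.5 * np.exp(-((t - 120) / 10) ** 2)  # 1.5 m surge peaking at t = 120
observed = tide + surge

residual = observed - tide                    # meteorological residual
filtered = moving_average_detide(residual, window=25)

# Boundary forcing with the surge peak moved 3 h relative to high water.
total = tide + shift_surge(filtered, 3)
```

Sweeping the lag over a tidal cycle and re-running the hydrodynamic model with each shifted series is what reveals the sensitivity of the estuarine peak water level to surge timing.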
Coastal Tsunami and Risk Assessment for Eastern Mediterranean Countries
NASA Astrophysics Data System (ADS)
Kentel, E.; Yavuz, C.
2017-12-01
Tsunamis are rarely experienced events that have enormous potential to cause large economic destruction of critical infrastructures and facilities, social devastation through mass casualties, and adverse environmental effects such as erosion, accumulation and inundation. Especially over the past two decades, nations have encountered devastating tsunami events. The aim of this study is to investigate risks along the Mediterranean coastline due to probable tsunamis based on simulations using reliable historical data. In order to do this, 50 Critical Regions, CRs, (i.e. city centers, agricultural areas and summer villages) and 43 Critical Infrastructures, CIs, (i.e. airports, ports & marinas and industrial structures) are determined to perform people-centered risk assessment along the Eastern Mediterranean region covering 7 countries: Turkey, Syria, Lebanon, Israel, Egypt, Cyprus, and Libya. Bathymetry of the region is given in Figure 1. In this study, NAMI-DANCE is used to carry out tsunami simulations. The source of a sample tsunami simulation and the maximum wave propagation in the study area for this sample tsunami are given in Figures 2 and 3, respectively. Richter magnitude, focal depth, time of occurrence within a day, and season are considered as the independent parameters of the earthquake. Historical earthquakes are used to generate reliable probability distributions for these parameters. Monte Carlo (MC) simulations are carried out to evaluate overall risks at the coastline. Inundation level, population density, number of passengers or employees, literacy rate, annual income level and presence of humans are used in risk estimations. Within each MC simulation and for each grid in the study area, people-centered tsunami risk for each of the following elements at risk is calculated: i. City centers ii. Agricultural areas iii. Summer villages iv. Ports and marinas v. Airports vi. 
Industrial structures Risk levels at each grid along the shoreline are calculated based on the factors given above, grouped into low, medium and high risk, and used in generating the risk map. The risk map will be useful in prioritizing areas that require development of tsunami mitigation measures.
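The Monte Carlo procedure described above can be sketched as follows. Everything here is illustrative: the earthquake parameter distributions, the inundation proxy and the low/medium/high thresholds are invented for the sketch, not taken from the study.

```python
import random

def sample_event(rng):
    """Draw one earthquake from hypothetical fitted distributions."""
    magnitude = rng.gauss(6.8, 0.6)       # Richter magnitude
    focal_depth_km = rng.uniform(5, 60)   # focal depth
    return magnitude, focal_depth_km

def inundation_level(magnitude, focal_depth_km):
    """Toy proxy: larger, shallower earthquakes inundate more."""
    return max(0.0, (magnitude - 6.0) * 2.0 - focal_depth_km / 40.0)

def risk_class(inundation, population_density):
    """Grid-cell risk grouped into low / medium / high."""
    score = inundation * population_density
    if score < 1.0:
        return "low"
    if score < 5.0:
        return "medium"
    return "high"

rng = random.Random(42)
counts = {"low": 0, "medium": 0, "high": 0}
for _ in range(10_000):
    m, d = sample_event(rng)
    counts[risk_class(inundation_level(m, d), population_density=2.0)] += 1
```

In the actual assessment each grid cell would carry its own population density, literacy rate and income factors, and the per-cell class frequencies over all MC draws would feed the risk map.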
NASA Astrophysics Data System (ADS)
Thatch, L. M.; Maxwell, R. M.; Gilbert, J. M.
2017-12-01
Over the past century, groundwater levels in California's San Joaquin Valley have dropped more than 30 meters in some areas due to excessive groundwater extraction to irrigate agricultural lands and feed a growing population. Between 2012 and 2016 California experienced the worst drought in its recorded history, further exacerbating this groundwater depletion. Due to lack of groundwater regulation, exact quantities of extracted groundwater in California are unknown and hard to quantify. We use a synthesis of integrated hydrologic model simulations and remote sensing products to quantify the impact of drought and groundwater pumping on the Central Valley water tables. The Parflow-CLM model was used to evaluate groundwater depletion in the San Joaquin River basin under multiple groundwater extraction scenarios simulated from pre-drought through recent drought years. Extraction scenarios included pre-development conditions, with no groundwater pumping; historical conditions based on decreasing groundwater level measurements; and estimated groundwater extraction rates calculated from the deficit between the predicted crop water demand, based on county land use surveys, and available surface water supplies. Results were compared to NASA's Gravity Recovery and Climate Experiment (GRACE) data products to constrain water table decline from groundwater extraction during severe drought. This approach untangles various factors leading to groundwater depletion within the San Joaquin Valley both during drought and years of normal recharge to help evaluate which areas are most susceptible to groundwater overdraft, as well as further evaluating the spatially and temporally variable sustainable yield. 
Recent efforts to improve water management and ensure reliable water supplies are highlighted by California's Sustainable Groundwater Management Act (SGMA) which mandates Groundwater Sustainability Agencies to determine the maximum quantity of groundwater that can be withdrawn through the course of a year without undesirable effects. We provide a path forward for how this concept may inform sustainable groundwater use under climate variations and land use changes.
Historical extension of operational NDVI products for livestock insurance in Kenya
NASA Astrophysics Data System (ADS)
Vrieling, Anton; Meroni, Michele; Shee, Apurba; Mude, Andrew G.; Woodard, Joshua; de Bie, C. A. J. M. (Kees); Rembold, Felix
2014-05-01
Droughts induce livestock losses that severely affect Kenyan pastoralists. Recent index insurance schemes have the potential of being a viable tool for insuring pastoralists against drought-related risk. Such schemes require as input a forage scarcity (or drought) index that can be reliably updated in near real-time, and that strongly relates to livestock mortality. Generally, a long record (>25 years) of the index is needed to correctly estimate mortality risk and calculate the related insurance premium. Data from current operational satellites used for large-scale vegetation monitoring span a maximum of 15 years, a time period that is considered insufficient for accurate premium computation. This study examines how operational NDVI datasets compare to, and could be combined with, the non-operational recently constructed 30-year GIMMS AVHRR record (1981-2011) to provide a near-real time drought index with a long term archive for the arid lands of Kenya. We compared six freely available, near-real time NDVI products: five from MODIS and one from SPOT-VEGETATION. Prior to comparison, all datasets were averaged in time for the two vegetative seasons in Kenya, and aggregated spatially at the administrative division level at which the insurance is offered. The feasibility of extending the resulting aggregated drought indices back in time was assessed using jackknifed R2 statistics (leave-one-year-out) for the overlapping period 2002-2011. We found that division-specific models were more effective than a global model for linking the division-level temporal variability of the index between NDVI products. Based on our results, good scope exists for historically extending the aggregated drought index, thus providing a longer operational record for insurance purposes. We showed that this extension may have large effects on the calculated insurance premium. Finally, we discuss several possible improvements to the drought index.
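The leave-one-year-out (jackknifed) R² used to assess extendability can be sketched as follows; the synthetic GIMMS-like and MODIS-like index series are invented stand-ins for the aggregated seasonal NDVI values.

```python
import numpy as np

def jackknife_r2(x, y):
    """Leave-one-year-out skill: refit a simple linear regression with
    each year held out, predict that year, then score all predictions."""
    n = len(x)
    preds = np.empty(n)
    for i in range(n):
        mask = np.arange(n) != i
        slope, intercept = np.polyfit(x[mask], y[mask], 1)
        preds[i] = slope * x[i] + intercept
    ss_res = np.sum((y - preds) ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

# Illustrative: 10 seasons (2002-2011) of one division's aggregated index.
rng = np.random.default_rng(0)
gimms = rng.normal(0.4, 0.1, size=10)
modis = 0.9 * gimms + 0.02 + rng.normal(0, 0.01, size=10)
r2 = jackknife_r2(gimms, modis)
```

Fitting one such model per administrative division, rather than one global model, is what the study found to better capture division-level temporal variability.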
Li, Jing-Jing; Hu, Zi-Min; Sun, Zhong-Min; Yao, Jian-Ting; Liu, Fu-Li; Fresia, Pablo; Duan, De-Lin
2017-12-07
Long-term survival in isolated marginal seas of the China coast during the late Pleistocene ice ages is widely believed to be an important historical factor contributing to population genetic structure in coastal marine species. Whether or not contemporary factors (e.g. long-distance dispersal via coastal currents) continue to shape diversity gradients in marine organisms with high dispersal capability remains poorly understood. Our aim was to explore how historical and contemporary factors influenced the genetic diversity and distribution of the brown alga Sargassum thunbergii, which can drift on surface water, leading to long-distance dispersal. We used 11 microsatellites and the plastid RuBisCo spacer to evaluate the genetic diversity of 22 Sargassum thunbergii populations sampled along the China coast. Population structure and differentiation were inferred based on genotype clustering and pairwise FST and allele-frequency analyses. Integrated genetic analyses revealed two genetic clusters in S. thunbergii that dominated in the Yellow-Bohai Sea (YBS) and East China Sea (ECS), respectively. Higher levels of genetic diversity and variation were detected among populations in the YBS than in the ECS. Bayesian coalescent theory was used to estimate contemporary and historical gene flow. High levels of contemporary gene flow were detected from the YBS (north) to the ECS (south), whereas low levels of historical gene flow occurred between the two regions. Our results suggest that the deep genetic divergence in S. thunbergii along the China coast may result from long-term geographic isolation during glacial periods. The dispersal of S. thunbergii driven by coastal currents may facilitate the admixture between southern and northern regimes. Our findings exemplify how both historical and contemporary forces are needed to understand phylogeographical patterns in coastal marine species with long-distance dispersal.
Vermeulen, Roel; Coble, Joseph B; Lubin, Jay H; Portengen, Lützen; Blair, Aaron; Attfield, Michael D; Silverman, Debra T; Stewart, Patricia A
2010-10-01
We developed quantitative estimates of historical exposures to respirable elemental carbon (REC) for an epidemiologic study of mortality, including lung cancer, among diesel-exposed miners at eight non-metal mining facilities [the Diesel Exhaust in Miners Study (DEMS)]. Because there were no historical measurements of diesel exhaust (DE), historical REC (a component of DE) levels were estimated based on REC data from monitoring surveys conducted in 1998-2001 as part of the DEMS investigation. These values were adjusted for underground workers by carbon monoxide (CO) concentration trends in the mines derived from models of historical CO (another DE component) measurements and DE determinants such as engine horsepower (HP; 1 HP = 0.746 kW) and mine ventilation. CO was chosen to estimate historical changes because it was the most frequently measured DE component in our study facilities and it was found to correlate with REC exposure. Databases were constructed by facility and year with air sampling data and with information on the total rate of airflow exhausted from the underground operations in cubic feet per minute (CFM) (1 CFM = 0.0283 m³ min⁻¹), HP of the diesel equipment in use (ADJ HP), and other possible determinants. The ADJ HP purchased after 1990 (ADJ HP₁₉₉₀(+)) was also included to account for lower emissions from newer, cleaner engines. Facility-specific CO levels, relative to those in the DEMS survey year for each year back to the start of dieselization (1947-1967 depending on facility), were predicted based on models of observed CO concentrations and log-transformed (Ln) ADJ HP/CFM and Ln(ADJ HP₁₉₉₀(+)). The resulting temporal trends in relative CO levels were then multiplied by facility/department/job-specific REC estimates derived from the DEMS surveys' personal measurements to obtain historical facility/department/job/year-specific REC exposure estimates. 
The facility-specific temporal trends of CO levels (and thus the REC estimates) generated from these models indicated that CO concentrations had been generally greater in the past than during the 1998-2001 DEMS surveys, with the highest levels ranging from 100 to 685% greater (median: 300%). These levels generally occurred between 1970 and the early 1980s. A comparison of the CO facility-specific model predictions with CO air concentration measurements from a 1976-1977 survey external to the modeling showed that our model predictions were slightly lower than those observed (median relative difference of 29%; range across facilities: 49 to -25%). In summary, we successfully modeled past CO concentration levels using selected determinants of DE exposure to derive retrospective estimates of REC exposure. The results suggested large variations in REC exposure levels both between and within the underground operations of the facilities and over time. These REC exposure estimates were in a plausible range and were used in the investigation of exposure-response relationships in epidemiologic analyses.
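The back-extrapolation idea can be sketched in miniature: regress Ln(CO) on Ln(ADJ HP/CFM), express predicted CO relative to the survey year, and scale the survey-year REC by that relative trend. The data, coefficients and the survey-year REC value below are synthetic illustrations, not the study's facility-specific models.

```python
import numpy as np

# Synthetic facility history: horsepower in use falls, ventilation improves.
years = np.arange(1970, 2001)
adj_hp = np.linspace(4000, 2500, years.size)     # diesel horsepower in use
cfm = np.linspace(300_000, 600_000, years.size)  # exhaust airflow (CFM)
ln_hp_cfm = np.log(adj_hp / cfm)

# Synthetic "observed" CO generated from a known log-linear relationship.
rng = np.random.default_rng(1)
ln_co = 1.2 * ln_hp_cfm + 6.0 + rng.normal(0, 0.05, years.size)

# Fit Ln(CO) ~ Ln(ADJ HP/CFM), then predict CO for every year.
beta, alpha = np.polyfit(ln_hp_cfm, ln_co, 1)
co_pred = np.exp(alpha + beta * ln_hp_cfm)

# Relative CO trend anchored to the survey year (here: the last year),
# multiplied by the survey-year REC to give historical REC estimates.
relative_co = co_pred / co_pred[-1]
rec_survey = 200.0                               # assumed survey-year REC, µg/m³
rec_history = rec_survey * relative_co
```

Because horsepower per unit of ventilation was higher in the past, the reconstructed REC series is highest in the earliest years, mirroring the study's finding that past CO (and hence REC) levels exceeded those of the 1998-2001 surveys.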
The safety of high-hazard water infrastructures in the U.S. Pacific Northwest in a changing climate
NASA Astrophysics Data System (ADS)
Chen, X.; Hossain, F.; Leung, L. R.
2017-12-01
The safety of large and aging water infrastructures is gaining attention in water management given the accelerated rate of change in landscape, climate and society. In current engineering practice, such safety is ensured by designing infrastructure for the Probable Maximum Precipitation (PMP). Recently, several numerical modeling approaches have been proposed to modernize the conventional and ad hoc PMP estimation approach. However, the underlying physics have not been investigated and thus differing PMP estimates are obtained without clarity on their interpretation. In this study, we present a hybrid approach that takes advantage of both traditional engineering practice and modern climate science to estimate PMP for current and future climate conditions. The traditional PMP approach is improved and applied to five statistically downscaled CMIP5 model outputs, producing an ensemble of PMP estimates in the Pacific Northwest (PNW) during the historical (1970-2016) and future (2050-2099) time periods. The new historical PMP estimates are verified against the traditional estimates. PMP in the PNW will increase by 50%±30% of the current level by 2099 under the RCP8.5 scenario. Most of the increase is caused by warming, which mainly affects moisture availability through increased sea surface temperature, with minor contributions from changes in storm efficiency in the future. Moisture track changes tend to reduce the future PMP. Compared with extreme precipitation, PMP exhibits higher internal variability. Thus long-term records of high-quality data in both precipitation and related meteorological fields (temperature, wind fields) are required to reduce uncertainties in the ensemble PMP estimates.
The legacy of leaded gasoline in bottom sediment of small rural reservoirs
Juracek, K.E.; Ziegler, A.C.
2006-01-01
The historical and ongoing lead (Pb) contamination caused by the 20th-century use of leaded gasoline was investigated by an analysis of bottom sediment in eight small rural reservoirs in eastern Kansas, USA. For the reservoirs that were completed before or during the period of maximum Pb emissions from vehicles (i.e., the 1940s through the early 1980s) and that had a major highway in the basin, increased Pb concentrations reflected the pattern of historical leaded gasoline use. For at least some of these reservoirs, residual Pb is still being delivered from the basins. There was no evidence of increased Pb deposition for the reservoirs completed after the period of peak Pb emissions and (or) located in relatively remote areas with little or no highway traffic. Results indicated that several factors affected the magnitude and variability of Pb concentrations in reservoir sediment, including traffic volume, reservoir age, and basin size. The increased Pb concentrations at four reservoirs exceeded the U.S. Environmental Protection Agency threshold-effects level (30.2 mg kg⁻¹) and frequently exceeded a consensus-based threshold-effects concentration (35.8 mg kg⁻¹) for possible adverse biological effects. For two reservoirs it was estimated that it will take at least 20 to 70 yr for Pb in the newly deposited sediment to return to baseline (pre-1920s) concentrations (30 mg kg⁻¹) following the phase-out of leaded gasoline. The buried sediment with elevated Pb concentrations may pose a future environmental concern if the reservoirs are dredged, the dams are removed, or the dams fail. © ASA, CSSA, SSSA.
Batke, Sven P; Jocque, Merlijn; Kelly, Daniel L
2014-01-01
High energy weather events are often expected to play a substantial role in biotic community dynamics and large scale diversity patterns, but their contribution is hard to prove. Currently, observations are limited to the documentation of accidental records after the passing of such events. A more comprehensive approach is synthesising weather events in a location over a long time period, ideally at a high spatial resolution and on a large geographic scale. We provide a detailed overview on how to generate hurricane exposure data at a meso-climate level for a specific region. As a case study we modelled landscape hurricane exposure in Cusuco National Park (CNP), Honduras with a resolution of 50 m×50 m patches. We calculated actual hurricane exposure vulnerability site scores (EVSS) through the combination of a wind pressure model, an exposure model that can incorporate simple wind dynamics within a 3-dimensional landscape, and the integration of historical hurricane data. The EVSS was calculated as a weighted function of site exposure, hurricane frequency and maximum wind velocity. Eleven hurricanes were found to have affected CNP between 1995 and 2010. The highest EVSS values were predicted to be on South- and South-East-facing sites of the park. Ground validation demonstrated that the South solution (i.e. the South wind inflow direction) explained most of the observed tree damage (90% of the observed tree damage in the field). Incorporating historical data into the model to calculate actual hurricane exposure values, instead of potential exposure values, increased the model fit by 50%.
PMID:24614168
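The abstract describes the EVSS only as a weighted function of site exposure, hurricane frequency and maximum wind velocity; a toy version might look like the following, where the weights, the normalisation constants and the linear form are all assumptions for illustration.

```python
def evss(exposure, frequency, max_wind_ms, w=(0.5, 0.2, 0.3)):
    """Exposure vulnerability site score for one 50 m x 50 m patch.
    `exposure` is assumed already in [0, 1]; frequency and wind are
    normalised by illustrative caps before weighting."""
    wind_norm = min(max_wind_ms / 85.0, 1.0)  # ~85 m/s: strongest Cat-5 winds
    freq_norm = min(frequency / 11.0, 1.0)    # 11 hurricanes affected CNP 1995-2010
    return w[0] * exposure + w[1] * freq_norm + w[2] * wind_norm

# A south-facing, repeatedly hit patch scores higher than a sheltered one.
exposed = evss(exposure=0.9, frequency=8, max_wind_ms=60)
sheltered = evss(exposure=0.1, frequency=2, max_wind_ms=30)
```

Evaluating such a score for every patch in the park, with exposure taken from the 3-D wind model and frequency/velocity from the historical hurricane record, yields the exposure map that was validated against observed tree damage.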
Understanding observed and simulated historical temperature trends in California
NASA Astrophysics Data System (ADS)
Bonfils, C. J.; Duffy, P. B.; Santer, B. D.; Lobell, D. B.; Wigley, T. M.
2006-12-01
In our study, we attempt 1) to improve our understanding of observed historical temperature trends and their underlying causes in the context of regional detection of climate change and 2) to identify possible neglected forcings and errors in the model response to imposed forcings at the origin of inconsistencies between models and observations. From eight different observational datasets, we estimate California-average temperature trends over 1950- 1999 and compare them to trends from a suite of IPCC control simulations of natural internal climate variability. We find that the substantial night-time warming occurring from January to September is inconsistent with model-based estimates of natural internal climate variability, and thus requires one or more external forcing agents to be explained. In contrast, we find that a significant day-time warming occurs only from January to March. Our confidence in these findings is increased because there is no evidence that the models systematically underestimate noise on interannual and decadal timescales. However, we also find that IPCC simulations of the 20th century that include combined anthropogenic and natural forcings are not able to reproduce such a pronounced seasonality of the trends. Our first hypothesis is that the warming of Californian winters over the second half of the twentieth century is associated with changes in large-scale atmospheric circulation that are likely to be human-induced. This circulation change is underestimated in the historical simulations, which may explain why the simulated warming of Californian winters is too weak. We also hypothesize that the lack of a detectable observed increase in summertime maximum temperature arises from a cooling associated with large-scale irrigation. This cooling may have, until now, counteracted the warming induced by increasing greenhouse gases and urbanization effects. 
Omitting this forcing from the simulations can result in overestimated summertime maximum temperature trends. We conduct an empirical study based on observed climate and irrigation changes to evaluate this assumption.
Success in Developing Regions: World Records Evolution through a Geopolitical Prism
Guillaume, Marion; Helou, Nour El; Nassif, Hala; Berthelot, Geoffroy; Len, Stéphane; Thibault, Valérie; Tafflet, Muriel; Quinquis, Laurent; Desgorces, François; Hermine, Olivier; Toussaint, Jean-François
2009-01-01
A previous analysis of World Records (WR) has revealed the potential limits of human physiology through athletes' personal commitment. The impact of political factors on sports has only been studied through Olympic medals and results. Here we studied 2876 WR from 63 nations in four summer disciplines. We propose three new indicators and show the impact of historical, geographical and economical factors on the regional WR evolution. The south-eastward path of the weighted annual barycenter (i.e. the average of country coordinates weighted by the WR number) shows the emergence of East Africa and China in WR archives. The home WR ratio decreased from 79.9% before the Second World War to 23.3% in 2008, underlining sports globalization. Annual Cumulative Proportions (ACP, i.e. the cumulative sum of the WR annual rate) highlight the regional rates of progression. For all regions, the mean slope of ACP during the Olympic era is 0.0101, with a maximum between 1950 and 1989 (0.0156). For European countries, this indicator reflects major historical events (slowdown for western countries after 1945, slowdown for eastern countries after 1990). The mean North-American ACP slope is 0.0029 over the century, with an acceleration between 1950 and 1989 at 0.0046. Russia takes off in 1935 and slows down in 1988 (0.0038). For Eastern Europe, maximal progression is seen between 1970 and 1989 (0.0045). China starts in 1979 with a maximum between 1990 and 2008 (0.0021), while other regions have largely declined (mean ACP slope for all other countries = 0.0011). A similar trend is observed for the evolution of the 10 best performers. The national analysis of WR reveals a precise and quantifiable link between the sport performances of a country, its historical or geopolitical context, and its steps of development. PMID:19862324
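The ACP indicator and its mean slope can be sketched directly from the definition given above (cumulative sum of the annual WR rate); the record counts below are invented for illustration.

```python
import numpy as np

def acp(record_counts):
    """Annual Cumulative Proportion: cumulative sum of the annual
    world-record counts, normalised so the full period ends at 1."""
    counts = np.asarray(record_counts, dtype=float)
    return np.cumsum(counts) / counts.sum()

def mean_slope(years, acp_values):
    """Mean ACP slope over a period, via a least-squares linear fit."""
    slope, _intercept = np.polyfit(years, acp_values, 1)
    return slope

# Toy record history for a region, 1950-1989: one WR per year
# gives a perfectly linear ACP with slope 1/40 per year.
years = np.arange(1950, 1990)
counts = np.ones(years.size)
curve = acp(counts)
s = mean_slope(years, curve)
```

Comparing such slopes across sub-periods (e.g. 1950-1989 vs. after 1990) is what lets the paper tie accelerations and slowdowns in record production to geopolitical events.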
Maximum Historical Seismic Intensity Map of S. Miguel Island (azores)
NASA Astrophysics Data System (ADS)
Silveira, D.; Gaspar, J. L.; Ferreira, T.; Queiroz, G.
The Azores archipelago is situated in the Atlantic Ocean where the American, African and Eurasian lithospheric plates meet. The so-called Azores Triple Junction, located in the area where the Terceira Rift, a NW-SE to WNW-ESE fault system with a dextral component, intersects the Mid-Atlantic Ridge, with an approximate N-S direction, dominates its geological setting. S. Miguel Island is located in the eastern segment of the Terceira Rift, showing a high diversity of volcanic and tectonic structures. It is the largest Azorean island and includes three active trachytic central volcanoes with calderas (Sete Cidades, Fogo and Furnas) placed at the intersection of the NW-SE Terceira Rift regional faults with an E-W deep fault system thought to be a relic of a Mid-Atlantic Ridge transform fault. N-S and NE-SW faults also occur in this context. Basaltic cinder cones emplaced along NW-SE fractures link those major volcanic structures. The easternmost part of the island comprises an inactive trachytic central volcano (Povoação) and an old basaltic volcanic complex (Nordeste). Since the settlement of the island, early in the XV century, several destructive earthquakes have occurred in the Azores region. At least 11 events hit S. Miguel Island with high intensity, some of which caused several deaths and significant damage. The analysis of historical documents allowed reconstruction of the history and impact of all those earthquakes, and new intensity maps using the 1998 European Macroseismic Scale were produced for each event. The data were then integrated to obtain the maximum historical seismic intensity map of S. Miguel. This tool is regarded as an important document for hazard assessment and risk mitigation, given that it indicates the location of dangerous seismogenic zones and provides a comprehensive set of data to be applied in land-use planning, emergency planning and building construction.
2007-12-01
Acronyms: TMDL, Total Maximum Daily Load; USLE, Universal Soil Loss Equation; VTM, Virtual Transect Model; WEPP, Water Erosion Prediction Project; WMS, Web… …models which do not reproduce the large-storm dominance of sediment yield (e.g., the Universal Soil Loss Equation [USLE]/RUSLE) significantly underestimate… …technology is the USLE/RUSLE soil erosion prediction technology. The USLE (Wischmeier and Smith 1978) is the simplest and historically most widely…
History and Technology Developments of Radio Frequency (RF) Systems for Particle Accelerators
NASA Astrophysics Data System (ADS)
Nassiri, A.; Chase, B.; Craievich, P.; Fabris, A.; Frischholz, H.; Jacob, J.; Jensen, E.; Jensen, M.; Kustom, R.; Pasquinelli, R.
2016-04-01
This article attempts to give a historical account and review of technological developments and innovations in radio frequency (RF) systems for particle accelerators. The evolution from electrostatic field to the use of RF voltage suggested by R. Wideröe made it possible to overcome the shortcomings of electrostatic accelerators, which limited the maximum achievable electric field due to voltage breakdown. After an introduction, we will provide reviews of technological developments of RF systems for particle accelerators.
B. Buma; P.E. Hennon; A.L. Bidlack; J.F. Baichtal; T.A. Ager; G. Streveier
2014-01-01
The velocity of species dispersal post-last glacial maximum (LGM) is an interesting question from both paleo-historical and contemporary perspectives. The apparent time lag between a location's climate becoming suitable for a given species and that species' arrival at that location has important implications for our understanding of the relationship between climate...
The Planetary Nebulae Luminosity Function (PNLF): current perspectives
NASA Astrophysics Data System (ADS)
Méndez, Roberto H.
2017-10-01
This paper starts with a brief historical review about the PNLF and its use as a distance indicator. Then the PNLF distances are compared with Surface Brightness Fluctuations (SBF) distances and Tip of the Red Giant Branch (TRGB) distances. A Monte Carlo method to generate simulated PNLFs is described, leading to the last subject: recent progress in reproducing the expected maximum final mass in old stellar populations, a stellar astrophysics enigma that has been challenging us for quite some time.
Deuterated scintillators and their application to neutron spectroscopy
NASA Astrophysics Data System (ADS)
Febbraro, M.; Lawrence, C. C.; Zhu, H.; Pierson, B.; Torres-Isea, R. O.; Becchetti, F. D.; Kolata, J. J.; Riggins, J.
2015-06-01
Deuterated scintillators have been used as a tool for neutron spectroscopy without Neutron Time-of-Flight (n-ToF) for more than 30 years. This article will provide a brief historical overview of the technique and current uses of deuterated scintillators in the UM-DSA and DESCANT arrays. Pulse-shape discrimination and spectrum unfolding with the maximum-likelihood expectation maximization algorithm will be discussed. Experimental unfolding and cross section results from measurements of (d,n), (3He,n) and (α,n) reactions are shown.
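The maximum-likelihood expectation maximization (MLEM) unfolding mentioned above has a compact multiplicative update. A minimal sketch on a synthetic response matrix (not the actual UM-DSA/DESCANT response functions):

```python
import numpy as np

# Synthetic stand-in for a detector response: matrix A maps a true
# neutron spectrum x to a measured pulse-height spectrum y = A @ x.
rng = np.random.default_rng(0)
n_true, n_meas = 10, 15
A = np.tril(rng.random((n_meas, n_true)) + 0.1)   # broad, smearing response
A /= A.sum(axis=0, keepdims=True)                  # column-normalized

x_true = np.exp(-np.linspace(0.0, 3.0, n_true))    # assumed true spectrum
y = A @ x_true                                     # noiseless "measurement"

# MLEM update: x <- x * A^T(y / (A x)) / A^T 1
x = np.ones(n_true)
sensitivity = A.sum(axis=0)
for _ in range(2000):
    x *= (A.T @ (y / (A @ x))) / sensitivity

residual = np.linalg.norm(A @ x - y) / np.linalg.norm(y)
```

The update preserves non-negativity of the spectrum, and with a column-normalized response it conserves total counts from the first iteration on.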
Hobbs, Brian P.; Sargent, Daniel J.; Carlin, Bradley P.
2014-01-01
Assessing between-study variability in the context of conventional random-effects meta-analysis is notoriously difficult when incorporating data from only a small number of historical studies. In order to borrow strength, historical and current data are often assumed to be fully homogeneous, but this can have drastic consequences for power and Type I error if the historical information is biased. In this paper, we propose empirical and fully Bayesian modifications of the commensurate prior model (Hobbs et al., 2011) extending Pocock (1976), and evaluate their frequentist and Bayesian properties for incorporating patient-level historical data using general and generalized linear mixed regression models. Our proposed commensurate prior models lead to preposterior admissible estimators that facilitate alternative bias-variance trade-offs to those offered by pre-existing methodologies for incorporating historical data from a small number of historical studies. We also provide a sample analysis of a colon cancer trial comparing time-to-disease progression using a Weibull regression model. PMID:24795786
Estimating the long-term historic evolution of exposure to flooding of coastal populations
NASA Astrophysics Data System (ADS)
Stevens, A. J.; Clarke, D.; Nicholls, R. J.; Wadey, M. P.
2015-06-01
Coastal managers face the task of assessing and managing flood risk. This requires knowledge of the area of land, the number of people, properties and other infrastructure potentially affected by floods. Such analyses are usually static; i.e. they only consider a snapshot of the current situation. This misses the opportunity to learn about the role of key drivers of historical changes in flood risk, such as development and population rise in the coastal flood plain, as well as sea-level rise. In this paper, we develop and apply a method to analyse the temporal evolution of residential population exposure to coastal flooding. It uses readily available data in a GIS environment. We examine how population and sea-level change have modified exposure over two centuries in two neighbouring coastal sites: Portsea and Hayling Islands on the UK south coast. The analysis shows that flood exposure changes as a result of increases in population, changes in coastal population density and sea level rise. The results indicate that to date, population change is the dominant driver of the increase in exposure to flooding in the study sites, but climate change may outweigh this in the future. A full analysis of changing flood risk is not possible as data on historic defences and wider vulnerability are not available. Hence, the historic evolution of flood exposure is as close as we can get to a historic evolution of flood risk. The method is applicable anywhere that suitable floodplain geometry, sea level and population data sets are available and could be widely applied, and will help inform coastal managers of the time evolution in coastal flood drivers.
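At its core, the static exposure snapshot described above is an overlay count of residents whose ground elevation lies below a given water level. A toy sketch with invented parcel data (not the Portsea/Hayling datasets):

```python
# Toy version of the exposure calculation: count residents whose ground
# elevation lies below a given extreme water level (numbers invented).
residents = [
    ("parcel A", 120, 1.8),   # (name, population, ground elevation in m)
    ("parcel B", 45, 3.1),
    ("parcel C", 300, 2.2),
]

def exposed(parcels, water_level):
    """Total population on parcels whose elevation is below water_level."""
    return sum(pop for _, pop, elev in parcels if elev < water_level)

count = exposed(residents, 2.5)
```

Repeating this count for a time series of population surfaces and water levels gives the temporal evolution of exposure the paper analyses.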
Annotated Bibliography of Relative Sea Level Change
1991-09-01
Millennial Basis, Mörner and Karlén, eds., D. Reidel Publishing Company, pp 571-604. Over the last few years, considerable attention has been given to...historic erosion rates. Curiously, the ability to model statistically the historic shore erosion rate is best on those reaches already substantially
NASA Astrophysics Data System (ADS)
Diffenbaugh, N. S.
2017-12-01
Severe heat provides one of the most direct, acute, and rapidly changing impacts of climate on people and ecosystems. Theory, historical observations, and climate model simulations all suggest that global warming should increase the probability of hot events that fall outside of our historical experience. Given the acute impacts of extreme heat, quantifying the probability of historically unprecedented hot events at different levels of climate forcing is critical for climate adaptation and mitigation decisions. However, in practice that quantification presents a number of methodological challenges. This presentation will review those methodological challenges, including the limitations of the observational record and of climate model fidelity. The presentation will detail a comprehensive approach to addressing these challenges. It will then demonstrate the application of that approach to quantifying uncertainty in the probability of record-setting hot events in the current climate, as well as periods with lower and higher greenhouse gas concentrations than the present.
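One standard ingredient of such a quantification, fitting a generalized extreme value (GEV) distribution to annual maxima and evaluating its upper tail, can be sketched as follows. The record and the data here are synthetic, and the presentation's point is precisely that estimates of this kind carry large methodological uncertainty:

```python
import numpy as np
from scipy import stats

# Synthetic 60-year record of annual maximum temperatures (°C).
rng = np.random.default_rng(7)
annual_max = rng.gumbel(loc=38.0, scale=1.2, size=60)

# ML fit of a GEV to the annual maxima, then the fitted probability that
# a given future year exceeds the observed record.
c, loc, scale = stats.genextreme.fit(annual_max)
record = annual_max.max()
p_record = stats.genextreme.sf(record, c, loc, scale)
```

Under stationarity one would expect roughly 1/(n+1) per year; comparing fits under different forcing scenarios is what turns this into a climate-change attribution exercise.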
NASA Astrophysics Data System (ADS)
Liu, Yuanming; Huang, Changwei; Dai, Qionglin
2018-06-01
Strategy imitation plays a crucial role in evolutionary dynamics when we investigate the spontaneous emergence of cooperation under the framework of evolutionary game theory. Generally, when an individual updates his strategy, he needs to choose a role model to learn from. In previous studies, individuals choose role models randomly from their neighbors. In recent works, researchers have considered that individuals choose role models according to neighbors' attractiveness, characterized by the present network topology or historical payoffs. Here, we associate an individual's attractiveness with his strategy persistence, which characterizes how frequently he changes his strategy. We introduce into the evolutionary games a preferential parameter α, describing the nonlinear correlation between the selection probability and the strategy persistence, and a memory length M for individuals. We investigate the effects of α and M on cooperation. Our results show that cooperation could be promoted when α > 0 and, at the same time, M > 1, which corresponds to the situation in which individuals are inclined to select neighbors with relatively higher persistence levels during the evolution. Moreover, we find that the cooperation level could reach a maximum at an optimal memory length when α > 0. Our work sheds light on how to promote cooperation through preferential selection based on strategy persistence and a limited memory length.
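The persistence-weighted choice of a role model can be sketched as below; the power-law rule p_j ∝ P_j**α is an assumption based on the abstract's description of the preferential parameter α:

```python
import random

def choose_role_model(neighbors, persistence, alpha):
    """Pick a neighbor with probability proportional to persistence**alpha.

    alpha > 0 favors neighbors who rarely switch strategies; alpha = 0
    recovers the uniform random choice of the classical update rule.
    """
    weights = [persistence[j] ** alpha for j in neighbors]
    return random.choices(neighbors, weights=weights, k=1)[0]

# Hypothetical persistence levels: e.g. rounds since each player's last switch.
persistence = {0: 1.0, 1: 5.0, 2: 10.0}
role_model = choose_role_model([0, 1, 2], persistence, alpha=2.0)
```

With α = 2, player 2 is chosen roughly 100/126 of the time in this toy setting, illustrating how α > 0 concentrates imitation on the most persistent neighbors.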
Hysteresis in simulations of malaria transmission
NASA Astrophysics Data System (ADS)
Yamana, Teresa K.; Qiu, Xin; Eltahir, Elfatih A. B.
2017-10-01
Malaria transmission is a complex system and in many parts of the world is closely related to climate conditions. However, studies on environmental determinants of malaria generally consider only concurrent climate conditions and ignore the historical or initial conditions of the system. Here, we demonstrate the concept of hysteresis in malaria transmission, defined as non-uniqueness of the relationship between malaria prevalence and concurrent climate conditions. We show the dependence of simulated malaria transmission on initial prevalence and the initial level of human immunity in the population. Using realistic time series of environmental variables, we quantify the effect of hysteresis in a modeled population. In a set of numerical experiments using HYDREMATS, a field-tested mechanistic model of malaria transmission, the simulated maximum malaria prevalence depends on both the initial prevalence and the initial level of human immunity in the population. We found the effects of initial conditions to be of comparable magnitude to the effects of interannual variability in environmental conditions in determining malaria prevalence. The memory associated with this hysteresis effect is longer in high transmission settings than in low transmission settings. Our results show that efforts to simulate and forecast malaria transmission must consider the exposure history of a location as well as the concurrent environmental drivers.
The flushing and exchange of the South China Sea derived from salt and mass conservation
NASA Astrophysics Data System (ADS)
Liu, Yang; Bye, John A. T.; You, Yuzhu; Bao, Xianwen; Wu, Dexing
2010-07-01
In this paper, we use two kinds of hydrographic data, historical cruise data and Array for Real-time Geostrophic Oceanography (Argo) float data, together with atmospheric data, to study the water exchange between the South China Sea (SCS) and the Pacific Ocean through the Luzon Strait. The annual mean distributions of temperature and salinity at five different levels in the SCS and the adjacent Pacific Ocean are presented, which indicate the occurrence of active water exchange through the Luzon Strait. The flushing and exchange of the SCS are then determined by the application of salt and mass conservation in a multi-layered thermohaline system, using an estimate of the net rainfall obtained from reanalysis data. The results show that the annual mean flushing time is 44±8 months with an inflow rate of 11±2 Sv (1 Sv = 10⁶ m³ s⁻¹), part of which recirculates at a deeper level through the Luzon Strait, the remainder (6±2 Sv) forming the SCS throughflow. The diffusive influx of salt is also estimated and accounts for about 10% of the total influx; hence advection dominates over diffusion in the water exchange through the Luzon Strait. The seasonal cycle of exchange shows a maximum in autumn and winter of about twice the annual mean rate.
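In its simplest one-box form, the salt-and-mass-conservation estimate reduces to a Knudsen-type balance. The sketch below uses illustrative values chosen to land near the quoted 11 Sv inflow and 44-month flushing time; the actual study solves a multi-layered thermohaline system:

```python
# One-box Knudsen-type balance for exchange through a single strait:
#   mass: Q_in + R = Q_out
#   salt: Q_in * S_in = Q_out * S_out
SV = 1e6                       # 1 Sv in m^3/s

S_in, S_out = 34.6, 34.0       # assumed inflow / outflow salinities (psu)
R = 0.2 * SV                   # assumed net freshwater gain (m^3/s)
V = 1.25e15                    # assumed exchanged basin volume (m^3)

Q_in = R * S_out / (S_in - S_out)        # solving the two balances
Q_out = Q_in + R
t_flush_months = V / Q_in / (3600.0 * 24.0 * 30.0)
```

With these (invented) numbers the balance gives an inflow of about 11 Sv and a flushing time of about 43 months, the same order as the paper's multi-layer result.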
Socio-historical paths of the male breadwinner model - an explanation of cross-national differences.
Pfau-Effinger, Birgit
2004-09-01
It is often assumed that in the historical transformation to modern industrial society, the integration of women into the economy occurred everywhere as a three-phase process: in pre-modern societies, the extensive integration of women into societal production; then, their wide exclusion with the shift to industrial society; and finally, their re-integration into paid work during the further course of modernization. Results from the author's own international comparative study of the historical development of the family and the economic integration of women have shown that this was decidedly not the case even for western Europe. Hence the question arises: why is there such historical variation in the development and importance of the housewife model of the male breadwinner family? In the article, an explanation is presented. It is argued that the historical development of the urban bourgeoisie was especially significant for the historical destiny of this cultural model: the social and political strength of the urban bourgeoisie had central societal importance in the imposition of the housewife model of the male breadwinner family as the dominant family form in a given society. In this, it is necessary to distinguish between the imposition of the breadwinner marriage at the cultural level on the one hand, and at the level of social practice in the family on the other.
Digitalizing historical high resolution water level data: Challenges and opportunities
NASA Astrophysics Data System (ADS)
Holinde, Lars; Hein, Hartmut; Barjenbruch, Ulrich
2017-04-01
Historical tide-gauge data offer the opportunity to determine variations in key water-level characteristics and to analyse past extreme events (storm surges). This information is important for calculating future trends and scenarios, but considerable effort is needed to digitalize the gauge sheets and quality-control the resulting historical data. Two main sources of inaccuracy in historical time series can be identified. The first arises from the digitalization of the historical data, e.g. the general quality of the sheets, multiple crossing lines of the observed water levels, and additional comments on the sheet describing problems or extra information recorded during the measurements. The second arises from the measurements themselves, including incorrect positioning of the sheets, trouble with the tide gauge, and maintenance. Errors resulting from these problems include flat lines, discontinuities and outliers. The characterization of outliers in particular has to be conducted carefully, to distinguish real outliers from genuine extreme events. The quality-control methods involve statistics, machine learning and neural networks. These will be described and applied to three time series from tide-gauge stations on the coast of Lower Saxony, Germany. The difficulties encountered and the outcomes of the quality-control process will be presented and explained, together with a first glance at analyses of these time series.
Federal Register 2010, 2011, 2012, 2013, 2014
2011-12-16
... Approaches To Derive a Maximum Contaminant Level Goal for Perchlorate AGENCY: Environmental Protection Agency... maximum contaminant level goal (MCLG) for perchlorate. DATES: Nominations should be submitted by January... perchlorate. In 2011, EPA announced its decision (76 FR 7762-7767) to regulate perchlorate under the Safe...
NASA Astrophysics Data System (ADS)
Main, Ian; Irving, Duncan; Musson, Roger; Reading, Anya
1999-05-01
Earthquake populations have recently been shown to have many similarities with critical-point phenomena, with fractal scaling of source sizes (energy or seismic moment) corresponding to the observed Gutenberg-Richter (G-R) frequency-magnitude law holding at low magnitudes. At high magnitudes, the form of the distribution depends on the seismic moment release rate Ṁ0 and the maximum magnitude m_max. The G-R law requires a sharp truncation at an absolute maximum magnitude for finite Ṁ0. In contrast, the gamma distribution has an exponential tail which allows a soft or `credible' maximum to be determined by negligible contribution to the total seismic moment release. Here we apply both distributions to seismic hazard in the mainland UK and its immediate continental shelf, constrained by a mixture of instrumental, historical and neotectonic data. Tectonic moment release rates for the seismogenic part of the lithosphere are calculated from a flexural-plate model for glacio-isostatic recovery, constrained by vertical deformation rates from tide-gauge and geomorphological data. Earthquake focal mechanisms in the UK show near-vertical strike-slip faulting, with implied directions of maximum compressive stress approximately in the NNW-SSE direction, consistent with the tectonic model. Maximum magnitudes are found to be in the range 6.3-7.5 for the G-R law, or 7.0-8.2 m_L for the gamma distribution, which compare with a maximum observed in the time period of interest of 6.1 m_L. The upper bounds are conservative estimates, based on 100 per cent seismic release of the observed vertical neotectonic deformation. Glacio-isostatic recovery is predominantly an elastic rather than a seismic process, so the true value of m_max is likely to be nearer the lower end of the quoted range.
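The moment-balance logic behind m_max can be illustrated with a sharply truncated G-R law: choose m_max so that the integrated moment release matches the tectonic moment rate. Parameter values below are invented for illustration, not the paper's UK estimates:

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

b = 1.0             # G-R b-value (assumed)
N0 = 1.0            # events per year above magnitude 0 (assumed)
Mdot_target = 8e12  # tectonic moment rate to match, N·m per year (assumed)

def moment(m):
    return 10.0 ** (1.5 * m + 9.1)        # Hanks-Kanamori moment, N·m

def cumulative_moment_rate(m_max):
    # integrate n(m) * M0(m) with G-R magnitude density
    # n(m) = N0 * b * ln(10) * 10**(-b*m) from 0 to m_max
    integrand = lambda m: N0 * b * np.log(10.0) * 10.0 ** (-b * m) * moment(m)
    val, _ = quad(integrand, 0.0, m_max)
    return val

m_max = brentq(lambda m: cumulative_moment_rate(m) - Mdot_target, 4.0, 9.0)
```

The same balance with a tapered (gamma) distribution would replace the hard cutoff with an exponential corner, giving the `soft' maximum the abstract contrasts with the truncated G-R form.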
Bever, Aaron J.; MacWilliams, Michael L.; Herbold, Bruce; Brown, Larry R.; Feyrer, Frederick V.
2016-01-01
Long-term fish sampling data from the San Francisco Estuary were combined with detailed three dimensional hydrodynamic modeling to investigate the relationship between historical fish catch and hydrodynamic complexity. Delta Smelt catch data at 45 stations from the Fall Midwater Trawl (FMWT) survey in the vicinity of Suisun Bay were used to develop a quantitative catch-based station index. This index was used to rank stations based on historical Delta Smelt catch. The correlations between historical Delta Smelt catch and 35 quantitative metrics of environmental complexity were evaluated at each station. Eight metrics of environmental conditions were derived from FMWT data and 27 metrics were derived from model predictions at each FMWT station. To relate the station index to conceptual models of Delta Smelt habitat, the metrics were used to predict the station ranking based on the quantified environmental conditions. Salinity, current speed, and turbidity metrics were used to predict the relative ranking of each station for Delta Smelt catch. Including a measure of the current speed at each station improved predictions of the historical ranking for Delta Smelt catch relative to similar predictions made using only salinity and turbidity. Current speed was also found to be a better predictor of historical Delta Smelt catch than water depth. The quantitative approach developed using the FMWT data was validated using the Delta Smelt catch data from the San Francisco Bay Study. Complexity metrics in Suisun Bay were evaluated during 2010 and 2011. This analysis indicated that a key to historical Delta Smelt catch is the overlap of low salinity, low maximum velocity, and low Secchi depth regions. This overlap occurred in Suisun Bay during 2011, and may have contributed to higher Delta Smelt abundance in 2011 than in 2010 when the favorable ranges of the metrics did not overlap in Suisun Bay.
Lehmann, A; Scheffler, Ch; Hermanussen, M
2010-02-01
Recent progress in modelling individual growth has been achieved by combining the principal component analysis and the maximum likelihood principle. This combination models growth even in incomplete sets of data and in data obtained at irregular intervals. We re-analysed late 18th century longitudinal growth of German boys from the boarding school Carlsschule in Stuttgart. The boys, aged 6-23 years, were measured at irregular 3-12 monthly intervals during the period 1771-1793. At the age of 18 years, mean height was 1652 mm, but height variation was large. The shortest boy reached 1474 mm, the tallest 1826 mm. Measured height closely paralleled modelled height, with mean difference of 4 mm, SD 7 mm. Seasonal height variation was found. Low growth rates occurred in spring and high growth rates in summer and autumn. The present study demonstrates that combining the principal component analysis and the maximum likelihood principle enables growth modelling in historic height data also. Copyright (c) 2009 Elsevier GmbH. All rights reserved.
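The combination of principal components with likelihood-based score estimation can be sketched as follows; this is a stylized stand-in for the authors' method, with an invented toy cohort rather than the Carlsschule data:

```python
import numpy as np

# Toy cohort: heights (mm) on a regular age grid, built from a mean curve
# plus two invented "growth-style" components.
rng = np.random.default_rng(1)
ages = np.linspace(6.0, 23.0, 35)
mean_curve = 1100.0 + 600.0 * (1.0 - np.exp(-(ages - 6.0) / 6.0))
comps = np.vstack([np.sin(ages / 23.0 * np.pi), ages / 23.0])
scaled = comps * np.array([[40.0], [25.0]])        # mm amplitudes
scores = rng.normal(0.0, 1.0, (200, 2))
cohort = mean_curve + scores @ scaled

# Principal components of the cohort (top-2 right singular vectors).
_, _, Vt = np.linalg.svd(cohort - mean_curve, full_matrices=False)
pcs = Vt[:2]

# One individual measured at only five irregular ages.
obs_idx = np.array([0, 7, 15, 22, 30])
true_scores = np.array([1.2, -0.5])
y = mean_curve[obs_idx] + true_scores @ scaled[:, obs_idx]

# ML score estimate under i.i.d. Gaussian errors = least squares on the
# available measurements; then reconstruct the full growth curve.
A = pcs[:, obs_idx].T
beta, *_ = np.linalg.lstsq(A, y - mean_curve[obs_idx], rcond=None)
reconstructed = mean_curve + beta @ pcs
```

Because the component scores are estimated only from the ages actually measured, the same machinery handles incomplete records and irregular measurement intervals, which is the point of the approach.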
NASA Astrophysics Data System (ADS)
Yan, Tiezhu; Shen, Zhenyao; Heng, Lee; Dercon, Gerd
2016-04-01
Information about future climate change is important for formulating adaptation and mitigation strategies. In this study, a statistical downscaling model (SDSM) was established using both NCEP reanalysis data and ground observations (daily maximum and minimum temperature) for the period 1971-2010; the calibrated model was then applied to generate future maximum and minimum temperature projections for the Haihe River Basin, China, using predictors from two CMIP5 models (MPI-ESM-LR and CNRM-CM5) under two Representative Concentration Pathways (RCP2.6 and RCP8.5) for the period 2011-2100. Changes in annual and seasonal maximum and minimum temperature relative to the baseline period were computed after bias correction, and the spatial distribution and trends of annual maximum and minimum temperature were analyzed using ensemble projections. The results show that: (1) the downscaling model reproduced daily and monthly mean maximum and minimum temperature well over the whole basin; (2) bias was observed when using historical predictors from the CMIP5 models, with CNRM-CM5 performing slightly worse than MPI-ESM-LR; (3) annual mean maximum and minimum temperature under both scenarios will increase in the 2020s, 2050s and 2070s, with maximum temperature increasing more than minimum temperature; (4) warming in the mountains and along the coastline is markedly stronger than in other parts of the study basin; (5) annual maximum and minimum temperature show a significant upward trend under RCP8.5, at 0.37 and 0.39 °C per decade respectively, whereas under RCP2.6 the trend is upward in the 2020s and then declines in the 2050s and 2070s, at about 0.01 °C per decade for both.
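Bias correction of downscaled temperatures is often done by quantile mapping; SDSM itself is a regression-based statistical downscaling tool, so the sketch below is a simplified stand-in with synthetic data rather than the study's actual procedure:

```python
import numpy as np

def quantile_map(model_hist, obs_hist, model_future):
    """Map future model values through the historical model -> observation
    quantile relation (empirical quantile mapping)."""
    # percentile of each future value within the historical model run
    ranks = np.searchsorted(np.sort(model_hist), model_future) / len(model_hist)
    ranks = np.clip(ranks, 0.0, 1.0)
    return np.quantile(obs_hist, ranks)

rng = np.random.default_rng(0)
obs = rng.normal(30.0, 4.0, 1000)          # observed Tmax, °C (synthetic)
model_hist = rng.normal(27.0, 5.0, 1000)   # biased model, historical period
model_fut = rng.normal(29.0, 5.0, 1000)    # model, future (+2 °C shift)

corrected = quantile_map(model_hist, obs, model_fut)
```

Mapping the historical run through itself reproduces the observed distribution, while the future run keeps its warming signal but loses the mean and variance bias.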
Implementing Solar Photovoltaic Projects on Historic Buildings and in Historic Districts
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kandt, A.; Hotchkiss, E.; Walker, A.
2011-01-01
Despite a global recession, the number of photovoltaic (PV) installations in the United States grew 30% from 2008 to 2009. A number of trends point toward continued growth of new PV installations. The efficiency of solar panels is increasing, while installation costs are going down. At the same time, federal, state, and local regulations are requiring that greater amounts of energy must come from renewable sources. Incentives for solar power technology implementation are being created and regulatory barriers removed. Corporations and governments are focusing on solar power to demonstrate leadership in environmental sustainability and resource conservation. Architects and builders are including PV arrays as a way to meet green building standards and property owners are seeking PV as a way to reduce their utility bills, as well as their carbon footprints. This publication focuses on the implementation of PV systems on historic properties. Many private property owners, as well as local, state, and national government entities, are seeking guidance on how best to integrate solar PV installations on historic buildings. Historic preservationists maintain that preserving, reusing, and maintaining historic structures is a key sustainable design strategy while also recognizing the importance of accommodating renewable energy technologies where they are appropriate. In some cases, however, conflicts have arisen over the installation of PV panels on historic properties. Addressing these conflicts and providing guidance regarding solutions and best practices is an important step toward resolving or eliminating barriers. Historic properties and districts in the United States provide tangible connections to the nation's past. Thousands of buildings, sites, districts, structures, and objects have been recognized for their historic and architectural significance.
Local, state, and national designations of historic properties provide recognition, protection, and incentives that help to preserve those properties for future generations. At the national level, the National Register of Historic Places includes more than 86,000 listings, which encompass a total of more than 1.6 million historic resources. State registers of historic places also provide recognition and protection for historic sites and districts. Locally, more than 2,400 communities have established historic preservation ordinances. Typically implemented through zoning overlays, these local land use regulations manage changes to hundreds of thousands of historic properties. Over a period of 2 years (2007 and 2008) the U.S. Department of Energy (DOE) designated 25 major U.S. cities as Solar America Cities. DOE provided financial and technical assistance to help the cities develop comprehensive approaches to accelerate the adoption of solar energy technologies. The Solar America Cities partnerships represent the foundation of DOE's larger Solar America Communities program. As a part of this program, DOE identified the implementation of solar projects on historic properties and in historic districts as one area to address. A workshop titled 'Implementing Solar Projects on Historic Buildings and in Historic Districts' was held in Denver, Colorado, in June of 2010. Participants included representatives from the solar industry as well as historic preservationists from nonprofit organizations and government agencies at the local, state, and national levels. The workshop provided an opportunity to gain a common understanding of solar technologies and historic preservation procedures and priorities. The workshop participants also discussed some of the challenges involved in locating PV systems on historic properties and identified potential solutions. 
This publication is based on the discussions that occurred at this workshop and the recommendations that were developed by participants. Ideas expressed by participants in the workshop, and included in this document, do not necessarily reflect the opinion of any government council, agency, or entity.
Lombard, Pamela J.
2018-04-30
The U.S. Geological Survey, in cooperation with the International Joint Commission, compiled historical data on regulated streamflows and lake levels and estimated unregulated streamflows and lake levels on Forest City Stream at Forest City, Maine, and East Grand Lake on the United States-Canada border between Maine and New Brunswick to study the effects on streamflows and lake levels if two or all three dam gates are left open. Historical regulated monthly mean streamflows in Forest City Stream at the outlet of East Grand Lake (referred to as Grand Lake by Environment Canada) fluctuated between 114 cubic feet per second (ft³/s) (3.23 cubic meters per second [m³/s]) in November and 318 ft³/s (9.01 m³/s) in September from 1975 to 2015 according to Environment Canada streamgaging data. Unregulated monthly mean streamflows at this location estimated from regression equations for unregulated sites range from 59.2 ft³/s (1.68 m³/s) in September to 653 ft³/s (18.5 m³/s) in April. Historical lake levels in East Grand Lake fluctuated between 431.3 feet (ft) (131.5 meters [m]) in October and 434.0 ft (132.3 m) in May from 1969 to 2016 according to Environment Canada lake level data for East Grand Lake. Average monthly lake levels modeled by using the estimated hydrology for unregulated flows, and an outflow rating built from a hydraulic model with all gates at the dam open, range from 427.7 ft (130.4 m) in September to 431.1 ft (131.4 m) in April. Average monthly lake levels would likely be from 1.8 to 5.4 ft (0.55 to 1.6 m) lower with the gates at the dam opened than they have been historically. The greatest lake level changes would be from June through September.
Gulf Coast Community College's Memory Project
ERIC Educational Resources Information Center
Burrell, Matthew D.
2005-01-01
Gulf Coast Community College in Panama City, Florida, is celebrating its fifty-year anniversary in 2007. Maintained by the library, the school's archives represent its historical contributions at the local and national levels. The Gulf Coast Community College library is ensuring the school's historical significance through the digitization of its…
Merz, Clayton; Catchen, Julian M; Hanson-Smith, Victor; Emerson, Kevin J; Bradshaw, William E; Holzapfel, Christina M
2013-01-01
Herein we tested the repeatability of phylogenetic inference based on high-throughput sequencing by increasing taxon sampling, using our previously published techniques in the pitcher-plant mosquito, Wyeomyia smithii, in North America. We sampled 25 natural populations drawn from different localities near 21 previous collection localities and used these new data to construct a second, independent phylogeny, expressly to test the reproducibility of phylogenetic patterns. Comparison of trees between the two data sets based on both maximum parsimony and maximum likelihood with Bayesian posterior probabilities showed close correspondence in the grouping of the most southern populations into clear clades. However, discrepancies emerged, particularly in the middle of W. smithii's current range near the previous maximum extent of the Laurentide Ice Sheet, especially concerning the most recent common ancestor of the mountain and northern populations. Combining all 46 populations from both studies into a single maximum parsimony tree and taking into account the post-glacial historical biogeography of the associated flora provided an improved picture of W. smithii's range expansion in North America. In a more general sense, we propose that extensive taxon sampling, especially in areas of known geological disruption, is key to a comprehensive approach to phylogenetics that leads to biologically meaningful phylogenetic inference.
NASA Astrophysics Data System (ADS)
Love, J. J.; Rigler, E. J.; Pulkkinen, A. A.; Riley, P.
2015-12-01
An examination is made of the hypothesis that the statistics of magnetic-storm-maximum intensities are the realization of a log-normal stochastic process. Weighted least-squares and maximum-likelihood methods are used to fit log-normal functions to -Dst storm-time maxima for the years 1957-2012; bootstrap analysis is used to establish confidence limits on forecasts. Both methods provide fits that are reasonably consistent with the data; both also provide fits that are superior to those that can be made with a power-law function. In general, the maximum-likelihood method provides forecasts having tighter confidence intervals than those provided by weighted least-squares. From extrapolation of maximum-likelihood fits: a magnetic storm with intensity exceeding that of the 1859 Carrington event, -Dst > 850 nT, occurs about 1.13 times per century, with a wide 95% confidence interval of [0.42, 2.41] times per century; a 100-yr magnetic storm is identified as having -Dst > 880 nT (greater than Carrington) but a wide 95% confidence interval of [490, 1187] nT. This work is partially motivated by United States National Science and Technology Council and Committee on Space Research and International Living with a Star priorities and strategic plans for the assessment and mitigation of space-weather hazards.
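The maximum-likelihood log-normal fit and the implied rate of Carrington-class storms can be sketched on synthetic storm maxima (the real analysis uses the 1957-2012 -Dst catalogue and bootstrap confidence limits):

```python
import numpy as np
from scipy import stats

# Synthetic stand-in for a -Dst storm-maxima catalogue: ~10 storms/yr
# over 56 years, log-normally distributed (parameters invented).
rng = np.random.default_rng(42)
maxima = rng.lognormal(mean=np.log(120.0), sigma=0.6, size=560)  # nT

# Two-parameter ML fit (location fixed at zero).
shape, loc, scale = stats.lognorm.fit(maxima, floc=0)
events_per_year = len(maxima) / 56.0

# Rate of storms exceeding the Carrington-class threshold of 850 nT.
p_exceed = stats.lognorm.sf(850.0, shape, loc, scale)
rate_per_century = p_exceed * events_per_year * 100.0
```

Bootstrapping this fit (refitting resampled catalogues) is what yields the confidence intervals quoted in the abstract.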
Wang, Tongli; Hamann, Andreas; Spittlehouse, Dave; Carroll, Carlos
2016-01-01
Large volumes of gridded climate data have become available in recent years including interpolated historical data from weather stations and future predictions from general circulation models. These datasets, however, are at various spatial resolutions that need to be converted to scales meaningful for applications such as climate change risk and impact assessments or sample-based ecological research. Extracting climate data for specific locations from large datasets is not a trivial task and typically requires advanced GIS and data management skills. In this study, we developed a software package, ClimateNA, that facilitates this task and provides a user-friendly interface suitable for resource managers and decision makers as well as scientists. The software locally downscales historical and future monthly climate data layers into scale-free point estimates of climate values for the entire North American continent. The software also calculates a large number of biologically relevant climate variables that are usually derived from daily weather data. ClimateNA covers 1) 104 years of historical data (1901–2014) in monthly, annual, decadal and 30-year time steps; 2) three paleoclimatic periods (Last Glacial Maximum, Mid Holocene and Last Millennium); 3) three future periods (2020s, 2050s and 2080s); and 4) annual time-series of model projections for 2011–2100. Multiple general circulation models (GCMs) were included for both paleo and future periods, and two representative concentration pathways (RCP4.5 and 8.5) were chosen for future climate data. PMID:27275583
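The kind of local downscaling ClimateNA performs can be caricatured as bilinear interpolation of a coarse monthly grid plus an elevation adjustment; the constant lapse rate below is an assumption for illustration, whereas ClimateNA derives its elevation adjustments empirically:

```python
def bilinear(grid, x, y):
    """Interpolate a unit-spaced grid[row][col] at fractional (x=col, y=row)."""
    x0, y0 = int(x), int(y)
    dx, dy = x - x0, y - y0
    return (grid[y0][x0] * (1 - dx) * (1 - dy) + grid[y0][x0 + 1] * dx * (1 - dy)
            + grid[y0 + 1][x0] * (1 - dx) * dy + grid[y0 + 1][x0 + 1] * dx * dy)

# Invented 2x2 coarse cells: monthly mean Tmax (°C) and cell-mean elevation (m).
temp = [[8.0, 9.0], [6.0, 7.0]]
cell_elev = [[500.0, 400.0], [900.0, 800.0]]

x, y, point_elev = 0.25, 0.5, 750.0       # query point and its true elevation
LAPSE = -6.5 / 1000.0                      # assumed constant lapse rate, °C/m

t_interp = bilinear(temp, x, y)
e_interp = bilinear(cell_elev, x, y)
t_point = t_interp + LAPSE * (point_elev - e_interp)
```

The elevation term is what makes the output "scale-free": two nearby points in the same coarse cell get different values if their elevations differ.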
Vorsino, Adam E.; King, Cynthia B.; Haines, William P.; Rubinoff, Daniel
2013-01-01
Survey data over the last 100 years indicate that populations of the endemic Hawaiian leafroller moth, Omiodes continuatalis (Wallengren) (Lepidoptera: Crambidae), have declined, and the species is extirpated from large portions of its original range. Declines have been attributed largely to the invasion of non-native parasitoid species into Hawaiian ecosystems. To quantify changes in O. continuatalis distribution, we applied the maximum entropy modeling approach using Maxent. The model referenced historical (1892–1967) and current (2004–2008) survey data, to create predictive habitat suitability maps which illustrate the probability of occurrence of O. continuatalis based on historical data as contrasted with recent survey results. Probability of occurrence is predicted based on the association of biotic (vegetation) and abiotic (proxy of precipitation, proxy of temperature, elevation) environmental factors with 141 recent and historic survey locations, 38 of which O. continuatalis were collected from. Models built from the historical and recent surveys suggest habitat suitable for O. continuatalis has changed significantly over time, decreasing both in quantity and quality. We reference these data to examine the potential effects of non-native parasitoids as a factor in changing habitat suitability and range contraction for O. continuatalis. Synthesis and applications: Our results suggest that the range of O. continuatalis, an endemic Hawaiian species of conservation concern, has shrunk as its environment has degraded. Although few range shifts have been previously demonstrated in insects, such contractions caused by pressure from introduced species may be important factors in insect extinctions. PMID:23300954
Recording and Analysis of the Rec Yard at Alcatraz Island
NASA Astrophysics Data System (ADS)
Warden, R.; Toz, T. K.; Everett, M.; DeSmet, T.; Billingsley, A.; Hagin, J.
2013-07-01
In the summer of 2012, students and professors from the Concrete Industry Management (CIM) program at California State University, Chico, along with their partners at the National Park Service, invited Texas A&M students and professors to join forces and perform a condition assessment of the Recreation Yard at Alcatraz Island in San Francisco Bay. The Recreation Yard is heavily visited by tourists, who are drawn to the island because of its history as a maximum-security prison in the 20th c. Because of its history, first as a military fort in the 19th c., later as a military prison, and finally as a federal prison, many difficult historical and preservation-related questions exist. This team was formed to begin research on those questions with respect to the Recreation Yard. This paper and presentation focus on the integration of documentation technologies employed to aid the research necessary for answering preservation and historical questions regarding the Recreation Yard. Since the yard was constructed on top of the historic 19th c. masonry fort, we were also asked to locate the tunnels below the Recreation Yard and determine their relationship with the walls. Teams were formed to perform non-destructive testing of concrete walls to determine the size and location of rebar, ground-penetrating radar surveys to locate the masonry tunnels, and photogrammetry and laser scanning to provide both overall and detailed dimensional information on the current state of material decay.
Wang, Tongli; Hamann, Andreas; Spittlehouse, Dave; Carroll, Carlos
2016-01-01
Large volumes of gridded climate data have become available in recent years including interpolated historical data from weather stations and future predictions from general circulation models. These datasets, however, are at various spatial resolutions that need to be converted to scales meaningful for applications such as climate change risk and impact assessments or sample-based ecological research. Extracting climate data for specific locations from large datasets is not a trivial task and typically requires advanced GIS and data management skills. In this study, we developed a software package, ClimateNA, that facilitates this task and provides a user-friendly interface suitable for resource managers and decision makers as well as scientists. The software locally downscales historical and future monthly climate data layers into scale-free point estimates of climate values for the entire North American continent. The software also calculates a large number of biologically relevant climate variables that are usually derived from daily weather data. ClimateNA covers 1) 104 years of historical data (1901-2014) in monthly, annual, decadal and 30-year time steps; 2) three paleoclimatic periods (Last Glacial Maximum, Mid Holocene and Last Millennium); 3) three future periods (2020s, 2050s and 2080s); and 4) annual time-series of model projections for 2011-2100. Multiple general circulation models (GCMs) were included for both paleo and future periods, and two representative concentration pathways (RCP4.5 and 8.5) were chosen for future climate data.
Responses to historical climate change identify contemporary threats to diversity in Dodecatheon.
Oberle, Brad; Schaal, Barbara A
2011-04-05
Anthropogenic climate change may threaten many species with extinction. However, species at risk today survived global climate change in recent geological history. Describing how habitat tracking and adaptation allowed species to survive warming since the end of the Pleistocene can indicate the relative importance of dispersal and natural selection during climate change. By taking this historical perspective, we can identify how contemporary climate change could interfere with these mechanisms and threaten the most vulnerable species. We focused on a group of closely related plant species in the genus Dodecatheon (Primulaceae) in eastern North America. Two rare species (Dodecatheon amethystinum and Dodecatheon frenchii) that are endemic to patchy cool cliffs may be glacial relicts whose ranges constricted following the last glacial maximum. Alternatively, these species may be extreme ecotypes of a single widespread species (Dodecatheon meadia) that quickly adapted to microclimatic differences among habitats. We test support for these alternative scenarios by combining ecophysiological and population genetic data at a regional scale. An important ecophysiological trait distinguishes rare species from D. meadia, but only a few northern populations of D. amethystinum are genetically distinctive. These relict populations indicate that habitat tracking did occur with historical climate change. However, relatively stronger evidence for isolation by distance and admixture suggests that local adaptation and genetic introgression have been at least as important. The complex response of Dodecatheon to historical climate change suggests that contemporary conservation efforts should accommodate evolutionary processes, in some cases by restoring genetic connectivity between ecologically differentiated populations.
Bangert, M; Gil, H; Oliva, J; Delgado, C; Vega, T; DE Mateo, S; Larrauri, A
2017-03-01
The intensity of annual Spanish influenza activity is currently estimated from historical data of the Spanish Influenza Sentinel Surveillance System (SISSS) using qualitative indicators from the European Influenza Surveillance Network. However, these indicators are subjective, being based on qualitative comparison with historical influenza-like illness rates. This pilot study assesses the implementation of Moving Epidemic Method (MEM) intensity levels during the 2014-2015 influenza season within the 17 sentinel networks covered by SISSS, comparing them with historically reported indicators. The intensity levels reported at the epidemic peak of the influenza wave and those obtained with MEM, at both national and regional levels, did not differ significantly (P = 0.74, Wilcoxon signed-rank test), suggesting that implementing MEM would have limited disruptive effects on the dynamics of notification within the surveillance system. MEM allows objective monitoring of influenza surveillance and standardizes the criteria for comparing the intensity of influenza epidemics across regions of Spain. Following this pilot study, MEM has been adopted to harmonize the reporting of intensity levels of influenza activity in Spain, starting in the 2015-2016 season.
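The MEM grading step can be illustrated with a minimal sketch: fit a log-normal distribution to historical season peak rates and read upper percentiles off as intensity thresholds. This is a stdlib-only illustration of the idea under stated assumptions, not the official MEM implementation; the function name and percentile cut-offs are assumptions.

```python
import math
from statistics import NormalDist, mean, stdev

def mem_intensity_thresholds(season_peaks, percentiles=(0.40, 0.90, 0.975)):
    """Illustrative MEM-style intensity grading: fit a log-normal
    distribution to historical season peak rates and return the upper
    percentiles used to separate medium / high / very-high intensity."""
    logs = [math.log(p) for p in season_peaks]
    nd = NormalDist(mean(logs), stdev(logs))   # normal fit on the log scale
    return [math.exp(nd.inv_cdf(q)) for q in percentiles]
```

With, say, seven historical peak ILI rates, the three returned values are the boundaries above which a season's peak would be graded medium, high, or very high.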
Natural gas annual 1992: Volume 1
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
1993-11-22
This document provides information on the supply and disposition of natural gas to a wide audience, including industry, consumers, Federal and State agencies, and educational institutions. The 1992 data are presented in a sequence that follows natural gas (including supplemental supplies) from its production to its end use. Tables summarizing natural gas supply and disposition from 1988 to 1992 are given for each Census Division and each State. Annual historical data are shown at the national level. Volume 2 of this report presents State-level historical data.
Saint-Jacques, Nathalie; Brown, Patrick; Nauta, Laura; Boxall, James; Parker, Louise; Dummer, Trevor J B
2018-01-01
Arsenic in drinking water impacts health. The highest levels of arsenic have historically been observed in Taiwan and Bangladesh, but the contaminant affects the health of people globally. Strong associations have been confirmed between exposure to high levels of arsenic in drinking water and a wide range of diseases, including cancer. However, at lower levels of exposure, especially near the current World Health Organization regulatory limit (10 μg/L), this association is inconsistent, as the effects are mostly extrapolated from high-exposure studies. This ecological study used Bayesian inference to model the relative risk of bladder and kidney cancer at these lower concentrations (0-2 μg/L, 2-5 μg/L, and ≥5 μg/L of arsenic) in 864 bladder and 525 kidney cancers diagnosed in the study area, Nova Scotia, Canada, between 1998 and 2010. The model included proxy measures of lifestyle (e.g. smoking) and accounted for spatial dependencies. Overall, bladder cancer risk was 16% (2-5 μg/L) and 18% (≥5 μg/L) greater than that of the referent group (<2 μg/L), with posterior probabilities of 88% and 93% for these risks being above 1. Effect sizes for kidney cancer were 5% (2-5 μg/L) and 14% (≥5 μg/L) above that of the referent group (<2 μg/L), with probabilities of 61% and 84%. High-risk areas were common in southwestern areas, where higher arsenic levels are associated with the local geology. The study suggests an increased bladder cancer, and potentially kidney cancer, risk from exposure to drinking-water arsenic levels within the current World Health Organization maximum acceptable concentration. Copyright © 2017 Elsevier Ltd. All rights reserved.
Federal Register 2010, 2011, 2012, 2013, 2014
2012-07-06
... Increase the Maximum Reactor Power Level, Florida Power & Light Company, St. Lucie, Units 1 and 2 AGENCY... amendment for Renewed Facility Operating License Nos. DPR-67 and NPF-16, issued to Florida Power & Light... St. Lucie County, Florida. The proposed license amendment would increase the maximum thermal power...
Annual maximum and minimum lake levels for Indiana, 1942-85
Fowler, Kathleen K.
1988-01-01
Indiana has many natural and manmade lakes. Lake-level data are available for 217 lakes. These data were collected during water years 1942-85 by use of staff gages and, more recently, continuous recorders. The period of record at each site ranges from 1 to 43 years. Data from the lake stations have been compiled, and maximum and minimum lake levels for each year of record are reported. In addition to annual maximum and minimum lake levels, each lake station is described by gage location, surface area, drainage area, period of record, datum of gage, gage type, established legal level, lake level control, inlets and outlets, and extremes for the period of record.
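Compiling annual maximum and minimum levels from a continuous gage record is a simple aggregation. A sketch with pandas, using synthetic daily data and the October-September water-year convention used for the 1942-85 records (the series and numbers are illustrative, not Indiana data):

```python
import numpy as np
import pandas as pd

# Synthetic daily lake levels for three calendar years (illustrative only)
days = pd.date_range("1980-01-01", "1982-12-31", freq="D")
levels = pd.Series(600 + np.sin(np.arange(len(days)) / 58.0), index=days)

# Water years run 1 October through 30 September, labeled by ending year
water_year = levels.index.year + (levels.index.month >= 10).astype(int)

# Annual maximum and minimum level for each water year of record
annual = levels.groupby(water_year).agg(["max", "min"])
```

Partial water years at the start and end of the record show up as their own rows, which mirrors how short periods of record (as little as 1 year) appear in the compilation.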
40 CFR 142.40 - Requirements for a variance.
Code of Federal Regulations, 2010 CFR
2010-07-01
... responsibility from any requirement respecting a maximum contaminant level of an applicable national primary... maximum contaminant levels of such drinking water regulations despite application of the best technology...
Conflicting Interpretations of Scientific Pedagogy
ERIC Educational Resources Information Center
Galamba, Arthur
2016-01-01
Not surprisingly historical studies have suggested that there is a distance between concepts of teaching methods, their interpretations and their actual use in the classroom. This issue, however, is not always pitched to the personal level in historical studies, which may provide an alternative insight on how teachers conceptualise and engage with…
Cloze Procedure. An Historical Review.
ERIC Educational Resources Information Center
Wardell, David
Cloze procedure, a testing method which systematically deletes words in written prose and then measures the accuracy of the information is reviewed from a historical viewpoint. Redundancy is placed in a verbal context and can be noted on three separate levels of language: (1) surface syntactic structure; (2) deep syntactic structure; and (3)…
A Means to an End: A Middle Level Teacher's Purposes for Using Historical Simulations
ERIC Educational Resources Information Center
Gradwell, Jill M.; DiCamillo, Lorrei
2013-01-01
Historical simulations are often criticized for being superficial, reinforcing negative stereotypes, and skewing students' view of history. Simulation critics argue if inexperienced teachers implement simulations, they may adversely influence students' psychological development, especially if students take roles as perpetrators or victims.…
Schooling in Malaysia: Historical Trends and Recent Enrollments. A Rand Note.
ERIC Educational Resources Information Center
De Tray, Dennis
The educational history of Malaysia is discussed; policy, historical trends, and school attendance are emphasized. Increased schooling and increased returns to schooling have been essential ingredients in Malaysia's economic growth. Schooling levels have risen rapidly since independence and, while all Malaysians have shared substantially in this…
Economic activity and trends in ambient air pollution.
Davis, Mary E; Laden, Francine; Hart, Jaime E; Garshick, Eric; Smith, Thomas J
2010-05-01
One challenge in assessing the health effects of human exposure to air pollution in epidemiologic studies is the lack of widespread historical air pollutant monitoring data with which to characterize past exposure levels. Given the availability of long-term economic data, we relate economic activity levels to patterns in vehicle-related particulate matter (PM) over a 30-year period in New Jersey, USA, to provide insight into potential historical surrogate markers of air pollution. We used statewide unemployment and county-level trucking industry characteristics to estimate historical coefficient of haze (COH), a marker of vehicle-related PM predominantly from diesel exhaust. A total of 5,920 observations were included across 25 different locations in New Jersey between 1971 and 2003. A mixed-modeling approach was employed to estimate the impact of economic indicators on measured COH. The model explained approximately 50% of the variability in COH as estimated by the overall R2 value. Peaks and lows in unemployment tracked negatively with similar extremes in COH, whereas employment in the trucking industry was positively associated with COH. Federal air quality regulations also played a large and significant role in reducing COH levels over the study period. This new approach outlines an alternative method to reconstruct historical exposures that may greatly aid epidemiologic research on specific causes of health effects from urban air pollution. Economic activity data provide a potential surrogate marker of changes in exposure levels over time in the absence of direct monitoring data for chronic disease studies, but more research in this area is needed.
Analysis of changes in water-level dynamics at selected sites in the Florida Everglades
Conrads, Paul; Benedict, Stephen T.
2013-01-01
The historical modification and regulation of the hydrologic patterns in the Florida Everglades have resulted in changes in the ecosystem of South Florida and the Florida Everglades. Since the 1970s, substantial focus has been given to the restoration of the Everglades ecosystem. The U.S. Geological Survey through its Greater Everglades Priority Ecosystem Science and National Water-Quality Assessment Programs has been providing scientific information to resource managers to assist in the Everglades restoration efforts. The current investigation included development of a simple method to identify and quantify changes in historical hydrologic behavior within the Everglades that could be used by researchers to identify responses of ecological communities to those changes. Such information then could be used by resource managers to develop appropriate water-management practices within the Everglades to promote restoration. The identification of changes in historical hydrologic behavior within the Everglades was accomplished by analyzing historical time-series water-level data from selected gages in the Everglades using (1) break-point analysis of cumulative Z-scores to identify hydrologic changes and (2) cumulative water-level frequency distribution curves to evaluate the magnitude of those changes. This analytical technique was applied to six long-term water-level gages in the Florida Everglades. The break-point analysis for the concurrent period of record (1978–2011) identified 10 common periods of changes in hydrologic behavior at the selected gages. The water-level responses at each gage for the 10 periods displayed similarity in fluctuation patterns, highlighting the interconnectedness of the Florida Everglades hydrologic system. 
While the patterns were similar, the analysis also showed that larger fluctuations in water levels between periods occurred in Water Conservation Areas 2 and 3 in contrast to those in Water Conservation Area 1 and the Everglades National Park. Results from the analysis indicate that the cumulative Z-score curve, in conjunction with cumulative water-level frequency distribution curves, can be a useful tool in identifying and quantifying changes in historical hydrologic behavior within the Everglades. In addition to the analysis, a spreadsheet application was developed to assist in applying these techniques to time-series water-level data at gages within the Everglades and is included with this report.
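The two analysis steps described above (cumulative Z-scores to flag changes in hydrologic behavior, then inspection of the curve) can be sketched minimally. The single-extreme break pick below is an illustrative CUSUM-style simplification, not the report's actual spreadsheet application, and the function names are assumptions:

```python
import numpy as np

def cumulative_z(series):
    """Cumulative Z-score curve: sustained rises or falls mark periods
    persistently above or below the long-term mean water level."""
    series = np.asarray(series, dtype=float)
    z = (series - series.mean()) / series.std()
    return np.cumsum(z)

def largest_break(series):
    """Minimal break-point pick: the index where the cumulative
    Z-score curve turns, i.e. where its absolute value is largest."""
    c = cumulative_z(series)
    return int(np.argmax(np.abs(c)))
```

Because the Z-scores are standardized against the record's own mean, the cumulative curve returns to zero at the end of the record, so its extremes correspond to regime turns rather than overall drift.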
Enhanced Light Absorption in Fluorinated Ternary Small-Molecule Photovoltaics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Eastham, Nicholas D.; Dudnik, Alexander S.; Harutyunyan, Boris
2017-06-14
Using small-molecule donor (SMD) semiconductors in organic photovoltaics (OPVs) has historically afforded lower power conversion efficiencies (PCEs) than their polymeric counterparts. The PCE difference is attributed to shorter conjugated backbones, resulting in reduced intermolecular interactions. Here, a new pair of SMDs is synthesized based on the diketopyrrolopyrrole-benzodithiophene-diketopyrrolopyrrole (BDT-DPP2) skeleton but having fluorinated and fluorine-free aromatic side-chain substituents. Ternary OPVs having varied ratios of the two SMDs with PC61BM as the acceptor exhibit tunable open-circuit voltages (Vocs) between 0.833 and 0.944 V due to a fluorination-induced shift in energy levels and the electronic "alloy" formed from the miscibility of the two SMDs. A 15% increase in PCE is observed at the optimal ternary SMD ratio, with the short-circuit current density (Jsc) significantly increased to 9.18 mA/cm2. The origin of the Jsc enhancement is analyzed via charge generation, transport, and diffuse reflectance measurements, and is attributed to increased optical absorption arising from a maximum in film crystallinity at this SMD ratio, observed by grazing-incidence wide-angle X-ray scattering.
Long-term trend of foE in European higher middle latitudes
NASA Astrophysics Data System (ADS)
Laštovička, Jan
2016-04-01
Long-term changes and trends have been observed throughout the ionosphere below its maximum. As concerns the E region, historical global data (Bremer, 2008) show a predominantly slightly positive trend, even though some stations show a negative trend. Here we use data from the two European stations with the best long data series for the ionospheric E layer, Slough/Chilton and Juliusruh, over 1975-2014 (40 years). Noon-time medians (10-14 LT) are analyzed. The trend pattern after removing the solar influence is complex. For yearly average values at Chilton, foE first decreases during 1975-1990 by about 0.1 MHz, then the trend levels off or increases slightly during 1990-2004, and finally foE decreases again during 2004-2014 (again by about 0.1 MHz, but over a shorter period). Juliusruh yields a similar pattern. A similar analysis is also performed for individual months to check the seasonal dependence of the trends. The stability of the relation between solar activity and foE is tested to clarify the potential role of this factor in the apparent foE trend.
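The trend-extraction step described above (removing the solar-activity dependence before fitting a trend) can be sketched as a two-stage regression. The linear regression form and the F10.7-style solar proxy are assumptions for illustration, not the paper's exact procedure:

```python
import numpy as np

def residual_trend(years, foe, solar):
    """Regress foE on a solar-activity proxy (e.g. F10.7), then fit a
    linear trend to the residuals; returns the slope in MHz per year."""
    years = np.asarray(years, float)
    foe = np.asarray(foe, float)
    solar = np.asarray(solar, float)
    # Stage 1: remove the solar dependence by least squares
    A = np.column_stack([np.ones_like(solar), solar])
    coef, *_ = np.linalg.lstsq(A, foe, rcond=None)
    resid = foe - A @ coef
    # Stage 2: linear trend of the solar-corrected residuals
    return np.polyfit(years, resid, 1)[0]
```

On synthetic data with a built-in decline of 0.002 MHz/yr plus an 11-year solar cycle, the recovered residual slope is close to the built-in decline (small leakage remains because the cycle is not exactly orthogonal to the linear trend over a finite record).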
Yu, Xue; Ghasemizadeh, Reza; Padilla, Ingrid; Irizarry, Celys; Kaeli, David; Alshawabkeh, Akram
2014-01-01
We studied the spatial and temporal distribution patterns of chlorinated volatile organic compounds (CVOCs) in the karst aquifers of northern Puerto Rico (1982-2013). Seventeen CVOCs were widely detected across the study area, the most frequently detected and persistent being trichloroethylene (TCE), tetrachloroethylene (PCE), carbon tetrachloride (CT), chloroform (TCM), and methylene chloride (DCM). Historically, 471 (76%) and 319 (52%) of the 615 sampling sites have had CVOC concentrations above the detection limit and the maximum contaminant level (MCL), respectively. The spatiotemporal patterns of the CVOC concentrations showed two clusters of contaminated areas, one near the Superfund site "Upjohn" and another near the "Vega Alta Public Supply Wells." Despite a decreasing trend in concentrations, there is a general northward movement and spreading of contaminants, even beyond the extent of known sources at the Superfund and landfill sites. Our analyses suggest that, besides the source conditions, karst characteristics (high heterogeneity, complex hydraulic and biochemical environment) are linked to the long-term spatiotemporal patterns of CVOCs in groundwater. PMID:25522355
Erosion and sediment yields in the Transverse Ranges, Southern California
Scott, Kevin M.; Williams, Rhea P.
1978-01-01
Major-storm and long-term erosion rates in mountain watersheds of the western Transverse Ranges of Ventura County, Calif., are estimated to range from low values that would not require the construction of catchments or channel-stabilization structures to values as high as those recorded anywhere for comparable bedrock erodibilities. A major reason for this extreme variability is the high degree of tectonic activity in the area--watersheds are locally being uplifted by at least as much as 25 feet per 1,000 years, yet the maximum extrapolated rate of denudation measured over the longest available period of record is 7.5 feet per 1,000 years adjusted to a drainage area of 0.5 square mile. Evidence of large amounts of uplift continuing into historic time includes structurally overturned strata of Pleistocene age, active thrust faulting, demonstrable stream antecedence, uplifted and deformed terraces, and other results of base-level change seen in stream channels. Such evidence is widespread in the Transverse Ranges, and aspects of the landscape are locally more a function of tectonic activity than of the denudational process. (Woodard-USGS)
The Bonneville Flood—A veritable débâcle: Chapter 6
O'Connor, Jim E.
2016-01-01
The Bonneville Flood was one of the largest floods on Earth. First discovered by G.K. Gilbert in the 1870s during his inspection of the outlet at Red Rock Pass, it was rediscovered in the 1950s by Harold Malde and coworkers, leading to mapping and assessment of spectacular flood features along Marsh Creek, the Portneuf River, and the Snake River for over 1100 km between the outlet and Lewiston, Idaho. The cataclysmic flood, produced by the rapid ~115 m drop of Lake Bonneville from the Bonneville level to the Provo level, was nearly 200 m deep in places and flowed at a maximum rate of about 1 million m3/s, roughly 100 times greater than any historical Snake River flood. Along its route the Bonneville Flood carved canyons and cataract complexes and built massive boulder bars. These flood features have been a rich source for understanding megaflood processes, and they still offer much more with new and developing techniques for hydrodynamic modeling and landscape analysis.
Fitness in paradise: quality of forensic reports submitted to the Hawaii judiciary.
Robinson, Richard; Acklin, Marvin W
2010-01-01
This paper examined the quality of forensic reports submitted to the Hawaii Judiciary. Hawaii utilizes a three-panel system for assessing fitness to proceed, in which two psychologists and one psychiatrist submit independent reports to the court. Utilizing a survey instrument based on previous research and nationally derived quality standards, 150 competency to stand trial (CST) reports were examined. Reports demonstrated pervasive mediocrity with respect to quality (mean QC = 68.95, SD = 15.21). One quarter (N = 38) of the reports scored at or above 80% of the maximum possible score. Levels of CST agreement between evaluators, and between evaluators and judges, were high. Report quality did not differ as a function of evaluator professional identity. Full-time employed evaluators submitted a greater number of reports above the quality criterion. For those evaluators who attended the March training, reports demonstrated significantly improved quality. Suggestions for enhancing report quality are offered, with special attention to the inclusion of report elements, particularly historical elements, and clearly described rationales supporting forensic opinions.
Early Teen Marriage and Future Poverty
DAHL, GORDON B.
2010-01-01
Both early teen marriage and dropping out of high school have historically been associated with a variety of negative outcomes, including higher poverty rates throughout life. Are these negative outcomes due to preexisting differences, or do they represent the causal effect of marriage and schooling choices? To better understand the true personal and societal consequences, in this article, I use an instrumental variables (IV) approach that takes advantage of variation in state laws regulating the age at which individuals are allowed to marry, drop out of school, and begin work. The baseline IV estimate indicates that a woman who marries young is 31 percentage points more likely to live in poverty when she is older. Similarly, a woman who drops out of school is 11 percentage points more likely to be poor. The results are robust to a variety of alternative specifications and estimation methods, including limited information maximum likelihood (LIML) estimation and a control function approach. While grouped ordinary least squares (OLS) estimates for the early teen marriage variable are also large, OLS estimates based on individual-level data are small, consistent with a large amount of measurement error. PMID:20879684
Early teen marriage and future poverty.
Dahl, Gordon B
2010-08-01
Both early teen marriage and dropping out of high school have historically been associated with a variety of negative outcomes, including higher poverty rates throughout life. Are these negative outcomes due to preexisting differences, or do they represent the causal effect of marriage and schooling choices? To better understand the true personal and societal consequences, in this article, I use an instrumental variables (IV) approach that takes advantage of variation in state laws regulating the age at which individuals are allowed to marry, drop out of school, and begin work. The baseline IV estimate indicates that a woman who marries young is 31 percentage points more likely to live in poverty when she is older. Similarly, a woman who drops out of school is 11 percentage points more likely to be poor. The results are robust to a variety of alternative specifications and estimation methods, including limited information maximum likelihood (LIML) estimation and a control function approach. While grouped ordinary least squares (OLS) estimates for the early teen marriage variable are also large, OLS estimates based on individual-level data are small, consistent with a large amount of measurement error.
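Setting the LIML and control-function refinements aside, the core IV logic (two-stage least squares with a single instrument) can be sketched numerically. The simulated instrument below stands in for the state-law variation and is purely illustrative:

```python
import numpy as np

def two_stage_least_squares(y, x, z):
    """Minimal just-identified 2SLS: instrument z for the endogenous
    regressor x (both stages include an intercept). Returns the IV
    coefficient on x."""
    n = len(y)
    Z = np.column_stack([np.ones(n), z])
    # First stage: fitted values of x from the instrument
    g, *_ = np.linalg.lstsq(Z, x, rcond=None)
    xhat = Z @ g
    # Second stage: regress y on the fitted values
    Xh = np.column_stack([np.ones(n), xhat])
    b, *_ = np.linalg.lstsq(Xh, y, rcond=None)
    return b[1]
```

In a simulation where an unobserved confounder drives both x and y, OLS on x is biased upward while the 2SLS estimate recovers the true causal coefficient, which is the logic behind using law variation as an instrument for marriage and schooling choices.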
A Copula-Based Conditional Probabilistic Forecast Model for Wind Power Ramps
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hodge, Brian S; Krishnan, Venkat K; Zhang, Jie
Efficient management of wind ramping characteristics can significantly reduce wind integration costs for balancing authorities. By considering the stochastic dependence of wind power ramp (WPR) features, this paper develops a conditional probabilistic wind power ramp forecast (cp-WPRF) model based on copula theory. The WPR dataset is constructed by extracting ramps from a large dataset of historical wind power. Each WPR feature (e.g., rate, magnitude, duration, and start time) is forecasted separately while accounting for the coupling effects among the different ramp features. To accurately model the marginal distributions within a copula, a Gaussian mixture model (GMM) is adopted to characterize the WPR uncertainty and features. The Canonical Maximum Likelihood (CML) method is used to estimate the parameters of the multivariate copula. The optimal copula model is chosen from each copula family based on the Bayesian information criterion (BIC). Finally, the best condition-based cp-WPRF model is determined by predictive interval (PI) based evaluation metrics. Numerical simulations on publicly available wind power data show that the developed copula-based cp-WPRF model can predict WPRs with a high level of reliability and sharpness.
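The marginal-plus-copula construction described above can be sketched compactly: fit a GMM to each ramp feature, map samples to uniforms through the mixture CDF, and estimate a Gaussian-copula correlation between the transformed features. This is a simplified stand-in for the paper's CML estimation and BIC-based family selection; the function names and the two-feature setup are assumptions:

```python
import numpy as np
from scipy import stats
from sklearn.mixture import GaussianMixture

def gmm_cdf(gmm, x):
    """CDF of a fitted 1-D Gaussian mixture, evaluated at points x."""
    w = gmm.weights_
    mu = gmm.means_.ravel()
    sd = np.sqrt(gmm.covariances_).ravel()
    return sum(wk * stats.norm.cdf(x, m, s) for wk, m, s in zip(w, mu, sd))

def fit_gaussian_copula(feature_a, feature_b, n_components=2):
    """GMM marginals -> probability integral transform -> Gaussian
    copula correlation between two ramp features (e.g. magnitude and
    duration). Returns the estimated copula correlation."""
    ga = GaussianMixture(n_components=n_components, random_state=0)
    gb = GaussianMixture(n_components=n_components, random_state=0)
    ga.fit(feature_a.reshape(-1, 1))
    gb.fit(feature_b.reshape(-1, 1))
    ua = np.clip(gmm_cdf(ga, feature_a), 1e-6, 1 - 1e-6)
    ub = np.clip(gmm_cdf(gb, feature_b), 1e-6, 1 - 1e-6)
    za, zb = stats.norm.ppf(ua), stats.norm.ppf(ub)
    return np.corrcoef(za, zb)[0, 1]
```

Conditioning a forecast on one feature then amounts to reading the conditional distribution of the other off the fitted copula, which is the dependence structure the cp-WPRF model exploits.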
Tramonte, Keila Modesto; Figueira, Rubens Cesar Lopes; Majer, Alessandra Pereira; de Lima Ferreira, Paulo Alves; Batista, Miriam Fernanda; Ribeiro, Andreza Portella; de Mahiques, Michel Michaelovitch
2018-02-01
The Cananéia-Iguape system is located in a coastal region of southeastern Brazil recognized by UNESCO as an Atlantic Forest Biosphere Reserve. The system has suffered substantial environmental impacts due to the opening of an artificial channel and past intensive mining activities. In this paper, sequential chemical extraction of Cu, Pb, and Zn was performed on previously described sediment cores, and statistical treatment of the data allowed estimation of the geochemical remobilization behavior, the available content, and the accumulation trend between 1926 and 2008. The maximum available levels (sum of all mobile fractions) were, in mg kg-1, 18.74 for Cu, 177.55 for Pb, and 123.03 for Zn. Considering its environmental availability, Pb remains a concern in the system. It was possible to recognize the anthropic contribution of Pb, with mining activities considered the only potential source of this metal in the region. Copyright © 2017 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Ayanshola, Ayanniyi; Olofintoye, Oluwatosin; Obadofin, Ebenezer
2018-03-01
This study presents the impact of global warming on precipitation patterns in Ilorin, Nigeria, and its implications for the hydrological balance of the Awun basin under the prevailing climate conditions. The study analyzes 39 years of rainfall and temperature data from relevant stations within the study areas. Simulated data from a coupled global climate model for historical and future periods were investigated under the A2 emission scenario. Statistical regression and a Mann-Kendall analysis were performed to determine the nature of the trends in the hydrological variables and their significance levels, while the Soil and Water Assessment Tool (SWAT) was used to estimate the water balance and derive the stream flow and yield of the Awun basin. The study revealed that while minimum and maximum temperatures in Ilorin are increasing, rainfall is generally decreasing. The assessment of trends in the water-balance parameters of the basin indicates that water yield is not improving as the population increases. This may result in major stresses on the water supply in the near future.
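The Mann-Kendall trend test mentioned above is easy to state exactly: the S statistic counts concordant minus discordant pairs, and a normal approximation gives a Z score (positive for an increasing trend). A minimal stdlib sketch, without the tie correction used on real hydrological records:

```python
import math
import itertools

def mann_kendall(series):
    """Mann-Kendall trend test (no tie correction): returns the S
    statistic and the normal-approximation Z score."""
    # S = number of increasing pairs minus number of decreasing pairs
    s = sum((b > a) - (b < a) for a, b in itertools.combinations(series, 2))
    n = len(series)
    var_s = n * (n - 1) * (2 * n + 5) / 18.0
    if s > 0:
        z = (s - 1) / math.sqrt(var_s)   # continuity correction
    elif s < 0:
        z = (s + 1) / math.sqrt(var_s)
    else:
        z = 0.0
    return s, z
```

A |Z| above 1.96 corresponds to significance at the 5% level, which is how the rainfall and temperature trends in the study are judged.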
The Low-Level Radioactive Waste Management Office: Thirty Years of Experience in Canada - 13308
DOE Office of Scientific and Technical Information (OSTI.GOV)
Benitez, Liliana; Gardiner, Mark J.; Zelmer, Robert L.
2013-07-01
This paper reviews thirty years of progress by the Low-Level Radioactive Waste Management Office (LLRWMO) in developing and implementing low-level radioactive waste (LLRW) remediation projects and environmentally safe co-existence strategies. It reports on the present status and the future of the national historic waste program in Canada. There are over two million cubic metres of historic LLRW in Canada. Historic LLRW is broadly defined as LLRW that was managed in the past in a manner that is no longer considered acceptable and for which the original owner cannot reasonably be held accountable. In many cases, the original owner cannot be identified or no longer exists. The LLRWMO was established in 1982 as Canada's agent to carry out the responsibilities of the federal government for the management of historic LLRW. The LLRWMO is operated by Atomic Energy of Canada Limited (AECL) through a cost-recovery agreement with Natural Resources Canada (NRCan), the federal department that provides the funding and establishes national policy for radioactive waste management in Canada. LLRWMO expertise includes project managers, environmental remediation specialists, radiation surveyors, communications staff and administrative support staff. In providing all aspects of project oversight and implementation, the LLRWMO contracts additional resources to supplement core staff capacity as project and program demands require.
2017-01-01
Background A number of biotic and abiotic factors have been proposed as drivers of geographic variation in species richness. As biotic elements, inter-specific interactions are the most widely recognized. Among abiotic factors, in particular for plants, climate and topographic variables as well as their historical variation have been correlated with species richness and endemism. In this study, we determine the extent to which the species richness and endemism of monocot geophyte species in Mesoamerica is predicted by current climate, historical climate stability and topography. Methods Using approximately 2,650 occurrence points representing 507 geophyte taxa, species richness (SR) and weighted endemism (WE) were estimated at a geographic scale using grids of 0.5 × 0.5 decimal degree resolution, with Mexico as the geographic extent. SR and WE were also estimated using species distributions inferred from ecological niche modeling for species with at least five spatially unique occurrence points. Current climate, current-to-Last Glacial Maximum temperature and precipitation stability, and topographic features were used as predictor variables in multiple spatial regression analyses (i.e., spatial autoregressive models, SAR) using the estimates of SR and WE as response variables. The standardized coefficients of the predictor variables that were significant in the regression models were utilized to understand the observed patterns of species richness and endemism. Results Our estimates of SR and WE based on direct occurrence data and distribution modeling generally yielded similar results, though estimates based on ecological niche modeling indicated broader distribution areas for SR and WE than when species richness was directly estimated using georeferenced coordinates. The SR and WE of monocot geophytes were highest along the Trans-Mexican Volcanic Belt, in both cases with higher levels in the central area of this mountain chain. 
Richness and endemism were also elevated in the southern regions of the Sierra Madre Oriental and Occidental mountain ranges, and in the Tehuacán Valley. Some areas of the Sierra Madre del Sur and Sierra Madre Oriental had high levels of WE, though they are not the areas with the highest SR. The spatial regressions suggest that SR is mostly influenced by current climate, whereas endemism is mainly affected by topography and precipitation stability. Conclusions Both methods (direct occurrence data and ecological niche modeling) used to estimate SR and WE in this study yielded similar results and detected a key area that should be considered in plant conservation strategies: the central region of the Trans-Mexican Volcanic Belt. Our results also corroborated that species richness is more closely correlated with current climate factors while endemism is related to differences in topography and to changes in precipitation levels compared to the LGM climatic conditions. PMID:29062605
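The weighted endemism (WE) metric above weights each species by the inverse of its range size, so narrowly distributed species contribute more to a cell's score. A minimal per-cell sketch, assuming this standard inverse-range-size definition and a hypothetical `presence` structure (the study's actual grid data and any refinements to the formula are not reproduced here):

```python
def weighted_endemism(presence):
    """presence: dict mapping species -> set of grid-cell ids it occupies.
    Returns dict cell -> WE, where each species contributes 1/range_size
    (range size = number of occupied cells) to every cell it occupies."""
    we = {}
    for species, cells in presence.items():
        weight = 1.0 / len(cells)
        for cell in cells:
            we[cell] = we.get(cell, 0.0) + weight
    return we
```

A cell holding one single-cell endemic plus one wide-ranging species scores higher than a cell holding only wide-ranging species, which is why WE and raw richness (SR) can peak in different areas.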
Seagrasses in tropical Australia, productive and abundant for decades decimated overnight.
Pollard, Peter C; Greenway, Margaret
2013-03-01
Seagrass ecosystems provide unique coastal habitats critical to the life cycle of many species. Seagrasses are a major store of organic carbon. While seagrasses are globally threatened and in decline, in Cairns Harbour, Queensland, on the tropical east coast of Australia, they have flourished. We assessed seagrass distribution in Cairns Harbour between 1953 and 2012 from historical aerial photographs, Google map satellite images, existing reports and our own surveys of their distribution. Seasonal seagrass physiology was assessed through gross primary production, respiration and photosynthetic characteristics of three seagrass species, Cymodocea serrulata, Thalassia hemprichii and Zostera muelleri. At the higher water temperatures of summer, respiration rates increased in all three species, as did their maximum rates of photosynthesis. All three seagrasses achieved maximum rates of photosynthesis at low tide and when they were exposed. For nearly six decades there was little change in seagrass distribution in Cairns Harbour. This was most likely because the seagrasses were able to achieve sufficient light for growth during intertidal and low tide periods. With historical data of seagrass distribution and measures of species production and respiration, could seagrass survival in a changing climate be predicted? Based on physiology, our results predicted the continued maintenance of the Cairns Harbour seagrasses, although one species was more susceptible to thermal disturbance. However, in 2011 an unforeseen episodic disturbance - Tropical Cyclone Yasi - and associated floods led to the complete and catastrophic loss of all the seagrasses in Cairns Harbour.
Anderson, N M; Larkin, J W; Cole, M B; Skinner, G E; Whiting, R C; Gorris, L G M; Rodriguez, A; Buchanan, R; Stewart, C M; Hanlin, J H; Keener, L; Hall, P A
2011-11-01
As existing technologies are refined and novel microbial inactivation technologies are developed, there is a growing need for a metric that can be used to judge equivalent levels of hazard control stringency to ensure food safety of commercially sterile foods. A food safety objective (FSO) is an output-oriented metric that designates the maximum level of a hazard (e.g., the pathogenic microorganism or toxin) tolerated in a food at the end of the food supply chain at the moment of consumption, without specifying by which measures the hazard level is controlled. Using a risk-based approach, when the total outcome of controlling initial levels (H(0)), reducing levels (ΣR), and preventing an increase in levels (ΣI) is less than or equal to the target FSO, the product is considered safe. A cross-disciplinary international consortium of specialists from industry, academia, and government was organized with the objective of developing a document to illustrate the FSO approach for controlling Clostridium botulinum toxin in commercially sterile foods. This article outlines the general principles of an FSO risk management framework for controlling C. botulinum growth and toxin production in commercially sterile foods. Topics include historical approaches to establishing commercial sterility; a perspective on the establishment of an appropriate target FSO; a discussion of control of initial levels, reduction of levels, and prevention of an increase in levels of the hazard; and deterministic and stochastic examples that illustrate the impact that various control measure combinations have on the safety of well-established commercially sterile products and the ways in which variability at all levels of control can heavily influence estimates in the FSO risk management framework. This risk-based framework should encourage development of innovative technologies that result in microbial safety levels equivalent to those achieved with traditional processing methods.
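The deterministic form of the FSO check described above is usually written in log10 units as H0 − ΣR + ΣI ≤ FSO. A minimal sketch of that inequality; the numeric values in the test note are illustrative and are not taken from the article:

```python
def meets_fso(h0, sigma_r, sigma_i, fso):
    """Deterministic FSO check in log10 units:
    h0      initial hazard level (log CFU/g or log toxin units)
    sigma_r total of all reductions achieved by control measures
    sigma_i total of all increases (growth, recontamination)
    fso     target food safety objective
    Returns True when H0 - sum(R) + sum(I) <= FSO."""
    return (h0 - sigma_r + sigma_i) <= fso
```

For example, an initial contamination of 2 log units followed by a 12-log reduction process with no regrowth comfortably meets a target FSO of −6 log units, whereas a 4-log initial load with only a 6-log reduction does not.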
Regional flood frequency analysis in Triveneto (Italy): climate and scale controls
NASA Astrophysics Data System (ADS)
Persiano, Simone; Castellarin, Attilio; Domeneghetti, Alessio; Brath, Armando
2016-04-01
The growing concern about the possible effects of climate change on flood frequency regime is leading authorities to review previously proposed procedures for design-flood estimation, such as national regionalization approaches. Our study focuses on the Triveneto region, a broad geographical area in North-eastern Italy consisting of the administrative regions of Trentino-Alto Adige, Veneto and Friuli-Venezia Giulia. A reference procedure for design flood estimation in Triveneto is available from the Italian NCR research project "VA.PI.", which developed a regional model using annual maximum series (AMS) of peak discharges that were collected up to the 1980s by the former Italian Hydrometeorological Service. We consider a very detailed AMS database that we recently compiled for ~80 catchments located in Triveneto. Our dataset includes the historical data mentioned above, together with more recent data obtained from Regional Services and annual maximum peak streamflows extracted from inflow series to artificial reservoirs and provided by dam managers. All ~80 study catchments are characterized in terms of several geomorphologic and climatic descriptors. The main objectives of our study are: (1) to check whether climatic and scale controls on flood frequency regime in Triveneto are similar to the controls that were recently found in Europe; (2) to verify the possible presence of trends as well as abrupt changes in the intensity and frequency of flood extremes by looking at changes in time of regional L-moments of annual maximum floods; (3) to assess the reliability and representativeness of the reference procedure for design flood estimation relative to flood data that were not included in the VA.PI. dataset (i.e. more recent data collected after the 1980s and historical data provided by dam managers); (4) to develop an updated reference procedure for design flood estimation in Triveneto by using a focused-pooling approach (i.e. Region of Influence, RoI).
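The regional L-moments referred to in objective (2) are built from sample L-moments of each catchment's annual maximum series. A minimal sketch of the first two sample L-moments via probability-weighted moments, under textbook definitions rather than the VA.PI. procedure itself (regional analyses typically use a dedicated package):

```python
def l_moments(x):
    """First two sample L-moments (l1, l2) of a data sample x,
    computed from probability-weighted moments b0 and b1:
        l1 = b0,  l2 = 2*b1 - b0.
    l1 is the L-location (mean); l2 the L-scale (dispersion)."""
    xs = sorted(x)          # order statistics, ascending
    n = len(xs)
    b0 = sum(xs) / n
    # b1 = (1/n) * sum over ranks j=2..n of ((j-1)/(n-1)) * x_(j)
    b1 = sum(i * xs[i] for i in range(n)) / (n * (n - 1))
    return b0, 2 * b1 - b0
```

The L-coefficient of variation (l2/l1) and higher-order ratios are then averaged over catchments in a region and tracked through time to detect changes in the flood frequency regime.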
Maximum earthquake magnitudes in the Aegean area constrained by tectonic moment release rates
NASA Astrophysics Data System (ADS)
Ch. Koravos, G.; Main, I. G.; Tsapanos, T. M.; Musson, R. M. W.
2003-01-01
Seismic moment release is usually dominated by the largest but rarest events, making the estimation of seismic hazard inherently uncertain. This uncertainty can be reduced by combining long-term tectonic deformation rates with short-term recurrence rates. Here we adopt this strategy to estimate recurrence rates and maximum magnitudes for tectonic zones in the Aegean area. We first form a merged catalogue for historical and instrumentally recorded earthquakes in the Aegean, based on a recently published catalogue for Greece and surrounding areas covering the time period 550 BC-2000 AD, at varying degrees of completeness. The historical data are recalibrated to allow for changes in damping in seismic instruments around 1911. We divide the area up into zones that correspond to recent determinations of deformation rate from satellite data. In all zones we find that the Gutenberg-Richter (GR) law holds at low magnitudes. We use Akaike's information criterion to determine the best-fitting distribution at high magnitudes, and classify the resulting frequency-magnitude distributions of the zones as critical (GR law), subcritical (gamma density distribution) or supercritical (`characteristic' earthquake model) where appropriate. We determine the ratio η of seismic to tectonic moment release rate. Low values of η (<0.5), corresponding to relatively aseismic deformation, are associated with higher b values (>1.0). The seismic and tectonic moment release rates are then combined to constrain recurrence rates and maximum credible magnitudes (in the range Mw 6.7-7.6 where the results are well constrained) based on extrapolating the short-term seismic data. With current earthquake data, many of the tectonic zones show a characteristic distribution that leads to an elevated probability of magnitudes around 7, but a reduced probability of larger magnitudes above this value when compared with the GR trend. 
A modification of the generalized gamma distribution is suggested to account for this, based on a finite statistical second moment for the seismic moment distribution.
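The two standard relations underlying this kind of moment-rate budgeting can be sketched as follows: the Gutenberg-Richter frequency-magnitude law and the Hanks-Kanamori conversion from moment magnitude to scalar seismic moment. The a and b values in the usage note are illustrative, not fitted values from the study:

```python
def gr_rate(m, a, b):
    """Gutenberg-Richter law: annual rate of events with magnitude >= m,
    N(>=m) = 10**(a - b*m). a sets overall productivity, b the relative
    abundance of small vs large events."""
    return 10 ** (a - b * m)

def seismic_moment(mw):
    """Scalar seismic moment M0 in N*m from moment magnitude Mw,
    via the standard Hanks-Kanamori relation M0 = 10**(1.5*Mw + 9.1)."""
    return 10 ** (1.5 * mw + 9.1)
```

Summing `gr_rate`-weighted moments over the magnitude range and comparing the total with the tectonic moment rate from geodetic strain is, in essence, how the ratio η and the maximum credible magnitude are constrained.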
NASA Astrophysics Data System (ADS)
Tian, D.; Cammarano, D.
2017-12-01
Modeling changes of crop production at regional scale is important for developing adaptation measures for a sustainable food supply under global change. In this study, we explore how changing climate extremes in the 20th and 21st century affect maize (summer crop) and wheat (winter crop) yields in an agriculturally important region: the southeast United States. We analyze historical (1950-1999) and projected (2006-2055) precipitation and temperature extremes by calculating the changes of 18 climate extreme indices using the statistically downscaled CMIP5 data from 10 general circulation models (GCMs). To evaluate how these climate extremes affect maize and wheat yields, historical baseline and projected maize and wheat yields under RCP4.5 and RCP8.5 scenarios are simulated using the DSSAT-CERES maize and wheat models driven by the same downscaled GCMs data. All of the changes are examined at 110 locations over the study region. The results show that most of the precipitation extreme indices do not have notable change; mean precipitation, precipitation intensity, and maximum 1-day precipitation generally increase; the number of rainy days decreases. The temperature extreme indices mostly show increased values of mean temperature, number of high temperature days, diurnal temperature range, consecutive high temperature days, maximum daily maximum temperature, and minimum daily minimum temperature; the number of low temperature days and number of consecutive low temperature days are decreased. The conditional probabilistic relationships between changes in crop yields and changes in extreme indices suggested different responses of crop yields to climate extremes during sowing to anthesis and anthesis to maturity periods. 
Wheat yields and crop water productivity for wheat are increased due to an increased CO2 concentration and minimum temperature; evapotranspiration, maize yields, and crop water productivity for wheat are decreased owing to the increased temperature extremes. We found the effects of precipitation changes on both yields are relatively uncertain.
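Two of the simpler temperature-extreme indices of the kind tallied above, the count of hot days and the longest run of consecutive hot days, can be computed directly from a daily maximum-temperature series. The 35 °C threshold below is an assumption for illustration; the study's 18 indices and their exact thresholds are not reproduced here:

```python
def extreme_indices(tmax_daily, hot_threshold=35.0):
    """From a daily Tmax series (deg C), return:
    - the number of hot days (Tmax > threshold)
    - the longest spell of consecutive hot days."""
    hot_days = sum(1 for t in tmax_daily if t > hot_threshold)
    longest, run = 0, 0
    for t in tmax_daily:
        run = run + 1 if t > hot_threshold else 0
        longest = max(longest, run)
    return hot_days, longest
```

Computing such indices separately for the sowing-to-anthesis and anthesis-to-maturity windows is what allows the yield response to be conditioned on the growth stage in which the extreme occurs.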
Numbenjapon, Nawaporn; Costin, Gertrude; Gilsanz, Vicente; Pitukcheewanont, Pisit
2007-05-01
To determine whether increased thyroid hormone levels have an effect on various bone components (cortical vs cancellous bone). The anthropometric and 3-dimensional quantitative computed tomography (CT) bone measurements, including bone density (BD), cross-sectional area (CSA) of the lumbar spine and femur, and cortical bone area (CBA) of the femur, of 18 children and adolescents with untreated hyperthyroidism were reviewed and compared with those of age-, sex-, and ethnicity-matched historical controls. No significant differences in height, weight, body mass index (BMI), or pubertal staging between patients and controls were found. Cortical BD was significantly lower (P < .001) in children and adolescents with hyperthyroidism compared with historical controls. After adjusting for weight and height, no difference in femur CSA between hyperthyroid children and historical controls was evident. No significant correlations among thyroid hormone levels, antithyroid antibody levels, and cortical BD values were found. As determined by CT, cortical bone is the preferential site of bone loss in children and adolescents with untreated hyperthyroidism.
History, race, and attachment to place among elders in the rural all-black towns of Oklahoma.
McAuley, W J
1998-01-01
This research examines place attachment among older residents of the all-Black towns of Oklahoma. Social-historical occurrences, personal experiences associated with race, and expressed differences between social-historical groupings of older African Americans influence the level of social and autobiographical insideness among the elderly residents. The findings extend current conceptualizations of place attachment by showing that (a) place attachment is not a constant, even among long-term residents; (b) social-historical factors can play an important role in the level of place attachment; (c) race can be a salient element of place attachment; (d) experiences outside the community, such as racial discrimination, can influence the level of social and autobiographical bonding to the community; and (e) subgroup identity within minority groups can be associated with variations in community place attachment. The findings point to the value of carefully examining the issues of history and race in research focusing on older minority group members.
40 CFR 142.61 - Variances from the maximum contaminant level for fluoride.
Code of Federal Regulations, 2014 CFR
2014-07-01
... level for fluoride. 142.61 Section 142.61 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) WATER PROGRAMS (CONTINUED) NATIONAL PRIMARY DRINKING WATER REGULATIONS IMPLEMENTATION... from the maximum contaminant level for fluoride. (a) The Administrator, pursuant to section 1415(a)(1...
40 CFR 142.61 - Variances from the maximum contaminant level for fluoride.
Code of Federal Regulations, 2012 CFR
2012-07-01
... level for fluoride. 142.61 Section 142.61 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) WATER PROGRAMS (CONTINUED) NATIONAL PRIMARY DRINKING WATER REGULATIONS IMPLEMENTATION... from the maximum contaminant level for fluoride. (a) The Administrator, pursuant to section 1415(a)(1...
40 CFR 142.61 - Variances from the maximum contaminant level for fluoride.
Code of Federal Regulations, 2013 CFR
2013-07-01
... level for fluoride. 142.61 Section 142.61 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) WATER PROGRAMS (CONTINUED) NATIONAL PRIMARY DRINKING WATER REGULATIONS IMPLEMENTATION... from the maximum contaminant level for fluoride. (a) The Administrator, pursuant to section 1415(a)(1...
Developing an early warning system for storm surge inundation in the Philippines
NASA Astrophysics Data System (ADS)
Tablazon, Judd; Mahar Francisco Lagmay, Alfredo; Francia Mungcal, Ma. Theresa; Gonzalo, Lia Anne; Dasallas, Lea; Briones, Jo Brianne Louise; Santiago, Joy; Suarez, John Kenneth; Lapidez, John Phillip; Caro, Carl Vincent; Ladiero, Christine; Malano, Vicente
2014-05-01
A storm surge is the sudden rise of sea water generated by an approaching storm, over and above the astronomical tides. This event poses a major threat to Philippine coastal areas, as manifested by Typhoon Haiyan on 08 November 2013, when more than 6,000 people lost their lives. It has become evident that the need to develop an early warning system for storm surges is of utmost importance. To provide forecasts of the possible storm surge heights of an approaching typhoon, the Nationwide Operational Assessment of Hazards under the Department of Science and Technology (DOST-Project NOAH) simulated historical tropical cyclones that entered the Philippine Area of Responsibility. Bathymetric data, storm track, central atmospheric pressure, and maximum wind speed were used as parameters for the Japan Meteorological Agency (JMA) Storm Surge Model. The researchers calculated the frequency distribution of maximum storm surge heights of all typhoons under a specific Public Storm Warning Signal (PSWS) that passed through a particular coastal area. This determines the storm surge height corresponding to a given probability of occurrence. The storm surge heights from the model were added to the maximum astronomical tide data from WXTide software. The team then created maps of probable area inundation and flood levels of storm surges along coastal areas for a specific PSWS using the results of the frequency distribution. These maps were developed from the time series data of the storm tide at 10-minute intervals of all observation points in the Philippines. This information will be beneficial in developing early warning systems, static maps, disaster mitigation and preparedness plans, vulnerability assessments, risk-sensitive land use plans, shoreline defense efforts, and coastal protection measures. 
Moreover, these will support the local government units' mandate to raise public awareness, disseminate information about storm surge hazards, and implement appropriate counter-measures for a given PSWS.
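The frequency-distribution step described above, mapping simulated maximum surge heights under a given PSWS to a probability of occurrence, amounts to an empirical exceedance probability. A minimal sketch with hypothetical values, not the project's actual workflow:

```python
def exceedance_probability(max_surges, height):
    """Empirical probability that the maximum storm surge for a typhoon
    of a given PSWS category exceeds `height` (metres), estimated from
    a sample of simulated per-typhoon maxima."""
    if not max_surges:
        raise ValueError("need at least one simulated maximum")
    return sum(1 for s in max_surges if s > height) / len(max_surges)
```

Inverting this relationship (picking the surge height matched to a chosen probability, then adding the maximum astronomical tide) gives the storm tide level used for the inundation maps.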
Developing an early warning system for storm surge inundation in the Philippines
NASA Astrophysics Data System (ADS)
Tablazon, J.; Caro, C. V.; Lagmay, A. M. F.; Briones, J. B. L.; Dasallas, L.; Lapidez, J. P.; Santiago, J.; Suarez, J. K.; Ladiero, C.; Gonzalo, L. A.; Mungcal, M. T. F.; Malano, V.
2014-10-01
A storm surge is the sudden rise of sea water generated by an approaching storm, over and above the astronomical tides. This event poses a major threat to Philippine coastal areas, as manifested by Typhoon Haiyan on 8 November 2013, when more than 6000 people lost their lives. It has become evident that the need to develop an early warning system for storm surges is of utmost importance. To provide forecasts of the possible storm surge heights of an approaching typhoon, the Nationwide Operational Assessment of Hazards under the Department of Science and Technology (DOST-Project NOAH) simulated historical tropical cyclones that entered the Philippine Area of Responsibility. Bathymetric data, storm track, central atmospheric pressure, and maximum wind speed were used as parameters for the Japan Meteorological Agency Storm Surge Model. The researchers calculated the frequency distribution of maximum storm surge heights of all typhoons under a specific Public Storm Warning Signal (PSWS) that passed through a particular coastal area. This determines the storm surge height corresponding to a given probability of occurrence. The storm surge heights from the model were added to the maximum astronomical tide data from WXTide software. The team then created maps of probable area inundation and flood levels of storm surges along coastal areas for a specific PSWS using the results of the frequency distribution. These maps were developed from the time series data of the storm tide at 10 min intervals of all observation points in the Philippines. This information will be beneficial in developing early warning systems, static maps, disaster mitigation and preparedness plans, vulnerability assessments, risk-sensitive land use plans, shoreline defense efforts, and coastal protection measures. 
Moreover, these will support the local government units' mandate to raise public awareness, disseminate information about storm surge hazards, and implement appropriate counter-measures for a given PSWS.