Sample records for exceed previous estimates

  1. Estimating soil moisture exceedance probability from antecedent rainfall

    NASA Astrophysics Data System (ADS)

    Cronkite-Ratcliff, C.; Kalansky, J.; Stock, J. D.; Collins, B. D.

    2016-12-01

    The first storms of the rainy season in coastal California, USA, add moisture to soils but rarely trigger landslides. Previous workers proposed that antecedent rainfall, the cumulative seasonal rain from October 1 onwards, had to exceed specific amounts in order to trigger landsliding. Recent monitoring of soil moisture upslope of historic landslides in the San Francisco Bay Area shows that storms can cause positive pressure heads once soil moisture values exceed a threshold of volumetric water content (VWC). We propose that antecedent rainfall could be used to estimate the probability that VWC exceeds this threshold. A major challenge to estimating the probability of exceedance is that rain gauge records are frequently incomplete. We developed a stochastic model to impute (infill) missing hourly precipitation data. This model uses nearest neighbor-based conditional resampling of the gauge record using data from nearby rain gauges. Using co-located VWC measurements, imputed data can be used to estimate the probability that VWC exceeds a specific threshold for a given antecedent rainfall. The stochastic imputation model can also provide an estimate of uncertainty in the exceedance probability curve. Here we demonstrate the method using soil moisture and precipitation data from several sites located throughout Northern California. Results show a significant variability between sites in the sensitivity of VWC exceedance probability to antecedent rainfall.
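
    The abstract describes (1) nearest-neighbor conditional resampling to impute missing hourly rainfall and (2) using the imputed antecedent rainfall with co-located VWC data to estimate an exceedance-probability curve. A minimal sketch of step (2) is below; the threshold value, bin choice, and function names are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch (not the authors' code): empirical P(VWC >= threshold)
# as a function of antecedent rainfall, from paired observations.
import numpy as np

def exceedance_curve(antecedent_rain_mm, vwc, vwc_threshold=0.35, bins=None):
    """Empirical probability that VWC meets/exceeds a threshold, by rainfall bin."""
    rain = np.asarray(antecedent_rain_mm, float)
    vwc = np.asarray(vwc, float)
    if bins is None:
        bins = np.linspace(0.0, rain.max(), 11)
    exceed = (vwc >= vwc_threshold).astype(float)
    idx = np.clip(np.digitize(rain, bins) - 1, 0, len(bins) - 2)
    probs = np.array([exceed[idx == i].mean() if np.any(idx == i) else np.nan
                      for i in range(len(bins) - 1)])
    return 0.5 * (bins[:-1] + bins[1:]), probs

# Uncertainty from the stochastic imputation: repeat over each imputed rainfall
# series (passing a common `bins` array via **kw so curves are comparable)
# and summarize the spread of the resulting curves.
def curve_spread(imputed_rain_series, vwc, **kw):
    curves = [exceedance_curve(r, vwc, **kw)[1] for r in imputed_rain_series]
    return np.nanpercentile(curves, [5, 50, 95], axis=0)
```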

  2. 48 CFR 8.405-6 - Limiting sources.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... or BPA with an estimated value exceeding the micro-purchase threshold not placed or established in... Schedule ordering procedures. The original order or BPA must not have been previously issued under sole... order or BPA exceeding the simplified acquisition threshold. (2) Posting. (i) Within 14 days after...

  3. 48 CFR 8.405-6 - Limiting sources.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... or BPA with an estimated value exceeding the micro-purchase threshold not placed or established in... Schedule ordering procedures. The original order or BPA must not have been previously issued under sole... order or BPA exceeding the simplified acquisition threshold. (2) Posting. (i) Within 14 days after...

  4. 48 CFR 8.405-6 - Limiting sources.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... or BPA with an estimated value exceeding the micro-purchase threshold not placed or established in... Schedule ordering procedures. The original order or BPA must not have been previously issued under sole... order or BPA exceeding the simplified acquisition threshold. (2) Posting. (i) Within 14 days after...

  5. 48 CFR 8.405-6 - Limiting sources.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... or BPA with an estimated value exceeding the micro-purchase threshold not placed or established in... Schedule ordering procedures. The original order or BPA must not have been previously issued under sole... order or BPA exceeding the simplified acquisition threshold. (2) Posting. (i) Within 14 days after...

  6. Flood of April 2007 and Flood-Frequency Estimates at Streamflow-Gaging Stations in Western Connecticut

    USGS Publications Warehouse

    Ahearn, Elizabeth A.

    2009-01-01

    A spring nor'easter affected the East Coast of the United States from April 15 to 18, 2007. In Connecticut, rainfall varied from 3 inches to more than 7 inches. The combined effects of heavy rainfall over a short duration, high winds, and high tides led to widespread flooding, storm damage, power outages, evacuations, and disruptions to traffic and commerce. The storm caused at least 18 fatalities (none in Connecticut). A Presidential Disaster Declaration was issued on May 11, 2007, for two counties in western Connecticut - Fairfield and Litchfield. This report documents hydrologic and meteorologic aspects of the April 2007 flood and includes estimates of the magnitude of the peak discharges and peak stages during the flood at 28 streamflow-gaging stations in western Connecticut. These data were used to perform flood-frequency analyses. Flood-frequency estimates provided in this report are expressed in terms of exceedance probabilities (the probability of a flood reaching or exceeding a particular magnitude in any year). Flood-frequency estimates for the 0.50, 0.20, 0.10, 0.04, 0.02, 0.01, and 0.002 exceedance probabilities (also expressed as 50-, 20-, 10-, 4-, 2-, 1-, and 0.2-percent exceedance probability, respectively) were computed for 24 of the 28 streamflow-gaging stations. Exceedance probabilities can further be expressed in terms of recurrence intervals (2-, 5-, 10-, 25-, 50-, 100-, and 500-year recurrence interval, respectively). Flood-frequency estimates computed in this study were compared to the flood-frequency estimates used to derive the water-surface profiles in previously published Federal Emergency Management Agency (FEMA) Flood Insurance Studies. The estimates in this report update and supersede previously published flood-frequency estimates for streamflow-gaging stations in Connecticut by incorporating additional years of annual peak discharges, including the peaks for the April 2007 flood. In the southwest coastal region of Connecticut, the April 2007 peak discharges for streamflow-gaging stations with records extending back to 1955 were the second highest peak discharges on record; the 1955 annual peak discharges are the highest peak discharges in the station records. In the Housatonic and South Central Coast Basins, the April 2007 peak discharges for streamflow-gaging stations with records extending back to 1930 or earlier ranked between the fourth and eighth highest discharges on record, with the 1936, 1938, and 1955 floods as the largest floods in the station records. The peak discharges for the April 2007 flood have exceedance probabilities ranging from 0.10 to 0.02 (a 10- to 2-percent chance of being exceeded in a given year, respectively), with the majority (80 percent) of the stations having exceedance probabilities between 0.10 and 0.04. At three stations - Norwalk River at South Wilton, Pootatuck River at Sandy Hook, and Still River at Robertsville - the April 2007 peak discharges have an exceedance probability of 0.02. Flood-frequency estimates made after the April 2007 flood were compared to flood-frequency estimates used to derive the water-surface profiles (also called flood profiles) in FEMA Flood Insurance Studies developed for communities. In general, the comparison indicated that at the 0.10 exceedance probability (a 10-percent chance of being exceeded in a given year), the discharges from the current (2007) flood-frequency analysis are larger than the discharges in the FEMA Flood Insurance Studies, with a median change of about +10 percent. In contrast, at the 0.01 exceedance probability (a 1-percent chance of being exceeded in a year), the discharges from the current flood-frequency analysis are smaller than the discharges in the FEMA Flood Insurance Studies, with a median change of about -13 percent. Several stations had more than +25 percent change in discharges at the 0.10 exceedance probability and are in the following communities: Winchester (Still River at Robertsv
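
    The report expresses flood frequency both as an annual exceedance probability and as a recurrence interval; the two notations are related simply by recurrence interval = 1 / annual exceedance probability. A minimal illustration of that conversion:

```python
# Relationship between annual exceedance probability (AEP) and recurrence
# interval used in the report: T = 1 / AEP.
aeps = [0.50, 0.20, 0.10, 0.04, 0.02, 0.01, 0.002]
for p in aeps:
    print(f"AEP {p:<5} -> {1 / p:g}-year recurrence interval")
# 0.5 -> 2-year, 0.2 -> 5-year, ..., 0.002 -> 500-year
```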

  7. Peak-flow characteristics of Virginia streams

    USGS Publications Warehouse

    Austin, Samuel H.; Krstolic, Jennifer L.; Wiegand, Ute

    2011-01-01

    Peak-flow annual exceedance probabilities, also called probability-percent chance flow estimates, and regional regression equations are provided describing the peak-flow characteristics of Virginia streams. Statistical methods are used to evaluate peak-flow data. Analysis of Virginia peak-flow data collected from 1895 through 2007 is summarized. Methods are provided for estimating unregulated peak flow of gaged and ungaged streams. Station peak-flow characteristics identified by fitting the logarithms of annual peak flows to a Log Pearson Type III frequency distribution yield annual exceedance probabilities of 0.5, 0.4292, 0.2, 0.1, 0.04, 0.02, 0.01, 0.005, and 0.002 for 476 streamgaging stations. Stream basin characteristics computed using spatial data and a geographic information system are used as explanatory variables in regional regression model equations for six physiographic regions to estimate regional annual exceedance probabilities at gaged and ungaged sites. Weighted peak-flow values that combine annual exceedance probabilities computed from gaging station data and from regional regression equations provide improved peak-flow estimates. Text, figures, and lists are provided summarizing selected peak-flow sites, delineated physiographic regions, peak-flow estimates, basin characteristics, regional regression model equations, error estimates, definitions, data sources, and candidate regression model equations. This study supersedes previous studies of peak flows in Virginia.
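
    The station analysis described here fits the logarithms of annual peaks to a log-Pearson Type III frequency distribution. A bare-bones sketch of that fit (method of moments on log10 peaks) is shown below; it omits the regional-skew weighting, outlier handling, and other adjustments used in the actual study.

```python
# Minimal log-Pearson Type III sketch (method of moments on log10 peaks).
# Illustrative only; a full USGS analysis applies regional-skew weighting and
# low-outlier screening not shown here.
import numpy as np
from scipy import stats

def lp3_peak(annual_peaks_cfs, exceedance_prob):
    logq = np.log10(np.asarray(annual_peaks_cfs, dtype=float))
    mean, std = logq.mean(), logq.std(ddof=1)
    skew = stats.skew(logq, bias=False)
    # Quantile of the fitted Pearson Type III in log space, then back-transform.
    log_quantile = stats.pearson3.ppf(1.0 - exceedance_prob, skew,
                                      loc=mean, scale=std)
    return 10.0 ** log_quantile

# Example: 1-percent AEP ("100-year") peak from a hypothetical record:
# peaks = [3200, 4100, 2800, ...]; lp3_peak(peaks, 0.01)
```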

  8. Near doubling of storm rainfall

    NASA Astrophysics Data System (ADS)

    Feng, Zhe

    2017-12-01

    Large, intense thunderstorms frequently cause flooding and fatalities. Now, research finds that these storms may see a threefold increase in frequency and produce significantly heavier downpours in the future, far exceeding previous estimates.

  9. Floods of May 1981 in west-central Montana

    USGS Publications Warehouse

    Parrett, Charles; Omang, R.J.; Hull, J.A.; Fassler, John W.

    1982-01-01

    Extensive flooding occurred in west-central Montana during May 22-23, 1981, as a result of a series of rainstorms. Flooding was particularly severe in the communities of East Helena, Belt, and Deer Lodge. Although no lives were lost, total flood damages were estimated by the Montana Disaster Emergency Services Division to be in excess of $30 million. Peak discharges were determined at 75 sites in the flooded area. At 25 sites the May 1981 peak discharge exceeded the computed 100-year frequency flood, and at 29 sites, where previous flow records are available, the May 1981 peak discharge exceeded the previous peak of record. (USGS)

  10. A procedure for estimating the frequency distribution of CO levels in the micro-region of a highway.

    DOT National Transportation Integrated Search

    1979-01-01

    This report demonstrates that the probability of violating a "not to be exceeded more than once per year", one-hour air quality standard can be bounded from above. This result represents a significant improvement over previous methods of ascertaining...

  11. Low-flow characteristics of Virginia streams

    USGS Publications Warehouse

    Austin, Samuel H.; Krstolic, Jennifer L.; Wiegand, Ute

    2011-01-01

    Low-flow annual non-exceedance probabilities (ANEP), called probability-percent chance (P-percent chance) flow estimates, regional regression equations, and transfer methods are provided describing the low-flow characteristics of Virginia streams. Statistical methods are used to evaluate streamflow data. Analysis of Virginia streamflow data collected from 1895 through 2007 is summarized. Methods are provided for estimating low-flow characteristics of gaged and ungaged streams. The 1-, 4-, 7-, and 30-day average streamgaging station low-flow characteristics for 290 long-term, continuous-record, streamgaging stations are determined, adjusted for instances of zero flow using a conditional probability adjustment method, and presented for non-exceedance probabilities of 0.9, 0.8, 0.7, 0.6, 0.5, 0.4, 0.3, 0.2, 0.1, 0.05, 0.02, 0.01, and 0.005. Stream basin characteristics computed using spatial data and a geographic information system are used as explanatory variables in regional regression equations to estimate annual non-exceedance probabilities at gaged and ungaged sites and are summarized for 290 long-term, continuous-record streamgaging stations, 136 short-term, continuous-record streamgaging stations, and 613 partial-record streamgaging stations. Regional regression equations for six physiographic regions use basin characteristics to estimate 1-, 4-, 7-, and 30-day average low-flow annual non-exceedance probabilities at gaged and ungaged sites. Weighted low-flow values that combine computed streamgaging station low-flow characteristics and annual non-exceedance probabilities from regional regression equations provide improved low-flow estimates. Regression equations developed using the Maintenance of Variance with Extension (MOVE.1) method describe the line of organic correlation (LOC) with an appropriate index site for low-flow characteristics at 136 short-term, continuous-record streamgaging stations and 613 partial-record streamgaging stations. Monthly streamflow statistics computed on the individual daily mean streamflows of selected continuous-record streamgaging stations and curves describing flow-duration are presented. Text, figures, and lists are provided summarizing low-flow estimates, selected low-flow sites, delineated physiographic regions, basin characteristics, regression equations, error estimates, definitions, and data sources. This study supersedes previous studies of low flows in Virginia.
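
    For the short-term and partial-record sites, the report relates flows to an index station with MOVE.1, the line of organic correlation. A minimal sketch of the basic MOVE.1 relation is given below (assuming positive correlation and a concurrent record); it is not the report's full procedure.

```python
# Sketch of the MOVE.1 / line-of-organic-correlation relation used to transfer
# flow statistics from an index station (x) to a short-record site (y).
# Moments come from the period of concurrent record; positive correlation assumed.
import numpy as np

def move1(x_concurrent, y_concurrent, x_new):
    x = np.asarray(x_concurrent, float)
    y = np.asarray(y_concurrent, float)
    slope = y.std(ddof=1) / x.std(ddof=1)          # maintains variance
    return y.mean() + slope * (np.asarray(x_new, float) - x.mean())

# In practice the relation is usually fit on log-transformed low flows.
```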

  12. A method for estimating mean and low flows of streams in national forests of Montana

    USGS Publications Warehouse

    Parrett, Charles; Hull, J.A.

    1985-01-01

    Equations were developed for estimating mean annual discharge, 80-percent exceedance discharge, and 95-percent exceedance discharge for streams on national forest lands in Montana. The equations for mean annual discharge used active-channel width, drainage area, and mean annual precipitation as independent variables, with active-channel width being most significant. The equations for 80-percent exceedance discharge and 95-percent exceedance discharge used only active-channel width as an independent variable. The standard error of estimate for the best equation for estimating mean annual discharge was 27 percent. The standard errors of estimate for the equations were 67 percent for estimating 80-percent exceedance discharge and 75 percent for estimating 95-percent exceedance discharge. (USGS)

  13. Floods in Central Texas, September 7-14, 2010

    USGS Publications Warehouse

    Winters, Karl E.

    2012-01-01

    Severe flooding occurred near the Austin metropolitan area in central Texas September 7–14, 2010, because of heavy rainfall associated with Tropical Storm Hermine. The U.S. Geological Survey, in cooperation with the Upper Brushy Creek Water Control and Improvement District, determined rainfall amounts and annual exceedance probabilities for rainfall resulting in flooding in Bell, Williamson, and Travis counties in central Texas during September 2010. We documented peak streamflows and the annual exceedance probabilities for peak streamflows recorded at several streamflow-gaging stations in the study area. The 24-hour rainfall total exceeded 12 inches at some locations, with one report of 14.57 inches at Lake Georgetown. Rainfall probabilities were estimated using previously published depth-duration frequency maps for Texas. At 4 sites in Williamson County, the 24-hour rainfall had an annual exceedance probability of 0.002. Streamflow measurement data and flood-peak data from U.S. Geological Survey surface-water monitoring stations (streamflow and reservoir gaging stations) are presented, along with a comparison of September 2010 flood peaks to previous known maximums in the periods of record. Annual exceedance probabilities for peak streamflow were computed for 20 streamflow-gaging stations based on an analysis of streamflow-gaging station records. The annual exceedance probability was 0.03 for the September 2010 peak streamflow at the Geological Survey's streamflow-gaging stations 08104700 North Fork San Gabriel River near Georgetown, Texas, and 08154700 Bull Creek at Loop 360 near Austin, Texas. The annual exceedance probability was 0.02 for the peak streamflow for Geological Survey's streamflow-gaging station 08104500 Little River near Little River, Texas. The lack of similarity in the annual exceedance probabilities computed for precipitation and streamflow might be attributed to the small areal extent of the heaviest rainfall over these and the other gaged watersheds.

  14. Methods for estimating selected flow-duration and flood-frequency characteristics at ungaged sites in Central Idaho

    USGS Publications Warehouse

    Kjelstrom, L.C.

    1998-01-01

    Methods for estimating daily mean discharges for selected flow durations and flood discharge for selected recurrence intervals at ungaged sites in central Idaho were applied using data collected at streamflow-gaging stations in the area. The areal and seasonal variability of discharge from ungaged drainage basins may be described by estimating daily mean discharges that are exceeded 20, 50, and 80 percent of the time each month. At 73 gaging stations, mean monthly discharge was regressed with discharge at three points—20, 50, and 80—from daily mean flow-duration curves for each month. Regression results were improved by dividing the study area into six regions. Previously determined estimates of mean monthly discharge from about 1,200 ungaged drainage basins provided the basis for applying the developed techniques to the ungaged basins. Estimates of daily mean discharges that are exceeded 20, 50, and 80 percent of the time each month at ungaged drainage basins can be made by multiplying mean monthly discharges estimated at ungaged sites by a regression factor for the appropriate region. In general, the flow-duration data were less accurately estimated at discharges exceeded 80 percent of the time than at discharges exceeded 20 percent of the time. Curves drawn through the three points for each of the six regions were most similar in July and most different from December through March. Coefficients of determination of the regressions indicate that differences in mean monthly discharge largely explain differences in discharge at points on the daily mean flow-duration curve. Inherent in the method are errors in the technique used to estimate mean monthly discharge. Flood discharge estimates for selected recurrence intervals at ungaged sites upstream or downstream from gaging stations can be determined by a transfer technique. A weighted ratio of drainage area times flood discharge for selected recurrence intervals at the gaging station can be used to estimate flood discharge at the ungaged site. Best results likely are obtained when the difference between gaged and ungaged drainage areas is small.
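
    The transfer technique described for sites near a gage scales the gaged flood quantile by a drainage-area ratio. A generic sketch follows; the exponent is a placeholder, not the report's calibrated weighting.

```python
# Generic drainage-area-ratio transfer (illustrative only; the exponent b is a
# placeholder, not a value from the report).
def transfer_flood(q_gaged_cfs, area_gaged_mi2, area_ungaged_mi2, b=0.8):
    return q_gaged_cfs * (area_ungaged_mi2 / area_gaged_mi2) ** b

# Example: scale a 100-year peak from a gage draining 120 mi^2 to an ungaged
# site draining 95 mi^2: transfer_flood(5400, 120, 95)
```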

  15. Magnitude and Frequency of Rural Floods in the Southeastern United States, through 2006: Volume 2, North Carolina

    USGS Publications Warehouse

    Weaver, J. Curtis; Feaster, Toby D.; Gotvald, Anthony J.

    2009-01-01

    Reliable estimates of the magnitude and frequency of floods are required for the economical and safe design of transportation and water-conveyance structures. A multistate approach was used to update methods for estimating the magnitude and frequency of floods in rural, ungaged basins in North Carolina, South Carolina, and Georgia that are not substantially affected by regulation, tidal fluctuations, or urban development. In North Carolina, annual peak-flow data through September 2006 were available for 584 sites; 402 of these sites had the 10 or more years of systematic record required for at-site flood-frequency analysis. Following data reviews and the computation of 20 physical and climatic basin characteristics for each station as well as at-site flood-frequency statistics, annual peak-flow data were identified for 363 sites in North Carolina suitable for use in this analysis. Among these 363 sites, 19 sites had records that could be divided into unregulated and regulated/channelized annual peak discharges, which means peak-flow records were identified for a total of 382 cases in North Carolina. Considering the 382 cases, at-site flood-frequency statistics are provided for 333 unregulated cases (also used for the regression database) and 49 regulated/channelized cases. The flood-frequency statistics for the 333 unregulated sites were combined with data for sites from South Carolina, Georgia, and adjacent parts of Alabama, Florida, Tennessee, and Virginia to create a database of 943 sites considered for use in the regional regression analysis. Flood-frequency statistics were computed by fitting logarithms (base 10) of the annual peak flows to a log-Pearson Type III distribution. As part of the computation process, a new generalized skew coefficient was developed by using a Bayesian generalized least-squares regression model. Exploratory regression analyses using ordinary least-squares regression completed on the initial database of 943 sites resulted in defining five hydrologic regions for North Carolina, South Carolina, and Georgia. Stations with drainage areas less than 1 square mile were removed from the database, and a procedure to examine for basin redundancy (based on drainage area and periods of record) also resulted in the removal of some stations from the regression database. Flood-frequency estimates and basin characteristics for 828 gaged stations were combined to form the final database that was used in the regional regression analysis. Regional regression analysis, using generalized least-squares regression, was used to develop a set of predictive equations that can be used for estimating the 50-, 20-, 10-, 4-, 2-, 1-, 0.5-, and 0.2-percent chance exceedance flows for rural, ungaged basins in North Carolina, South Carolina, and Georgia. The final predictive equations are all functions of drainage area and the percentage of drainage basin within each of the five hydrologic regions. Average errors of prediction for these regression equations range from 34.0 to 47.7 percent. Discharge estimates determined from the systematic records for the current study are, on average, larger in magnitude than those from a previous study for the highest percent chance exceedances (50 and 20 percent) and tend to be smaller than those from the previous study for the lower percent chance exceedances when all sites are considered as a group.
For example, mean differences for sites in the Piedmont hydrologic region range from positive 0.5 percent for the 50-percent chance exceedance flow to negative 4.6 percent for the 0.2-percent chance exceedance flow when stations are grouped by hydrologic region. Similarly for the same hydrologic region, median differences range from positive 0.9 percent for the 50-percent chance exceedance flow to negative 7.1 percent for the 0.2-percent chance exceedance flow. However, mean and median percentage differences between the estimates from the previous and curre

  16. Global biomass production potentials exceed expected future demand without the need for cropland expansion

    PubMed Central

    Mauser, Wolfram; Klepper, Gernot; Zabel, Florian; Delzeit, Ruth; Hank, Tobias; Putzenlechner, Birgitta; Calzadilla, Alvaro

    2015-01-01

    Global biomass demand is expected to roughly double between 2005 and 2050. Current studies suggest that agricultural intensification through optimally managed crops on today's cropland alone is insufficient to satisfy future demand. In practice though, improving crop growth management through better technology and knowledge almost inevitably goes along with (1) improving farm management with increased cropping intensity and more annual harvests where feasible and (2) an economically more efficient spatial allocation of crops which maximizes farmers' profit. By explicitly considering these two factors we show that, without expansion of cropland, today's global biomass potentials substantially exceed previous estimates and even 2050s' demands. We attribute a 39% increase in estimated global production potentials to increasing cropping intensities and a 30% increase to the spatial reallocation of crops to their profit-maximizing locations. The additional potentials would make cropland expansion redundant. Their geographic distribution points at possible hotspots for future intensification. PMID:26558436

  17. Global biomass production potentials exceed expected future demand without the need for cropland expansion.

    PubMed

    Mauser, Wolfram; Klepper, Gernot; Zabel, Florian; Delzeit, Ruth; Hank, Tobias; Putzenlechner, Birgitta; Calzadilla, Alvaro

    2015-11-12

    Global biomass demand is expected to roughly double between 2005 and 2050. Current studies suggest that agricultural intensification through optimally managed crops on today's cropland alone is insufficient to satisfy future demand. In practice though, improving crop growth management through better technology and knowledge almost inevitably goes along with (1) improving farm management with increased cropping intensity and more annual harvests where feasible and (2) an economically more efficient spatial allocation of crops which maximizes farmers' profit. By explicitly considering these two factors we show that, without expansion of cropland, today's global biomass potentials substantially exceed previous estimates and even 2050s' demands. We attribute a 39% increase in estimated global production potentials to increasing cropping intensities and a 30% increase to the spatial reallocation of crops to their profit-maximizing locations. The additional potentials would make cropland expansion redundant. Their geographic distribution points at possible hotspots for future intensification.

  18. Flood-frequency characteristics of Wisconsin streams

    USGS Publications Warehouse

    Walker, John F.; Peppler, Marie C.; Danz, Mari E.; Hubbard, Laura E.

    2017-05-22

    Flood-frequency characteristics for 360 gaged sites on unregulated rural streams in Wisconsin are presented for percent annual exceedance probabilities ranging from 0.2 to 50 using a statewide skewness map developed for this report. Equations of the relations between flood-frequency and drainage-basin characteristics were developed by multiple-regression analyses. Flood-frequency characteristics for ungaged sites on unregulated, rural streams can be estimated by use of the equations presented in this report. The State was divided into eight areas of similar physiographic characteristics. The most significant basin characteristics are drainage area, soil saturated hydraulic conductivity, main-channel slope, and several land-use variables. The standard error of prediction for the equation for the 1-percent annual exceedance probability flood ranges from 56 to 70 percent for Wisconsin streams; these values are larger than results presented in previous reports. The increase in the standard error of prediction is likely due to increased variability of the annual-peak discharges, resulting in increased variability in the magnitude of flood peaks at higher frequencies. For each of the unregulated rural streamflow-gaging stations, a weighted estimate based on the at-site log-Pearson Type III analysis and the multiple-regression results was determined. The weighted estimate generally has a lower uncertainty than either the log-Pearson Type III or multiple-regression estimates. For regulated streams, a graphical method for estimating flood-frequency characteristics was developed from the relations of discharge and drainage area for selected annual exceedance probabilities. Graphs for the major regulated streams in Wisconsin are presented in the report.
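
    Weighted estimates of this kind are conventionally inverse-variance weightings of the at-site and regression estimates, carried out in log space. The sketch below shows that general form, not necessarily the exact weights used in this report.

```python
# Inverse-variance weighting of an at-site estimate and a regression estimate
# (done in log space). A common convention for combining the two; the report's
# exact variances and weights may differ.
import numpy as np

def weighted_estimate(q_site, var_site, q_reg, var_reg):
    w_site, w_reg = 1.0 / var_site, 1.0 / var_reg
    logq = (w_site * np.log10(q_site) + w_reg * np.log10(q_reg)) / (w_site + w_reg)
    return 10.0 ** logq
```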

  19. A global probabilistic tsunami hazard assessment from earthquake sources

    USGS Publications Warehouse

    Davies, Gareth; Griffin, Jonathan; Lovholt, Finn; Glimsdal, Sylfest; Harbitz, Carl; Thio, Hong Kie; Lorito, Stefano; Basili, Roberto; Selva, Jacopo; Geist, Eric L.; Baptista, Maria Ana

    2017-01-01

    Large tsunamis occur infrequently but have the capacity to cause enormous numbers of casualties, damage to the built environment and critical infrastructure, and economic losses. A sound understanding of tsunami hazard is required to underpin management of these risks, and while tsunami hazard assessments are typically conducted at regional or local scales, globally consistent assessments are required to support international disaster risk reduction efforts, and can serve as a reference for local and regional studies. This study presents a global-scale probabilistic tsunami hazard assessment (PTHA), extending previous global-scale assessments based largely on scenario analysis. Only earthquake sources are considered, as they represent about 80% of the recorded damaging tsunami events. Globally extensive estimates of tsunami run-up height are derived at various exceedance rates, and the associated uncertainties are quantified. Epistemic uncertainties in the exceedance rates of large earthquakes often lead to large uncertainties in tsunami run-up. Deviations between modelled tsunami run-up and event observations are quantified, and found to be larger than suggested in previous studies. Accounting for these deviations in PTHA is important, as it leads to a pronounced increase in predicted tsunami run-up for a given exceedance rate.

  20. Estimating earthquake magnitudes from reported intensities in the central and eastern United States

    USGS Publications Warehouse

    Boyd, Oliver; Cramer, Chris H.

    2014-01-01

    A new macroseismic intensity prediction equation is derived for the central and eastern United States and is used to estimate the magnitudes of the 1811–1812 New Madrid, Missouri, and 1886 Charleston, South Carolina, earthquakes. This work improves upon previous derivations of intensity prediction equations by including additional intensity data, correcting magnitudes in the intensity datasets to moment magnitude, and accounting for the spatial and temporal population distributions. The new relation leads to moment magnitude estimates for the New Madrid earthquakes that are toward the lower range of previous studies. Depending on the intensity dataset to which the new macroseismic intensity prediction equation is applied, mean estimates for the 16 December 1811, 23 January 1812, and 7 February 1812 mainshocks, and 16 December 1811 dawn aftershock range from 6.9 to 7.1, 6.8 to 7.1, 7.3 to 7.6, and 6.3 to 6.5, respectively. One‐sigma uncertainties on any given estimate could be as high as 0.3–0.4 magnitude units. We also estimate a magnitude of 6.9±0.3 for the 1886 Charleston, South Carolina, earthquake. We find a greater range of magnitude estimates when also accounting for multiple macroseismic intensity prediction equations. The inability to accurately and precisely ascertain magnitude from intensities increases the uncertainty of the central United States earthquake hazard by nearly a factor of two. Relative to the 2008 national seismic hazard maps, our range of possible 1811–1812 New Madrid earthquake magnitudes increases the coefficient of variation of seismic hazard estimates for Memphis, Tennessee, by 35%–42% for ground motions expected to be exceeded with a 2% probability in 50 years and by 27%–35% for ground motions expected to be exceeded with a 10% probability in 50 years.

  1. Methods for estimating selected low-flow statistics and development of annual flow-duration statistics for Ohio

    USGS Publications Warehouse

    Koltun, G.F.; Kula, Stephanie P.

    2013-01-01

    This report presents the results of a study to develop methods for estimating selected low-flow statistics and for determining annual flow-duration statistics for Ohio streams. Regression techniques were used to develop equations for estimating 10-year recurrence-interval (10-percent annual-nonexceedance probability) low-flow yields, in cubic feet per second per square mile, with averaging periods of 1, 7, 30, and 90-day(s), and for estimating the yield corresponding to the long-term 80-percent duration flow. These equations, which estimate low-flow yields as a function of a streamflow-variability index, are based on previously published low-flow statistics for 79 long-term continuous-record streamgages with at least 10 years of data collected through water year 1997. When applied to the calibration dataset, average absolute percent errors for the regression equations ranged from 15.8 to 42.0 percent. The regression results have been incorporated into the U.S. Geological Survey (USGS) StreamStats application for Ohio (http://water.usgs.gov/osw/streamstats/ohio.html) in the form of a yield grid to facilitate estimation of the corresponding streamflow statistics in cubic feet per second. Logistic-regression equations also were developed and incorporated into the USGS StreamStats application for Ohio for selected low-flow statistics to help identify occurrences of zero-valued statistics. Quantiles of daily and 7-day mean streamflows were determined for annual and annual-seasonal (September–November) periods for each complete climatic year of streamflow-gaging station record for 110 selected streamflow-gaging stations with 20 or more years of record. The quantiles determined for each climatic year were the 99-, 98-, 95-, 90-, 80-, 75-, 70-, 60-, 50-, 40-, 30-, 25-, 20-, 10-, 5-, 2-, and 1-percent exceedance streamflows. Selected exceedance percentiles of the annual-exceedance percentiles were subsequently computed and tabulated to help facilitate consideration of the annual risk of exceedance or nonexceedance of annual and annual-seasonal-period flow-duration values. The quantiles are based on streamflow data collected through climatic year 2008.

  2. Redrawing the US Obesity Landscape: Bias-Corrected Estimates of State-Specific Adult Obesity Prevalence

    PubMed Central

    Ward, Zachary J.; Long, Michael W.; Resch, Stephen C.; Gortmaker, Steven L.; Cradock, Angie L.; Giles, Catherine; Hsiao, Amber; Wang, Y. Claire

    2016-01-01

    Background: State-level estimates from the Centers for Disease Control and Prevention (CDC) underestimate the obesity epidemic because they use self-reported height and weight. We describe a novel bias-correction method and produce corrected state-level estimates of obesity and severe obesity. Methods: Using non-parametric statistical matching, we adjusted self-reported data from the Behavioral Risk Factor Surveillance System (BRFSS) 2013 (n = 386,795) using measured data from the National Health and Nutrition Examination Survey (NHANES) (n = 16,924). We validated our national estimates against NHANES and estimated bias-corrected state-specific prevalence of obesity (BMI≥30) and severe obesity (BMI≥35). We compared these results with previous adjustment methods. Results: Compared to NHANES, self-reported BRFSS data underestimated national prevalence of obesity by 16% (28.67% vs 34.01%), and severe obesity by 23% (11.03% vs 14.26%). Our method was not significantly different from NHANES for obesity or severe obesity, while previous methods underestimated both. Only four states had a corrected obesity prevalence below 30%, with four exceeding 40%; in contrast, most states were below 30% in CDC maps. Conclusions: Twelve million adults with obesity (including 6.7 million with severe obesity) were misclassified by CDC state-level estimates. Previous bias-correction methods also resulted in underestimates. Accurate state-level estimates are necessary to plan for resources to address the obesity epidemic. PMID:26954566

  3. The Black Hills (South Dakota) flood of June 1972: Impacts and implications

    Treesearch

    Howard K. Orr

    1973-01-01

    Rains of 12 inches or more in 6 hours fell on the east slopes of the Black Hills the night of June 9, 1972. Resulting flash floods exacted a disastrous toll in human life and property. Rainfall and discharge so greatly exceeded previous records that recurrence intervals have been presented in terms of multiples of the estimated 50- or 100- year event. Quick runoff was...

  4. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yan, H; Chen, Z; Nath, R

    Purpose: kV fluoroscopic imaging combined with MV treatment beam imaging has been investigated for intrafractional motion monitoring and correction. It is, however, subject to additional kV imaging dose to normal tissue. To balance tracking accuracy and imaging dose, we previously proposed an adaptive imaging strategy to dynamically decide future imaging type and moments based on motion tracking uncertainty. kV imaging may be used continuously for maximal accuracy or only when the position uncertainty (probability of out of threshold) is high if a preset imaging dose limit is considered. In this work, we propose more accurate methods to estimate tracking uncertainty through analyzing acquired data in real-time. Methods: We simulated the motion tracking process based on a previously developed imaging framework (MV + initial seconds of kV imaging) using real-time breathing data from 42 patients. Motion tracking errors for each time point were collected together with the time point’s corresponding features, such as tumor motion speed and 2D tracking error of previous time points, etc. We tested three methods for error uncertainty estimation based on the features: conditional probability distribution, logistic regression modeling, and support vector machine (SVM) classification to detect errors exceeding a threshold. Results: For conditional probability distribution, polynomial regressions on three features (previous tracking error, prediction quality, and cosine of the angle between the trajectory and the treatment beam) showed strong correlation with the variation (uncertainty) of the mean 3D tracking error and its standard deviation: R-square = 0.94 and 0.90, respectively. The logistic regression and SVM classification successfully identified about 95% of tracking errors exceeding the 2.5 mm threshold. Conclusion: The proposed methods can reliably estimate the motion tracking uncertainty in real-time, which can be used to guide adaptive additional imaging to confirm the tumor is within the margin or initialize motion compensation if it is out of the margin.
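
    A rough sketch of the third approach mentioned (classifying whether the tracking error exceeds a 2.5 mm threshold from features such as the previous error and motion speed) is given below with synthetic stand-in data; it is illustrative only and is not the authors' pipeline.

```python
# Illustrative sketch (not the authors' pipeline): classify whether the 3D
# tracking error exceeds 2.5 mm from stand-in features, using logistic
# regression and an SVM.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 3))           # stand-in features (prev. error, speed, angle)
err = 1.0 + 0.8 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=2000)
y = (err > 2.5).astype(int)              # 1 if the error exceeds the 2.5 mm threshold

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
for model in (LogisticRegression(max_iter=1000), SVC(kernel="rbf")):
    model.fit(X_tr, y_tr)
    print(type(model).__name__, "accuracy:", round(model.score(X_te, y_te), 3))
```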

  5. Streamflow statistics for development of water rights claims for the Jarbidge Wild and Scenic River, Owyhee Canyonlands Wilderness, Idaho, 2013-14: a supplement to Scientific Investigations Report 2013-5212

    USGS Publications Warehouse

    Wood, Molly S.

    2014-01-01

    The U.S. Geological Survey (USGS), in cooperation with the Bureau of Land Management (BLM), estimated streamflow statistics for stream segments designated “Wild,” “Scenic,” or “Recreational” under the National Wild and Scenic Rivers System in the Owyhee Canyonlands Wilderness in southwestern Idaho. The streamflow statistics were used by the BLM to develop and file a draft, federal reserved water right claim to protect federally designated “outstanding remarkable values” in the Jarbidge River. The BLM determined that the daily mean streamflow equaled or exceeded 20, 50, and 80 percent of the time during bimonthly periods (two periods per month) and the bankfull (66.7-percent annual exceedance probability) streamflow are important thresholds for maintaining outstanding remarkable values. Although streamflow statistics for the Jarbidge River below Jarbidge, Nevada (USGS 13162225) were published previously in 2013 and used for the draft water right claim, the BLM and USGS have since recognized the need to refine streamflow statistics given the approximate 40 river mile distance and intervening tributaries between the original point of estimation (USGS 13162225) and at the mouth of the Jarbidge River, which is the downstream end of the Wild and Scenic River segment. A drainage-area-ratio method was used in 2013 to estimate bimonthly exceedance probability streamflow statistics at the mouth of the Jarbidge River based on available streamgage data on the Jarbidge and East Fork Jarbidge Rivers. The resulting bimonthly streamflow statistics were further adjusted using a scaling factor calculated from a water balance on streamflow statistics calculated for the Bruneau and East Fork Bruneau Rivers and Sheep Creek. The final, adjusted bimonthly exceedance probability and bankfull streamflow statistics compared well with available verification datasets (including discrete streamflow measurements made at the mouth of the Jarbidge River) and are considered the best available estimates for streamflow statistics in the Jarbidge Wild and Scenic River segment.

  6. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Paller, M.; Blas, S.

    The upper portion of Lower Three Runs includes several ponds, reservoirs, and canals that were formerly used as a cooling system for nuclear production reactors. This area was divided into nine exposure areas (EAs) for the assessment of environmental contamination resulting from past reactor operations and other industrial processes. A tiered screening process identified several contaminants of potential concern including aluminum, cyanide, lead, manganese, mercury, DDD, DDE, and DDT. Risks posed by these contaminants to ecological receptors (river otter, belted kingfisher, raccoon, and blue heron) were assessed using contaminant exposure models that estimated contaminant intake resulting from ingestion of food,more » water, and sediment/ soil and compared these intakes with toxicity reference values (TRVs). The contaminant exposure models showed that the TRVs were not exceeded in the otter model, exceeded by aluminum in EA 7 (Pond 2 and associated canals) in the raccoon model, and exceeded by mercury in EAs 2, 3 (Pond B), 6 (Par Pond), and 8 (Ponds 4 and 5 and Canal to Pond C) in both the kingfisher and blue heron models. Hazard quotients (total exposure dose divided by the TRV) were 2.8 for aluminum and 1.7- 3.6 for mercury. The primary route of exposure for aluminum was the ingestion of soil, and the primary route of exposure for mercury was the ingestion of mercury contaminated fish. Elevated levels of mercury in fish were at least partly the result of the aerial deposition of mercury onto Lower Three Runs and its watershed. The atmospheric deposition of mercury creates pervasive contamination in fish throughout the Savannah River basin. Another possible source of mercury was the discharge of mercury contaminated Savannah River water into the Lower Three Runs cooling ponds and canals during previous years of reactor operation. This contamination originated from industries located upstream of the SRS. The aluminum exceedance for the raccoon was likely the result of naturally high aluminum levels in SRS soils rather than SRS operations. Aluminum exceedances have previously been observed in relatively undisturbed background locations as well as areas affected by SRS operations. Aluminum exceedances are more likely with the raccoon than the other receptors because it consumes more soil as a result of its feeding habits. Sensitivity analysis showed that model uncertainty can be reduced by adequate sampling of key variables (e.g., fish and sediments). Although sediment samples were collected from all EAs, fish samples were not collected from three EAs and some analytes (pesticides and cyanide) were not measured in fish. Water-to-fish concentration ratios were used to estimate contaminant levels in fish when direct measurements from fish were unavailable; however, such estimates are potentially less accurate than direct measurements.« less

  7. Storm and flood of July 5, 1989, in northern New Castle County, Delaware

    USGS Publications Warehouse

    Paulachok, G.N.; Simmons, R.H.; Tallman, A.J.

    1995-01-01

    On July 5, 1989, intense rainfall from the remnants of Tropical Storm Allison caused severe flooding in northern New Castle County, Delaware. The flooding claimed three lives, and damage was estimated to be $5 million. Flood conditions were aggravated locally by rapid runoff from expansive urban areas. Record- breaking floods occurred on many streams in northern New Castle County. Peak discharges at three active, continuous-record streamflow-gaging stations, one active crest-stage station, and at two discontinued streamflow-gaging stations exceeded previously recorded maximums. Estimated recurrence intervals for peak flow at the three active, continuous-record streamflow stations exceeded 100 years. The U.S. Geological Survey conducted comprehensive post-flood surveys to determine peak water-surface elevations that occurred on affected streams and their tributaries during the flood of July 5, 1989. Detailed surveys were performed near bridge crossings to provide additional information on the extent and severity of the flooding and the effects of hydraulic constrictions on floodwaters.

  8. Effect of an Expenditure Cap on Low-Income Seniors' Drug Use and Spending in a State Pharmacy Assistance Program

    PubMed Central

    Bishop, Christine E; Ryan, Andrew M; Gilden, Daniel M; Kubisiak, Joanna; Thomas, Cindy Parks

    2009-01-01

    Objective: To estimate the impact of a soft cap (a ceiling on utilization beyond which insured enrollees pay a higher copayment) on low-income elders' use of prescription drugs. Data Sources and Setting: Claims and enrollment files for the first year (June 2002 through May 2003) of the Illinois SeniorCare program, a state pharmacy assistance program, and Medicare claims and enrollment files, 2001 through 2003. SeniorCare enrolled non-Medicaid-eligible elders with income less than 200 percent of Federal Poverty Level. Minimal copays increased by 20 percent of prescription cost when enrollee expenditures reached $1,750. Research Design: Models were estimated for three dependent variables: enrollees' average monthly utilization (number of prescriptions), spending, and the proportion of drugs that were generic rather than brand. Observations included all program enrollees who exceeded the cap and covered two periods, before and after the cap was exceeded. Principal Findings: On average, enrollees exceeding the cap reduced the number of drugs they purchased by 14 percent, monthly expenditures decreased by 19 percent, and the proportion generic increased by 4 percent, all significant at p<.01. Impacts were greater for enrollees with greater initial spending, for enrollees without one of five chronic illness diagnoses in the previous calendar year, and for enrollees with lower income. Conclusions: Near-poor elders enrolled in plans with caps or coverage gaps, including Part D plans, may face sharp declines in utilization when they exceed these thresholds. PMID:19291168

  9. Estimates of critical acid loads and exceedances for forest soils across the conterminous United States.

    PubMed

    McNulty, Steven G; Cohen, Erika C; Moore Myers, Jennifer A; Sullivan, Timothy J; Li, Harbin

    2007-10-01

    Concern regarding the impacts of continued nitrogen and sulfur deposition on ecosystem health has prompted the development of critical acid load assessments for forest soils. A critical acid load is a quantitative estimate of exposure to one or more pollutants at or above which harmful acidification-related effects on sensitive elements of the environment occur. A pollutant load in excess of a critical acid load is termed exceedance. This study combined a simple mass balance equation with national-scale databases to estimate critical acid load and exceedance for forest soils at a 1-km² spatial resolution across the conterminous US. This study estimated that about 15% of US forest soils are in exceedance of their critical acid load by more than 250 eq ha⁻¹ yr⁻¹, including much of New England and West Virginia. Very few areas of exceedance were predicted in the western US.
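
    Exceedance in this framework is simply acidifying deposition in excess of the computed critical load. The sketch below shows that final comparison only; the mass-balance terms that produce the critical load itself (base-cation weathering, uptake, critical ANC leaching) are not reproduced here, and the example numbers are hypothetical.

```python
# Exceedance = acidifying deposition minus the critical acid load.
# The terms behind the critical load itself are not modeled in this sketch.
def exceedance(s_dep_eq_ha_yr, n_dep_eq_ha_yr, critical_load_eq_ha_yr):
    """Positive result means the critical acid load is exceeded (eq ha^-1 yr^-1)."""
    return (s_dep_eq_ha_yr + n_dep_eq_ha_yr) - critical_load_eq_ha_yr

# Example with hypothetical values: exceedance(900, 600, 1200) -> 300
```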

  10. Improvements to photometry. Part 1: Better estimation of derivatives in extinction and transformation equations

    NASA Technical Reports Server (NTRS)

    Young, Andrew T.

    1988-01-01

    Atmospheric extinction in wideband photometry is examined both analytically and through numerical simulations. If the derivatives that appear in the Stromgren-King theory are estimated carefully, it appears that wideband measurements can be transformed to outside the atmosphere with errors no greater than a millimagnitude. A numerical analysis approach is used to estimate derivatives of both the stellar and atmospheric extinction spectra, avoiding previous assumptions that the extinction follows a power law. However, it is essential to satisfy the requirements of the sampling theorem to keep aliasing errors small. Typically, this means that band separations cannot exceed half of the full width at half-peak response. Further work is needed to examine higher-order effects, which may well be significant.

  11. U. S. PHOSPHATE INDUSTRY: REVISED PROSPECTS AND POTENTIAL.

    USGS Publications Warehouse

    McKelvey, Vincent E.

    1985-01-01

    Although the United States is the world's largest producer and exporter of phosphates, serious doubts have arisen in recent years that U.S. deposits could sustain this important role. The development of borehole mining, i.e., extracting the phosphate matrix as a slurry through a drill hole, however, is cause for optimism. Commercial borehole mining is still years away, but the potential advantages are numerous and important. Recent surveys also suggest that offshore deposits and deeply buried onshore deposits greatly exceed previous estimates. On the basis of the new technology and revised resource estimates, one can easily see the potential for increased production from U.S. deposits.

  12. Working times of elastomeric impression materials determined by dimensional accuracy.

    PubMed

    Tan, E; Chai, J; Wozniak, W T

    1996-01-01

    The working times of five poly(vinyl siloxane) impression materials were estimated by evaluating the dimensional accuracy of stone dies of impressions of a standard model made at successive time intervals. The stainless steel standard model was represented by two abutments having known distances between landmarks in three dimensions. Three dimensions in the x-, y-, and z-axes of the stone dies were measured with a traveling microscope. A time interval was rejected as being within the working time if the percentage change of the resultant dies, in any dimension, was statistically different from those measured from stone dies from previous time intervals. The absolute dimensions of those dies from the rejected time interval also must have exceeded all those from previous time intervals. Results showed that the working times estimated with this method generally were about 30 seconds longer than those recommended by the manufacturers.

  13. Volcanic Plume Heights on Mars: Limits of Validity for Convective Models

    NASA Technical Reports Server (NTRS)

    Glaze, Lori S.; Baloga, Stephen M.

    2002-01-01

    Previous studies have overestimated volcanic plume heights on Mars. In this work, we demonstrate that volcanic plume rise models, as currently formulated, have only limited validity in any environment. These limits are easily violated in the current Mars environment and may also be violated for terrestrial and early Mars conditions. We indicate some of the shortcomings of the model with emphasis on the limited applicability to current Mars conditions. Specifically, basic model assumptions are violated when (1) vertical velocities exceed the speed of sound, (2) radial expansion rates exceed the speed of sound, (3) radial expansion rates approach or exceed the vertical velocity, or (4) plume radius grossly exceeds plume height. All of these criteria are violated for the typical Mars example given here. Solutions imply that the convective-rise model is only valid to a height of approximately 10 kilometers. The reason for the model breakdown is that the current Mars atmosphere is not of sufficient density to satisfy the conservation equations. It is likely that diffusion and other effects governed by higher-order differential equations are important within the first few kilometers of rise. When the same criteria are applied to eruptions into a higher-density early Mars atmosphere, we find that eruption rates higher than 1.4 × 10⁹ kilograms per second also violate model assumptions. This implies a maximum extent of approximately 65 kilometers for convective plumes on early Mars. The estimated plume heights for both current and early Mars are significantly lower than those previously predicted in the literature. Therefore, global-scale distribution of ash seems implausible.

  14. Concentrations and Potential Health Risks of Metals in Lip Products

    PubMed Central

    Liu, Sa; Rojas-Cheatham, Ann

    2013-01-01

    Background: Metal content in lip products has been an issue of concern. Objectives: We measured lead and eight other metals in a convenience sample of 32 lip products used by young Asian women in Oakland, California, and assessed potential health risks related to estimated intakes of these metals. Methods: We analyzed lip products by inductively coupled plasma optical emission spectrometry and used previous estimates of lip product usage rates to determine daily oral intakes. We derived acceptable daily intakes (ADIs) based on information used to determine public health goals for exposure, and compared ADIs with estimated intakes to assess potential risks. Results: Most of the tested lip products contained high concentrations of titanium and aluminum. All examined products had detectable manganese. Lead was detected in 24 products (75%), with an average concentration of 0.36 ± 0.39 ppm, including one sample with 1.32 ppm. When used at the estimated average daily rate, estimated intakes were > 20% of ADIs derived for aluminum, cadmium, chromium, and manganese. In addition, average daily use of 10 products tested would result in chromium intake exceeding our estimated ADI for chromium. For high rates of product use (above the 95th percentile), the percentages of samples with estimated metal intakes exceeding ADIs were 3% for aluminum, 68% for chromium, and 22% for manganese. Estimated intakes of lead were < 20% of ADIs for average and high use. Conclusions: Cosmetics safety should be assessed not only by the presence of hazardous contents, but also by comparing estimated exposures with health-based standards. In addition to lead, metals such as aluminum, cadmium, chromium, and manganese require further investigation. PMID:23674482
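
    The screening logic implied by the abstract is: estimated daily intake = concentration × amount of product ingested, compared against an acceptable daily intake (ADI). The sketch below illustrates that arithmetic only; the usage rate and ADI values are placeholders, not figures from the study.

```python
# Sketch of the intake screening implied by the abstract:
# daily intake (ug/day) = concentration (ppm = ug/g) * product ingested (g/day),
# then compare with an acceptable daily intake (ADI). Usage rate and ADI below
# are placeholders, not values from the study.
def fraction_of_adi(conc_ppm, usage_g_per_day, adi_ug_per_day):
    intake_ug_per_day = conc_ppm * usage_g_per_day
    return intake_ug_per_day / adi_ug_per_day

# e.g. fraction_of_adi(0.36, 0.024, 1.0) uses the average lead concentration
# reported (0.36 ppm) with hypothetical usage and ADI values.
```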

  15. Phytoplankton growth balanced by clam and zooplankton grazing and net transport into the low-salinity zone of the San Francisco Estuary

    USGS Publications Warehouse

    Kimmerer, Wim J.; Thompson, Janet K.

    2014-01-01

    We estimated the influence of planktonic and benthic grazing on phytoplankton in the strongly tidal, river-dominated northern San Francisco Estuary using data from an intensive study of the low salinity foodweb in 2006–2008 supplemented with long-term monitoring data. A drop in chlorophyll concentration in 1987 had previously been linked to grazing by the introduced clam Potamocorbula amurensis, but numerous changes in the estuary may be linked to the continued low chlorophyll. We asked whether phytoplankton continued to be suppressed by grazing and what proportion of the grazing was by benthic bivalves. A mass balance of phytoplankton biomass included estimates of primary production and grazing by microzooplankton, mesozooplankton, and clams. Grazing persistently exceeded net phytoplankton growth especially for larger cells, and grazing by microzooplankton often exceeded that by clams. A subsidy of phytoplankton from other regions roughly balanced the excess of grazing over growth. Thus, the influence of bivalve grazing on phytoplankton biomass can be understood only in the context of limits on phytoplankton growth, total grazing, and transport.

  16. A non-stationary cost-benefit based bivariate extreme flood estimation approach

    NASA Astrophysics Data System (ADS)

    Qi, Wei; Liu, Junguo

    2018-02-01

    Cost-benefit analysis and flood frequency analysis have been integrated into a comprehensive framework to estimate cost-effective design values. However, previous cost-benefit-based extreme flood estimation relies on stationary assumptions and analyzes dependent flood variables separately. A Non-Stationary Cost-Benefit based bivariate design flood estimation (NSCOBE) approach is developed in this study to investigate the influence of non-stationarities in both the dependence of flood variables and the marginal distributions on extreme flood estimation. The dependence is modeled utilizing copula functions. Previous design flood selection criteria are not suitable for NSCOBE since they ignore the time-changing dependence of flood variables. Therefore, a risk calculation approach is proposed based on non-stationarities in both marginal probability distributions and copula functions. A case study with 54 years of observed data is utilized to illustrate the application of NSCOBE. Results show NSCOBE can effectively integrate non-stationarities in both copula functions and marginal distributions into cost-benefit-based design flood estimation. It is also found that there is a trade-off between maximum probability of exceedance calculated from copula functions and marginal distributions. This study for the first time provides a new approach towards a better understanding of the influence of non-stationarities in both copula functions and marginal distributions on extreme flood estimation, and could be beneficial to cost-benefit-based non-stationary bivariate design flood estimation across the world.
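
    As a concrete illustration of how a copula couples the two flood variables, the sketch below evaluates joint "OR" and "AND" exceedance probabilities from marginal non-exceedance probabilities using a Gumbel copula; the copula family and parameter are assumptions, not the paper's fitted, time-varying model.

```python
# Illustration of how a copula turns marginal non-exceedance probabilities
# into joint exceedance probabilities. A Gumbel copula with theta = 2.0 is
# assumed here; the paper's fitted, non-stationary copula and marginals are
# not reproduced.
import numpy as np

def gumbel_copula(u, v, theta=2.0):
    return np.exp(-(((-np.log(u)) ** theta + (-np.log(v)) ** theta) ** (1.0 / theta)))

def or_exceedance(u, v, theta=2.0):
    """P(X > x or Y > y): at least one variable exceeds its design value."""
    return 1.0 - gumbel_copula(u, v, theta)

def and_exceedance(u, v, theta=2.0):
    """P(X > x and Y > y): both variables exceed their design values."""
    return 1.0 - u - v + gumbel_copula(u, v, theta)

# e.g. or_exceedance(0.99, 0.99) for two 100-year marginal quantiles.
```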

  17. Spectral and temperature-dependent infrared emissivity measurements of painted metals for improved temperature estimation during laser damage testing

    NASA Astrophysics Data System (ADS)

    Baumann, Sean M.; Keenan, Cameron; Marciniak, Michael A.; Perram, Glen P.

    2014-10-01

    A database of spectral and temperature-dependent emissivities was created for painted Al-alloy laser-damage-testing targets for the purpose of reducing the uncertainty with which temperature on the front and back target surfaces can be estimated during laser-damage testing. Previous temperature estimates had been made by fitting an assumed gray-body radiance curve to the calibrated spectral radiance data collected from the back surface using a Telops Imaging Fourier Transform Spectrometer (IFTS). In this work, temperature-dependent spectral emissivity measurements of the samples were made from room temperature to 500 °C using a Surface Optics Corp. SOC-100 Hemispherical Directional Reflectometer (HDR) with Nicolet FTS. Of particular interest was a high-temperature matte-black enamel paint used to coat the rear surfaces of the Al-alloy samples. The paint had been assumed to have a spectrally flat and temperature-invariant emissivity. However, the data collected using the HDR showed both spectral variation and temperature dependence. The uncertainty in back-surface temperature estimation during laser-damage testing made using the measured emissivities was improved from greater than ±10 °C to less than ±5 °C for IFTS pixels away from the laser burn-through hole, where temperatures never exceeded those used in the SOC-100 HDR measurements. At beam center, where temperatures exceeded those used in the SOC-100 HDR, uncertainty in temperature estimates grew beyond those made assuming gray-body emissivity. Accurate temperature estimations during laser-damage testing are useful in informing a predictive model for future high-energy-laser weapon applications.

  18. Survival dynamics of scleractinian coral larvae and implications for dispersal

    NASA Astrophysics Data System (ADS)

    Graham, E. M.; Baird, A. H.; Connolly, S. R.

    2008-09-01

    Survival of pelagic marine larvae is an important determinant of dispersal potential. Despite this, few estimates of larval survival are available. For scleractinian corals, few studies of larval survival are long enough to provide accurate estimates of longevity. Moreover, changes in mortality rates during larval life, expected on theoretical grounds, have implications for the degree of connectivity among reefs and have not been quantified for any coral species. This study quantified the survival of larvae from five broadcast-spawning scleractinian corals ( Acropora latistella, Favia pallida, Pectinia paeonia, Goniastrea aspera, and Montastraea magnistellata) to estimate larval longevity, and to test for changes in mortality rates as larvae age. Maximum lifespans ranged from 195 to 244 d. These longevities substantially exceed those documented previously for coral larvae that lack zooxanthellae, and they exceed predictions based on metabolic rates prevailing early in larval life. In addition, larval mortality rates exhibited strong patterns of variation throughout the larval stage. Three periods were identified in four species: high initial rates of mortality; followed by a low, approximately constant rate of mortality; and finally, progressively increasing mortality after approximately 100 d. The lifetimes observed in this study suggest that the potential for long-distance dispersal may be substantially greater than previously thought. Indeed, detection of increasing mortality rates late in life suggests that energy reserves do not reach critically low levels until approximately 100 d after spawning. Conversely, increased mortality rates early in life decrease the likelihood that larvae transported away from their natal reef will survive to reach nearby reefs, and thus decrease connectivity at regional scales. These results show how variation in larval survivorship with age may help to explain the seeming paradox of high genetic structure at metapopulation scales, coupled with the maintenance of extensive geographic ranges observed in many coral species.

  19. Regional Regression Equations to Estimate Flow-Duration Statistics at Ungaged Stream Sites in Connecticut

    USGS Publications Warehouse

    Ahearn, Elizabeth A.

    2010-01-01

    Multiple linear regression equations for determining flow-duration statistics were developed to estimate select flow exceedances ranging from 25- to 99-percent for six 'bioperiods' in Connecticut: Salmonid Spawning (November), Overwinter (December-February), Habitat Forming (March-April), Clupeid Spawning (May), Resident Spawning (June), and Rearing and Growth (July-October). Regression equations also were developed to estimate the 25- and 99-percent flow exceedances without reference to a bioperiod. In total, 32 equations were developed. The predictive equations were based on regression analyses relating flow statistics from streamgages to GIS-determined basin and climatic characteristics for the drainage areas of those streamgages. Thirty-nine streamgages (and an additional 6 short-term streamgages and 28 partial-record sites for the non-bioperiod 99-percent exceedance) in Connecticut and adjacent areas of neighboring States were used in the regression analysis. Weighted least squares regression analysis was used to determine the predictive equations; weights were assigned based on record length. The basin characteristics used as explanatory variables in the equations are drainage area, percentage of area with coarse-grained stratified deposits, percentage of area with wetlands, mean monthly precipitation (November), mean seasonal precipitation (December, January, and February), and mean basin elevation. Standard errors of estimate of the 32 equations ranged from 10.7 to 156 percent with medians of 19.2 and 55.4 percent to predict the 25- and 99-percent exceedances, respectively. Regression equations to estimate high and median flows (25- to 75-percent exceedances) are better predictors (smaller variability of the residual values around the regression line) than the equations to estimate low flows (greater than 75-percent exceedance). The Habitat Forming (March-April) bioperiod had the smallest standard errors of estimate, ranging from 10.7 to 20.9 percent. In contrast, the Rearing and Growth (July-October) bioperiod had the largest standard errors, ranging from 30.9 to 156 percent. The adjusted coefficient of determination of the equations ranged from 77.5 to 99.4 percent with medians of 98.5 and 90.6 percent to predict the 25- and 99-percent exceedances, respectively. Descriptive information on the streamgages used in the regression, measured basin and climatic characteristics, and estimated flow-duration statistics are provided in this report. Flow-duration statistics and the 32 regression equations for estimating flow-duration statistics in Connecticut are stored on the U.S. Geological Survey World Wide Web application "StreamStats" (http://water.usgs.gov/osw/streamstats/index.html). The regression equations developed in this report can be used to produce unbiased estimates of select flow exceedances statewide.
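    As an editorial illustration of the weighted least squares step described in this record, the sketch below fits a log-transformed flow statistic to basin characteristics with record length as the weight; the data and variable names are synthetic assumptions, not the Connecticut dataset.

```python
# Hedged sketch of a weighted least squares predictive equation: a log-transformed
# flow-duration statistic regressed on basin characteristics, with streamgage
# record length used as the regression weight. All data here are synthetic.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 39                                    # number of streamgages (illustrative)
drainage_area = rng.uniform(5, 500, n)    # mi^2
pct_stratified = rng.uniform(0, 40, n)    # percent coarse-grained stratified deposits
record_length = rng.integers(10, 80, n)   # years of record, used as weights

# synthetic "true" relation plus noise, on a log10 scale
log_q25 = 0.9 * np.log10(drainage_area) + 0.01 * pct_stratified + rng.normal(0, 0.1, n)

X = sm.add_constant(np.column_stack([np.log10(drainage_area), pct_stratified]))
fit = sm.WLS(log_q25, X, weights=record_length).fit()
print(fit.params)   # intercept and coefficients of the fitted predictive equation
print(fit.bse)      # standard errors of the coefficients
```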

  20. The Impact of Exceeding TANF Time Limits on the Access to Healthcare of Low-Income Mothers.

    PubMed

    Narain, Kimberly; Ettner, Susan

    2017-01-01

    The objective of this article is to estimate the relationship of exceeding Temporary Assistance for Needy Families (TANF) time limits with health insurance, healthcare, and health outcomes. The authors use Heckman selection models that exploit variability in state time-limit duration and timing of policy implementation as identifying exclusion restrictions to adjust the effect estimates of exceeding time limits for possible correlations between the probability of exceeding time limits and unobservable factors influencing the outcomes. The authors find that exceeding time limits decreases the predicted probability of Medicaid coverage, increases the predicted probability of being uninsured, and decreases the predicted probability of annual medical provider contact.

  1. Flood Map for the Winooski River in Waterbury, Vermont, 2014

    USGS Publications Warehouse

    Olson, Scott A.

    2015-01-01

    High-water marks from Tropical Storm Irene were available for seven locations along the study reach. The high-water marks were used to estimate water-surface profiles and discharges resulting from Tropical Storm Irene throughout the study reach. From a comparison of the estimated water-surface profile for Tropical Storm Irene with the water-surface profiles for the 1- and 0.2-percent annual exceedance probability (AEP) floods, it was determined that the high-water elevations resulting from Tropical Storm Irene exceeded the estimated 1-percent AEP flood throughout the Winooski River study reach but did not exceed the estimated 0.2-percent AEP flood at any location within the study reach.

  2. Dynamic deformations and the M6.7, Northridge, California earthquake

    USGS Publications Warehouse

    Gomberg, J.

    1997-01-01

    A method of estimating the complete time-varying dynamic deformation field from commonly available three-component single-station seismic data has been developed and applied to study the relationship between dynamic deformation and ground failures and structural damage using observations from the 1994 Northridge, California earthquake. Estimates from throughout the epicentral region indicate that the horizontal strains exceed the vertical ones by more than a factor of two. The largest strains (exceeding ~100 μstrain) correlate with regions of greatest ground failure. There is a poor correlation between structural damage and peak strain amplitudes. The smallest strains, ~35 μstrain, are estimated in regions of no damage or ground failure. Estimates in the two regions with most severe and well mapped permanent deformation, Potrero Canyon and the Granada-Mission Hills regions, exhibit the largest strains; peak horizontal strain estimates in these regions equal ~139 and ~229 μstrain, respectively. Of note, the dynamic principal strain axes have strikes consistent with the permanent failure features suggesting that, while gravity, sub-surface materials, and hydrologic conditions undoubtedly played fundamental roles in determining where and what types of failures occurred, the dynamic deformation field may have been favorably sized and oriented to initiate failure processes. These results support other studies that conclude that the permanent deformation resulted from ground shaking, rather than from static strains associated with primary or secondary faulting. They also suggest that such an analysis, either using data or theoretical calculations, may enable observations of paleo-ground failure to be used as quantitative constraints on the size and geometry of previous earthquakes. © 1997 Elsevier Science Limited.
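    As a rough editorial aside (not the specific method of this record), dynamic strain is often approximated to first order as ground particle velocity divided by an apparent phase velocity; the numbers below are assumed values chosen only to show the order of magnitude.

```python
# Hedged first-order approximation, not the paper's method: dynamic strain ~ v / c,
# where v is peak ground particle velocity and c an assumed apparent phase velocity.
peak_velocity_m_s = 0.5       # peak horizontal ground velocity, m/s (illustrative)
phase_velocity_m_s = 3000.0   # assumed apparent shear-wave phase velocity, m/s

strain = peak_velocity_m_s / phase_velocity_m_s
print(f"approximate peak dynamic strain: {strain * 1e6:.0f} microstrain")
# ~167 microstrain here, the same order of magnitude as the values in the abstract.
```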

  3. 26 CFR 1.6016-1 - Declarations of estimated income tax by corporations.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... be expected to exceed the sum of $100,000 plus the amount of any estimated credits allowable under...,000) does not exceed the $100,000 plus the allowable credits totaling $7,000. [T.D. 6768, 29 FR 14921...), or subchapter L, chapter 1 of the Code, over the sum of $100,000 and any estimated credits under...

  4. Radiation dose to the global flying population.

    PubMed

    Alvarez, Luis E; Eastham, Sebastian D; Barrett, Steven R H

    2016-03-01

    Civil airliner passengers and crew are exposed to elevated levels of radiation relative to being at sea level. Previous studies have assessed the radiation dose received in particular cases or for cohort studies. Here we present the first estimate of the total radiation dose received by the worldwide civilian flying population. We simulated flights globally from 2000 to 2013 using schedule data, applying a radiation propagation code to estimate the dose associated with each flight. Passengers flying in Europe and North America exceed the International Commission on Radiological Protection annual dose limits at an average of 510 or 420 flight hours per year, respectively. However, this falls to 160 or 120 h on specific routes under maximum exposure conditions.

  5. Methods for estimating the magnitude and frequency of floods for urban and small, rural streams in Georgia, South Carolina, and North Carolina, 2011

    USGS Publications Warehouse

    Feaster, Toby D.; Gotvald, Anthony J.; Weaver, J. Curtis

    2014-01-01

    Reliable estimates of the magnitude and frequency of floods are essential for the design of transportation and water-conveyance structures, flood-insurance studies, and flood-plain management. Such estimates are particularly important in densely populated urban areas. In order to increase the number of streamflow-gaging stations (streamgages) available for analysis, expand the geographical coverage that would allow for application of regional regression equations across State boundaries, and build on a previous flood-frequency investigation of rural U.S. Geological Survey streamgages in the Southeast United States, a multistate approach was used to update methods for determining the magnitude and frequency of floods in urban and small, rural streams that are not substantially affected by regulation or tidal fluctuations in Georgia, South Carolina, and North Carolina. The at-site flood-frequency analysis of annual peak-flow data for urban and small, rural streams (through September 30, 2011) included 116 urban streamgages and 32 small, rural streamgages, defined in this report as basins draining less than 1 square mile. The regional regression analysis included annual peak-flow data from an additional 338 rural streamgages previously included in U.S. Geological Survey flood-frequency reports and 2 additional rural streamgages in North Carolina that were not included in the previous Southeast rural flood-frequency investigation for a total of 488 streamgages included in the urban and small, rural regression analysis. The at-site flood-frequency analyses for the urban and small, rural streamgages included the expected moments algorithm, which is a modification of the Bulletin 17B log-Pearson type III method for fitting the statistical distribution to the logarithms of the annual peak flows. Where applicable, the flood-frequency analysis also included low-outlier and historic information. Additionally, the application of a generalized Grubbs-Beck test allowed for the detection of multiple potentially influential low outliers. Streamgage basin characteristics were determined using geographical information system techniques. Initial ordinary least squares regression simulations reduced the number of basin characteristics on the basis of such factors as statistical significance, coefficient of determination, Mallows' Cp statistic, and ease of measurement of the explanatory variable. Application of generalized least squares regression techniques produced final predictive (regression) equations for estimating the 50-, 20-, 10-, 4-, 2-, 1-, 0.5-, and 0.2-percent annual exceedance probability flows for urban and small, rural ungaged basins for three hydrologic regions (HR1, Piedmont–Ridge and Valley; HR3, Sand Hills; and HR4, Coastal Plain), which previously had been defined from exploratory regression analysis in the Southeast rural flood-frequency investigation. Because of the limited availability of urban streamgages in the Coastal Plain of Georgia, South Carolina, and North Carolina, additional urban streamgages in Florida and New Jersey were used in the regression analysis for this region. Including the urban streamgages in New Jersey allowed for the expansion of the applicability of the predictive equations in the Coastal Plain from 3.5 to 53.5 square miles.
Average standard error of prediction for the predictive equations, which is a measure of the average accuracy of the regression equations when predicting flood estimates for ungaged sites, ranges from 25.0 percent for the 10-percent annual exceedance probability regression equation for the Piedmont–Ridge and Valley region to 73.3 percent for the 0.2-percent annual exceedance probability regression equation for the Sand Hills region.

  6. Methods for estimating annual exceedance-probability discharges and largest recorded floods for unregulated streams in rural Missouri.

    DOT National Transportation Integrated Search

    2014-01-01

    Regression analysis techniques were used to develop a set of equations for rural ungaged stream sites for estimating discharges with 50-, 20-, 10-, 4-, 2-, 1-, 0.5-, and 0.2-percent annual exceedance probabilities, which are equivalent to ann...

  7. Comparisons of estimates of annual exceedance-probability discharges for small drainage basins in Iowa, based on data through water year 2013 : [summary].

    DOT National Transportation Integrated Search

    2015-01-01

    Traditionally, the Iowa DOT has used the Iowa Runoff Chart and single-variable regional regression equations (RREs) from a USGS report (published in 1987) as the primary methods to estimate annual exceedance-probability discharge (AEPD) for small...

  8. Estimating the exceedance probability of rain rate by logistic regression

    NASA Technical Reports Server (NTRS)

    Chiu, Long S.; Kedem, Benjamin

    1990-01-01

    Recent studies have shown that the fraction of an area with rain intensity above a fixed threshold is highly correlated with the area-averaged rain rate. To estimate the fractional rainy area, a logistic regression model, which estimates the conditional probability that rain rate over an area exceeds a fixed threshold given the values of related covariates, is developed. The problem of dependency in the data in the estimation procedure is bypassed by the method of partial likelihood. Analyses of simulated scanning multichannel microwave radiometer and observed electrically scanning microwave radiometer data during the Global Atlantic Tropical Experiment period show that the use of logistic regression in pixel classification is superior to multiple regression in predicting whether rain rate at each pixel exceeds a given threshold, even in the presence of noisy data. The potential of the logistic regression technique in satellite rain rate estimation is discussed.
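    A minimal sketch of the kind of threshold-exceedance logistic regression described in this record is given below; the covariates, threshold, and data are synthetic assumptions, not the radiometer datasets used in the study.

```python
# Hedged sketch: logistic regression for the conditional probability that rain
# rate exceeds a fixed threshold, given related covariates. Data are synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 500
area_avg_rain = rng.gamma(shape=2.0, scale=2.0, size=n)   # area-averaged rain rate, mm/h
brightness_temp = rng.normal(250.0, 15.0, n)              # illustrative radiometer covariate

threshold = 5.0                                            # mm/h
exceeds = (area_avg_rain + rng.normal(0.0, 1.0, n) > threshold).astype(int)

X = np.column_stack([area_avg_rain, brightness_temp])
model = LogisticRegression().fit(X, exceeds)

# estimated probability that rain rate exceeds the threshold for a new observation
p_exceed = model.predict_proba([[6.0, 240.0]])[0, 1]
print(f"estimated exceedance probability: {p_exceed:.2f}")
```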

  9. Maximum swimming speeds of sailfish and three other large marine predatory fish species based on muscle contraction time and stride length: a myth revisited

    PubMed Central

    Svendsen, Morten B. S.; Domenici, Paolo; Marras, Stefano; Krause, Jens; Boswell, Kevin M.; Rodriguez-Pinto, Ivan; Wilson, Alexander D. M.; Kurvers, Ralf H. J. M.; Viblanc, Paul E.; Finger, Jean S.; Steffensen, John F.

    2016-01-01

    Billfishes are considered to be among the fastest swimmers in the oceans. Previous studies have estimated maximum speed of sailfish and black marlin at around 35 m s−1 but theoretical work on cavitation predicts that such extreme speed is unlikely. Here we investigated maximum speed of sailfish, and three other large marine pelagic predatory fish species, by measuring the twitch contraction time of anaerobic swimming muscle. The highest estimated maximum swimming speeds were found in sailfish (8.3±1.4 m s−1), followed by barracuda (6.2±1.0 m s−1), little tunny (5.6±0.2 m s−1) and dorado (4.0±0.9 m s−1); although size-corrected performance was highest in little tunny and lowest in sailfish. Contrary to previously reported estimates, our results suggest that sailfish are incapable of exceeding swimming speeds of 10-15 m s−1, which corresponds to the speed at which cavitation is predicted to occur, with destructive consequences for fin tissues. PMID:27543056

  10. Techniques for estimating monthly mean streamflow at gaged sites and monthly streamflow duration characteristics at ungaged sites in central Nevada

    USGS Publications Warehouse

    Hess, G.W.; Bohman, L.R.

    1996-01-01

    Techniques for estimating monthly mean streamflow at gaged sites and monthly streamflow duration characteristics at ungaged sites in central Nevada were developed using streamflow records at six gaged sites and basin physical and climatic characteristics. Streamflow data at gaged sites were related by regression techniques to concurrent flows at nearby gaging stations so that monthly mean streamflows for periods of missing or no record can be estimated for gaged sites in central Nevada. The standard error of estimate for relations at these sites ranged from 12 to 196 percent. Also, monthly streamflow data for selected percent exceedance levels were used in regression analyses with basin and climatic variables to determine relations for ungaged basins for annual and monthly percent exceedance levels. Analyses indicate that the drainage area and percent of drainage area at altitudes greater than 10,000 feet are the most significant variables. For the annual percent exceedance, the standard error of estimate of the relations for ungaged sites ranged from 51 to 96 percent and standard error of prediction for ungaged sites ranged from 96 to 249 percent. For the monthly percent exceedance values, the standard error of estimate of the relations ranged from 31 to 168 percent, and the standard error of prediction ranged from 115 to 3,124 percent. Reliability and limitations of the estimating methods are described.

  11. From damselflies to pterosaurs: how burst and sustainable flight performance scale with size.

    PubMed

    Marden, J H

    1994-04-01

    Recent empirical data for short-burst lift and power production of flying animals indicate that mass-specific lift and power output scale independently (lift) or slightly positively (power) with increasing size. These results contradict previous theory, as well as simple observation, which argues for degradation of flight performance with increasing size. Here, empirical measures of lift and power during short-burst exertion are combined with empirically based estimates of maximum muscle power output in order to predict how burst and sustainable performance scale with body size. The resulting model is used to estimate performance of the largest extant flying birds and insects, along with the largest flying animals known from fossils. These estimates indicate that burst flight performance capacities of even the largest extinct fliers (estimated mass 250 kg) would allow takeoff from the ground; however, limitations on sustainable power output should constrain capacity for continuous flight at body sizes exceeding 0.003-1.0 kg, depending on relative wing length and flight muscle mass.

  12. Estimates of present and future flood risk in the conterminous United States

    NASA Astrophysics Data System (ADS)

    Wing, Oliver E. J.; Bates, Paul D.; Smith, Andrew M.; Sampson, Christopher C.; Johnson, Kris A.; Fargione, Joseph; Morefield, Philip

    2018-03-01

    Past attempts to estimate rainfall-driven flood risk across the US either have incomplete coverage, coarse resolution or use overly simplified models of the flooding process. In this paper, we use a new 30 m resolution model of the entire conterminous US with a 2D representation of flood physics to produce estimates of flood hazard, which match to within 90% accuracy the skill of local models built with detailed data. These flood depths are combined with exposure datasets of commensurate resolution to calculate current and future flood risk. Our data show that the total US population exposed to serious flooding is 2.6-3.1 times higher than previous estimates, and that nearly 41 million Americans live within the 1% annual exceedance probability floodplain (compared to only 13 million when calculated using FEMA flood maps). We find that population and GDP growth alone are expected to lead to significant future increases in exposure, and this change may be exacerbated in the future by climate change.
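    The exposure calculation summarized in this record amounts to overlaying a flood-depth grid with a population grid; a toy version under assumed synthetic grids is sketched below.

```python
# Hedged sketch of a flood-exposure overlay: sum the population in cells where the
# flood-depth layer for a chosen annual exceedance probability is nonzero.
# Both grids are tiny synthetic arrays, not the study's 30 m national layers.
import numpy as np

rng = np.random.default_rng(5)
flood_depth_m = rng.exponential(scale=0.3, size=(100, 100))   # 1%-AEP depths, m
flood_depth_m[rng.random((100, 100)) > 0.2] = 0.0             # most cells stay dry
population = rng.poisson(lam=3.0, size=(100, 100))            # people per grid cell

exposed = population[flood_depth_m > 0.0].sum()
print(f"population within the synthetic 1% AEP floodplain: {exposed}")
```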

  13. Method of estimating flood-frequency parameters for streams in Idaho

    USGS Publications Warehouse

    Kjelstrom, L.C.; Moffatt, R.L.

    1981-01-01

    Skew coefficients for the log-Pearson type III distribution are generalized on the basis of some similarity of floods in the Snake River basin and other parts of Idaho. Generalized skew coefficients aid in shaping flood-frequency curves because skew coefficients computed from gaging stations having relatively short periods of peak flow records can be unreliable. Generalized skew coefficients can be obtained for a gaging station from one of three maps in this report. The map to be used depends on whether (1) snowmelt floods are dominant (generally when more than 20 percent of the drainage area is above 6,000 feet altitude), (2) rainstorm floods are dominant (generally when the mean altitude is less than 3,000 feet), or (3) either snowmelt or rainstorm floods can be the annual maximum discharge. For the latter case, frequency curves constructed using separate arrays of each type of runoff can be combined into one curve, which, for some stations, is significantly different from the frequency curve constructed using only annual maximum discharges. For 269 gaging stations, flood-frequency curves that include the generalized skew coefficients in the computation of the log-Pearson type III equation tend to fit the data better than previous analyses. Frequency curves for ungaged sites can be derived by estimating three statistics of the log-Pearson type III distribution. The mean and standard deviation of logarithms of annual maximum discharges are estimated by regression equations that use basin characteristics as independent variables. Skew coefficient estimates are the generalized skews. The log-Pearson type III equation is then applied with the three estimated statistics to compute the discharge at selected exceedance probabilities. Standard errors at the 2-percent exceedance probability range from 41 to 90 percent. (USGS)
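    The final computational step described in this record (applying the log-Pearson type III equation with an estimated mean, standard deviation, and skew of the log discharges) can be sketched as follows; the statistic values are illustrative, not Idaho estimates.

```python
# Hedged sketch of the log-Pearson Type III quantile computation: a frequency
# factor for the chosen skew and annual exceedance probability (AEP) is applied
# to the mean and standard deviation of the log discharges. Values are illustrative.
from scipy.stats import pearson3

mean_log = 3.2     # mean of log10(annual peak discharge), illustrative
std_log = 0.35     # standard deviation of the log discharges, illustrative
gen_skew = -0.1    # generalized skew coefficient, illustrative

aep = 0.02                                # 2-percent AEP (50-year flood)
k = pearson3(gen_skew).ppf(1.0 - aep)     # standardized Pearson III frequency factor
discharge = 10 ** (mean_log + std_log * k)
print(f"illustrative {aep:.0%}-AEP discharge: {discharge:,.0f} ft^3/s")
```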

  14. Estimation of peak discharge quantiles for selected annual exceedance probabilities in Northeastern Illinois.

    DOT National Transportation Integrated Search

    2016-06-01

    This report provides two sets of equations for estimating peak discharge quantiles at annual exceedance probabilities (AEPs) of 0.50, 0.20, 0.10, 0.04, 0.02, 0.01, 0.005, and 0.002 (recurrence intervals of 2, 5, 10, 25, 50, 100, 200, and 500 years,...

  15. Development of a methodology for probable maximum precipitation estimation over the American River watershed using the WRF model

    NASA Astrophysics Data System (ADS)

    Tan, Elcin

    A new physically-based methodology for probable maximum precipitation (PMP) estimation is developed over the American River Watershed (ARW) using the Weather Research and Forecast (WRF-ARW) model. A persistent moisture flux convergence pattern, called Pineapple Express, is analyzed for 42 historical extreme precipitation events, and it is found that Pineapple Express causes extreme precipitation over the basin of interest. An average correlation between moisture flux convergence and maximum precipitation is estimated as 0.71 for 42 events. The performance of the WRF model is verified for precipitation by means of calibration and independent validation of the model. The calibration procedure is performed only for the first-ranked flood event, the 1997 case, whereas the WRF model is validated for 42 historical cases. Three nested model domains are set up with horizontal resolutions of 27 km, 9 km, and 3 km over the basin of interest. As a result of Chi-square goodness-of-fit tests, the hypothesis that "the WRF model can be used in the determination of PMP over the ARW for both areal average and point estimates" is accepted at the 5% level of significance. The sensitivity of precipitation to model physics options is determined using 28 combinations of microphysics, atmospheric boundary layer, and cumulus parameterization schemes. It is concluded that the best triplet option is Thompson microphysics, Grell 3D ensemble cumulus, and YSU boundary layer (TGY), based on 42 historical cases, and this TGY triplet is used for all analyses of this research. Four techniques are proposed to evaluate physically possible maximum precipitation using the WRF: 1. Perturbations of atmospheric conditions; 2. Shift in atmospheric conditions; 3. Replacement of atmospheric conditions among historical events; and 4. Thermodynamically possible worst-case scenario creation. Moreover, the effect of climate change on precipitation is discussed, with emphasis on temperature increase, in order to determine the physically possible upper limits of precipitation under climate change. The simulation results indicate that the meridional shift in atmospheric conditions is the optimum method to determine maximum precipitation in consideration of cost and efficiency. Finally, exceedance probability analyses of the model results of 42 historical extreme precipitation events demonstrate that the 72-hr basin-averaged probable maximum precipitation is 21.72 inches for the exceedance probability of 0.5 percent. On the other hand, the current operational PMP estimation for the American River Watershed is 28.57 inches as published in the hydrometeorological report no. 59 and a previous PMP value was 31.48 inches as published in the hydrometeorological report no. 36. According to the exceedance probability analyses of this proposed method, the exceedance probabilities of these two estimations correspond to 0.036 percent and 0.011 percent, respectively.

  16. Global rates of marine sulfate reduction and implications for sub-sea-floor metabolic activities

    NASA Astrophysics Data System (ADS)

    Bowles, Marshall W.; Mogollón, José M.; Kasten, Sabine; Zabel, Matthias; Hinrichs, Kai-Uwe

    2014-05-01

    Sulfate reduction is a globally important redox process in marine sediments, yet global rates are poorly quantified. We developed an artificial neural network trained with 199 sulfate profiles, constrained with geomorphological and geochemical maps to estimate global sulfate-reduction rate distributions. Globally, 11.3 teramoles of sulfate are reduced yearly (~15% of previous estimates), accounting for the oxidation of 12 to 29% of the organic carbon flux to the sea floor. Combined with global cell distributions in marine sediments, these results indicate a strong contrast in sub-sea-floor prokaryote habitats: In continental margins, global cell numbers in sulfate-depleted sediment exceed those in the overlying sulfate-bearing sediment by one order of magnitude, whereas in the abyss, most life occurs in oxic and/or sulfate-reducing sediments.

  17. Multiple regression and inverse moments improve the characterization of the spatial scaling behavior of daily streamflows in the Southeast United States

    USGS Publications Warehouse

    Farmer, William H.; Over, Thomas M.; Vogel, Richard M.

    2015-01-01

    Understanding the spatial structure of daily streamflow is essential for managing freshwater resources, especially in poorly-gaged regions. Spatial scaling assumptions are common in flood frequency prediction (e.g., index-flood method) and the prediction of continuous streamflow at ungaged sites (e.g. drainage-area ratio), with simple scaling by drainage area being the most common assumption. In this study, scaling analyses of daily streamflow from 173 streamgages in the southeastern US resulted in three important findings. First, the use of only positive integer moment orders, as has been done in most previous studies, captures only the probabilistic and spatial scaling behavior of flows above an exceedance probability near the median; negative moment orders (inverse moments) are needed for lower streamflows. Second, assessing scaling by using drainage area alone is shown to result in a high degree of omitted-variable bias, masking the true spatial scaling behavior. Multiple regression is shown to mitigate this bias, controlling for regional heterogeneity of basin attributes, especially those correlated with drainage area. Previous univariate scaling analyses have neglected the scaling of low-flow events and may have produced biased estimates of the spatial scaling exponent. Third, the multiple regression results show that mean flows scale with an exponent of one, low flows scale with spatial scaling exponents greater than one, and high flows scale with exponents less than one. The relationship between scaling exponents and exceedance probabilities may be a fundamental signature of regional streamflow. This signature may improve our understanding of the physical processes generating streamflow at different exceedance probabilities. 
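    The moment-scaling idea in this record can be illustrated with a small synthetic experiment: sample moments of both positive and negative (inverse) order are computed at each site and their logarithms regressed on log drainage area. Everything below is synthetic and uses drainage area alone, so it omits the multiple-regression refinement the study argues for.

```python
# Hedged sketch of order-dependent spatial scaling: for each moment order q,
# regress log(sample moment of daily flows) on log(drainage area); dividing the
# slope by q gives a scaling exponent for that order. All data are synthetic.
import numpy as np

rng = np.random.default_rng(2)
n_sites = 50
area = rng.uniform(10, 5000, n_sites)                 # drainage area, mi^2

scaling_exponents = {}
for q in (-1.0, -0.5, 0.5, 1.0, 2.0):
    moments = []
    for a in area:
        # synthetic daily flows: lognormal with a scale proportional to area
        flows = a * rng.lognormal(mean=0.0, sigma=1.0, size=3650)
        moments.append(np.mean(flows ** q))           # sample moment of order q
    slope = np.polyfit(np.log(area), np.log(moments), 1)[0]
    scaling_exponents[q] = slope / q
print(scaling_exponents)   # near 1.0 for every order in this simple-scaling example
```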

  18. 49 CFR 236.909 - Minimum performance standard.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... product will not result in risk that exceeds the previous condition. The railroad shall determine, prior... product will not result in risk that exceeds the previous condition. In evaluating the sufficiency of the... factors pertinent to evaluation of risk assessments, listed in § 236.913(g)(2). (c) What is the scope of a...

  19. 49 CFR 236.909 - Minimum performance standard.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... product will not result in risk that exceeds the previous condition. The railroad shall determine, prior... product will not result in risk that exceeds the previous condition. In evaluating the sufficiency of the... factors pertinent to evaluation of risk assessments, listed in § 236.913(g)(2). (c) What is the scope of a...

  20. 49 CFR 236.909 - Minimum performance standard.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... product will not result in risk that exceeds the previous condition. The railroad shall determine, prior... product will not result in risk that exceeds the previous condition. In evaluating the sufficiency of the... factors pertinent to evaluation of risk assessments, listed in § 236.913(g)(2). (c) What is the scope of a...

  1. 49 CFR 236.909 - Minimum performance standard.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... product will not result in risk that exceeds the previous condition. The railroad shall determine, prior... product will not result in risk that exceeds the previous condition. In evaluating the sufficiency of the... factors pertinent to evaluation of risk assessments, listed in § 236.913(g)(2). (c) What is the scope of a...

  2. A screening-level modeling approach to estimate nitrogen loading and standard exceedance risk, with application to the Tippecanoe River watershed, Indiana

    EPA Science Inventory

    This paper presents a screening-level modeling approach that can be used to rapidly estimate nutrient loading and assess numerical nutrient standard exceedance risk of surface waters leading to potential classification as impaired for designated use. It can also be used to explor...

  3. Squid rocket science: How squid launch into air

    NASA Astrophysics Data System (ADS)

    O'Dor, Ron; Stewart, Julia; Gilly, William; Payne, John; Borges, Teresa Cerveira; Thys, Tierney

    2013-10-01

    Squid not only swim, they can also fly like rockets, accelerating through the air by forcefully expelling water out of their mantles. Using available lab and field data from four squid species, Sthenoteuthis pteropus, Dosidicus gigas, Illex illecebrosus and Loligo opalescens, including sixteen remarkable photographs of flying S. pteropus off the coast of Brazil, we compared the cost of transport in both water and air and discussed methods of maximizing power output through funnel and mantle constriction. Additionally we found that fin flaps develop at approximately the same size range as flight behaviors in these squids, consistent with previous hypotheses that flaps could function as ailerons whilst aloft. S. pteropus acceleration in air (265 body lengths [BL]/s²; 24.5 m/s²) was found to exceed that in water (79 BL/s²) three-fold based on estimated mantle length from still photos. Velocities in air (37 BL/s; 3.4 m/s) exceed those in water (11 BL/s) almost four-fold. Given the obvious advantages of this extreme mode of transport, squid flight may in fact be more common than previously thought and potentially employed to reduce migration cost in addition to predation avoidance. Clearly squid flight, the role of fin flaps and funnel, and the energetic benefits are worthy of extended investigation.

  4. Acceleration of high resolution temperature based optimization for hyperthermia treatment planning using element grouping.

    PubMed

    Kok, H P; de Greef, M; Bel, A; Crezee, J

    2009-08-01

    In regional hyperthermia, optimization is useful to obtain adequate applicator settings. A speed-up of the previously published method for high resolution temperature based optimization is proposed. Element grouping as described in the literature uses selected voxel sets instead of single voxels to reduce computation time. Elements which achieve their maximum heating potential for approximately the same phase/amplitude setting are grouped. To form groups, eigenvalues and eigenvectors of precomputed temperature matrices are used. At high resolution, temperature matrices are unknown and temperatures are estimated using low resolution (1 cm) computations and the high resolution (2 mm) temperature distribution computed for low resolution optimized settings using zooming. This technique can be applied to estimate an upper bound for high resolution eigenvalues. The heating potential of elements was estimated using these upper bounds. Correlations between elements were estimated with low resolution eigenvalues and eigenvectors, since high resolution eigenvectors remain unknown. Four different grouping criteria were applied. Constraints were set to the average group temperatures. Element grouping was applied for five patients and optimal settings for the AMC-8 system were determined. Without element grouping, the average computation times for five and ten runs were 7.1 and 14.4 h, respectively. Strict grouping criteria were necessary to prevent an unacceptable exceeding of the normal tissue constraints (up to approximately 2 degrees C), caused by constraining average instead of maximum temperatures. When strict criteria were applied, speed-up factors of 1.8-2.1 and 2.6-3.5 were achieved for five and ten runs, respectively, depending on the grouping criteria. When many runs are performed, the speed-up factor will converge to 4.3-8.5, which is the average reduction factor of the constraints and depends on the grouping criteria. Tumor temperatures were comparable. Maximum exceeding of the constraint in a hot spot was 0.24-0.34 degree C; average maximum exceeding over all five patients was 0.09-0.21 degree C, which is acceptable. High resolution temperature based optimization using element grouping can achieve a speed-up factor of 4-8, without large deviations from the conventional method.

  5. Floods of May 1978 in southeastern Montana and northeastern Wyoming

    USGS Publications Warehouse

    Parrett, Charles; Carlson, D.D.; Craig, G.S.; Chin, E.H.

    1984-01-01

    Heavy rain and some snow fell on previously saturated ground over southeastern Montana and northeastern Wyoming during May 16-19, 1978. The maximum amount of 7.60 inches within a 72-hour period observed at Lame Deer, Montana, set a record for the month of May in that region. Heavy flooding occurred in the drainages of the Yellowstone River and its tributaries as well as the Belle Fourche, Cheyenne, and North Platte Rivers. The previous maximum flood of record was exceeded at 48 gaged sites, and the 1-percent chance flood was equaled or exceeded at 24 sites. Flood damage was extensive, exceeding $33 million. Nineteen counties in the two States were declared major disaster areas. Mean daily suspended-sediment discharges exceeded previously recorded maximum mean daily values at four sites on the Powder River. The maximum daily suspended-sediment discharge of 2,810,000 tons per day occurred on May 20 at the site on the Powder River near Arvada, Wyoming. (USGS)

  6. Methods for estimating annual exceedance-probability discharges and largest recorded floods for unregulated streams in rural Missouri

    USGS Publications Warehouse

    Southard, Rodney E.; Veilleux, Andrea G.

    2014-01-01

    Regression analysis techniques were used to develop a set of equations for rural ungaged stream sites for estimating discharges with 50-, 20-, 10-, 4-, 2-, 1-, 0.5-, and 0.2-percent annual exceedance probabilities, which are equivalent to annual flood-frequency recurrence intervals of 2, 5, 10, 25, 50, 100, 200, and 500 years, respectively. Basin and climatic characteristics were computed using geographic information software and digital geospatial data. A total of 35 characteristics were computed for use in preliminary statewide and regional regression analyses. Annual exceedance-probability discharge estimates were computed for 278 streamgages by using the expected moments algorithm to fit a log-Pearson Type III distribution to the logarithms of annual peak discharges for each streamgage using annual peak-discharge data from water year 1844 to 2012. Low-outlier and historic information were incorporated into the annual exceedance-probability analyses, and a generalized multiple Grubbs-Beck test was used to detect potentially influential low floods. Annual peak flows less than a minimum recordable discharge at a streamgage were incorporated into the at-site station analyses. An updated regional skew coefficient was determined for the State of Missouri using Bayesian weighted least-squares/generalized least squares regression analyses. At-site skew estimates for 108 long-term streamgages with 30 or more years of record and the 35 basin characteristics defined for this study were used to estimate the regional variability in skew. However, a constant generalized-skew value of -0.30 and a mean square error of 0.14 were determined in this study. Previous flood studies indicated that the distinct physical features of the three physiographic provinces have a pronounced effect on the magnitude of flood peaks. Trends in the magnitudes of the residuals from preliminary statewide regression analyses from previous studies confirmed that regional analyses in this study were similar and related to three primary physiographic provinces. The final regional regression analyses resulted in three sets of equations. For Regions 1 and 2, the basin characteristics of drainage area and basin shape factor were statistically significant. For Region 3, because of the small amount of data from streamgages, only drainage area was statistically significant. Average standard errors of prediction ranged from 28.7 to 38.4 percent for flood region 1, 24.1 to 43.5 percent for flood region 2, and 25.8 to 30.5 percent for region 3. The regional regression equations are only applicable to stream sites in Missouri with flows not significantly affected by regulation, channelization, backwater, diversion, or urbanization. Basins with about 5 percent or less impervious area were considered to be rural. Applicability of the equations is limited to basin characteristic values that range from 0.11 to 8,212.38 square miles (mi2) and basin shape from 2.25 to 26.59 for Region 1, 0.17 to 4,008.92 mi2 and basin shape 2.04 to 26.89 for Region 2, and 2.12 to 2,177.58 mi2 for Region 3. Annual peak data from streamgages were used to qualitatively assess the largest floods recorded at streamgages in Missouri since the 1915 water year. Based on existing streamgage data, the 1983 flood event was the largest flood event on record since 1915. The next five largest flood events, in descending order, took place in 1993, 1973, 2008, 1994, and 1915. Since 1915, five of the six largest floods on record occurred from 1973 to 2012.
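    The regional equations summarized in this record take the usual power-law form of USGS regression studies (discharge as a function of drainage area and, in two regions, basin shape factor); the sketch below evaluates an equation of that general form with made-up coefficients, not the published Missouri values.

```python
# Hedged sketch: evaluating a regional regression equation of the general form
# Q_AEP = a * (drainage area)^b * (basin shape factor)^c for an ungaged rural site.
# The coefficients are illustrative placeholders, not the published equations.

def peak_discharge(drainage_area_mi2: float, basin_shape: float,
                   a: float = 120.0, b: float = 0.62, c: float = -0.18) -> float:
    """Return an illustrative annual exceedance probability peak discharge, in ft^3/s."""
    return a * drainage_area_mi2 ** b * basin_shape ** c

# example ungaged site: 250 mi^2 drainage area, basin shape factor of 5
q_example = peak_discharge(250.0, 5.0)
print(f"illustrative peak discharge: {q_example:,.0f} ft^3/s")
```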

  7. 12 CFR 602.14 - Advance payments-notice.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... § 602.14 Advance payments—notice. (a) If fees will be more than $25.00 and you have not told us in... your agreement to pay. (b) If estimated fees exceed $250.00 and you have a history of promptly paying.... (c) If estimated fees exceed $250.00 and you have no history of paying fees, we may require you to...

  8. Risk of cesarean delivery when second-trimester ultrasound dating disagrees with definite last menstrual period.

    PubMed

    Grewal, Jagteshwar; Zhang, Jun; Mikolajczyk, Rafael T; Ford, Jessie

    2010-08-01

    Estimates of gestational age based on early second-trimester ultrasound often differ from those based on the last menstrual period (LMP), even when a woman is certain about her LMP. Discrepancies in these gestational age estimates may be associated with an increased risk of cesarean section and low birth weight. We analyzed 7228 singleton, low-risk, white women from The Routine Antenatal Diagnostic Imaging with Ultrasound trial. The women were recruited at less than 14 weeks of gestation and received ultrasound exams between 15 and 22 weeks. Our results indicate that among nulliparous women, the risk of cesarean section increased from 10% when the ultrasound-based gestational age exceeded the LMP-based estimate by 4 days to 60% when the discrepancy increased to 21 days. Moreover, for each additional day the ultrasound-based estimate exceeded the LMP-based estimate, birth weight was higher by 9.6 g. Our findings indicate that a positive discrepancy (i.e., ultrasound-based estimate exceeds LMP-based estimate) in gestational age is associated with an increased risk of cesarean section. A negative discrepancy, by contrast, may reflect early intrauterine growth restriction and an increased risk of low birth weight. Copyright Thieme Medical Publishers.

  9. Probabilistic assessment of sea level during the last interglacial stage.

    PubMed

    Kopp, Robert E; Simons, Frederik J; Mitrovica, Jerry X; Maloof, Adam C; Oppenheimer, Michael

    2009-12-17

    With polar temperatures approximately 3-5 degrees C warmer than today, the last interglacial stage (approximately 125 kyr ago) serves as a partial analogue for 1-2 degrees C global warming scenarios. Geological records from several sites indicate that local sea levels during the last interglacial were higher than today, but because local sea levels differ from global sea level, accurately reconstructing past global sea level requires an integrated analysis of globally distributed data sets. Here we present an extensive compilation of local sea level indicators and a statistical approach for estimating global sea level, local sea levels, ice sheet volumes and their associated uncertainties. We find a 95% probability that global sea level peaked at least 6.6 m higher than today during the last interglacial; it is likely (67% probability) to have exceeded 8.0 m but is unlikely (33% probability) to have exceeded 9.4 m. When global sea level was close to its current level (≥ -10 m), the millennial average rate of global sea level rise is very likely to have exceeded 5.6 m kyr(-1) but is unlikely to have exceeded 9.2 m kyr(-1). Our analysis extends previous last interglacial sea level studies by integrating literature observations within a probabilistic framework that accounts for the physics of sea level change. The results highlight the long-term vulnerability of ice sheets to even relatively low levels of sustained global warming.

  10. 22 CFR 228.40 - Local procurement.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... transaction does not exceed $5,000. (c) Professional services contracts estimated not to exceed the local... available locally: (1) Utilities, including fuel for heating and cooking, waste disposal and trash...

  11. Methods for estimating peak-flow frequencies at ungaged sites in Montana based on data through water year 2011: Chapter F in Montana StreamStats

    USGS Publications Warehouse

    Sando, Roy; Sando, Steven K.; McCarthy, Peter M.; Dutton, DeAnn M.

    2016-04-05

    The U.S. Geological Survey (USGS), in cooperation with the Montana Department of Natural Resources and Conservation, completed a study to update methods for estimating peak-flow frequencies at ungaged sites in Montana based on peak-flow data at streamflow-gaging stations through water year 2011. The methods allow estimation of peak-flow frequencies (that is, peak-flow magnitudes, in cubic feet per second, associated with annual exceedance probabilities of 66.7, 50, 42.9, 20, 10, 4, 2, 1, 0.5, and 0.2 percent) at ungaged sites. The annual exceedance probabilities correspond to 1.5-, 2-, 2.33-, 5-, 10-, 25-, 50-, 100-, 200-, and 500-year recurrence intervals, respectively.Regional regression analysis is a primary focus of Chapter F of this Scientific Investigations Report, and regression equations for estimating peak-flow frequencies at ungaged sites in eight hydrologic regions in Montana are presented. The regression equations are based on analysis of peak-flow frequencies and basin characteristics at 537 streamflow-gaging stations in or near Montana and were developed using generalized least squares regression or weighted least squares regression.All of the data used in calculating basin characteristics that were included as explanatory variables in the regression equations were developed for and are available through the USGS StreamStats application (http://water.usgs.gov/osw/streamstats/) for Montana. StreamStats is a Web-based geographic information system application that was created by the USGS to provide users with access to an assortment of analytical tools that are useful for water-resource planning and management. The primary purpose of the Montana StreamStats application is to provide estimates of basin characteristics and streamflow characteristics for user-selected ungaged sites on Montana streams. The regional regression equations presented in this report chapter can be conveniently solved using the Montana StreamStats application.Selected results from this study were compared with results of previous studies. For most hydrologic regions, the regression equations reported for this study had lower mean standard errors of prediction (in percent) than the previously reported regression equations for Montana. The equations presented for this study are considered to be an improvement on the previously reported equations primarily because this study (1) included 13 more years of peak-flow data; (2) included 35 more streamflow-gaging stations than previous studies; (3) used a detailed geographic information system (GIS)-based definition of the regulation status of streamflow-gaging stations, which allowed better determination of the unregulated peak-flow records that are appropriate for use in the regional regression analysis; (4) included advancements in GIS and remote-sensing technologies, which allowed more convenient calculation of basin characteristics and investigation of many more candidate basin characteristics; and (5) included advancements in computational and analytical methods, which allowed more thorough and consistent data analysis.This report chapter also presents other methods for estimating peak-flow frequencies at ungaged sites. Two methods for estimating peak-flow frequencies at ungaged sites located on the same streams as streamflow-gaging stations are described. 
Additionally, envelope curves relating maximum recorded annual peak flows to contributing drainage area for each of the eight hydrologic regions in Montana are presented and compared to a national envelope curve. In addition to providing general information on characteristics of large peak flows, the regional envelope curves can be used to assess the reasonableness of peak-flow frequency estimates determined using the regression equations.
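    An envelope curve of the kind mentioned above is simply an upper bound on maximum recorded peak flow as a function of drainage area; a synthetic sketch, fitted in log-log space, is shown below.

```python
# Hedged sketch of a regional envelope curve: maximum recorded annual peak flows
# plotted against contributing drainage area, with a straight line fitted in
# log-log space and shifted upward to bound every observation. Data are synthetic.
import numpy as np

rng = np.random.default_rng(4)
area = rng.uniform(1, 10000, 200)                          # contributing drainage area, mi^2
max_peak = 80 * area ** 0.6 * rng.lognormal(0.0, 0.5, 200) # max recorded peak flow, ft^3/s

log_a, log_q = np.log10(area), np.log10(max_peak)
slope, intercept = np.polyfit(log_a, log_q, 1)
offset = np.max(log_q - (slope * log_a + intercept))       # lift the line to bound all points

def envelope(drainage_area_mi2):
    """Envelope peak flow, in ft^3/s, for a given drainage area."""
    return 10 ** (slope * np.log10(drainage_area_mi2) + intercept + offset)

print(f"envelope peak for a 100 mi^2 basin: {envelope(100.0):,.0f} ft^3/s")
# A frequency estimate plotting far above this curve would warrant a closer look.
```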

  12. The Impact of Alzheimer's Disease on the Chinese Economy.

    PubMed

    Keogh-Brown, Marcus R; Jensen, Henning Tarp; Arrighi, H Michael; Smith, Richard D

    2016-02-01

    Recent increases in life expectancy may greatly expand future Alzheimer's Disease (AD) burdens. China's demographic profile, aging workforce and predicted increasing burden of AD-related care make its economy vulnerable to AD impacts. Previous economic estimates of AD predominantly focus on health system burdens and omit wider whole-economy effects, potentially underestimating the full economic benefit of effective treatment. AD-related prevalence, morbidity and mortality for 2011-2050 were simulated and were, together with associated caregiver time and costs, imposed on a dynamic Computable General Equilibrium model of the Chinese economy. Both economic and non-economic outcomes were analyzed. Simulated Chinese AD prevalence quadrupled during 2011-50 from 6-28 million. The cumulative discounted value of eliminating AD equates to China's 2012 GDP (US$8 trillion), and the annual predicted real value approaches US AD cost-of-illness (COI) estimates, exceeding US$1 trillion by 2050 (2011-prices). Lost labor contributes 62% of macroeconomic impacts. Only 10% derives from informal care, challenging previous COI-estimates of 56%. Health and macroeconomic models predict an unfolding 2011-2050 Chinese AD epidemic with serious macroeconomic consequences. Significant investment in research and development (medical and non-medical) is warranted and international researchers and national authorities should therefore target development of effective AD treatment and prevention strategies.

  13. The Impact of Alzheimer's Disease on the Chinese Economy

    PubMed Central

    Keogh-Brown, Marcus R.; Jensen, Henning Tarp; Arrighi, H. Michael; Smith, Richard D.

    2015-01-01

    Background Recent increases in life expectancy may greatly expand future Alzheimer's Disease (AD) burdens. China's demographic profile, aging workforce and predicted increasing burden of AD-related care make its economy vulnerable to AD impacts. Previous economic estimates of AD predominantly focus on health system burdens and omit wider whole-economy effects, potentially underestimating the full economic benefit of effective treatment. Methods AD-related prevalence, morbidity and mortality for 2011–2050 were simulated and were, together with associated caregiver time and costs, imposed on a dynamic Computable General Equilibrium model of the Chinese economy. Both economic and non-economic outcomes were analyzed. Findings Simulated Chinese AD prevalence quadrupled during 2011–50 from 6–28 million. The cumulative discounted value of eliminating AD equates to China's 2012 GDP (US$8 trillion), and the annual predicted real value approaches US AD cost-of-illness (COI) estimates, exceeding US$1 trillion by 2050 (2011-prices). Lost labor contributes 62% of macroeconomic impacts. Only 10% derives from informal care, challenging previous COI-estimates of 56%. Interpretation Health and macroeconomic models predict an unfolding 2011–2050 Chinese AD epidemic with serious macroeconomic consequences. Significant investment in research and development (medical and non-medical) is warranted and international researchers and national authorities should therefore target development of effective AD treatment and prevention strategies. PMID:26981556

  14. Hydrologic budgets for the Madison and Minnelusa aquifers, Black Hills of South Dakota and Wyoming, water years 1987-96

    USGS Publications Warehouse

    Carter, Janet M.; Driscoll, Daniel G.; Hamade, Ghaith R.; Jarrell, Gregory J.

    2001-01-01

    The Madison and Minnelusa aquifers are two of the most important aquifers in the Black Hills area of South Dakota and Wyoming. Quantification and evaluation of various hydrologic budget components are important for managing and understanding these aquifers. Hydrologic budgets are developed for two scenarios, including an overall budget for the entire study area and more detailed budgets for subareas. Budgets generally are combined for the Madison and Minnelusa aquifers because most budget components cannot be quantified individually for the aquifers. An average hydrologic budget for the entire study area is computed for water years 1987-96, for which change in storage is approximately equal to zero. Annual estimates of budget components are included in detailed budgets for nine subareas, which consider periods of decreasing storage (1987-92) and increasing storage (1993-96). Inflow components include recharge, leakage from adjacent aquifers, and ground-water inflows across the study area boundary. Outflows include springflow (headwater and artesian), well withdrawals, leakage to adjacent aquifers, and ground-water outflow across the study area boundary. Leakage, ground-water inflows, and ground-water outflows are difficult to quantify and cannot be distinguished from one another. Thus, net ground-water flow, which includes these components, is calculated as a residual, using estimates for the other budget components. For the overall budget for water years 1987-96, net ground-water outflow from the study area is computed as 100 ft3/s (cubic feet per second). Estimates of average combined budget components for the Madison and Minnelusa aquifers are: 395 ft3/s for recharge, 78 ft3/s for headwater springflow, 189 ft3/s for artesian springflow, and 28 ft3/s for well withdrawals. Hydrologic budgets also are quantified for nine subareas for periods of decreasing storage (1987-92) and increasing storage (1993-96), with changes in storage assumed equal but opposite. Common subareas are identified for the Madison and Minnelusa aquifers, and previous components from the overall budget generally are distributed over the subareas. Estimates of net ground-water flow for the two aquifers are computed, with net ground-water outflow exceeding inflow for most subareas. Outflows range from 5.9 ft3/s in the area east of Rapid City to 48.6 ft3/s along the southwestern flanks of the Black Hills. Net ground-water inflow exceeds outflow for two subareas where the discharge of large artesian springs exceeds estimated recharge within the subareas. More detailed subarea budgets also are developed, which include estimates of flow components for the individual aquifers at specific flow zones. The net outflows and inflows from the preliminary subarea budgets are used to estimate transmissivity of flow across specific flow zones based on Darcy's Law. For estimation purposes, it is assumed that transmissivities of the Madison and Minnelusa aquifers are equal in any particular flow zone. The resulting transmissivity estimates range from 90 ft2/d to about 7,400 ft2/d, which is similar to values reported by previous investigators. The highest transmissivity estimates are for areas in the northern and southwestern parts of the study area, and the lowest transmissivity estimates are along the eastern study area boundary. Evaluation of subarea budgets provides confidence in budget components developed for the overall budget, especially regarding precipitation recharge, which is particularly difficult to estimate.
Recharge estimates are consistently compatible with other budget components, including artesian springflow, which is a dominant component in many subareas. Calculated storage changes for subareas also are consistent with other budget components, specifically artesian springflow and net ground-water flow, and also are consistent with water-level fluctuations for observation wells. Ground-water budgets and flowpaths are especially complex i
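
    The transmissivity step described above follows from Darcy's law rearranged as T = Q / (W * dh/dl), where Q is the net ground-water flow across a flow zone, W is the zone width, and dh/dl is the hydraulic gradient. The sketch below is a minimal illustration of that arithmetic only; the zone width and gradient are hypothetical values, not figures from the report.

    ```python
    def transmissivity_ft2_per_day(net_flow_cfs, zone_width_ft, hydraulic_gradient):
        """Darcy's law rearranged for transmissivity: T = Q / (W * dh/dl).
        Net flow is converted from ft^3/s to ft^3/d so T is returned in ft^2/d."""
        q_ft3_per_day = net_flow_cfs * 86_400.0
        return q_ft3_per_day / (zone_width_ft * hydraulic_gradient)

    # Hypothetical flow zone: 20 ft^3/s crossing a 15-mile-wide zone under a 0.003 gradient.
    t = transmissivity_ft2_per_day(net_flow_cfs=20.0,
                                   zone_width_ft=15 * 5280.0,
                                   hydraulic_gradient=0.003)
    print(round(t), "ft^2/d")
    ```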

  15. Construction of estimated flow- and load-duration curves for Kentucky using the Water Availability Tool for Environmental Resources (WATER)

    USGS Publications Warehouse

    Unthank, Michael D.; Newson, Jeremy K.; Williamson, Tanja N.; Nelson, Hugh L.

    2012-01-01

    Flow- and load-duration curves were constructed from the model outputs of the U.S. Geological Survey's Water Availability Tool for Environmental Resources (WATER) application for streams in Kentucky. The WATER application was designed to access multiple geospatial datasets to generate more than 60 years of statistically based streamflow data for Kentucky. The WATER application enables a user to graphically select a site on a stream and generate an estimated hydrograph and flow-duration curve for the watershed upstream of that point. The flow-duration curves are constructed by calculating the exceedance probability of the modeled daily streamflows. User-defined water-quality criteria and (or) sampling results can be loaded into the WATER application to construct load-duration curves that are based on the modeled streamflow results. Estimates of flow and streamflow statistics were derived from TOPographically Based Hydrological MODEL (TOPMODEL) simulations in the WATER application. A modified TOPMODEL code, SDP-TOPMODEL (Sinkhole Drainage Process-TOPMODEL), was used to simulate daily mean discharges over the period of record for 5 karst and 5 non-karst watersheds in Kentucky in order to verify the calibrated model. A statistical evaluation of the model's verification simulations shows that calibration criteria, established by previous WATER application reports, were met, thus ensuring the model's ability to provide acceptably accurate estimates of discharge at gaged and ungaged sites throughout Kentucky. Flow-duration curves are constructed in the WATER application by calculating the exceedance probability of the modeled daily flow values. The flow-duration intervals are expressed as a percentage, with zero corresponding to the highest stream discharge in the streamflow record. Load-duration curves are constructed by applying the loading equation (Load = Flow × Water-quality criterion) at each flow interval.
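
    The flow- and load-duration construction described in this record reduces to sorting the modeled daily flows, assigning each an exceedance probability, and multiplying by a water-quality criterion. A minimal Python sketch is given below; it is not the WATER application's code, and the synthetic flows, plotting-position choice, and criterion value are assumptions for illustration.

    ```python
    import numpy as np

    def flow_duration_curve(daily_flows):
        """Exceedance probability (percent) of each modeled daily flow; zero
        corresponds to the highest discharge in the record."""
        flows = np.sort(np.asarray(daily_flows))[::-1]            # descending
        n = flows.size
        exceedance_pct = 100.0 * np.arange(1, n + 1) / (n + 1)    # Weibull plotting position
        return exceedance_pct, flows

    def load_duration_curve(flows, criterion):
        """Apply Load = Flow * water-quality criterion at each flow interval
        (unit-conversion factors are omitted here)."""
        return flows * criterion

    # Synthetic stand-in for roughly 60 years of modeled daily streamflow.
    rng = np.random.default_rng(2)
    q = rng.lognormal(mean=3.0, sigma=1.0, size=60 * 365)
    pct, fdc = flow_duration_curve(q)
    ldc = load_duration_curve(fdc, criterion=0.3)
    ```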

  16. Metal uptake by homegrown vegetables – The relative importance in human health risk assessments at contaminated sites

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Augustsson, Anna L.M., E-mail: anna.augustsson@lnu.se; Uddh-Söderberg, Terese E.; Hogmalm, K. Johan

    Risk assessments of contaminated land often involve the use of generic bioconcentration factors (BCFs), which express contaminant concentrations in edible plant parts as a function of the concentration in soil, in order to assess the risks associated with consumption of homegrown vegetables. This study aimed to quantify variability in BCFs and evaluate the implications of this variability for human exposure assessments, focusing on cadmium (Cd) and lead (Pb) in lettuce and potatoes sampled around 22 contaminated glassworks sites. In addition, risks associated with measured Cd and Pb concentrations in soil and vegetable samples were characterized and a probabilistic exposure assessment was conducted to estimate the likelihood of local residents exceeding tolerable daily intakes. The results show that concentrations in vegetables were only moderately elevated despite high concentrations in soil, and most samples complied with applicable foodstuff legislation. Still, the daily intake of Cd (but not Pb) was assessed to exceed toxicological thresholds for about a fifth of the study population. Bioconcentration factors were found to vary more than indicated by previous studies, but decreasing BCFs with increasing metal concentrations in the soil can explain why the calculated exposure is only moderately affected by the choice of BCF value when generic soil guideline values are exceeded and the risk may be unacceptable. - Highlights: • Uptake of Cd and Pb by lettuce and potatoes increased with soil contamination. • Consumption of homegrown vegetables may lead to a daily Cd intake above TDIs. • The variability in the calculated BCFs is high when compared to previous studies. • Exposure assessments are most sensitive to the choice of BCFs at low contamination.
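
    The probabilistic exposure calculation summarized above can be illustrated with a simple Monte Carlo sketch: a distribution of BCFs converts a soil concentration into vegetable concentrations, and the resulting intakes are compared with a tolerable daily intake. All numbers below (soil concentration, BCF distribution, consumption rate, body weight, TDI) are hypothetical placeholders, not values from the study.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical inputs for Cd: soil concentration (mg/kg), lognormal BCF variability,
    # vegetable consumption (kg/day), body weight (kg), and an illustrative TDI (mg/kg bw/day).
    soil_cd = 2.5
    bcf = rng.lognormal(mean=np.log(0.1), sigma=0.8, size=10_000)   # plant/soil concentration ratio
    consumption = 0.05
    body_weight = 70.0
    tdi = 3.6e-4

    veg_cd = bcf * soil_cd                         # mg/kg in edible plant parts
    intake = veg_cd * consumption / body_weight    # mg/kg bw/day
    print("P(intake > TDI) =", (intake > tdi).mean())
    ```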

  17. Effects of Wolf Mortality on Livestock Depredations

    PubMed Central

    Wielgus, Robert B.; Peebles, Kaylie A.

    2014-01-01

    Predator control and sport hunting are often used to reduce predator populations and livestock depredations, but the efficacy of lethal control has rarely been tested. We assessed the effects of wolf mortality on reducing livestock depredations in Idaho, Montana and Wyoming from 1987–2012 using a 25-year time series. The number of livestock depredated, livestock populations, wolf population estimates, number of breeding pairs, and wolves killed were calculated for the wolf-occupied area of each state for each year. The data were then analyzed using a negative binomial generalized linear model to test for the expected negative relationship between the number of livestock depredated in the current year and the number of wolves controlled the previous year. We found that the number of livestock depredated was positively associated with the number of livestock and the number of breeding pairs. However, we also found that the number of livestock depredated the following year was positively, not negatively, associated with the number of wolves killed the previous year. The odds of livestock depredations increased 4% for sheep and 5–6% for cattle with increased wolf control, up until wolf mortality exceeded the mean intrinsic growth rate of wolves at 25%. Possible reasons for the increased livestock depredations at ≤25% mortality may include compensatory increases in breeding pairs and numbers of wolves following increased mortality. After mortality exceeded 25%, the total number of breeding pairs, wolves, and livestock depredations declined. However, mortality rates exceeding 25% are unsustainable over the long term. Lethal control of individual depredating wolves may sometimes be necessary to stop depredations in the near term, but we recommend that non-lethal alternatives also be considered. PMID:25470821
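
    The analysis described here is a negative binomial generalized linear model of depredation counts against the previous year's wolf removals and covariates. The Python sketch below shows that model form only, with simulated data and hypothetical variable names; it is not the authors' dataset or code.

    ```python
    import numpy as np
    import pandas as pd
    import statsmodels.api as sm

    # Hypothetical state-year panel: depredations in year t, wolves killed in year t-1,
    # plus breeding pairs and livestock numbers in year t.
    rng = np.random.default_rng(1)
    n = 75
    df = pd.DataFrame({
        "wolves_killed_prev": rng.poisson(20, n),
        "breeding_pairs": rng.poisson(30, n),
        "livestock_thousands": rng.uniform(50, 500, n),
    })
    df["depredations"] = rng.poisson(5 + 0.2 * df["wolves_killed_prev"])

    X = sm.add_constant(df[["wolves_killed_prev", "breeding_pairs", "livestock_thousands"]])
    model = sm.GLM(df["depredations"], X, family=sm.families.NegativeBinomial())
    result = model.fit()
    print(result.summary())
    ```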

  18. Estimated dietary exposure to principal food mycotoxins from the first French Total Diet Study.

    PubMed

    Leblanc, J-C; Tard, A; Volatier, J-L; Verger, P

    2005-07-01

    This study reports estimates on dietary exposure from the first French Total Diet Study (FTDS) and compares these estimates with both existing tolerable daily intakes for these toxins and the intakes calculated during previous French studies. To estimate the dietary exposure of the French population to the principal mycotoxins in the French diet (as consumed), 456 composite samples were prepared from 2280 individual samples and analysed for aflatoxins, ochratoxin A, trichothecenes, zearalenone, fumonisins and patulin. Average and high percentile intakes were calculated taking account of different eating patterns for adults, children and vegetarians. The results showed that contaminant levels observed in the foods examined 'as consumed' complied fully with current European legislation. However, particular attention needs to be paid to the exposure of specific population groups, such as children and vegans/macrobiotics, who could be exposed to certain mycotoxins in quantities that exceed the tolerable daily or weekly intake levels. This observation is particularly relevant with respect to ochratoxin A, deoxynivalenol and zearalenone. For these mycotoxins, cereals and cereal products were the main contributors to high exposure.

  19. Kinematics and age of 15 stars-photometric solar analogs

    NASA Astrophysics Data System (ADS)

    Galeev, A. I.; Shimansky, V. V.

    2008-03-01

    The radial and space velocities are inferred for 15 stars that are photometric analogs of the Sun. The space velocity components (U, V, W) of most of these stars lie within the 10-60 km/s interval. The star HD 225239, which in our previous papers we classified as a subgiant, has a space velocity exceeding 100 km/s, and belongs to the thick disk. The inferred fundamental parameters of the atmospheres of solar analogs are combined with published evolutionary tracks to estimate the masses and ages of the stars studied. The kinematics of photometric analogs is compared to the data for a large group of solar-type stars.

  20. Reconnaissance soil geochemistry at the Riverton Uranium Mill Tailings Remedial Action Site, Fremont County, Wyoming

    USGS Publications Warehouse

    Smith, David B.; Sweat, Michael J.

    2012-01-01

    Soil samples were collected and chemically analyzed from the Riverton Uranium Mill Tailings Remedial Action Site, which lies within the Wind River Indian Reservation in Fremont County, Wyoming. Nineteen soil samples from a depth of 0 to 5 centimeters were collected in August 2011 from the site. The samples were sieved to less than 2 millimeters and analyzed for 44 major and trace elements following a near-total multi-acid extraction. Soil pH was also determined. The geochemical data were compared to a background dataset consisting of 160 soil samples previously collected from the same depth throughout the State of Wyoming as part of another ongoing study by the U.S. Geological Survey. Risk from potentially toxic elements in soil from the site to biologic receptors and humans was estimated by comparing the concentration of these elements with soil screening values established by the U.S. Environmental Protection Agency. All 19 samples exceeded the carcinogenic human health screening level for arsenic in residential soils of 0.39 milligrams per kilogram (mg/kg), which represents a one-in-one-million cancer risk (median arsenic concentration in the study area is 2.7 mg/kg). All 19 samples also exceeded the lead and vanadium screening levels for birds. Eighteen of the 19 samples exceeded the manganese screening level for plants, 13 of the 19 samples exceeded the antimony screening level for mammals, and 10 of 19 samples exceeded the zinc screening level for birds. However, these exceedances are also found in soils at most locations in the Wyoming Statewide soil database, and elevated concentrations alone are not necessarily cause for alarm. Uranium and thorium, two other elements of environmental concern, are elevated in soils at the site as compared to the Wyoming dataset, but no human or ecological soil screening levels have been established for these elements.

  1. 12 CFR 327.52 - Annual dividend determination.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... the DIF reserve ratio as of December 31st of 2008 or any later year equals or exceeds 1.35 percent... dividend based upon the reserve ratio of the DIF as of December 31st of the preceding year, and the amount... ratio of the DIF equals or exceeds 1.35 percent of estimated insured deposits and does not exceed 1.50...

  2. Estimating the Exceedance Probability of the Reservoir Inflow Based on the Long-Term Weather Outlooks

    NASA Astrophysics Data System (ADS)

    Huang, Q. Z.; Hsu, S. Y.; Li, M. H.

    2016-12-01

    Long-term streamflow prediction is important not only for estimating the water storage of a reservoir but also for surface-water intakes, which supply water for people's livelihoods, agriculture, and industry. Climatological streamflow forecasts have traditionally been used to calculate streamflow exceedance probability curves for water resource management. In this study, we proposed a stochastic approach to predict the exceedance probability curve of long-term streamflow with the seasonal weather outlook from the Central Weather Bureau (CWB), Taiwan. The approach incorporates a statistical downscaling weather generator and a catchment-scale hydrological model to convert the monthly outlook into daily rainfall and temperature series and to simulate the streamflow based on the outlook information. Moreover, we applied Bayes' theorem to derive a method for calculating the exceedance probability curve of the reservoir inflow based on the seasonal weather outlook and its imperfection. The results show that our approach can give exceedance probability curves that reflect the three-month weather outlook and its accuracy. We also show how improvements in the weather outlook affect the predicted exceedance probability curves of the streamflow. Our approach should be useful for the seasonal planning and management of water resources and their risk assessment.
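
    One way to read the Bayes' theorem step is: the outlook's hit rates give P(true climate class | forecast category), and the predicted exceedance curve is the mixture of class-conditional exceedance curves weighted by those posterior probabilities. The sketch below illustrates this under assumed forecast categories, hit rates, and class-conditional curves; none of the numbers come from the paper.

    ```python
    import numpy as np

    # Hypothetical three-category setup: dry / normal / wet.
    prior = np.array([1 / 3, 1 / 3, 1 / 3])           # climatological P(class)
    # P(forecast category | true class): rows = true class, columns = forecast.
    # An imperfect outlook with a 60% hit rate; illustrative numbers only.
    likelihood = np.array([[0.6, 0.3, 0.1],
                           [0.2, 0.6, 0.2],
                           [0.1, 0.3, 0.6]])

    def posterior_class_probs(forecast_idx):
        """Bayes' theorem: P(true class | forecast category)."""
        unnorm = likelihood[:, forecast_idx] * prior
        return unnorm / unnorm.sum()

    # Hypothetical class-conditional exceedance curves P(Q > q | class) on a flow grid.
    q_grid = np.linspace(0, 200, 101)
    cond_exceed = np.array([np.exp(-q_grid / s) for s in (30.0, 60.0, 90.0)])

    post = posterior_class_probs(forecast_idx=2)       # outlook calls for "wet"
    exceedance_given_outlook = post @ cond_exceed      # mixture over classes
    ```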

  3. Spatially robust estimates of biological nitrogen (N) fixation imply substantial human alteration of the tropical N cycle

    USGS Publications Warehouse

    Sullivan, Benjamin W.; Smith, William K.; Townsend, Alan R.; Nasto, Megan K.; Reed, Sasha C.; Chazdon, Robin L.; Cleveland, Cory C.

    2014-01-01

    Biological nitrogen fixation (BNF) is the largest natural source of exogenous nitrogen (N) to unmanaged ecosystems and also the primary baseline against which anthropogenic changes to the N cycle are measured. Rates of BNF in tropical rainforest are thought to be among the highest on Earth, but they are notoriously difficult to quantify and are based on little empirical data. We adapted a sampling strategy from community ecology to generate spatial estimates of symbiotic and free-living BNF in secondary and primary forest sites that span a typical range of tropical forest legume abundance. Although total BNF was higher in secondary than primary forest, overall rates were roughly five times lower than previous estimates for the tropical forest biome. We found strong correlations between symbiotic BNF and legume abundance, but we also show that spatially free-living BNF often exceeds symbiotic inputs. Our results suggest that BNF in tropical forest has been overestimated, and our data are consistent with a recent top-down estimate of global BNF that implied but did not measure low tropical BNF rates. Finally, comparing tropical BNF within the historical area of tropical rainforest with current anthropogenic N inputs indicates that humans have already at least doubled reactive N inputs to the tropical forest biome, a far greater change than previously thought. Because N inputs are increasing faster in the tropics than anywhere on Earth, both the proportion and the effects of human N enrichment are likely to grow in the future.

  4. Risk of arsenic exposure from drinking water and dietary components: implications for risk management in rural Bengal.

    PubMed

    Halder, Dipti; Bhowmick, Subhamoy; Biswas, Ashis; Chatterjee, Debashis; Nriagu, Jerome; Guha Mazumder, Debendra Nath; Šlejkovec, Zdenka; Jacks, Gunnar; Bhattacharya, Prosun

    2013-01-15

    This study investigates the risk of arsenic (As) exposure to the communities in rural Bengal, even when they have been supplied with As safe drinking water. The estimates of exposure via dietary and drinking water routes show that, when people are consuming water with an As concentration of less than 10 μg/L, the total daily intake of inorganic As (TDI-iAs) exceeds the previous provisional tolerable daily intake (PTDI) value of 2.1 μg per kg BW per day, recommended by the World Health Organization (WHO), in 35% of the cases due to consumption of rice. When the level of As concentration in drinking water is above 10 μg/L, the TDI-iAs exceeds the previous PTDI for all the participants. These results imply that, when rice consumption is a significant contributor to the TDI-iAs, supplying water with an As concentration at the current national drinking water standard for India and Bangladesh would place many people above the safety threshold of PTDI. We also found that the consumption of vegetables in rural Bengal does not pose a significant health threat to the population independently. This study suggests that any effort to mitigate the As exposure of the villagers in Bengal must consider the risk of As exposure from rice consumption together with drinking water.

  5. First-Principles Estimation of Electronic Temperature from X-Ray Thomson Scattering Spectrum of Isochorically Heated Warm Dense Matter

    NASA Astrophysics Data System (ADS)

    Mo, Chongjie; Fu, Zhenguo; Kang, Wei; Zhang, Ping; He, X. T.

    2018-05-01

    Through the perturbation formula of time-dependent density functional theory broadly employed in the calculation of solids, we provide a first-principles calculation of x-ray Thomson scattering spectrum of isochorically heated aluminum foil, as considered in the experiments of Sperling et al. [Phys. Rev. Lett. 115, 115001 (2015), 10.1103/PhysRevLett.115.115001], where ions were constrained near their lattice positions. From the calculated spectra, we find that the electronic temperature cannot exceed 2 eV, much smaller than the previous estimation of 6 eV via the detailed balance relation. Our results may well be an indication of unique electronic properties of warm dense matter, which can be further illustrated by future experiments. The lower electronic temperature predicted partially relieves the concern on the heating of x-ray free electron laser to the sample when used in structure measurement.

  6. Observations and estimates of wave-driven water level extremes at the Marshall Islands

    NASA Astrophysics Data System (ADS)

    Merrifield, M. A.; Becker, J. M.; Ford, M.; Yao, Y.

    2014-10-01

    Wave-driven extreme water levels are examined for coastlines protected by fringing reefs using field observations obtained in the Republic of the Marshall Islands. The 2% exceedance water level near the shoreline due to waves is estimated empirically for the study sites from breaking wave height at the outer reef and by combining separate contributions from setup, sea and swell, and infragravity waves, which are estimated based on breaking wave height and water level over the reef flat. Although each component exhibits a tidal dependence, they sum to yield a 2% exceedance level that does not. A hindcast based on the breaking wave height parameterization is used to assess factors leading to flooding at Roi-Namur caused by an energetic swell event during December 2008. Extreme water levels similar to December 2008 are projected to increase significantly with rising sea level as more wave and tide events combine to exceed inundation threshold levels.

  7. Return period adjustment for runoff coefficients based on analysis in undeveloped Texas watersheds

    USGS Publications Warehouse

    Dhakal, Nirajan; Fang, Xing; Asquith, William H.; Cleveland, Theodore G.; Thompson, David B.

    2013-01-01

    The rational method for peak discharge (Qp) estimation was introduced in the 1880s. The runoff coefficient (C) is a key parameter for the rational method that has an implicit meaning of rate proportionality, and C has been treated as a function of the annual return period by various researchers. Rate-based runoff coefficients as a function of the return period, C(T), were determined for 36 undeveloped watersheds in Texas using peak discharge frequency from previously published regional regression equations and rainfall intensity frequency for return periods T of 2, 5, 10, 25, 50, and 100 years. The C(T) values and return period adjustments C(T)/C(T = 10 years) determined in this study are most applicable to undeveloped watersheds. The return period adjustments determined for the Texas watersheds in this study and those extracted from prior studies of non-Texas data exceed values from well-known literature such as design manuals and textbooks. Most importantly, the return period adjustments exceed values currently recognized in Texas Department of Transportation design guidance when T > 10 years.
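
    Rate-based C(T) values of the kind described here come from inverting the rational method, Qp = C i A, at each return period. The sketch below shows that inversion with hypothetical peak-discharge and rainfall-intensity values; it is not data from the Texas study.

    ```python
    def runoff_coefficient(qp_cfs, intensity_in_per_hr, area_acres):
        """Back-calculate a rate-based runoff coefficient from the rational method,
        Qp = C * i * A, with Qp in cfs, i in in/hr, and A in acres (the unit-conversion
        factor is about 1.008 and is commonly taken as 1)."""
        return qp_cfs / (intensity_in_per_hr * area_acres)

    # Hypothetical 10-year and 100-year values for one undeveloped watershed.
    c10 = runoff_coefficient(qp_cfs=850.0, intensity_in_per_hr=3.2, area_acres=640.0)
    c100 = runoff_coefficient(qp_cfs=1900.0, intensity_in_per_hr=5.1, area_acres=640.0)
    print("C(10) =", round(c10, 3), " C(100) =", round(c100, 3),
          " adjustment C(100)/C(10) =", round(c100 / c10, 2))
    ```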

  8. Comparisons of two moments‐based estimators that utilize historical and paleoflood data for the log Pearson type III distribution

    USGS Publications Warehouse

    England, John F.; Salas, José D.; Jarrett, Robert D.

    2003-01-01

    The expected moments algorithm (EMA) [Cohn et al., 1997] and the Bulletin 17B [Interagency Committee on Water Data, 1982] historical weighting procedure (B17H) for the log Pearson type III distribution are compared by Monte Carlo computer simulation for cases in which historical and/or paleoflood data are available. The relative performance of the estimators was explored for three cases: fixed‐threshold exceedances, a fixed number of large floods, and floods generated from a different parent distribution. EMA can effectively incorporate four types of historical and paleoflood data: floods where the discharge is explicitly known, unknown discharges below a single threshold, floods with unknown discharge that exceed some level, and floods with discharges described in a range. The B17H estimator can utilize only the first two types of historical information. Including historical/paleoflood data in the simulation experiments significantly improved the quantile estimates in terms of mean square error and bias relative to using gage data alone. EMA performed significantly better than B17H in nearly all cases considered. B17H performed as well as EMA for estimating X100 in some limited fixed‐threshold exceedance cases. EMA performed comparatively much better in other fixed‐threshold situations, for the single large flood case, and in cases when estimating extreme floods equal to or greater than X500. B17H did not fully utilize historical information when the historical period exceeded 200 years. Robustness studies using GEV‐simulated data confirmed that EMA performed better than B17H. Overall, EMA is preferred to B17H when historical and paleoflood data are available for flood frequency analysis.
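
    For context, the log Pearson type III fit that both estimators extend can be written in frequency-factor form, log10(Qp) = mean + K(skew, p) * standard deviation of the log-transformed annual peaks. The sketch below implements only that gage-data baseline by ordinary at-site moments; it does not reproduce EMA's censored-data moments or the B17H historical weighting, and the simulated record is hypothetical.

    ```python
    import numpy as np
    from scipy import stats

    def lp3_quantile(annual_peaks_cfs, aep):
        """Log-Pearson Type III quantile from ordinary moments of log10 flows.
        This is the systematic-record baseline; EMA and B17H extend it to censored
        historical/paleoflood information, which is not reproduced here."""
        logq = np.log10(annual_peaks_cfs)
        mean, std = logq.mean(), logq.std(ddof=1)
        skew = stats.skew(logq, bias=False)
        k = stats.pearson3.ppf(1.0 - aep, skew)   # frequency factor for this exceedance prob.
        return 10.0 ** (mean + k * std)

    # Hypothetical 60-year gaged record of annual peaks.
    rng = np.random.default_rng(3)
    peaks = 10.0 ** rng.normal(3.5, 0.25, size=60)
    x100 = lp3_quantile(peaks, aep=0.01)   # "X100": 1% annual exceedance probability flood
    print(round(x100), "cfs")
    ```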

  9. Comparisons of two moments-based estimators that utilize historical and paleoflood data for the log Pearson type III distribution

    NASA Astrophysics Data System (ADS)

    England, John F.; Salas, José D.; Jarrett, Robert D.

    2003-09-01

    The expected moments algorithm (EMA) [Cohn et al., 1997] and the Bulletin 17B [Interagency Committee on Water Data, 1982] historical weighting procedure (B17H) for the log Pearson type III distribution are compared by Monte Carlo computer simulation for cases in which historical and/or paleoflood data are available. The relative performance of the estimators was explored for three cases: fixed-threshold exceedances, a fixed number of large floods, and floods generated from a different parent distribution. EMA can effectively incorporate four types of historical and paleoflood data: floods where the discharge is explicitly known, unknown discharges below a single threshold, floods with unknown discharge that exceed some level, and floods with discharges described in a range. The B17H estimator can utilize only the first two types of historical information. Including historical/paleoflood data in the simulation experiments significantly improved the quantile estimates in terms of mean square error and bias relative to using gage data alone. EMA performed significantly better than B17H in nearly all cases considered. B17H performed as well as EMA for estimating X100 in some limited fixed-threshold exceedance cases. EMA performed comparatively much better in other fixed-threshold situations, for the single large flood case, and in cases when estimating extreme floods equal to or greater than X500. B17H did not fully utilize historical information when the historical period exceeded 200 years. Robustness studies using GEV-simulated data confirmed that EMA performed better than B17H. Overall, EMA is preferred to B17H when historical and paleoflood data are available for flood frequency analysis.

  10. Flood of June 22-24, 2006, in North-Central Ohio, With Emphasis on the Cuyahoga River Near Independence

    USGS Publications Warehouse

    Sherwood, James M.; Ebner, Andrew D.; Koltun, G.F.; Astifan, Brian M.

    2007-01-01

    Heavy rains caused severe flooding on June 22-24, 2006, and damaged approximately 4,580 homes and 48 businesses in Cuyahoga County. Damage estimates in Cuyahoga County for the two days of flooding exceed $47 million; statewide damage estimates exceed $150 million. Six counties (Cuyahoga, Erie, Huron, Lucas, Sandusky, and Stark) in northeast Ohio were declared Federal disaster areas. One death, in Lorain County, was attributed to the flooding. The peak streamflow of 25,400 cubic feet per second and corresponding peak gage height of 23.29 feet were the highest recorded at the U.S. Geological Survey (USGS) streamflow-gaging station Cuyahoga River at Independence (04208000) since the gaging station began operation in 1922, exceeding the previous peak streamflow of 24,800 cubic feet per second that occurred on January 22, 1959. An indirect calculation of the peak streamflow was made by use of a step-backwater model because all roads leading to the gaging station were inundated during the flood and field crews could not reach the station to make a direct measurement. Because of a statistically significant and persistent positive trend in the annual-peak-streamflow time series for the Cuyahoga River at Independence, a method was developed and applied to detrend the annual-peak-streamflow time series prior to the traditional log-Pearson Type III flood-frequency analysis. Based on this analysis, the recurrence interval of the computed peak streamflow was estimated to be slightly less than 100 years. Peak-gage-height data, peak-streamflow data, and recurrence-interval estimates for the June 22-24, 2006, flood are tabulated for the Cuyahoga River at Independence and 10 other USGS gaging stations in north-central Ohio. Because flooding along the Cuyahoga River near Independence and Valley View was particularly severe, a study was done to document the peak water-surface profile during the flood from approximately 2 miles downstream from the USGS streamflow-gaging station at Independence to approximately 2 miles upstream from the gaging station. High-water marks were identified and flagged in the field. Third-order-accuracy surveys were used to determine elevations of the high-water marks, and the data were tabulated and plotted.

  11. Bayesian inference and assessment for rare-event bycatch in marine fisheries: a drift gillnet fishery case study.

    PubMed

    Martin, Summer L; Stohs, Stephen M; Moore, Jeffrey E

    2015-03-01

    Fisheries bycatch is a global threat to marine megafauna. Environmental laws require bycatch assessment for protected species, but this is difficult when bycatch is rare. Low bycatch rates, combined with low observer coverage, may lead to biased, imprecise estimates when using standard ratio estimators. Bayesian model-based approaches incorporate uncertainty, produce less volatile estimates, and enable probabilistic evaluation of estimates relative to management thresholds. Here, we demonstrate a pragmatic decision-making process that uses Bayesian model-based inferences to estimate the probability of exceeding management thresholds for bycatch in fisheries with < 100% observer coverage. Using the California drift gillnet fishery as a case study, we (1) model rates of rare-event bycatch and mortality using Bayesian Markov chain Monte Carlo estimation methods and 20 years of observer data; (2) predict unobserved counts of bycatch and mortality; (3) infer expected annual mortality; (4) determine probabilities of mortality exceeding regulatory thresholds; and (5) classify the fishery as having low, medium, or high bycatch impact using those probabilities. We focused on leatherback sea turtles (Dermochelys coriacea) and humpback whales (Megaptera novaeangliae). Candidate models included Poisson or zero-inflated Poisson likelihood, fishing effort, and a bycatch rate that varied with area, time, or regulatory regime. Regulatory regime had the strongest effect on leatherback bycatch, with the highest levels occurring prior to a regulatory change. Area had the strongest effect on humpback bycatch. Cumulative bycatch estimates for the 20-year period were 104-242 leatherbacks (52-153 deaths) and 6-50 humpbacks (0-21 deaths). The probability of exceeding a regulatory threshold under the U.S. Marine Mammal Protection Act (Potential Biological Removal, PBR) of 0.113 humpback deaths was 0.58, warranting a "medium bycatch impact" classification of the fishery. No PBR thresholds exist for leatherbacks, but the probability of exceeding an anticipated level of two deaths per year, stated as part of a U.S. Endangered Species Act assessment process, was 0.0007. The approach demonstrated here would allow managers to objectively and probabilistically classify fisheries with respect to bycatch impacts on species that have population-relevant mortality reference points, and declare with a stipulated level of certainty that bycatch did or did not exceed estimated upper bounds.
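
    A stripped-down version of the inference chain described above can be shown with a conjugate Gamma-Poisson model: update a prior on the per-set bycatch rate with observer data, predict bycatch in the unobserved effort, and compute the probability that mortality exceeds a management threshold. This is a simplification of the zero-inflated Poisson MCMC models used in the study, and every number below is hypothetical.

    ```python
    import numpy as np

    rng = np.random.default_rng(7)

    # Hypothetical observer data: rare-event bycatch counts and effort totals.
    observed_sets = 4_000
    total_sets = 12_000            # fishery-wide effort (observed + unobserved)
    observed_bycatch = 3

    # Conjugate Gamma(alpha, beta) prior on the per-set bycatch rate;
    # the posterior is Gamma(alpha + counts, beta + observed effort).
    alpha0, beta0 = 0.5, 1.0
    post_rate = rng.gamma(alpha0 + observed_bycatch,
                          1.0 / (beta0 + observed_sets), size=50_000)

    # Predict unobserved bycatch, add the observed count, and compare mortality
    # (with a hypothetical 50% mortality fraction) to a management threshold.
    unobserved = rng.poisson(post_rate * (total_sets - observed_sets))
    total = observed_bycatch + unobserved
    deaths = rng.binomial(total, 0.5)
    threshold = 2.0
    print("P(deaths > threshold) =", (deaths > threshold).mean())
    ```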

  12. Metal uptake by homegrown vegetables - the relative importance in human health risk assessments at contaminated sites.

    PubMed

    Augustsson, Anna L M; Uddh-Söderberg, Terese E; Hogmalm, K Johan; Filipsson, Monika E M

    2015-04-01

    Risk assessments of contaminated land often involve the use of generic bioconcentration factors (BCFs), which express contaminant concentrations in edible plant parts as a function of the concentration in soil, in order to assess the risks associated with consumption of homegrown vegetables. This study aimed to quantify variability in BCFs and evaluate the implications of this variability for human exposure assessments, focusing on cadmium (Cd) and lead (Pb) in lettuce and potatoes sampled around 22 contaminated glassworks sites. In addition, risks associated with measured Cd and Pb concentrations in soil and vegetable samples were characterized and a probabilistic exposure assessment was conducted to estimate the likelihood of local residents exceeding tolerable daily intakes. The results show that concentrations in vegetables were only moderately elevated despite high concentrations in soil, and most samples complied with applicable foodstuff legislation. Still, the daily intake of Cd (but not Pb) was assessed to exceed toxicological thresholds for about a fifth of the study population. Bioconcentration factors were found to vary more than indicated by previous studies, but decreasing BCFs with increasing metal concentrations in the soil can explain why the calculated exposure is only moderately affected by the choice of BCF value when generic soil guideline values are exceeded and the risk may be unacceptable. Copyright © 2015 Elsevier Inc. All rights reserved.

  13. Binary interaction dominates the evolution of massive stars.

    PubMed

    Sana, H; de Mink, S E; de Koter, A; Langer, N; Evans, C J; Gieles, M; Gosset, E; Izzard, R G; Le Bouquin, J-B; Schneider, F R N

    2012-07-27

    The presence of a nearby companion alters the evolution of massive stars in binary systems, leading to phenomena such as stellar mergers, x-ray binaries, and gamma-ray bursts. Unambiguous constraints on the fraction of massive stars affected by binary interaction were lacking. We simultaneously measured all relevant binary characteristics in a sample of Galactic massive O stars and quantified the frequency and nature of binary interactions. More than 70% of all massive stars will exchange mass with a companion, leading to a binary merger in one-third of the cases. These numbers greatly exceed previous estimates and imply that binary interaction dominates the evolution of massive stars, with implications for populations of massive stars and their supernovae.

  14. Epic Flooding in Georgia, 2009

    USGS Publications Warehouse

    Gotvald, Anthony J.; McCallum, Brian E.

    2010-01-01

    Metropolitan Atlanta, September 2009 floods: The epic floods experienced in the Atlanta area in September 2009 were extremely rare. Eighteen streamgages in the Metropolitan Atlanta area had flood magnitudes much greater than the estimated 0.2-percent (500-year) annual exceedance probability. The Federal Emergency Management Agency (FEMA) reported that 23 counties in Georgia were declared disaster areas due to this flood and that 16,981 homes and 3,482 businesses were affected by floodwaters. Ten lives were lost in the flood. The total estimated damages exceed $193 million (H.E. Longenecker, Federal Emergency Management Agency, written commun., November 2009). On Sweetwater Creek near Austell, Ga., just north of Interstate 20, the peak stage was more than 6 feet higher than the estimated peak stage of the 0.2-percent (500-year) flood. Flood magnitudes in Cobb County on Sweetwater, Butler, and Powder Springs Creeks greatly exceeded the estimated 0.2-percent (500-year) floods for these streams. In Douglas County, the Dog River at Ga. Highway 5 near Fairplay had a peak stage nearly 20 feet higher than the estimated peak stage of the 0.2-percent (500-year) flood. On the Chattahoochee River, the U.S. Geological Survey (USGS) gage at Vinings reached the highest level recorded in the past 81 years. Gwinnett, De Kalb, Fulton, and Rockdale Counties also had record flooding. South Georgia, March and April 2009 floods: The March and April 2009 floods in South Georgia were smaller in magnitude than the September floods but still caused significant damage. No lives were lost in this flood. Approximately $60 million in public infrastructure damage occurred to roads, culverts, bridges and a water treatment facility (Joseph T. McKinney, Federal Emergency Management Agency, written commun., July 2009). Flow at the Satilla River near Waycross exceeded the 0.5-percent (200-year) flood. Flows at seven other stations in South Georgia exceeded the 1-percent (100-year) flood.

  15. Modeling to Predict Escherichia coli at Presque Isle Beach 2, City of Erie, Erie County, Pennsylvania

    USGS Publications Warehouse

    Zimmerman, Tammy M.

    2008-01-01

    The Lake Erie beaches in Pennsylvania are a valuable recreational resource for Erie County. Concentrations of Escherichia coli (E. coli) at monitored beaches in Presque Isle State Park in Erie, Pa., occasionally exceed the single-sample bathing-water standard of 235 colonies per 100 milliliters resulting in potentially unsafe swimming conditions and prompting beach managers to post public advisories or to close beaches to recreation. To supplement the current method for assessing recreational water quality (E. coli concentrations from the previous day), a predictive regression model for E. coli concentrations at Presque Isle Beach 2 was developed from data collected during the 2004 and 2005 recreational seasons. Model output included predicted E. coli concentrations and exceedance probabilities--the probability that E. coli concentrations would exceed the standard. For this study, E. coli concentrations and other water-quality and environmental data were collected during the 2006 recreational season at Presque Isle Beach 2. The data from 2006, an independent year, were used to test (validate) the 2004-2005 predictive regression model and compare the model performance to the current method. Using 2006 data, the 2004-2005 model yielded more correct responses and better predicted exceedances of the standard than the use of E. coli concentrations from the previous day. The differences were not pronounced, however, and more data are needed. For example, the model correctly predicted exceedances of the standard 11 percent of the time (1 out of 9 exceedances that occurred in 2006) whereas using the E. coli concentrations from the previous day did not result in any correctly predicted exceedances. After validation, new models were developed by adding the 2006 data to the 2004-2005 dataset and by analyzing the data in 2- and 3-year combinations. Results showed that excluding the 2004 data (using 2005 and 2006 data only) yielded the best model. Explanatory variables in the 2005-2006 model were log10 turbidity, bird count, and wave height. The 2005-2006 model correctly predicted when the standard would not be exceeded (specificity) with a response of 95.2 percent (178 out of 187 nonexceedances) and correctly predicted when the standard would be exceeded (sensitivity) with a response of 64.3 percent (9 out of 14 exceedances). In all cases, the results from predictive modeling produced higher percentages of correct predictions than using E. coli concentrations from the previous day. Additional data collected each year can be used to test and possibly improve the model. The results of this study will aid beach managers in more rapidly determining when waters are not safe for recreational use and, subsequently, when to close a beach or post an advisory.
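
    As an illustration of turning the listed explanatory variables into an exceedance probability, the sketch below fits a logistic regression to simulated beach data and predicts the probability of exceeding the 235 colonies per 100 milliliters standard. This is a simplified stand-in, not the published USGS model, and the variable names, data, and coefficients are invented.

    ```python
    import numpy as np
    import pandas as pd
    import statsmodels.api as sm

    rng = np.random.default_rng(11)
    n = 180   # hypothetical beach-season observations

    df = pd.DataFrame({
        "log10_turbidity": rng.normal(1.0, 0.4, n),
        "bird_count": rng.poisson(25, n),
        "wave_height_ft": rng.gamma(2.0, 0.5, n),
    })
    # Simulated exceedances of the 235 colonies per 100 mL single-sample standard.
    logit = -6.0 + 2.5 * df["log10_turbidity"] + 0.03 * df["bird_count"] + 0.8 * df["wave_height_ft"]
    df["exceeds_235"] = rng.random(n) < 1.0 / (1.0 + np.exp(-logit))

    X = sm.add_constant(df[["log10_turbidity", "bird_count", "wave_height_ft"]])
    fit = sm.Logit(df["exceeds_235"].astype(int), X).fit(disp=False)

    # Predicted exceedance probability for tomorrow's (hypothetical) conditions.
    tomorrow = pd.DataFrame({"const": [1.0], "log10_turbidity": [1.6],
                             "bird_count": [40], "wave_height_ft": [2.0]})
    print("P(exceed 235 col/100 mL) =", float(fit.predict(tomorrow)[0]))
    ```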

  16. Estimating Local Child Abuse.

    ERIC Educational Resources Information Center

    Ards, Sheila

    1989-01-01

    Three conceptual approaches to estimating local child abuse rates using the National Incidence Study of Child Abuse and Neglect data set are evaluated. All three approaches yield estimates of actual abuse cases that exceed the number of reported cases. (SLD)

  17. Estimating household and community transmission of ocular Chlamydia trachomatis.

    PubMed

    Blake, Isobel M; Burton, Matthew J; Bailey, Robin L; Solomon, Anthony W; West, Sheila; Muñoz, Beatriz; Holland, Martin J; Mabey, David C W; Gambhir, Manoj; Basáñez, María-Gloria; Grassly, Nicholas C

    2009-01-01

    Community-wide administration of antibiotics is one arm of a four-pronged strategy in the global initiative to eliminate blindness due to trachoma. The potential impact of more efficient, targeted treatment of infected households depends on the relative contribution of community and household transmission of infection, which have not previously been estimated. A mathematical model of the household transmission of ocular Chlamydia trachomatis was fit to detailed demographic and prevalence data from four endemic populations in The Gambia and Tanzania. Maximum likelihood estimates of the household and community transmission coefficients were obtained. The estimated household transmission coefficient exceeded both the community transmission coefficient and the rate of clearance of infection by individuals in three of the four populations, allowing persistent transmission of infection within households. In all populations, individuals in larger households contributed more to the incidence of infection than those in smaller households. Transmission of ocular C. trachomatis infection within households is typically very efficient. Failure to treat all infected members of a household during mass administration of antibiotics is likely to result in rapid re-infection of that household, followed by more gradual spread across the community. The feasibility and effectiveness of household targeted strategies should be explored.

  18. Economic evaluation of a comprehensive teenage pregnancy prevention program: pilot program.

    PubMed

    Rosenthal, Marjorie S; Ross, Joseph S; Bilodeau, Roseanne; Richter, Rosemary S; Palley, Jane E; Bradley, Elizabeth H

    2009-12-01

    Previous research has suggested that comprehensive teenage pregnancy prevention programs that address sexual education and life skills development and provide academic support are effective in reducing births among enrolled teenagers. However, there have been limited data on the costs and cost effectiveness of such programs. The study used a community-based participatory research approach to develop estimates of the cost-benefit of the Pathways/Senderos Center, a comprehensive neighborhood-based program to prevent unintended pregnancies and promote positive development for adolescents. Using data from 1997-2003, an in-time intervention analysis was conducted to determine program cost-benefit while teenagers were enrolled; an extrapolation analysis was then used to estimate accrued economic benefits and cost-benefit up to age 30 years. The program operating costs totaled $3,228,152.59 and reduced the teenage childbearing rate from 94.10 to 40.00 per 1000 teenage girls, averting $52,297.84 in total societal costs, with an economic benefit to society from program participation of $2,673,153.11. Therefore, total costs to society exceeded economic benefits by $559,677.05, or $1599.08 per adolescent per year. In an extrapolation analysis, benefits to society exceed costs by $10,474.77 per adolescent per year by age 30 years on average, with social benefits outweighing total social costs by age 20.1 years. This comprehensive teenage pregnancy prevention program is estimated to provide societal economic benefits once participants are young adults, suggesting the need to expand beyond pilot demonstrations and evaluate the long-range cost effectiveness of similarly comprehensive programs when they are implemented more widely in high-risk neighborhoods.

  19. Economic Evaluation of a Comprehensive Teenage Pregnancy Prevention Program: Pilot Program

    PubMed Central

    Rosenthal, Marjorie S.; Ross, Joseph S.; Bilodeau, RoseAnne; Richter, Rosemary S.; Palley, Jane E.; Bradley, Elizabeth H.

    2011-01-01

    Background Previous research has suggested that comprehensive teenage pregnancy prevention programs that address sexual education and life skills development and provide academic support are effective in reducing births among enrolled teenagers. However, there have been limited data on costs and cost-effectiveness of such programs. Objectives To use a community-based participatory research approach to develop estimates of the cost-benefit of the Pathways/Senderos Center, a comprehensive neighborhood-based program to prevent unintended pregnancies and promote positive development for adolescents. Methods Using data from 1997-2003, we conducted an in-time intervention analysis to determine program cost-benefit while teenagers were enrolled and then used an extrapolation analysis to estimate accrued economic benefits and cost-benefit up to age 30. Results The program operating costs totaled $3,228,152.59 and reduced the teenage childbearing rate from 94.10 to 40.00 per 1000 teenage females, averting $52,297.84 in total societal costs, with an economic benefit to society from program participation of $2,673,153.11. Therefore, total costs to society exceeded economic benefits by $559,677.05, or $1,599.08 per adolescent per year. In an extrapolation analysis, benefits to society exceed costs by $10,474.77 per adolescent per year by age 30 on average, with social benefits outweighing total social costs by age 20.1 years. Conclusions We estimate that this comprehensive teenage pregnancy prevention program would provide societal economic benefits once participants are young adults, suggesting the need to expand beyond pilot demonstrations and evaluate the long-range cost-effectiveness of similarly comprehensive programs when implemented more widely in high-risk neighborhoods. PMID:19896030

  20. Arsenic and other elements in drinking water and dietary components from the middle Gangetic plain of Bihar, India: Health risk index.

    PubMed

    Kumar, Manoj; Rahman, Mohammad Mahmudur; Ramanathan, A L; Naidu, Ravi

    2016-01-01

    This study investigates the level of contamination and health risk assessment for arsenic (As) and other elements in drinking water, vegetables and other food components in two blocks (Mohiuddinagar and Mohanpur) from the Samastipur district, Bihar, India. Eighty percent of groundwater samples exceeded the World Health Organization (WHO) guideline value for As (10 μg/L), while Mn exceeded the previous WHO limit of 400 μg/L in 28% of samples. The estimated daily intakes of As, Cd, Co, Cr, Cu, Mn, Ni, Pb and Zn from drinking water and food components were 169, 19, 26, 882, 4645, 14582, 474, 1449 and 12,955 μg, respectively (estimated exposures of 3.70, 0.41, 0.57, 19.61, 103.22, 324.05, 10.53, 32.21 and 287.90 μg per kg bw, respectively). Twelve of 15 cooked rice samples contained higher As concentrations than uncooked rice. Water contributed a considerable share (67%) of daily As exposure, followed by rice and vegetables, whereas food was the major contributor of the other elements to dietary exposure. Correlation and principal component analysis (PCA) indicated a natural source for As, whereas diffuse anthropogenic activities were responsible for the other elements. The chronic daily intake (CDI) and health risk index (HRI) were also estimated from the generated data. The HRI was >1 for As in drinking water, vegetables and rice; for Mn in drinking water, vegetables, rice and wheat; and for Pb in rice and wheat, indicating a potential health risk to the local population. An assessment of As and other elements in other food components should be conducted to understand the actual health hazards caused by ingestion of food in people residing in the middle Gangetic plain. Copyright © 2015 Elsevier B.V. All rights reserved.
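
    The health risk index used here is typically computed as HRI = CDI / RfD, with CDI = concentration × intake rate / body weight. The sketch below shows that arithmetic for a single hypothetical rice sample; the concentration and intake values are illustrative, and the reference dose shown is the commonly cited U.S. EPA oral RfD for inorganic arsenic.

    ```python
    def chronic_daily_intake(conc_mg_per_kg, intake_kg_per_day, body_weight_kg):
        """CDI in mg per kg body weight per day for one dietary component."""
        return conc_mg_per_kg * intake_kg_per_day / body_weight_kg

    def health_risk_index(cdi, reference_dose):
        """HRI = CDI / RfD; values above 1 flag a potential non-carcinogenic risk."""
        return cdi / reference_dose

    # Hypothetical example: As in cooked rice (mg/kg), daily rice intake (kg), adult body weight (kg).
    cdi_as = chronic_daily_intake(conc_mg_per_kg=0.25, intake_kg_per_day=0.4, body_weight_kg=60.0)
    print("HRI =", round(health_risk_index(cdi_as, reference_dose=3.0e-4), 2))
    ```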

  1. Land cover controls on depression-focused recharge: an example from southern Ontario

    NASA Astrophysics Data System (ADS)

    Buttle, J. M.; Greenwood, W. J.

    2015-12-01

    The Oak Ridges Moraine (ORM) is a critical hydrogeologic feature in southern Ontario. Although previous research has highlighted the implications of spatially-focused recharge in closed topographic depressions for regional groundwater resources, such depression-focused recharge (DFR) has not been empirically demonstrated on the ORM. Permeable surficial sands and gravels mantling much of the ORM imply that water fluxes will largely be vertical recharge rather than lateral downslope transfer into depressions. Nevertheless, lateral fluxes may occur in winter and spring, when concrete frost development encourages surface runoff of rainfall and snowmelt. The potential for DFR was examined under forest and agricultural land cover with similar soils and surficial geology. Soil water contents, soil temperatures and ground frost thickness were measured at the crest and base of closed depressions in two agricultural fields and two forest stands on permeable ORM outcrops. Recharge from late fall to the end of spring snowmelt was estimated via 1-d water balances and surface-applied bromide tracing. Both forest and agricultural sites experienced soil freezing; however, greater soil water contents prior to freeze-up at the latter led to concrete soil frost development. This resulted in lateral movement of snowmelt and rainfall into topographic depressions and surface ponding, which did not occur in forest depressions. Water balance recharge exceeded estimates from the bromide tracer approach at all locations; nevertheless, both methods indicated DFR exceeded recharge at the depression crest in agricultural areas, with little difference in forest areas. Water balance estimates suggest winter-spring DFR (1300-2000 mm) is 3-5× recharge on level agricultural sites. Differences in the potential for DFR between agricultural and forest land covers have important implications for the spatial variability of recharge fluxes and the quality of recharging water on the ORM.
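
    The 1-d water balance used to estimate recharge can be written as recharge = (rain + snowmelt) − evapotranspiration − surface runoff − change in soil storage, accumulated over the winter-spring period. The sketch below is a minimal illustration with synthetic daily series; it is not the study's model or data.

    ```python
    import numpy as np

    def daily_recharge(precip_mm, snowmelt_mm, et_mm, runoff_mm, d_soil_storage_mm):
        """One-dimensional daily water balance: recharge = inputs (rain + melt)
        minus evapotranspiration, surface runoff, and the change in soil storage.
        Negative values are truncated to zero (no recharge that day)."""
        r = precip_mm + snowmelt_mm - et_mm - runoff_mm - d_soil_storage_mm
        return np.clip(r, 0.0, None)

    # Synthetic winter-spring daily series (mm) at a hypothetical depression base.
    rng = np.random.default_rng(5)
    n_days = 150
    recharge = daily_recharge(
        precip_mm=rng.gamma(0.4, 5.0, n_days),
        snowmelt_mm=rng.gamma(0.3, 8.0, n_days),
        et_mm=rng.uniform(0.0, 1.5, n_days),
        runoff_mm=rng.gamma(0.2, 3.0, n_days),
        d_soil_storage_mm=rng.normal(0.0, 1.0, n_days),
    )
    print("Seasonal recharge (mm):", round(float(recharge.sum())))
    ```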

  2. Assessment of dietary exposure in the French population to 13 selected food colours, preservatives, antioxidants, stabilizers, emulsifiers and sweeteners.

    PubMed

    Bemrah, Nawel; Leblanc, Jean-Charles; Volatier, Jean-Luc

    2008-01-01

    The results of French intake estimates for 13 food additives prioritized by the methods proposed in the 2001 Report from the European Commission on Dietary Food Additive Intake in the European Union are reported. These 13 additives were selected using the first and second tiers of the three-tier approach. The first tier was based on theoretical food consumption data and the maximum permitted level of additives. The second tier used real individual food consumption data and the maximum permitted level of additives for the substances which exceeded the acceptable daily intakes (ADI) in the first tier. In the third tier reported in this study, intake estimates were calculated for the 13 additives (colours, preservatives, antioxidants, stabilizers, emulsifiers and sweeteners) according to two modelling assumptions corresponding to two different food habit scenarios (assumption 1: consumers consume foods that may or may not contain food additives, and assumption 2: consumers always consume foods that contain additives) when possible. In this approach, real individual food consumption data and the occurrence/use-level of food additives reported by the food industry were used. Overall, the results of the intake estimates are reassuring for the majority of additives studied since the risk of exceeding the ADI was low, except for nitrites, sulfites and annatto, whose ADIs were exceeded by either children or adult consumers or by both populations under one and/or two modelling assumptions. Under the first assumption, the ADI is exceeded for high consumers among adults for nitrites and sulfites (155 and 118.4%, respectively) and among children for nitrites (275%). Under the second assumption, the average nitrites dietary exposure in children exceeds the ADI (146.7%). For high consumers, adults exceed the nitrite and sulfite ADIs (223 and 156.4%, respectively) and children exceed the nitrite, annatto and sulfite ADIs (416.7, 124.6 and 130.6%, respectively).

  3. Streamflow distribution maps for the Cannon River drainage basin, southeast Minnesota, and the St. Louis River drainage basin, northeast Minnesota

    USGS Publications Warehouse

    Smith, Erik A.; Sanocki, Chris A.; Lorenz, David L.; Jacobsen, Katrin E.

    2017-12-27

    Streamflow distribution maps for the Cannon River and St. Louis River drainage basins were developed by the U.S. Geological Survey, in cooperation with the Legislative-Citizen Commission on Minnesota Resources, to illustrate relative and cumulative streamflow distributions. The Cannon River was selected to provide baseline data to assess the effects of potential surficial sand mining, and the St. Louis River was selected to determine the effects of ongoing Mesabi Iron Range mining. Each drainage basin (Cannon, St. Louis) was subdivided into nested drainage basins: the Cannon River was subdivided into 152 nested drainage basins, and the St. Louis River was subdivided into 353 nested drainage basins. For each smaller drainage basin, the estimated volumes of groundwater discharge (as base flow) and surface runoff flowing into all surface-water features were displayed under the following conditions: (1) extreme low-flow conditions, comparable to an exceedance-probability quantile of 0.95; (2) low-flow conditions, comparable to an exceedance-probability quantile of 0.90; (3) a median condition, comparable to an exceedance-probability quantile of 0.50; and (4) a high-flow condition, comparable to an exceedance-probability quantile of 0.02. Streamflow distribution maps were developed using flow-duration curve exceedance-probability quantiles in conjunction with Soil-Water-Balance model outputs; both the flow-duration curve and Soil-Water-Balance models were built upon previously published U.S. Geological Survey reports. The selected streamflow distribution maps provide a proactive water management tool for State cooperators by illustrating flow rates during a range of hydraulic conditions. Furthermore, after the nested drainage basins are highlighted in terms of surface-water flows, the streamflows can be evaluated in the context of meeting specific ecological flows under different flow regimes and potentially assist with decisions regarding groundwater and surface-water appropriations. Presented streamflow distribution maps are foundational work intended to support the development of additional streamflow distribution maps that include statistical constraints on the selected flow conditions.

  4. Spatially robust estimates of biological nitrogen (N) fixation imply substantial human alteration of the tropical N cycle

    PubMed Central

    Sullivan, Benjamin W.; Smith, W. Kolby; Townsend, Alan R.; Nasto, Megan K.; Reed, Sasha C.; Chazdon, Robin L.; Cleveland, Cory C.

    2014-01-01

    Biological nitrogen fixation (BNF) is the largest natural source of exogenous nitrogen (N) to unmanaged ecosystems and also the primary baseline against which anthropogenic changes to the N cycle are measured. Rates of BNF in tropical rainforest are thought to be among the highest on Earth, but they are notoriously difficult to quantify and are based on little empirical data. We adapted a sampling strategy from community ecology to generate spatial estimates of symbiotic and free-living BNF in secondary and primary forest sites that span a typical range of tropical forest legume abundance. Although total BNF was higher in secondary than primary forest, overall rates were roughly five times lower than previous estimates for the tropical forest biome. We found strong correlations between symbiotic BNF and legume abundance, but we also show that spatially free-living BNF often exceeds symbiotic inputs. Our results suggest that BNF in tropical forest has been overestimated, and our data are consistent with a recent top-down estimate of global BNF that implied but did not measure low tropical BNF rates. Finally, comparing tropical BNF within the historical area of tropical rainforest with current anthropogenic N inputs indicates that humans have already at least doubled reactive N inputs to the tropical forest biome, a far greater change than previously thought. Because N inputs are increasing faster in the tropics than anywhere on Earth, both the proportion and the effects of human N enrichment are likely to grow in the future. PMID:24843146

  5. At what costs will screening with CT colonography be competitive? A cost-effectiveness approach.

    PubMed

    Lansdorp-Vogelaar, Iris; van Ballegooijen, Marjolein; Zauber, Ann G; Boer, Rob; Wilschut, Janneke; Habbema, J Dik F

    2009-03-01

    The costs of computed tomographic colonography (CTC) are not yet established for screening use. In our study, we estimated the threshold costs for which CTC screening would be a cost-effective alternative to colonoscopy for colorectal cancer (CRC) screening in the general population. We used the MISCAN-colon microsimulation model to estimate the costs and life-years gained of screening persons aged 50-80 years for 4 screening strategies: (i) optical colonoscopy; and CTC with referral to optical colonoscopy of (ii) any suspected polyp; (iii) a suspected polyp ≥6 mm and (iv) a suspected polyp ≥10 mm. For each of the 4 strategies, screen intervals of 5, 10, 15 and 20 years were considered. Subsequently, for each CTC strategy and interval, the threshold costs of CTC were calculated. We performed a sensitivity analysis to assess the effect of uncertain model parameters on the threshold costs. With equal costs ($662), optical colonoscopy dominated CTC screening. For CTC to gain similar life-years as colonoscopy screening every 10 years, it should be offered every 5 years with referral of polyps ≥6 mm. For this strategy to be as cost-effective as colonoscopy screening, the costs must not exceed $285 or 43% of colonoscopy costs (range in sensitivity analysis: 39-47%). With 25% higher adherence than colonoscopy, CTC threshold costs could be 71% of colonoscopy costs. Our estimate of 43% is considerably lower than previous estimates in the literature, because previous studies only compared CTC screening to 10-yearly colonoscopy, whereas we compared it to different intervals of colonoscopy screening.

  6. Probabilistic assessment of precipitation-triggered landslides using historical records of landslide occurrence, Seattle, Washington

    USGS Publications Warehouse

    Coe, J.A.; Michael, J.A.; Crovelli, R.A.; Savage, W.Z.; Laprade, W.T.; Nashem, W.D.

    2004-01-01

    Ninety years of historical landslide records were used as input to the Poisson and binomial probability models. Results from these models show that, for precipitation-triggered landslides, approximately 9 percent of the area of Seattle has annual exceedance probabilities of 1 percent or greater. Application of the Poisson model for estimating the future occurrence of individual landslides results in a worst-case scenario map, with a maximum annual exceedance probability of 25 percent on a hillslope near Duwamish Head in West Seattle. Application of the binomial model for estimating the future occurrence of a year with one or more landslides results in a map with a maximum annual exceedance probability of 17 percent (also near Duwamish Head). Slope and geology both play a role in localizing the occurrence of landslides in Seattle. A positive correlation exists between slope and mean exceedance probability, with probability tending to increase as slope increases. Sixty-four percent of all historical landslide locations are within 150 m (500 ft, horizontal distance) of the Esperance Sand/Lawton Clay contact, but within this zone, no positive or negative correlation exists between exceedance probability and distance to the contact.
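
    The two probability models reduce to simple formulas: with a Poisson rate λ estimated as the historical landslide count divided by the record length, the annual exceedance probability is 1 − exp(−λ); with the binomial model it is the fraction of years in the record that had one or more landslides. The sketch below shows both with hypothetical counts, not values from the Seattle dataset.

    ```python
    import math

    def poisson_annual_exceedance(n_events, record_years, t_years=1.0):
        """P(one or more events in t years) under a Poisson model with rate
        lambda = historical event count / record length."""
        lam = n_events / record_years
        return 1.0 - math.exp(-lam * t_years)

    def binomial_annual_exceedance(years_with_events, record_years):
        """Annual probability of a year with one or more landslides under a binomial model."""
        return years_with_events / record_years

    # Hypothetical 90-year record for one hillslope cell.
    print(poisson_annual_exceedance(n_events=26, record_years=90))        # individual landslides
    print(binomial_annual_exceedance(years_with_events=15, record_years=90))
    ```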

  7. Evaluation of Potential Exposure to Metals in Laundered Shop Towels

    PubMed Central

    Greenberg, Grace; Beck, Barbara D.

    2013-01-01

    We reported in 2003 that exposure to metals on laundered shop towels (LSTs) could exceed toxicity criteria. New data from LSTs used by workers in North America document the continued presence of metals in freshly laundered towels. We assessed potential exposure to metals based on concentrations of metals on the LSTs, estimates of LST usage by employees, and the transfer of metals from LST-to-hand, hand-to-mouth, and LST-to-lip, under average- or high-exposure scenarios. Exposure estimates were compared to toxicity criteria. Under an average-exposure scenario (excluding metals' data outliers), exceedances of the California Environmental Protection Agency, U.S. Environmental Protection Agency, and the Agency for Toxic Substances and Disease Registry toxicity criteria may occur for aluminum, cadmium, cobalt, copper, iron, and lead. Calculated intakes for these metals were up to more than 400-fold higher (lead) than their respective toxicity criterion. For the high-exposure scenario, additional exceedances may occur, and high-exposure intakes were up to 1,170-fold higher (lead) than their respective toxicity criterion. A sensitivity analysis indicated that alternate plausible assumptions could increase or decrease the magnitude of exceedances, but were unlikely to eliminate certain exceedances, particularly for lead. PMID:24453472

  8. Improved mapping of National Atmospheric Deposition Program wet-deposition in complex terrain using PRISM-gridded data sets

    USGS Publications Warehouse

    Latysh, Natalie E.; Wetherbee, Gregory Alan

    2012-01-01

    High-elevation regions in the United States lack detailed atmospheric wet-deposition data. The National Atmospheric Deposition Program/National Trends Network (NADP/NTN) measures and reports precipitation amounts and chemical constituent concentration and deposition data for the United States on annual isopleth maps using inverse distance weighted (IDW) interpolation methods. This interpolation for unsampled areas does not account for topographic influences. Therefore, NADP/NTN isopleth maps lack detail and potentially underestimate wet deposition in high-elevation regions. The NADP/NTN wet-deposition maps may be improved using precipitation grids generated by other networks. The Parameter-elevation Regressions on Independent Slopes Model (PRISM) produces digital grids of precipitation estimates from many precipitation-monitoring networks and incorporates influences of topographical and geographical features. Because NADP/NTN ion concentrations do not vary with elevation as much as precipitation depths, PRISM is used with unadjusted NADP/NTN data in this paper to calculate ion wet deposition and thereby yield more accurate and detailed isopleth deposition maps in complex terrain. PRISM precipitation estimates generally exceed NADP/NTN precipitation estimates for coastal and mountainous regions in the western United States. NADP/NTN precipitation estimates generally exceed PRISM precipitation estimates for leeward mountainous regions in Washington, Oregon, and Nevada, where abrupt changes in precipitation depths induced by topography are not depicted by IDW interpolation. PRISM-based deposition estimates for nitrate can exceed NADP/NTN estimates by more than 100% for mountainous regions in the western United States.
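    For reference, the inverse distance weighting that the NADP/NTN maps rely on can be written in a few lines. The sketch below is a generic IDW interpolator (the power parameter and sample values are assumptions, not numbers from the paper) and shows why purely distance-based weights cannot respond to topography.

```python
import numpy as np

def idw(xy_obs, values, xy_target, power=2.0):
    """Inverse-distance-weighted estimate at one target point.

    xy_obs: (n, 2) array of station coordinates
    values: (n,) array of observed deposition or precipitation
    xy_target: (2,) coordinates of the unsampled location
    """
    d = np.linalg.norm(xy_obs - xy_target, axis=1)
    if np.any(d == 0):                 # target coincides with a station
        return values[d == 0][0]
    w = 1.0 / d**power                 # weights depend on distance only, not elevation
    return np.sum(w * values) / np.sum(w)

# Hypothetical example: three stations and one unsampled mountain site.
stations = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])
nitrate_dep = np.array([2.1, 1.4, 3.0])          # kg/ha, made-up values
print(idw(stations, nitrate_dep, np.array([3.0, 4.0])))
```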

  9. Improved mapping of National Atmospheric Deposition Program wet-deposition in complex terrain using PRISM-gridded data sets.

    PubMed

    Latysh, Natalie E; Wetherbee, Gregory Alan

    2012-01-01

    High-elevation regions in the United States lack detailed atmospheric wet-deposition data. The National Atmospheric Deposition Program/National Trends Network (NADP/NTN) measures and reports precipitation amounts and chemical constituent concentration and deposition data for the United States on annual isopleth maps using inverse distance weighted (IDW) interpolation methods. This interpolation for unsampled areas does not account for topographic influences. Therefore, NADP/NTN isopleth maps lack detail and potentially underestimate wet deposition in high-elevation regions. The NADP/NTN wet-deposition maps may be improved using precipitation grids generated by other networks. The Parameter-elevation Regressions on Independent Slopes Model (PRISM) produces digital grids of precipitation estimates from many precipitation-monitoring networks and incorporates influences of topographical and geographical features. Because NADP/NTN ion concentrations do not vary with elevation as much as precipitation depths, PRISM is used with unadjusted NADP/NTN data in this paper to calculate ion wet deposition and thereby yield more accurate and detailed isopleth deposition maps in complex terrain. PRISM precipitation estimates generally exceed NADP/NTN precipitation estimates for coastal and mountainous regions in the western United States. NADP/NTN precipitation estimates generally exceed PRISM precipitation estimates for leeward mountainous regions in Washington, Oregon, and Nevada, where abrupt changes in precipitation depths induced by topography are not depicted by IDW interpolation. PRISM-based deposition estimates for nitrate can exceed NADP/NTN estimates by more than 100% for mountainous regions in the western United States.

  10. Health benefits and costs of filtration interventions that reduce indoor exposure to PM2.5 during wildfires.

    PubMed

    Fisk, W J; Chan, W R

    2017-01-01

    Increases in hospital admissions and deaths are associated with increases in outdoor air particles during wildfires. This analysis estimates the health benefits expected if interventions had improved particle filtration in homes in Southern California during a 10-day period of wildfire smoke exposure. Economic benefits and intervention costs are also estimated. The six interventions implemented in all affected houses are projected to prevent 11% to 63% of the hospital admissions and 7% to 39% of the deaths attributable to wildfire particles. The fraction of the population with an admission attributable to wildfire smoke is small; thus, the costs of interventions in all homes far exceed the economic benefits of reduced hospital admissions. However, the estimated economic value of the prevented deaths exceeds or far exceeds intervention costs for interventions that do not use portable air cleaners. For the interventions with portable air cleaner use, mortality-related economic benefits exceed intervention costs as long as the cost of the air cleaners, which have a multi-year life, is not attributed to the short wildfire period. Cost effectiveness is improved by intervening only in the homes of the elderly, who experience most of the health effects of particles from wildfires. © 2016 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  11. Reconciling fisheries catch and ocean productivity

    PubMed Central

    Stock, Charles A.; Asch, Rebecca G.; Cheung, William W. L.; Dunne, John P.; Friedland, Kevin D.; Lam, Vicky W. Y.; Sarmiento, Jorge L.; Watson, Reg A.

    2017-01-01

    Photosynthesis fuels marine food webs, yet differences in fish catch across globally distributed marine ecosystems far exceed differences in net primary production (NPP). We consider the hypothesis that ecosystem-level variations in pelagic and benthic energy flows from phytoplankton to fish, trophic transfer efficiencies, and fishing effort can quantitatively reconcile this contrast in an energetically consistent manner. To test this hypothesis, we enlist global fish catch data that include previously neglected contributions from small-scale fisheries, a synthesis of global fishing effort, and plankton food web energy flux estimates from a prototype high-resolution global earth system model (ESM). After removing a small number of lightly fished ecosystems, stark interregional differences in fish catch per unit area can be explained (r = 0.79) with an energy-based model that (i) considers dynamic interregional differences in benthic and pelagic energy pathways connecting phytoplankton and fish, (ii) depresses trophic transfer efficiencies in the tropics and, less critically, (iii) associates elevated trophic transfer efficiencies with benthic-predominant systems. Model catch estimates are generally within a factor of 2 of values spanning two orders of magnitude. Climate change projections show that the same macroecological patterns explaining dramatic regional catch differences in the contemporary ocean amplify catch trends, producing changes that may exceed 50% in some regions by the end of the 21st century under high-emissions scenarios. Models failing to resolve these trophodynamic patterns may significantly underestimate regional fisheries catch trends and hinder adaptation to climate change. PMID:28115722
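    The energy-based reasoning can be caricatured as routing NPP through pelagic and benthic pathways, discounting each trophic step by a transfer efficiency, and harvesting a fraction of the flux reaching fish. The sketch below is only that caricature, with invented parameter values, not the ESM-driven model of the paper.

```python
def fish_catch_flux(npp, f_benthic, te_pelagic, te_benthic,
                    steps_pelagic=3.0, steps_benthic=2.0, harvest_fraction=0.2):
    """Crude energy-routing estimate of potential catch (same units as npp).

    npp: net primary production entering the food web
    f_benthic: fraction of NPP routed through the benthic pathway
    te_*: trophic transfer efficiency per step for each pathway
    steps_*: trophic steps from phytoplankton to harvested fish
    All values here are illustrative assumptions.
    """
    pelagic = (1.0 - f_benthic) * npp * te_pelagic ** steps_pelagic
    benthic = f_benthic * npp * te_benthic ** steps_benthic
    return harvest_fraction * (pelagic + benthic)

# Hypothetical contrast: a benthic-dominated shelf vs a tropical pelagic system.
print(fish_catch_flux(npp=100.0, f_benthic=0.5, te_pelagic=0.10, te_benthic=0.15))
print(fish_catch_flux(npp=100.0, f_benthic=0.1, te_pelagic=0.07, te_benthic=0.10))
```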

  12. Reconciling fisheries catch and ocean productivity.

    PubMed

    Stock, Charles A; John, Jasmin G; Rykaczewski, Ryan R; Asch, Rebecca G; Cheung, William W L; Dunne, John P; Friedland, Kevin D; Lam, Vicky W Y; Sarmiento, Jorge L; Watson, Reg A

    2017-02-21

    Photosynthesis fuels marine food webs, yet differences in fish catch across globally distributed marine ecosystems far exceed differences in net primary production (NPP). We consider the hypothesis that ecosystem-level variations in pelagic and benthic energy flows from phytoplankton to fish, trophic transfer efficiencies, and fishing effort can quantitatively reconcile this contrast in an energetically consistent manner. To test this hypothesis, we enlist global fish catch data that include previously neglected contributions from small-scale fisheries, a synthesis of global fishing effort, and plankton food web energy flux estimates from a prototype high-resolution global earth system model (ESM). After removing a small number of lightly fished ecosystems, stark interregional differences in fish catch per unit area can be explained (r = 0.79) with an energy-based model that (i) considers dynamic interregional differences in benthic and pelagic energy pathways connecting phytoplankton and fish, (ii) depresses trophic transfer efficiencies in the tropics and, less critically, (iii) associates elevated trophic transfer efficiencies with benthic-predominant systems. Model catch estimates are generally within a factor of 2 of values spanning two orders of magnitude. Climate change projections show that the same macroecological patterns explaining dramatic regional catch differences in the contemporary ocean amplify catch trends, producing changes that may exceed 50% in some regions by the end of the 21st century under high-emissions scenarios. Models failing to resolve these trophodynamic patterns may significantly underestimate regional fisheries catch trends and hinder adaptation to climate change.

  13. Estimates of potential childhood lead exposure from contaminated soil using the US EPA IEUBK Model in Sydney, Australia.

    PubMed

    Laidlaw, Mark A S; Mohmmad, Shaike M; Gulson, Brian L; Taylor, Mark P; Kristensen, Louise J; Birch, Gavin

    2017-07-01

    Surface soils in portions of the Sydney (New South Wales, Australia) urban area are contaminated with lead (Pb) primarily from past use of Pb in gasoline, the deterioration of exterior lead-based paints, and industrial activities. Surface soil samples (n = 341) were collected from a depth of 0-2.5 cm at a density of approximately one sample per square kilometre within the Sydney estuary catchment and analysed for lead. The bioaccessibility of soil Pb was analysed in 18 samples. The blood lead level (BLL) of a hypothetical 24-month-old child was predicted at soil sampling sites in residential and open land use using the United States Environmental Protection Agency (US EPA) Integrated Exposure Uptake and Biokinetic (IEUBK) model. Other environmental exposures used the Australian National Environmental Protection Measure (NEPM) default values. The IEUBK model predicted a geometric mean BLL of 2.0 ± 2.1 µg/dL using measured soil lead bioavailability measurements (bioavailability = 34%) and 2.4 ± 2.8 µg/dL using the Australian NEPM default assumption (bioavailability = 50%). Assuming children were present and residing at the sampling locations, the IEUBK model incorporating soil Pb bioavailability predicted that 5.6% of the children at the sampling locations could potentially have BLLs exceeding 5 µg/dL and 2.1% potentially could have BLLs exceeding 10 µg/dL. These estimations are consistent with BLLs previously measured in children in Sydney. Copyright © 2017 Elsevier Inc. All rights reserved.
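    IEUBK itself is a multi-compartment biokinetic model, but exceedance percentages of this kind follow from treating predicted BLLs as lognormally distributed. A minimal sketch of that last step is below; it uses the abstract's aggregate geometric mean and geometric standard deviation, so it only approximates, and will not exactly reproduce, the site-by-site IEUBK output quoted above.

```python
from math import log
from statistics import NormalDist

gm, gsd = 2.0, 2.1        # geometric mean and geometric SD of predicted BLLs (ug/dL)

def fraction_exceeding(threshold, gm, gsd):
    """Fraction of a lognormal BLL distribution above a threshold."""
    z = (log(threshold) - log(gm)) / log(gsd)
    return 1.0 - NormalDist().cdf(z)

for t in (5.0, 10.0):
    print(f"P(BLL > {t} ug/dL) = {fraction_exceeding(t, gm, gsd):.1%}")
```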

  14. United States Geological Survey fire science: fire danger monitoring and forecasting

    USGS Publications Warehouse

    Eidenshink, Jeff C.; Howard, Stephen M.

    2012-01-01

    Each day, the U.S. Geological Survey produces 7-day forecasts for all Federal lands of the distributions of the number of ignitions, the number of fires above a given size, and the conditional probability of fires growing larger than a specified size. The large-fire probability map estimates the likelihood that ignitions will become large fires. The large-fire forecast map is a probability estimate of the number of fires on Federal lands exceeding 100 acres in the forthcoming week. The ignition forecast map is a probability estimate of the number of fires on Federal land greater than 1 acre in the forthcoming week. The extreme-event forecast is a probability estimate of the number of fires on Federal land that may exceed 5,000 acres in the forthcoming week.

  15. Extending natural hazard impacts: an assessment of landslide disruptions on a national road transportation network

    NASA Astrophysics Data System (ADS)

    Postance, Benjamin; Hillier, John; Dijkstra, Tom; Dixon, Neil

    2017-01-01

    Disruptions to transportation networks by natural hazard events cause direct losses (e.g. by physical damage) and indirect socio-economic losses via travel delays and decreased transportation efficiency. The severity and spatial distribution of these losses vary according to user travel demands and which links, nodes or infrastructure assets are physically disrupted. Increasing transport network resilience, for example by targeted mitigation strategies, requires the identification of the critical network segments which if disrupted would incur undesirable or unacceptable socio-economic impacts. Here, these impacts are assessed on a national road transportation network by coupling hazard data with a transport network model. This process is illustrated using a case study of landslide hazards on the road network of Scotland. A set of possible landslide-prone road segments is generated using landslide susceptibility data. The results indicate that at least 152 road segments are susceptible to landslides, which could cause indirect economic losses exceeding £35,000 for each day of closure. In addition, previous estimates for historic landslide events might be significant underestimates. For example, the estimated losses for the 2007 A83 'Rest and Be Thankful' landslide are £80,000 per day, totalling £1.2 million over a 15-day closure, and are ~60% greater than previous estimates. The spatial distribution of impact to road users is communicated in terms of 'extended hazard impact footprints'. These footprints reveal previously unknown exposed communities and unanticipated spatial patterns of severe disruption. Beyond cost-benefit analyses for landslide mitigation efforts, the approach implemented is applicable to other natural hazards (e.g. flooding), combinations of hazards, or even other network disruption events.

  16. VizieR Online Data Catalog: 5 Galactic GC proper motions from Gaia DR1 (Watkins+, 2017)

    NASA Astrophysics Data System (ADS)

    Watkins, L. L.; van der Marel, R. P.

    2017-11-01

    We present a pilot study of Galactic globular cluster (GC) proper motion (PM) determinations using Gaia data. We search for GC stars in the Tycho-Gaia Astrometric Solution (TGAS) catalog from Gaia Data Release 1 (DR1), and identify five members of NGC 104 (47 Tucanae), one member of NGC 5272 (M3), five members of NGC 6121 (M4), seven members of NGC 6397, and two members of NGC 6656 (M22). By taking a weighted average of member stars, fully accounting for the correlations between parameters, we estimate the parallax (and, hence, distance) and PM of the GCs. This provides a homogeneous PM study of multiple GCs based on an astrometric catalog with small and well-controlled systematic errors and yields random PM errors similar to existing measurements. Detailed comparison to the available Hubble Space Telescope (HST) measurements generally shows excellent agreement, validating the astrometric quality of both TGAS and HST. By contrast, comparison to ground-based measurements shows that some of those must have systematic errors exceeding the random errors. Our parallax estimates have uncertainties an order of magnitude larger than previous studies, but nevertheless imply distances consistent with previous estimates. By combining our PM measurements with literature positions, distances, and radial velocities, we measure Galactocentric space motions for the clusters and find that these also agree well with previous analyses. Our analysis provides a framework for determining more accurate distances and PMs of Galactic GCs using future Gaia data releases. This will provide crucial constraints on the near end of the cosmic distance ladder and provide accurate GC orbital histories. (4 data files).
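    The "weighted average of member stars, fully accounting for the correlations between parameters" amounts to a generalized least-squares mean over the per-star (parallax, pmRA, pmDec) vectors and their covariance matrices. The sketch below follows that reading, with made-up TGAS-like numbers rather than the catalog values.

```python
import numpy as np

def gls_mean(params, covs):
    """Generalized least-squares average of per-star astrometric vectors.

    params: (n_stars, 3) array of (parallax, pmra, pmdec)
    covs:   (n_stars, 3, 3) array of per-star covariance matrices
    Returns the combined estimate and its covariance.
    """
    inv_sum = np.zeros((3, 3))
    weighted = np.zeros(3)
    for x, c in zip(params, covs):
        c_inv = np.linalg.inv(c)
        inv_sum += c_inv
        weighted += c_inv @ x
    cov_mean = np.linalg.inv(inv_sum)
    return cov_mean @ weighted, cov_mean

# Two hypothetical cluster members (mas, mas/yr, mas/yr) with uncorrelated errors
# for simplicity; real TGAS covariances include off-diagonal terms.
stars = np.array([[0.20, 5.1, -2.4],
                  [0.26, 5.3, -2.6]])
covs = np.array([np.diag([0.3, 0.1, 0.1])**2,
                 np.diag([0.4, 0.2, 0.2])**2])
mean, cov = gls_mean(stars, covs)
print(mean, np.sqrt(np.diag(cov)))
```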

  17. The Energy Budget of the Polar Atmosphere in MERRA

    NASA Technical Reports Server (NTRS)

    Cullather, Richard I.; Bosilovich, Michael G.

    2010-01-01

    Components of the atmospheric energy budget from the Modern Era Retrospective-analysis for Research and Applications (MERRA) are evaluated in polar regions for the period 1979-2005 and compared with previous estimates, in situ observations, and contemporary reanalyses. Closure of the energy budget is reflected by the analysis increments term, which results from virtual enthalpy and latent heating contributions and averages -11 W/sq m over the north polar cap and -22 W/sq m over the south polar cap. Total energy tendency and energy convergence terms from MERRA agree closely with previous studies for northern high latitudes, but convergence exceeds previous estimates for the south polar cap by 46 percent. Discrepancies with the Southern Hemisphere transport are largest in autumn and may be related to differences in topography relative to earlier reanalyses. For the Arctic, differences between MERRA and other sources in TOA and surface radiative fluxes are largest in May. These differences are concurrent with the largest discrepancies between MERRA parameterized and observed surface albedo. For May, in situ observations of the upwelling shortwave flux in the Arctic are 80 W/sq m larger than MERRA, while the MERRA downwelling longwave flux is underestimated by 12 W/sq m throughout the year. Over grounded ice sheets, the annual mean net surface energy flux in MERRA is erroneously non-zero. Contemporary reanalyses, the Climate Forecast System Reanalysis (CFSR) and the European Centre for Medium-Range Weather Forecasts Interim Reanalysis (ERA-I), are found to have better surface parameterizations; however, these collections are also found to have significant discrepancies with observed surface and TOA energy fluxes. Discrepancies among available reanalyses underscore the challenge of reproducing credible estimates of the atmospheric energy budget in polar regions.

  18. Tycho- Gaia Astrometric Solution Parallaxes and Proper Motions for Five Galactic Globular Clusters

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Watkins, Laura L.; Van der Marel, Roeland P., E-mail: lwatkins@stsci.edu

    2017-04-20

    We present a pilot study of Galactic globular cluster (GC) proper motion (PM) determinations using Gaia data. We search for GC stars in the Tycho-Gaia Astrometric Solution (TGAS) catalog from Gaia Data Release 1 (DR1), and identify five members of NGC 104 (47 Tucanae), one member of NGC 5272 (M3), five members of NGC 6121 (M4), seven members of NGC 6397, and two members of NGC 6656 (M22). By taking a weighted average of member stars, fully accounting for the correlations between parameters, we estimate the parallax (and, hence, distance) and PM of the GCs. This provides a homogeneous PM study of multiple GCs based on an astrometric catalog with small and well-controlled systematic errors and yields random PM errors similar to existing measurements. Detailed comparison to the available Hubble Space Telescope (HST) measurements generally shows excellent agreement, validating the astrometric quality of both TGAS and HST. By contrast, comparison to ground-based measurements shows that some of those must have systematic errors exceeding the random errors. Our parallax estimates have uncertainties an order of magnitude larger than previous studies, but nevertheless imply distances consistent with previous estimates. By combining our PM measurements with literature positions, distances, and radial velocities, we measure Galactocentric space motions for the clusters and find that these also agree well with previous analyses. Our analysis provides a framework for determining more accurate distances and PMs of Galactic GCs using future Gaia data releases. This will provide crucial constraints on the near end of the cosmic distance ladder and provide accurate GC orbital histories.

  19. Risk assessment for infants exposed to furan from ready-to-eat thermally processed food products in Poland.

    PubMed

    Minorczyk, Maria; Góralczyk, Katarzyna; Struciński, Paweł; Hernik, Agnieszka; Czaja, Katarzyna; Łyczewska, Monika; Korcz, Wojciech; Starski, Andrzej; Ludwicki, Jan K

    2012-01-01

    Thermal processing and long storage of food lead to reactions between reducing sugars and amino acids, or with ascorbic acid, carbohydrates or polyunsaturated fatty acids. As a result of these reactions, new compounds are created. One of these compounds with an adverse effect on human health is furan. The aim of this paper was to estimate infants' exposure to furan found in thermally processed jarred food products, and to characterize the risk by comparing the exposure to the reference dose (RfD) and calculating margins of exposure. The material consisted of 301 samples of thermally processed food for infants taken from the Polish market in the years 2008-2010. The samples included vegetable-meat, vegetable, and fruit jarred meals for infants and young children, in which furan levels were analyzed by a GC/MS technique. Exposure to furan was assessed for 3-, 4-, 6-, 9- and 12-month-old infants using different consumption scenarios. Furan levels ranged from <1 microg/kg (LOQ) to 166.9 microg/kg, with an average concentration across all samples of 40.2 microg/kg. The estimated exposures, calculated with different nutrition scenarios, ranged from 0.03 to 3.56 microg/kg bw/day and in some cases exceeded the RfD, set at 1 microg/kg bw/day. Margins of exposure (MOE) fell below 300 for scenarios assuming higher consumption of vegetable and vegetable-meat products. The magnitude of exposure to furan present in ready-to-eat meals among Polish infants is similar to data reported previously in other European countries but slightly higher than indicated in the recent EFSA report. Because in some cases the estimated intake exceeds the RfD and MOE values are much lower than 10,000, indicating a potential health concern, it is necessary to continue monitoring furan in jarred food and estimating its intake by infants.
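    The exposure arithmetic behind these figures is a straightforward intake calculation, with the margin of exposure obtained by dividing a toxicological reference point by the estimated intake. The sketch below uses the abstract's mean concentration and RfD; the consumption amount, body weight, and benchmark-dose reference point are illustrative assumptions.

```python
furan_conc = 40.2      # ug/kg, average concentration in jarred meals (from the abstract)
meal_grams = 200.0     # g/day of jarred food eaten -- assumed consumption scenario
body_weight = 8.0      # kg -- assumed infant body weight

intake = furan_conc * (meal_grams / 1000.0) / body_weight   # ug/kg bw/day
print(f"Estimated intake: {intake:.2f} ug/kg bw/day")

rfd = 1.0              # ug/kg bw/day, reference dose cited in the abstract
print(f"Exceeds RfD: {intake > rfd}")

bmdl = 960.0           # ug/kg bw/day -- hypothetical benchmark dose for the MOE illustration
print(f"Margin of exposure: {bmdl / intake:.0f}")
```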

  20. Risk assessment for adult butterflies exposed to the mosquito control pesticide naled

    USGS Publications Warehouse

    Bargar, Timothy A.

    2012-01-01

    A prospective risk assessment was conducted for adult butterflies potentially exposed to the mosquito control insecticide naled. Published acute mortality data, exposure data collected during field studies, and morphometric data (total surface area and fresh body weight) for adult butterflies were combined in a probabilistic estimate of the likelihood that adult butterfly exposure to naled following aerial applications would exceed levels associated with acute mortality. Adult butterfly exposure was estimated based on the product of (1) naled residues on samplers and (2) an exposure metric that normalized total surface area for adult butterflies to their fresh weight. The likelihood that the 10th percentile refined effect estimate for adult butterflies exposed to naled would be exceeded following aerial naled applications was 67 to 80%. The greatest risk would be for butterflies in the family Lycaenidae, and the lowest risk would be for those in the family Hesperiidae, assuming equivalent sensitivity to naled. A range of potential guideline naled deposition levels is presented that, if not exceeded, would reduce the risk of adult butterfly mortality. The results for this risk assessment were compared with other risk estimates for butterflies, and the implications for adult butterflies in areas targeted by aerial naled applications are discussed.
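    The probabilistic comparison described here can be mocked up as a Monte Carlo draw over deposition residues and the surface-area-to-weight exposure metric, counting how often their product exceeds an effect level. All distribution parameters below are placeholders, not the study's fitted values.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Hypothetical lognormal distributions for naled residue on samplers (ug/cm^2)
# and the butterfly surface-area-to-fresh-weight exposure metric (cm^2/g).
residue = rng.lognormal(mean=np.log(0.05), sigma=0.8, size=n)
sa_per_weight = rng.lognormal(mean=np.log(60.0), sigma=0.4, size=n)

exposure = residue * sa_per_weight            # ug/g body weight
effect_10th_percentile = 2.0                  # ug/g -- placeholder acute effect level

p_exceed = np.mean(exposure > effect_10th_percentile)
print(f"Probability of exceeding the effect level: {p_exceed:.0%}")
```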

  1. An ecological risk assessment of the acute and chronic effects of the herbicide clopyralid to rainbow trout (Oncorhynchus mykiss)

    USGS Publications Warehouse

    Fairchild, J.F.; Allert, A.L.; Feltz, K.P.; Nelson, K.J.; Valle, J.A.

    2009-01-01

    Clopyralid (3,6-dichloro-2-pyridinecarboxylic acid) is a pyridine herbicide frequently used to control invasive, noxious weeds in the northwestern United States. Clopyralid exhibits low acute toxicity to fish, including the rainbow trout (Oncorhynchus mykiss) and the threatened bull trout (Salvelinus confluentus). However, there are no published chronic toxicity data for clopyralid and fish that can be used in ecological risk assessments. We conducted 30-day chronic toxicity studies with juvenile rainbow trout exposed to the acid form of clopyralid. The 30-day maximum acceptable toxicant concentration (MATC) for growth, calculated as the geometric mean of the no observable effect concentration (68 mg/L) and the lowest observable effect concentration (136 mg/L), was 96 mg/L. No mortality was measured at the highest chronic concentration tested (273 mg/L). The acute:chronic ratio, calculated by dividing the previously published 96-h acutely lethal concentration (96-h ALC50; 700 mg/L) by the MATC, was 7.3. Toxicity values were compared to a four-tiered exposure assessment profile assuming an application rate of 1.12 kg/ha. The Tier 1 exposure estimation, based on direct overspray of a 2-m deep pond, was 0.055 mg/L. The Tier 2 maximum exposure estimate, based on the Generic Exposure Estimate Concentration model (GEENEC), was 0.057 mg/L. The Tier 3 maximum exposure estimate, based on previously published results of the Groundwater Loading Effects of Agricultural Management Systems model (GLEAMS), was 0.073 mg/L. The Tier 4 exposure estimate, based on published edge-of-field monitoring data, was estimated at 0.008 mg/L. Comparison of toxicity data to estimated environmental concentrations of clopyralid indicates that the safety factor for rainbow trout exposed to clopyralid at labeled use rates exceeds 1000. Therefore, the herbicide presents little to no risk to rainbow trout or other salmonids such as the threatened bull trout. © 2009 US Government.
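    The summary statistics in this abstract are simple arithmetic on the reported concentrations, reproduced below as a check (all values come straight from the abstract).

```python
from math import sqrt

noec, loec = 68.0, 136.0          # mg/L, 30-day chronic endpoints for growth
matc = sqrt(noec * loec)          # geometric mean -> ~96 mg/L

alc50 = 700.0                     # mg/L, previously published 96-h acute value
acr = alc50 / matc                # acute:chronic ratio -> ~7.3

eec_tier3 = 0.073                 # mg/L, Tier 3 GLEAMS exposure estimate
safety_factor = matc / eec_tier3  # >1000, consistent with the stated conclusion

print(f"MATC = {matc:.0f} mg/L, ACR = {acr:.1f}, safety factor = {safety_factor:.0f}")
```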

  2. Flash floods of August 10, 2009, in the Villages of Gowanda and Silver Creek, New York

    USGS Publications Warehouse

    Szabo, Carolyn O.; Coon, William F.; Niziol, Thomas A.

    2011-01-01

    Late during the night of August 9, 2009, two storm systems intersected over western New York and produced torrential rain that caused severe flash flooding during the early morning hours of August 10 in parts of Cattaraugus, Chautauqua, and Erie Counties. Nearly 6 inches of rain fell in 1.5 hours as recorded by a National Weather Service weather observer in Perrysburg, which lies between Gowanda and Silver Creek, the communities that suffered the most damage. This storm intensity had an annual exceedance probability of less than 0.2 percent (recurrence interval greater than 500 years). Although flooding along Cattaraugus Creek occurred elsewhere, Cattaraugus Creek was responsible for very little flooding in Gowanda. Rather, the small tributaries, Thatcher Brook and Grannis Brook, caused the flooding in Gowanda, as did Silver Creek and Walnut Creek in the Village of Silver Creek. Damages from the flooding were widespread. Numerous road culverts were washed out, and more than one-quarter of the roads in Cattaraugus County were damaged. Many people were evacuated or rescued in Gowanda and Silver Creek, and two deaths occurred during the flood in Gowanda. The water supplies of both communities were compromised by damages to village reservoirs and water-transmission infrastructures. Water and mud damage to residential and commercial properties was extensive. The tri-county area was declared a Federal disaster area and more than $45 million in Federal disaster assistance was distributed to more than 1,500 individuals and an estimated 1,100 public projects. The combined total estimate of damages from the flash floods was greater than $90 million. Over 240 high-water marks were surveyed by the U.S. Geological Survey; a subset of these marks was used to create flood-water-surface profiles for four streams and to delineate the areal extent of flooding in Gowanda and Silver Creek. Flood elevations exceeded previously defined 0.2-percent annual exceedance probability (500-year recurrence interval) elevations by 2 to 4 feet in Gowanda and as much as 6 to 8 feet in Silver Creek. Most of the high-water marks were used in indirect hydraulic computations to estimate peak flows for four streams. The peak flows in Grannis Brook and Thatcher Brook were computed, using the slope-area method, to be 1,400 and 7,600 cubic feet per second, respectively, and peak flow in Silver Creek was computed, using the width-contraction method, to be 19,500 cubic feet per second. The annual exceedance probabilities for flows in these and other basins with small drainage areas that fell almost entirely within the area of heaviest precipitation were less than 0.2 percent (or recurrence intervals greater than 500 years). The peak flow in Cattaraugus Creek at Gowanda was computed, using the slope-area method, to be 33,200 cubic feet per second with an annual exceedance probability of 2.2 percent (recurrence interval of 45 years).

  3. Do Indonesian Children's Experiences with Large Currency Units Facilitate Magnitude Estimation of Long Temporal Periods?

    NASA Astrophysics Data System (ADS)

    Cheek, Kim A.

    2017-08-01

    Ideas about temporal (and spatial) scale impact students' understanding across science disciplines. Learners have difficulty comprehending the long time periods associated with natural processes because they have no referent for the magnitudes involved. When people have a good "feel" for quantity, they estimate cardinal number magnitude linearly. Magnitude estimation errors can be explained by confusion about the structure of the decimal number system, particularly in terms of how powers of ten are related to one another. Indonesian children regularly use large currency units. This study investigated if they estimate long time periods accurately and if they estimate those time periods the same way they estimate analogous currency units. Thirty-nine children from a private International Baccalaureate school estimated temporal magnitudes up to 10,000,000,000 years in a two-part study. Artifacts children created were compared to theoretical model predictions previously used in number magnitude estimation studies as reported by Landy et al. (Cognitive Science 37:775-799, 2013). Over one third estimated the magnitude of time periods up to 10,000,000,000 years linearly, exceeding what would be expected based upon prior research with children this age who lack daily experience with large quantities. About half treated successive powers of ten as a count sequence instead of multiplicatively related when estimating magnitudes of time periods. Children generally estimated the magnitudes of long time periods and familiar, analogous currency units the same way. Implications for ways to improve the teaching and learning of this crosscutting concept/overarching idea are discussed.

  4. High-resolution meiotic and physical mapping of the Best vitelliform macular dystrophy (VMD2) locus to pericentromeric chromosome 11

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Weber, B.H.F.; Vogt, G.; Stoehr, H.

    1994-12-01

    Best vitelliform macular dystrophy (VMD2) has previously been linked to several microsatellite markers from chromosome 11. Subsequently, additional genetic studies have refined the Best disease region to a 3.7-cM interval flanked by markers at D11S903 and PYGM. To further narrow the interval containing the Best disease gene and to obtain an estimate of the physical size of the minimal candidate region, we used a combination of high-resolution PCR hybrid mapping and analysis of recombinant Best disease chromosomes. We identified six markers from within the D11S903-PYGM interval that show no recombination with the defective gene in three multigeneration Best disease pedigrees. Our hybrid panel localizes these markers on either side of the centromere on chromosome 11. The closest markers flanking the disease gene are at D11S986 in band p12-11.22 on the short arm and at D11S480 in band q13.2-13.3 on the proximal long arm. This study demonstrates that the physical size of the Best disease region is considerably larger than previously estimated from the genetic data, because of the proximity of the defective gene to the centromere of chromosome 11.

  5. Sibling recurrence and the genetic epidemiology of autism

    PubMed Central

    Constantino, John N.; Zhang, Yi; Frazier, Thomas; Abbacchi, Anna M.; Law, Paul

    2010-01-01

    Objective Although the symptoms of autism exhibit quantitative distributions in nature, estimates of recurrence risk in families have never previously considered or incorporated quantitative characterization of the autistic phenotype among siblings. Method We report the results of quantitative characterization of 2,920 children from 1,235 families participating in a national volunteer register who met the criteria of having at least one child clinically-affected by an autism spectrum disorder (ASD) and at least one full biological sibling. Results The occurrence of a traditionally-defined ASD in an additional child occurred in 10.9% of the families. An additional 20% of non-ASD-affected siblings had a history of language delay, half of whom had exhibited autistic qualities of speech. Quantitative characterization using the Social Responsiveness Scale (SRS) supported previously-reported aggregation of a wide range of subclinical (quantitative) autistic traits among otherwise unaffected children in multiple-incidence families, and a relative absence of quantitative autistic traits among siblings in single-incidence autism families. Girls whose standardized severity ratings fell above a first percentile severity threshold (relative to the general population distribution) were significantly less likely to have elicited community diagnoses than their male counterparts. Conclusions These data suggest that, depending on how it is defined, sibling recurrence in ASD may exceed previously-published estimates, and varies as a function of family type. The results support differences in mechanisms of genetic transmission between simplex and multiplex autism, and advance current understanding of the genetic epidemiology of autism. PMID:20889652

  6. Towards a new paleotemperature proxy from reef coral occurrences.

    PubMed

    Lauchstedt, Andreas; Pandolfi, John M; Kiessling, Wolfgang

    2017-09-05

    Global mean temperature is thought to have exceeded that of today during the last interglacial episode (LIG, ~ 125,000 yrs b.p.) but robust paleoclimate data are still rare in low latitudes. Occurrence data of tropical reef corals may provide new proxies of low latitude sea-surface temperatures. Using modern reef coral distributions we developed a geographically explicit model of sea surface temperatures. Applying this model to coral occurrence data of the LIG provides a latitudinal U-shaped pattern of temperature anomalies with cooler than modern temperatures around the equator and warmer subtropical climes. Our results agree with previously published estimates of LIG temperatures and suggest a poleward broadening of the habitable zone for reef corals during the LIG.

  7. Summary of U.S. Geological Survey reports documenting flood profiles of streams in Iowa, 1963-2012

    USGS Publications Warehouse

    Eash, David A.

    2014-01-01

    This report is part of an ongoing program that is publishing flood profiles of streams in Iowa. The program is managed by the U.S. Geological Survey in cooperation with the Iowa Department of Transportation and the Iowa Highway Research Board (Project HR-140). Information from flood profiles is used by engineers to analyze and design bridges, culverts, and roadways. This report summarizes 47 U.S. Geological Survey flood-profile reports that were published for streams in Iowa during a 50-year period from 1963 to 2012. Flood events profiled in the reports range from 1903 to 2010. Streams in Iowa that have been selected for the preparation of flood-profile reports typically have drainage areas of 100 square miles or greater, and the documented flood events have annual exceedance probabilities of less than 2 to 4 percent. This report summarizes flood-profile measurements, changes in flood-profile report content throughout the years, streams that were profiled in the reports, the occurrence of flood events profiled, and annual exceedance-probability estimates of observed flood events. To develop flood profiles for selected flood events for selected stream reaches, the U.S. Geological Survey measured high-water marks and river miles at selected locations. A total of 94 stream reaches have been profiled in U.S. Geological Survey flood-profile reports. Three rivers in Iowa have been profiled along the same stream reach for five different flood events and six rivers in Iowa have been profiled along the same stream reach for four different flood events. Floods were profiled for June flood events for 18 different years, followed by July flood events for 13 years, May flood events for 11 years, and April flood events for 9 years. Most of the flood-profile reports include estimates of annual exceedance probabilities of observed flood events at streamgages located along profiled stream reaches. Comparisons of 179 historic and updated annual exceedance-probability estimates indicate few differences that are considered substantial between the historic and updated estimates for the observed flood events. Overall, precise comparisons for 114 observed flood events indicate that updated annual exceedance probabilities have increased for most of the observed flood events compared to the historic annual exceedance probabilities. Multiple large flood events exceeding the 2-percent annual exceedance-probability discharge estimate occurred at 37 of 98 selected streamgages during 1960–2012. Five large flood events were recorded at two streamgages in Ames during 1990–2010 and four large flood events were recorded at four other streamgages during 1973–2010. Results of Kendall’s tau trend-analysis tests for 35 of 37 selected streamgages indicate that a statistically significant trend is not evident for the 1963–2012 period of record; nor is an overall clear positive or negative trend evident for the 37 streamgages.
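    The trend screening reported for the streamgage records is the standard Kendall's tau test on annual peak-flow series. A minimal sketch with a synthetic series is shown below; scipy's implementation is assumed here as a stand-in for the software the USGS actually used.

```python
import numpy as np
from scipy.stats import kendalltau

rng = np.random.default_rng(42)
years = np.arange(1963, 2013)                                 # 1963-2012 period of record
peaks = rng.lognormal(mean=9.0, sigma=0.5, size=years.size)   # synthetic annual peaks, cfs

tau, p_value = kendalltau(years, peaks)
print(f"tau = {tau:.2f}, p = {p_value:.2f}")
if p_value >= 0.05:
    print("No statistically significant monotonic trend at the 5% level.")
```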

  8. Observation of the Kaiser Effect Using Noble Gas Release Signals

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bauer, Stephen J.

    The Kaiser effect was defined in the early 1950s (Kaiser 1953) and was extensively reviewed and evaluated by Lavrov (2002) with a view toward understanding stress estimations. The Kaiser effect is a stress memory phenomenon which has most often been demonstrated in rock using acoustic emissions. During cyclic loading–unloading–reloading, the acoustic emissions are near zero until the load exceeds the level of the previous load cycle. Here, we sought to explore the Kaiser effect in rock using real-time noble gas release. Laboratory studies using real-time mass spectrometry measurements during deformation have quantified, to a degree, the types of gases released (Bauer et al. 2016a, b), their release rates and amounts during deformation, estimates of permeability created from pore structure modifications during deformation (Gardner et al. 2017) and the impact of mineral plasticity upon gas release. We found that noble gases contained in brittle crystalline rock are readily released during deformation.

  9. Observation of the Kaiser Effect Using Noble Gas Release Signals

    DOE PAGES

    Bauer, Stephen J.

    2017-10-24

    The Kaiser effect was defined in the early 1950s (Kaiser 1953) and was extensively reviewed and evaluated by Lavrov (2002) with a view toward understanding stress estimations. The Kaiser effect is a stress memory phenomenon which has most often been demonstrated in rock using acoustic emissions. During cyclic loading–unloading–reloading, the acoustic emissions are near zero until the load exceeds the level of the previous load cycle. Here, we sought to explore the Kaiser effect in rock using real-time noble gas release. Laboratory studies using real-time mass spectrometry measurements during deformation have quantified, to a degree, the types of gases released (Bauer et al. 2016a, b), their release rates and amounts during deformation, estimates of permeability created from pore structure modifications during deformation (Gardner et al. 2017) and the impact of mineral plasticity upon gas release. We found that noble gases contained in brittle crystalline rock are readily released during deformation.

  10. Systematics of stretching of fluid inclusions I: fluorite and sphalerite at 1 atmosphere confining pressure.

    USGS Publications Warehouse

    Bodnar, R.J.; Bethke, P.M.

    1984-01-01

    Measured homogenization T of fluid inclusions in fluorite and sphalerite may be higher than the true homogenization T of samples that have been previously heated in the laboratory or naturally in post-entrapment events. As T, and with it internal P, is increased, the resulting volume increase may become inelastic. If the volume increase exceeds the precision of T measurement, the inclusion is said to have stretched. More than 1300 measurements on fluid inclusions in fluorite and sphalerite indicate that stretching is systematically related to P-V-T-X properties of the fluid, inclusion size and shape, physical properties of the host mineral, and the confining P. Experimental methods are detailed in an appendix. The mechanism of stretching is probably plastic deformation or, though not observed, microfracturing. The systematic relationship between the internal P necessary to initiate stretching and the inclusion volume provides a means of recognizing previously stretched inclusions and estimating the magnitude of post-entrapment thermal events. -G.J.N.

  11. Dietary exposure to benzoates (E210-E213), parabens (E214-E219), nitrites (E249-E250), nitrates (E251-E252), BHA (E320), BHT (E321) and aspartame (E951) in children less than 3 years old in France.

    PubMed

    Mancini, F R; Paul, D; Gauvreau, J; Volatier, J L; Vin, K; Hulin, M

    2015-01-01

    This study aimed to estimate the exposure to seven additives (benzoates, parabens, nitrites, nitrates, BHA, BHT and aspartame) in children aged less than 3 years in France. A conservative approach, combining individual consumption data with maximum permitted levels, was carried out for all the additives. More refined estimates using occurrence data obtained from products' labels (collected by the French Observatory of Food Quality) were conducted for those additives that exceeded the acceptable daily intake (ADI). Information on additives' occurrence was obtained from the food labels. When the ADI was still exceeded, the exposure estimate was further refined using measured concentration data, if available. When using the maximum permitted level (MPL), the ADI was exceeded for benzoates (1.94 mg/kg bw/day), nitrites (0.09 mg/kg bw/day) and BHA (0.39 mg/kg bw/day) in 25%, 54% and 20% of the entire study population respectively. The main food contributors identified with this approach were current foods as these additives are not authorised in specific infant food: vegetable soups and broths for both benzoates and BHA, delicatessen and meat for nitrites. The exposure estimate was significantly reduced when using occurrence data, but in the upper-bound scenario the ADI was still exceeded significantly by the age group 13-36 months for benzoates (2%) and BHA (1%), and by the age group 7-12 months (16%) and 13-36 months (58%) for nitrites. Measured concentration data were available exclusively for nitrites and the results obtained using these data showed that the nitrites' intake was below the ADI for all the population considered in this study. These results suggest that refinement of exposure, based on the assessment of food levels, is needed to estimate the exposure of children to BHA and benzoates for which the risk of exceeding the ADI cannot be excluded when using occurrence data.
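    The conservative first tier of such an assessment combines individual consumption records with maximum permitted levels and compares the summed intake with the ADI. A stripped-down sketch of that tier is below; the food items, MPLs, body weight, and the ADI value used are illustrative assumptions, not the survey's figures.

```python
# Hypothetical daily diet for one child: grams eaten and the benzoate MPL (mg/kg food).
diet = [
    ("vegetable soup", 150.0, 500.0),   # MPL of 500 mg/kg -- assumed for illustration
    ("fruit dessert", 100.0, 150.0),    # MPL of 150 mg/kg -- assumed for illustration
]
body_weight_kg = 12.0                   # assumed weight of a 13-36-month-old child

intake = sum(grams / 1000.0 * mpl for _, grams, mpl in diet) / body_weight_kg
adi = 5.0                               # mg/kg bw/day -- ADI value used for illustration
print(f"Upper-bound benzoate intake: {intake:.1f} mg/kg bw/day")
print("Exceeds ADI" if intake > adi else "Below ADI")
```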

  12. PROUCL 4.0 SOFTWARE

    EPA Science Inventory

    Statistical inference, including both estimation and hypotheses testing approaches, is routinely used to: estimate environmental parameters of interest, such as exposure point concentration (EPC) terms, not-to-exceed values, and background level threshold values (BTVs) for contam...

  13. Technique for estimation of streamflow statistics in mineral areas of interest in Afghanistan

    USGS Publications Warehouse

    Olson, Scott A.; Mack, Thomas J.

    2011-01-01

    A technique for estimating streamflow statistics at ungaged stream sites in areas of mineral interest in Afghanistan using drainage-area-ratio relations of historical streamflow data was developed and is documented in this report. The technique can be used to estimate the following streamflow statistics at ungaged sites: (1) 7-day low flow with a 10-year recurrence interval, (2) 7-day low flow with a 2-year recurrence interval, (3) daily mean streamflow exceeded 90 percent of the time, (4) daily mean streamflow exceeded 80 percent of the time, (5) mean monthly streamflow for each month of the year, (6) mean annual streamflow, and (7) minimum monthly streamflow for each month of the year. Because they are based on limited historical data, the estimates of streamflow statistics at ungaged sites are considered preliminary.
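    The drainage-area-ratio technique referenced here scales a statistic from a gaged basin by the ratio of drainage areas; in its simplest form the exponent on the ratio is 1, although regional studies often fit a different value. A generic sketch follows, with all numbers hypothetical.

```python
def drainage_area_ratio(q_gaged, area_gaged_km2, area_ungaged_km2, exponent=1.0):
    """Transfer a streamflow statistic from a gaged site to an ungaged site."""
    return q_gaged * (area_ungaged_km2 / area_gaged_km2) ** exponent

# Hypothetical: a 7-day, 10-year low flow of 1.2 m^3/s at an 850 km^2 gaged basin,
# transferred to a 310 km^2 ungaged basin in an area of mineral interest.
print(f"{drainage_area_ratio(1.2, 850.0, 310.0):.2f} m^3/s")
```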

  14. An operational system of fire danger rating over Mediterranean Europe

    NASA Astrophysics Data System (ADS)

    Pinto, Miguel M.; DaCamara, Carlos C.; Trigo, Isabel F.; Trigo, Ricardo M.

    2017-04-01

    A methodology is presented to assess fire danger based on the probability of exceedance of prescribed thresholds of daily released energy. The procedure is developed and tested over Mediterranean Europe, defined by latitude circles of 35 and 45°N and meridians of 10°W and 27.5°E, for the period 2010-2016. The procedure involves estimating the so-called static and daily probabilities of exceedance. For a given point, the static probability is estimated by the ratio of the number of daily fire occurrences releasing energy above a given threshold to the total number of occurrences inside a cell centred at the point. The daily probability of exceedance, which takes meteorological factors into account by means of the Canadian Fire Weather Index (FWI), is in turn estimated based on a Generalized Pareto distribution with static probability and FWI as covariates of the scale parameter. The rationale of the procedure is that small fires, assessed by the static probability, have a weak dependence on weather, whereas larger fires strongly depend on concurrent meteorological conditions. It is shown that observed frequencies of exceedance over the study area for the period 2010-2016 match the estimated probabilities based on the developed models for static and daily probabilities of exceedance. Some (small) variability is, however, found between years, suggesting that refinements can be made in future work by using a larger sample to further increase the robustness of the method. The developed methodology presents the advantage of evaluating fire danger with the same criteria across the whole study area, making it a good basis for harmonizing fire danger forecasts and forest management studies. Research was performed within the framework of the EUMETSAT Satellite Application Facility for Land Surface Analysis (LSA SAF). Part of the methods developed and results obtained underpin a platform supported by The Navigator Company that currently provides fire meteorological danger information for Portugal to a wide range of users.
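    One way to read the model described here: the probability that released energy exceeds a threshold on a given day follows a Generalized Pareto tail whose scale parameter is driven by the static probability and FWI. The sketch below encodes that structure only; the log-link and coefficients are assumptions for illustration, not the fitted LSA SAF model.

```python
import math

def daily_prob_exceedance(fwi, p_static, excess_energy,
                          shape=0.4, b0=1.0, b1=0.05, b2=2.0):
    """Daily probability that released fire energy exceeds the threshold by `excess_energy`.

    Following the abstract, the GPD scale parameter takes the static probability and
    FWI as covariates; a log-link sigma = exp(b0 + b1*FWI + b2*p_static) is assumed,
    with illustrative coefficients rather than fitted values.
    """
    sigma = math.exp(b0 + b1 * fwi + b2 * p_static)
    return (1.0 + shape * excess_energy / sigma) ** (-1.0 / shape)

for fwi in (10, 30, 50):
    print(fwi, round(daily_prob_exceedance(fwi, p_static=0.02, excess_energy=50.0), 4))
```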

  15. Updated techniques for estimating monthly streamflow-duration characteristics at ungaged and partial-record sites in central Nevada

    USGS Publications Warehouse

    Hess, Glen W.

    2002-01-01

    Techniques for estimating monthly streamflow-duration characteristics at ungaged and partial-record sites in central Nevada have been updated. These techniques were developed using streamflow records at six continuous-record sites, basin physical and climatic characteristics, and concurrent streamflow measurements at four partial-record sites. Two methods, the basin-characteristic method and the concurrent-measurement method, were developed to provide estimating techniques for selected streamflow characteristics at ungaged and partial-record sites in central Nevada. In the first method, logarithmic-regression analyses were used to relate monthly mean streamflows (from all months and by month) from continuous-record gaging sites of various percent exceedence levels or monthly mean streamflows (by month) to selected basin physical and climatic variables at ungaged sites. Analyses indicate that the total drainage area and percent of drainage area at altitudes greater than 10,000 feet are the most significant variables. For the equations developed from all months of monthly mean streamflow, the coefficient of determination averaged 0.84 and the standard error of estimate of the relations for the ungaged sites averaged 72 percent. For the equations derived from monthly means by month, the coefficient of determination averaged 0.72 and the standard error of estimate of the relations averaged 78 percent. If standard errors are compared, the relations developed in this study appear generally to be less accurate than those developed in a previous study. However, the new relations are based on additional data and the slight increase in error may be due to the wider range of streamflow for a longer period of record, 1995-2000. In the second method, streamflow measurements at partial-record sites were correlated with concurrent streamflows at nearby gaged sites by the use of linear-regression techniques. Statistical measures of results using the second method typically indicated greater accuracy than for the first method. However, to make estimates for individual months, the concurrent-measurement method requires several years additional streamflow data at more partial-record sites. Thus, exceedence values for individual months are not yet available due to the low number of concurrent-streamflow-measurement data available. Reliability, limitations, and applications of both estimating methods are described herein.
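    The basin-characteristic method amounts to an ordinary least-squares fit in log space, relating monthly mean flow to drainage area and the fraction of the basin above 10,000 feet. A generic sketch with synthetic calibration data is shown below; the coefficients it recovers are not those of the Nevada study.

```python
import numpy as np

# Synthetic calibration data for illustration: drainage area (mi^2),
# percent of basin above 10,000 ft, and monthly mean flow (cfs).
area = np.array([12.0, 45.0, 80.0, 150.0, 210.0, 320.0])
pct_high = np.array([5.0, 12.0, 8.0, 20.0, 15.0, 25.0])
q_monthly = np.array([1.8, 9.5, 12.0, 40.0, 38.0, 85.0])

# log Q = b0 + b1*log(A) + b2*log(pct_high + 1)
X = np.column_stack([np.ones_like(area), np.log10(area), np.log10(pct_high + 1.0)])
b, *_ = np.linalg.lstsq(X, np.log10(q_monthly), rcond=None)

def predict_q(area_mi2, pct_above_10k):
    """Estimate monthly mean flow (cfs) at an ungaged basin from the fitted relation."""
    return 10 ** (b[0] + b[1] * np.log10(area_mi2) + b[2] * np.log10(pct_above_10k + 1.0))

print(predict_q(100.0, 10.0))   # estimate at a hypothetical ungaged basin
```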

  16. The Dynamics and Neural Correlates of Audio-Visual Integration Capacity as Determined by Temporal Unpredictability, Proactive Interference, and SOA.

    PubMed

    Wilbiks, Jonathan M P; Dyson, Benjamin J

    2016-01-01

    Over 5 experiments, we challenge the idea that the capacity of audio-visual integration need be fixed at 1 item. We observe that the conditions under which audio-visual integration is most likely to exceed 1 occur when stimulus change operates at a slow rather than fast rate of presentation and when the task is of intermediate difficulty such as when low levels of proactive interference (3 rather than 8 interfering visual presentations) are combined with the temporal unpredictability of the critical frame (Experiment 2), or, high levels of proactive interference are combined with the temporal predictability of the critical frame (Experiment 4). Neural data suggest that capacity might also be determined by the quality of perceptual information entering working memory. Experiment 5 supported the proposition that audio-visual integration was at play during the previous experiments. The data are consistent with the dynamic nature usually associated with cross-modal binding, and while audio-visual integration capacity likely cannot exceed uni-modal capacity estimates, performance may be better than being able to associate only one visual stimulus with one auditory stimulus.

  17. The Dynamics and Neural Correlates of Audio-Visual Integration Capacity as Determined by Temporal Unpredictability, Proactive Interference, and SOA

    PubMed Central

    Wilbiks, Jonathan M. P.; Dyson, Benjamin J.

    2016-01-01

    Over 5 experiments, we challenge the idea that the capacity of audio-visual integration need be fixed at 1 item. We observe that the conditions under which audio-visual integration is most likely to exceed 1 occur when stimulus change operates at a slow rather than fast rate of presentation and when the task is of intermediate difficulty such as when low levels of proactive interference (3 rather than 8 interfering visual presentations) are combined with the temporal unpredictability of the critical frame (Experiment 2), or, high levels of proactive interference are combined with the temporal predictability of the critical frame (Experiment 4). Neural data suggest that capacity might also be determined by the quality of perceptual information entering working memory. Experiment 5 supported the proposition that audio-visual integration was at play during the previous experiments. The data are consistent with the dynamic nature usually associated with cross-modal binding, and while audio-visual integration capacity likely cannot exceed uni-modal capacity estimates, performance may be better than being able to associate only one visual stimulus with one auditory stimulus. PMID:27977790

  18. Breaking the speed limit--comparative sprinting performance of brook trout (Salvelinus fontinalis) and brown trout (Salmo trutta)

    USGS Publications Warehouse

    Castro-Santos, Theodore; Sanz-Ronda, Francisco Javier; Ruiz-Legazpi, Jorge

    2013-01-01

    Sprinting behavior of free-ranging fish has long been thought to exceed that of captive fish. Here we present data from wild-caught brook trout (Salvelinus fontinalis) and brown trout (Salmo trutta), volitionally entering and sprinting against high-velocity flows in an open-channel flume. Performance of the two species was nearly identical, with the species attaining absolute speeds > 25 body lengths·s⁻¹. These speeds far exceed previously published observations for any salmonid species and contribute to the mounting evidence that commonly accepted estimates of swimming performance are low. Brook trout demonstrated two distinct modes in the relationship between swim speed and fatigue time, similar to the shift from prolonged to sprint mode described by other authors, but in this case occurring at speeds > 19 body lengths·s⁻¹. This is the first demonstration of multiple modes of sprint swimming at such high swim speeds. Neither species optimized for distance maximization, however, indicating that physiological limits alone are poor predictors of swimming performance. By combining distributions of volitional swim speeds with endurance, we were able to account for >80% of the variation in distance traversed by both species.

  19. Amino acid production exceeds plant nitrogen demand in Siberian tundra

    NASA Astrophysics Data System (ADS)

    Wild, Birgit; Eloy Alves, Ricardo J.; Bárta, Jiři; Čapek, Petr; Gentsch, Norman; Guggenberger, Georg; Hugelius, Gustaf; Knoltsch, Anna; Kuhry, Peter; Lashchinskiy, Nikolay; Mikutta, Robert; Palmtag, Juri; Prommer, Judith; Schnecker, Jörg; Shibistova, Olga; Takriti, Mounir; Urich, Tim; Richter, Andreas

    2018-03-01

    Arctic plant productivity is often limited by low soil N availability. This has been attributed to slow breakdown of N-containing polymers in litter and soil organic matter (SOM) into smaller, available units, and to shallow plant rooting constrained by permafrost and high soil moisture. Using 15N pool dilution assays, we here quantified gross amino acid and ammonium production rates in 97 active layer samples from four sites across the Siberian Arctic. We found that amino acid production in organic layers alone exceeded literature-based estimates of maximum plant N uptake 17-fold and therefore reject the hypothesis that arctic plant N limitation results from slow SOM breakdown. High microbial N use efficiency in organic layers rather suggests strong competition of microorganisms and plants in the dominant rooting zone. Deeper horizons showed lower amino acid production rates per volume, but also lower microbial N use efficiency. Permafrost thaw together with soil drainage might facilitate deeper plant rooting and uptake of previously inaccessible subsoil N, and thereby promote plant productivity in arctic ecosystems. We conclude that changes in microbial decomposer activity, microbial N utilization and plant root density with soil depth interactively control N availability for plants in the Arctic.

  20. Flood on the Virgin River, January 1989, in Utah, Arizona, and Nevada

    USGS Publications Warehouse

    Carlson, D.D.; Meyer, D.F.

    1995-01-01

    The impoundment of water in Quail Creek Reservoir in Utah began in April 1985. The drainage area for the reservoir is 78.4 square miles, including Quail Creek and Leeds Creek watersheds. Water also is diverted from the Virgin River above Hurricane, Utah, to supplement the filling of the reservoir. A dike, which is one of the structures impounding water in Quail Creek Reservoir, failed on January 1, 1989. This failure resulted in the release of about 25,000 acre-feet of water into the Virgin River near Hurricane, Utah. Flooding occurred along the Virgin River flood plain in Utah, Arizona, and Nevada. The previous maximum discharge of record was exceeded at three U.S. Geological Survey streamflow-gaging stations, and the flood discharges exceeded the theoretical 100-year flood discharges. Peak discharge estimates ranged from 60,000 to 66,000 cubic feet per second at the three streamflow-gaging stations. Damage to roads, bridges, agricultural land, livestock, irrigation structures, businesses, and residences totaled more than $12 million. The greatest damage was to agricultural and public-works facilities. Washington County, which is in southwestern Utah, was declared a disaster area by President George Bush.

  1. Health risk assessment of hazardous metals for population via consumption of seafood from Ogoniland, Rivers State, Nigeria; a case study of Kaa, B-Dere, and Bodo City.

    PubMed

    Nkpaa, K W; Patrick-Iwuanyanwu, K C; Wegwu, M O; Essien, E B

    2016-01-01

    This study was designed to investigate the human health risk through consumption of seafood from contaminated sites in Kaa, B-Dere, and Bodo City, all in Ogoniland. The potential non-carcinogenic health risks for consumers were investigated by assessing the estimated daily intake and target hazard quotients for Cr, Cd, Zn, Pb, Mn, and Fe, while the carcinogenic health effect from Cr, Cd, and Pb was also estimated. The estimated daily intakes from seafood consumption were below the threshold values for Cr, Mn, and Zn, while they exceeded the thresholds for Cd, Pb, and Fe. The target hazard quotients for Zn and Cr were below 1. Target hazard quotient values for Cd, Pb, Mn, and Fe were greater than 1 except for the Fe level in Liza falcipinis from Kaa. Furthermore, estimation of carcinogenic risk for Cr in all samples under study exceeded the accepted risk level of 10E-4. Also, the Cd carcinogenic risk level for L. falcipinis and Callinectes pallidus collected from B-Dere and C. pallidus collected from Bodo City was 1.1E-3, which also exceeded the accepted risk level of 10E-4 for Cd. Estimation of carcinogenic risk for Pb was within the acceptable range of 10E-4. Consumers of seafood from these sites in Ogoniland may be exposed to metal pollution.
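
    As a rough illustration of the screening calculations described in this abstract (estimated daily intake, target hazard quotient, and carcinogenic risk), the following minimal Python sketch applies the standard formulas; the concentration, intake, body-weight, reference-dose, and slope-factor values are hypothetical placeholders, not the study's data.

      def estimated_daily_intake(conc_mg_per_kg, intake_kg_per_day, body_weight_kg):
          # EDI in mg per kg body weight per day from a metal concentration in seafood
          return conc_mg_per_kg * intake_kg_per_day / body_weight_kg

      def target_hazard_quotient(edi, oral_reference_dose):
          # THQ > 1 flags a potential non-carcinogenic health risk
          return edi / oral_reference_dose

      def carcinogenic_risk(edi, cancer_slope_factor):
          # Lifetime cancer risk; values above 1E-4 exceed the accepted risk level
          return edi * cancer_slope_factor

      # Hypothetical example for one metal in one seafood species
      edi = estimated_daily_intake(conc_mg_per_kg=0.15, intake_kg_per_day=0.05, body_weight_kg=60.0)
      print(target_hazard_quotient(edi, oral_reference_dose=1e-3))
      print(carcinogenic_risk(edi, cancer_slope_factor=6.3))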

  2. 48 CFR 236.273 - Construction in foreign countries.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... estimated to exceed $1,000,000 and are to be performed in the United States outlying area in the Pacific and on Kwajalein Atoll, or in countries bordering the Arabian Gulf, shall be awarded only to United States firms, unless— (1) The lowest responsive and responsible offer of a United States firm exceeds the...

  3. 48 CFR 236.273 - Construction in foreign countries.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... estimated to exceed $1,000,000 and are to be performed in the United States outlying area in the Pacific and on Kwajalein Atoll, or in countries bordering the Arabian Gulf, shall be awarded only to United States firms, unless— (1) The lowest responsive and responsible offer of a United States firm exceeds the...

  4. 50 CFR 648.122 - Scup specifications.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... conjunction with each ACL and ACT, a sector specific research set-aside, estimates of sector-related discards... ensure the sector-specific ACL for an upcoming fishing year or years will not be exceeded. The measures... recreational measures are necessary to ensure that the sector ACL will not be exceeded, he or she will publish...

  5. 50 CFR 648.122 - Scup specifications.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... conjunction with each ACL and ACT, a sector specific research set-aside, estimates of sector-related discards... ensure the sector-specific ACL for an upcoming fishing year or years will not be exceeded. The measures... recreational measures are necessary to ensure that the sector ACL will not be exceeded, he or she will publish...

  6. 50 CFR 648.122 - Scup specifications.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... conjunction with each ACL and ACT, a sector specific research set-aside, estimates of sector-related discards... ensure the sector-specific ACL for an upcoming fishing year or years will not be exceeded. The measures... recreational measures are necessary to ensure that the sector ACL will not be exceeded, he or she will publish...

  7. A preview of Delaware's timber resource

    Treesearch

    Joseph E. Barnard; Teresa M. Bowers

    1973-01-01

    The recently completed forest survey of Delaware indicated little change in the total forest area since the 1957 estimate. Softwood volume and the acreage of softwood types decreased considerably. Hardwoods now comprise two-thirds of the volume and three-fourths of the forest area. Total average annual growth exceeded removals, but softwood removals exceeded average...

  8. Raising the speed limit from 75 to 80mph on Utah rural interstates: Effects on vehicle speeds and speed variance.

    PubMed

    Hu, Wen

    2017-06-01

    In November 2010 and October 2013, Utah increased speed limits on sections of rural interstates from 75 to 80mph. Effects on vehicle speeds and speed variance were examined. Speeds were measured in May 2010 and May 2014 within the new 80mph zones, and at a nearby spillover site and at more distant control sites where speed limits remained 75mph. Log-linear regression models estimated percentage changes in speed variance and mean speeds for passenger vehicles and large trucks associated with the speed limit increase. Logistic regression models estimated effects on the probability of passenger vehicles exceeding 80, 85, or 90mph and large trucks exceeding 80mph. Within the 80mph zones and at the spillover location in 2014, mean passenger vehicle speeds were significantly higher (4.1% and 3.5%, respectively), as were the probabilities that passenger vehicles exceeded 80mph (122.3% and 88.5%, respectively), than would have been expected without the speed limit increase. Probabilities that passenger vehicles exceeded 85 and 90mph were non-significantly higher than expected within the 80mph zones. For large trucks, the mean speed and probability of exceeding 80mph were higher than expected within the 80mph zones. Only the increase in mean speed was significant. Raising the speed limit was associated with non-significant increases in speed variance. The study adds to the wealth of evidence that increasing speed limits leads to higher travel speeds and an increased probability of exceeding the new speed limit. Results moreover contradict the claim that increasing speed limits reduces speed variance. Although the estimated increases in mean vehicle speeds may appear modest, prior research suggests such increases would be associated with substantial increases in fatal or injury crashes. This should be considered by lawmakers considering increasing speed limits. Copyright © 2017 Elsevier Ltd and National Safety Council. All rights reserved.

  9. Evaluation of atmospheric nitrogen deposition model performance in the context of U.S. critical load assessments

    NASA Astrophysics Data System (ADS)

    Williams, Jason J.; Chung, Serena H.; Johansen, Anne M.; Lamb, Brian K.; Vaughan, Joseph K.; Beutel, Marc

    2017-02-01

    Air quality models are widely used to estimate pollutant deposition rates and thereby calculate critical loads and critical load exceedances (model deposition > critical load). However, model operational performance is not always quantified specifically to inform these applications. We developed a performance assessment approach designed to inform critical load and exceedance calculations, and applied it to the Pacific Northwest region of the U.S. We quantified wet inorganic N deposition performance of several widely-used air quality models, including five different Community Multiscale Air Quality Model (CMAQ) simulations, the Tdep model, and 'PRISM x NTN' model. Modeled wet inorganic N deposition estimates were compared to wet inorganic N deposition measurements at 16 National Trends Network (NTN) monitoring sites, and to annual bulk inorganic N deposition measurements at Mount Rainier National Park. Model bias (model - observed) and error (|model - observed|) were expressed as a percentage of regional critical load values for diatoms and lichens. This novel approach demonstrated that wet inorganic N deposition bias in the Pacific Northwest approached or exceeded 100% of regional diatom and lichen critical load values at several individual monitoring sites, and approached or exceeded 50% of critical loads when averaged regionally. Even models that adjusted deposition estimates based on deposition measurements to reduce bias or that spatially-interpolated measurement data, had bias that approached or exceeded critical loads at some locations. While wet inorganic N deposition model bias is only one source of uncertainty that can affect critical load and exceedance calculations, results demonstrate expressing bias as a percentage of critical loads at a spatial scale consistent with calculations may be a useful exercise for those performing calculations. It may help decide if model performance is adequate for a particular calculation, help assess confidence in calculation results, and highlight cases where a non-deterministic approach may be needed.
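
    The bias metric proposed above reduces to a simple calculation; the deposition and critical-load numbers in this small Python sketch are invented for illustration and are not values from the study.

      modeled_wet_n = 2.4      # kg N/ha/yr, model estimate at a monitoring site (hypothetical)
      observed_wet_n = 1.6     # kg N/ha/yr, measured wet inorganic N deposition (hypothetical)
      critical_load = 1.5      # kg N/ha/yr, e.g. a lichen critical load (hypothetical)

      bias = modeled_wet_n - observed_wet_n          # model - observed
      error = abs(bias)                              # |model - observed|
      print(f"bias  = {100 * bias / critical_load:.0f}% of the critical load")
      print(f"error = {100 * error / critical_load:.0f}% of the critical load")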

  10. Predicting the safe load on backpacker's arm using Lagrange multipliers method

    NASA Astrophysics Data System (ADS)

    Abdalla, Faisal Saleh; Rambely, Azmin Sham

    2014-09-01

    In this study, a technique has been suggested to reduce a backpack load by transmitting determined loads to the children's arm. The purpose of this paper is to estimate school children's arm muscle forces during load carriage and to determine the safe load that can be carried at the wrist while walking with a backpack. A three-degrees-of-freedom (DOF) mathematical model was investigated in the sagittal plane, and the Lagrange multipliers method (LMM) was utilized to minimize a quadratic objective function of muscle forces. The muscle forces were minimized under three different load conditions, termed 0-L=0 N, 1-L=21.95 N, and 2-L=43.9 N. The investigated muscle forces were estimated and compared to their maximum forces across the load conditions. Flexor and extensor muscles were estimated, and the results showed that flexor muscles were active while extensor muscles were inactive. The estimated muscle forces did not exceed their maximum forces under the 0-L and 1-L conditions, whereas the biceps and FCR muscles exceeded their maximum forces under the 2-L condition. Consequently, the 1-L condition is quite safe to carry by hand whereas the 2-L condition is not. Thus, to reduce the load in the backpack, the transmitted load should not exceed the 1-L condition.
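
    The quadratic minimization under an equality constraint that the abstract attributes to the Lagrange multipliers method can be sketched as a small KKT system; the weighting matrix, moment arms, and joint moment below are hypothetical stand-ins, and the sketch omits the muscle-specific bounds a full model would need.

      import numpy as np

      W = np.diag([1.0, 1.0, 1.0])           # quadratic penalty on three muscle forces
      A = np.array([[0.045, 0.032, 0.021]])  # assumed moment arms (m)
      b = np.array([12.0])                   # assumed net joint moment (N*m) to balance

      # KKT system for: minimize 0.5 * f^T W f  subject to  A f = b
      n, m = W.shape[0], A.shape[0]
      kkt = np.block([[W, A.T], [A, np.zeros((m, m))]])
      rhs = np.concatenate([np.zeros(n), b])
      solution = np.linalg.solve(kkt, rhs)
      forces, multiplier = solution[:n], solution[n:]
      print("muscle forces (N):", forces)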

  11. Risk assessment for adult butterflies exposed to the mosquito control pesticide naled.

    PubMed

    Bargar, Timothy A

    2012-04-01

    A prospective risk assessment was conducted for adult butterflies potentially exposed to the mosquito control insecticide naled. Published acute mortality data, exposure data collected during field studies, and morphometric data (total surface area and fresh body weight) for adult butterflies were combined in a probabilistic estimate of the likelihood that adult butterfly exposure to naled following aerial applications would exceed levels associated with acute mortality. Adult butterfly exposure was estimated based on the product of (1) naled residues on samplers and (2) an exposure metric that normalized total surface area for adult butterflies to their fresh weight. The likelihood that the 10th percentile refined effect estimate for adult butterflies exposed to naled would be exceeded following aerial naled applications was 67 to 80%. The greatest risk would be for butterflies in the family Lycaenidae, and the lowest risk would be for those in the family Hesperiidae, assuming equivalent sensitivity to naled. A range of potential guideline naled deposition levels is presented that, if not exceeded, would reduce the risk of adult butterfly mortality. The results for this risk assessment were compared with other risk estimates for butterflies, and the implications for adult butterflies in areas targeted by aerial naled applications are discussed. Copyright © 2012 SETAC.

  12. Gaussian process models for reference ET estimation from alternative meteorological data sources

    USDA-ARS?s Scientific Manuscript database

    Accurate estimates of daily crop evapotranspiration (ET) are needed for efficient irrigation management, especially in arid and semi-arid regions where crop water demand exceeds rainfall. Daily grass or alfalfa reference ET values and crop coefficients are widely used to estimate crop water demand. ...

  13. Geothermal resources and reserves in Indonesia: an updated revision

    NASA Astrophysics Data System (ADS)

    Fauzi, A.

    2015-02-01

    More than 300 high- to low-enthalpy geothermal sources have been identified throughout Indonesia. From the early 1980s until the late 1990s, the geothermal potential for power production in Indonesia was estimated to be about 20 000 MWe. The most recent estimate exceeds 29 000 MWe derived from the 300 sites (Geological Agency, December 2013). This resource estimate has been obtained by adding all of the estimated geothermal potential resources and reserves classified as "speculative", "hypothetical", "possible", "probable", and "proven" from all sites where such information is available. However, this approach to estimating the geothermal potential is flawed because it includes double counting of some reserve estimates as resource estimates, thus giving an inflated figure for the total national geothermal potential. This paper describes an updated revision of the geothermal resource estimate in Indonesia using a more realistic methodology. The methodology proposes that the preliminary "Speculative Resource" category should cover the full potential of a geothermal area and form the base reference figure for the resource of the area. Further investigation of this resource may improve the level of confidence of the category of reserves but will not necessarily increase the figure of the "preliminary resource estimate" as a whole, unless the result of the investigation is higher. A previous paper (Fauzi, 2013a, b) redefined and revised the geothermal resource estimate for Indonesia. The methodology, adopted from Fauzi (2013a, b), will be fully described in this paper. As a result of using the revised methodology, the potential geothermal resources and reserves for Indonesia are estimated to be about 24 000 MWe, some 5000 MWe less than the 2013 national estimate.

  14. Hurricane Harvey Rainfall, Did It Exceed PMP and What are the Implications?

    NASA Astrophysics Data System (ADS)

    Kappel, B.; Hultstrand, D.; Muhlestein, G.

    2017-12-01

    Rainfall resulting from Hurricane Harvey reached historic levels over the coastal regions of Texas and Louisiana during the last week of August 2017. Although extreme rainfall from landfalling tropical systems is not uncommon in the region, Harvey was unique in that it persisted over the same general location for several days, producing volumes of rainfall not previously observed in the United States. Devastating flooding and severe stress to infrastructure in the region were the result. Coincidentally, Applied Weather Associates had recently completed an updated statewide Probable Maximum Precipitation (PMP) study for Texas. This storm proved to be a real-time test of the adequacy of those values. AWA calculates PMP following a storm-based approach. This same approach was used in the HMRs. Therefore, inclusion of all PMP-type storms is critically important to ensuring that appropriate PMP values are produced. This presentation will discuss the analysis of the Harvey rainfall using the Storm Precipitation Analysis System (SPAS) program used to analyze all storms used in PMP development, compare the results of the Harvey rainfall analysis against previous similar storms, and provide comparisons of the Harvey rainfall against previous and current PMP depths. Discussion will be included regarding the implications of the storm on previous and future PMP estimates, dam safety design, and infrastructure vulnerable to extreme flooding.

  15. ProUCL Version 4.0 Technical Guide

    EPA Science Inventory

    Statistical inference, including both estimation and hypotheses testing approaches, is routinely used to: estimate environmental parameters of interest, such as exposure point concentration (EPC) terms, not-to-exceed values, and background level threshold values (BTVs) for contam...

  16. An approximate Kalman filter for ocean data assimilation: An example with an idealized Gulf Stream model

    NASA Technical Reports Server (NTRS)

    Fukumori, Ichiro; Malanotte-Rizzoli, Paola

    1995-01-01

    A practical method of data assimilation for use with large, nonlinear, ocean general circulation models is explored. A Kalman filter based on approximation of the state error covariance matrix is presented, employing a reduction of the effective model dimension, the error's asymptotic steady state limit, and a time-invariant linearization of the dynamic model for the error integration. The approximations lead to dramatic computational savings in applying estimation theory to large complex systems. We examine the utility of the approximate filter in assimilating different measurement types using a twin experiment of an idealized Gulf Stream. A nonlinear primitive equation model of an unstable east-west jet is studied with a state dimension exceeding 170,000 elements. Assimilation of various pseudomeasurements is examined, including velocity, density, and volume transport at localized arrays and realistic distributions of satellite altimetry and acoustic tomography observations. Results are compared in terms of their effects on the accuracies of the estimation. The approximate filter is shown to outperform an empirical nudging scheme used in a previous study. The examples demonstrate that useful approximate estimation errors can be computed in a practical manner for general circulation models.
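
    The two approximations highlighted in this abstract, a reduced effective state dimension and an asymptotic (steady-state) error covariance, can be illustrated with a toy linear system; the matrices below are random stand-ins rather than the Gulf Stream model, and the reduction basis is an arbitrary orthonormal projection.

      import numpy as np

      rng = np.random.default_rng(0)
      n_full, n_red, n_obs = 50, 5, 8                           # full state, reduced state, observations

      B = np.linalg.qr(rng.standard_normal((n_full, n_red)))[0] # reduction basis (orthonormal columns)
      A_full = 0.95 * np.eye(n_full) + 0.01 * rng.standard_normal((n_full, n_full))
      H_full = rng.standard_normal((n_obs, n_full))

      A = B.T @ A_full @ B                                      # dynamics in the reduced space
      H = H_full @ B                                            # observation operator in the reduced space
      Q = 0.1 * np.eye(n_red)                                   # process-noise covariance (assumed)
      R = np.eye(n_obs)                                         # observation-noise covariance (assumed)

      # Iterate the Riccati recursion to its steady state and form the asymptotic gain
      P = np.eye(n_red)
      for _ in range(500):
          P_pred = A @ P @ A.T + Q
          K = P_pred @ H.T @ np.linalg.inv(H @ P_pred @ H.T + R)
          P = (np.eye(n_red) - K @ H) @ P_pred
      K_full = B @ K                                            # time-invariant gain mapped back to full state
      print(K_full.shape)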

  17. Systemic Amyloidosis in England: an epidemiological study

    PubMed Central

    Pinney, Jennifer H; Smith, Colette J; Taube, Jessi B; Lachmann, Helen J; Venner, Christopher P; Gibbs, Simon D J; Dungu, Jason; Banypersad, Sanjay M; Wechalekar, Ashutosh D; Whelan, Carol J; Hawkins, Philip N; Gillmore, Julian D

    2013-01-01

    Epidemiological studies of systemic amyloidosis are scarce and the burden of disease in England has not previously been estimated. In 1999, the National Health Service commissioned the National Amyloidosis Centre (NAC) to provide a national clinical service for all patients with amyloidosis. Data for all individuals referred to the NAC is held on a comprehensive central database, and these were compared with English death certificate data for amyloidosis from 2000 to 2008, obtained from the Office of National Statistics. Amyloidosis was stated on death certificates of 2543 individuals, representing 0·58/1000 recorded deaths. During the same period, 1143 amyloidosis patients followed at the NAC died, 903 (79%) of whom had amyloidosis recorded on their death certificates. The estimated minimum incidence of systemic amyloidosis in the English population in 2008, based on new referrals to the NAC, was 0·4/100 000 population. The incidence peaked at age 60–79 years. Systemic AL amyloidosis was the most common type with an estimated minimum incidence of 0·3/100 000 population. Although there are various limitations to this study, the available data suggest the incidence of systemic amyloidosis in England exceeds 0·8/100 000 of the population. PMID:23480608

  18. Experimental approach for thermal parameters estimation during glass forming process

    NASA Astrophysics Data System (ADS)

    Abdulhay, B.; Bourouga, B.; Alzetto, F.; Challita, C.

    2016-10-01

    In this paper, an experimental device designed and developed to estimate thermal conditions at the glass/piston contact interface is presented. This device is made of two parts: the upper part contains the piston, made of metal, and a heating device to raise the temperature of the piston up to 500 °C. The lower part is composed of a lead crucible and a glass sample. The assembly is provided with a heating system, an induction furnace of 6 kW, for heating the glass up to 950 °C. The developed experimental procedure permitted, in a previously published study, estimation of the Thermal Contact Resistance (TCR) using the inverse technique developed by Beck [1]. The semi-transparent character of the glass has been taken into account by an additional radiative heat flux and an equivalent thermal conductivity. After the set-up tests, reproducibility experiments for a specific contact pressure were carried out with a maximum dispersion that does not exceed 6%. Then, experiments under different conditions for a specific glass forming process regarding the application (Packaging, Buildings and Automobile) were carried out. The objective is to determine experimentally, for each application, the typical conditions capable of minimizing the glass temperature loss during the glass forming process.

  19. An approximate Kalman filter for ocean data assimilation: An example with an idealized Gulf Stream model

    NASA Astrophysics Data System (ADS)

    Fukumori, Ichiro; Malanotte-Rizzoli, Paola

    1995-04-01

    A practical method of data assimilation for use with large, nonlinear, ocean general circulation models is explored. A Kalman filter based on approximations of the state error covariance matrix is presented, employing a reduction of the effective model dimension, the error's asymptotic steady state limit, and a time-invariant linearization of the dynamic model for the error integration. The approximations lead to dramatic computational savings in applying estimation theory to large complex systems. We examine the utility of the approximate filter in assimilating different measurement types using a twin experiment of an idealized Gulf Stream. A nonlinear primitive equation model of an unstable east-west jet is studied with a state dimension exceeding 170,000 elements. Assimilation of various pseudomeasurements is examined, including velocity, density, and volume transport at localized arrays and realistic distributions of satellite altimetry and acoustic tomography observations. Results are compared in terms of their effects on the accuracies of the estimation. The approximate filter is shown to outperform an empirical nudging scheme used in a previous study. The examples demonstrate that useful approximate estimation errors can be computed in a practical manner for general circulation models.

  20. A retrospective analysis of benefits and impacts of U.S. renewable portfolio standards

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Barbose, Galen; Wiser, Ryan; Heeter, Jenny

    As states consider revising or developing renewable portfolio standards (RPS), they are evaluating policy costs, benefits, and other impacts. We present the first U.S. national-level assessment of state RPS program benefits and impacts, focusing on new renewable electricity resources used to meet RPS compliance obligations in 2013. In our central-case scenario, reductions in life-cycle greenhouse gas emissions from displaced fossil fuel-generated electricity resulted in $2.2 billion of global benefits. Health and environmental benefits from reductions in criteria air pollutants (sulfur dioxide, nitrogen oxides, and particulate matter 2.5) were even greater, estimated at $5.2 billion in the central case. Further benefits accrued in the form of reductions in water withdrawals and consumption for power generation. Finally, although best considered resource transfers rather than net societal benefits, new renewable electricity generation used for RPS compliance in 2013 also supported nearly 200,000 U.S.-based gross jobs and reduced wholesale electricity prices and natural gas prices, saving consumers a combined $1.3-$4.9 billion. In total, the estimated benefits and impacts well exceed previous estimates of RPS compliance costs.

  1. 78 FR 76074 - Department of State Acquisition Regulation

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-12-16

    ... DOSAR Sec. Sec. 652.245-70(a)(3) and 652.245-71, on the theory that Part 45 governs the management and... expectancy to exceed two years. (b) The contracting officer shall insert the clause at 652.245-71, Special... property when put into use; and is of a durable nature with an estimated useful life expectancy to exceed...

  2. 50 CFR 622.49 - Annual catch limits (ACLs) and accountability measures (AMs).

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... limit (ACL), the AA will file a notification with the Office of the Federal Register, at or near the...-year ACL was exceeded. The commercial ACL for 2010 and subsequent fishing years is 138,000 lb (62,596 kg). (ii) Recreational sector. If recreational landings, as estimated by the SRD, exceed the ACL, the...

  3. Limits to the Fraction of High-energy Photon Emitting Gamma-Ray Bursts

    NASA Astrophysics Data System (ADS)

    Akerlof, Carl W.; Zheng, WeiKang

    2013-02-01

    After almost four years of operation, the two instruments on board the Fermi Gamma-ray Space Telescope have shown that the number of gamma-ray bursts (GRBs) with high-energy photon emission above 100 MeV cannot exceed roughly 9% of the total number of all such events, at least at the present detection limits. In a recent paper, we found that GRBs with photons detected in the Large Area Telescope have a surprisingly broad distribution with respect to the observed event photon number. Extrapolation of our empirical fit to numbers of photons below our previous detection limit suggests that the overall rate of such low flux events could be estimated by standard image co-adding techniques. In this case, we have taken advantage of the excellent angular resolution of the Swift mission to provide accurate reference points for 79 GRB events which have eluded any previous correlations with high-energy photons. We find a small but significant signal in the co-added field. Guided by the extrapolated power-law fit previously obtained for the number distribution of GRBs with higher fluxes, the data suggest that only a small fraction of GRBs are sources of high-energy photons.

  4. Paleoflood investigations to improve peak-streamflow regional-regression equations for natural streamflow in eastern Colorado, 2015

    USGS Publications Warehouse

    Kohn, Michael S.; Stevens, Michael R.; Harden, Tessa M.; Godaire, Jeanne E.; Klinger, Ralph E.; Mommandi, Amanullah

    2016-09-09

    The U.S. Geological Survey (USGS), in cooperation with the Colorado Department of Transportation, developed regional-regression equations for estimating the 50-, 20-, 10-, 4-, 2-, 1-, 0.5-, and 0.2-percent annual exceedance-probability discharge (AEPD) for natural streamflow in eastern Colorado. A total of 188 streamgages, consisting of 6,536 years of record and a mean of approximately 35 years of record per streamgage, were used to develop the peak-streamflow regional-regression equations. The estimated AEPDs for each streamgage were computed using the USGS software program PeakFQ. The AEPDs were determined using systematic data through water year 2013. Based on previous studies conducted in Colorado and neighboring States and on the availability of data, 72 characteristics (57 basin and 15 climatic characteristics) were evaluated as candidate explanatory variables in the regression analysis. Paleoflood and non-exceedance bound ages were established based on reconnaissance-level methods. Multiple lines of evidence were used at each streamgage to arrive at a conclusion (age estimate) to add a higher degree of certainty to reconnaissance-level estimates. Paleoflood or non-exceedance bound evidence was documented at 41 streamgages, and 3 streamgages had previously collected paleoflood data. To determine the peak discharge of a paleoflood or non-exceedance bound, two different hydraulic models were used. The mean standard error of prediction (SEP) for all 8 AEPDs was reduced approximately 25 percent compared to the previous flood-frequency study. For paleoflood data to be effective in reducing the SEP in eastern Colorado, a larger proportion than 44 of 188 (23 percent) streamgages would need paleoflood data, and that paleoflood data would need to increase the record length by more than 25 years for the 1-percent AEPD. The greatest reduction in SEP for the peak-streamflow regional-regression equations was observed when additional new basin characteristics were included in the peak-streamflow regional-regression equations and when eastern Colorado was divided into two separate hydrologic regions. To make further reductions in the uncertainties of the peak-streamflow regional-regression equations in the Foothills and Plains hydrologic regions, additional streamgages or crest-stage gages are needed to collect peak-streamflow data on natural streams in eastern Colorado. Generalized least squares regression was used to compute the final regional-regression equations for peak streamflow. Dividing eastern Colorado into two new individual regions at –104° longitude resulted in peak-streamflow regional-regression equations with the smallest SEP. The new hydrologic region located between –104° longitude and the Kansas-Nebraska State line will be designated the Plains hydrologic region, and the hydrologic region comprising the rest of eastern Colorado located west of the –104° longitude and east of the Rocky Mountains and below 7,500 feet in the South Platte River Basin and below 9,000 feet in the Arkansas River Basin will be designated the Foothills hydrologic region.

  5. Methods for estimating the magnitude and frequency of peak streamflows for unregulated streams in Oklahoma

    USGS Publications Warehouse

    Lewis, Jason M.

    2010-01-01

    Peak-streamflow regression equations were determined for estimating flows with exceedance probabilities from 50 to 0.2 percent for the state of Oklahoma. These regression equations incorporate basin characteristics to estimate peak-streamflow magnitude and frequency throughout the state by use of a generalized least squares regression analysis. The most statistically significant independent variables required to estimate peak-streamflow magnitude and frequency for unregulated streams in Oklahoma are contributing drainage area, mean-annual precipitation, and main-channel slope. The regression equations are applicable for watershed basins with drainage areas less than 2,510 square miles that are not affected by regulation. The resulting regression equations had a standard model error ranging from 31 to 46 percent. Annual-maximum peak flows observed at 231 streamflow-gaging stations through water year 2008 were used for the regression analysis. Gage peak-streamflow estimates were used from previous work unless 2008 gaging-station data were available, in which case new peak-streamflow estimates were calculated. The U.S. Geological Survey StreamStats web application was used to obtain the independent variables required for the peak-streamflow regression equations. Limitations on the use of the regression equations and the reliability of regression estimates for natural unregulated streams are described. Log-Pearson Type III analysis information, basin and climate characteristics, and the peak-streamflow frequency estimates for the 231 gaging stations in and near Oklahoma are listed. Methodologies are presented to estimate peak streamflows at ungaged sites by using estimates from gaging stations on unregulated streams. For ungaged sites on urban streams and streams regulated by small floodwater retarding structures, an adjustment of the statewide regression equations for natural unregulated streams can be used to estimate peak-streamflow magnitude and frequency.
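
    A minimal sketch of the regression structure described above, with peak streamflow regressed in log space on contributing drainage area, mean-annual precipitation, and main-channel slope; the five station records are fabricated for illustration, and ordinary least squares stands in for the generalized least squares analysis used in the study.

      import numpy as np

      # Hypothetical gage data: drainage area (mi^2), mean-annual precipitation (in), channel slope (ft/mi)
      basin = np.array([[120, 34, 12.0], [450, 40, 6.5], [35, 28, 22.0], [900, 44, 4.1], [210, 36, 9.8]])
      q_peak = np.array([8500, 21000, 3100, 36000, 12500])      # hypothetical 1-percent-exceedance peak flows (cfs)

      X = np.column_stack([np.ones(len(q_peak)), np.log10(basin)])
      coef, *_ = np.linalg.lstsq(X, np.log10(q_peak), rcond=None)

      def predict_peak(area, precip, slope):
          # Regression estimate of peak streamflow (cfs) for an ungaged, unregulated site
          x = np.array([1.0, np.log10(area), np.log10(precip), np.log10(slope)])
          return 10 ** (x @ coef)

      print(round(predict_peak(area=300, precip=38, slope=8.0)))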

  6. Toxicological benchmarks for screening potential contaminants of concern for effects on aquatic biota: 1994 Revision

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Suter, G.W. II; Mabrey, J.B.

    1994-07-01

    This report presents potential screening benchmarks for protection of aquatic life from contaminants in water. Because there is no guidance for screening benchmarks, a set of alternative benchmarks is presented herein. The alternative benchmarks are based on different conceptual approaches to estimating concentrations causing significant effects. For the upper screening benchmark, there are the acute National Ambient Water Quality Criteria (NAWQC) and the Secondary Acute Values (SAV). The SAV concentrations are values estimated with 80% confidence not to exceed the unknown acute NAWQC for those chemicals with no NAWQC. The alternative chronic benchmarks are the chronic NAWQC, the Secondary Chronic Value (SCV), the lowest chronic values for fish and daphnids from chronic toxicity tests, the estimated EC20 for a sensitive species, and the concentration estimated to cause a 20% reduction in the recruit abundance of largemouth bass. It is recommended that ambient chemical concentrations be compared to all of these benchmarks. If NAWQC are exceeded, the chemicals must be contaminants of concern because the NAWQC are applicable or relevant and appropriate requirements (ARARs). If NAWQC are not exceeded, but other benchmarks are, contaminants should be selected on the basis of the number of benchmarks exceeded and the conservatism of the particular benchmark values, as discussed in the text. To the extent that toxicity data are available, this report presents the alternative benchmarks for chemicals that have been detected on the Oak Ridge Reservation. It also presents the data used to calculate benchmarks and the sources of the data. It compares the benchmarks and discusses their relative conservatism and utility.

  7. Aquatic concentrations of chemical analytes compared to ecotoxicity estimates

    USGS Publications Warehouse

    Kostich, Mitchell S.; Flick, Robert W.; Angela L. Batt,; Mash, Heath E.; Boone, J. Scott; Furlong, Edward T.; Kolpin, Dana W.; Glassmeyer, Susan T.

    2017-01-01

    We describe screening level estimates of potential aquatic toxicity posed by 227 chemical analytes that were measured in 25 ambient water samples collected as part of a joint USGS/USEPA drinking water plant study. Measured concentrations were compared to biological effect concentration (EC) estimates, including USEPA aquatic life criteria, effective plasma concentrations of pharmaceuticals, published toxicity data summarized in the USEPA ECOTOX database, and chemical structure-based predictions. Potential dietary exposures were estimated using a generic 3-tiered food web accumulation scenario. For many analytes, few or no measured effect data were found, and for some analytes, reporting limits exceeded EC estimates, limiting the scope of conclusions. Results suggest occasional occurrence above ECs for copper, aluminum, strontium, lead, uranium, and nitrate. Sparse effect data for manganese, antimony, and vanadium suggest that these analytes may occur above ECs, but additional effect data would be desirable to corroborate EC estimates. These conclusions were not affected by bioaccumulation estimates. No organic analyte concentrations were found to exceed EC estimates, but ten analytes had concentrations in excess of 1/10th of their respective EC: triclocarban, norverapamil, progesterone, atrazine, metolachlor, triclosan, para-nonylphenol, ibuprofen, venlafaxine, and amitriptyline, suggesting more detailed characterization of these analytes.

  8. Aquatic concentrations of chemical analytes compared to ecotoxicity estimates.

    PubMed

    Kostich, Mitchell S; Flick, Robert W; Batt, Angela L; Mash, Heath E; Boone, J Scott; Furlong, Edward T; Kolpin, Dana W; Glassmeyer, Susan T

    2017-02-01

    We describe screening level estimates of potential aquatic toxicity posed by 227 chemical analytes that were measured in 25 ambient water samples collected as part of a joint USGS/USEPA drinking water plant study. Measured concentrations were compared to biological effect concentration (EC) estimates, including USEPA aquatic life criteria, effective plasma concentrations of pharmaceuticals, published toxicity data summarized in the USEPA ECOTOX database, and chemical structure-based predictions. Potential dietary exposures were estimated using a generic 3-tiered food web accumulation scenario. For many analytes, few or no measured effect data were found, and for some analytes, reporting limits exceeded EC estimates, limiting the scope of conclusions. Results suggest occasional occurrence above ECs for copper, aluminum, strontium, lead, uranium, and nitrate. Sparse effect data for manganese, antimony, and vanadium suggest that these analytes may occur above ECs, but additional effect data would be desirable to corroborate EC estimates. These conclusions were not affected by bioaccumulation estimates. No organic analyte concentrations were found to exceed EC estimates, but ten analytes had concentrations in excess of 1/10th of their respective EC: triclocarban, norverapamil, progesterone, atrazine, metolachlor, triclosan, para-nonylphenol, ibuprofen, venlafaxine, and amitriptyline, suggesting more detailed characterization of these analytes. Published by Elsevier B.V.
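
    The screening comparison in the two preceding records amounts to checking each measured concentration against its effect-concentration (EC) estimate and one-tenth of that EC; the analyte names and values in this short Python sketch are hypothetical examples, not the study's measurements.

      effect_concentrations = {"copper": 4.5, "atrazine": 150.0, "ibuprofen": 20.0}  # ug/L (assumed ECs)
      measured = {"copper": 6.2, "atrazine": 18.0, "ibuprofen": 0.5}                 # ug/L (assumed data)

      for analyte, conc in measured.items():
          ec = effect_concentrations[analyte]
          if conc > ec:
              flag = "exceeds EC"
          elif conc > ec / 10:
              flag = "exceeds 1/10 of EC"
          else:
              flag = "below screening levels"
          print(f"{analyte}: {conc} ug/L vs EC {ec} ug/L -> {flag}")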

  9. Assessment of the effect of population and diary sampling methods on estimation of school-age children exposure to fine particles.

    PubMed

    Che, W W; Frey, H Christopher; Lau, Alexis K H

    2014-12-01

    Population and diary sampling methods are employed in exposure models to sample simulated individuals and their daily activity on each simulation day. Different sampling methods may lead to variations in estimated human exposure. In this study, two population sampling methods (stratified-random and random-random) and three diary sampling methods (random resampling, diversity and autocorrelation, and Markov-chain cluster [MCC]) are evaluated. Their impacts on estimated children's exposure to ambient fine particulate matter (PM2.5) are quantified via case studies for children in Wake County, NC for July 2002. The estimated mean daily average exposure is 12.9 μg/m³ for simulated children using the stratified population sampling method, and 12.2 μg/m³ using the random sampling method. These minor differences are caused by the random sampling among ages within census tracts. Among the three diary sampling methods, there are differences in the estimated number of individuals with multiple days of exposures exceeding a benchmark of concern of 25 μg/m³ due to differences in how multiday longitudinal diaries are estimated. The MCC method is relatively more conservative. In case studies evaluated here, the MCC method led to 10% higher estimation of the number of individuals with repeated exposures exceeding the benchmark. The comparisons help to identify and contrast the capabilities of each method and to offer insight regarding implications of method choice. Exposure simulation results are robust to the two population sampling methods evaluated, and are sensitive to the choice of method for simulating longitudinal diaries, particularly when analyzing results for specific microenvironments or for exposures exceeding a benchmark of concern. © 2014 Society for Risk Analysis.
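
    The benchmark comparison described above, counting simulated individuals with more than one day of exposure above 25 μg/m³, reduces to a simple tally; the exposure series in this Python sketch are invented for illustration.

      import numpy as np

      BENCHMARK = 25.0  # ug/m3
      exposures = np.array([            # rows: simulated individuals, columns: simulation days (hypothetical)
          [12.0, 27.1, 30.2, 14.5],
          [10.3, 11.8,  9.7, 13.2],
          [26.4, 12.1, 14.0, 25.9],
      ])
      days_above = (exposures > BENCHMARK).sum(axis=1)
      print("individuals with repeated exceedances:", int((days_above > 1).sum()))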

  10. Physical map of the Brucella melitensis 16 M chromosome.

    PubMed Central

    Allardet-Servent, A; Carles-Nurit, M J; Bourg, G; Michaux, S; Ramuz, M

    1991-01-01

    We present the first restriction map of the Brucella melitensis 16 M chromosome obtained by Southern blot hybridization of SpeI, XhoI, and XbaI fragments separated by pulsed-field gel electrophoresis. All restriction fragments (a total of 113) were mapped into an open circle. The main difficulty in mapping involved the exceedingly high number of restriction fragments, as was expected considering the 59% G + C content of the Brucella genome. Several cloned genes were placed on this map, especially rRNA operons which are repeated three times. The size of the B. melitensis chromosome, estimated as 2,600 kb long in a previous study, appeared longer (3,130 kb) by restriction mapping. This restriction map is an initial approach to achieve a genetic map of the Brucella chromosome. PMID:2007548

  11. Gaussian processes-based predictive models to estimate reference ET from alternative meteorological data sources for irrigation scheduling

    USDA-ARS?s Scientific Manuscript database

    Accurate estimates of daily crop evapotranspiration (ET) are needed for efficient irrigation management, especially in arid and semi-arid irrigated regions where crop water demand exceeds rainfall. The impact of inaccurate ET estimates can be tremendous in both irrigation cost and the increased dema...

  12. Ethnic differences in risk from mercury among Savannah River fishermen.

    PubMed

    Burger, J; Gaines, K F; Gochfeld, M

    2001-06-01

    Fishing plays an important role in people's lives and contaminant levels in fish are a public health concern. Many states have issued consumption advisories; South Carolina and Georgia have issued them for the Savannah River based on mercury and radionuclide levels. This study examined ethnic differences in risk from mercury exposure among people consuming fish from the Savannah River, based on site-specific consumption patterns and analysis of mercury in fish. Among fish, there were significant interspecies differences in mercury levels, and there were ethnic differences in consumption patterns. Two methods of examining risk are presented: (1) Hazard Index (HI), and (2) estimates of how much and how often people of different body mass can consume different species of fish. Blacks consumed more fish and had higher HIs than Whites. Even at the median consumption, the HI for Blacks exceeded 1.0 for bass and bowfin, and, at the 75th percentile of consumption, the HI exceeded 1.0 for almost all species. At the White male median consumption, no HI exceeded 1, but for the 95th percentile consumer, the HI exceeded 1.0 almost regardless of which species were eaten. Although females consumed about two thirds the quantity of males, HIs exceeded 1 for most Black females and for White females at or above the 75th percentile of consumption. Thus, close to half of the Black fishermen were eating enough Savannah River fish to exceed HI = 1. Caution must be used in evaluating an HI because the RfDs were developed to protect the most vulnerable individuals. The percentage of each fish species tested that exceeded the maximum permitted limits of mercury in fish was also examined. Over 80% of bowfin, 38% of bass, and 21% of pickerel sampled exceeded 0.5 ppm. The risk methodology is applicable anywhere that comparable data can be obtained. The risk estimates are representative for fishermen along the Savannah River, and are not necessarily for the general populations.
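
    The second risk measure mentioned above, how much and how often fish can be eaten, can be sketched as an allowable-meals calculation; the reference dose, meal size, and fish mercury concentration used here are illustrative assumptions rather than the study's parameters.

      RFD_MG_PER_KG_DAY = 1e-4   # assumed methylmercury oral reference dose
      MEAL_SIZE_KG = 0.227       # assumed fish meal size (~8 oz)

      def allowable_meals_per_month(hg_mg_per_kg, body_weight_kg):
          # Meals per month that keep the average daily Hg intake at or below the reference dose
          daily_allowance_mg = RFD_MG_PER_KG_DAY * body_weight_kg
          return (daily_allowance_mg * 30.0) / (hg_mg_per_kg * MEAL_SIZE_KG)

      print(round(allowable_meals_per_month(hg_mg_per_kg=0.5, body_weight_kg=70), 1))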

  13. Estimates of in-place oil shale of various grades in federal lands, Piceance Basin, Colorado

    USGS Publications Warehouse

    Mercier, Tracey J.; Johnson, Ronald C.; Brownfield, Michael E.

    2010-01-01

    The entire oil shale interval in the Piceance Basin is subdivided into seventeen “rich” and “lean” zones that were assessed separately. These zones are roughly time-stratigraphic units consisting of distinctive, laterally continuous sequences of oil shale beds that can be traced throughout much of the Piceance Basin. Several subtotals of the 1.5 trillion barrels total were calculated: (1) about 920 billion barrels (60 percent) exceed 15 gallons per ton (GPT); (2) about 352 billion barrels (23 percent) exceed 25 GPT; (3) more than one trillion barrels (70 percent) underlie Federally-managed lands; and (4) about 689 billion barrels (75 percent) of the 15 GPT total and about 284 billion barrels (19 percent) of the 25 GPT total are under Federal mineral (subsurface) ownership. These 15 and 25 GPT estimates include only those areas where the weighted average of an entire zone exceeds those minimum cutoffs. In areas where the entire zone does not meet the minimum criteria, some oil shale intervals of significant thicknesses could exist within the zone that exceed these minimum cutoffs. For example, a 30-ft interval within an oil shale zone might exceed 25 GPT but if the entire zone averages less than 25 GPT, these resources are not included in the 15 and 25 GPT subtotals, although they might be exploited in the future.
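
    The zone-level cutoff logic described above can be sketched directly: a zone's resource counts toward a grade subtotal only when the thickness-weighted average of the entire zone exceeds the cutoff, even if individual beds are richer. The bed thicknesses and grades below are hypothetical, not assessment data.

      def zone_average_gpt(beds):
          # beds: list of (thickness_ft, grade_gpt) tuples for one oil shale zone
          total_thickness = sum(t for t, _ in beds)
          return sum(t * g for t, g in beds) / total_thickness

      zone = [(30, 28.0), (55, 12.0), (20, 22.0)]   # a 30-ft bed exceeds 25 GPT, but the zone average does not
      avg = zone_average_gpt(zone)
      for cutoff in (15, 25):
          print(f"zone average {avg:.1f} GPT -> counted toward the {cutoff} GPT subtotal: {avg >= cutoff}")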

  14. [An investigation of ionizing radiation dose in a manufacturing enterprise of ion-absorbing type rare earth ore].

    PubMed

    Zhang, W F; Tang, S H; Tan, Q; Liu, Y M

    2016-08-20

    Objective: To investigate radioactive source term dose monitoring and estimation results in a manufacturing enterprise of ion-absorbing type rare earth ore and the possible ionizing radiation dose received by its workers. Methods: Ionizing radiation monitoring data of the posts in the control area and supervised area of workplace were collected, and the annual average effective dose directly estimated or estimated using formulas was evaluated and analyzed. Results: In the control area and supervised area of the workplace for this rare earth ore, α surface contamination activity had a maximum value of 0.35 Bq/cm² and a minimum value of 0.01 Bq/cm²; β radioactive surface contamination activity had a maximum value of 18.8 Bq/cm² and a minimum value of 0.22 Bq/cm². In 14 monitoring points in the workplace, the maximum value of the annual average effective dose of occupational exposure was 1.641 mSv/a, which did not exceed the authorized limit for workers (5 mSv/a), but exceeded the authorized limit for general personnel (0.25 mSv/a). The radionuclide specific activity of ionic mixed rare earth oxides was determined to be 0.9. Conclusion: The annual average effective dose of occupational exposure in this enterprise does not exceed the authorized limit for workers, but it exceeds the authorized limit for general personnel. We should pay attention to the focus of the radiation process, especially for public works radiation.

  15. Hurricane Agnes rainfall and floods, June-July 1972

    USGS Publications Warehouse

    Bailey, James F.; Patterson, James Lee; Paulhus, Joseph Louis Hornore

    1975-01-01

    Hurricane Agnes originated in the Caribbean Sea region in mid-June. Circulation barely reached hurricane intensity for a brief period in the Gulf of Mexico. The storm crossed the Florida Panhandle coastline on June 19, 1972, and followed an unusually extended overland trajectory combining with an extratropical system to bring very heavy rain from the Carolinas northward to New York. This torrential rain followed the abnormally wet May weather in the Middle Atlantic States and set the stage for the subsequent major flooding. The record-breaking floods occurred in the Middle Atlantic States in late June and early July 1972. Many streams in the affected area experienced peak discharges several times the previous maxima of record. Estimated recurrence intervals of peak flows at many gaging stations on major rivers and their tributaries exceeded 100 years. The suspended-sediment concentration and load of most flooded streams were also unusually high. The widespread flooding from this storm caused Agnes to be called the most destructive hurricane in United States history, claiming 117 lives and causing damage estimated at $3.1 billion in 12 States. Damage was particularly high in New York, Pennsylvania, Maryland, and Virginia. The detailed life history of Hurricane Agnes, including the tropical depression and tropical storm stages, is traced. Associated rainfalls are analyzed and compared with climatologic recurrence values. These are followed by a detailed description of the flood and streamflows of each affected basin. A summary of peak stages and discharges and comparison data for previous floods at 989 stations are presented. Deaths and flood damage estimates are compiled.

  16. 49 CFR 7.42 - Payment of fees.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... that could be charged to the requestor will likely exceed US $25, the requestor will be notified of the... estimated fees may amount to more than US $25, the request will be deemed not to have been received until... required to pay are likely to exceed US $250; or (2) The requestor has failed to pay within 30 days of the...

  17. Evaluating the importance of abiotic and biotic drivers on Bythotrephes biomass in Lakes Superior and Michigan

    USGS Publications Warehouse

    Keeler, Kevin M.; Bunnell, David B.; Diana, James S.; Adams, Jean V.; Mychek-Londer, Justin G.; Warner, David M.; Yule, Daniel; Vinson, Mark

    2015-01-01

    The ability of planktivorous fishes to exert top-down control on Bythotrephes potentially has far-reaching impacts on aquatic food-webs, given previously described effects of Bythotrephes on zooplankton communities. We estimated consumption of Bythotrephes by planktivorous and benthivorous fishes, using bioenergetics and daily ration models at nearshore (18 m), intermediate (46 m), and offshore (110 m) depths along one western Lake Superior transect (April, and September-November) and two northern Lake Michigan transects (April, July, September). In Lake Superior, consumption (primarily by cisco Coregonus artedi) exceeded Bythotrephes production at all offshore sites in September-November (up to 396% of production consumed) and at the intermediate site in November (842%) with no evidence of consumption nearshore. By comparing Bythotrephes biomass following months of excessive consumption, we conservatively concluded that top-down control was evident only at the offshore site during September-October. In Lake Michigan, consumption by fishes (primarily alewife Alosa pseudoharengus) exceeded production at nearshore sites (up to 178%), but not in deeper sites (< 15%). Evidence for top-down control in the nearshore was not supported, however, as Bythotrephes never subsequently declined. Using generalized additive models, temperature, and not fish consumption, not zooplankton prey density, best explained variability in Bythotrephes biomass. The non-linear pattern revealed Bythotrephes to increase with temperature up to 16 °C, and then decline between 16 and 23 °C. We discuss how temperature likely has direct negative impacts on Bythotrephes when temperatures near 23 °C, but speculate that predation also contributes to declining biomass when temperatures exceed 16 °C.

  18. Assessment of an apparent relationship between availability of soluble carbohydrates and reduced nitrogen during floral initiation in tobacco

    NASA Technical Reports Server (NTRS)

    Raper, C. D. Jr; Thomas, J. F.; Tolley-Henry, L.; Rideout, J. W.; Raper CD, J. r. (Principal Investigator)

    1988-01-01

    Daily relative accumulation rate of soluble carbohydrates (RARS) and reduced nitrogen (RARN) in the shoot, as estimates of source strength, were compared with daily relative growth rates (RGR) of the shoot, as an estimate of sink demand, during floral transformation in apical meristems of tobacco (Nicotiana tabacum 'NC 2326') grown at day/night temperatures of 18/14, 22/18, 26/22, 30/26, and 34/30 C. Source strength was assumed to exceed sink demand for either carbohydrates or nitrogen when the ratio of RARS/RGR or RARN/RGR was greater than unity, and sink demand was assumed to exceed source strength when the ratio was less than unity. Time of floral initiation, which was delayed up to 21 days with increases in temperature over the experimental range, was associated with intervals in which source strength of either carbohydrate or nitrogen exceeded sink demand, while sink demand for the other exceeded source strength. Floral initiation was not observed during intervals in which source strengths of both carbohydrates and nitrogen were greater than or less than sink demand. These results indicate that floral initiation is responsive to an imbalance in the relative availabilities of carbohydrate and nitrogen.

  19. Application of SPARROW modeling to understanding contaminant fate and transport from uplands to streams

    USGS Publications Warehouse

    Ator, Scott; Garcia, Ana Maria.

    2016-01-01

    Understanding spatial variability in contaminant fate and transport is critical to efficient regional water-quality restoration. An approach to capitalize on previously calibrated spatially referenced regression (SPARROW) models to improve the understanding of contaminant fate and transport was developed and applied to the case of nitrogen in the 166,000 km2 Chesapeake Bay watershed. A continuous function of four hydrogeologic, soil, and other landscape properties significant (α = 0.10) to nitrogen transport from uplands to streams was evaluated and compared among each of the more than 80,000 individual catchments (mean area, 2.1 km2) in the watershed. Budgets (including inputs, losses or net change in storage in uplands and stream corridors, and delivery to tidal waters) were also estimated for nitrogen applied to these catchments from selected upland sources. Most (81%) of such inputs are removed, retained, or otherwise processed in uplands rather than transported to surface waters. Combining SPARROW results with previous budget estimates suggests 55% of this processing is attributable to denitrification, 23% to crop or timber harvest, and 6% to volatilization. Remaining upland inputs represent a net annual increase in landscape storage in soils or biomass exceeding 10 kg per hectare in some areas. Such insights are important for planning watershed restoration and for improving future watershed models.

  20. Real-time stereo vision-based lane detection system

    NASA Astrophysics Data System (ADS)

    Fan, Rui; Dahnoun, Naim

    2018-07-01

    The detection of multiple curved lane markings on a non-flat road surface is still a challenging task for vehicular systems. To make an improvement, depth information can be used to enhance the robustness of the lane detection systems. In this paper, a proposed lane detection system is developed from our previous work where the estimation of the dense vanishing point is further improved using the disparity information. However, the outliers in the least squares fitting severely affect the accuracy when estimating the vanishing point. Therefore, in this paper we use random sample consensus to update the parameters of the road model iteratively until the percentage of the inliers exceeds our pre-set threshold. This significantly helps the system to overcome some suddenly changing conditions. Furthermore, we propose a novel lane position validation approach which computes the energy of each possible solution and selects all satisfying lane positions for visualisation. The proposed system is implemented on a heterogeneous system which consists of an Intel Core i7-4720HQ CPU and an NVIDIA GTX 970M GPU. A processing speed of 143 fps has been achieved, which is over 38 times faster than our previous work. Moreover, in order to evaluate the detection precision, we tested 2495 frames including 5361 lanes. It is shown that the overall successful detection rate is increased from 98.7% to 99.5%.
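
    The fitting loop described above, repeatedly re-estimating model parameters with random sample consensus until the inlier percentage exceeds a pre-set threshold, is sketched below with a straight line standing in for the paper's road model; the data, tolerance, and threshold are illustrative only.

      import numpy as np

      rng = np.random.default_rng(1)
      x = np.linspace(0.0, 10.0, 100)
      y = 2.0 * x + 1.0 + rng.normal(0.0, 0.1, x.size)
      y[::7] += rng.uniform(5.0, 15.0, y[::7].size)      # inject outliers

      def ransac_line(x, y, inlier_tol=0.5, inlier_ratio=0.8, max_iters=1000):
          best_params, best_inliers = None, np.zeros(x.size, dtype=bool)
          for _ in range(max_iters):
              i, j = rng.choice(x.size, size=2, replace=False)
              if x[i] == x[j]:
                  continue
              slope = (y[j] - y[i]) / (x[j] - x[i])
              intercept = y[i] - slope * x[i]
              inliers = np.abs(y - (slope * x + intercept)) < inlier_tol
              if inliers.sum() > best_inliers.sum():
                  best_params, best_inliers = (slope, intercept), inliers
              if best_inliers.mean() >= inlier_ratio:    # stop once enough inliers agree
                  break
          return best_params, best_inliers

      (slope, intercept), inliers = ransac_line(x, y)
      print(f"slope={slope:.2f}, intercept={intercept:.2f}, inlier fraction={inliers.mean():.0%}")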

  1. Blowing Snow Sublimation and Transport over Antarctica from 11 Years of CALIPSO Observations

    NASA Technical Reports Server (NTRS)

    Palm, Stephen P.; Kayetha, Vinay; Yang, Yuekui; Pauly, Rebecca

    2017-01-01

    Blowing snow processes commonly occur over the Earth's ice sheets when the 10 m wind speed exceeds a threshold value. These processes play a key role in the sublimation and redistribution of snow, thereby influencing the surface mass balance. Prior field studies and modeling results have shown the importance of blowing snow sublimation and transport to the surface mass budget and hydrological cycle of high-latitude regions. For the first time, we present continent-wide estimates of blowing snow sublimation and transport over Antarctica for the period 2006-2016 based on direct observation of blowing snow events. We use an improved version of the blowing snow detection algorithm developed for previous work that uses atmospheric backscatter measurements obtained from the CALIOP (Cloud-Aerosol Lidar with Orthogonal Polarization) lidar aboard the CALIPSO (Cloud-Aerosol Lidar and Infrared Pathfinder Satellite Observation) satellite. The blowing snow events identified by CALIPSO and meteorological fields from MERRA-2 are used to compute the blowing snow sublimation and transport rates. Our results show that maximum sublimation occurs along and slightly inland of the coastline. This is contrary to the observed maximum blowing snow frequency, which occurs over the interior. The associated temperature and moisture reanalysis fields likely contribute to the spatial distribution of the maximum sublimation values. However, the spatial pattern of the sublimation rate over Antarctica is consistent with modeling studies and precipitation estimates. Overall, our results show that the 2006-2016 Antarctic average integrated blowing snow sublimation is about 393 ± 196 Gt yr^-1, which is considerably larger than previous model-derived estimates. We find a maximum blowing snow transport of 5 Mt km^-1 yr^-1 over parts of East Antarctica and estimate that the average snow transport from continent to ocean is about 3.7 Gt yr^-1. These continent-wide estimates are the first of their kind and can be used to help model and constrain the surface mass budget over Antarctica.

  2. Assessment of mammal reproduction for hunting sustainability through community-based sampling of species in the wild.

    PubMed

    Mayor, Pedro; El Bizri, Hani; Bodmer, Richard E; Bowler, Mark

    2017-08-01

    Wildlife subsistence hunting is a major source of protein for tropical rural populations and a prominent conservation issue. The intrinsic rate of natural increase (rmax) of populations is a key reproductive parameter in the most widely used assessments of hunting sustainability. However, researchers face severe difficulties in obtaining reproductive data in the wild, so these assessments often rely on classic reproductive rates calculated mostly from studies of captive animals conducted 30 years ago. The result is a flaw in almost 50% of studies, which hampers management decision making. We conducted a 15-year study in the Amazon in which we used reproductive data from the genitalia of 950 hunted female mammals. Genitalia were collected by local hunters. We examined tissue from these samples to estimate birthrates for wild populations of the 10 most hunted mammals. We compared our estimates with classic measures and considered the utility of the use of rmax in sustainability assessments. For woolly monkey (Lagothrix poeppigii) and tapir (Tapirus terrestris), wild birthrates were similar to those from captive populations, whereas birthrates for other ungulates and lowland paca (Cuniculus paca) were significantly lower than previous estimates. Conversely, for capuchin monkeys (Sapajus macrocephalus), agoutis (Dasyprocta sp.), and coatis (Nasua nasua), our calculated reproductive rates greatly exceeded often-used values. Researchers could keep applying classic measures compatible with our estimates, but for other species previous estimates of rmax may not be appropriate. We suggest that data from local studies be used to set hunting quotas. Our maximum rates of population growth in the wild correlated with body weight, which suggests that our method is consistent and reliable. Integration of this method into community-based wildlife management and the training of local hunters to record pregnancies in hunted animals could efficiently generate useful information on the life histories of wild species and thus improve management of natural resources. © 2016 Society for Conservation Biology.

  3. A screening-level modeling approach to estimate nitrogen ...

    EPA Pesticide Factsheets

    This paper presents a screening-level modeling approach that can be used to rapidly estimate nutrient loading and assess numerical nutrient standard exceedance risk of surface waters leading to potential classification as impaired for designated use. It can also be used to explore best management practice (BMP) implementation to reduce loading. The modeling framework uses a hybrid statistical and process-based approach to estimate sources of pollutants and their transport and decay in the terrestrial and aquatic parts of watersheds. The framework is developed in the ArcGIS environment and is based on the total maximum daily load (TMDL) balance model. Nitrogen (N) is currently addressed in the framework, referred to as WQM-TMDL-N. Loading for each catchment includes non-point sources (NPS) and point sources (PS). NPS loading is estimated using export coefficient or event mean concentration methods, depending on the temporal scale, i.e., annual or daily. Loading from atmospheric deposition is also included. The probability that a nutrient load exceeds a target load is evaluated using probabilistic risk assessment, by including the uncertainty associated with export coefficients of various land uses. The computed risk data can be visualized as spatial maps which show the load exceedance probability for all stream segments. In an application of this modeling approach to the Tippecanoe River watershed in Indiana, USA, total nitrogen (TN) loading and risk of standard exce
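
    The export-coefficient loading and exceedance-risk idea described above can be illustrated with a small Monte Carlo calculation; the land uses, coefficients, uncertainty level, and target load below are hypothetical placeholders, not values from WQM-TMDL-N.

      import numpy as np

      # Hedged sketch: catchment nitrogen load as the sum of land-use areas times
      # uncertain export coefficients, with exceedance risk from Monte Carlo sampling.
      rng = np.random.default_rng(42)
      areas_ha = {"cropland": 1200.0, "forest": 800.0, "urban": 300.0}
      coeff_mean = {"cropland": 15.0, "forest": 2.0, "urban": 9.0}   # kg N/ha/yr (placeholders)
      coeff_cv = 0.3                                                 # assumed relative uncertainty
      target_load = 25000.0                                          # kg N/yr (placeholder target)

      n = 10000
      loads = np.zeros(n)
      for land_use, area in areas_ha.items():
          coeffs = rng.normal(coeff_mean[land_use], coeff_cv * coeff_mean[land_use], n)
          loads += np.clip(coeffs, 0, None) * area

      print(f"P(load > target) = {np.mean(loads > target_load):.2f}")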

  4. Aquatic concentrations of chemical analytes compared to ...

    EPA Pesticide Factsheets

    We describe screening level estimates of potential aquatic toxicity posed by 227 chemical analytes that were measured in 25 ambient water samples collected as part of a joint USGS/USEPA drinking water plant study. Measured concentrations were compared to biological effect concentration (EC) estimates, including USEPA aquatic life criteria, effective plasma concentrations of pharmaceuticals, published toxicity data summarized in the USEPA ECOTOX database, and chemical structure-based predictions. Potential dietary exposures were estimated using a generic 3-tiered food web accumulation scenario. For many analytes, few or no measured effect data were found, and for some analytes, reporting limits exceeded EC estimates, limiting the scope of conclusions. Results suggest occasional occurrence above ECs for copper, aluminum, strontium, lead, uranium, and nitrate. Sparse effect data for manganese, antimony, and vanadium suggest that these analytes may occur above ECs, but additional effect data would be desirable to corroborate EC estimates. These conclusions were not affected by bioaccumulation estimates. No organic analyte concentrations were found to exceed EC estimates, but ten analytes had concentrations in excess of 1/10th of their respective EC: triclocarban, norverapamil, progesterone, atrazine, metolachlor, triclosan, para-nonylphenol, ibuprofen, venlafaxine, and amitriptyline, suggesting more detailed characterization of these analytes. Purpose: to provide sc

  5. Integrated assessment of infant exposure to persistent organic pollutants and mercury via dietary intake in a central western Mediterranean site (Menorca Island).

    PubMed

    Junqué, Eva; Garí, Mercè; Arce, Anna; Torrent, Maties; Sunyer, Jordi; Grimalt, Joan O

    2017-07-01

    In this research, the levels of organochlorine compounds (OCs) and mercury (Hg) in several food items from Menorca Island are presented. A dietary exposure assessment was performed for the child population of the island, and the body burdens of OCs and Hg in these children were associated with their dietary intakes of the selected food items. The concentrations of 11 organochlorine pesticides, 6 polychlorinated biphenyls (PCBs) and 1 inorganic toxic element, Hg, were determined in 46 food samples that included fish, shellfish, meat, fruit, vegetables, cheese and eggs, which were acquired in local markets and department stores on Menorca Island. The most contaminated food items were fish and shellfish, followed by meat and cheese products. OC levels were similar to or lower than those in previous studies. However, 66% of the analysed fish and shellfish species exceeded the human consumption safety limits for Hg according to European Union legislation. Pollutant data from food were combined with the pattern of consumption of these foodstuffs in order to calculate the estimated daily intake (EDI) of these contaminants. According to our results, fish and fruit were the main sources of OCs in the EDIs (contributing 37% and 29%, respectively), while fish and shellfish were the main sources of Hg (76% and 17%). The estimated EDIs of OCs were well below the reported FAO/WHO tolerable intakes. However, the estimated weekly intake of Hg would exceed the Provisional Tolerable Weekly Intake indicated by EFSA if the only source of fish and seafood were the central western Mediterranean. Direct associations were observed in the Menorcan children between fish/shellfish consumption and hair concentrations of Hg, and between fish and meat consumption and 4,4'-DDT concentrations in venous serum. Copyright © 2017 Elsevier Inc. All rights reserved.
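
    A minimal sketch of an estimated daily intake (EDI) calculation of the kind described above, written in Python with hypothetical concentrations, consumption rates, and body weight rather than the study's measurements:

      # EDI = sum over food items of (concentration x daily consumption) / body weight.
      # All numbers below are placeholders for illustration.
      foods = {
          # food item: (concentration in ng/g wet weight, consumption in g/day)
          "fish":      (120.0, 40.0),
          "shellfish": (90.0, 10.0),
          "meat":      (15.0, 80.0),
      }
      body_weight_kg = 30.0  # example child body weight

      edi = sum(conc * grams for conc, grams in foods.values()) / body_weight_kg
      print(f"EDI = {edi:.1f} ng per kg body weight per day")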

  6. Global Burden of Leptospirosis: Estimated in Terms of Disability Adjusted Life Years

    PubMed Central

    Torgerson, Paul R.; Hagan, José E.; Costa, Federico; Calcagno, Juan; Kane, Michael; Martinez-Silveira, Martha S.; Goris, Marga G. A.; Stein, Claudia; Ko, Albert I.; Abela-Ridder, Bernadette

    2015-01-01

    Background Leptospirosis, a spirochaetal zoonosis, occurs in diverse epidemiological settings and affects vulnerable populations, such as rural subsistence farmers and urban slum dwellers. Although leptospirosis can cause life-threatening disease, no global burden of disease estimate in terms of Disability Adjusted Life Years (DALYs) is available. Methodology/Principal Findings We utilised the results of a parallel publication that reported global estimates of morbidity and mortality due to leptospirosis. We estimated Years of Life Lost (YLLs) from age- and gender-stratified mortality rates. Years of Life with Disability (YLDs) were developed from a simple disease model indicating likely sequelae. DALYs were estimated as the sum of YLLs and YLDs. The study suggested that globally approximately 2·90 million DALYs are lost per annum (UIs 1·25–4·54 million) from the approximately 1·03 million annual cases reported previously. Males are predominantly affected, with an estimated 2·33 million DALYs (UIs 0·98–3·69), or approximately 80% of the total burden. For comparison, this is over 70% of the global burden of cholera estimated by GBD 2010. Tropical regions of South and South-east Asia, the Western Pacific, Central and South America, and Africa had the highest estimated leptospirosis disease burden. Conclusions/Significance Leptospirosis imparts a significant health burden worldwide, which approaches or exceeds that encountered for a number of other zoonotic and neglected tropical diseases. The study findings indicate that the highest burden estimates occur in resource-poor tropical countries, including regions of Africa where the burden of leptospirosis has been under-appreciated and possibly misallocated to other febrile illnesses such as malaria. PMID:26431366
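
    The DALY arithmetic summarized above (DALYs as the sum of YLLs and YLDs) can be sketched as follows; every input number is a placeholder, not one of the study's estimates.

      # Hedged sketch of the DALY sum: YLL from deaths and remaining life
      # expectancy, YLD from cases, a disability weight, and illness duration.
      deaths = 50000                        # placeholder annual deaths
      years_lost_per_death = 30.0           # placeholder remaining life expectancy
      cases = 1.03e6                        # annual cases, order of magnitude from the abstract
      disability_weight = 0.2               # placeholder
      avg_duration_years = 0.1              # placeholder

      yll = deaths * years_lost_per_death
      yld = cases * disability_weight * avg_duration_years
      print(f"DALYs = {(yll + yld) / 1e6:.2f} million")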

  7. 34 CFR 673.5 - Overaward.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... loan or the FSEOG, combined with the other estimated financial assistance the student receives, does not exceed the student's financial need. (2) FWS Program. An institution may only award FWS employment to a student if the award, combined with the other estimated financial assistance the student...

  8. Chemical fractionation of Cu and Zn in stormwater, roadway dust and stormwater pond sediments

    USGS Publications Warehouse

    Camponelli, Kimberly M.; Lev, Steven M.; Snodgrass, Joel W.; Landa, Edward R.; Casey, Ryan E.

    2010-01-01

    This study evaluated the chemical fractionation of Cu and Zn from source to deposition in a stormwater system. Cu and Zn concentrations and chemical fractionation were determined for roadway dust, roadway runoff and pond sediments. Stormwater Cu and Zn concentrations were used to generate cumulative frequency distributions to characterize potential exposure to pond-dwelling organisms. Dissolved stormwater Zn exceeded USEPA acute and chronic water quality criteria in approximately 20% of storm samples and 20% of the storm duration sampled. Dissolved Cu exceeded the previously published chronic criterion in 75% of storm samples and duration and exceeded the acute criterion in 45% of samples and duration. The majority of sediment Cu (92–98%) occurred in the most recalcitrant phase, suggesting low bioavailability; Zn was substantially more available (39–62% recalcitrant). Most sediment concentrations for Cu and Zn exceeded published threshold effect concentrations and Zn often exceeded probable effect concentrations in surface sediments.

  9. Human cancer risk estimation for 1,3-butadiene: An assessment of personal exposure and different microenvironments.

    PubMed

    Huy, Lai Nguyen; Lee, Shun Cheng; Zhang, Zhuozhi

    2018-03-01

    This study estimated the lifetime cancer risk (LCR) attributable to 1,3-butadiene (BD) exposure, both personal exposure and exposure in other microenvironments, including residential homes, outdoors, in-office, in-vehicle, and dining. Detailed life expectancy by country (WHO) and inhalation rate and body weight by gender reported by USEPA were used for the calculation, focusing on the adult population (25≤Age<65). LCR estimates for the adult population due to personal exposure exceeded the USEPA benchmark of 1×10^-6 in many cities. For outdoor BD exposure, LCR estimates in 45 out of 175 cities/sites (26%) exceeded the USEPA benchmark. Of the top 20 cities with high LCR estimates, 19 were in developing countries: 14 in China, 3 in India, 1 in Chile, and 1 in Pakistan. One city in the United States was on the list because of nearby industrial facilities. The LCR calculations for BD levels found in residential home, in-vehicle and dining microenvironments also exceeded 1×10^-6 in some cities, while the LCR from in-office BD levels was the smallest. Four cities/regions with sufficient BD data were used to investigate source contributions to the total LCR. Home exposure contributed the most to the total LCR (ranging from 56% to 86%), followed by in-vehicle (4% to 38%) and dining (4% to 7%) exposure. The outdoor microenvironment contributed a relatively high share in Tianjin (6%), whereas offices contributed 2-3% in all cities. The high LCR estimates found in developing countries highlight the likely cancer risk caused by BD in other cities without available measurement data. Copyright © 2017 Elsevier B.V. All rights reserved.
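
    A hedged sketch of a generic lifetime cancer risk calculation of the kind described above, using the common intake-times-slope-factor form; the concentration, exposure factors, and slope factor are assumed placeholder values, not the study's inputs.

      # LCR = chronic daily intake (mg/kg-day, averaged over a lifetime) x slope factor.
      conc_mg_m3 = 0.5e-3          # 1,3-butadiene concentration (placeholder)
      inhalation_m3_day = 16.0     # adult inhalation rate (placeholder)
      body_weight_kg = 70.0        # placeholder
      exposure_years = 40          # adult exposure window (25 <= age < 65)
      lifetime_years = 75          # placeholder life expectancy used for averaging
      slope_factor = 0.6           # (mg/kg-day)^-1, illustrative only

      cdi = (conc_mg_m3 * inhalation_m3_day * exposure_years) / (body_weight_kg * lifetime_years)
      lcr = cdi * slope_factor
      print(f"LCR = {lcr:.1e} (compare with the 1e-6 benchmark)")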

  10. Estimate of tephra accumulation probabilities for the U.S. Department of Energy's Hanford Site, Washington

    USGS Publications Warehouse

    Hoblitt, Richard P.; Scott, William E.

    2011-01-01

    In response to a request from the U.S. Department of Energy, we estimate the thickness of tephra accumulation that has an annual probability of 1 in 10,000 of being equaled or exceeded at the Hanford Site in south-central Washington State, where a project to build the Tank Waste Treatment and Immobilization Plant is underway. We follow the methodology of a 1987 probabilistic assessment of tephra accumulation in the Pacific Northwest. For a given thickness of tephra, we calculate the product of three probabilities: (1) the annual probability of an eruption producing 0.1 km3 (bulk volume) or more of tephra, (2) the probability that the wind will be blowing toward the Hanford Site, and (3) the probability that tephra accumulations will equal or exceed the given thickness at a given distance. Mount St. Helens, which lies about 200 km upwind from the Hanford Site, has been the most prolific source of tephra fallout among Cascade volcanoes in the recent geologic past and its annual eruption probability based on this record (0.008) dominates assessment of future tephra falls at the site. The probability that the prevailing wind blows toward Hanford from Mount St. Helens is 0.180. We estimate exceedance probabilities of various thicknesses of tephra fallout from an analysis of 14 eruptions of the size expectable from Mount St. Helens and for which we have measurements of tephra fallout at 200 km. The result is that the estimated thickness of tephra accumulation that has an annual probability of 1 in 10,000 of being equaled or exceeded is about 10 centimeters. It is likely that this thickness is a maximum estimate because we used conservative estimates of eruption and wind probabilities and because the 14 deposits we used probably provide an over-estimate. The use of deposits in this analysis that were mostly compacted by the time they were studied and measured implies that the bulk density of the tephra fallout we consider here is in the range of 1,000-1,250 kg/m3. The load of 10 cm of such tephra fallout on a flat surface would therefore be in the range of 100-125 kg/m2; addition of water from rainfall or snowmelt would provide additional load.
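
    The probability product described in this abstract is straightforward to sketch; the first two factors below are quoted from the abstract, while the conditional thickness-exceedance probability is a placeholder chosen only to show how the three factors combine.

      # Annual probability that tephra accumulation equals or exceeds a given thickness,
      # as the product of three factors (sketch, not the report's computation).
      p_eruption = 0.008           # annual probability of a >= 0.1 km3 eruption (from the abstract)
      p_wind_toward_site = 0.180   # probability the wind blows toward Hanford (from the abstract)
      p_thickness_exceeded = 0.07  # placeholder: P(thickness exceeded at 200 km | eruption and wind)

      annual_prob = p_eruption * p_wind_toward_site * p_thickness_exceeded
      print(f"annual exceedance probability = {annual_prob:.1e}")  # on the order of 1 in 10,000

      # Load of a 10 cm deposit at the stated bulk density range of 1,000-1,250 kg/m3:
      for density in (1000, 1250):
          print(f"load at {density} kg/m3: {0.10 * density:.0f} kg/m2")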

  11. Calculating weighted estimates of peak streamflow statistics

    USGS Publications Warehouse

    Cohn, Timothy A.; Berenbrock, Charles; Kiang, Julie E.; Mason, Jr., Robert R.

    2012-01-01

    According to the Federal guidelines for flood-frequency estimation, the uncertainty of peak streamflow statistics, such as the 1-percent annual exceedance probability (AEP) flow at a streamgage, can be reduced by combining the at-site estimate with the regional regression estimate to obtain a weighted estimate of the flow statistic. The procedure assumes the estimates are independent, which is reasonable in most practical situations. The purpose of this publication is to describe and make available a method for calculating a weighted estimate from the uncertainty or variance of the two independent estimates.
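
    The weighting described above is inverse-variance weighting of two independent estimates; the sketch below shows the arithmetic with hypothetical at-site and regional-regression values and is not a substitute for the publication's full procedure.

      # Weighted estimate from two independent estimates and their variances
      # (values are hypothetical; log10 units are typical but not required).
      at_site_est, at_site_var = 3.20, 0.020      # e.g. log10 of the 1-percent AEP flow
      regional_est, regional_var = 3.05, 0.035

      weighted_est = (at_site_est * regional_var + regional_est * at_site_var) / (
          at_site_var + regional_var)
      weighted_var = (at_site_var * regional_var) / (at_site_var + regional_var)
      print(f"weighted estimate: {weighted_est:.3f}, variance: {weighted_var:.4f}")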

  12. A seismic hazard uncertainty analysis for the New Madrid seismic zone

    USGS Publications Warehouse

    Cramer, C.H.

    2001-01-01

    A review of the scientific issues relevant to characterizing earthquake sources in the New Madrid seismic zone has led to the development of a logic tree of possible alternative parameters. A variability analysis, using Monte Carlo sampling of this consensus logic tree, is presented and discussed. The analysis shows that for 2%-exceedance-in-50-year hazard, the best-estimate seismic hazard map is similar to previously published seismic hazard maps for the area. For peak ground acceleration (PGA) and spectral acceleration at 0.2 and 1.0 s (0.2 and 1.0 s Sa), the coefficient of variation (COV) representing the knowledge-based uncertainty in seismic hazard can exceed 0.6 over the New Madrid seismic zone and diminishes to about 0.1 away from areas of seismic activity. Sensitivity analyses show that the largest contributor to PGA, 0.2 and 1.0 s Sa seismic hazard variability is the uncertainty in the location of future 1811-1812 New Madrid-sized earthquakes. This is followed by the variability due to the choice of ground motion attenuation relation, the magnitude for the 1811-1812 New Madrid earthquakes, and the recurrence interval for M>6.5 events. Seismic hazard is not very sensitive to the variability in seismogenic width and length. Published by Elsevier Science B.V.

  13. Self Consistent Bathymetric Mapping Using Sub-maps: Survey Results From the TAG Hydrothermal Structure

    NASA Astrophysics Data System (ADS)

    Roman, C. N.; Reves-Sohn, R.; Singh, H.; Humphris, S.

    2005-12-01

    The spatial resolution of microbathymetry maps created using robotic vehicles such as ROVs, AUVs and manned submersibles in the deep ocean is currently limited by the accuracy of the vehicle navigation data. Errors in the vehicle position estimate commonly exceed the ranging errors of the acoustic mapping sensor itself, which creates inconsistency in the map-making process and produces artifacts that lower resolution and distort map integrity. We present a methodology for producing self-consistent maps and improving vehicle position estimation by exploiting accurate local navigation and utilizing terrain relative measurements. The complete map is broken down into individual "sub-maps", which are generated using short-term Doppler-based navigation. The sub-maps are pairwise registered to constrain the vehicle position estimates by matching terrain that has been imaged multiple times. This procedure is implemented using a delayed-state Kalman filter to incorporate the sub-map registrations as relative position measurements between previously visited vehicle locations. Archiving of previous positions in a filter state vector allows for continual adjustment of the sub-map locations. The terrain registration is accomplished using a two-dimensional correlation and a six-degree-of-freedom point cloud alignment method tailored to bathymetric data. This registration procedure is applicable to fully 3-dimensional complex underwater environments. The complete bathymetric map is then created from the union of all sub-maps that have been aligned in a consistent manner. The method is applied to an SM2000 multibeam survey of the TAG hydrothermal structure on the Mid-Atlantic Ridge at 26°N using the Jason II ROV. The survey included numerous crossing tracklines designed to test this algorithm, and the final gridded bathymetry data is sub-meter accurate. The high-resolution map has allowed for the identification of previously unrecognized fracture patterns associated with flow focusing at TAG, as well as imaging of fine-scale features such as individual sulfide talus blocks and ODP re-entry cones.

  14. A study of the cost-effective markets for new technology agricultural aircraft

    NASA Technical Reports Server (NTRS)

    Hazelrigg, G. A., Jr.; Clyne, F.

    1979-01-01

    A previously developed database was used to estimate the regional and total U.S. cost-effective markets for a new technology agricultural aircraft incorporating features that could result from NASA-sponsored aerial applications research. The results show that the long-term market penetration of a new technology aircraft would be near 3,000 aircraft, attained in approximately 20 years. Annual sales would be about 200 aircraft after 5 to 6 years of introduction. The net present value of the cost savings benefit which this aircraft would yield (measured on an infinite horizon basis) would be about $35 million at a 10 percent discount rate and $120 million at a 5 percent discount rate. At both discount rates the present value of cost savings exceeds the present value of the research and development (R&D) costs estimated for developing the technology base needed for the proposed aircraft. These results are quite conservative, as they were derived neglecting future growth in the agricultural aviation industry, which has been averaging about 12 percent per year over the past several years.

  15. A Second Generation Swirl-Venturi Lean Direct Injection Combustion Concept

    NASA Technical Reports Server (NTRS)

    Tacina, Kathleen M.; Chang, Clarence T.; He, Zhuohui Joe; Lee, Phil; Dam, Bidhan; Mongia, Hukam

    2014-01-01

    A low-NOx aircraft gas turbine engine combustion concept was developed and tested. The concept is a second generation swirl-venturi lean direct injection (SV-LDI) concept. LDI is a lean-burn combustion concept in which the fuel is injected directly into the flame zone. Three second generation SV-LDI configurations were developed. All three were based on the baseline 9-point SV-LDI configuration reported previously. These second generation configurations had better low power operability than the baseline 9-point configuration. Two of these second generation configurations were tested in a NASA Glenn Research Center flametube; these two configurations are called the flat dome and 5-recess configurations. Results show that the 5-recess configuration generally had lower NOx emissions than the flat dome configuration. Correlation equations were developed for the flat dome configuration so that the landing-takeoff NOx emissions could be estimated. The flat dome landing-takeoff NOx is estimated to be 87-88 percent below the CAEP/6 standards, exceeding the ERA project goal of a 75 percent reduction.

  16. Mapping critical loads of nitrogen deposition for aquatic ecosystems in the Rocky Mountains, USA

    USGS Publications Warehouse

    Nanus, Leora; Clow, David W.; Saros, Jasmine E.; Stephens, Verlin C.; Campbell, Donald H.

    2012-01-01

    Spatially explicit estimates of critical loads of nitrogen (N) deposition (CLNdep) for nutrient enrichment in aquatic ecosystems were developed for the Rocky Mountains, USA, using a geostatistical approach. The lowest CLNdep estimates (−1 yr−1) occurred in high-elevation basins with steep slopes, sparse vegetation, and abundance of exposed bedrock and talus. These areas often correspond with areas of high N deposition (>3 kg N ha−1 yr−1), resulting in CLNdep exceedances ≥1.5 ± 1 kg N ha−1 yr−1. CLNdep and CLNdep exceedances exhibit substantial spatial variability related to basin characteristics and are highly sensitive to the NO3− threshold at which ecological effects are thought to occur. Based on an NO3− threshold of 0.5 μmol L−1, N deposition exceeds CLNdep in 21 ± 8% of the study area; thus, broad areas of the Rocky Mountains may be impacted by excess N deposition, with greatest impacts at high elevations.

  17. Global Statistical Maps of Extreme-Event Magnetic Observatory 1 Min First Differences in Horizontal Intensity

    NASA Technical Reports Server (NTRS)

    Love, Jeffrey J.; Coïsson, Pierdavide; Pulkkinen, Antti

    2016-01-01

    Analysis is made of the long-term statistics of three different measures of ground-level, storm-time geomagnetic activity: instantaneous 1 min first differences in horizontal intensity ΔBh, the root-mean-square of 10 consecutive 1 min differences S, and the ramp change R over 10 min. Geomagnetic latitude maps of the cumulative exceedances of these three quantities are constructed, giving the threshold (nT/min) for which activity within a 24 h period can be expected to occur once per year, decade, and century. Specifically, at geomagnetic latitude 55°, we estimate once-per-century ΔBh, S, and R exceedances and a site-to-site, proportional, 1-standard-deviation range (1σ, lower and upper) to be, respectively, 1000 [690, 1450]; 500 [350, 720]; and 200 [140, 280] nT/min. At 40°, we estimate once-per-century ΔBh, S, and R exceedances and 1σ values to be 200 [140, 290]; 100 [70, 140]; and 40 [30, 60] nT/min.

  18. Microsecond Simulations of DNA and Ion Transport in Nanopores with Novel Ion-Ion and Ion-Nucleotides Effective Potentials

    PubMed Central

    De Biase, Pablo M.; Markosyan, Suren; Noskov, Sergei

    2014-01-01

    We developed a novel scheme based on Grand-Canonical Monte-Carlo/Brownian Dynamics (GCMC/BD) simulations and have extended it to studies of ion currents across three nanopores with potential for ssDNA sequencing: the solid-state nanopore Si3N4, α-hemolysin, and the E111N/M113Y/K147N mutant. To describe nucleotide-specific ion dynamics compatible with a coarse-grained ssDNA model, we used the Inverse Monte-Carlo protocol, which maps the relevant ion-nucleotide distribution functions from all-atom MD simulations. Combined with the previously developed simulation platform for Brownian Dynamics (BD) simulations of ion transport, it allows for microsecond- and millisecond-long simulations of ssDNA dynamics in a nanopore with a conductance computation accuracy that equals or exceeds that of all-atom MD simulations. In spite of the simplifications, the protocol produces results that agree with previous studies of ion conductance across open channels and provides direct correlations with experimentally measured blockade currents and with ion conductances estimated from all-atom MD simulations. PMID:24738152

  19. The 25 percent-efficient GaAs Cassegrainian concentrator cell

    NASA Technical Reports Server (NTRS)

    Hamaker, H. C.; Grounner, M.; Kaminar, N. R.; Kuryla, M. S.; Ladle, M. J.; Liu, D. D.; Macmillan, H. F.; Partain, L. D.; Virshup, G. F.; Werthen, J. G.

    1989-01-01

    Very high-efficiency GaAs Cassegrainian solar cells have been fabricated in both the n-p and p-n configurations. The n-p configuration exhibits the highest efficiency at concentration, the best cells having an efficiency eta of 24.5 percent (100X, AM0, temperature T = 28 C). Although the cells are designed for operation at this concentration, peak efficiency is observed near 300 suns (eta = 25.1 percent). To our knowledge, this is the highest reported solar cell efficiency for space applications. The improvement in efficiency over that reported at the previous SPRAT conference is attributed primarily to lower series resistance and improved grid-line plating procedures. Using previously measured temperature coefficients, researchers estimate that the n-p GaAs cells should deliver approximately 22.5 percent efficiency at the operating conditions of 100 suns and T = 80 C. This performance exceeds the NASA program goal of 22 percent for the Cassegrainian cell. One hundred Cassegrainian cells have been sent to NASA as deliverables, sixty-eight in the n-p configuration and thirty-two in the p-n configuration.

  1. Shorebird abundance and distribution on the coastal plain of the Arctic National Wildlife Refuge

    USGS Publications Warehouse

    Brown, S.; Bart, J.; Lanctot, Richard B.; Johnson, J.A.; Kendall, S.; Payer, D.; Johnson, J.

    2007-01-01

    The coastal plain of the Arctic National Wildlife Refuge hosts seven species of migratory shorebirds listed as highly imperiled or high priority by the U.S. Shorebird Conservation Plan and five species listed as Birds of Conservation Concern by the U.S. Fish and Wildlife Service. During the first comprehensive shorebird survey of the 674 000 ha "1002 Area" on the coastal plain, we recorded 14 species of breeding shorebirds at 197 rapidly surveyed plots during June 2002 and 2004. We also estimated detection ratios with a double counting technique, using data collected at 37 intensively studied plots located on the North Slope of Alaska and northern Canada. We stratified the study area by major habitat types, including wetlands, moist areas, uplands, and riparian areas, using previously classified Landsat imagery. We developed population estimates with confidence limits by species, and estimated the total number of shorebirds in the study area to be 230 000 (95% CI: 104 000-363 000), which exceeds the biological criterion for classification as both a Western Hemisphere Shorebird Reserve Network Site of International Importance (100 000 birds) and a Ramsar Wetland of International Importance (20 000 birds), even when conservatively estimated. Species richness and the density of many species were highest in wetland or riparian habitats, which are clustered along the coast. © The Cooper Ornithological Society 2007.

  2. Peak flow regression equations For small, ungaged streams in Maine: Comparing map-based to field-based variables

    USGS Publications Warehouse

    Lombard, Pamela J.; Hodgkins, Glenn A.

    2015-01-01

    Regression equations to estimate peak streamflows with 1- to 500-year recurrence intervals (annual exceedance probabilities from 99 to 0.2 percent, respectively) were developed for small, ungaged streams in Maine. Equations presented here are the best available equations for estimating peak flows at ungaged basins in Maine with drainage areas from 0.3 to 12 square miles (mi2). Previously developed equations continue to be the best available equations for estimating peak flows for basin areas greater than 12 mi2. New equations presented here are based on streamflow records at 40 U.S. Geological Survey streamgages with a minimum of 10 years of recorded peak flows between 1963 and 2012. Ordinary least-squares regression techniques were used to determine the best explanatory variables for the regression equations. Traditional map-based explanatory variables were compared to variables requiring field measurements. Two field-based variables—culvert rust lines and bankfull channel widths—either were not commonly found or did not explain enough of the variability in the peak flows to warrant inclusion in the equations. The best explanatory variables were drainage area and percent basin wetlands; values for these variables were determined with a geographic information system. Generalized least-squares regression was used with these two variables to determine the equation coefficients and estimates of accuracy for the final equations.

  3. Estimating risks for water-quality exceedances of total-copper from highway and urban runoff under predevelopment and current conditions with the Stochastic Empirical Loading and Dilution Model (SELDM)

    USGS Publications Warehouse

    Granato, Gregory E.; Jones, Susan C.; Dunn, Christopher N.; Van Weele, Brian

    2017-01-01

    The stochastic empirical loading and dilution model (SELDM) was used to demonstrate methods for estimating risks for water-quality exceedances of event-mean concentrations (EMCs) of total-copper. Monte Carlo methods were used to simulate stormflow, total-hardness, suspended-sediment, and total-copper EMCs as stochastic variables. These simulations were done for the Charles River Basin upstream of Interstate 495 in Bellingham, Massachusetts. The hydrology and water quality of this site were simulated with SELDM by using data from nearby, hydrologically similar sites. Three simulations were done to assess the potential effects of the highway on receiving-water quality with and without highway-runoff treatment by a structural best-management practice (BMP). In the low-development scenario, total copper in the receiving stream was simulated by using a sediment transport curve, sediment chemistry, and sediment-water partition coefficients. In this scenario, neither the highway runoff nor the BMP effluent caused concentration exceedances in the receiving stream that exceed the once in three-year threshold (about 0.54 percent). In the second scenario, without the highway, runoff from the large urban areas in the basin caused exceedances in the receiving stream in 2.24 percent of runoff events. In the third scenario, which included the effects of the urban runoff, neither the highway runoff nor the BMP effluent increased the percentage of exceedances in the receiving stream. Comparison of the simulated geometric mean EMCs with data collected at a downstream monitoring site indicates that these simulated values are within the 95-percent confidence interval of the geometric mean of the measured EMCs.
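
    A simplified Monte Carlo sketch of the exceedance-counting idea described above; the distributions, flow-weighted mixing rule, and criterion value are placeholders, and this is not SELDM itself.

      import numpy as np

      # Simulate event-mean concentrations (EMCs) for upstream water and highway runoff,
      # mix them by flow, and count how often the downstream EMC exceeds a criterion.
      rng = np.random.default_rng(7)
      n_events = 20000

      upstream_flow = rng.lognormal(mean=2.0, sigma=0.8, size=n_events)   # arbitrary units
      runoff_flow = rng.lognormal(mean=0.0, sigma=0.9, size=n_events)
      upstream_cu = rng.lognormal(mean=0.5, sigma=0.6, size=n_events)     # ug/L total copper (placeholder)
      runoff_cu = rng.lognormal(mean=2.5, sigma=0.7, size=n_events)

      downstream_cu = (upstream_flow * upstream_cu + runoff_flow * runoff_cu) / (
          upstream_flow + runoff_flow)
      criterion = 9.0   # placeholder criterion, ug/L
      print(f"percent of events exceeding criterion: {100 * np.mean(downstream_cu > criterion):.2f}%")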

  4. Risk assessment of water quality in three North Carolina, USA, streams supporting federally endangered freshwater mussels (Unionidae)

    USGS Publications Warehouse

    Ward, S.; Augspurger, T.; Dwyer, F.J.; Kane, C.; Ingersoll, C.G.

    2007-01-01

    Water quality data were collected from three drainages supporting the endangered Carolina heelsplitter (Lasmigona decorata) and dwarf wedgemussel (Alasmidonta heterodon) to determine the potential for impaired water quality to limit the recovery of these freshwater mussels in North Carolina, USA. Total recoverable copper, total residual chlorine, and total ammonia nitrogen were measured every two months for approximately a year at sites bracketing wastewater sources and mussel habitat. These data and state monitoring datasets were compared with ecological screening values, including estimates of chemical concentrations likely to be protective of mussels, and federal ambient water quality criteria to assess site risks following a hazard quotient approach. In one drainage, the site-specific ammonia ecological screening value for acute exposures was exceeded in 6% of the samples, and 15% of samples exceeded the chronic ecological screening value; however, ammonia concentrations were generally below levels of concern in other drainages. In all drainages, copper concentrations were higher than ecological screening values most frequently (exceeding the ecological screening values for acute exposures in 65-94% of the samples). Chlorine concentrations exceeding the acute water quality criterion were observed in 14 and 35% of samples in two of three drainages. The ecological screening values were exceeded most frequently in Goose Creek and the Upper Tar River drainages; concentrations rarely exceeded ecological screening values in the Swift Creek drainage except for copper. The site-specific risk assessment approach provides valuable information (including site-specific risk estimates and ecological screening values for protection) that can be applied through regulatory and nonregulatory means to improve water quality for mussels where risks are indicated and pollutant threats persist. © 2007 SETAC.
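
    The hazard-quotient screening described above divides each measured concentration by an ecological screening value; a minimal sketch with hypothetical numbers follows.

      # Hazard quotient HQ = measured concentration / screening value; HQ > 1 flags
      # a potential concern. Concentrations and screening values are placeholders.
      samples_ug_per_l = {
          "copper":    [3.1, 5.4, 2.2, 7.9, 4.0],
          "ammonia_n": [120.0, 450.0, 90.0, 300.0, 60.0],
      }
      screening_values = {"copper": 2.7, "ammonia_n": 350.0}

      for analyte, values in samples_ug_per_l.items():
          hqs = [v / screening_values[analyte] for v in values]
          pct_exceed = 100 * sum(hq > 1 for hq in hqs) / len(hqs)
          print(f"{analyte}: max HQ = {max(hqs):.1f}, {pct_exceed:.0f}% of samples exceed")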

  5. Streambed scour evaluations and conditions at selected bridge sites in Alaska, 2012

    USGS Publications Warehouse

    Beebee, Robin A.; Schauer, Paul V.

    2015-11-19

    Vertical contraction and pressure flow occurred during 1 percent or smaller annual exceedance probability floods at five sites, including three aggradation sites. Contraction scour exceeded 5 feet at two sites, and total scour at piers (pier scour plus contraction scour) exceeded 5 feet at two sites. Debris accumulation increased calculated pier scour at six sites by an average of 1.2 feet. Total scour at abutments including contraction scour exceeded 5 feet at seven sites. Scour estimates seemed excessive at aggradation sites where upstream sediment supply controls scour and deposition processes, at cohesive soil sites where conservative assumptions were made for soil strength and flood duration, and for abutment scour at sites where failure of the embankment and attendant channel widening would reduce scour.

  6. Shifting the bell curve: the benefits and costs of raising student achievement.

    PubMed

    Yeh, Stuart S

    2009-02-01

    Benefit-cost analysis was conducted to estimate the increase in earnings, increased tax revenues, value of less crime, and reductions in welfare costs attributable to nationwide implementation of rapid assessment, a promising intervention for raising student achievement in math and reading. Results suggest that social benefits would exceed total social costs by a ratio of 28. Fiscal benefits to the federal government would exceed costs to the federal treasury by a ratio of 93. Social benefits would exceed costs to each state treasury by a ratio no lower than 286, and fiscal benefits would exceed costs to each state treasury by a ratio no lower than 5, for all but two state treasuries. Sensitivity analyses suggest that the findings are robust to a 5-fold change in the underlying parameters.

  7. Comparison of LiDAR- and photointerpretation-based estimates of canopy cover

    Treesearch

    Demetrios Gatziolis

    2012-01-01

    An evaluation of the agreement between photointerpretation- and LiDAR-based estimates of canopy cover was performed using 397 90 x 90 m reference areas in Oregon. It was determined that at low canopy cover levels LiDAR estimates tend to exceed those from photointerpretation and that this tendency reverses at high canopy cover levels. Characteristics of the airborne...

  8. Contribution of prepregnancy body mass index and gestational weight gain to adverse neonatal outcomes: population attributable fractions for Canada.

    PubMed

    Dzakpasu, Susie; Fahey, John; Kirby, Russell S; Tough, Suzanne C; Chalmers, Beverley; Heaman, Maureen I; Bartholomew, Sharon; Biringer, Anne; Darling, Elizabeth K; Lee, Lily S; McDonald, Sarah D

    2015-02-05

    Low or high prepregnancy body mass index (BMI) and inadequate or excess gestational weight gain (GWG) are associated with adverse neonatal outcomes. This study estimates the contribution of these risk factors to preterm births (PTBs), small-for-gestational age (SGA) and large-for-gestational age (LGA) births in Canada compared to the contribution of prenatal smoking, a recognized perinatal risk factor. We analyzed data from the Canadian Maternity Experiences Survey. A sample of 5,930 women who had a singleton live birth in 2005-2006 was weighted to a nationally representative population of 71,200 women. From adjusted odds ratios, we calculated population attributable fractions to estimate the contribution of BMI, GWG and prenatal smoking to PTB, SGA and LGA infants overall and across four obstetric groups. Overall, 6% of women were underweight (<18.5 kg/m2) and 34.4% were overweight or obese (≥25.0 kg/m2). More than half (59.4%) gained above the recommended weight for their BMI, 18.6% gained less than the recommended weight and 10.4% smoked prenatally. Excess GWG contributed more to adverse outcomes than BMI, contributing to 18.2% of PTB and 15.9% of LGA. Although the distribution of BMI and GWG was similar across obstetric groups, their impact was greater among primigravid women and multigravid women without a previous PTB or pregnancy loss. The contributions of BMI and GWG to PTB and SGA exceeded that of prenatal smoking. Maternal weight, and GWG in particular, contributes significantly to the occurrence of adverse neonatal outcomes in Canada. Indeed, this contribution exceeds that of prenatal smoking for PTB and SGA, highlighting its public health importance.
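
    A hedged sketch of a population attributable fraction calculation in the spirit of the study, using Levin's formula with the exposure prevalence for excess gestational weight gain quoted in the abstract and a hypothetical adjusted odds ratio treated as an approximate relative risk.

      # PAF = p(RR - 1) / (1 + p(RR - 1)), with the odds ratio standing in for RR.
      def paf_levin(prevalence: float, relative_risk: float) -> float:
          return prevalence * (relative_risk - 1) / (1 + prevalence * (relative_risk - 1))

      excess_gwg_prevalence = 0.594   # 59.4% gained above recommendations (from the abstract)
      assumed_or = 1.35               # hypothetical adjusted odds ratio
      print(f"PAF = {100 * paf_levin(excess_gwg_prevalence, assumed_or):.1f}%")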

  9. The massive halos of spiral galaxies

    NASA Technical Reports Server (NTRS)

    Zaritsky, Dennis; White, Simon D. M.

    1994-01-01

    We use a sample of satellite galaxies to demonstrate the existence of extended massive dark halos around spiral galaxies. Isolated spirals with rotation velocities near 250 km/s have a typical halo mass within 200 kpc of 1.5-2.6 x 10^12 solar masses (90% confidence range for H0 = 75 km/s/Mpc). This result is most easily derived using standard mass estimator techniques, but such techniques do not account for the strong observational selection effects in the sample, nor for the extended mass distributions that the data imply. These complications can be addressed using scale-free models similar to those previously employed to study binary galaxies. When satellite velocities are assumed isotropic, both methods imply massive and extended halos. However, the derived masses depend sensitively on the assumed shape of satellite orbits. Furthermore, both methods ignore the fact that many of the satellites in the sample have orbital periods comparable to the Hubble time. The orbital phases of such satellites cannot be random, and their distribution in radius cannot be freely adjusted; rather these properties reflect ongoing infall onto the outer halos of their primaries. We use detailed dynamical models for halo formation to evaluate these problems, and we devise a maximum likelihood technique for estimating the parameters of such models from the data. The most strongly constrained parameter is the mass within 200-300 kpc, giving the confidence limits quoted above. The eccentricity, e, of satellite orbits is also strongly constrained, 0.50 < e < 0.88 at 90% confidence, implying a near-isotropic distribution of satellite velocities. The cosmic density parameter in the vicinity of our isolated halos exceeds 0.13 at 90% confidence, with preferred values exceeding 0.3.

  10. Simulation analysis of the unconfined aquifer, Raft River geothermal area, Idaho-Utah

    USGS Publications Warehouse

    Nichols, William D.

    1979-01-01

    This study covers about 1,000 mi2 (2,600 km2 ) of the southern Raft River drainage basin in south-central Idaho and northwest Utah. The main area of interest, approximately 200 mi2 (520 km2 ) of semiarid agricultural and rangeland in the southern Raft River Valley that includes the known Geothermal Resource Area near Bridge, Idaho, was modelled numerically to evaluate the hydrodynamics of the unconfined aquifer. Computed and estimated transmissivity values range from 1,200 feet squared per day (110 meters squared per day) to 73,500 feet squared per day (6,830 meters squared per day). Water budgets, including ground-water recharge and discharge for approximate equilibrium conditions, have been computed by several previous investigators; their estimates of available ground-water recharge range from about 46,000 acre-feet per year (57 cubic hectometers per year) to 100,000 acre-feet per year (123 cubic hectometers per year).Simulation modeling of equilibrium conditions represented by 1952 water levels suggests: (1) recharge to the water-table aquifer is about 63,000 acre-feet per year (77 cubic hectometers per year); (2) a significant volume of ground water is discharged through evapotranspiration by phreatophytes growing on the valley bottomlands; (3) the major source of recharge may be from upward leakage of water from a deeper, confined reservoir; and (4) the aquifer transmissivity probably does not exceed about 12,000 feet squared per day (3,100 meters squared per day). Additional analysis carried out by simulating transient conditions from 1952 to 1965 strongly suggests that aquifer transmissivity does not exceed about 7,700 feet squared per day (700 meters squared per day). The model was calibrated using slightly modified published pumpage data; it satisfactorily reproduced the historic water-level decline over the period 1952-65.

  11. Sampling and evaluation of specific absorption rates during patient examinations performed on 1.5-Tesla MR systems.

    PubMed

    Brix, G; Reinl, M; Brinker, G

    2001-07-01

    The purpose of the present study was to evaluate a large number of exposure-time courses measured during routine clinical patient examinations in relation to the current IEC standard and the draft version of the revised standard and, moreover, to investigate whether there is a correlation between the patients' subjective heat perception during the MR examination and the intensity of RF power deposition. To this end, the radiofrequency exposure of 591 patients undergoing MR examinations performed on 1.5-Tesla MR systems was monitored in five clinics and evaluated in accordance with both IEC standards. For each of the 7902 sequences applied, whole body and partial body SARs were estimated on the basis of a simple patient model. Following the examinations, 149 patients were willing to provide information in a questionnaire regarding their body weight and their subjective heat perception during the examination. Although patient masses entered into the MR system were in some cases too high, reliable masses could be estimated by the SAR monitor. In relation to our data, the revision of the IEC standard results in a tightening of the restrictions, but still more than 96% of the examinations did not exceed the SAR limits recommended for the normal operating mode. For the exposure conditions examined, no statistically significant correlation was found between the subjective heat perception of the patients and the intensity of power deposition. Taking advantage of the possibility to compute running SAR averages, MR sequences for which SAR levels exceed the defined IEC limits can be employed in clinical practice, provided that the acquisition time is short in relation to the averaging period and energy deposition has been low prior to the applied high-power sequence.

  12. Ecological Risk Assessment of Perfluooroctane Sulfonate (PFOS) to Aquatic Fauna From a Bayou Adjacent to Former Fire Training Areas at a U.S. Air Force Installation.

    PubMed

    Salice, Christopher J; Anderson, Todd A; Anderson, Richard H; Olson, Adric D

    2018-04-25

    Per- and polyfluoroalkyl substances (PFASs) continue to receive significant attention, with particular concern for PFASs such as perfluorooctane sulfonate (PFOS), which was a constituent of aqueous film-forming foam used widely as a fire suppressant for aircraft since the 1970s. We were interested in the potential for risk to ecological receptors inhabiting Cooper Bayou, which is adjacent to two former fire-training areas (FTAs) at Barksdale Air Force Base, LA. Previous research showed higher PFOS concentrations in surface water and biota from Cooper Bayou compared to reference sites. To estimate risk, we compared surface water concentrations from multiple sites within Cooper Bayou to several PFOS chronic toxicity benchmarks for freshwater aquatic organisms (∼0.4-5.1 µg PFOS/L) and found exceedance probabilities from 0.04 to 0.5, suggesting a potential for adverse effects in the most contaminated habitats. A tissue residue assessment similarly showed some exceedance of benchmarks but with a lower probability (max = 0.17). Both FTAs have been inactive for more than a decade, so exposures (and, thus, risks) are expected to decline. Several uncertainties limit confidence in our risk estimates, including highly dynamic surface water concentrations and limited chronic toxicity data for relevant species. Also, we have little data concerning organisms higher in the food chain, which may receive higher lifetime exposures given the potential for PFOS to bioaccumulate and the longevity of many of these organisms. Overall, this study suggests PFOS can occur at concentrations that may cause adverse effects to ecological receptors, although additional, focused research is needed to reduce uncertainties. This article is protected by copyright. All rights reserved.

  13. Streambed scour evaluations and conditions at selected bridge sites in Alaska, 2013–15

    USGS Publications Warehouse

    Beebee, Robin A.; Dworsky, Karenth L.; Knopp, Schyler J.

    2017-12-27

    Streambed scour potential was evaluated at 52 river- and stream-spanning bridges in Alaska that lack a quantitative scour analysis or have unknown foundation details. All sites were evaluated for stream stability and long-term scour potential. Contraction scour and abutment scour were calculated for 52 bridges, and pier scour was calculated for 11 bridges that had piers. Vertical contraction (pressure flow) scour was calculated for sites where the modeled water surface was higher than the superstructure of the bridge. In most cases, hydraulic models of the 1- and 0.2-percent annual exceedance probability floods (also known as the 100- and 500-year floods, respectively) were used to derive hydraulic variables for the scour calculations. Alternate flood values were used in scour calculations for sites where smaller floods overtopped a bridge or where standard flood-frequency estimation techniques did not apply. Scour also was calculated for large recorded floods at 13 sites. Channel instability at 11 sites was related to human activities (in-channel mining, dredging, and channel relocation). Eight of the dredged sites are located on active unstable alluvial fans and were graded to protect infrastructure. The trend toward aggradation during major floods at these sites reduces confidence in scour estimates. Vertical contraction and pressure flow occurred during the 0.2-percent or smaller annual exceedance probability floods at eight sites. Contraction scour exceeded 5 feet (ft) at four sites, and total scour at piers (pier scour plus contraction scour) exceeded 5 ft at four sites. Debris accumulation increased calculated pier scour at six sites by an average of 2.4 ft. Total scour at abutments exceeded 5 ft at 10 sites. Scour estimates seemed excessive at two piers where equations did not account for channel armoring, and at four abutments where failure of the embankment and attendant channel widening would reduce scour.

  14. Dietary intake and food sources of added sugar in the Australian population.

    PubMed

    Lei, Linggang; Rangan, Anna; Flood, Victoria M; Louie, Jimmy Chun Yu

    2016-03-14

    Previous studies in Australian children/adolescents and adults examining added sugar (AS) intake were based on now out-of-date national surveys. We aimed to examine the AS and free sugar (FS) intakes and the main food sources of AS among Australians, using plausible dietary data collected by a multiple-pass, 24-h recall, from the 2011-12 Australian Health Survey respondents (n 8202). AS and FS intakes were estimated using a previously published method, and as defined by the WHO, respectively. Food groups contributing to the AS intake were described and compared by age group and sex by one-way ANOVA. Linear regression was used to test for trends across age groups. Usual intake of FS (as percentage energy (%EFS)) was computed using a published method and compared with the WHO cut-off of <10%EFS. The mean AS intake of the participants was 60·3 (SD 52·6) g/d. Sugar-sweetened beverages accounted for the greatest proportion of the AS intake of the Australian population (21·4 (sd 30·1)%), followed by sugar and sweet spreads (16·3 (SD 24·5)%) and cakes, biscuits, pastries and batter-based products (15·7 (sd 24·4)%). More than half of the study population exceeded the WHO's cut-off for FS, especially children and adolescents. Overall, 80-90% of the daily AS intake came from high-sugar energy-dense and/or nutrient-poor foods. To conclude, the majority of Australian adults and children exceed the WHO recommendation for FS intake. Efforts to reduce AS intake should focus on energy-dense and/or nutrient-poor foods.

  15. Estimation of Second Primary Cancer Risk After Treatment with Radioactive Iodine for Differentiated Thyroid Carcinoma.

    PubMed

    Corrêa, Nilton Lavatori; de Sá, Lidia Vasconcellos; de Mello, Rossana Corbo Ramalho

    2017-02-01

    An increase in the incidence of second primary cancers is the late effect of greatest concern that could occur in differentiated thyroid carcinoma (DTC) patients treated with radioactive iodine (RAI). The decision to treat a patient with RAI should therefore incorporate a careful risk-benefit analysis. The objective of this work was to adapt the risk-estimation models developed by the Biological Effects of Ionizing Radiation Committee to local epidemiological characteristics in order to assess the carcinogenesis risk from radiation in a population of Brazilian DTC patients treated with RAI. Absorbed radiation doses in critical organs were also estimated to determine whether they exceeded the thresholds for deterministic effects. A total of 416 DTC patients treated with RAI were retrospectively studied. Four organs were selected for absorbed dose estimation and subsequent calculation of carcinogenic risk: the kidney, stomach, salivary glands, and bone marrow. Absorbed doses were calculated by dose factors (absorbed dose per unit activity administered) previously established and based on standard human models. The lifetime attributable risk (LAR) of incidence of cancer as a function of age, sex, and organ-specific dose was estimated, relating it to the activity of RAI administered in the initial treatment. The salivary glands received the greatest absorbed doses of radiation, followed by the stomach, kidney, and bone marrow. None of these, however, surpassed the threshold for deterministic effects for a single administration of RAI. Younger patients received the same level of absorbed dose in the critical organs as older patients did. The lifetime attributable risk for stomach cancer incidence was by far the highest, followed in descending order by salivary-gland cancer, leukemia, and kidney cancer. RAI in a single administration is safe in terms of deterministic effects because even high-administered activities do not result in absorbed doses that exceed the thresholds for significant tissue reactions. The Biological Effects of Ionizing Radiation Committee mathematical models are a practical method of quantifying the risks of a second primary cancer, demonstrating a marked decrease in risk for younger patients with the administration of lower RAI activities and suggesting that only the smallest activities necessary to promote an effective ablation should be administered in low-risk DTC patients.

  16. Host factors governing resistance to Rhizoctonia solani

    USDA-ARS?s Scientific Manuscript database

    In the state of Washington, USA, annual losses of wheat attributed to soilborne necrotrophic fungal pathogens, such as Rhizoctonia solani, are estimated to be over US$100 million, and global estimates exceed US$1 billion. Host genetic resistance is a sustainable means of disease control that can be ...

  17. Flooding in the Northeastern United States, 2011

    USGS Publications Warehouse

    Suro, Thomas P.; Roland, Mark A.; Kiah, Richard G.

    2015-12-31

    Annual exceedance probabilities (AEPs) for 327 streamgages in the Northeastern United States were computed using annual peak streamflow data through 2011 and are included in this report. The 2011 peak streamflow for 129 of those streamgages was estimated to have an AEP of less than or equal to 1 percent. Almost 100 of these peak streamflows were a result of the flooding associated with Hurricane Irene in late August 2011. More extreme than the 1-percent AEP is the 0.2-percent AEP: the USGS recorded peak streamflows at 31 streamgages that equaled or exceeded the estimated 0.2-percent AEP during 2011. Collectively, the USGS recorded peak streamflows having estimated AEPs of less than 1 percent in Connecticut, Delaware, Maine, Maryland, Massachusetts, Ohio, Pennsylvania, New Hampshire, New Jersey, New York, and Vermont, and new period-of-record peak streamflows were recorded at more than 180 streamgages as a result of the floods of 2011.

  18. Methods for estimating magnitude and frequency of 1-, 3-, 7-, 15-, and 30-day flood-duration flows in Arizona

    USGS Publications Warehouse

    Kennedy, Jeffrey R.; Paretti, Nicholas V.; Veilleux, Andrea G.

    2014-01-01

    Regression equations, which allow predictions of n-day flood-duration flows for selected annual exceedance probabilities at ungaged sites, were developed using generalized least-squares regression and flood-duration flow frequency estimates at 56 streamgaging stations within a single, relatively uniform physiographic region in the central part of Arizona, between the Colorado Plateau and Basin and Range Province, called the Transition Zone. Drainage area explained most of the variation in the n-day flood-duration annual exceedance probabilities, but mean annual precipitation and mean elevation were also significant variables in the regression models. Standard error of prediction for the regression equations varies from 28 to 53 percent and generally decreases with increasing n-day duration. Outside the Transition Zone there are insufficient streamgaging stations to develop regression equations, but flood-duration flow frequency estimates are presented at select streamgaging stations.

  19. Estimating Adolescent Risk for Hearing Loss Based on Data From a Large School-Based Survey

    PubMed Central

    Verschuure, Hans; van der Ploeg, Catharina P. B.; Brug, Johannes; Raat, Hein

    2010-01-01

    Objectives. We estimated whether and to what extent a group of adolescents were at risk of developing permanent hearing loss as a result of voluntary exposure to high-volume music, and we assessed whether such exposure was associated with hearing-related symptoms. Methods. In 2007, 1512 adolescents (aged 12–19 years) in Dutch secondary schools completed questionnaires about their music-listening behavior and whether they experienced hearing-related symptoms after listening to high-volume music. We used their self-reported data in conjunction with published average sound levels of music players, discotheques, and pop concerts to estimate their noise exposure, and we compared that exposure to our own “loosened” (i.e., less strict) version of current European safety standards for occupational noise exposure. Results. About half of the adolescents exceeded safety standards for occupational noise exposure. About one third of the respondents exceeded safety standards solely as a result of listening to MP3 players. Hearing symptoms that occurred after using an MP3 player or going to a discotheque were associated with exposure to high-volume music. Conclusions. Adolescents often exceeded current occupational safety standards for noise exposure, highlighting the need for specific safety standards for leisure-time noise exposure. PMID:20395587
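
    The comparison with occupational-style limits can be illustrated with a small sketch that energy-averages self-reported weekly listening hours at assumed sound levels into an equivalent continuous level over a 40-hour reference week and checks it against an assumed 80 dB(A) limit. The reference week, the limit and the example levels are assumptions, not the paper's exact criteria.

    ```python
    import math

    # Sketch (not the paper's exact procedure): combine self-reported weekly hours
    # at assumed average sound levels into an equivalent continuous level over a
    # 40-hour reference week, then compare with an assumed 80 dB(A) limit.

    REFERENCE_LIMIT_DBA = 80.0   # assumed occupational-style action level
    REFERENCE_HOURS = 40.0       # assumed occupational reference week

    def weekly_equivalent_level(exposures):
        """exposures: list of (level_dBA, hours_per_week). Energy-average over 40 h."""
        energy = sum(hours * 10 ** (level / 10.0) for level, hours in exposures)
        return 10.0 * math.log10(energy / REFERENCE_HOURS)

    if __name__ == "__main__":
        # Made-up adolescent profile: MP3 player, discotheque, pop concert.
        profile = [(85.0, 10.0), (100.0, 3.0), (105.0, 1.0)]
        leq = weekly_equivalent_level(profile)
        print(f"Weekly equivalent level: {leq:.1f} dB(A); exceeds limit: {leq > REFERENCE_LIMIT_DBA}")
    ```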

  20. A probabilistic risk assessment for deployed military personnel after the implementation of the "Leishmaniasis Control Program" at Tallil Air Base, Iraq.

    PubMed

    Schleier, Jerome J; Davis, Ryan S; Barber, Loren M; Macedo, Paula A; Peterson, Robert K D

    2009-05-01

    Leishmaniasis has been of concern to the U.S. military and has re-emerged in importance because of recent deployments to the Middle East. We conducted a retrospective probabilistic risk assessment for military personnel potentially exposed to insecticides during the "Leishmaniasis Control Plan" (LCP) undertaken in 2003 at Tallil Air Base, Iraq. We estimated acute and subchronic risks from resmethrin, malathion, piperonyl butoxide (PBO), and pyrethrins applied using a truck-mounted ultra-low-volume (ULV) sprayer and lambda-cyhalothrin, cyfluthrin, bifenthrin, chlorpyrifos, and cypermethrin used for residual sprays. We used the risk quotient (RQ) method for our risk assessment (estimated environmental exposure/toxic endpoint) and set the RQ level of concern (LOC) at 1.0. Acute RQs for truck-mounted ULV and residual sprays ranged from 0.00007 to 33.3 at the 95th percentile. Acute exposure to lambda-cyhalothrin, bifenthrin, and chlorpyrifos exceeded the RQ LOC. Subchronic RQs for truck-mounted ULV and residual sprays ranged from 0.00008 to 32.8 at the 95th percentile. Subchronic exposures to lambda-cyhalothrin and chlorpyrifos exceeded the LOC. However, estimated exposures to lambda-cyhalothrin, bifenthrin, and chlorpyrifos did not exceed their respective no observed adverse effect levels.
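
    A minimal sketch of the risk quotient screening step described above: RQ = estimated environmental exposure / toxic endpoint, compared with a level of concern of 1.0. The exposure and endpoint values in the example are placeholders, not values from the assessment.

    ```python
    # Minimal sketch of the risk quotient (RQ) screening step:
    # RQ = estimated exposure / toxicological endpoint, flagged against a
    # level of concern (LOC) of 1.0. Example values are placeholders.

    LOC = 1.0

    def risk_quotient(estimated_exposure, toxic_endpoint):
        """Both arguments must share the same units (e.g., mg/kg body weight/day)."""
        return estimated_exposure / toxic_endpoint

    def exceeds_loc(estimated_exposure, toxic_endpoint, loc=LOC):
        return risk_quotient(estimated_exposure, toxic_endpoint) > loc

    if __name__ == "__main__":
        # Hypothetical 95th-percentile exposure vs. a hypothetical endpoint.
        rq = risk_quotient(0.05, 0.01)
        print(f"RQ = {rq:.1f}, exceeds LOC: {rq > LOC}")
    ```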

  1. Evaluation of Shiryaev-Roberts Procedure for On-line Environmental Radiation Monitoring

    NASA Astrophysics Data System (ADS)

    Watson, Mara Mae

    An on-line radiation monitoring system that simultaneously concentrates and detects radioactivity is needed to detect an accidental leakage from a nuclear waste disposal facility or clandestine nuclear activity. Previous studies have shown that classical control chart methods can be applied to on-line radiation monitoring data to quickly detect these events as they occur; however, Bayesian control chart methods were not included in these studies. This work evaluates the performance of a Bayesian control chart method, the Shiryaev-Roberts (SR) procedure, compared to classical control chart methods, Shewhart 3-sigma and cumulative sum (CUSUM), for use in on-line radiation monitoring of 99Tc in water using extractive scintillating resin. Measurements were collected by pumping solutions containing 0.1-5 Bq/L of 99Tc, as 99TcO4-, through a flow cell packed with extractive scintillating resin coupled to a Beta-RAM Model 5 HPLC detector. While 99TcO4- accumulated on the resin, simultaneous measurements were acquired in 10-s intervals and then re-binned to 100-s intervals. The Bayesian statistical method (the SR procedure) and the classical control chart methods (Shewhart 3-sigma and CUSUM) were applied to the data using statistical algorithms developed in MATLAB. Two SR control charts were constructed using Poisson distributions and Gaussian distributions to estimate the likelihood ratio, and are referred to as Poisson SR and Gaussian SR to indicate the distribution used to calculate the statistic. The Poisson and Gaussian SR methods required as little as 28.9 mL less solution at 5 Bq/L and as much as 170 mL less solution at 0.5 Bq/L to exceed the control limit than the Shewhart 3-sigma method. The Poisson SR method needed as little as 6.20 mL less solution at 5 Bq/L and up to 125 mL less solution at 0.5 Bq/L to exceed the control limit than the CUSUM method. The Gaussian SR and CUSUM methods required comparable solution volumes for test solutions containing at least 1.5 Bq/L of 99Tc. For activity concentrations less than 1.5 Bq/L, the Gaussian SR method required as much as 40.8 mL less solution at 0.5 Bq/L to exceed the control limit than the CUSUM method. Both SR methods were able to consistently detect test solutions containing 0.1 Bq/L, unlike the Shewhart 3-sigma and CUSUM methods. Although the Poisson SR method required as much as 178 mL less solution to exceed the control limit than the Gaussian SR method, the Gaussian SR false-positive rate of 0% was much lower than the Poisson SR false-positive rate of 1.14%. A lower false-positive rate made it easier to differentiate between a false positive and an increase in mean count rate caused by activity accumulating on the resin. The SR procedure is thus the ideal tool for low-level on-line radiation monitoring using extractive scintillating resin, because it needed less volume in most cases to detect an upward shift in the mean count rate than the Shewhart 3-sigma and CUSUM methods and consistently detected lower activity concentrations. The desired results for the monitoring scheme, however, need to be considered prior to choosing between the Poisson and Gaussian distribution to estimate the likelihood ratio, because each was advantageous under different circumstances. Once the control limit was exceeded, activity concentrations were estimated from the SR control chart using the slope of the control chart on a semi-logarithmic plot. Five of nine test solutions for the Poisson SR control chart produced concentration estimates within 30% of the actual value, but the worst case differed from the actual value by 263.2%. The estimates for the Gaussian SR control chart were much more precise, with six of eight solutions producing estimates within 30%. Although the activity concentration estimates were only mediocre for the Poisson SR control chart and satisfactory for the Gaussian SR control chart, these results demonstrate that a relationship exists between activity concentration and the magnitude of the SR control chart that can be exploited to determine the activity concentration from the SR control chart. More complex methods should be investigated to improve activity concentration estimates from the SR control charts.
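
    The core of the Shiryaev-Roberts procedure compared above can be sketched as the recursion R_n = (1 + R_{n-1}) * L_n, where L_n is the likelihood ratio of the new count under the post-change and in-control distributions. The Python sketch below uses a Poisson likelihood ratio with illustrative in-control and post-change mean counts and an illustrative control limit; it is not the thesis's MATLAB implementation.

    ```python
    import math
    import numpy as np

    # Sketch of the Shiryaev-Roberts recursion with a Poisson likelihood ratio.
    # The in-control mean (lam0), post-change mean (lam1) and the control limit
    # are illustrative, not the values used in the thesis.

    def poisson_lr(x, lam0, lam1):
        """Likelihood ratio f1(x)/f0(x) for a Poisson count x."""
        return math.exp(-(lam1 - lam0)) * (lam1 / lam0) ** x

    def shiryaev_roberts(counts, lam0, lam1, control_limit):
        """Return the SR statistic sequence and the index of the first alarm (or None)."""
        r, stats, alarm = 0.0, [], None
        for i, x in enumerate(counts):
            r = (1.0 + r) * poisson_lr(x, lam0, lam1)
            stats.append(r)
            if alarm is None and r > control_limit:
                alarm = i
        return stats, alarm

    if __name__ == "__main__":
        # Simulated background counts (mean 5 per bin) followed by a shift to mean 8.
        rng = np.random.default_rng(1)
        data = np.concatenate([rng.poisson(5.0, 50), rng.poisson(8.0, 50)])
        _, alarm = shiryaev_roberts(data, lam0=5.0, lam1=8.0, control_limit=1e4)
        print("first alarm at observation:", alarm)
    ```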

  2. Human health risks related to the consumption of foodstuffs of animal origin contaminated by bisphenol A.

    PubMed

    Gorecki, Sébastien; Bemrah, Nawel; Roudot, Alain-Claude; Marchioni, Eric; Le Bizec, Bruno; Faivre, Franck; Kadawathagedara, Manik; Botton, Jérémie; Rivière, Gilles

    2017-12-01

    Bisphenol A (BPA) is used in a wide variety of products and objects for consumer use (digital media such as CDs and DVDs, sports equipment, food and beverage containers, medical equipment). For humans, the main route of exposure to BPA is food. Based on previous estimates, almost 20% of the dietary exposure to BPA in the French population would be from food of animal origin. However, due to the use of composite samples, the source of the contamination had not been identified. Therefore, 322 individual samples of non-canned foods of animal origin were collected with the objectives of, first, updating the estimation of the exposure of the French population and, second, identifying the source of contamination of these foodstuffs using a specific analytical method. Compared to previous estimates in France, a decline in the contamination of the samples was observed, in particular with regard to meat. The estimated mean dietary exposures ranged from 0.048 to 0.050 μg/kg bw per day for children and adolescents aged 3-17 years, from 0.034 to 0.035 μg/kg bw per day for adults and from 0.047 to 0.049 μg/kg bw per day for pregnant women. The contribution of meat to the total dietary exposure of pregnant women, adults and children was up to three times lower than the previous estimates. Despite this downward trend in contamination, the toxicological values were observed to have been exceeded for the population of pregnant women. With the aim of acquiring more knowledge about the potential source(s) of contamination of non-canned foods of animal origin, a specific analytical method was developed to directly identify and quantify the presence of conjugated BPA (BPA-monoglucuronide, BPA-diglucuronide and sulphate forms) in 50 samples. No conjugated forms of BPA were detected in the analysed samples, indicating clearly that the BPA content in animal food was not due to metabolism but arose post mortem in the food. This contamination may occur during food production. However, despite extensive sampling performed in several different shops (butcheries, supermarkets, etc.) and under different conditions (fresh, prepared, frozen, etc.), the source(s) of the contamination could not be specifically identified. Copyright © 2017 Elsevier Ltd. All rights reserved.

  3. Simulating floods in the Amazon River Basin: Impacts of new river geomorphic and dynamic flow parameterizations

    NASA Astrophysics Data System (ADS)

    Coe, M. T.; Costa, M. H.; Howard, E. A.

    2006-12-01

    In this paper we analyze the hydrology of the Amazon River system for the latter half of the 20th century with our recently completed model of terrestrial hydrology (Terrestrial Hydrology Model with Biogeochemistry, THMB). We evaluate the simulated hydrology of the Central Amazon basin against limited observations of river discharge, floodplain inundation, and water height and analyze the spatial and temporal variability of the hydrology for the period 1939-1998. We compare the simulated discharge and floodplain inundated area to the simulations by Coe et al., 2002 using a previous version of this model. The new model simulates the discharge and flooded area in better agreement with the observations than the previous model. The coefficient of correlation between the simulated and observed discharge for the more than 27,000 monthly observations of discharge at 120 sites throughout the Brazilian Amazon is 0.9874, compared to 0.9744 for the previous model. The coefficient of correlation between the simulated monthly flooded area and the satellite-based estimates by Sippel et al., 1998 exceeds 0.7 for 8 of the 12 mainstem reaches. The seasonal and inter-annual variability of the water height and the river slope compares favorably to the satellite altimetric measurements of height reported by Birkett et al., 2002.

  4. Floods of January-February 1959 in Indiana

    USGS Publications Warehouse

    Hale, Malcolm D.; Hoggatt, Richard Earl

    1961-01-01

    Previous maximum stages during the period of record were exceeded at 26 gaging stations. The peak discharge of Big Indian Creek near Corydon, and peak stages of Laughery Creek near Farmers Retreat and Vernon Fork at Vernon on January 21, were greater than any since at least 1897. The peak stage of Wabash River at Huntington on February 10 exceeded that of the historical 1913 flood by 0.5 foot.

  5. Estimating the prevalence of negative attitudes towards people with disability: a comparison of direct questioning, projective questioning and randomised response.

    PubMed

    Ostapczuk, Martin; Musch, Jochen

    2011-01-01

    Despite being susceptible to social desirability bias, attitudes towards people with disabilities are traditionally assessed via self-report. We investigated two methods presumably providing more valid prevalence estimates of sensitive attitudes than direct questioning (DQ). Most people projective questioning (MPPQ) attempts to reduce bias by asking interviewees to estimate the number of other people holding a sensitive attribute, rather than confirming or denying the attribute for themselves. The randomised-response technique (RRT) tries to reduce bias by assuring confidentiality through a random scrambling of the respondent's answers. We assessed negative attitudes towards people with physical and mental disability via MPPQ, RRT and DQ to compare the resulting estimates. The MPPQ estimates exceeded the DQ estimates. Employing a cheating-detection extension of the RRT, we determined the proportion of respondents disregarding the RRT instructions and computed an upper bound for the prevalence of negative attitudes. MPPQ estimates exceeded this upper bound and were thus shown to overestimate the prevalence. Furthermore, we found more negative attitudes towards people with mental disabilities than those with physical disabilities in all three questioning conditions. We recommend employing the cheating-detection variant of the RRT to gain additional insight in future studies on attitudes towards people with disabilities.
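
    For readers unfamiliar with randomised response, the sketch below illustrates the basic idea using Warner's classic design (each respondent answers the sensitive statement with probability p and its negation otherwise), rather than the forced-response, cheating-detection extension applied in the study; the survey counts in the example are hypothetical.

    ```python
    # Illustration of the basic randomised-response idea (Warner's classic design,
    # not the cheating-detection extension used in the study): individual answers
    # are uninformative, but prevalence pi is recoverable from the proportion of
    # "yes" answers, lambda_hat = p*pi + (1 - p)*(1 - pi).

    def warner_prevalence(n_yes, n_total, p):
        """Point estimate and standard error of the sensitive-attribute prevalence."""
        if abs(2 * p - 1) < 1e-9:
            raise ValueError("p = 0.5 makes the design non-identifiable")
        lam = n_yes / n_total
        pi_hat = (lam - (1 - p)) / (2 * p - 1)
        se = (lam * (1 - lam) / n_total) ** 0.5 / abs(2 * p - 1)
        return pi_hat, se

    if __name__ == "__main__":
        # Hypothetical survey: 300 of 1,000 respondents answered "yes", p = 0.8.
        pi_hat, se = warner_prevalence(300, 1000, p=0.8)
        print(f"estimated prevalence: {pi_hat:.3f} +/- {1.96 * se:.3f}")
    ```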

  6. Estimating historical atmospheric mercury concentrations from silver mining and their legacies in present-day surface soil in Potosí, Bolivia

    NASA Astrophysics Data System (ADS)

    Hagan, Nicole; Robins, Nicholas; Hsu-Kim, Heileen; Halabi, Susan; Morris, Mark; Woodall, George; Zhang, Tong; Bacon, Allan; Richter, Daniel De B.; Vandenberg, John

    2011-12-01

    Detailed Spanish records of mercury use and silver production during the colonial period in Potosí, Bolivia were evaluated to estimate atmospheric emissions of mercury from silver smelting. Mercury was used in the silver production process in Potosí and nearly 32,000 metric tons of mercury were released to the environment. AERMOD was used in combination with the estimated emissions to approximate historical air concentrations of mercury from colonial mining operations during 1715, a year of relatively low silver production. Source characteristics were selected from archival documents, colonial maps and images of silver smelters in Potosí and a base case of input parameters was selected. Input parameters were varied to understand the sensitivity of the model to each parameter. Modeled maximum 1-h concentrations were most sensitive to stack height and diameter, whereas an index of community exposure was relatively insensitive to uncertainty in input parameters. Modeled 1-h and long-term concentrations were compared to inhalation reference values for elemental mercury vapor. Estimated 1-h maximum concentrations within 500 m of the silver smelters consistently exceeded present-day occupational inhalation reference values. Additionally, the entire community was estimated to have been exposed to levels of mercury vapor that exceed present-day acute inhalation reference values for the general public. Estimated long-term maximum concentrations of mercury were predicted to substantially exceed the EPA Reference Concentration for areas within 600 m of the silver smelters. A concentration gradient predicted by AERMOD was used to select soil sampling locations along transects in Potosí. Total mercury in soils ranged from 0.105 to 155 mg kg-1, among the highest levels reported for surface soils in the scientific literature. The correlation between estimated air concentrations and measured soil concentrations will guide future research to determine the extent to which the current community of Potosí and vicinity is at risk of adverse health effects from historical mercury contamination.

  7. Cancer and non-cancer health effects from food contaminant exposures for children and adults in California: a risk assessment

    PubMed Central

    2012-01-01

    Background In the absence of current cumulative dietary exposure assessments, this analysis was conducted to estimate exposure to multiple dietary contaminants for children, who are more vulnerable to toxic exposure than adults. Methods We estimated exposure to multiple food contaminants based on dietary data from preschool-age children (2–4 years, n=207), school-age children (5–7 years, n=157), parents of young children (n=446), and older adults (n=149). We compared exposure estimates for eleven toxic compounds (acrylamide, arsenic, lead, mercury, chlorpyrifos, permethrin, endosulfan, dieldrin, chlordane, DDE, and dioxin) based on self-reported food frequency data by age group. To determine if cancer and non-cancer benchmark levels were exceeded, chemical levels in food were derived from publicly available databases including the Total Diet Study. Results Cancer benchmark levels were exceeded by all children (100%) for arsenic, dieldrin, DDE, and dioxins. Non-cancer benchmarks were exceeded by >95% of preschool-age children for acrylamide and by 10% of preschool-age children for mercury. Preschool-age children had significantly higher estimated intakes of 6 of 11 compounds compared to school-age children (p<0.0001 to p=0.02). Based on self-reported dietary data, the greatest exposure to pesticides from foods included in this analysis were tomatoes, peaches, apples, peppers, grapes, lettuce, broccoli, strawberries, spinach, dairy, pears, green beans, and celery. Conclusions Dietary strategies to reduce exposure to toxic compounds for which cancer and non-cancer benchmarks are exceeded by children vary by compound. These strategies include consuming organically produced dairy and selected fruits and vegetables to reduce pesticide intake, consuming less animal foods (meat, dairy, and fish) to reduce intake of persistent organic pollutants and metals, and consuming lower quantities of chips, cereal, crackers, and other processed carbohydrate foods to reduce acrylamide intake. PMID:23140444

  8. Statistical Modeling of Occupational Exposure to Polycyclic Aromatic Hydrocarbons Using OSHA Data.

    PubMed

    Lee, Derrick G; Lavoué, Jérôme; Spinelli, John J; Burstyn, Igor

    2015-01-01

    Polycyclic aromatic hydrocarbons (PAHs) are a group of pollutants with multiple variants classified as carcinogenic. The Occupational Safety and Health Administration (OSHA) provided access to two PAH exposure databanks of United States workplace compliance testing data collected between 1979 and 2010. Mixed-effects logistic models were used to predict the exceedance fraction (EF), i.e., the probability of exceeding OSHA's Permissible Exposure Limit (PEL = 0.2 mg/m3) for PAHs based on industry and occupation. Measurements of coal tar pitch volatiles were used as a surrogate for PAHs. Time, databank, occupation, and industry were included as fixed effects, while an identifier for the compliance inspection number was included as a random effect. Analyses involved 2,509 full-shift personal measurements. Results showed that the majority of industries had an estimated EF < 0.5, although several industries, including Standard Industrial Classification codes 1623 (Water, Sewer, Pipeline, and Communication and Powerline Construction), 1711 (Plumbing, Heating, and Air-Conditioning), 2824 (Manmade Organic Fibres), 3496 (Misc. Fabricated Wire products), and 5812 (Eating Places), and Major Groups 13 (Oil and Gas Extraction) and 30 (Rubber and Miscellaneous Plastic Products), were estimated to have more than an 80% likelihood of exceeding the PEL. There was an inverse temporal trend of exceeding the PEL, with lower risk in the most recent years, albeit not statistically significant. Similar results were shown when incorporating occupation, but varied depending on the occupation as the majority of industries predicted at the administrative level, e.g., managers, had an estimated EF < 0.5 while at the minimally skilled/laborer level there was a substantial increase in the estimated EF. These statistical models allow the prediction of PAH exposure risk through individual occupational histories and will be used to create a job-exposure matrix for use in a population-based case-control study exploring PAH exposure and breast cancer risk.
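
    A simplified sketch of the exceedance-fraction modelling idea is given below: a logistic model for the probability that a measurement exceeds the PEL as a function of industry and year. Unlike the study's mixed-effects models, it omits the random effect for the compliance inspection number, and the column names and synthetic data are assumptions.

    ```python
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    # Simplified sketch of the exceedance-fraction idea: a fixed-effects logistic
    # model for the probability that a measurement exceeds the PEL, by industry
    # and year. The random effect for inspection number used in the study is
    # omitted; column names and the demo data are hypothetical.

    PEL_MG_M3 = 0.2

    def fit_exceedance_model(df):
        """df needs columns: concentration (mg/m3), industry (str), year (int)."""
        df = df.copy()
        df["exceeds"] = (df["concentration"] > PEL_MG_M3).astype(int)
        return smf.logit("exceeds ~ C(industry) + year", data=df).fit(disp=False)

    def predicted_exceedance_fraction(model, industry, year):
        new = pd.DataFrame({"industry": [industry], "year": [year]})
        return float(model.predict(new)[0])

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        n = 400
        demo = pd.DataFrame({
            "industry": rng.choice(["1623", "1711", "5812"], size=n),
            "year": rng.integers(1979, 2011, size=n),
        })
        # Synthetic lognormal concentrations, higher on average for industry 1623.
        scale = np.where(demo["industry"] == "1623", 0.15, 0.05)
        demo["concentration"] = rng.lognormal(mean=np.log(scale), sigma=1.0)
        m = fit_exceedance_model(demo)
        print(f"predicted EF, industry 1623 in 2000: {predicted_exceedance_fraction(m, '1623', 2000):.2f}")
    ```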

  9. 23 CFR 1340.9 - Computation of estimates.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... OBSERVATIONAL SURVEYS OF SEAT BELT USE Survey Design Requirements § 1340.9 Computation of estimates. (a) Data... design and any subsequent adjustments. (e) Sampling weight adjustments for observation sites with no... section, the nonresponse rate for the entire survey shall not exceed 10 percent for the ratio of the total...

  10. 23 CFR 1340.9 - Computation of estimates.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... OBSERVATIONAL SURVEYS OF SEAT BELT USE Survey Design Requirements § 1340.9 Computation of estimates. (a) Data... design and any subsequent adjustments. (e) Sampling weight adjustments for observation sites with no... section, the nonresponse rate for the entire survey shall not exceed 10 percent for the ratio of the total...

  11. 23 CFR 1340.9 - Computation of estimates.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... OBSERVATIONAL SURVEYS OF SEAT BELT USE Survey Design Requirements § 1340.9 Computation of estimates. (a) Data... design and any subsequent adjustments. (e) Sampling weight adjustments for observation sites with no... section, the nonresponse rate for the entire survey shall not exceed 10 percent for the ratio of the total...

  12. Comparisons of estimates of annual exceedance-probability discharges for small drainage basins in Iowa, based on data through water year 2013.

    DOT National Transportation Integrated Search

    2015-01-01

    Traditionally, the Iowa Department of Transportation : has used the Iowa Runoff Chart and single-variable regional-regression equations (RREs) from a U.S. Geological Survey : report (published in 1987) as the primary methods to estimate : annual exce...

  13. [Seroprevalence of varicella-zoster virus antibodies after the recent introduction of the universal childhood immunisation schedule in the Community of Madrid].

    PubMed

    García-Comas, Luis; Ordobás Gavín, María; Sanz Moreno, Juan Carlos; Ramos Blázquez, Belén; Gutiérrez Rodríguez, M Angeles; Barranco Ordóñez, Dolores

    2016-12-01

    In November 2006, the Community of Madrid included the chickenpox vaccine in the immunisation schedule for children from 15 months of age. This was withdrawn in January 2014. Seroprevalence of antibodies to the virus was estimated for the first 2-3 years after the inclusion of the vaccine, as well as its evolution since 1999. A cross-sectional study was conducted on the target population consisting of residents in the Community of Madrid between 2 and 60 years of age. Measurement of IgG antibodies was performed using an ELISA technique. Seroprevalence was estimated according to sociodemographic characteristics using multiple logistic regression. The results are compared with previous surveys. Also, the seroprevalence and geometric mean of the antibody according to immunisation status and history of the disease are presented. The confidence level used is 95%. A total of 4,378 subjects were included, with a response rate of 69%. The estimated seroprevalence was 95.3% (95% CI: 94.6% - 95.9%). Over 90% of children from the age of 10 have antibodies. The seroprevalence was higher in people with less education. The seroprevalence of vaccine-induced immunity exceeded 90% in the first year after vaccination, but decreased to 82.6% (95% CI 56.0 - 94.7) in the second year. Significant differences, attributable to universal vaccination, were found compared to previous surveys. Continued surveillance is needed in order to assess the impact of the withdrawal of the recommendation to vaccinate at 15 months. Copyright © 2016 Elsevier España, S.L.U. and Sociedad Española de Enfermedades Infecciosas y Microbiología Clínica. All rights reserved.

  14. Calving fluxes and basal melt rates of Antarctic ice shelves.

    PubMed

    Depoorter, M A; Bamber, J L; Griggs, J A; Lenaerts, J T M; Ligtenberg, S R M; van den Broeke, M R; Moholdt, G

    2013-10-03

    Iceberg calving has been assumed to be the dominant cause of mass loss for the Antarctic ice sheet, with previous estimates of the calving flux exceeding 2,000 gigatonnes per year. More recently, the importance of melting by the ocean has been demonstrated close to the grounding line and near the calving front. So far, however, no study has reliably quantified the calving flux and the basal mass balance (the balance between accretion and ablation at the ice-shelf base) for the whole of Antarctica. The distribution of fresh water in the Southern Ocean and its partitioning between the liquid and solid phases is therefore poorly constrained. Here we estimate the mass balance components for all ice shelves in Antarctica, using satellite measurements of calving flux and grounding-line flux, modelled ice-shelf snow accumulation rates and a regional scaling that accounts for unsurveyed areas. We obtain a total calving flux of 1,321 ± 144 gigatonnes per year and a total basal mass balance of -1,454 ± 174 gigatonnes per year. This means that about half of the ice-sheet surface mass gain is lost through oceanic erosion before reaching the ice front, and the calving flux is about 34 per cent less than previous estimates derived from iceberg tracking. In addition, the fraction of mass loss due to basal processes varies from about 10 to 90 per cent between ice shelves. We find a significant positive correlation between basal mass loss and surface elevation change for ice shelves experiencing surface lowering and enhanced discharge. We suggest that basal mass loss is a valuable metric for predicting future ice-shelf vulnerability to oceanic forcing.

  15. Twentieth-century global-mean sea level rise: Is the whole greater than the sum of the parts?

    USGS Publications Warehouse

    Gregory, J.M.; White, N.J.; Church, J.A.; Bierkens, M.F.P.; Box, J.E.; Van den Broeke, M.R.; Cogley, J.G.; Fettweis, X.; Hanna, E.; Huybrechts, P.; Konikow, Leonard F.; Leclercq, P.W.; Marzeion, B.; Oerlemans, J.; Tamisiea, M.E.; Wada, Y.; Wake, L.M.; Van de Wal, R.S.W.

    2013-01-01

    Confidence in projections of global-mean sea level rise (GMSLR) depends on an ability to account for GMSLR during the twentieth century. There are contributions from ocean thermal expansion, mass loss from glaciers and ice sheets, groundwater extraction, and reservoir impoundment. Progress has been made toward solving the “enigma” of twentieth-century GMSLR, which is that the observed GMSLR has previously been found to exceed the sum of estimated contributions, especially for the earlier decades. The authors propose the following: thermal expansion simulated by climate models may previously have been underestimated because of their not including volcanic forcing in their control state; the rate of glacier mass loss was larger than previously estimated and was not smaller in the first half than in the second half of the century; the Greenland ice sheet could have made a positive contribution throughout the century; and groundwater depletion and reservoir impoundment, which are of opposite sign, may have been approximately equal in magnitude. It is possible to reconstruct the time series of GMSLR from the quantified contributions, apart from a constant residual term, which is small enough to be explained as a long-term contribution from the Antarctic ice sheet. The reconstructions account for the observation that the rate of GMSLR was not much larger during the last 50 years than during the twentieth century as a whole, despite the increasing anthropogenic forcing. Semiempirical methods for projecting GMSLR depend on the existence of a relationship between global climate change and the rate of GMSLR, but the implication of the authors' closure of the budget is that such a relationship is weak or absent during the twentieth century.

  16. New constraints on the spatial distribution and morphology of the Halimeda bioherms of the Great Barrier Reef, Australia

    NASA Astrophysics Data System (ADS)

    McNeil, Mardi A.; Webster, Jody M.; Beaman, Robin J.; Graham, Trevor L.

    2016-12-01

    Halimeda bioherms occur as extensive geological structures on the northern Great Barrier Reef (GBR), Australia. We present the most complete, high-resolution spatial mapping of the northern GBR Halimeda bioherms, based on new airborne lidar and multibeam echosounder bathymetry data. Our analysis reveals that bioherm morphology does not conform to the previous model of parallel ridges and troughs, but is far more complex than previously thought. We define and describe three morphological sub-types: reticulate, annulate, and undulate, which are distributed in a cross-shelf pattern of reduced complexity from east to west. The northern GBR bioherms cover an area of 6095 km2, three times larger than the original estimate, exceeding the area and volume of calcium carbonate in the adjacent modern shelf-edge barrier reefs. We have mapped a 1740 km2 bioherm complex north of Raine Island in the Cape York region not previously recorded, extending the northern limit by more than 1° of latitude. Bioherm formation and distribution are controlled by a complex interaction of outer-shelf geometry, regional and local currents, coupled with the morphology and depth of continental slope submarine canyons determining the delivery of cool, nutrient-rich water upwelling through inter-reef passages. Distribution and mapping of Halimeda bioherms in relation to Great Barrier Reef Marine Park Authority bioregion classifications and management zones are inconsistent and currently poorly defined due to a lack of high-resolution data not available until now. These new estimates of bioherm spatial distribution and morphology have implications for understanding the role these geological features play as structurally complex and productive inter-reef habitats, and as calcium carbonate sinks which record a complete history of the Holocene post-glacial marine transgression in the northern GBR.

  17. Methods for estimating annual exceedance-probability discharges for streams in Iowa, based on data through water year 2010

    USGS Publications Warehouse

    Eash, David A.; Barnes, Kimberlee K.; Veilleux, Andrea G.

    2013-01-01

    A statewide study was performed to develop regional regression equations for estimating selected annual exceedance-probability statistics for ungaged stream sites in Iowa. The study area comprises streamgages located within Iowa and 50 miles beyond the State’s borders. Annual exceedance-probability estimates were computed for 518 streamgages by using the expected moments algorithm to fit a Pearson Type III distribution to the logarithms of annual peak discharges for each streamgage using annual peak-discharge data through 2010. The estimation of the selected statistics included a Bayesian weighted least-squares/generalized least-squares regression analysis to update regional skew coefficients for the 518 streamgages. Low-outlier and historic information were incorporated into the annual exceedance-probability analyses, and a generalized Grubbs-Beck test was used to detect multiple potentially influential low flows. Also, geographic information system software was used to measure 59 selected basin characteristics for each streamgage. Regional regression analysis, using generalized least-squares regression, was used to develop a set of equations for each flood region in Iowa for estimating discharges for ungaged stream sites with 50-, 20-, 10-, 4-, 2-, 1-, 0.5-, and 0.2-percent annual exceedance probabilities, which are equivalent to annual flood-frequency recurrence intervals of 2, 5, 10, 25, 50, 100, 200, and 500 years, respectively. A total of 394 streamgages were included in the development of regional regression equations for three flood regions (regions 1, 2, and 3) that were defined for Iowa based on landform regions and soil regions. Average standard errors of prediction range from 31.8 to 45.2 percent for flood region 1, 19.4 to 46.8 percent for flood region 2, and 26.5 to 43.1 percent for flood region 3. The pseudo coefficients of determination for the generalized least-squares equations range from 90.8 to 96.2 percent for flood region 1, 91.5 to 97.9 percent for flood region 2, and 92.4 to 96.0 percent for flood region 3. The regression equations are applicable only to stream sites in Iowa with flows not significantly affected by regulation, diversion, channelization, backwater, or urbanization and with basin characteristics within the range of those used to develop the equations. These regression equations will be implemented within the U.S. Geological Survey StreamStats Web-based geographic information system tool. StreamStats allows users to click on any ungaged site on a river and compute estimates of the eight selected statistics; in addition, 90-percent prediction intervals and the measured basin characteristics for the ungaged sites also are provided by the Web-based tool. StreamStats also allows users to click on any streamgage in Iowa and estimates computed for these eight selected statistics are provided for the streamgage.
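
    The at-site step of such an analysis can be sketched as a log-Pearson Type III fit to the logarithms of annual peak discharges, from which the selected annual exceedance-probability quantiles are read. The sketch below uses simple method-of-moments station skew; it omits the expected moments algorithm, the weighted regional skew and the generalized Grubbs-Beck low-outlier test used in the study, and the peak record is synthetic.

    ```python
    import numpy as np
    from scipy import stats

    # Sketch of an at-site log-Pearson Type III fit by simple method of moments on
    # log10(annual peaks). The study's expected moments algorithm, weighted regional
    # skew and generalized Grubbs-Beck low-outlier screening are omitted.

    AEPS = [0.50, 0.20, 0.10, 0.04, 0.02, 0.01, 0.005, 0.002]

    def lp3_quantiles(annual_peaks_cfs, aeps=AEPS):
        logq = np.log10(np.asarray(annual_peaks_cfs, dtype=float))
        mean, std = logq.mean(), logq.std(ddof=1)
        skew = stats.skew(logq, bias=False)
        # scipy's pearson3 is parameterized by skew, loc (mean) and scale (std. dev.).
        dist = stats.pearson3(skew, loc=mean, scale=std)
        return {aep: 10 ** dist.ppf(1.0 - aep) for aep in aeps}

    if __name__ == "__main__":
        rng = np.random.default_rng(42)
        peaks = 10 ** rng.normal(3.5, 0.3, size=60)   # synthetic 60-year peak record
        for aep, q in lp3_quantiles(peaks).items():
            print(f"{aep*100:>5.1f}% AEP: {q:,.0f} cfs")
    ```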

  18. Return period estimates for European windstorm clusters: a multi-model perspective

    NASA Astrophysics Data System (ADS)

    Renggli, Dominik; Zimmerli, Peter

    2017-04-01

    Clusters of storms over Europe can lead to very large aggregated losses. Realistic return period estimates for such clusters are therefore of vital interest to the (re)insurance industry. Such return period estimates are usually derived from historical storm activity statistics of the last 30 to 40 years. However, climate models provide an alternative source, potentially representing thousands of simulated storm seasons. In this study, we made use of decadal hindcast data from eight different climate models in the CMIP5 archive. We used an objective tracking algorithm to identify individual windstorms in the climate model data. The algorithm also computes a (population density weighted) Storm Severity Index (SSI) for each of the identified storms (both on a continental and more regional basis). We derived return period estimates for the cluster seasons 1990, 1999, 2013/2014 and 1884 in the following way: For each climate model, we extracted two different exceedance frequency curves. The first describes the exceedance frequency (or the return period as the inverse of it) of a given SSI level due to an individual storm occurrence. The second describes the exceedance frequency of the seasonally aggregated SSI level (i.e. the sum of the SSI values of all storms in a given season). Starting from appropriate return period assumptions for each individual storm of a historical cluster (e.g. Anatol, Lothar and Martin in 1999) and using the first curve, we extracted the SSI levels at the corresponding return periods. Summing these SSI values results in the seasonally aggregated SSI value. Combining this with the second (aggregated) exceedance frequency curve results in a return period estimate for the historical cluster season. Since we do this for each model separately, we obtain eight different return period estimates for each historical cluster. In this way, we obtained the following return period estimates: 50 to 80 years for the 1990 season, 20 to 45 years for the 1999 season, 3 to 4 years for the 2013/2014 season, and 14 to 16 years for the 1884 season. More detailed results show substantial variation between five different regions (UK, France, Germany, Benelux and Scandinavia), as expected from the path and footprints of the different events. For example, the 1990 season is estimated to be well beyond a 100-year season for Germany and Benelux. The 1999 season clearly was extreme for France, whereas the 1884 season was very disruptive for the UK. Such return period estimates can be used as an independent benchmark for other approaches quantifying clustering of European windstorms. The study might also serve as an example of how to derive similar risk measures for other climate-related perils from a robust, publicly available data source.
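
    The two-curve procedure described above can be sketched as follows: invert the per-storm exceedance frequency curve at each storm's assumed return period to obtain SSI values, sum them, and read the seasonal return period off the aggregated curve. Both curves and the storm return periods in the example are synthetic, not the hindcast-derived curves of the study.

    ```python
    import numpy as np

    # Sketch of the two-curve procedure using synthetic exceedance frequency curves.
    # Frequencies are events (or seasons) per year exceeding a given SSI, so the
    # return period is 1 / exceedance frequency.

    def ssi_at_return_period(ssi_grid, event_freq, return_period):
        """Invert the per-event exceedance curve: SSI whose exceedance frequency is 1/T."""
        target_freq = 1.0 / return_period
        # np.interp needs increasing x, so reverse the (frequency-decreasing) curve.
        return np.interp(target_freq, event_freq[::-1], ssi_grid[::-1])

    def cluster_return_period(ssi_grid, event_freq, seasonal_freq, storm_return_periods):
        """Return period of a season whose storms have the given individual return periods."""
        aggregated_ssi = sum(ssi_at_return_period(ssi_grid, event_freq, t) for t in storm_return_periods)
        season_freq = np.interp(aggregated_ssi, ssi_grid, seasonal_freq)
        return 1.0 / season_freq, aggregated_ssi

    if __name__ == "__main__":
        ssi = np.linspace(0.0, 80.0, 801)
        event_freq = 5.0 * np.exp(-ssi / 4.0)       # synthetic per-storm curve
        seasonal_freq = 1.0 * np.exp(-ssi / 12.0)   # synthetic seasonal-aggregate curve
        rp, agg = cluster_return_period(ssi, event_freq, seasonal_freq, [8.0, 20.0, 15.0])
        print(f"aggregated SSI {agg:.1f} -> season return period ~{rp:.0f} years")
    ```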

  19. Evaluation of the magnitude and frequency of floods in urban watersheds in Phoenix and Tucson, Arizona

    USGS Publications Warehouse

    Kennedy, Jeffrey R.; Paretti, Nicholas V.

    2014-01-01

    Flooding in urban areas routinely causes severe damage to property and often results in loss of life. To investigate the effect of urbanization on the magnitude and frequency of flood peaks, a flood frequency analysis was carried out using data from urbanized streamgaging stations in Phoenix and Tucson, Arizona. Flood peaks at each station were predicted using the log-Pearson Type III distribution, fitted using the expected moments algorithm and the multiple Grubbs-Beck low outlier test. The station estimates were then compared to flood peaks estimated by rural-regression equations for Arizona, and to flood peaks adjusted for urbanization using a previously developed procedure for adjusting U.S. Geological Survey rural regression peak discharges in an urban setting. Only smaller, more common flood peaks at the 50-, 20-, 10-, and 4-percent annual exceedance probabilities (AEPs) demonstrate any increase in magnitude as a result of urbanization; the 1-, 0.5-, and 0.2-percent AEP flood estimates are predicted without bias by the rural-regression equations. Percent imperviousness was determined not to account for the difference in estimated flood peaks between stations, either when adjusting the rural-regression equations or when deriving urban-regression equations to predict flood peaks directly from basin characteristics. Comparison with urban adjustment equations indicates that flood peaks are systematically overestimated if the rural-regression-estimated flood peaks are adjusted upward to account for urbanization. At nearly every streamgaging station in the analysis, adjusted rural-regression estimates were greater than the estimates derived using station data. One likely reason for the lack of increase in flood peaks with urbanization is the presence of significant stormwater retention and detention structures within the watershed used in the study.

  20. Dietary and inhalation intake of lead and estimation of blood lead levels in adults and children in Kanpur, India.

    PubMed

    Sharma, Mukesh; Maheshwari, Mayank; Morisawa, S

    2005-12-01

    This research was initiated to study lead levels in various food items in the city of Kanpur, India, to assess the dietary intake of lead and to estimate blood lead (PbB) levels, a biomarker of lead toxicity. For this purpose, sampling of food products, laboratory analysis, and computational exercises were undertaken. Specifically, six food groups (leafy vegetables, nonleafy vegetables, fruits, pulses, cereals, and milk), drinking water, and lead air concentration were considered for estimating lead intake. Results indicated highest lead content in leafy vegetables followed by pulses. Fruits showed low lead content and drinking water lead levels were always within tolerable limits. It was estimated that average daily lead intake through diet was about 114 microg/day for adults and 50 microg/day in children; tolerable limit is 250 microg/day for adults and 90 microg/day for children. The estimated lead intakes were translated into the resultant PbB concentrations for children and adults using a physiologically-based pharmacokinetic (PBPK) model. Monte Carlo simulation of PbB level variations for adults showed that probability of exceeding the tolerable limit of PbB (i.e.,10 microg/dL) was 0.062 for the pre-unleaded and 0.000328 for the post-unleaded gasoline period. The probability of exceeding tolerable limits in PbB level was reduced by a factor of 189 in the post-unleaded scenario. The study also suggested that in spite of the introduction of unleaded gasoline, children continue to be at a high risk (probability of exceeding 10 microg/dL = 0.39) because of a high intake of lead per unit body weight.
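
    The Monte Carlo step can be illustrated with a deliberately simplified sketch that samples a daily lead intake and a single biokinetic slope factor relating intake to blood lead, rather than the PBPK model used in the study; all distribution parameters are illustrative placeholders.

    ```python
    import numpy as np

    # Simplified Monte Carlo sketch (not the PBPK model used in the study): sample a
    # daily lead intake and a slope factor relating intake to blood lead, then
    # estimate the probability that blood lead exceeds 10 microg/dL. All
    # distribution parameters below are illustrative placeholders.

    PBB_LIMIT_UG_DL = 10.0

    def prob_exceeding_limit(n_sims=100_000, seed=0):
        rng = np.random.default_rng(seed)
        # Daily intake (microg/day), lognormal around ~114 microg/day for adults.
        intake = rng.lognormal(mean=np.log(114.0), sigma=0.4, size=n_sims)
        # Slope factor (microg/dL blood lead per microg/day ingested), illustrative.
        slope = rng.lognormal(mean=np.log(0.04), sigma=0.3, size=n_sims)
        pbb = intake * slope
        return float(np.mean(pbb > PBB_LIMIT_UG_DL))

    if __name__ == "__main__":
        print(f"P(PbB > {PBB_LIMIT_UG_DL} microg/dL) ~= {prob_exceeding_limit():.4f}")
    ```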

  1. Characterization of shallow groundwater quality in the Lower St. Johns River Basin: a case study.

    PubMed

    Ouyang, Ying; Zhang, Jia-En; Parajuli, Prem

    2013-12-01

    Characterization of groundwater quality allows the evaluation of groundwater pollution and provides information for better management of groundwater resources. This study characterized the shallow groundwater quality and its spatial and seasonal variations in the Lower St. Johns River Basin, Florida, USA, under agricultural, forest, wastewater, and residential land uses using field measurements and two-dimensional kriging analysis. Comparison of the concentrations of groundwater quality constituents against the US EPA's water quality criteria showed that the maximum nitrate/nitrite (NOx) and arsenic (As) concentrations exceeded the EPA's drinking water standard limits, while the maximum Cl, SO4(2-), and Mn concentrations exceeded the EPA's national secondary drinking water regulations. In general, high kriging estimated groundwater NH4(+) concentrations were found around the agricultural areas, while high kriging estimated groundwater NOx concentrations were observed in the residential areas with a high density of septic tank distribution. Our study further revealed that more areas were found with high estimated NOx concentrations in summer than in spring. This occurred partially because of more NOx leaching into the shallow groundwater due to the wetter summer and partially because of faster nitrification rate due to the higher temperature in summer. Large extent and high kriging estimated total phosphorus concentrations were found in the residential areas. Overall, the groundwater Na and Mg concentration distributions were relatively more even in summer than in spring. Higher kriging estimated groundwater As concentrations were found around the agricultural areas, which exceeded the EPA's drinking water standard limit. Very small variations in groundwater dissolved organic carbon concentrations were observed between spring and summer. This study demonstrated that the concentrations of groundwater quality constituents varied from location to location, and impacts of land uses on groundwater quality variation were profound.
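
    Two-dimensional ordinary kriging of a groundwater quality constituent from scattered well samples can be sketched with the third-party pykrige package (an assumption; the study does not name its software). The coordinates, concentrations and variogram model below are hypothetical.

    ```python
    import numpy as np
    from pykrige.ok import OrdinaryKriging

    # Minimal sketch of two-dimensional ordinary kriging of a groundwater quality
    # constituent (e.g., NOx) from scattered well samples; coordinates, values and
    # the variogram choice are hypothetical, not the study's configuration.

    def krige_concentrations(x, y, values, nx=50, ny=50, variogram="spherical"):
        """Return (gridx, gridy, estimated field, kriging variance)."""
        ok = OrdinaryKriging(x, y, values, variogram_model=variogram)
        gridx = np.linspace(x.min(), x.max(), nx)
        gridy = np.linspace(y.min(), y.max(), ny)
        z, ss = ok.execute("grid", gridx, gridy)
        return gridx, gridy, z, ss

    if __name__ == "__main__":
        rng = np.random.default_rng(3)
        x, y = rng.uniform(0, 10, 40), rng.uniform(0, 10, 40)
        nox = 2.0 + 0.5 * x + rng.normal(0, 0.3, 40)   # synthetic mg/L values
        _, _, z, _ = krige_concentrations(x, y, nox)
        print("estimated NOx field, max =", float(z.max()))
    ```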

  2. A parallelizable real-time motion tracking algorithm with applications to ultrasonic strain imaging.

    PubMed

    Jiang, J; Hall, T J

    2007-07-07

    Ultrasound-based mechanical strain imaging systems utilize signals from conventional diagnostic ultrasound systems to image tissue elasticity contrast that provides new diagnostically valuable information. Previous works (Hall et al 2003 Ultrasound Med. Biol. 29 427, Zhu and Hall 2002 Ultrason. Imaging 24 161) demonstrated that uniaxial deformation with minimal elevation motion is preferred for breast strain imaging, and that real-time strain image feedback to operators is important to accomplish this goal. The work reported here enhances the real-time speckle tracking algorithm with two significant modifications. One fundamental change is that the proposed algorithm is a column-based algorithm (a column is defined by a line of data parallel to the ultrasound beam direction, i.e. an A-line), as opposed to a row-based algorithm (a row is defined by a line of data perpendicular to the ultrasound beam direction). Displacement estimates from adjacent columns then provide good guidance for motion tracking in a significantly reduced search region, which lowers computational cost. Consequently, the process of displacement estimation can be naturally split into at least two separate tasks, computed in parallel, propagating outward from the center of the region of interest (ROI). The proposed algorithm has been implemented and optimized in a Windows system as a stand-alone ANSI C++ program. Results of preliminary tests, using numerical and tissue-mimicking phantoms, and in vivo tissue data, suggest that high-contrast strain images can be consistently obtained at frame rates (10 frames s-1) that exceed those of our previous methods.
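
    The column-guided search idea can be sketched as one-dimensional normalized cross-correlation block matching along each A-line, with the search window centred on the displacement estimate from the neighbouring column and columns processed outward from the centre. The sketch below omits subsample interpolation, quality checks and the parallel split, and uses synthetic data.

    ```python
    import numpy as np

    # Sketch of column-guided speckle tracking: axial displacement per kernel is
    # estimated by normalized cross-correlation (NCC), with the search window
    # centred on the estimate from the previously processed neighbouring column.

    def ncc(a, b):
        a = a - a.mean()
        b = b - b.mean()
        denom = np.sqrt((a * a).sum() * (b * b).sum())
        return float((a * b).sum() / denom) if denom > 0 else 0.0

    def track_column(pre_col, post_col, kernel=32, guess=0, search=8):
        """Axial displacement (in samples) for each kernel along one column."""
        disps = []
        for start in range(0, len(pre_col) - kernel, kernel):
            ref = pre_col[start:start + kernel]
            best, best_d = -2.0, 0
            for d in range(guess - search, guess + search + 1):
                lo = start + d
                if lo < 0 or lo + kernel > len(post_col):
                    continue
                c = ncc(ref, post_col[lo:lo + kernel])
                if c > best:
                    best, best_d = c, d
            disps.append(best_d)
            guess = best_d  # guide the next kernel in this column
        return disps

    def track_frame(pre, post, **kw):
        """Process columns outward from the centre, guiding each by its neighbour."""
        n_cols = pre.shape[1]
        center = n_cols // 2
        est = {center: track_column(pre[:, center], post[:, center], **kw)}
        for col in list(range(center + 1, n_cols)) + list(range(center - 1, -1, -1)):
            neighbor = col - 1 if col > center else col + 1
            guess = int(np.median(est[neighbor]))
            est[col] = track_column(pre[:, col], post[:, col], guess=guess, **kw)
        return np.array([est[c] for c in range(n_cols)]).T

    if __name__ == "__main__":
        rng = np.random.default_rng(7)
        pre = rng.normal(size=(512, 16))
        post = np.roll(pre, 3, axis=0) + 0.05 * rng.normal(size=pre.shape)  # 3-sample shift
        print(np.median(track_frame(pre, post)))  # should be close to 3
    ```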

  3. Spatial Distribution of Io's Neutral Oxygen Cloud Observed by Hisaki

    NASA Astrophysics Data System (ADS)

    Koga, Ryoichi; Tsuchiya, Fuminori; Kagitani, Masato; Sakanoi, Takeshi; Yoneda, Mizuki; Yoshioka, Kazuo; Yoshikawa, Ichiro; Kimura, Tomoki; Murakami, Go; Yamazaki, Atsushi; Smith, H. Todd; Bagenal, Fran

    2018-05-01

    We report on the spatial distribution of a neutral oxygen cloud surrounding Jupiter's moon Io and along Io's orbit observed by the Hisaki satellite. Atomic oxygen and sulfur in Io's atmosphere escape from the exosphere mainly through atmospheric sputtering. Some of the neutral atoms escape from Io's gravitational sphere and form neutral clouds around Jupiter. The extreme ultraviolet spectrograph called EXCEED (Extreme Ultraviolet Spectroscope for Exospheric Dynamics) installed on the Japan Aerospace Exploration Agency's Hisaki satellite observed the Io plasma torus continuously in 2014-2015, and we derived the spatial distribution of atomic oxygen emissions at 130.4 nm. The results show that Io's oxygen cloud is composed of two regions, namely, a dense region near Io and a diffuse region with a longitudinally homogeneous distribution along Io's orbit. The dense region mainly extends on the leading side of Io and inside of Io's orbit. The emissions spread out to 7.6 Jupiter radii (RJ). Based on Hisaki observations, we estimated the radial distribution of the atomic oxygen number density and oxygen ion source rate. The peak atomic oxygen number density is 80 cm-3, which is spread 1.2 RJ in the north-south direction. We found more oxygen atoms inside Io's orbit than a previous study. We estimated the total oxygen ion source rate to be 410 kg/s, which is consistent with the value derived from a previous study that used a physical chemistry model based on Hisaki observations of ultraviolet emission ions in the Io plasma torus.

  4. Susceptibility to enhanced chemical migration from depression-focused preferential flow, High Plains aquifer

    USGS Publications Warehouse

    Gurdak, Jason J.; Walvoord, Michelle Ann; McMahon, Peter B.

    2008-01-01

    Aquifer susceptibility to contamination is controlled in part by the inherent hydrogeologic properties of the vadose zone, which includes preferential-flow pathways. The purpose of this study was to investigate the importance of seasonal ponding near leaky irrigation wells as a mechanism for depression-focused preferential flow and enhanced chemical migration through the vadose zone of the High Plains aquifer. Such a mechanism may help explain the widespread presence of agrichemicals in recently recharged groundwater despite estimates of advective chemical transit times through the vadose zone from diffuse recharge that exceed the historical period of agriculture. Using a combination of field observations, vadose zone flow and transport simulations, and probabilistic neural network modeling, we demonstrated that vadose zone transit times near irrigation wells range from 7 to 50 yr, which are one to two orders of magnitude faster than previous estimates based on diffuse recharge. These findings support the concept of fast and slow transport zones and help to explain the previous discordant findings of long vadose zone transit times and the presence of agrichemicals at the water table. Using predictions of aquifer susceptibility from probabilistic neural network models, we delineated approximately 20% of the areal extent of the aquifer to have conditions that may promote advective chemical transit times to the water table of <50 yr if seasonal ponding and depression-focused flow exist. This aquifer-susceptibility map may help managers prioritize areas for groundwater monitoring or implementation of best management practices.

  5. AIDS cases soar in past year.

    PubMed

    1994-01-01

    The World Health Organization's Global Programme on AIDS (WHO/GPA) released figures on July 1, 1994, estimating that around 1.5 million people developed AIDS between mid-1993 and mid-1994, three times as many as the previous 12 months. Approximately 200,000 new cases were estimated to exist in south and southeast Asia, over eight times as many as during the previous year. Only 985,119 cases of AIDS had actually been reported to WHO by June 30, 1994 since the onset of the pandemic, however, GPA estimates that about 4 million have actually developed AIDS. GPA estimates indicate that more than 16 million adults and over one million children have been infected since its inception. Close to three million new HIV infections occurred between mid-1993 and mid-1994. The cumulative total in south and southeast Asia reached 2.5 million. The cumulative total in this region is expected to escalate to 10 million by the year 2000 with a serious threat of its spread to China and other countries with a potential of new economic growth. HIV levels have been significant among injecting drug users (IDUs) in Ho Chi Minh City, Viet Nam (increasing from 2% in late 1992 to more than 30% at the end of 1993), in peninsular Malaysia, and in Yunnan province, China. In Bangkok (Thailand), Manipur (India), and Yangon (Myanmar) HIV prevalence rates among IDUs have risen to 50% since the late 1980s. Heterosexual transmission has been on the rise among female sex workers in several Indian states, in cities of Myanmar, Thailand, and Cambodia as well as among fishermen in eastern and western Indonesia. In Thailand over 3.5% of military recruits aged 21 years are infected, and in a northern province HIV prevalence exceeds 8% among women attending antenatal clinics and 20% of military recruits are infected. In industrial countries the number did not change much from mid-1993 because the number of new infections was balanced by AIDS deaths.

  6. Direct URCA process in neutron stars

    NASA Technical Reports Server (NTRS)

    Lattimer, James M.; Prakash, Madappa; Pethick, C. J.; Haensel, Pawel

    1991-01-01

    It is shown that the direct URCA process can occur in neutron stars if the proton concentration exceeds some critical value in the range 11-15 percent. The proton concentration, which is determined by the poorly known symmetry energy of matter above nuclear density, exceeds the critical value in many current calculations. If it occurs, the direct URCA process enhances neutrino emission and neutron star cooling rates by a large factor compared to any process considered previously.

  7. Spillway sizing of large dams in Austria

    NASA Astrophysics Data System (ADS)

    Reszler, Ch.; Gutknecht, D.; Blöschl, G.

    2003-04-01

    This paper discusses the basic philosophy of defining and calculating design floods for large dams in Austria, both for the construction of new dams and for a re-assessment of the safety of existing dams. Currently the consensus is to choose flood peak values corresponding to a probability of exceedance of 2 × 10^-4 for a given year. A two-step procedure is proposed to estimate the design flood discharges - a rapid assessment and a detailed assessment. In the rapid assessment the design discharge is chosen as a constant multiple of flood values read from a map of regionalised floods. The safety factor or multiplier takes care of the uncertainties of the local estimation and the regionalisation procedure. If the current design level of a spillway exceeds the value so estimated, no further calculations are needed. Otherwise (and for new dams) a detailed assessment is required. The idea of the detailed assessment is to draw upon all existing sources of information to constrain the uncertainties. The three main sources are local flood frequency analysis, where flood data are available; regional flood estimation from hydrologically similar catchments; and rainfall-runoff modelling using design storms as inputs. The three values obtained by these methods are then assessed and weighted in terms of their reliability to facilitate selection of the design flood. The uncertainty assessment of the various methods is based on confidence intervals, estimates of regional heterogeneity, data availability and sensitivity analyses of the rainfall-runoff model. As the definition of the design floods discussed above is based on probability concepts, it is also important to examine the excess risk, i.e. the possibility of the occurrence of a flood exceeding the design levels. The excess risk is evaluated based on a so-called Safety Check Flood (SCF), similar to the existing practice in other countries in Europe. The SCF is a vehicle to analyse the damage potential of an event of this magnitude. This is to provide guidance for protective measures for dealing with very extreme floods. The SCF is used to check the vulnerability of the system with regard to structural stability, morphological effects, etc., and to develop alarm plans and disaster mitigation procedures. The basis for estimating the SCF is the uncertainty assessment of the design flood values estimated by the three methods, including unlikely combinations of the controlling factors and the attendant uncertainties. Finally, we discuss the impact on the downstream valley of floods exceeding the design values and of smaller floods, and illustrate the basic concepts by examples from the recent flood in August 2002.

  8. Modeling environmental noise exceedances using non-homogeneous Poisson processes.

    PubMed

    Guarnaccia, Claudio; Quartieri, Joseph; Barrios, Juan M; Rodrigues, Eliane R

    2014-10-01

    In this work a non-homogeneous Poisson model is considered to study noise exposure. The Poisson process, counting the number of times that a sound level surpasses a threshold, is used to estimate the probability that a population is exposed to high levels of noise a certain number of times in a given time interval. The rate function of the Poisson process is assumed to be of a Weibull type. The presented model is applied to community noise data from Messina, Sicily (Italy). Four sets of data are used to estimate the parameters involved in the model. After the estimation and tuning are made, a way of estimating the probability that an environmental noise threshold is exceeded a certain number of times in a given time interval is presented. This estimation can be very useful in the study of noise exposure of a population and also to predict, given the current behavior of the data, the probability of occurrence of high levels of noise in the near future. One of the most important features of the model is that it implicitly takes into account different noise sources, which need to be treated separately when using usual models.
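
    As a rough illustration of this kind of model, the sketch below evaluates the probability of a given number of threshold exceedances in a time window for a non-homogeneous Poisson process with a Weibull-type (power-law) rate function. The parameter values are hypothetical placeholders, not the ones estimated from the Messina data.

```python
from math import exp, factorial

def weibull_mean_function(t, sigma, beta):
    """Cumulative rate Lambda(t) = (t/sigma)**beta of a Weibull-type NHPP."""
    return (t / sigma) ** beta

def prob_k_exceedances(t1, t2, k, sigma, beta):
    """P(exactly k threshold exceedances in (t1, t2]) for the NHPP."""
    mu = weibull_mean_function(t2, sigma, beta) - weibull_mean_function(t1, sigma, beta)
    return exp(-mu) * mu ** k / factorial(k)

# Hypothetical parameters (the paper estimates them from the measured noise data):
sigma, beta = 12.0, 0.8          # scale (hours) and shape of the rate function
p = sum(prob_k_exceedances(0.0, 24.0, k, sigma, beta) for k in range(3))
print(f"P(fewer than 3 exceedances in 24 h) = {p:.3f}")
```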

  9. Applications of Extreme Value Theory in Public Health.

    PubMed

    Thomas, Maud; Lemaitre, Magali; Wilson, Mark L; Viboud, Cécile; Yordanov, Youri; Wackernagel, Hans; Carrat, Fabrice

    2016-01-01

    We present how Extreme Value Theory (EVT) can be used in public health to predict future extreme events. We applied EVT to weekly rates of Pneumonia and Influenza (P&I) deaths over 1979-2011. We further explored the daily number of emergency department visits in a network of 37 hospitals over 2004-2014. Maxima of grouped consecutive observations were fitted to a generalized extreme value distribution, which was then used to estimate the probability of extreme values in specified time periods. An annual P&I death rate of 12 per 100,000 (the highest maximum observed) would be expected to be exceeded once over the next 30 years; in any given year, there is an estimated 3% risk that the P&I death rate will exceed this value. Over the past 10 years, the observed maximum increase in the daily number of visits from the same weekday between two consecutive weeks was 1,133. We estimated the probability of exceeding a daily increase of 1,000 visits in any given month at 0.37%. The EVT method can be applied to various topics in epidemiology, thus contributing to public health planning for extreme events.
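
    A minimal version of the block-maxima workflow can be written with scipy. The sketch below fits a generalized extreme value distribution to synthetic annual maxima and reads off an exceedance probability and a 30-year return level; the data, and therefore the numbers, are stand-ins rather than the P&I or emergency-department series used in the paper.

```python
import numpy as np
from scipy.stats import genextreme

rng = np.random.default_rng(0)
# Synthetic stand-in for annual maxima of weekly P&I death rates
# (the paper fits maxima of grouped consecutive observations).
annual_maxima = rng.gumbel(loc=8.0, scale=1.5, size=33)

# Fit a GEV distribution (scipy's shape c corresponds to -xi in the usual EVT convention)
c, loc, scale = genextreme.fit(annual_maxima)

level = 12.0                                   # e.g. a rate of 12 per 100,000
p_exceed = genextreme.sf(level, c, loc=loc, scale=scale)
return_level_30yr = genextreme.isf(1.0 / 30.0, c, loc=loc, scale=scale)

print(f"Annual probability of exceeding {level}: {p_exceed:.3f}")
print(f"Level expected to be exceeded once in 30 years: {return_level_30yr:.1f}")
```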

  10. Global statistical maps of extreme-event magnetic observatory 1 min first differences in horizontal intensity

    USGS Publications Warehouse

    Love, Jeffrey J.; Coisson, Pierdavide; Pulkkinen, Antti

    2016-01-01

    Analysis is made of the long-term statistics of three different measures of ground level, storm time geomagnetic activity: instantaneous 1 min first differences in horizontal intensity ΔBh, the root-mean-square of 10 consecutive 1 min differences S, and the ramp change R over 10 min. Geomagnetic latitude maps of the cumulative exceedances of these three quantities are constructed, giving the threshold (nT/min) for which activity within a 24 h period can be expected to occur once per year, decade, and century. Specifically, at geomagnetic 55°, we estimate once-per-century ΔBh, S, and R exceedances and a site-to-site, proportional, 1 standard deviation range [1 σ, lower and upper] to be, respectively, 1000, [690, 1450]; 500, [350, 720]; and 200, [140, 280] nT/min. At 40°, we estimate once-per-century ΔBh, S, and R exceedances and 1 σ values to be 200, [140, 290]; 100, [70, 140]; and 40, [30, 60] nT/min.

  11. 20 CFR 606.23 - Avoidance of tax credit reduction.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... taxable year in an amount not less than the sum of— (i) The potential additional taxes (as estimated by... exceeds the potential additional taxes for such taxable year as estimated under paragraph (a)(1)(i) of... advance is taken into account in determining the amount of the potential additional taxes. (2) The OWS...

  12. 20 CFR 606.23 - Avoidance of tax credit reduction.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... taxable year in an amount not less than the sum of— (i) The potential additional taxes (as estimated by... exceeds the potential additional taxes for such taxable year as estimated under paragraph (a)(1)(i) of... advance is taken into account in determining the amount of the potential additional taxes. (2) The OWS...

  13. 20 CFR 606.23 - Avoidance of tax credit reduction.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... taxable year in an amount not less than the sum of— (i) The potential additional taxes (as estimated by... exceeds the potential additional taxes for such taxable year as estimated under paragraph (a)(1)(i) of... advance is taken into account in determining the amount of the potential additional taxes. (2) The OWS...

  14. 20 CFR 606.23 - Avoidance of tax credit reduction.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... taxable year in an amount not less than the sum of— (i) The potential additional taxes (as estimated by... exceeds the potential additional taxes for such taxable year as estimated under paragraph (a)(1)(i) of... advance is taken into account in determining the amount of the potential additional taxes. (2) The OWS...

  15. 20 CFR 606.23 - Avoidance of tax credit reduction.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... taxable year in an amount not less than the sum of— (i) The potential additional taxes (as estimated by... exceeds the potential additional taxes for such taxable year as estimated under paragraph (a)(1)(i) of... advance is taken into account in determining the amount of the potential additional taxes. (2) The OWS...

  16. Autocorrelation structure of convective rainfall in semiarid-arid climate derived from high-resolution X-Band radar estimates

    NASA Astrophysics Data System (ADS)

    Marra, Francesco; Morin, Efrat

    2018-02-01

    Small-scale rainfall variability is a key factor driving runoff response in fast-responding systems, such as mountainous, urban and arid catchments. In this paper, the spatial-temporal autocorrelation structure of convective rainfall is derived at extremely high resolution (60 m, 1 min) using estimates from an X-Band weather radar recently installed in a semiarid-arid area. The 2-dimensional spatial autocorrelation of convective rainfall fields and the temporal autocorrelation of point-wise and distributed rainfall fields are examined. The autocorrelation structures are characterized by spatial anisotropy, spatial correlation distances of 1.5-2.8 km that rarely exceed 5 km, and temporal correlation distances of 1.8-6.4 min that rarely exceed 10 min. The observed spatial variability is expected to degrade estimates from rain gauges and microwave links more than those from satellite and C-/S-Band radars; conversely, the temporal variability is expected to degrade remote sensing estimates more than those from rain gauges. The presented results provide quantitative information for stochastic weather generators, cloud-resolving models, dryland hydrologic and agricultural models, and multi-sensor merging techniques.

  17. Estimators of The Magnitude-Squared Spectrum and Methods for Incorporating SNR Uncertainty

    PubMed Central

    Lu, Yang; Loizou, Philipos C.

    2011-01-01

    Statistical estimators of the magnitude-squared spectrum are derived based on the assumption that the magnitude-squared spectrum of the noisy speech signal can be computed as the sum of the (clean) signal and noise magnitude-squared spectra. Maximum a posteriori (MAP) and minimum mean square error (MMSE) estimators are derived based on a Gaussian statistical model. The gain function of the MAP estimator was found to be identical to the gain function used in the ideal binary mask (IdBM) that is widely used in computational auditory scene analysis (CASA). As such, it is binary, taking the value 1 if the local SNR exceeds 0 dB and the value 0 otherwise. By modeling the local instantaneous SNR as an F-distributed random variable, soft masking methods were derived that incorporate SNR uncertainty. In particular, the soft masking method that weights the noisy magnitude-squared spectrum by the a priori probability that the local SNR exceeds 0 dB was shown to be identical to the Wiener gain function. Results indicated that the proposed estimators yielded significantly better speech quality than the conventional MMSE spectral power estimators, in terms of lower residual noise and lower speech distortion. PMID:21886543
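
    The contrast between the binary (IdBM-like) gain and the Wiener-type soft gain can be shown in a few lines. The sketch below is a generic illustration under the assumption of known per-bin a priori SNRs; it is not the authors' estimator derivation, and the SNR values are arbitrary.

```python
import numpy as np

def binary_mask_gain(snr_linear):
    """Hard gain: 1 where the local SNR exceeds 0 dB (i.e. linear SNR > 1), else 0."""
    return (snr_linear > 1.0).astype(float)

def wiener_gain(xi):
    """Wiener gain as a function of the a priori SNR xi (linear scale)."""
    return xi / (1.0 + xi)

# Hypothetical per-bin a priori SNRs (linear) for a single frame
xi = np.array([0.1, 0.5, 1.0, 2.0, 10.0])
noisy_power = np.ones_like(xi)                 # placeholder magnitude-squared spectrum

hard_estimate = binary_mask_gain(xi) * noisy_power
soft_estimate = wiener_gain(xi) * noisy_power  # soft mask discussed in the abstract
print(hard_estimate, soft_estimate)
```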

  18. Experimental constraints on the damp peridotite solidus and oceanic mantle potential temperature

    NASA Astrophysics Data System (ADS)

    Sarafian, Emily; Gaetani, Glenn A.; Hauri, Erik H.; Sarafian, Adam R.

    2017-03-01

    Decompression of hot mantle rock upwelling beneath oceanic spreading centers causes it to exceed the melting point (solidus), producing magmas that ascend to form basaltic crust ~6 to 7 kilometers thick. The oceanic upper mantle contains ~50 to 200 micrograms per gram of water (H2O) dissolved in nominally anhydrous minerals, which—relative to its low concentration—has a disproportionate effect on the solidus that has not been quantified experimentally. Here, we present results from an experimental determination of the peridotite solidus containing known amounts of dissolved hydrogen. Our data reveal that the H2O-undersaturated peridotite solidus is hotter than previously thought. Reconciling geophysical observations of the melting regime beneath the East Pacific Rise with our experimental results requires that existing estimates for the oceanic upper mantle potential temperature be adjusted upward by about 60°C.

  19. The age of the meteorite recovery surfaces of Roosevelt County, New Mexico, USA

    NASA Technical Reports Server (NTRS)

    Zolensky, Michael E.; Rendell, Helen M.; Wilson, Ivan; Wells, Gordon L.

    1992-01-01

    We have obtained minimum age estimates for the sand units underlying the two largest meteorite deflation surfaces in Roosevelt County, New Mexico, USA, using thermoluminescence dating techniques. The dates obtained ranged from 53.5 (+/- 5.4) to 95.2 (+/- 9.5) ka, and must be considered lower limits for the terrestrial ages of the meteorites found within these specific deflation surfaces. These ages greatly exceed previous measurements from adjacent meteorite-producing deflation basins. We find that Roosevelt County meteorites are probably terrestrial contemporaries of the meteorites found at most accumulation zones in Antarctica. The apparent high meteorite accumulation rate reported for Roosevelt County by Zolensky et al. (1990) is incorrect, as it used an age of 16 ka for all Roosevelt County recovery surfaces. We conclude that the extreme variability of terrestrial ages of the Roosevelt County deflation surfaces effectively precludes their use for calculations of the meteorite accumulation rate at the Earth's surface.

  20. Ultrafast cavitation induced by an X-ray laser in water drops

    NASA Astrophysics Data System (ADS)

    Stan, Claudiu; Willmott, Philip; Stone, Howard; Koglin, Jason; Liang, Mengning; Aquila, Andrew; Robinson, Joseph; Gumerlock, Karl; Blaj, Gabriel; Sierra, Raymond; Boutet, Sebastien; Guillet, Serge; Curtis, Robin; Vetter, Sharon; Loos, Henrik; Turner, James; Decker, Franz-Josef

    2016-11-01

    Cavitation in pure water is governed by an intrinsic heterogeneous cavitation mechanism, which generally prevents the experimental generation of large tensions (negative pressures) in bulk liquid water. We developed an ultrafast decompression technique, based on the reflection of shock waves generated by an X-ray laser inside liquid drops, to stretch liquids to large negative pressures within a few nanoseconds. Using this method, we observed cavitation in liquid water at pressures below -100 MPa. These tensions significantly exceed those achieved previously, mainly due to the ultrafast decompression. The decompression induced by the X-ray-laser-generated shock waves is rapid enough to continue stretching the liquid phase after heterogeneous cavitation occurs, despite the rapid growth of cavitation nanobubbles. We developed a nucleation-and-growth hydrodynamic cavitation model that explains our results and estimates the concentration of heterogeneous cavitation nuclei in water.

  1. Pushing the limits of Monte Carlo simulations for the three-dimensional Ising model

    NASA Astrophysics Data System (ADS)

    Ferrenberg, Alan M.; Xu, Jiahao; Landau, David P.

    2018-04-01

    While the three-dimensional Ising model has defied analytic solution, various numerical methods like Monte Carlo, Monte Carlo renormalization group, and series expansion have provided precise information about the phase transition. Using Monte Carlo simulation that employs the Wolff cluster flipping algorithm with both 32-bit and 53-bit random number generators and data analysis with histogram reweighting and quadruple precision arithmetic, we have investigated the critical behavior of the simple cubic Ising model, with lattice sizes ranging from 16³ to 1024³. By analyzing data with cross correlations between various thermodynamic quantities obtained from the same data pool, e.g., logarithmic derivatives of magnetization and derivatives of magnetization cumulants, we have obtained the critical inverse temperature Kc = 0.221654626(5) and the critical exponent of the correlation length ν = 0.629912(86) with precision that exceeds all previous Monte Carlo estimates.
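
    As a side note on the histogram-reweighting step mentioned above, the sketch below applies generic single-histogram reweighting to estimate an observable at a nearby coupling from samples generated at K0. It is not the authors' quadruple-precision analysis code, and the energy and magnetization series are synthetic stand-ins.

```python
import numpy as np

def reweight(observable, energy, K0, K):
    """Single-histogram reweighting: estimate <observable> at coupling K from samples at K0.

    Assumes Boltzmann weights proportional to exp(-K * energy), so each sample is
    reweighted by exp(-(K - K0) * energy).
    """
    dlogw = -(K - K0) * energy
    dlogw -= dlogw.max()                # stabilize the exponentials
    w = np.exp(dlogw)
    return np.sum(w * observable) / np.sum(w)

# Hypothetical samples that a cluster-flip run at K0 might produce
rng = np.random.default_rng(1)
K0 = 0.2216
energy = rng.normal(-1.0e5, 3.0e2, size=20_000)       # stand-in energy time series
magnetization = rng.normal(0.4, 0.05, size=20_000)    # stand-in |m| time series

print(reweight(np.abs(magnetization), energy, K0, K=0.2217))
```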

  2. Modelling the spatial distribution of ammonia emissions in the UK.

    PubMed

    Hellsten, S; Dragosits, U; Place, C J; Vieno, M; Dore, A J; Misselbrook, T H; Tang, Y S; Sutton, M A

    2008-08-01

    Ammonia (NH3) emissions are characterised by high spatial variability at the local scale. When modelling the spatial distribution of NH3 emissions, it is important to provide robust emission estimates, since the model output is used to assess potential environmental impacts, e.g. exceedance of critical loads. The aim of this study was to provide a new, updated spatial NH3 emission inventory for the UK for the year 2000, based on an improved modelling approach and updated input datasets. The AENEID model distributes NH3 emissions from a range of agricultural activities, such as grazing and housing of livestock, storage and spreading of manures, and fertilizer application, at a 1-km grid resolution over the most suitable landcover types. The results of the emission calculation for the year 2000 are analysed and the methodology is compared with a previous spatial emission inventory for 1996.

  3. Seepage investigation of the Rio Grande from below Leasburg Dam, Leasburg, New Mexico, to above American Dam, El Paso, Texas, 2015

    USGS Publications Warehouse

    Briody, Alyse C.; Robertson, Andrew J.; Thomas, Nicole

    2016-03-22

    Net seepage gain or loss was computed for each subreach (the interval between two adjacent measurement locations along the river) by subtracting the discharge measured at the upstream location from the discharge measured at the closest downstream location along the river and then subtracting any inflow to the river within the subreach. An estimated gain or loss was determined to be meaningful when it exceeded the cumulative measurement uncertainty associated with the net seepage computation. The cumulative seepage loss in the 64-mile study reach in 2015 was 17.3 plus or minus 2.6 cubic feet per second. Gaining and losing reaches identified in this investigation generally correspond to seepage patterns observed in previous investigations conducted during dry years, with the gaining reaches occurring primarily at the southern (downstream) end of the basin.
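
    The net seepage computation described above reduces to a simple difference with an uncertainty check, as in the sketch below. The discharge, inflow and uncertainty values are illustrative placeholders, not measurements from the report.

```python
def net_seepage(q_upstream, q_downstream, inflows, uncertainty):
    """Net gain (+) or loss (-) for a subreach, flagged as meaningful only if it
    exceeds the cumulative measurement uncertainty (all values in ft^3/s)."""
    net = q_downstream - q_upstream - sum(inflows)
    return net, abs(net) > uncertainty

# Hypothetical subreach: values are illustrative, not from the 2015 seepage run.
net, meaningful = net_seepage(q_upstream=120.0, q_downstream=116.5,
                              inflows=[1.0], uncertainty=2.6)
print(f"net seepage = {net:+.1f} ft3/s, meaningful = {meaningful}")
```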

  4. Estimation of flood discharges at selected annual exceedance probabilities for unregulated, rural streams in Vermont, with a section on Vermont regional skew regression

    USGS Publications Warehouse

    Olson, Scott A.; with a section by Veilleux, Andrea G.

    2014-01-01

    This report provides estimates of flood discharges at selected annual exceedance probabilities (AEPs) for streamgages in and adjacent to Vermont and equations for estimating flood discharges at AEPs of 50-, 20-, 10-, 4-, 2-, 1-, 0.5-, and 0.2-percent (recurrence intervals of 2-, 5-, 10-, 25-, 50-, 100-, 200-, and 500-years, respectively) for ungaged, unregulated, rural streams in Vermont. The equations were developed using generalized least-squares regression. Flood-frequency and drainage-basin characteristics from 145 streamgages were used in developing the equations. The drainage-basin characteristics used as explanatory variables in the regression equations include drainage area, percentage of wetland area, and the basin-wide mean of the average annual precipitation. The average standard errors of prediction for estimating the flood discharges at the 50-, 20-, 10-, 4-, 2-, 1-, 0.5-, and 0.2-percent AEP with these equations are 34.9, 36.0, 38.7, 42.4, 44.9, 47.3, 50.7, and 55.1 percent, respectively. Flood discharges at selected AEPs for streamgages were computed by using the Expected Moments Algorithm. To improve estimates of the flood discharges for given exceedance probabilities at streamgages in Vermont, a new generalized skew coefficient was developed. The new generalized skew for the region is a constant, 0.44. The mean square error of the generalized skew coefficient is 0.078. This report describes a technique for using results from the regression equations to adjust an AEP discharge computed from a streamgage record. This report also describes a technique for using a drainage-area adjustment to estimate flood discharge at a selected AEP for an ungaged site upstream or downstream from a streamgage. The final regression equations and the flood-discharge frequency data used in this study will be available in StreamStats. StreamStats is a World Wide Web application providing automated regression-equation solutions for user-selected sites on streams.

  5. Geoelectric Hazard Maps for the Mid-Atlantic United States: 100 Year Extreme Values and the 1989 Magnetic Storm

    NASA Astrophysics Data System (ADS)

    Love, Jeffrey J.; Lucas, Greg M.; Kelbert, Anna; Bedrosian, Paul A.

    2018-01-01

    Maps of extreme value geoelectric field amplitude are constructed for the Mid-Atlantic United States, a region with high population density and critically important power grid infrastructure. Geoelectric field time series for the years 1983-2014 are estimated by convolving Earth surface impedances obtained from 61 magnetotelluric survey sites across the Mid-Atlantic with historical 1 min (2 min Nyquist) measurements of geomagnetic variation obtained from a nearby observatory. Statistical models are fitted to the maximum geoelectric amplitudes occurring during magnetic storms, and extrapolations made to estimate threshold amplitudes only exceeded, on average, once per century. For the Mid-Atlantic region, 100 year geoelectric exceedance amplitudes have a range of almost 3 orders of magnitude (from 0.04 V/km at a site in southern Pennsylvania to 24.29 V/km at a site in central Virginia), and they have significant geographic granularity, all of which is due to site-to-site differences in magnetotelluric impedance. Maps of these 100 year exceedance amplitudes resemble those of the estimated geoelectric amplitudes attained during the March 1989 magnetic storm, and, in that sense, the March 1989 storm resembles what might be loosely called a "100 year" event. The geoelectric hazard maps reported here stand in stark contrast with the 100 year geoelectric benchmarks developed for the North American Electric Reliability Corporation.
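
    The extrapolation step can be illustrated with a toy calculation: fit a parametric model to per-storm maxima and solve for the amplitude whose expected exceedance rate is once per century. The sketch below assumes a lognormal model and synthetic storm maxima; the paper's actual statistical model, data and site-specific results differ.

```python
import numpy as np
from scipy.stats import lognorm

rng = np.random.default_rng(2)
# Stand-in for the maximum geoelectric amplitude (V/km) of each magnetic storm;
# the paper derives these from magnetotelluric impedances and observatory data.
storm_maxima = lognorm.rvs(s=1.2, scale=0.3, size=120, random_state=rng)
years_of_record = 32                      # 1983-2014
storms_per_year = len(storm_maxima) / years_of_record

# Fit a lognormal model to the storm maxima (one plausible choice, not necessarily
# the paper's) and extrapolate to the amplitude exceeded, on average, once per century.
shape, loc, scale = lognorm.fit(storm_maxima, floc=0.0)
p_once_per_century = 1.0 / (100.0 * storms_per_year)
amplitude_100yr = lognorm.isf(p_once_per_century, shape, loc=loc, scale=scale)
print(f"Estimated 100 year exceedance amplitude: {amplitude_100yr:.2f} V/km")
```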

  6. Regionalization of precipitation characteristics in Montana using L-moments

    USGS Publications Warehouse

    Parrett, C.

    1998-01-01

    Dimensionless precipitation-frequency curves for estimating precipitation depths having small exceedance probabilities were developed for 2-, 6-, and 24-hour storm durations for three homogeneous regions in Montana. L-moment statistics were used to help define the homogeneous regions. The generalized extreme value distribution was used to construct the frequency curves for each duration within each region. The effective record length for each duration in each region was estimated using a graphical method and was found to range from 500 years for 6-hour duration data in Region 2 to 5,100 years for 24-hour duration data in Region 3. The temporal characteristics of storms were analyzed, and methods for estimating synthetic storm hyetographs were developed. Dimensionless depth-duration data were grouped by independent duration (2,6, and 24 hours) and by region, and the beta distribution was fit to dimensionless depth data for various incremental time intervals. Ordinary least-squares regression was used to develop relations between dimensionless depths for a key, short duration - termed the kernel duration - and dimensionless depths for other durations. The regression relations were used, together with the probabilistic dimensionless depth data for the kernel duration, to calculate dimensionless depth-duration curves for exceedance probabilities from .1 to .9. Dimensionless storm hyetographs for each independent duration in each region were constructed for median value conditions based on an exceedance probability of .5.
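
    For readers unfamiliar with the L-moment statistics used to define the homogeneous regions, the sketch below computes sample L-moments and L-moment ratios from probability-weighted moments of an ordered sample. It is a generic textbook estimator rather than code from the report, and the example depths are invented.

```python
import numpy as np

def sample_l_moments(x):
    """First three sample L-moments (l1, l2, l3) plus L-CV and L-skewness,
    computed from probability-weighted moments of the ordered sample."""
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    i = np.arange(1, n + 1)
    b0 = x.mean()
    b1 = np.sum((i - 1) / (n - 1) * x) / n
    b2 = np.sum((i - 1) * (i - 2) / ((n - 1) * (n - 2)) * x) / n
    l1, l2, l3 = b0, 2 * b1 - b0, 6 * b2 - 6 * b1 + b0
    return l1, l2, l3, l2 / l1, l3 / l2

# Illustrative annual-maximum 24-hour precipitation depths (inches), not Montana data
depths = [1.9, 2.3, 1.4, 3.1, 2.0, 2.7, 1.8, 4.2, 2.2, 1.6]
print(sample_l_moments(depths))
```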

  7. Aquatic risk assessment of a polycarboxylate dispersant polymer used in laundry detergents.

    PubMed

    Hamilton, J D; Freeman, M B; Reinert, K H

    1996-09-01

    Polycarboxylates enhance detergent soil removal properties and prevent encrustation of calcium salts on fabrics during washing. Laundry wastewater typically reaches wastewater treatment plants, which then discharge into aquatic environments. The yearly average concentration of a 4500 molecular weight (MW) sodium acrylate homopolymer reaching U.S. wastewater treatment plants will be approximately 0.7 mg/L. Publications showing the low to moderate acute aquatic toxicity of polycarboxylates are readily available. However, there are no published evaluations that estimate wastewater removal and characterize the probability of exceedance of acceptable chronic aquatic exposure. WW-TREAT can be used to estimate removal during wastewater treatment and PG-GRIDS can be applied to characterize risk for exceedance in wastewater treatment plant outfalls. After adjustments for the MW distribution of the homopolymer, WW-TREAT predicted that 6.5% will be removed in primary treatment plants and 60% will be removed in combined primary and activated sludge treatment plants. These estimates are consistent with wastewater fate tests, but underestimate homopolymer removal when homopolymer precipitation is included. Acceptable levels of chronic outfall (receiving water) exposure were based on aquatic toxicity testing in algae, fish, and Daphnia magna. PG-GRIDS predicted that no unreasonable risk for exceedance of acceptable chronic exposure will occur in the outfalls of U.S. wastewater plants. Future development of wastewater treatment models should consider polymer MW distribution and precipitation as factors that may alter removal of materials from wastewater.

  8. Geoelectric hazard maps for the Mid-Atlantic United States: 100 year extreme values and the 1989 magnetic storm

    USGS Publications Warehouse

    Love, Jeffrey J.; Lucas, Greg M.; Kelbert, Anna; Bedrosian, Paul A.

    2018-01-01

    Maps of extreme value geoelectric field amplitude are constructed for the Mid‐Atlantic United States, a region with high population density and critically important power grid infrastructure. Geoelectric field time series for the years 1983–2014 are estimated by convolving Earth surface impedances obtained from 61 magnetotelluric survey sites across the Mid‐Atlantic with historical 1 min (2 min Nyquist) measurements of geomagnetic variation obtained from a nearby observatory. Statistical models are fitted to the maximum geoelectric amplitudes occurring during magnetic storms, and extrapolations made to estimate threshold amplitudes only exceeded, on average, once per century. For the Mid‐Atlantic region, 100 year geoelectric exceedance amplitudes have a range of almost 3 orders of magnitude (from 0.04 V/km at a site in southern Pennsylvania to 24.29 V/km at a site in central Virginia), and they have significant geographic granularity, all of which is due to site‐to‐site differences in magnetotelluric impedance. Maps of these 100 year exceedance amplitudes resemble those of the estimated geoelectric amplitudes attained during the March 1989 magnetic storm, and, in that sense, the March 1989 storm resembles what might be loosely called a “100 year” event. The geoelectric hazard maps reported here stand in stark contrast with the 100 year geoelectric benchmarks developed for the North American Electric Reliability Corporation.

  9. Coastal ocean and shelf-sea biogeochemical cycling of trace elements and isotopes: lessons learned from GEOTRACES

    PubMed Central

    Lam, Phoebe J.; Lohan, Maeve C.; Kwon, Eun Young; Hatje, Vanessa; Shiller, Alan M.; Cutter, Gregory A.; Thomas, Alex; Milne, Angela; Thomas, Helmuth; Andersson, Per S.; Porcelli, Don; Tanaka, Takahiro; Geibert, Walter; Dehairs, Frank; Garcia-Orellana, Jordi

    2016-01-01

    Continental shelves and shelf seas play a central role in the global carbon cycle. However, their importance with respect to trace element and isotope (TEI) inputs to ocean basins is less well understood. Here, we present major findings on shelf TEI biogeochemistry from the GEOTRACES programme as well as a proof of concept for a new method to estimate shelf TEI fluxes. The case studies focus on advances in our understanding of TEI cycling in the Arctic, transformations within a major river estuary (Amazon), shelf sediment micronutrient fluxes and basin-scale estimates of submarine groundwater discharge. The proposed shelf flux tracer is 228-radium (T1/2 = 5.75 yr), which is continuously supplied to the shelf from coastal aquifers, sediment porewater exchange and rivers. Model-derived shelf 228Ra fluxes are combined with TEI/ 228Ra ratios to quantify ocean TEI fluxes from the western North Atlantic margin. The results from this new approach agree well with previous estimates for shelf Co, Fe, Mn and Zn inputs and exceed published estimates of atmospheric deposition by factors of approximately 3–23. Lastly, recommendations are made for additional GEOTRACES process studies and coastal margin-focused section cruises that will help refine the model and provide better insight on the mechanisms driving shelf-derived TEI fluxes to the ocean. This article is part of the themed issue ‘Biological and climatic impacts of ocean trace element chemistry’. PMID:29035267

  10. Coastal ocean and shelf-sea biogeochemical cycling of trace elements and isotopes: lessons learned from GEOTRACES

    NASA Astrophysics Data System (ADS)

    Charette, Matthew A.; Lam, Phoebe J.; Lohan, Maeve C.; Kwon, Eun Young; Hatje, Vanessa; Jeandel, Catherine; Shiller, Alan M.; Cutter, Gregory A.; Thomas, Alex; Boyd, Philip W.; Homoky, William B.; Milne, Angela; Thomas, Helmuth; Andersson, Per S.; Porcelli, Don; Tanaka, Takahiro; Geibert, Walter; Dehairs, Frank; Garcia-Orellana, Jordi

    2016-11-01

    Continental shelves and shelf seas play a central role in the global carbon cycle. However, their importance with respect to trace element and isotope (TEI) inputs to ocean basins is less well understood. Here, we present major findings on shelf TEI biogeochemistry from the GEOTRACES programme as well as a proof of concept for a new method to estimate shelf TEI fluxes. The case studies focus on advances in our understanding of TEI cycling in the Arctic, transformations within a major river estuary (Amazon), shelf sediment micronutrient fluxes and basin-scale estimates of submarine groundwater discharge. The proposed shelf flux tracer is 228-radium (T1/2 = 5.75 yr), which is continuously supplied to the shelf from coastal aquifers, sediment porewater exchange and rivers. Model-derived shelf 228Ra fluxes are combined with TEI/ 228Ra ratios to quantify ocean TEI fluxes from the western North Atlantic margin. The results from this new approach agree well with previous estimates for shelf Co, Fe, Mn and Zn inputs and exceed published estimates of atmospheric deposition by factors of approximately 3-23. Lastly, recommendations are made for additional GEOTRACES process studies and coastal margin-focused section cruises that will help refine the model and provide better insight on the mechanisms driving shelf-derived TEI fluxes to the ocean. This article is part of the themed issue 'Biological and climatic impacts of ocean trace element chemistry'.

  11. Societal costs of underage drinking.

    PubMed

    Miller, Ted R; Levy, David T; Spicer, Rebecca S; Taylor, Dexter M

    2006-07-01

    Despite minimum-purchase-age laws, young people regularly drink alcohol. This study estimated the magnitude and costs of problems resulting from underage drinking by category (traffic crashes, violence, property crime, suicide, burns, drownings, fetal alcohol syndrome, high-risk sex, poisonings, psychoses, and dependency treatment) and compared those costs with associated alcohol sales. Previous studies did not break out costs of alcohol problems by age. For each category of alcohol-related problems, we estimated fatal and nonfatal cases attributable to underage alcohol use. We multiplied alcohol-attributable cases by estimated costs per case to obtain total costs for each problem. Underage drinking accounted for at least 16% of alcohol sales in 2001. It led to 3,170 deaths and 2.6 million other harmful events. The estimated $61.9 billion bill (relative SE = 18.5%) included $5.4 billion in medical costs, $14.9 billion in work loss and other resource costs, and $41.6 billion in lost quality of life. Quality-of-life costs, which accounted for 67% of total costs, required challenging indirect measurement. Alcohol-attributable violence and traffic crashes dominated the costs. Leaving aside quality of life, the societal harm of $1 per drink consumed by an underage drinker exceeded the average purchase price of $0.90 or the associated $0.10 in tax revenues. Recent attention has focused on problems resulting from youth use of illicit drugs and tobacco. In light of the associated substantial injuries, deaths, and high costs to society, youth drinking behaviors merit the same kind of serious attention.

  12. Metal-to-insulator transition induced by UV illumination in a single SnO2 nanobelt

    NASA Astrophysics Data System (ADS)

    Viana, E. R.; Ribeiro, G. M.; de Oliveira, A. G.; González, J. C.

    2017-11-01

    An individual tin oxide (SnO2) nanobelt was connected in a back-gate field-effect transistor configuration and the conductivity of the nanobelt was measured at temperatures from 400 K to 4 K, in darkness and under UV illumination. In darkness, the SnO2 nanobelt showed semiconductor behavior over the whole temperature range measured. However, under UV illumination the photoinduced carrier density was high enough to lead to a metal-to-insulator transition (MIT) near room temperature, at T_MIT = 240 K. By measuring the current versus gate voltage curves, and considering the electrostatic properties of a non-ideal conductor for the SnO2 nanobelt on top of a gate-oxide substrate, we estimated the capacitance per unit length, the mobility, and the carrier density. In darkness, the carrier density was estimated to be 5-10 × 10¹⁸ cm⁻³, in agreement with our previously reported result (Phys. Status Solidi RRL 6, 262-4 (2012)). Under UV illumination, however, the carrier density near T_MIT was estimated to be 0.2-3.8 × 10¹⁹ cm⁻³, which exceeded the critical Mott density, estimated to be 2.8 × 10¹⁹ cm⁻³, above 240 K. These results show that the electrical properties of SnO2 nanobelts can be drastically modified and easily tuned from semiconducting to metallic states as a function of temperature and light.

  13. Exploring Kepler Giant Planets in the Habitable Zone

    NASA Astrophysics Data System (ADS)

    Hill, Michelle L.; Kane, Stephen R.; Seperuelo Duarte, Eduardo; Kopparapu, Ravi K.; Gelino, Dawn M.; Wittenmyer, Robert A.

    2018-06-01

    The Kepler mission found hundreds of planet candidates within the Habitable Zones (HZ) of their host stars, including over 70 candidates with radii larger than three Earth radii (R⊕) within the optimistic HZ (OHZ). These giant planets are potential hosts to large terrestrial satellites (or exomoons), which would also exist in the HZ. We calculate the occurrence rates of giant planets (Rp = 3.0–25 R⊕) in the OHZ, and find a frequency of (6.5 ± 1.9)% for G stars, (11.5 ± 3.1)% for K stars, and (6 ± 6)% for M stars. We compare this with previously estimated occurrence rates of terrestrial planets in the HZ of G, K, and M stars and find that if each giant planet has one large terrestrial moon, then these moons are less likely to exist in the HZ than terrestrial planets. However, if each giant planet holds more than one moon, then the occurrence rates of moons in the HZ would be comparable to those of terrestrial planets, and could potentially exceed them. We estimate the mass of each planet candidate using the mass–radius relationship developed by Chen & Kipping. We calculate the Hill radius of each planet to determine the region of influence in which any attached moon may reside, then calculate the estimated angular separation of the moon and planet for future imaging missions. Finally, we estimate the radial velocity semi-amplitudes of each planet for use in follow-up observations.
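
    The Hill-radius and angular-separation step can be reproduced approximately with the standard formula R_H = a (Mp / 3 M*)^(1/3) and a small-angle conversion, as sketched below. The planet, star and distance values are hypothetical and the constants are rounded; this is not the authors' pipeline.

```python
import math

AU_M = 1.495978707e11          # astronomical unit [m]
M_SUN = 1.989e30               # solar mass [kg]
M_EARTH = 5.972e24             # Earth mass [kg]
PC_M = 3.0857e16               # parsec [m]

def hill_radius_au(a_au, m_planet_earth, m_star_sun):
    """Hill radius (AU) of a planet on a circular orbit of semimajor axis a."""
    q = (m_planet_earth * M_EARTH) / (3.0 * m_star_sun * M_SUN)
    return a_au * q ** (1.0 / 3.0)

def max_angular_separation_arcsec(r_hill_au, distance_pc):
    """Maximum moon-planet separation on the sky if the moon sits at the Hill radius."""
    return math.degrees((r_hill_au * AU_M) / (distance_pc * PC_M)) * 3600.0

# Hypothetical Kepler-like giant: 100 Earth masses at 1.2 AU around a 0.9 solar-mass star, 300 pc away
r_h = hill_radius_au(1.2, 100.0, 0.9)
print(f"Hill radius: {r_h:.3f} AU")
print(f"Max angular separation: {max_angular_separation_arcsec(r_h, 300.0) * 1e3:.3f} mas")
```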

  14. Coastal ocean and shelf-sea biogeochemical cycling of trace elements and isotopes: lessons learned from GEOTRACES.

    PubMed

    Charette, Matthew A; Lam, Phoebe J; Lohan, Maeve C; Kwon, Eun Young; Hatje, Vanessa; Jeandel, Catherine; Shiller, Alan M; Cutter, Gregory A; Thomas, Alex; Boyd, Philip W; Homoky, William B; Milne, Angela; Thomas, Helmuth; Andersson, Per S; Porcelli, Don; Tanaka, Takahiro; Geibert, Walter; Dehairs, Frank; Garcia-Orellana, Jordi

    2016-11-28

    Continental shelves and shelf seas play a central role in the global carbon cycle. However, their importance with respect to trace element and isotope (TEI) inputs to ocean basins is less well understood. Here, we present major findings on shelf TEI biogeochemistry from the GEOTRACES programme as well as a proof of concept for a new method to estimate shelf TEI fluxes. The case studies focus on advances in our understanding of TEI cycling in the Arctic, transformations within a major river estuary (Amazon), shelf sediment micronutrient fluxes and basin-scale estimates of submarine groundwater discharge. The proposed shelf flux tracer is 228-radium ( T 1/2  = 5.75 yr), which is continuously supplied to the shelf from coastal aquifers, sediment porewater exchange and rivers. Model-derived shelf 228 Ra fluxes are combined with TEI/ 228 Ra ratios to quantify ocean TEI fluxes from the western North Atlantic margin. The results from this new approach agree well with previous estimates for shelf Co, Fe, Mn and Zn inputs and exceed published estimates of atmospheric deposition by factors of approximately 3-23. Lastly, recommendations are made for additional GEOTRACES process studies and coastal margin-focused section cruises that will help refine the model and provide better insight on the mechanisms driving shelf-derived TEI fluxes to the ocean.This article is part of the themed issue 'Biological and climatic impacts of ocean trace element chemistry'. © 2015 The Authors.

  15. Baseline toxicity of a chlorobenzene mixture and total body residues measured and estimated with solid-phase microextraction.

    PubMed

    Leslie, Heather A; Hermens, Joop L M; Kraak, Michiel H S

    2004-08-01

    Body residues of compounds with a narcotic mode of action that exceed critical levels result in baseline toxicity in organisms. Previous studies have shown that internal concentrations in organisms also can be estimated by way of passive sampling. In this experiment, solid-phase microextraction (SPME) fibers were used as a tool to estimate the body residues, which were then compared to measured levels. Past application of SPME fibers in the assessment of toxicity risk of samples has focused on separate exposure of fibers and organisms, often necessitated by the amount of agitation needed in order to achieve steady state in the fibers within a convenient time period. Uptake kinetic studies have shown that in SPME fibers with thin coatings, equilibrium concentrations can be reached without agitation within the time frame of a toxicity test. In contrast to toxicity experiments to date, the SPME fibers in the current study were exposed concomitantly to the test water with the organisms, ensuring an exposure under the exact same conditions. Fibers and two aquatic invertebrate species were exposed to a mixture of four chlorobenzenes with a narcotic mode of action. The total body residue of these compounds in the organisms was determined, as was the acute toxicity resulting from the accumulation. The total body residues of both species were correlated to the total concentrations in SPME fibers. It was concluded that toxicity could be predicted based on total body residue (TBR) estimates from fiber concentrations.

  16. Using present day observations to detect when ocean acidification exceeds natural variability of surface seawater Ωaragonite

    NASA Astrophysics Data System (ADS)

    Sutton, A.; Sabine, C. L.; Feely, R. A.

    2016-02-01

    One of the major challenges to assessing the impact of ocean acidification on marine life is the need to better understand the magnitude of long-term change in the context of natural variability. High-frequency moored observations can be highly effective in defining interannual, seasonal, and subseasonal variability at key locations. Here we present a monthly aragonite saturation state (Ωaragonite) climatology for 15 open ocean, coastal, and coral reef locations using 3-hourly moored observations of surface seawater pCO2 and pH collected together since as early as 2009. We then use these present-day surface mooring observations to estimate pre-industrial variability at each location and compare these results to previous modeling studies addressing global-scale variability and change. Our observations suggest that open ocean sites, especially in the subtropics, are experiencing Ωaragonite values throughout much of the year that are outside the range of pre-industrial values. In coastal and coral reef ecosystems, which have higher natural variability, seasonal patterns in which present-day Ωaragonite values exceed pre-industrial bounds are emerging, with some sites exhibiting subseasonal conditions approaching Ωaragonite = 1. Linking these seasonal patterns in carbonate chemistry to biological processes in these regions is critical to identifying when and where marine life may encounter Ωaragonite values outside the conditions to which it has adapted.

  17. Co-benefits of mitigating global greenhouse gas emissions for future air quality and human health

    NASA Astrophysics Data System (ADS)

    West, J. Jason; Smith, Steven J.; Silva, Raquel A.; Naik, Vaishali; Zhang, Yuqiang; Adelman, Zachariah; Fry, Meridith M.; Anenberg, Susan; Horowitz, Larry W.; Lamarque, Jean-Francois

    2013-10-01

    Actions to reduce greenhouse gas (GHG) emissions often reduce co-emitted air pollutants, bringing co-benefits for air quality and human health. Past studies typically evaluated near-term and local co-benefits, neglecting the long-range transport of air pollutants, long-term demographic changes, and the influence of climate change on air quality. Here we simulate the co-benefits of global GHG reductions on air quality and human health using a global atmospheric model and consistent future scenarios, via two mechanisms: reducing co-emitted air pollutants, and slowing climate change and its effect on air quality. We use new relationships between chronic mortality and exposure to fine particulate matter and ozone, global modelling methods and new future scenarios. Relative to a reference scenario, global GHG mitigation avoids 0.5+/-0.2, 1.3+/-0.5 and 2.2+/-0.8 million premature deaths in 2030, 2050 and 2100. Global average marginal co-benefits of avoided mortality are US$50-380 per tonne of CO2, which exceed previous estimates, exceed marginal abatement costs in 2030 and 2050, and are within the low range of costs in 2100. East Asian co-benefits are 10-70 times the marginal cost in 2030. Air quality and health co-benefits, especially as they are mainly local and near-term, provide strong additional motivation for transitioning to a low-carbon future.

  18. Biological treatment process for removing petroleum hydrocarbons from oil field produced waters

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tellez, G.; Khandan, N.

    1995-12-31

    The feasibility of removing petroleum hydrocarbons from oil field produced waters using biological treatment was evaluated under laboratory and field conditions. Based on previous laboratory studies, a field-scale prototype system was designed and operated over a period of four months. Two different sources of produced waters were tested in this field study at continuous flow rates ranging from 375 L/d to 1,800 L/d. One source of produced water was an open storage pit; the other, a closed storage tank. The TDS concentrations of these sources exceeded 50,000 mg/L; total n-alkanes exceeded 100 mg/L; total petroleum hydrocarbons exceeded 125 mg/L; and total BTEX exceeded 3 mg/L. Removals of total n-alkanes, total petroleum hydrocarbons, and BTEX remained consistently high, above 99%. During these tests, the energy costs averaged $0.20/bbl at 12 bbl/d.

  19. Methods for estimating flow-duration and annual mean-flow statistics for ungaged streams in Oklahoma

    USGS Publications Warehouse

    Esralew, Rachel A.; Smith, S. Jerrod

    2010-01-01

    Flow statistics can be used to provide decision makers with surface-water information needed for activities such as water-supply permitting, flow regulation, and other water rights issues. Flow statistics could be needed at any location along a stream. Most often, streamflow statistics are needed at ungaged sites, where no flow data are available to compute the statistics. Methods are presented in this report for estimating flow-duration and annual mean-flow statistics for ungaged streams in Oklahoma. Flow statistics included the (1) annual (period of record), (2) seasonal (summer-autumn and winter-spring), and (3) 12 monthly duration statistics, including the 20th, 50th, 80th, 90th, and 95th percentile flow exceedances, and the annual mean-flow (mean of daily flows for the period of record). Flow statistics were calculated from daily streamflow information collected from 235 streamflow-gaging stations throughout Oklahoma and areas in adjacent states. A drainage-area ratio method is the preferred method for estimating flow statistics at an ungaged location that is on a stream near a gage. The method generally is reliable only if the drainage-area ratio of the two sites is between 0.5 and 1.5. Regression equations that relate flow statistics to drainage-basin characteristics were developed for the purpose of estimating selected flow-duration and annual mean-flow statistics for ungaged streams that are not near gaging stations on the same stream. Regression equations were developed from flow statistics and drainage-basin characteristics for 113 unregulated gaging stations. Separate regression equations were developed by using U.S. Geological Survey streamflow-gaging stations in regions with similar drainage-basin characteristics. These equations can increase the accuracy of regression equations used for estimating flow-duration and annual mean-flow statistics at ungaged stream locations in Oklahoma. Streamflow-gaging stations were grouped by selected drainage-basin characteristics by using a k-means cluster analysis. Three regions were identified for Oklahoma on the basis of the clustering of gaging stations and a manual delineation of distinguishable hydrologic and geologic boundaries: Region 1 (western Oklahoma excluding the Oklahoma and Texas Panhandles), Region 2 (north- and south-central Oklahoma), and Region 3 (eastern and central Oklahoma). A total of 228 regression equations (225 flow-duration regressions and three annual mean-flow regressions) were developed using ordinary least-squares and left-censored (Tobit) multiple-regression techniques. These equations can be used to estimate 75 flow-duration statistics and annual mean-flow for ungaged streams in the three regions. Drainage-basin characteristics that were statistically significant independent variables in the regression analyses were (1) contributing drainage area; (2) station elevation; (3) mean drainage-basin elevation; (4) channel slope; (5) percentage of forested canopy; (6) mean drainage-basin hillslope; (7) soil permeability; and (8) mean annual, seasonal, and monthly precipitation. The accuracy of flow-duration regression equations generally decreased from high-flow exceedance (low-exceedance probability) to low-flow exceedance (high-exceedance probability) . This decrease may have happened because a greater uncertainty exists for low-flow estimates and low-flow is largely affected by localized geology that was not quantified by the drainage-basin characteristics selected. 
The standard errors of estimate of the regression equations for Region 1 (western Oklahoma) were substantially larger than the standard errors for the other regions, especially for low-flow exceedances. These errors may result from greater variability in low flow because of increased irrigation activity in this region. Regression equations may not be reliable for sites where the drainage-basin characteristics are outside the range of values of the independent variables.
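
    The drainage-area ratio method mentioned above can be expressed in a few lines, as in the sketch below. A simple proportional adjustment (exponent of 1) is assumed here, which may differ from the exponent used in the report, and the flows and areas are invented for illustration.

```python
def drainage_area_ratio_estimate(q_gaged, area_gaged, area_ungaged, exponent=1.0):
    """Transfer a flow statistic from a gage to a nearby ungaged site on the same stream.

    The method is generally considered reliable only when the drainage-area ratio
    falls between 0.5 and 1.5.
    """
    ratio = area_ungaged / area_gaged
    if not 0.5 <= ratio <= 1.5:
        raise ValueError(f"Drainage-area ratio {ratio:.2f} outside the 0.5-1.5 range")
    return q_gaged * ratio ** exponent

# Hypothetical: a 50th-percentile daily flow of 120 ft3/s at a gage draining 250 mi2,
# transferred to an ungaged site draining 300 mi2 on the same stream.
print(drainage_area_ratio_estimate(q_gaged=120.0, area_gaged=250.0, area_ungaged=300.0))
```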

  20. Disruption of Rhino Demography by Poachers May Lead to Population Declines in Kruger National Park, South Africa

    PubMed Central

    Ferreira, Sam M.; Greaver, Cathy; Knight, Grant A.; Knight, Mike H.; Smit, Izak P. J.; Pienaar, Danie

    2015-01-01

    The onslaught on the world's rhinoceroses continues despite numerous initiatives aimed at curbing it. When losses due to poaching exceed birth rates, rhino populations decline. We used previously published estimates and growth rates for black rhinos (2008) and white rhinos (2010), together with known poaching trends at the time, to predict population sizes and poaching rates in Kruger National Park, South Africa for 2013. Kruger is a stronghold for the south-eastern black rhino and southern white rhino. Counting rhinos from helicopters on 878 blocks of 3x3 km, estimating availability bias, and collating observer and detectability biases allowed population estimates using Jolly's estimator. The exponential escalation in the number of rhinos poached per day appears to have slowed. The black rhino estimate of 414 individuals (95% confidence interval: 343-487) was lower than the predicted 835 individuals (95% CI: 754-956). The white rhino estimate of 8,968 individuals (95% CI: 8,394-9,564) overlapped with the predicted 9,417 individuals (95% CI: 7,698-11,183). Density- and rainfall-dependent responses in birth and death rates of white rhinos provide opportunities to offset anticipated poaching effects through removals of rhinos from high-density areas to increase birth and survival rates. Biological management of rhinos, however, needs complementary management of the poaching threat, as present poaching trends predict detectable declines in white rhino abundances by 2018. Strategic responses such as anti-poaching that protect supply from illegal harvesting, reducing demand, and increasing supply commonly require crime network disruption as a first step, complemented by providing options for alternative economies in areas abutting protected areas. PMID:26121681

  1. Erosivity, surface runoff, and soil erosion estimation using GIS-coupled runoff-erosion model in the Mamuaba catchment, Brazil.

    PubMed

    Marques da Silva, Richarde; Guimarães Santos, Celso Augusto; Carneiro de Lima Silva, Valeriano; Pereira e Silva, Leonardo

    2013-11-01

    This study evaluates erosivity, surface runoff generation, and soil erosion rates for the Mamuaba catchment, a sub-catchment of the Gramame River basin (Brazil), by using the ArcView Soil and Water Assessment Tool (AvSWAT) model. Calibration and validation of the model were performed on a monthly basis, and it could simulate surface runoff and soil erosion to a good level of accuracy. Daily rainfall data between 1969 and 1989 from six rain gauges were used, and the monthly rainfall erosivity of each station was computed for all the studied years. To evaluate the calibration and validation of the model, monthly runoff data between January 1978 and April 1982 from one runoff gauge were used as well. The estimated soil loss rates were also realistic when compared to what can be observed in the field and to results from previous studies around the catchment. The long-term average soil loss was estimated at 9.4 t ha(-1) year(-1); most of the area of the catchment (60%) was predicted to suffer from a low to moderate erosion risk (<6 t ha(-1) year(-1)), and in 20% of the catchment the soil erosion was estimated to exceed 12 t ha(-1) year(-1). Expectedly, estimated soil loss was significantly correlated with measured rainfall and simulated surface runoff. Based on the estimated soil loss rates, the catchment was divided into four priority categories (low, moderate, high and very high) for conservation intervention. The study demonstrates that the AvSWAT model provides a useful tool for soil erosion assessment from catchments and facilitates planning for sustainable land management in northeastern Brazil.

  2. Modelling the effect on injuries and fatalities when changing mode of transport from car to bicycle.

    PubMed

    Nilsson, Philip; Stigson, Helena; Ohlin, Maria; Strandroth, Johan

    2017-03-01

    Several studies have estimated the health effects of active commuting, where a transport mode shift from car to bicycle reduces the risk of mortality and morbidity. Previous studies mainly assess the negative aspects of bicycling by referring to fatalities or police-reported injuries. However, most bicycle crashes are not reported by the police, and therefore hospital-reported data would cover a much higher share of injuries from bicycle crashes. The aim of the present study was to estimate the effect on injuries and fatalities from traffic crashes when shifting mode of transport from car to bicycle by using hospital-reported data. The study models the change in the number of injuries and fatalities due to a transport mode change, using a given flow change from car to bicycle and current injury and fatality risks per distance travelled for bicyclists and car occupants. Results show that bicyclists have a much higher injury risk (29 times) and fatality risk (10 times) than car occupants. In a scenario where car occupants in Stockholm living close to their workplace shift transport mode to bicycling, injuries, fatalities and health loss expressed in Disability-Adjusted Life Years (DALY) were estimated to increase. The vast majority of the estimated DALY increase was caused by severe injuries and fatalities, and it tends to fluctuate, so the number of severe crashes may exceed the estimate by a large margin. Despite the estimated increase in traffic crashes and DALYs, a transport mode shift is seen as a way towards a more sustainable society. Thus, the present study highlights the need for strategic preventive measures in order to minimize the negative impacts of increased bicycling. Copyright © 2016 Elsevier Ltd. All rights reserved.
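
    The core of the model described above is a per-kilometre risk difference applied to the shifted travel volume. The sketch below illustrates that calculation with invented risk values chosen only to reflect the reported 29-fold injury and 10-fold fatality risk ratios; it is not the study's model or data.

```python
def change_in_cases(shifted_km_per_year, risk_bike_per_km, risk_car_per_km):
    """Expected change in yearly cases when travel shifts from car to bicycle,
    holding per-kilometre risks constant."""
    return shifted_km_per_year * (risk_bike_per_km - risk_car_per_km)

# Hypothetical numbers chosen only to mirror the reported relative risks.
shifted_km = 50e6                                     # km per year moved from car to bicycle
injury_risk_car, fatality_risk_car = 0.5e-6, 2.0e-9   # per km, illustrative
injury_risk_bike = 29 * injury_risk_car
fatality_risk_bike = 10 * fatality_risk_car

print(f"Extra injuries/year:   {change_in_cases(shifted_km, injury_risk_bike, injury_risk_car):.0f}")
print(f"Extra fatalities/year: {change_in_cases(shifted_km, fatality_risk_bike, fatality_risk_car):.2f}")
```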

  3. Cost of improving Access to Psychological Therapies (IAPT) programme: an analysis of cost of session, treatment and recovery in selected Primary Care Trusts in the East of England region.

    PubMed

    Radhakrishnan, Muralikrishnan; Hammond, Geoffrey; Jones, Peter B; Watson, Alison; McMillan-Shields, Fiona; Lafortune, Louise

    2013-01-01

    Recent literature on Improving Access to Psychological Therapies (IAPT) has reported on improvements in clinical outcomes, changes in employment status and the concept of recovery attributable to IAPT treatment, but not on the costs of the programme. This article reports the costs associated with a single session, completed course of treatment and recovery for four treatment courses (i.e., remaining in low or high intensity treatment, stepping up or down) in IAPT services in 5 East of England region Primary Care Trusts. Costs were estimated using treatment activity data and gross financial information, along with assumptions about how these financial data could be broken down. The estimated average cost of a high intensity session was £177 and the average cost for a low intensity session was £99. The average cost of treatment was £493 (low intensity), £1416 (high intensity), £699 (stepped down), £1514 (stepped up) and £877 (All). The cost per recovered patient was £1043 (low intensity), £2895 (high intensity), £1653 (stepped down), £2914 (stepped up) and £1766 (All). Sensitivity analysis revealed that the costs are sensitive to cost ratio assumptions, indicating that inaccurate ratios are likely to influence overall estimates. Results indicate the cost per session exceeds previously reported estimates, but cost of treatment is only marginally higher. The current cost estimates are supportive of the originally proposed IAPT model on cost-benefit grounds. The study also provides a framework to estimate costs using financial data, especially when programmes have block contract arrangements. Replication and additional analyses along with evidence-based discussion regarding alternative, cost-effective methods of intervention is recommended. Copyright © 2012 Elsevier Ltd. All rights reserved.
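
    One simple way to relate the reported figures, assuming the cost per recovered patient equals the average treatment cost divided by the recovery rate (an assumption for illustration, not a relationship stated in the paper), is sketched below using the low-intensity numbers.

```python
def cost_per_recovered(avg_treatment_cost, recovery_rate):
    """Average treatment cost spread over the fraction of treated patients who recover."""
    return avg_treatment_cost / recovery_rate

# Reported low-intensity figures: 493 GBP average treatment cost, 1043 GBP per recovered
# patient. Under the simple relationship above, this implies a recovery rate of ~47%.
implied_recovery_rate = 493.0 / 1043.0
print(f"Implied low-intensity recovery rate: {implied_recovery_rate:.0%}")
print(f"Check: {cost_per_recovered(493.0, implied_recovery_rate):.0f} GBP per recovered patient")
```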

  4. Pollution Critical Load Exceedance and an Extended Growing Season as Modulators of Red Spruce Radial Growth

    NASA Astrophysics Data System (ADS)

    Kosiba, A. M.; Schaberg, P. G.; Engel, B. J.; Rayback, S. A.; Hawley, G. J.; Pontius, J.; Miller, E. K.

    2016-12-01

    Acidic sulfur (S) and nitrogen (N) deposition depletes cations such as calcium (Ca) from forest soils and has been linked to increases in foliar winter injury that led to the decline of red spruce (Picea rubens Sarg.) in the northeastern United States. We used results from a 30 m resolution steady-state S and N critical load exceedance model for New England to better understand the spatial connections between Ca depletion and red spruce productivity. To calculate exceedance, atmospheric deposition was estimated for a 5-year period (1984-1988) because tree health and productivity declines were expected to be most responsive to high acid loading. We examined how radial growth (basal area increment) of 441 dominant and co-dominant red spruce trees from 37 sites across Vermont and New Hampshire was related to modeled estimates of S and N critical load exceedance. We assessed growth using statistical models with exceedance as a source of variation, but which also included "year" and "elevation class" (to help account for climatic variability) and interactions among factors. Exceedance was significantly and negatively associated with mean growth for the study period (1951-2010) overall, and particularly for the 1980s and 2000s - periods of numerous and/or severe foliar winter injury events. However, climate-related sources of variation (year and elevation) accounted for most of the differences in growth over the chronology. Interestingly, recent growth for red spruce is now the highest recorded over our dendrochronological record for the species - suggesting that the factors shaping growth may be changing. Because red spruce is a temperate conifer that has the capacity to photosynthesize year-round, it is possible that warmer temperatures may be extending the functional growing season of the species thereby fostering increased growth. Data from elevational transects on Mount Mansfield (Vermont's tallest mountain) indicate that warmer spring, summer, fall and even winter temperatures are positively correlated with increased radial growth for red spruce.

  5. Simulations of a hypothetical temperature control structure at Detroit Dam on the North Santiam River, northwestern Oregon

    USGS Publications Warehouse

    Buccola, Norman L.; Stonewall, Adam J.; Rounds, Stewart A.

    2015-01-01

    Estimated egg-emergence days for endangered Upper Willamette River Chinook salmon (Oncorhynchus tshawytscha) and Upper Willamette River winter steelhead (Oncorhynchus mykiss) were assessed for all scenarios. Estimated spring Chinook fry emergence under SlidingWeir scenarios was 9 days later immediately downstream of Big Cliff Dam, and 4 days later at Greens Bridge compared with existing structural scenarios at Detroit Dam. Despite the inclusion of a hypothetical sliding weir at Detroit Dam, temperatures exceeded without-dams temperatures during November and December. These late-autumn exceedances likely represent the residual thermal effect of Detroit Lake operated to meet minimum dry-season release rates (supporting instream habitat and irrigation requirements) and lake levels specified by the current (2014) operating rules (supporting recreation and flood mitigation).

  6. Marine ecosystem appropriation in the Indo-Pacific: a case study of the live reef fish food trade

    NASA Technical Reports Server (NTRS)

    Warren-Rhodes, Kimberley; Sadovy, Yvonne; Cesar, Herman

    2003-01-01

    Our ecological footprint analyses of coral reef fish fisheries and, in particular, the live reef fish food trade (FT), indicate that many countries' current consumption exceeds estimated sustainable per capita global, regional and local coral reef production levels. Hong Kong appropriates 25% of SE Asia's annual reef fish production of 135 260-286 560 tonnes (t) through its FT demand, exceeding regional biocapacity by 8.3 times; reef fish fisheries demand outpaces sustainable production in the Indo-Pacific and SE Asia by 2.5 and 6 times. In contrast, most Pacific islands live within their own reef fisheries means, with local demand at < 20% of total capacity in Oceania. The FT annually requisitions up to 40% of SE Asia's estimated reef fish and virtually all of its estimated grouper yields. Our results underscore the unsustainable nature of the FT and the urgent need for regional management and conservation of coral reef fisheries in the Indo-Pacific.

  7. Estimating the Pollution Risk of Cadmium in Soil Using a Composite Soil Environmental Quality Standard

    PubMed Central

    Huang, Biao; Zhao, Yongcun

    2014-01-01

    Estimating standard-exceeding probabilities of toxic metals in soil is crucial for environmental evaluation. Because soil pH and land use types have strong effects on the bioavailability of trace metals in soil, they were taken into account by some environmental protection agencies in making composite soil environmental quality standards (SEQSs) that contain multiple metal thresholds under different pH and land use conditions. This study proposed a method for estimating the standard-exceeding probability map of soil cadmium using a composite SEQS. The spatial variability and uncertainty of soil pH and site-specific land use type were incorporated through simulated realizations by sequential Gaussian simulation. A case study was conducted using a sample data set from a 150 km2 area in Wuhan City and the composite SEQS for cadmium, recently set by the State Environmental Protection Administration of China. The method may be useful for evaluating the pollution risks of trace metals in soil with composite SEQSs. PMID:24672364
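
    A minimal sketch of the exceedance-probability idea follows, assuming simulated realizations of cadmium and pH at a single location and a hypothetical pH-dependent threshold standing in for the composite standard; the values are illustrative, not the standard itself.

    ```python
    # Minimal sketch: estimate the probability that simulated cadmium exceeds a
    # pH-dependent threshold. Values and thresholds are illustrative placeholders.
    import numpy as np

    rng = np.random.default_rng(0)
    n_real = 1000   # number of simulated realizations at one location

    # Stand-ins for geostatistical realizations (e.g., from sequential Gaussian simulation)
    cd_real = rng.lognormal(mean=-1.2, sigma=0.5, size=n_real)   # mg/kg
    ph_real = rng.normal(loc=6.3, scale=0.4, size=n_real)

    def cd_threshold(ph):
        # Hypothetical pH-dependent limits mimicking a composite standard
        if ph < 6.5:
            return 0.30
        elif ph <= 7.5:
            return 0.60
        return 1.00

    thresholds = np.array([cd_threshold(p) for p in ph_real])
    p_exceed = np.mean(cd_real > thresholds)
    print(f"Estimated probability of exceeding the standard: {p_exceed:.2f}")
    ```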

  8. Marine ecosystem appropriation in the Indo-Pacific: a case study of the live reef fish food trade.

    PubMed

    Warren-Rhodes, Kimberley; Sadovy, Yvonne; Cesar, Herman

    2003-11-01

    Our ecological footprint analyses of coral reef fish fisheries and, in particular, the live reef fish food trade (FT), indicate that many countries' current consumption exceeds estimated sustainable per capita global, regional and local coral reef production levels. Hong Kong appropriates 25% of SE Asia's annual reef fish production of 135 260-286 560 tonnes (t) through its FT demand, exceeding regional biocapacity by 8.3 times; reef fish fisheries demand outpaces sustainable production in the Indo-Pacific and SE Asia by 2.5 and 6 times. In contrast, most Pacific islands live within their own reef fisheries means, with local demand at < 20% of total capacity in Oceania. The FT annually requisitions up to 40% of SE Asia's estimated reef fish and virtually all of its estimated grouper yields. Our results underscore the unsustainable nature of the FT and the urgent need for regional management and conservation of coral reef fisheries in the Indo-Pacific.

  9. How Much (More) Should CEOs Make? A Universal Desire for More Equal Pay.

    PubMed

    Kiatpongsan, Sorapop; Norton, Michael I

    2014-11-01

    Do people from different countries and different backgrounds have similar preferences for how much more the rich should earn than the poor? Using survey data from 40 countries (N = 55,238), we compare respondents' estimates of the wages of people in different occupations (chief executive officers, cabinet ministers, and unskilled workers) to their ideals for what those wages should be. We show that ideal pay gaps between skilled and unskilled workers are significantly smaller than estimated pay gaps and that there is consensus across countries, socioeconomic status, and political beliefs. Moreover, data from 16 countries reveal that people dramatically underestimate actual pay inequality. In the United States, where underestimation was particularly pronounced, the actual pay ratio of CEOs to unskilled workers (354:1) far exceeded the estimated ratio (30:1), which in turn far exceeded the ideal ratio (7:1). In sum, respondents underestimate actual pay gaps, and their ideal pay gaps are even further from reality than those underestimates. © The Author(s) 2014.

  10. Phytoplankton growth rates in a light-limited environment, San Francisco Bay

    USGS Publications Warehouse

    Alpine, Andrea E.; Cloern, James E.

    1988-01-01

    This study was motivated by the need for quantitative measures of phytoplankton population growth rate in an estuarine environment, and was designed around the presumption that growth rates can be related empirically to light exposure. We conducted the study in San Francisco Bay (California, USA), which has large horizontal gradients in light availability (Zp:Zm) typical of many coastal plain estuaries, and nutrient concentrations that often exceed those presumed to limit phytoplankton growth (Cloern et al. 1985). We tested the hypothesis that light availability is the primary control of phytoplankton growth, and that previous estimates of growth rate based on the ratio of productivity to biomass (Cloern et al. 1985) are realistic. Specifically, we wanted to verify that growth rate varies spatially along horizontal gradients of light availability indexed as Zp:Zm, such that phytoplankton turnover rate is rapid in shallow clear areas (high Zp:Zm) and slow in deep turbid areas (low Zp:Zm). We used an in situ incubation technique which simulated vertical mixing, and measured both changes in cell number and carbon production as independent estimates of growth rate across a range of Zp:Zm ratios.

  11. Versatile Gaussian probes for squeezing estimation

    NASA Astrophysics Data System (ADS)

    Rigovacca, Luca; Farace, Alessandro; Souza, Leonardo A. M.; De Pasquale, Antonella; Giovannetti, Vittorio; Adesso, Gerardo

    2017-05-01

    We consider an instance of "black-box" quantum metrology in the Gaussian framework, where we aim to estimate the amount of squeezing applied on an input probe, without previous knowledge on the phase of the applied squeezing. By taking the quantum Fisher information (QFI) as the figure of merit, we evaluate its average and variance with respect to this phase in order to identify probe states that yield good precision for many different squeezing directions. We first consider the case of single-mode Gaussian probes with the same energy, and find that pure squeezed states maximize the average quantum Fisher information (AvQFI) at the cost of a performance that oscillates strongly as the squeezing direction is changed. Although the variance can be brought to zero by correlating the probing system with a reference mode, the maximum AvQFI cannot be increased in the same way. A different scenario opens if one takes into account the effects of photon losses: coherent states represent the optimal single-mode choice when losses exceed a certain threshold and, moreover, correlated probes can now yield larger AvQFI values than all single-mode states, on top of having zero variance.

  12. Probabilistic description of infant head kinematics in abusive head trauma.

    PubMed

    Lintern, T O; Nash, M P; Kelly, P; Bloomfield, F H; Taberner, A J; Nielsen, P M F

    2017-12-01

    Abusive head trauma (AHT) is a potentially fatal result of child abuse, but the mechanisms by which injuries occur are often unclear. To investigate the contention that shaking alone can elicit the injuries observed, effective computational models are necessary. The aim of this study was to develop a probabilistic model describing infant head kinematics in AHT. A deterministic model incorporating an infant's mechanical properties, subjected to different shaking motions, was developed in OpenSim. A Monte Carlo analysis was used to simulate the range of infant kinematics produced as a result of varying both the mechanical properties and the type of shaking motions. By excluding physically unrealistic shaking motions, worst-case shaking scenarios were simulated and compared to existing injury criteria for a newborn, a 4.5-month-old, and a 12-month-old infant. In none of the three cases were head kinematics observed to exceed previously estimated subdural haemorrhage injury thresholds. The results of this study provide no biomechanical evidence to demonstrate how shaking by a human alone can cause the injuries observed in AHT, suggesting either that additional factors, such as impact, are required, or that the current estimates of injury thresholds are incorrect.
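
    The Monte Carlo pattern described above can be sketched as follows; the surrogate response function, parameter ranges and threshold are placeholders, not the OpenSim model or the injury criteria used in the study.

    ```python
    # Illustrative Monte Carlo pattern only: sample uncertain mechanical properties
    # and shaking motions, run a deterministic response model, and record how often
    # the peak response exceeds an injury criterion. The surrogate model, parameter
    # ranges, and threshold below are hypothetical placeholders.
    import random

    def peak_head_response(neck_stiffness, shake_freq_hz):
        # Stand-in for the deterministic simulation output (arbitrary units)
        return shake_freq_hz ** 2 * 40.0 / neck_stiffness

    THRESHOLD = 500.0   # placeholder injury criterion (arbitrary units)
    random.seed(1)

    trials, exceed = 10_000, 0
    for _ in range(trials):
        stiffness = random.uniform(0.5, 2.0)   # sampled mechanical property
        freq = random.uniform(2.0, 4.0)        # physically plausible shaking rate (Hz)
        if peak_head_response(stiffness, freq) > THRESHOLD:
            exceed += 1

    print(f"Fraction of sampled scenarios exceeding the threshold: {exceed / trials:.3f}")
    ```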

  13. Brochosomal coats turn leafhopper (Insecta, Hemiptera, Cicadellidae) integument to superhydrophobic state

    PubMed Central

    Rakitov, Roman; Gorb, Stanislav N.

    2013-01-01

    Leafhoppers (Insecta, Hemiptera, Cicadellidae) actively coat their integuments with brochosomes, hollow proteinaceous spheres of usually 200–700 nm in diameter, with honeycombed walls. The coats have been previously suggested to act as a water-repellent and anti-adhesive protective barrier against the insect's own exudates. We estimated their wettability through contact angle (CA) measurements of water, diiodomethane, ethylene glycol and ethanol on detached wings of the leafhoppers Alnetoidia alneti, Athysanus argentarius and Cicadella viridis. Intact brochosome-coated integuments were repellent to all test liquids, except ethanol, and exhibited superhydrophobicity, with the average water CAs of 165–172°, and the apparent surface free energy (SFE) estimates not exceeding 0.74 mN m−1. By contrast, the integuments from which brochosomes were removed with a peeling technique using fluid polyvinylsiloxane displayed water CAs of only 103–129° and SFEs above 20 mN m−1. Observations of water-sprayed wings in a cryo-scanning electron microscope confirmed that brochosomal coats prevented water from contacting the integument. Their superhydrophobic properties appear to result from fractal roughness, which dramatically reduces the area of contact with high-surface-tension liquids, including, presumably, leafhopper exudates. PMID:23235705

  14. Slowing extrusion tectonics: Lowered estimate of post-Early Miocene slip rate for the Altyn Tagh fault

    USGS Publications Warehouse

    Yue, Y.; Ritts, B.D.; Graham, S.A.; Wooden, J.L.; Gehrels, G.E.; Zhang, Z.

    2004-01-01

    Determination of long-term slip rate for the Altyn Tagh fault is essential for testing whether Asian tectonics is dominated by lateral extrusion or distributed crustal shortening. Previous slip-history studies focused on either Quaternary slip-rate measurements or pre-Early Miocene total-offset estimates and do not allow a clear distinction between rates based on the two. The magmatic and metamorphic history revealed by SHRIMP zircon dating of clasts from Miocene conglomerate in the Xorkol basin north of the Altyn Tagh fault strikingly matches that of basement in the southern Qilian Shan and northern Qaidam regions south of the fault. This match requires that the post-Early Miocene long-term slip rate along the Altyn Tagh fault cannot exceed 10 mm/year, supporting the hypothesis of distributed crustal thickening for post-Early Miocene times. This low long-term slip rate and recently documented large pre-Early Miocene cumulative offset across the fault support a two-stage evolution, wherein Asian tectonics was dominated by lateral extrusion before the end of Early Miocene, and since then has been dominated by distributed crustal thickening and rapid plateau uplift. ?? 2003 Elsevier B.V. All rights reserved.

  15. Peak discharge, flood frequency, and peak stage of floods on Big Cottonwood Creek at U.S. Highway 50 near Coaldale, Colorado, and Fountain Creek below U.S. Highway 24 in Colorado Springs, Colorado, 2016

    USGS Publications Warehouse

    Kohn, Michael S.; Stevens, Michael R.; Mommandi, Amanullah; Khan, Aziz R.

    2017-12-14

    The U.S. Geological Survey (USGS), in cooperation with the Colorado Department of Transportation, determined the peak discharge, annual exceedance probability (flood frequency), and peak stage of two floods that took place on Big Cottonwood Creek at U.S. Highway 50 near Coaldale, Colorado (hereafter referred to as “Big Cottonwood Creek site”), on August 23, 2016, and on Fountain Creek below U.S. Highway 24 in Colorado Springs, Colorado (hereafter referred to as “Fountain Creek site”), on August 29, 2016. A one-dimensional hydraulic model was used to estimate the peak discharge. To define the flood frequency of each flood, peak-streamflow regional-regression equations or statistical analyses of USGS streamgage records were used to estimate annual exceedance probability of the peak discharge. A survey of the high-water mark profile was used to determine the peak stage, and the limitations and accuracy of each component also are presented in this report. Collection and computation of flood data, such as peak discharge, annual exceedance probability, and peak stage at structures critical to Colorado’s infrastructure are an important addition to the flood data collected annually by the USGS. The peak discharge of the August 23, 2016, flood at the Big Cottonwood Creek site was 917 cubic feet per second (ft³/s) with a measurement quality of poor (uncertainty plus or minus 25 percent or greater). The peak discharge of the August 29, 2016, flood at the Fountain Creek site was 5,970 ft³/s with a measurement quality of poor (uncertainty plus or minus 25 percent or greater). The August 23, 2016, flood at the Big Cottonwood Creek site had an annual exceedance probability of less than 0.01 (return period greater than the 100-year flood) and had an annual exceedance probability of greater than 0.005 (return period less than the 200-year flood). The August 23, 2016, flood event was caused by a precipitation event having an annual exceedance probability of 1.0 (return period of 1 year, or the 1-year storm), which is a statistically common (high probability) storm. The Big Cottonwood Creek site is downstream from the Hayden Pass Fire burn area, which dramatically altered the hydrology of the watershed and caused this statistically rare (low probability) flood from a statistically common (high probability) storm. The peak flood stage at the cross section closest to the U.S. Highway 50 culvert was 6,438.32 feet (ft) above the North American Datum of 1988 (NAVD 88). The August 29, 2016, flood at the Fountain Creek site had an estimated annual exceedance probability of 0.5505 (return period equal to the 1.8-year flood). The August 29, 2016, flood event was caused by a precipitation event having an annual exceedance probability of 1.0 (return period of 1 year, or the 1-year storm). The peak stage during this flood at the cross section closest to the U.S. Highway 24 bridge was 5,832.89 ft (NAVD 88). Slope-area indirect discharge measurements were carried out at the Big Cottonwood Creek and Fountain Creek sites to estimate peak discharge of the August 23, 2016, flood and August 29, 2016, flood, respectively. The USGS computer program Slope-Area Computation Graphical User Interface was used to compute the peak discharge by adding the surveyed cross sections with Manning roughness coefficient assignments to the high-water marks. The Manning roughness coefficients for each cross section were estimated in the field using the Cowan method.
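
    The annual exceedance probabilities quoted above map directly to return periods; a minimal sketch of that relation, using the report's AEP values:

    ```python
    # Relation between annual exceedance probability (AEP) and return period:
    # return period (years) = 1 / AEP.
    def return_period_years(aep):
        return 1.0 / aep

    print(return_period_years(0.01))     # 100-year flood
    print(return_period_years(0.005))    # 200-year flood
    print(return_period_years(0.5505))   # ~1.8-year flood (Fountain Creek estimate)
    ```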

  16. Children’s Phthalate Intakes and Resultant Cumulative Exposures Estimated from Urine Compared with Estimates from Dust Ingestion, Inhalation and Dermal Absorption in Their Homes and Daycare Centers

    PubMed Central

    Bekö, Gabriel; Weschler, Charles J.; Langer, Sarka; Callesen, Michael; Toftum, Jørn; Clausen, Geo

    2013-01-01

    Total daily intakes of diethyl phthalate (DEP), di(n-butyl) phthalate (DnBP), di(isobutyl) phthalate (DiBP), butyl benzyl phthalate (BBzP) and di(2-ethylhexyl) phthalate (DEHP) were calculated from phthalate metabolite levels measured in the urine of 431 Danish children between 3 and 6 years of age. For each child the intake attributable to exposures in the indoor environment via dust ingestion, inhalation and dermal absorption were estimated from the phthalate levels in the dust collected from the child’s home and daycare center. Based on the urine samples, DEHP had the highest total daily intake (median: 4.42 µg/d/kg-bw) and BBzP the lowest (median: 0.49 µg/d/kg-bw). For DEP, DnBP and DiBP, exposures to air and dust in the indoor environment accounted for approximately 100%, 15% and 50% of the total intake, respectively, with dermal absorption from the gas-phase being the major exposure pathway. More than 90% of the total intake of BBzP and DEHP came from sources other than indoor air and dust. Daily intake of DnBP and DiBP from all exposure pathways, based on levels of metabolites in urine samples, exceeded the Tolerable Daily Intake (TDI) for 22 and 23 children, respectively. Indoor exposures resulted in an average daily DiBP intake that exceeded the TDI for 14 children. Using the concept of relative cumulative Tolerable Daily Intake (TDIcum), which is applicable for phthalates that have established TDIs based on the same health endpoint, we examined the cumulative total exposure to DnBP, DiBP and DEHP from all pathways; it exceeded the tolerable levels for 30% of the children. From the three indoor pathways alone, several children had a cumulative intake that exceeded TDIcum. Exposures to phthalates present in the air and dust indoors meaningfully contribute to a child’s total intake of certain phthalates. Such exposures, by themselves, may lead to intakes exceeding current limit values. PMID:23626820
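
    The cumulative comparison described above follows a hazard-index pattern: each phthalate's intake is divided by its TDI and the ratios are summed. A minimal sketch, with intakes and TDIs that are illustrative placeholders rather than the study's values:

    ```python
    # Sketch of a relative cumulative TDI check: sum of (intake / TDI) ratios;
    # a sum above 1 means the cumulative tolerable level is exceeded.
    # All numbers below are illustrative placeholders, not study data.
    intakes_ug_per_kg_day = {"DnBP": 2.0, "DiBP": 4.0, "DEHP": 10.0}
    tdi_ug_per_kg_day     = {"DnBP": 10.0, "DiBP": 10.0, "DEHP": 50.0}

    hazard_index = sum(intakes_ug_per_kg_day[p] / tdi_ug_per_kg_day[p]
                       for p in intakes_ug_per_kg_day)
    print(f"Cumulative intake / TDI ratio: {hazard_index:.2f}")
    print("Exceeds cumulative TDI" if hazard_index > 1 else "Within cumulative TDI")
    ```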

  17. Application of sediment quality guidelines in the assessment and management of contaminated surficial sediments in Port Jackson (Sydney Harbour), Australia.

    PubMed

    Birch, Gavin F; Taylor, Stuart E

    2002-06-01

    Sediments in the Port Jackson estuary are polluted by a wide range of toxicants and concentrations are among the highest reported for any major harbor in the world. Sediment quality guidelines (SQGs), developed by the National Oceanographic and Atmospheric Administration (NOAA) in the United States are used to estimate possible adverse biological effects of sedimentary contaminants in Port Jackson to benthic animals. The NOAA guidelines indicate that Pb, Zn, DDD, and DDE are the most likely contaminants to cause adverse biological effects in Port Jackson. On an individual chemical basis, the detrimental effects due to these toxicants may occur over extensive areas of the harbor, i.e., about 40%, 30%, 15% and 50%, respectively. The NOAA SQGs can also be used to estimate the probability of sediment toxicity for contaminant mixtures by determining the number of contaminants exceeding an upper guideline value (effects range medium, or ERM), which predicts probable adverse biological effects. The exceedence approach is used in the current study to estimate the probability of sediment toxicity and to prioritize the harbour in terms of possible adverse effects on sediment-dwelling animals. Approximately 1% of the harbor is mantled with sediment containing more than ten contaminants exceeding their respective ERM concentrations and, based on NOAA data, these sediments have an 80% probability of being toxic. Sediment with six to ten contaminants exceeding their respective ERM guidelines extend over approximately 4% of the harbor and have a 57% probability of toxicity. These areas are located in the landward reaches of embayments in the upper and central harbor in proximity to the most industrialised and urbanized part of the catchment. Sediment in a further 17% of the harbor has between one and five exceedences and has a 32% probability of being toxic. The application of SQGs developed by NOAA has not been tested outside North America, and the validity of using them in Port Jackson has yet to be demonstrated. The screening approach adopted here is to use SQGs to identify contaminants of concern and to determine areas of environmental risk. The practical application and management implications of the results of this investigation are discussed.
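
    The screening logic described above can be sketched as a simple exceedance count mapped to the probability classes quoted in the abstract; the sample concentrations and ERM values in this example are illustrative only.

    ```python
    # Count contaminants exceeding their ERM guideline and map the count to a
    # broad probability-of-toxicity class (percentages as quoted in the abstract).
    # Concentrations and guideline values below are illustrative.
    erm = {"Pb": 218.0, "Zn": 410.0, "DDE": 0.027}       # example guideline values (mg/kg)
    sample = {"Pb": 350.0, "Zn": 300.0, "DDE": 0.040}    # hypothetical sediment sample

    n_exceed = sum(sample[c] > erm[c] for c in erm)

    if n_exceed > 10:
        risk = "high (~80% probability of toxicity)"
    elif n_exceed >= 6:
        risk = "elevated (~57% probability)"
    elif n_exceed >= 1:
        risk = "moderate (~32% probability)"
    else:
        risk = "low"
    print(n_exceed, risk)
    ```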

  18. Toxicity potential of disinfection agent in tannery wastewater.

    PubMed

    Tisler, Tatjana; Zagorc-Koncan, Jana; Cotman, Magda; Drolc, Andreja

    2004-09-01

    Wastewater from a tannery was investigated using chemical-specific analyses and assessment of the acute toxicity of the whole effluent over a 2-year period. The wastewater samples were overloaded with organic and inorganic compounds, and measured concentrations of the chemical parameters, as well as dilution factors estimating acute toxicity, frequently exceeded the permissible limits for the discharge of wastewater from a tannery into the receiving stream. In the later part of the monitoring programme, the toxicity of the samples was significantly increased in comparison to the previous samples. The agent for hide disinfection was assumed to be the reason for the increased toxicity of the wastewater samples, and the extremely high acute and chronic toxicity of the agent to bacteria, algae, daphnids, and fish confirmed this suspicion. The most sensitive species was Daphnia magna; the 48-h EC50 was 0.70 × 10⁻⁵ v/v% and the 21-d IC25 was 0.40 × 10⁻⁶ v/v% of the agent. After withdrawal of this highly toxic agent for hide disinfection from the technological process in the tannery, the toxicity of the wastewater declined to the previous level.

  19. 75 FR 42835 - Medicare Program; Inpatient Rehabilitation Facility Prospective Payment System for Federal Fiscal...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-07-22

    ... estimated cost of the case exceeds the adjusted outlier threshold. We calculate the adjusted outlier... to 80 percent of the difference between the estimated cost of the case and the outlier threshold. In... Federal Prospective Payment Rates VI. Update to Payments for High-Cost Outliers under the IRF PPS A...

  20. 42 CFR 457.218 - Repayment of Federal funds by installments.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... amount to be repaid exceeds 2½ percent of the estimated or actual annual State share for the State CHIP... State CHIP program is ongoing, CMS uses the annual estimated State share of State CHIP expenditures... State CHIP program has been terminated by Federal law or by the State, CMS uses the actual State share...

  1. 42 CFR 457.218 - Repayment of Federal funds by installments.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... amount to be repaid exceeds 2½ percent of the estimated or actual annual State share for the State CHIP... State CHIP program is ongoing, CMS uses the annual estimated State share of State CHIP expenditures... State CHIP program has been terminated by Federal law or by the State, CMS uses the actual State share...

  2. About an adaptively weighted Kaplan-Meier estimate.

    PubMed

    Plante, Jean-François

    2009-09-01

    The minimum averaged mean squared error nonparametric adaptive weights use data from m possibly different populations to make inferences about one population of interest. The definition of these weights is based on the properties of the empirical distribution function. We use the Kaplan-Meier estimate to let the weights accommodate right-censored data and use them to define the weighted Kaplan-Meier estimate. The proposed estimate is smoother than the usual Kaplan-Meier estimate and converges uniformly in probability to the target distribution. Simulations show that the finite-sample performance of the weighted Kaplan-Meier estimate exceeds that of the usual Kaplan-Meier estimate. A case study is also presented.
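
    For reference, a plain product-limit (Kaplan-Meier) estimator for right-censored data is sketched below; the adaptive weighting across the m populations is the paper's contribution and is not reproduced here.

    ```python
    # Plain Kaplan-Meier (product-limit) estimator for right-censored data.
    def kaplan_meier(times, events):
        """times: observed times; events: 1 if the event occurred, 0 if censored.
        Returns (distinct event times, survival estimates at those times)."""
        order = sorted(range(len(times)), key=lambda i: times[i])
        at_risk = len(times)
        surv, out_t, out_s = 1.0, [], []
        i = 0
        while i < len(order):
            t = times[order[i]]
            deaths = 0
            n_at_t = 0
            while i < len(order) and times[order[i]] == t:
                deaths += events[order[i]]
                n_at_t += 1
                i += 1
            if deaths > 0:
                surv *= 1.0 - deaths / at_risk   # product-limit update at each event time
                out_t.append(t)
                out_s.append(surv)
            at_risk -= n_at_t
        return out_t, out_s

    t, s = kaplan_meier([2, 3, 3, 5, 8, 8], [1, 1, 0, 1, 0, 1])
    print(list(zip(t, s)))
    ```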

  3. Modeling fortification of corn masa flour with folic acid: the potential impact on exceeding the tolerable upper intake level for folic acid, NHANES 2001–2008

    PubMed Central

    Hamner, Heather C.; Tinker, Sarah C.; Berry, R.J.; Mulinare, Joe

    2013-01-01

    Background: The Institute of Medicine set a tolerable upper intake level (UL) for usual daily total folic acid intake (1,000 µg). Less than 3% of US adults currently exceed the UL. Objective: The objective of this study was to determine if folic acid fortification of corn masa flour would increase the percentage of the US population who exceed the UL. Design: We used dietary intake data from NHANES 2001–2008 to estimate the percentage of adults and children who would exceed the UL if corn masa flour were fortified at 140 µg of folic acid/100 g. Results: In 2001–2008, 2.5% of the US adult population (aged ≥19 years) exceeded the UL, which could increase to 2.6% if fortification of corn masa flour occurred. With corn masa flour fortification, percentage point increases were small and not statistically significant for US adults exceeding the UL regardless of supplement use, sex, race/ethnicity, or age. Children aged 1–8 years, specifically supplement users, were the most likely to exceed their age-specific UL. With fortification of corn masa flour, there were no statistically significant increases in the percentage of US children who were exceeding their age-specific UL, and the percentage point increases were small. Conclusions: Our results suggest that fortification of corn masa flour would not significantly increase the percentage of individuals who would exceed the UL. Supplement use was the main factor related to exceeding the UL with or without fortification of corn masa flour and within all strata of sex, race/ethnicity, and age group. PMID:23316130

  4. Estimating Flow-Duration and Low-Flow Frequency Statistics for Unregulated Streams in Oregon

    USGS Publications Warehouse

    Risley, John; Stonewall, Adam J.; Haluska, Tana

    2008-01-01

    Flow statistical datasets, basin-characteristic datasets, and regression equations were developed to provide decision makers with surface-water information needed for activities such as water-quality regulation, water-rights adjudication, biological habitat assessment, infrastructure design, and water-supply planning and management. The flow statistics, which included annual and monthly period of record flow durations (5th, 10th, 25th, 50th, and 95th percent exceedances) and annual and monthly 7-day, 10-year (7Q10) and 7-day, 2-year (7Q2) low flows, were computed at 466 streamflow-gaging stations at sites with unregulated flow conditions throughout Oregon and adjacent areas of neighboring States. Regression equations, created from the flow statistics and basin characteristics of the stations, can be used to estimate flow statistics at ungaged stream sites in Oregon. The study area was divided into 10 regression modeling regions based on ecological, topographic, geologic, hydrologic, and climatic criteria. In total, 910 annual and monthly regression equations were created to predict the 7 flow statistics in the 10 regions. Equations to predict the five flow-duration exceedance percentages and the two low-flow frequency statistics were created with Ordinary Least Squares and Generalized Least Squares regression, respectively. The standard errors of estimate of the equations created to predict the 5th and 95th percent exceedances had medians of 42.4 and 64.4 percent, respectively. The standard errors of prediction of the equations created to predict the 7Q2 and 7Q10 low-flow statistics had medians of 51.7 and 61.2 percent, respectively. Standard errors for regression equations for sites in western Oregon were smaller than those in eastern Oregon partly because of a greater density of available streamflow-gaging stations in western Oregon than eastern Oregon. High-flow regression equations (such as the 5th and 10th percent exceedances) also generally were more accurate than the low-flow regression equations (such as the 95th percent exceedance and 7Q10 low-flow statistic). The regression equations predict unregulated flow conditions in Oregon. Flow estimates need to be adjusted if they are used at ungaged sites that are regulated by reservoirs or affected by water-supply and agricultural withdrawals if actual flow conditions are of interest. The regression equations are installed in the USGS StreamStats Web-based tool (http://water.usgs.gov/osw/streamstats/index.html, accessed July 16, 2008). StreamStats provides users with a set of annual and monthly flow-duration and low-flow frequency estimates for ungaged sites in Oregon in addition to the basin characteristics for the sites. Prediction intervals at the 90-percent confidence level also are automatically computed.
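
    As a minimal illustration of the flow-duration statistics described above, the Qp exceedance value is the flow exceeded p percent of the time; the sketch below computes it from a synthetic daily record rather than an actual gage.

    ```python
    # Flow-duration exceedance percentages from a daily streamflow record.
    # The flow series here is synthetic, for illustration only.
    import numpy as np

    rng = np.random.default_rng(42)
    daily_flow_cfs = rng.lognormal(mean=4.0, sigma=1.0, size=365 * 10)   # synthetic 10-year record

    def exceedance_flow(flows, pct_exceed):
        """Flow exceeded `pct_exceed` percent of the time (e.g., 95 -> low flow)."""
        return np.percentile(flows, 100 - pct_exceed)

    for p in (5, 10, 25, 50, 95):
        print(f"Q{p} (exceeded {p}% of the time): {exceedance_flow(daily_flow_cfs, p):.1f} cfs")
    ```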

  5. Ground-water quality in the Appalachian Plateaus, Kanawha River basin, West Virginia

    USGS Publications Warehouse

    Sheets, Charlynn J.; Kozar, Mark D.

    2000-01-01

    Water samples collected from 30 privately-owned and small public-supply wells in the Appalachian Plateaus of the Kanawha River Basin were analyzed for a wide range of constituents, including bacteria, major ions, nutrients, trace elements, radon, pesticides, and volatile organic compounds. Concentrations of most constituents from samples analyzed did not exceed U.S. Environmental Protection Agency (USEPA) standards. Constituents that exceeded drinking-water standards in at least one sample were total coliform bacteria, Escherichia coli (E. coli), iron, manganese, and sulfate. Total coliform bacteria were present in samples from five sites, and E. coli were present at only one site. USEPA secondary maximum contaminant levels (SMCLs) were exceeded for three constituents -- sulfate exceeded the SMCL of 250 mg/L (milligrams per liter) in samples from 2 of 30 wells; iron exceeded the SMCL of 300 µg/L (micrograms per liter) in samples from 12 of the wells, and manganese exceeded the SMCL of 50 µg/L in samples from 17 of the wells sampled. None of the samples contained concentrations of nutrients that exceeded the USEPA maximum contaminant levels (MCLs) for these constituents. The maximum concentration of nitrate detected was only 4.1 mg/L, which is below the MCL of 10 mg/L. Concentrations of nitrate in precipitation and shallow ground water are similar, potentially indicating that precipitation may be a source of nitrate in shallow ground water in the study area. Radon concentrations exceeded the recently proposed maximum contaminant level of 300 pCi/L at 50 percent of the sites sampled. The median concentration of radon was only 290 pCi/L. Radon-222 is a naturally occurring, carcinogenic, radioactive decay product of uranium. Concentrations, however, did not exceed the alternate maximum contaminant level (AMCL) for radon of 4,000 pCi/L in any of the 30 samples. Arsenic concentrations exceeded the proposed MCL of 5 µg/L at 4 of the 30 sites. No samples exceeded the current MCL of 50 µg/L. Neither pesticides nor volatile organic compounds (VOCs) were prevalent in the study area, and the concentrations of the compounds that were detected did not exceed any USEPA MCLs. Pesticides were detected in only two of the 30 wells sampled, but four pesticides -- atrazine, carbofuran, DCPA, and deethylatrazine -- were detected in one well; molinate was detected in the other well. All of the pesticides detected were at estimated concentrations of only 0.002 µg/L. Of the VOCs detected, trihalomethane compounds (THMs), which can result from chlorination of a well, were the most common. THMs were detected in 13 of the 30 wells sampled. Gasoline by-products, such as benzene, toluene, ethylbenzene and xylene (BTEX compounds) were detected in 10 of the 30 wells sampled. The maximum concentration of any of the VOCs detected in this study, however, was only 1.040 µg/L, for the THM dichlorofluoromethane. Water samples from 25 of the wells were analyzed for chlorofluorocarbons (CFCs) to estimate the apparent age of ground water. The analyses indicated that age of water ranged from 10 to greater than 57 years, and that the age of ground water could be correlated with the topographic setting of the wells sampled. Thus the apparent age of water in wells on hilltops was youngest (median of 13 years) and that of water in wells in valleys was oldest (median of 42 years). Water from wells on hillsides was intermediate in age (median of 29 years). These data can be used to define contributing areas to wells, corroborate or revise conceptual ground-water flow models, estimate contaminant travel times from spills to other sources such as nearby domestic or public supply wells, and to manage point and nonpoint source activities that may affect critical aquifers.

  6. Bioaccessible arsenic in the home environment in southwest England.

    PubMed

    Rieuwerts, J S; Searle, P; Buck, R

    2006-12-01

    Samples of household dust and garden soil were collected from twenty households in the vicinity of an ex-mining site in southwest England and from nine households in a control village. All samples were analysed by ICP-MS for pseudo-total arsenic (As) concentrations and the results show clearly elevated levels, with maximum As concentrations of 486 µg g⁻¹ in housedusts and 471 µg g⁻¹ in garden soils (and mean concentrations of 149 µg g⁻¹ and 262 µg g⁻¹, respectively). Arsenic concentrations in all samples from the mining area exceeded the UK Soil Guideline Value (SGV) of 20 µg g⁻¹. No significant correlation was observed between garden soil and housedust As concentrations. Bioaccessible As concentrations were determined in a small subset of samples using the Physiologically Based Extraction Test (PBET). For the stomach phase of the PBET, bioaccessibility percentages of 10-20% were generally recorded. Higher percentages (generally 30-45%) were recorded in the intestine phases with a maximum value (for one of the housedusts) of 59%. Data from the mining area were used, together with default values for soil ingestion rates and infant body weights from the Contaminated Land Exposure Assessment (CLEA) model, to derive estimates of As intake for infants and small children (0-6 years old). Dose estimates of up to 3.53 µg kg⁻¹ bw day⁻¹ for housedusts and 2.43 µg kg⁻¹ bw day⁻¹ for garden soils were calculated, compared to the index dose used for the derivation of the SGV of 0.3 µg kg⁻¹ bw day⁻¹ (based on health risk assessments). The index dose was exceeded by 75% (18 out of 24) of the estimated As doses that were calculated for children aged 0-6 years, a group which is particularly at risk from exposure via soil and dust ingestion. The results of the present study support the concerns expressed by previous authors about the significant As contamination in southwest England and the potential implications for human health.
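
    A rough sketch of the intake arithmetic behind these dose estimates follows; the ingestion rate and body weight are assumed CLEA-style defaults for illustration, not the study's exact inputs.

    ```python
    # Dose = concentration x bioaccessible fraction x daily ingestion / body weight,
    # compared against the index dose. Ingestion rate and body weight are assumptions.
    conc_ug_per_g = 262.0          # mean garden-soil As from the study area
    bioaccessible_fraction = 0.45  # intestinal-phase PBET estimate (illustrative)
    ingestion_g_per_day = 0.1      # assumed soil/dust ingestion rate for a small child
    body_weight_kg = 15.0          # assumed child body weight

    dose = conc_ug_per_g * bioaccessible_fraction * ingestion_g_per_day / body_weight_kg
    index_dose = 0.3               # ug/kg bw/day, as quoted above
    print(f"Estimated dose: {dose:.2f} ug/kg bw/day "
          f"({'exceeds' if dose > index_dose else 'is below'} the index dose of {index_dose})")
    ```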

  7. Not So Fast: Swimming Behavior of Sailfish during Predator-Prey Interactions using High-Speed Video and Accelerometry.

    PubMed

    Marras, Stefano; Noda, Takuji; Steffensen, John F; Svendsen, Morten B S; Krause, Jens; Wilson, Alexander D M; Kurvers, Ralf H J M; Herbert-Read, James; Boswell, Kevin M; Domenici, Paolo

    2015-10-01

    Billfishes are considered among the fastest swimmers in the oceans. Despite early estimates of extremely high speeds, more recent work showed that these predators (e.g., blue marlin) spend most of their time swimming slowly, rarely exceeding 2 m s⁻¹. Predator-prey interactions provide a context within which one may expect maximal speeds both by predators and prey. Beyond speed, however, an important component determining the outcome of predator-prey encounters is unsteady swimming (i.e., turning and accelerating). Although large predators are faster than their small prey, the latter show higher performance in unsteady swimming. To contrast the evading behaviors of their highly maneuverable prey, sailfish and other large aquatic predators possess morphological adaptations, such as elongated bills, which can be moved more rapidly than the whole body itself, facilitating capture of the prey. Therefore, it is an open question whether such supposedly very fast swimmers do use high-speed bursts when feeding on evasive prey, in addition to using their bill for slashing prey. Here, we measured the swimming behavior of sailfish by using high-frequency accelerometry and high-speed video observations during predator-prey interactions. These measurements allowed analyses of tail beat frequencies to estimate swimming speeds. Our results suggest that sailfish burst at speeds of about 7 m s⁻¹ and do not exceed swimming speeds of 10 m s⁻¹ during predator-prey interactions. These speeds are much lower than previous estimates. In addition, the oscillations of the bill during swimming with, and without, extension of the dorsal fin (i.e., the sail) were measured. We suggest that extension of the dorsal fin may allow sailfish to improve the control of the bill and minimize its yaw, hence preventing disturbance of the prey. Therefore, sailfish, like other large predators, may rely mainly on accuracy of movement and the use of the extensions of their bodies, rather than resorting to top speeds when hunting evasive prey. © The Author 2015. Published by Oxford University Press on behalf of the Society for Integrative and Comparative Biology. All rights reserved. For permissions please email: journals.permissions@oup.com.

  8. Antibiotic-resistant Escherichia coli in women with acute cystitis in Canada

    PubMed Central

    McIsaac, Warren J; Moineddin, Rahim; Meaney, Christopher; Mazzulli, Tony

    2013-01-01

    BACKGROUND: Trimethoprim-sulfamethoxazole (TMP-SMX) has been a traditional first-line antibiotic treatment for acute cystitis; however, guidelines do not recommend TMP-SMX in regions where Escherichia coli resistance exceeds 20%. While resistance is increasing, there are no recent Canadian estimates from a primary care setting to guide prescribing decisions. METHODS: A total of 330 family physicians assessed 752 women with suspected acute cystitis between 2009 and 2011. Physicians documented clinical features and collected urine for cultures for 430 (57.2%) women. The proportion of resistant isolates of E. coli and exact binomial 95% CIs were estimated nationally, and compared regionally and demographically. These estimates were compared with those from a 2002 national study. RESULTS: The proportion of TMP-SMX-resistant E. coli was 16.0% nationally (95% CI 11.3% to 21.8%). This was not statistically higher than 2002 (10.9% [P=0.14]). TMP-SMX resistance was increased in women ≤50 years of age (21.4%) compared with older women (10.7% [P=0.037]). In women with no antibiotic exposure in the previous three months, TMP-SMX-resistant E. coli remained more prevalent in younger women (21.8%) compared with older women (4.4% [P=0.003]). The proportion of ciprofloxacin-resistant E. coli was 5.5% nationally (95% CI 2.7% to 9.9%), and was increased compared with 2002 (1.1% [P=0.036]). Ciprofloxacin resistance was highest in British Columbia (17.7%) compared with other regions (2.7% [P=0.003]), and was increased compared with 2002 levels in this province (0.0% [P=0.025]). Nitrofurantoin-resistant E. coli levels were low (0.5% [95% CI 0.01% to 2.7%]). DISCUSSION: The proportion of TMP-SMX-resistant E. coli causing acute cystitis in women in Canada remains below 20% nationally, but may exceed this level in premenopausal women. Ciprofloxacin resistance has increased, notably in British Columbia. Nitrofurantoin resistance levels are low across the country. These observations indicate that TMP-SMX and nitrofurantoin remain appropriate empirical antibiotic agents for treating cystitis in primary care settings in Canada. PMID:24421825

  9. Comparison of methods for non-stationary hydrologic frequency analysis: Case study using annual maximum daily precipitation in Taiwan

    NASA Astrophysics Data System (ADS)

    Chen, Po-Chun; Wang, Yuan-Heng; You, Gene Jiing-Yun; Wei, Chih-Chiang

    2017-02-01

    Future climatic conditions will likely not satisfy the stationarity assumption. To address this concern, this study applied three methods to analyze non-stationarity in hydrologic conditions. Based on the principle of identifying distribution and trends (IDT) with time-varying moments, we employed the parametric weighted least squares (WLS) estimation in conjunction with the non-parametric discrete wavelet transform (DWT) and ensemble empirical mode decomposition (EEMD). Our aim was to evaluate the applicability of non-parametric approaches compared with traditional parametric methods. In contrast to most previous studies, which analyzed the non-stationarity of first moments, we incorporated second-moment analysis. Through the estimation of long-term risk, we were able to examine the behavior of return periods under two different definitions: the reciprocal of the exceedance probability of occurrence and the expected recurrence time. The proposed framework represents an improvement over stationary frequency analysis for the design of hydraulic systems. A case study was performed using precipitation data from major climate stations in Taiwan to evaluate the non-stationarity of annual maximum daily precipitation. The results demonstrate the applicability of these three methods in the identification of non-stationarity. For most cases, no significant differences were observed with regard to the trends identified using WLS, DWT, and EEMD. According to the results, a linear model should be able to capture time-variance in either the first or second moment, while parabolic trends should be used with caution due to their characteristic rapid increases. It is also observed that local variations in precipitation tend to be overemphasized by DWT and EEMD. The two definitions provided for the concept of return period allow for ambiguous interpretation. With the consideration of non-stationarity, the return period is relatively small under the definition of expected recurrence time compared to the estimation using the reciprocal of the exceedance probability of occurrence. However, the calculation of expected recurrence time is based on the assumption of perfect knowledge of long-term risk, which involves high uncertainty. When the risk is decreasing with time, the expected recurrence time will lead to divergence of the return period and make this definition inapplicable for engineering purposes.
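
    The two return-period definitions contrasted above can be illustrated with a toy non-stationary case: definition (i) is the reciprocal of the exceedance probability in a reference year, while definition (ii) is the expected waiting time to the first exceedance when the yearly probability changes over time. The trend used here is purely illustrative.

    ```python
    # Contrast the two return-period definitions for a non-stationary yearly
    # exceedance probability p(t). The trend in p(t) is an illustrative assumption.
    def expected_waiting_time(p_of_t, horizon=10_000):
        """E[T] = sum_x x * p_x * prod_{t<x} (1 - p_t), truncated at `horizon`."""
        expected, survive = 0.0, 1.0
        for year in range(1, horizon + 1):
            p = p_of_t(year)
            expected += year * p * survive
            survive *= 1.0 - p
        return expected

    def p_nonstationary(t):
        # Yearly exceedance probability rising from ~1% toward a 2% cap
        return min(0.02, 0.01 + 0.0001 * t)

    print(1.0 / p_nonstationary(1))                 # definition (i): ~99-year event today
    print(expected_waiting_time(p_nonstationary))   # definition (ii): shorter, because risk is rising
    ```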

  10. Analysis of ground-water data for selected wells near Holloman Air Force Base, New Mexico, 1950-95

    USGS Publications Warehouse

    Huff, G.F.

    1996-01-01

    Ground-water-level, ground-water-withdrawal, and ground-water-quality data were evaluated for trends. Holloman Air Force Base is located in the west-central part of Otero County, New Mexico. Ground-water-data analyses include assembly and inspection of U.S. Geological Survey and Holloman Air Force Base data, including ground-water-level data for public-supply and observation wells and withdrawal and water-quality data for public-supply wells in the area. Well Douglas 4 shows a statistically significant decreasing trend in water levels for 1972-86 and a statistically significant increasing trend in water levels for 1986-90. Water levels in wells San Andres 5 and San Andres 6 show statistically significant decreasing trends for 1972-93 and 1981-89, respectively. A mixture of statistically significant increasing trends, statistically significant decreasing trends, and lack of statistically significant trends over periods ranging from the early 1970s to the early 1990s is indicated for the Boles wells and wells near the Boles wells. Well Boles 5 shows a statistically significant increasing trend in water levels for 1981-90. Well Boles 5 and well 17S.09E.25.343 show no statistically significant trends in water levels for 1990-93 and 1988-93, respectively. For 1986-93, well Frenchy 1 shows a statistically significant decreasing trend in water levels. Ground-water withdrawal from the San Andres and Douglas wells regularly exceeded estimated ground-water recharge from San Andres Canyon for 1963-87. For 1951-57 and 1960-86, ground-water withdrawal from the Boles wells regularly exceeded total estimated ground-water recharge from Mule, Arrow, and Lead Canyons. Ground-water withdrawal from the San Andres and Douglas wells and from the Boles wells nearly equaled estimated ground-water recharge for 1989-93 and 1986-93, respectively. For 1987-93, ground-water withdrawal from the Escondido well regularly exceeded estimated ground-water recharge from Escondido Canyon, and ground-water withdrawal from the Frenchy wells regularly exceeded total estimated ground-water recharge from Dog and Deadman Canyons. Water-quality samples were collected from selected Douglas, San Andres, and Boles public-supply wells from December 1994 to February 1995. Concentrations of dissolved nitrate show the most consistent increases between current and historical data. Current concentrations of dissolved nitrate are greater than historical concentrations in 7 of 10 wells.

  11. Distribution of cadmium, iron and zinc in millstreams of hard winter wheat (Triticum aestivum L.)

    USDA-ARS?s Scientific Manuscript database

    Hard winter wheat (Triticum aestivum L.) is a major crop in the Great Plains of the United States, and our previous work demonstrated that wheat genotypes vary for grain cadmium accumulation, with some exceeding the CODEX standard (0.2 mg kg⁻¹). Previous reports of cadmium distribution in ...

  12. Is Recent Warming Unprecedented in the Common Era? Insights from PAGES2k data and the Last Millennium Reanalysis

    NASA Astrophysics Data System (ADS)

    Erb, M. P.; Emile-Geay, J.; McKay, N.; Hakim, G. J.; Steig, E. J.; Anchukaitis, K. J.

    2017-12-01

    Paleoclimate observations provide a critical context for 20th century warming by putting recent climate change into a longer-term perspective. Previous work (e.g. IPCC AR3-5) has claimed that recent decades are exceptional in the context of past centuries, though these statements are usually accompanied by large uncertainties and little spatial detail. Here we leverage a recent multiproxy compilation (PAGES2k Consortium, 2017) to revisit this long-standing question. We do so via two complementary approaches. The first approach compares multi-decadal averages and trends in PAGES2k proxy records, which include trees, corals, ice cores, and more. Numerous proxy records reveal that late 20th century values are extreme compared to the remainder of the recorded period, although considerable variability exists in the signals preserved in individual records. The second approach uses the same PAGES2k data blended with climate model output to produce an optimal analysis: the Last Millennium Reanalysis (LMR; Hakim et al., 2016). Unlike proxy data, LMR is spatially-complete and explicitly models uncertainty in proxy records, resulting in objective error estimates. The LMR results show that for nearly every region of the world, late 20th century temperatures exceed temperatures in previous multi-decadal periods during the Common Era, and 20th century warming rates exceed rates in previous centuries. An uncertainty with the present analyses concerns the interpretation of proxy records. PAGES2k included only records that are primarily sensitive to temperature, but many proxies may be influenced by secondary non-temperature effects. Additionally, the issue of seasonality is important as, for example, many temperature-sensitive tree ring chronologies in the Northern Hemisphere respond to summer or growing season temperature rather than annual-means. These uncertainties will be further explored. References Hakim, G. J., et al., 2016: The last millennium climate reanalysis project: Framework and first results. Journal of Geophysical Research: Atmospheres, 121(12), 6745-6764. http://doi.org/10.1002/2016JD024751 PAGES2k Consortium, 2017: A global multiproxy database for temperature reconstructions of the Common Era. Scientific Data, 1-33. http://doi.org/10.1038/sdata.2017.88

  13. Statistical Survey of Persistent Organic Pollutants: Risk Estimations to Humans and Wildlife through Consumption of Fish from U.S. Rivers.

    PubMed

    Batt, Angela L; Wathen, John B; Lazorchak, James M; Olsen, Anthony R; Kincaid, Thomas M

    2017-03-07

    U.S. EPA conducted a national statistical survey of fish tissue contamination at 540 river sites (representing 82 954 river km) in 2008-2009, and analyzed samples for 50 persistent organic pollutants (POPs), including 21 PCB congeners, 8 PBDE congeners, and 21 organochlorine pesticides. The survey results were used to provide national estimates of contamination for these POPs. PCBs were the most abundant, being measured in 93.5% of samples. Summed concentrations of the 21 PCB congeners had a national weighted mean of 32.7 μg/kg and a maximum concentration of 857 μg/kg, and exceeded the human health cancer screening value of 12 μg/kg in 48% of the national sampled population of river km, and in 70% of the urban sampled population. PBDEs (92.0%), chlordane (88.5%) and DDT (98.7%) were also detected frequently, although at lower concentrations. Results were examined by subpopulations of rivers, including urban or nonurban and three defined ecoregions. PCBs, PBDEs, and DDT occur at significantly higher concentrations in fish from urban rivers versus nonurban; however, the distribution varied more among the ecoregions. Wildlife screening values previously published for bird and mammalian species were converted from whole fish to fillet screening values, and used to estimate risk for wildlife through fish consumption.

  14. Aeroshell Design Techniques for Aerocapture Entry Vehicles

    NASA Technical Reports Server (NTRS)

    Dyke, R. Eric; Hrinda, Glenn A.

    2004-01-01

    A major goal of NASA's In-Space Propulsion Program is to shorten trip times for scientific planetary missions. To meet this challenge, arrival speeds will increase, requiring significant braking for orbit insertion, and thus increased deceleration propellant mass that may exceed launch lift capabilities. A technology called aerocapture has been developed to expand the mission potential of exploratory probes destined for planets with suitable atmospheres. Aerocapture inserts a probe into planetary orbit via a single pass through the atmosphere using the probe's aeroshell drag to reduce velocity. The benefit of an aerocapture maneuver is a large reduction in propellant mass that may result in smaller, less costly missions and reduced mission cruise times. The methodology used to design rigid aerocapture aeroshells will be presented with an emphasis on a new systems tool under development. Current methods for fast, efficient evaluations of structural systems for exploratory vehicles to planets and moons within our solar system have been under development within NASA, with limited success. Many of the systems tools attempted have applied structural mass estimation techniques based on historical data and curve fitting, which are difficult and cumbersome to apply to new vehicle concepts and missions. The resulting vehicle aeroshell mass may be incorrectly estimated or have high margins included to account for uncertainty. This new tool will reduce the guesswork previously found in conceptual aeroshell mass estimations.

  15. Rapid measurement of auditory filter shape in mice using the auditory brainstem response and notched noise.

    PubMed

    Lina, Ioan A; Lauer, Amanda M

    2013-04-01

    The notched noise method is an effective procedure for measuring frequency resolution and auditory filter shapes in both human and animal models of hearing. Briefly, auditory filter shape and bandwidth estimates are derived from masked thresholds for tones presented in noise containing widening spectral notches. As the spectral notch widens, increasingly less of the noise falls within the auditory filter and the tone becomes more detectible until the notch width exceeds the filter bandwidth. Behavioral procedures have been used for the derivation of notched noise auditory filter shapes in mice; however, the time and effort needed to train and test animals on these tasks renders a constraint on the widespread application of this testing method. As an alternative procedure, we combined relatively non-invasive auditory brainstem response (ABR) measurements and the notched noise method to estimate auditory filters in normal-hearing mice at center frequencies of 8, 11.2, and 16 kHz. A complete set of simultaneous masked thresholds for a particular tone frequency were obtained in about an hour. ABR-derived filter bandwidths broadened with increasing frequency, consistent with previous studies. The ABR notched noise procedure provides a fast alternative to estimating frequency selectivity in mice that is well-suited to high through-put or time-sensitive screening. Copyright © 2013 Elsevier B.V. All rights reserved.

  16. Committed warming inferred from observations

    NASA Astrophysics Data System (ADS)

    Mauritsen, Thorsten; Pincus, Robert

    2017-09-01

    Due to the lifetime of CO2, the thermal inertia of the oceans, and the temporary impacts of short-lived aerosols and reactive greenhouse gases, the Earth’s climate is not equilibrated with anthropogenic forcing. As a result, even if fossil-fuel emissions were to suddenly cease, some level of committed warming is expected due to past emissions as studied previously using climate models. Here, we provide an observation-based quantification of this committed warming using the instrument record of global-mean warming, recently improved estimates of Earth’s energy imbalance, and estimates of radiative forcing from the Fifth Assessment Report of the Intergovernmental Panel on Climate Change. Compared with pre-industrial levels, we find a committed warming of 1.5 K (0.9-3.6, 5th-95th percentile) at equilibrium, and of 1.3 K (0.9-2.3) within this century. However, when assuming that ocean carbon uptake cancels remnant greenhouse gas-induced warming on centennial timescales, committed warming is reduced to 1.1 K (0.7-1.8). In the latter case, there is a 13% risk that committed warming already exceeds the 1.5 K target set in Paris. Regular updates of these observationally constrained committed warming estimates, although simplistic, can provide transparent guidance as uncertainty regarding transient climate sensitivity inevitably narrows and the understanding of the limitations of the framework is advanced.
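
    One simple way to frame such an estimate, under an idealized energy-balance view and with round illustrative numbers rather than the study's central values, is sketched below; this is an assumption-laden caricature of the approach, not the paper's full calculation.

    ```python
    # Idealized energy-balance sketch: with observed warming T, present-day
    # radiative forcing F, and Earth's energy imbalance N, the feedback parameter
    # is lambda = (F - N) / T and equilibrium warming under constant forcing is
    # F / lambda. All numbers are round illustrative values (assumptions).
    T_obs = 0.9   # K, observed warming above pre-industrial (illustrative)
    F     = 2.3   # W m^-2, anthropogenic forcing (illustrative)
    N     = 0.7   # W m^-2, Earth's energy imbalance (illustrative)

    lam = (F - N) / T_obs    # W m^-2 K^-1
    committed = F / lam      # K at equilibrium under constant forcing
    print(f"Feedback parameter: {lam:.2f} W m^-2 K^-1; committed warming: {committed:.2f} K")
    ```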

  17. Data for floods of May 1978 in northeastern Wyoming and southeastern Montana

    USGS Publications Warehouse

    Parrett, Charles; Carlson, D.D.; Craig, G.S.; Hull, J.A.

    1978-01-01

    Severe flooding in northeastern Wyoming and southeastern Montana in May 1978 is described by tables of data, graphs, and photographs. Flood peaks were determined at 162 sites in the flooded area. At most of the sites, peak discharges were determined from existing stage-discharge relationship curves, and at 30 of the sites indirect flow measurements were made. At 19 sites, the May 1978 peak discharge exceeded the previous peak of record and also exceeded the computed 100-year frequency flood. (Woodard-USGS)

  18. Meta-analysis of alcohol price and income elasticities – with corrections for publication bias

    PubMed Central

    2013-01-01

    Background This paper contributes to the evidence-base on prices and alcohol use by presenting meta-analytic summaries of price and income elasticities for alcohol beverages. The analysis improves on previous meta-analyses by correcting for outliers and publication bias. Methods Adjusting for outliers is important to avoid assigning too much weight to studies with very small standard errors or large effect sizes. Trimmed samples are used for this purpose. Correcting for publication bias is important to avoid giving too much weight to studies that reflect selection by investigators or others involved with publication processes. Cumulative meta-analysis is proposed as a method to avoid or reduce publication bias, resulting in more robust estimates. The literature search obtained 182 primary studies for aggregate alcohol consumption, which exceeds the database used in previous reviews and meta-analyses. Results For individual beverages, corrected price elasticities are smaller (less elastic) by 28-29 percent compared with consensus averages frequently used for alcohol beverages. The average price and income elasticities are: beer, -0.30 and 0.50; wine, -0.45 and 1.00; and spirits, -0.55 and 1.00. For total alcohol, the price elasticity is -0.50 and the income elasticity is 0.60. Conclusions These new results imply that attempts to reduce alcohol consumption through price or tax increases will be less effective or more costly than previously claimed. PMID:23883547
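
    A minimal sketch of the two corrections emphasized above, trimming extreme effect sizes and then forming an inverse-variance weighted summary elasticity; the primary-study estimates and standard errors are hypothetical.

        import numpy as np

        # Hypothetical primary-study price elasticities for one beverage and their standard errors.
        est = np.array([-0.10, -0.25, -0.30, -0.35, -0.45, -1.60])
        se  = np.array([ 0.05,  0.08,  0.06,  0.10,  0.07,  0.09])

        # 1) Trim extreme effect sizes (here: drop values outside the 5th-95th percentiles).
        lo, hi = np.percentile(est, [5, 95])
        keep = (est >= lo) & (est <= hi)
        est_t, se_t = est[keep], se[keep]

        # 2) Fixed-effect inverse-variance summary on the trimmed sample.
        w = 1.0 / se_t**2
        summary = np.sum(w * est_t) / np.sum(w)
        summary_se = np.sqrt(1.0 / np.sum(w))
        print(f"trimmed inverse-variance price elasticity: {summary:.2f} (SE {summary_se:.2f})")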

  19. Influence of Contact Angle, Growth Angle and Melt Surface Tension on Detached Solidification of InSb

    NASA Technical Reports Server (NTRS)

    Wang, Yazhen; Regel, Liya L.; Wilcox, William R.

    2000-01-01

    We extended the previous analysis of detached solidification of InSb based on the moving meniscus model. We found that for steady detached solidification to occur in a sealed ampoule in zero gravity, it is necessary for the growth angle to exceed a critical value, the contact angle for the melt on the ampoule wall to exceed a critical value, and the melt-gas surface tension to be below a critical value. These critical values would depend on the material properties and the growth parameters. For the conditions examined here, the sum of the growth angle and the contact angle must exceed approximately 130°, which is significantly less than required if both ends of the ampoule are open.

  20. Methods for estimating magnitude and frequency of floods in Montana based on data through 1983

    USGS Publications Warehouse

    Omang, R.J.; Parrett, Charles; Hull, J.A.

    1986-01-01

    Equations are presented for estimating flood magnitudes for ungaged sites in Montana based on data through 1983. The State was divided into eight regions based on hydrologic conditions, and separate multiple regression equations were developed for each region. These equations relate annual flood magnitudes and frequencies to basin characteristics and are applicable only to natural flow streams. In three of the regions, equations also were developed relating flood magnitudes and frequencies to basin characteristics and channel geometry measurements. The standard errors of estimate for an exceedance probability of 1% ranged from 39% to 87%. Techniques are described for estimating annual flood magnitude and flood frequency information at ungaged sites based on data from gaged sites on the same stream. Included are curves relating flood frequency information to drainage area for eight major streams in the State. Maximum known flood magnitudes in Montana are compared with estimated 1%-chance flood magnitudes and with maximum known floods in the United States. Values of flood magnitudes for selected exceedance probabilities and values of significant basin characteristics and channel geometry measurements for all gaging stations used in the analysis are tabulated. Included are 375 stations in Montana and 28 nearby stations in Canada and adjoining States. (Author's abstract)
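
    A minimal sketch of how a regional power-law regression of the form Q_T = a * A^b is fitted in log space and applied to an ungaged site; the drainage areas and at-site quantiles below are hypothetical, and real regional equations typically use several basin characteristics rather than drainage area alone.

        import numpy as np

        # Hypothetical gaged-site data for one region: drainage area (mi^2) and the
        # at-site 1%-chance (100-year) peak discharge (ft^3/s) from frequency analysis.
        area = np.array([12.0, 35.0, 80.0, 150.0, 420.0, 900.0])
        q100 = np.array([900.0, 1900.0, 3400.0, 5200.0, 10500.0, 17800.0])

        # Fit log10(Q100) = log10(a) + b*log10(A); polyfit returns [slope, intercept].
        b, log_a = np.polyfit(np.log10(area), np.log10(q100), 1)
        a = 10.0 ** log_a
        print(f"regional equation: Q100 = {a:.1f} * A^{b:.2f}")

        # Apply it to an ungaged site with a 250 mi^2 drainage area.
        print(f"estimated Q100 at 250 mi^2: {a * 250.0 ** b:.0f} ft^3/s")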

  1. Estimating age-specific reproductive numbers-A comparison of methods.

    PubMed

    Moser, Carlee B; White, Laura F

    2016-01-01

    Large outbreaks, such as those caused by influenza, put a strain on resources necessary for their control. In particular, children have been shown to play a key role in influenza transmission during recent outbreaks, and targeted interventions, such as school closures, could positively impact the course of emerging epidemics. As an outbreak is unfolding, it is important to be able to estimate reproductive numbers that incorporate this heterogeneity and to use surveillance data that is routinely collected to more effectively target interventions and obtain an accurate understanding of transmission dynamics. There are a growing number of methods that estimate age-group specific reproductive numbers with limited data that build on methods assuming a homogeneously mixing population. In this article, we introduce a new approach that is flexible and improves on many aspects of existing methods. We apply this method to influenza data from two outbreaks, the 2009 H1N1 outbreaks in South Africa and Japan, to estimate age-group specific reproductive numbers and compare it to three other methods that also use existing data from social mixing surveys to quantify contact rates among different age groups. In this exercise, all estimates of the reproductive numbers for children exceeded the critical threshold of one and in most cases exceeded those of adults. We introduce a flexible new method to estimate reproductive numbers that describe heterogeneity in the population.
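
    One common way to obtain age-group-specific reproductive numbers from social-mixing data is through a next-generation matrix whose column sums give the expected number of secondary cases produced by an infective in each group. The sketch below illustrates that construction with a hypothetical contact matrix, susceptibilities, and transmission scaling; it is not necessarily the estimator introduced in this article.

        import numpy as np

        # Hypothetical mean daily contacts C[i, j]: contacts a person in group j has
        # with people in group i (groups: children, adults).
        C = np.array([[12.0, 3.0],
                      [ 5.0, 8.0]])
        susceptibility = np.array([1.0, 0.7])   # relative susceptibility by group (hypothetical)
        q = 0.02                                # per-contact transmission scaling (hypothetical)

        # Next-generation matrix: K[i, j] = expected cases in group i caused by one case in group j.
        K = q * susceptibility[:, None] * C

        R0 = np.max(np.abs(np.linalg.eigvals(K)))  # overall reproductive number (dominant eigenvalue)
        R_group = K.sum(axis=0)                    # secondary cases per infective, by group of the infective
        print(f"R0 = {R0:.2f}, group-specific R = {np.round(R_group, 2)}")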

  2. Probabilistic rainfall warning system with an interactive user interface

    NASA Astrophysics Data System (ADS)

    Koistinen, Jarmo; Hohti, Harri; Kauhanen, Janne; Kilpinen, Juha; Kurki, Vesa; Lauri, Tuomo; Nurmi, Pertti; Rossi, Pekka; Jokelainen, Miikka; Heinonen, Mari; Fred, Tommi; Moisseev, Dmitri; Mäkelä, Antti

    2013-04-01

    A real-time 24/7 automatic alert system is in operational use at the Finnish Meteorological Institute (FMI). It consists of gridded forecasts of the exceedance probabilities of rainfall class thresholds in the continuous lead time range of 1 hour to 5 days. Nowcasting up to six hours applies ensemble member extrapolations of weather radar measurements. With 2.8 GHz processors using 8 threads it takes about 20 seconds to generate 51 radar-based ensemble members in a grid of 760 x 1226 points. Nowcasting also exploits lightning density and satellite-based pseudo rainfall estimates. The latter utilize the convective rain rate (CRR) estimate from Meteosat Second Generation. The extrapolation technique applies atmospheric motion vectors (AMV) originally developed for upper wind estimation with satellite images. Exceedance probabilities of four rainfall accumulation categories are computed for the future 1 h and 6 h periods and they are updated every 15 minutes. For longer forecasts, exceedance probabilities are calculated for future 6 and 24 h periods during the next 4 days. For lead times from approximately 1 hour to 2 days, a Poor Man's Ensemble Prediction System (PEPS) is used, applying, for example, the high-resolution, short-range Numerical Weather Prediction models HIRLAM and AROME. The longest forecasts apply EPS data from the European Centre for Medium Range Weather Forecasts (ECMWF). The blending of the ensemble sets from the various forecast sources is performed applying mixing of accumulations with equal exceedance probabilities. The blending system contains a real-time adaptive estimator of the predictability of radar-based extrapolations. The uncompressed output data are written to file for each member, with a total size of 10 GB. Ensemble data from other sources (satellite, lightning, NWP) are converted to the same geometry as the radar data and blended as explained above. A verification system utilizing telemetering rain gauges has been established. Alert dissemination, for example to citizens and professional end users, uses SMS messages and, in the near future, smartphone maps. The present interactive user interface facilitates free selection of alert sites and two warning thresholds (any rain, heavy rain) at any location in Finland. The pilot service was tested by 1000-3000 users during summers 2010 and 2012. As an example of dedicated end-user services, gridded exceedance scenarios (at probabilities of 5%, 50% and 90%) of hourly rainfall accumulations for the next 3 hours have been used as online input data for the influent model at the Greater Helsinki Wastewater Treatment Plant.
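
    The core probabilistic product reduces to a simple calculation: at each grid point, the exceedance probability of an accumulation threshold is the (optionally weighted) fraction of ensemble members exceeding it. A minimal sketch under that assumption, with random placeholder fields in place of radar or NWP members:

        import numpy as np

        rng = np.random.default_rng(0)

        # Placeholder ensemble of 1-h rainfall accumulations (mm): 51 members on a small grid.
        n_members, ny, nx = 51, 120, 100
        ensemble = rng.gamma(shape=0.6, scale=2.0, size=(n_members, ny, nx))

        thresholds_mm = [0.1, 1.0, 5.0, 10.0]   # accumulation categories (illustrative)

        # Optional per-member weights, e.g. from an adaptive predictability estimate (uniform here).
        weights = np.full(n_members, 1.0 / n_members)

        # Exceedance probability per grid point and threshold.
        prob = np.stack([
            np.tensordot(weights, ensemble > t, axes=1)   # weighted fraction of members above t
            for t in thresholds_mm
        ])
        print(prob.shape)              # (4, 120, 100)
        print(prob[1].max().round(2))  # max probability of exceeding 1 mm in this toy field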

  3. Dietary exposure to aluminium in the popular Chinese fried bread youtiao.

    PubMed

    Li, Ge; Zhao, Xue; Wu, Shimin; Hua, Hongying; Wang, Qiang; Zhang, Zhiheng

    2017-06-01

    Youtiao is a typical, traditional and widely consumed fried food in China. Fermentation of youtiao involves the use of aluminium potassium sulphate (alum). There are health concerns related to the levels of aluminium in food; therefore, we aimed to determine the aluminium concentrations of youtiao from various locations, and to estimate the dietary exposure by different age groups in southern and northern China. The aluminium content of youtiao samples varied considerably (range = 4.46-852.69 mg kg⁻¹). Both the mean and median aluminium contents of youtiao exceeded 100 mg kg⁻¹, the limit set by the China National Food Safety Standard for food additives (GB 2760-2014). However, the median and 97.5th percentile of weekly dietary exposure to aluminium from youtiao, estimated using Monte Carlo simulation, did not exceed the provisional tolerable weekly intake (PTWI) set by the joint FAO/WHO Expert Committee on Food Additives (JECFA) for children, adolescents, adults and seniors. The weekly dietary exposure to aluminium would exceed the PTWI if children, adolescents, adults and seniors consumed 134.47, 260.98, 327.10 or 320.41 g of youtiao per week, respectively.
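
    A minimal sketch of the Monte Carlo exposure calculation described above, computing weekly intake as concentration times weekly consumption divided by body weight and comparing it with a PTWI; the distributions, body weights, and the PTWI value of 2 mg/kg bw/week are assumptions for illustration, not the study's parameters.

        import numpy as np

        rng = np.random.default_rng(1)
        n = 100_000

        # Hypothetical lognormal aluminium concentration in youtiao (mg/kg) and
        # hypothetical adult weekly consumption (g/week) and body weight (kg).
        conc_mg_per_kg = rng.lognormal(mean=np.log(120.0), sigma=1.0, size=n)
        weekly_g       = rng.lognormal(mean=np.log(150.0), sigma=0.6, size=n)
        body_weight_kg = rng.normal(loc=63.0, scale=10.0, size=n).clip(40.0, 120.0)

        # Weekly dietary exposure in mg per kg body weight per week.
        exposure = conc_mg_per_kg * (weekly_g / 1000.0) / body_weight_kg

        PTWI = 2.0  # mg/kg bw/week, assumed provisional tolerable weekly intake
        median, p975 = np.percentile(exposure, [50, 97.5])
        print(f"median {median:.2f}, 97.5th pct {p975:.2f} mg/kg bw/week; "
              f"P(exceed PTWI) = {(exposure > PTWI).mean():.3f}")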

  4. Design flow duration curves for environmental flows estimation in Damodar River Basin, India

    NASA Astrophysics Data System (ADS)

    Verma, Ravindra Kumar; Murthy, Shankar; Verma, Sangeeta; Mishra, Surendra Kumar

    2017-06-01

    In this study, environmental flows (EFs) are estimated for six watersheds of Damodar River Basin (DRB) using flow duration curves (FDCs) derived using two approaches: (a) period-of-record and (b) stochastic approaches for daily, 7-, 30- and 60-day moving averages, and 7-daily mean annual flows observed at Tenughat dam, Konar dam, Maithon dam, Panchet dam, Damodar bridge and Burnpur during 1981-2010 and at Phusro during 1988-2010. For stochastic FDCs, 7-day FDCs for 10-, 20-, 50- and 100-year return periods were derived for extraction of discharge values at every 5% probability of exceedance. FDCs derived using the first approach show high probability of exceedance (5-75%) for the same discharge values. Furthermore, discharge values of the 60-day mean are higher than those derived using daily, 7-, and 30-day mean values. The discharge values at 95% probability of exceedance (Q95) derived from 7Q10 (ranging from 2.04 to 5.56 cumec) and 7Q100 (ranging from 3.4 to 31.48 cumec) FDCs using the second approach are found to be more appropriate as EFs during drought/low-flow and normal precipitation years.
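
    A minimal sketch of the period-of-record approach: build a flow duration curve from daily (or 7-day moving average) flows and read off the discharge at a chosen exceedance probability such as Q95; the synthetic daily flow series is a placeholder.

        import numpy as np

        rng = np.random.default_rng(2)
        daily_flow = rng.lognormal(mean=3.0, sigma=1.0, size=30 * 365)  # placeholder daily flows (cumec)

        def fdc_quantile(flows, exceedance_pct):
            # Weibull plotting position: P(exceedance) = rank / (n + 1), flows sorted descending.
            q = np.sort(flows)[::-1]
            p = np.arange(1, q.size + 1) / (q.size + 1.0)
            return np.interp(exceedance_pct / 100.0, p, q)

        # Q95 from daily flows and from 7-day moving averages.
        q95_daily = fdc_quantile(daily_flow, 95)
        flow_7day = np.convolve(daily_flow, np.ones(7) / 7.0, mode="valid")
        q95_7day = fdc_quantile(flow_7day, 95)
        print(f"Q95 (daily) = {q95_daily:.2f} cumec, Q95 (7-day mean) = {q95_7day:.2f} cumec")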

  5. Effects and costs of requiring child-restraint systems for young children traveling on commercial airplanes.

    PubMed

    Newman, Thomas B; Johnston, Brian D; Grossman, David C

    2003-10-01

    The US Federal Aviation Administration is planning a new regulation requiring children younger than 2 years to ride in approved child-restraint seats on airplanes. Using risk and economic analyses, this study estimated the annual number of child air crash deaths that might be prevented by the proposed regulation, the threshold proportion of families switching from air to car travel above which the risks of the policy would exceed its benefits, and the cost per death prevented. Child-restraint seat use could prevent about 0.4 child air crash deaths per year in the United States. Increased deaths as a result of car travel could exceed deaths prevented by restraint seat use if the proportion of families switching from air to car travel exceeded about 5% to 10%. The estimate for this proportion varied with assumptions about trip distance, driver characteristics, and the effectiveness of child-restraint seats but is unlikely to exceed 15%. Assuming no increase in car travel, for each dollar increase in the cost of implementing the regulation per round trip per family, the cost per death prevented would increase by about $6.4 million. Unless space for young children in restraint seats can be provided at low cost to families, with little or no diversion to automobile travel, a policy requiring restraint seat use could cause a net increase in deaths. Even excluding this possibility, the cost of the proposed policy per death prevented is high.

  6. The flux of radionuclides in flowback fluid from shale gas exploitation.

    PubMed

    Almond, S; Clancy, S A; Davies, R J; Worrall, F

    2014-11-01

    This study considers the flux of radioactivity in flowback fluid from shale gas development in three areas: the Carboniferous Bowland Shale, UK; the Silurian Shale, Poland; and the Carboniferous Barnett Shale, USA. The radioactive flux from these basins was estimated, given estimates of the number of wells developed or to be developed, the flowback volume per well and the concentration of K (potassium) and Ra (radium) in the flowback water. For comparative purposes, four scenarios were considered for the concentration range of radioactivity: that measured in each shale gas basin, in the groundwater of each shale gas basin, in global groundwater and in local surface water. The study found that (i) for the Barnett Shale and the Silurian Shale, Poland, the 1 % exceedance flux in flowback water was between seven and eight times that expected from local groundwater. However, for the Bowland Shale, UK, the 1 % exceedance flux (the flux that would only be expected to be exceeded 1 % of the time, i.e. a reasonable worst case scenario) in flowback water was 500 times that expected from local groundwater. (ii) In no scenario was the 1 % exceedance exposure greater than 1 mSv, the allowable annual exposure in the UK. (iii) The radioactive flux per unit of energy produced was lower for shale gas than for conventional oil and gas production, nuclear power production and electricity generated through burning coal.

  7. Shifting Gravel and the Acoustic Detection Range of Killer Whale Calls

    NASA Astrophysics Data System (ADS)

    Bassett, C.; Thomson, J. M.; Polagye, B. L.; Wood, J.

    2012-12-01

    In environments suitable for tidal energy development, strong currents result in large bed stresses that mobilize sediments, producing sediment-generated noise. Sediment-generated noise caused by mobilization events can exceed noise levels attributed to other ambient noise sources at frequencies related to the diameters of the mobilized grains. At a site in Admiralty Inlet, Puget Sound, Washington, one year of ambient noise data (0.02 - 30 kHz) and current velocity data are combined. Peak currents at the site exceed 3.5 m/s. During slack currents, vessel traffic is the dominant noise source. When currents exceed 0.85 m/s, noise level increases between 2 kHz and 30 kHz are correlated with near-bed currents and bed stress estimates. Acoustic spectrum levels during strong currents exceed quiescent slack tide conditions by 20 dB or more between 2 and 30 kHz. These frequencies are consistent with sound generated by the mobilization of gravel and pebbles. To investigate the implications of sediment-generated noise for post-installation passive acoustic monitoring of a planned tidal energy project, ambient noise conditions during slack currents and strong currents are combined with the characteristics of Southern Resident killer whale (Orcinus orca) vocalizations and sound propagation modeling. The reduction in detection range is estimated for common vocalizations under different ambient noise conditions. The importance of sediment-generated noise for passive acoustic monitoring at tidal energy sites for different marine mammal functional hearing groups and other sediment compositions is considered.

  8. 49 CFR 24.301 - Payment for actual reasonable moving and related expenses.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    .... (4) Storage of the personal property for a period not to exceed 12 months, unless the Agency... (g)(1) through (g)(7) of this section. Self-moves based on the lower of two bids or estimates are not...-moves based on the lower of two bids or estimates are not eligible for reimbursement under this section...

  9. 49 CFR 24.301 - Payment for actual reasonable moving and related expenses.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    .... (4) Storage of the personal property for a period not to exceed 12 months, unless the Agency... (g)(1) through (g)(7) of this section. Self-moves based on the lower of two bids or estimates are not...-moves based on the lower of two bids or estimates are not eligible for reimbursement under this section...

  10. 49 CFR 24.301 - Payment for actual reasonable moving and related expenses.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    .... (4) Storage of the personal property for a period not to exceed 12 months, unless the Agency... (g)(1) through (g)(7) of this section. Self-moves based on the lower of two bids or estimates are not...-moves based on the lower of two bids or estimates are not eligible for reimbursement under this section...

  11. 49 CFR 24.301 - Payment for actual reasonable moving and related expenses.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    .... (4) Storage of the personal property for a period not to exceed 12 months, unless the Agency... (g)(1) through (g)(7) of this section. Self-moves based on the lower of two bids or estimates are not...-moves based on the lower of two bids or estimates are not eligible for reimbursement under this section...

  12. 49 CFR 24.301 - Payment for actual reasonable moving and related expenses.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    .... (4) Storage of the personal property for a period not to exceed 12 months, unless the Agency... (g)(1) through (g)(7) of this section. Self-moves based on the lower of two bids or estimates are not...-moves based on the lower of two bids or estimates are not eligible for reimbursement under this section...

  13. 78 FR 12705 - Atlantic Highly Migratory Species; North and South Atlantic 2013 Commercial Swordfish Quotas

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-02-25

    ... dead discards. We will adjust the quotas in the final rule based on updated data, including dead... quota, the sum of updated landings data (from late reports) and dead discard estimates would need to reach or exceed 475 mt dw. In 2011, dead discards were estimated to equal 101.5 mt dw and late reports...

  14. Estimates of critical acid loads and exceedances for forest soils across the conterminous United States

    Treesearch

    Steven G. McNulty; Erika C. Cohen; Jennifer A. Moore Myers; Timothy J. Sullivan; Harbin Li

    2007-01-01

    Concern regarding the impacts of continued nitrogen and sulfur deposition on ecosystem health has prompted the development of critical acid load assessments for forest soils. A critical acid load is a quantitative estimate of exposure to one or more pollutants at or above which harmful acidification-related effects on sensitive elements of the environment occur. A...

  15. Estimates of Intraclass Correlation Coefficients from Longitudinal Group-Randomized Trials of Adolescent HIV/STI/Pregnancy Prevention Programs

    ERIC Educational Resources Information Center

    Glassman, Jill R.; Potter, Susan C.; Baumler, Elizabeth R.; Coyle, Karin K.

    2015-01-01

    Introduction: Group-randomized trials (GRTs) are one of the most rigorous methods for evaluating the effectiveness of group-based health risk prevention programs. Efficiently designing GRTs with a sample size that is sufficient for meeting the trial's power and precision goals while not wasting resources exceeding them requires estimates of the…

  16. Flood recovery maps for the White River in Bethel, Stockbridge, and Rochester, Vermont, and the Tweed River in Stockbridge and Pittsfield, Vermont, 2014

    USGS Publications Warehouse

    Olson, Scott A.

    2015-01-01

    Eighteen high-water marks from Tropical Storm Irene were available along the studied reaches. The discharges in the Tropical Storm Irene HEC–RAS model were adjusted so that the resulting water-surface elevations matched the high-water mark elevations along the study reaches. This allowed for an estimation of the water-surface profile throughout the study area resulting from Tropical Storm Irene. From a comparison of the estimated water-surface profile of Tropical Storm Irene to the water-surface profiles of the 1- and 0.2-percent AEP floods, it was determined that the high-water elevations resulting from Tropical Storm Irene exceeded the estimated 1-percent AEP flood throughout the White River and Tweed River study reaches and exceeded the estimated 0.2-percent AEP flood in 16.7 of the 28.6 study reach miles. The simulated water-surface profiles were then combined with a geographic information system digital elevation model derived from light detection and ranging (lidar) data having an 18.2-centimeter vertical accuracy at the 95-percent confidence level and 1-meter horizontal resolution to delineate the area flooded for each water-surface profile.

  17. Exposures to quartz, diesel, dust, and welding fumes during heavy and highway construction.

    PubMed

    Woskie, Susan R; Kalil, Andrew; Bello, Dhimiter; Virji, M Abbas

    2002-01-01

    Personal samples for exposure to dust, diesel exhaust, quartz, and welding fume were collected on heavy and highway construction workers. The respirable, thoracic, and inhalable fractions of dust and quartz exposures were estimated from 260 personal impactor samples. Respirable quartz exposures exceeded the National Institute for Occupational Safety and Health (NIOSH) recommended exposure limit (REL) in 7-31% of cases for the trades sampled. More than 50% of the samples in the installation of drop ceilings and wall tiles and concrete finish operations exceeded the NIOSH REL for quartz. Thoracic exposures to quartz and dust exceeded respirable exposures by a factor of 4.5 and 2.8, respectively. Inhalable exposures to quartz and dust exceeded respirable exposures by a factor of 25.6 and 9.3, respectively. These findings are important due to the identification of quartz as a carcinogen by the National Toxicology Program and the International Agency for Research on Cancer. Fourteen percent of the personal samples for EC (n = 261), collected as a marker for diesel exhaust, exceeded the American Conference of Governmental Industrial Hygienists (ACGIH) threshold limit value (TLV) for diesel exhaust. Seventeen of the 22 (77%) samples taken during a partially enclosed welding operation reached or exceeded the ACGIH TLV of 5 mg/m3 for welding fume.

  18. 499 E Illinois, November 2012, Lindsay Light Radiological Survey

    EPA Pesticide Factsheets

    Field gamma measurements did not exceed the respective field instrument threshold value previously stated, and maximum values observed at each lift ranged from approximately 10,700 to 12,600 cpm unshielded.

  19. A parallelizable real-time motion tracking algorithm with applications to ultrasonic strain imaging

    NASA Astrophysics Data System (ADS)

    Jiang, J.; Hall, T. J.

    2007-07-01

    Ultrasound-based mechanical strain imaging systems utilize signals from conventional diagnostic ultrasound systems to image tissue elasticity contrast that provides new diagnostically valuable information. Previous works (Hall et al 2003 Ultrasound Med. Biol. 29 427, Zhu and Hall 2002 Ultrason. Imaging 24 161) demonstrated that uniaxial deformation with minimal elevation motion is preferred for breast strain imaging and that real-time strain image feedback to operators is important to accomplish this goal. The work reported here enhances the real-time speckle tracking algorithm with two significant modifications. One fundamental change is that the proposed algorithm is a column-based algorithm (a column is defined by a line of data parallel to the ultrasound beam direction, i.e. an A-line), as opposed to a row-based algorithm (a row is defined by a line of data perpendicular to the ultrasound beam direction). Then, displacement estimates from its adjacent columns provide good guidance for motion tracking in a significantly reduced search region to reduce computational cost. Consequently, the process of displacement estimation can be naturally split into at least two separate tasks, computed in parallel, propagating outward from the center of the region of interest (ROI). The proposed algorithm has been implemented and optimized in a Windows® system as a stand-alone ANSI C++ program. Results of preliminary tests, using numerical and tissue-mimicking phantoms, and in vivo tissue data, suggest that high contrast strain images can be consistently obtained with frame rates (10 frames s⁻¹) that exceed those of our previous methods.
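
    A much-simplified one-dimensional illustration of the guided-search idea described above: displacements along one A-line are found by normalized cross-correlation, with each column's search region centered on the estimates from the previously tracked neighbouring column; the synthetic data and function names are placeholders, not the authors' implementation.

        import numpy as np

        rng = np.random.default_rng(3)

        def ncc(a, b):
            # Normalized cross-correlation between two equal-length windows.
            a = a - a.mean(); b = b - b.mean()
            d = np.sqrt((a * a).sum() * (b * b).sum())
            return (a * b).sum() / d if d > 0 else 0.0

        def track_column(pre, post, win=32, step=16, search=3, guess=None):
            # Per-window axial displacements for one A-line; a small guided search is used
            # when a neighbouring column's estimates are supplied, a wider search otherwise.
            n_win = (pre.size - win) // step
            disp = np.zeros(n_win, dtype=int)
            radius = search if guess is not None else 8 * search
            for k in range(n_win):
                start = k * step
                center = 0 if guess is None else int(guess[k])  # guidance from adjacent column
                best, best_d = -2.0, 0
                for d in range(center - radius, center + radius + 1):
                    lo, hi = start + d, start + d + win
                    if lo < 0 or hi > post.size:
                        continue
                    c = ncc(pre[start:start + win], post[lo:hi])
                    if c > best:
                        best, best_d = c, d
                disp[k] = best_d
            return disp

        # Synthetic "RF" columns: post-deformation data are pre-deformation data shifted by 4 samples.
        n_cols, n_samp, true_shift = 8, 512, 4
        pre = rng.standard_normal((n_cols, n_samp))
        post = np.roll(pre, true_shift, axis=1) + 0.05 * rng.standard_normal((n_cols, n_samp))

        guess = None
        for col in range(n_cols):          # propagate guidance from column to column
            disp = track_column(pre[col], post[col], guess=guess)
            guess = disp
        print(disp[:5])                    # ~[4 4 4 4 4]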

  20. Novel application of species richness estimators to predict the host range of parasites.

    PubMed

    Watson, David M; Milner, Kirsty V; Leigh, Andrea

    2017-01-01

    Host range is a critical life history trait of parasites, influencing prevalence, virulence and ultimately determining their distributional extent. Current approaches to measure host range are sensitive to sampling effort, the number of known hosts increasing with more records. Here, we develop a novel application of results-based stopping rules to determine how many hosts should be sampled to yield stable estimates of the number of primary hosts within regions, then use species richness estimation to predict host ranges of parasites across their distributional ranges. We selected three mistletoe species (hemiparasitic plants in the Loranthaceae) to evaluate our approach: a strict host specialist (Amyema lucasii, dependent on a single host species), an intermediate species (Amyema quandang, dependent on hosts in one genus) and a generalist (Lysiana exocarpi, dependent on many genera across multiple families), comparing results from geographically-stratified surveys against known host lists derived from herbarium specimens. The results-based stopping rule (stop sampling bioregion once observed host richness exceeds 80% of the host richness predicted using the Abundance-based Coverage Estimator) worked well for most bioregions studied, being satisfied after three to six sampling plots (each representing 25 host trees) but was unreliable in those bioregions with high host richness or high proportions of rare hosts. Although generating stable predictions of host range with minimal variation among six estimators trialled, distribution-wide estimates fell well short of the number of hosts known from herbarium records. This mismatch, coupled with the discovery of nine previously unrecorded mistletoe-host combinations, further demonstrates the limited ecological relevance of simple host-parasite lists. By collecting estimates of host range of constrained completeness, our approach maximises sampling efficiency while generating comparable estimates of the number of primary hosts, with broad applicability to many host-parasite systems. Copyright © 2016 Australian Society for Parasitology. Published by Elsevier Ltd. All rights reserved.
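
    A minimal sketch of the Abundance-based Coverage Estimator (ACE) and the 80% results-based stopping rule described above; the host-use counts are hypothetical.

        import numpy as np

        def ace_richness(abundances, rare_cutoff=10):
            # Abundance-based Coverage Estimator of species (here, host) richness.
            abundances = np.asarray(abundances)
            abundances = abundances[abundances > 0]
            rare = abundances[abundances <= rare_cutoff]
            s_abund = np.sum(abundances > rare_cutoff)
            s_rare = rare.size
            n_rare = rare.sum()
            f1 = np.sum(rare == 1)                       # singletons
            if n_rare == 0 or f1 == n_rare:
                return float(s_abund + s_rare)           # estimator undefined; fall back to observed
            c_ace = 1.0 - f1 / n_rare                    # sample coverage of the rare group
            fk = np.array([np.sum(rare == j) for j in range(1, rare_cutoff + 1)])
            k = np.arange(1, rare_cutoff + 1)
            gamma2 = max((s_rare / c_ace) * np.sum(k * (k - 1) * fk) / (n_rare * (n_rare - 1)) - 1.0, 0.0)
            return s_abund + s_rare / c_ace + (f1 / c_ace) * gamma2

        # Hypothetical host-use counts for one bioregion (individuals recorded per host species).
        counts = [34, 21, 12, 9, 6, 4, 3, 2, 2, 1, 1, 1]
        observed = len(counts)
        estimated = ace_richness(counts)
        stop = observed >= 0.8 * estimated               # results-based stopping rule
        print(f"observed {observed}, ACE {estimated:.1f}, stop sampling: {stop}")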

  1. Lake-level frequency analysis for Devils Lake, North Dakota

    USGS Publications Warehouse

    Wiche, Gregg J.; Vecchia, Aldo V.

    1996-01-01

    Two approaches were used to estimate future lake-level probabilities for Devils Lake. The first approach is based on an annual lake-volume model, and the second approach is based on a statistical water mass-balance model that generates seasonal lake volumes on the basis of seasonal precipitation, evaporation, and inflow. Autoregressive moving average models were used to model the annual mean lake volume and the difference between the annual maximum lake volume and the annual mean lake volume. Residuals from both models were determined to be uncorrelated with zero mean and constant variance. However, a nonlinear relation between the residuals of the two models was included in the final annual lake-volume model. Because of high autocorrelation in the annual lake levels of Devils Lake, the annual lake-volume model was verified using annual lake-level changes. The annual lake-volume model closely reproduced the statistics of the recorded lake-level changes for 1901-93 except for the skewness coefficient. However, the model output is less skewed than the data indicate because of some unrealistically large lake-level declines. The statistical water mass-balance model requires as inputs seasonal precipitation, evaporation, and inflow data for Devils Lake. Analysis of annual precipitation, evaporation, and inflow data for 1950-93 revealed no significant trends or long-range dependence so the input time series were assumed to be stationary and short-range dependent. Normality transformations were used to approximately maintain the marginal probability distributions; and a multivariate, periodic autoregressive model was used to reproduce the correlation structure. Each of the coefficients in the model is significantly different from zero at the 5-percent significance level. Coefficients relating spring inflow from one year to spring and fall inflows from the previous year had the largest effect on the lake-level frequency analysis. Inclusion of parameter uncertainty in the model for generating precipitation, evaporation, and inflow indicates that the upper lake-level exceedance levels from the water mass-balance model are particularly sensitive to parameter uncertainty. The sensitivity in the upper exceedance levels was caused almost entirely by uncertainty in the fitted probability distributions of the quarterly inflows. A method was developed for using long-term streamflow data for the Red River of the North at Grand Forks to reduce the variance in the estimated mean. Comparison of the annual lake-volume model and the water mass-balance model indicates the upper exceedance levels of the water mass-balance model increase much more rapidly than those of the annual lake-volume model. As an example, for simulation year 5, the 99-percent exceedance for the lake level is 1,417.6 feet above sea level for the annual lake-volume model and 1,423.2 feet above sea level for the water mass-balance model. The rapid increase is caused largely by the record precipitation and inflow in the summer and fall of 1993. Because the water mass-balance model produces lake-level traces that closely match the hydrology of Devils Lake, the water mass-balance model is superior to the annual lake-volume model for computing exceedance levels for the 50-year planning horizon.
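
    A heavily simplified sketch of how a fitted annual time-series model is used to obtain lake-level exceedance levels: simulate many traces forward and take empirical upper quantiles by simulation year. The AR(1) coefficient, noise scale, and starting level below are placeholders, not the report's fitted model, which also includes seasonal structure and parameter uncertainty.

        import numpy as np

        rng = np.random.default_rng(4)

        # Placeholder AR(1) model for the annual lake-level anomaly (ft): x_t = phi*x_{t-1} + e_t.
        phi, sigma, start_level, mean_level = 0.85, 1.2, 1425.0, 1420.0
        n_traces, n_years = 10_000, 50

        levels = np.empty((n_traces, n_years))
        x = np.full(n_traces, start_level - mean_level)
        for t in range(n_years):
            x = phi * x + rng.normal(0.0, sigma, size=n_traces)
            levels[:, t] = mean_level + x

        # Upper exceedance level (level exceeded with 1% probability) by simulation year.
        exceed_99 = np.percentile(levels, 99, axis=0)
        print(np.round(exceed_99[[0, 4, 9, 49]], 1))   # simulation years 1, 5, 10, 50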

  2. Lake-level frequency analysis for Devils Lake, North Dakota

    USGS Publications Warehouse

    Wiche, Gregg J.; Vecchia, Aldo V.

    1995-01-01

    Two approaches were used to estimate future lake-level probabilities for Devils Lake. The first approach is based on an annual lake-volume model, and the second approach is based on a statistical water mass-balance model that generates seasonal lake volumes on the basis of seasonal precipitation, evaporation, and inflow. Autoregressive moving average models were used to model the annual mean lake volume and the difference between the annual maximum lake volume and the annual mean lake volume. Residuals from both models were determined to be uncorrelated with zero mean and constant variance. However, a nonlinear relation between the residuals of the two models was included in the final annual lake-volume model. Because of high autocorrelation in the annual lake levels of Devils Lake, the annual lake-volume model was verified using annual lake-level changes. The annual lake-volume model closely reproduced the statistics of the recorded lake-level changes for 1901-93 except for the skewness coefficient. However, the model output is less skewed than the data indicate because of some unrealistically large lake-level declines. The statistical water mass-balance model requires as inputs seasonal precipitation, evaporation, and inflow data for Devils Lake. Analysis of annual precipitation, evaporation, and inflow data for 1950-93 revealed no significant trends or long-range dependence so the input time series were assumed to be stationary and short-range dependent. Normality transformations were used to approximately maintain the marginal probability distributions; and a multivariate, periodic autoregressive model was used to reproduce the correlation structure. Each of the coefficients in the model is significantly different from zero at the 5-percent significance level. Coefficients relating spring inflow from one year to spring and fall inflows from the previous year had the largest effect on the lake-level frequency analysis. Inclusion of parameter uncertainty in the model for generating precipitation, evaporation, and inflow indicates that the upper lake-level exceedance levels from the water mass-balance model are particularly sensitive to parameter uncertainty. The sensitivity in the upper exceedance levels was caused almost entirely by uncertainty in the fitted probability distributions of the quarterly inflows. A method was developed for using long-term streamflow data for the Red River of the North at Grand Forks to reduce the variance in the estimated mean. Comparison of the annual lake-volume model and the water mass-balance model indicates the upper exceedance levels of the water mass-balance model increase much more rapidly than those of the annual lake-volume model. As an example, for simulation year 5, the 99-percent exceedance for the lake level is 1,417.6 feet above sea level for the annual lake-volume model and 1,423.2 feet above sea level for the water mass-balance model. The rapid increase is caused largely by the record precipitation and inflow in the summer and fall of 1993. Because the water mass-balance model produces lake-level traces that closely match the hydrology of Devils Lake, the water mass-balance model is superior to the annual lake-volume model for computing exceedance levels for the 50-year planning horizon.

  3. Detecting spatial structures in throughfall data: The effect of extent, sample size, sampling design, and variogram estimation method

    NASA Astrophysics Data System (ADS)

    Voss, Sebastian; Zimmermann, Beate; Zimmermann, Alexander

    2016-09-01

    In the last decades, an increasing number of studies analyzed spatial patterns in throughfall by means of variograms. The estimation of the variogram from sample data requires an appropriate sampling scheme: most importantly, a large sample and a layout of sampling locations that often has to serve both variogram estimation and geostatistical prediction. While some recommendations on these aspects exist, they focus on Gaussian data and high ratios of the variogram range to the extent of the study area. However, many hydrological data, and throughfall data in particular, do not follow a Gaussian distribution. In this study, we examined the effect of extent, sample size, sampling design, and calculation method on variogram estimation of throughfall data. For our investigation, we first generated non-Gaussian random fields based on throughfall data with large outliers. Subsequently, we sampled the fields with three extents (plots with edge lengths of 25 m, 50 m, and 100 m), four common sampling designs (two grid-based layouts, transect and random sampling) and five sample sizes (50, 100, 150, 200, 400). We then estimated the variogram parameters by method-of-moments (non-robust and robust estimators) and residual maximum likelihood. Our key findings are threefold. First, the choice of the extent has a substantial influence on the estimation of the variogram. A comparatively small ratio of the extent to the correlation length is beneficial for variogram estimation. Second, a combination of a minimum sample size of 150, a design that ensures the sampling of small distances and variogram estimation by residual maximum likelihood offers a good compromise between accuracy and efficiency. Third, studies relying on method-of-moments based variogram estimation may have to employ at least 200 sampling points for reliable variogram estimates. These suggested sample sizes exceed the number recommended by studies dealing with Gaussian data by up to 100 %. Given that most previous throughfall studies relied on method-of-moments variogram estimation and sample sizes ≪200, currently available data are prone to large uncertainties.
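
    A minimal sketch of the non-robust method-of-moments (Matheron) variogram estimator that the study compares against robust and likelihood-based alternatives; the coordinates and throughfall values are random placeholders.

        import numpy as np

        rng = np.random.default_rng(5)

        # Placeholder sample: 150 throughfall measurements at random locations on a 50 m plot.
        xy = rng.uniform(0.0, 50.0, size=(150, 2))
        z = rng.lognormal(mean=0.0, sigma=0.5, size=150)   # skewed, non-Gaussian values

        def empirical_variogram(xy, z, bin_edges):
            # Matheron estimator: gamma(h) = (1 / 2N(h)) * sum over pairs of (z_i - z_j)^2.
            d = np.sqrt(((xy[:, None, :] - xy[None, :, :]) ** 2).sum(-1))
            sq = (z[:, None] - z[None, :]) ** 2
            iu = np.triu_indices(len(z), k=1)              # each pair counted once
            d, sq = d[iu], sq[iu]
            centers, gamma = [], []
            for lo, hi in zip(bin_edges[:-1], bin_edges[1:]):
                m = (d >= lo) & (d < hi)
                if m.any():
                    centers.append(0.5 * (lo + hi))
                    gamma.append(0.5 * sq[m].mean())
            return np.array(centers), np.array(gamma)

        lags, gamma = empirical_variogram(xy, z, bin_edges=np.arange(0.0, 26.0, 2.5))
        print(np.round(lags, 1))
        print(np.round(gamma, 3))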

  4. Toxicological Benchmarks for Screening of Potential Contaminants of Concern for Effects on Aquatic Biota on the Oak Ridge Reservation, Oak Ridge, Tennessee

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Suter, G.W., II

    1993-01-01

    One of the initial stages in ecological risk assessment of hazardous waste sites is the screening of contaminants to determine which, if any, of them are worthy of further consideration; this process is termed contaminant screening. Screening is performed by comparing concentrations in ambient media to benchmark concentrations that are either indicative of a high likelihood of significant effects (upper screening benchmarks) or of a very low likelihood of significant effects (lower screening benchmarks). Exceedance of an upper screening benchmark indicates that the chemical in question is clearly of concern and remedial actions are likely to be needed. Exceedance of a lower screening benchmark indicates that a contaminant is of concern unless other information indicates that the data are unreliable or the comparison is inappropriate. Chemicals with concentrations below the lower benchmark are not of concern if the ambient data are judged to be adequate. This report presents potential screening benchmarks for protection of aquatic life from contaminants in water. Because there is no guidance for screening benchmarks, a set of alternative benchmarks is presented herein. The alternative benchmarks are based on different conceptual approaches to estimating concentrations causing significant effects. For the upper screening benchmark, there are the acute National Ambient Water Quality Criteria (NAWQC) and the Secondary Acute Values (SAV). The SAV concentrations are values estimated with 80% confidence not to exceed the unknown acute NAWQC for those chemicals with no NAWQC. The alternative chronic benchmarks are the chronic NAWQC, the Secondary Chronic Value (SCV), the lowest chronic values for fish and daphnids, the lowest EC20 for fish and daphnids from chronic toxicity tests, the estimated EC20 for a sensitive species, and the concentration estimated to cause a 20% reduction in the recruit abundance of largemouth bass. It is recommended that ambient chemical concentrations be compared to all of these benchmarks. If NAWQC are exceeded, the chemicals must be contaminants of concern because the NAWQC are applicable or relevant and appropriate requirements (ARARs). If NAWQC are not exceeded, but other benchmarks are, contaminants should be selected on the basis of the number of benchmarks exceeded and the conservatism of the particular benchmark values, as discussed in the text. To the extent that toxicity data are available, this report presents the alternative benchmarks for chemicals that have been detected on the Oak Ridge Reservation. It also presents the data used to calculate the benchmarks and the sources of the data. It compares the benchmarks and discusses their relative conservatism and utility. This report supersedes a prior aquatic benchmarks report (Suter and Mabrey 1994). It adds two new types of benchmarks. It also updates the benchmark values where appropriate, adds some new benchmark values, replaces secondary sources with primary sources, and provides more complete documentation of the sources and derivation of all values.

  5. Planktivory in the changing Lake Huron zooplankton community: Bythotrephes consumption exceeds that of Mysis and fish

    USGS Publications Warehouse

    Bunnell, D.B.; Hunter, R. Douglas; Warner, D.M.; Chriscinske, M.A.; Roseman, E.F.

    2011-01-01

    Oligotrophic lakes are generally dominated by calanoid copepods because of their competitive advantage over cladocerans at low prey densities. Planktivory also can alter zooplankton community structure. We sought to understand the role of planktivory in driving recent changes to the zooplankton community of Lake Huron, a large oligotrophic lake on the border of Canada and the United States. We tested the hypothesis that excessive predation by fish (rainbow smelt Osmerus mordax, bloater Coregonus hoyi) and invertebrates (Mysis relicta, Bythotrephes longimanus) had driven observed declines in cladoceran and cyclopoid copepod biomass between 2002 and 2007. We used a field sampling and bioenergetics modelling approach to generate estimates of daily consumption by planktivores at two 91-m depth sites in northern Lake Huron, U.S.A., for each month, May-October 2007. Daily consumption was compared to daily zooplankton production. Bythotrephes was the dominant planktivore and estimated to have eaten 78% of all zooplankton consumed. Bythotrephes consumption exceeded total zooplankton production between July and October. Mysis consumed 19% of all the zooplankton consumed and exceeded zooplankton production in October. Consumption by fish was relatively unimportant, eating only 3% of all zooplankton consumed. Because Bythotrephes was so important, we explored other consumption estimation methods that predict lower Bythotrephes consumption. Under this scenario, Mysis was the most important planktivore, and Bythotrephes consumption exceeded zooplankton production only in August. Our results provide no support for the hypothesis that excessive fish consumption directly contributed to the decline of cladocerans and cyclopoid copepods in Lake Huron. Rather, they highlight the importance of invertebrate planktivores in structuring zooplankton communities, especially for those food webs that have both Bythotrephes and Mysis. Together, these species occupy the epi-, meta- and hypolimnion, leaving limited refuge for zooplankton prey. Published 2011. This article is a US Government work and is in the public domain in the USA.

  6. The perception of probability.

    PubMed

    Gallistel, C R; Krishan, Monika; Liu, Ye; Miller, Reilly; Latham, Peter E

    2014-01-01

    We present a computational model to explain the results from experiments in which subjects estimate the hidden probability parameter of a stepwise nonstationary Bernoulli process outcome by outcome. The model captures the following results qualitatively and quantitatively, with only 2 free parameters: (a) Subjects do not update their estimate after each outcome; they step from one estimate to another at irregular intervals. (b) The joint distribution of step widths and heights cannot be explained on the assumption that a threshold amount of change must be exceeded in order for them to indicate a change in their perception. (c) The mapping of observed probability to the median perceived probability is the identity function over the full range of probabilities. (d) Precision (how close estimates are to the best possible estimate) is good and constant over the full range. (e) Subjects quickly detect substantial changes in the hidden probability parameter. (f) The perceived probability sometimes changes dramatically from one observation to the next. (g) Subjects sometimes have second thoughts about a previous change perception, after observing further outcomes. (h) The frequency with which they perceive changes moves in the direction of the true frequency over sessions. (Explaining this finding requires 2 additional parametric assumptions.) The model treats the perception of the current probability as a by-product of the construction of a compact encoding of the experienced sequence in terms of its change points. It illustrates the why and the how of intermittent Bayesian belief updating and retrospective revision in simple perception. It suggests a reinterpretation of findings in the recent literature on the neurobiology of decision making. (PsycINFO Database Record (c) 2014 APA, all rights reserved).

  7. Numerical Estimation of the Spent Fuel Ratio

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lindgren, Eric R.; Durbin, Samuel; Wilke, Jason

    Sabotage of spent nuclear fuel casks remains a concern nearly forty years after attacks against shipment casks were first analyzed and has a renewed relevance in the post-9/11 environment. A limited number of full-scale tests and supporting efforts using surrogate materials, typically depleted uranium dioxide (DUO2), have been conducted in the interim to more definitively determine the source term from these postulated events. However, the validity of these large-scale results remains in question due to the lack of a defensible spent fuel ratio (SFR), defined as the amount of respirable aerosol generated by an attack on a mass of spent fuel compared to that of an otherwise identical surrogate. Previous attempts to define the SFR in the 1980s have resulted in estimates ranging from 0.42 to 12 and include suboptimal experimental techniques and data comparisons. Because of the large uncertainty surrounding the SFR, estimates of releases from security-related events may be unnecessarily conservative. Credible arguments exist that the SFR does not exceed a value of unity. A defensible determination of the SFR in this lower range would greatly reduce the calculated risk associated with the transport and storage of spent nuclear fuel in dry cask systems. In the present work, the shock physics codes CTH and ALE3D were used to simulate spent nuclear fuel (SNF) and DUO2 targets impacted by a high-velocity jet at an ambient temperature condition. These preliminary results are used to illustrate an approach to estimate the respirable release fraction for each type of material and ultimately, an estimate of the SFR.

  8. Water Quality on the Prairie Band Potawatomi Reservation, Northeastern Kansas, June 1996 through August 2006

    USGS Publications Warehouse

    Schmidt, Heather C. Ross; Mehl, Heidi E.; Pope, Larry M.

    2007-01-01

    This report describes surface- and ground-water-quality data collected on the Prairie Band Potawatomi Reservation in northeastern Kansas from November 2003 through August 2006 (hereinafter referred to as the 'current study period'). Data from this study period are compared to results from June 1996 through August 2003, which are published in previous reports as part of a multiyear cooperative study with the Prairie Band Potawatomi Nation. Surface and ground water are valuable resources to the Prairie Band Potawatomi Nation as tribal members currently (2007) use area streams to fulfill subsistence hunting and fishing needs and because ground water potentially could support expanding commercial enterprise and development. Surface-water-quality samples collected from November 2003 through August 2006 were analyzed for physical properties, dissolved solids, major ions, nutrients, trace elements, pesticides, fecal-indicator bacteria, suspended-sediment concentration, and total suspended solids. Ground-water samples were analyzed for physical properties, dissolved solids, major ions, nutrients, trace elements, pesticides, and fecal-indicator bacteria. Chemical oxygen demand and volatile organic compounds were analyzed in all three samples from one monitoring well located near a construction and demolition landfill on the reservation, and in one sample from another well in the Soldier Creek drainage basin. Previous reports published as a part of this ongoing study identified total phosphorus, triazine herbicides, and fecal coliform bacteria as exceeding their respective water-quality criteria in surface water on the reservation. Previous ground-water assessments identified occasional sample concentrations of dissolved solids, sodium, sulfate, boron, iron, and manganese as exceeding their respective water-quality criteria. Fifty-six percent of the 55 surface-water samples collected during the current study period and analyzed for total phosphorus exceeded the goal of 0.1 mg/L (milligram per liter) established by the U.S. Environmental Protection Agency (USEPA) to limit cultural eutrophication in flowing water. Concentrations of dissolved solids frequently exceeded the USEPA Secondary Drinking-Water Regulation (SDWR) of 500 mg/L in samples from two sites. Concentrations of sodium exceeded the Drinking-Water Advisory of 20 mg/L set by USEPA in almost 50 percent of the surface-water samples. All four samples analyzed for atrazine concentrations showed some concentration of the pesticide, but none exceeded the Maximum Contaminant Level (MCL) established for drinking water by USEPA of 3.0 µg/L (micrograms per liter) as an annual average. A triazine herbicide screen was used on 55 surface-water samples, and triazine compounds were frequently detected. Triazine herbicides and their degradates are listed on the USEPA Contaminant Candidate List. In 41 percent of surface-water samples, densities of Escherichia coli (E. coli) bacteria exceeded the primary contact, single-sample maximum in public-access bodies of water (1,198 colonies per 100 milliliters of water for samples collected between April 1 and October 31) set by the Kansas Department of Health and Environment (KDHE). Nitrite plus nitrate concentrations in all three water samples from 1 of 10 monitoring wells exceeded the MCL of 10 mg/L established by USEPA for drinking water. Arsenic concentrations in all three samples from one well exceeded the proposed MCL of 10 µg/L established by USEPA for drinking water.
Boron also exceeded the drinking-water advisory in three samples from one well, and iron concentrations were higher than the SDWR in water from four wells. There was some detection of pesticides in ground-water samples from three of the wells, and one detection of the volatile organic compound diethyl ether in one well. Concentrations of dissolved solids exceeded the SDWR in 20 percent of ground-water samples collected during the current study period, and concentration

  9. Normal range of human dietary sodium intake: a perspective based on 24-hour urinary sodium excretion worldwide.

    PubMed

    McCarron, David A; Kazaks, Alexandra G; Geerling, Joel C; Stern, Judith S; Graudal, Niels A

    2013-10-01

    The recommendation to restrict dietary sodium for management of hypertensive cardiovascular disease assumes that sodium intake exceeds physiologic need, that it can be significantly reduced, and that the reduction can be maintained over time. In contrast, neuroscientists have identified neural circuits in vertebrate animals that regulate sodium appetite within a narrow physiologic range. This study further validates our previous report that sodium intake, consistent with the neuroscience, tracks within a narrow range, consistent over time and across cultures. Peer-reviewed publications reporting 24-hour urinary sodium excretion (UNaV) in a defined population that were not included in our 2009 publication were identified from the medical literature. These datasets were combined with those in our previous report of worldwide dietary sodium consumption. The new data included 129 surveys, representing 50,060 participants. The mean value and range of 24-hour UNaV in each of these datasets were within 1 SD of our previous estimate. The combined mean and normal range of sodium intake of the 129 datasets were nearly identical to that we previously reported (mean = 158.3±22.5 vs. 162.4±22.4 mmol/d). Merging the previous and new datasets (n = 190) yielded sodium consumption of 159.4±22.3 mmol/d (range = 114-210 mmol/d; 2,622-4,830 mg/d). Human sodium intake, as defined by 24-hour UNaV, is characterized by a narrow range that is remarkably reproducible over at least 5 decades and across 45 countries. As documented here, this range is determined by physiologic needs rather than environmental factors. Future guidelines should be based on this biologically determined range.

  10. Traffic safety facts 1998 : speeding

    DOT National Transportation Integrated Search

    1998-01-01

    Speeding (exceeding the posted speed limit or driving too fast for conditions) is one of the most prevalent factors contributing to traf... The economic cost of speeding-related crashes is estimated to be $27.7 billion each year.

  11. Quick test for durability factor estimation.

    DOT National Transportation Integrated Search

    2010-03-01

    The Missouri Department of Transportation (MoDOT) is considering the use of the AASHTO T 161 Durability Factor (DF) as an end-result performance specification criterion for evaluation of paving concrete. However, the test method duration can exceed ...

  12. Soil coverage evolution and wind erosion risk on summer crops under contrasting tillage systems

    NASA Astrophysics Data System (ADS)

    Mendez, Mariano J.; Buschiazzo, Daniel E.

    2015-03-01

    The effectiveness of wind erosion control by soil surface conditions and crop and weed canopy has been well studied in wind tunnel experiments. The aim of this study is to assess the combined effects of these variables under field conditions. Soil surface conditions, crop and weed coverage, plant residue, and non-erodible aggregates (NEA) were measured in the field between the fallow start and the growth period of sunflower (Helianthus annuus) and corn (Zea mays). Both crops were planted on a sandy-loam Entic Haplustoll with conventional (CT), vertical (VT) and no-till (NT) tillage systems. Wind erosion was estimated by means of the spreadsheet version of the Revised Wind Erosion Equation, and soil coverage was measured every 15 days. Results indicated that wind erosion was mostly negligible in NT, exceeding the tolerable levels (estimated between 300 and 1400 kg ha⁻¹ year⁻¹ by Verheijen et al. (2009)) only in a year with high climatic erosivity. Wind erosion exceeded the tolerable levels in most cases in CT and VT, reaching values of 17,400 kg ha⁻¹. Wind erosion was 2-10 times higher after planting of both crops than during fallows. During the fallows, the soil was mostly well covered with plant residues and NEA in CT and VT and with residues and weeds in NT. High wind erosion amounts occurring 30 days after planting in all tillage systems were produced by the destruction of coarse aggregates and the burying of plant residues during planting operations and rains. Differences in soil protection after planting were given by residues of previous crops and growing weeds. The growth of weeds 2-4 weeks after crop planting contributed to reducing wind erosion without impacting crop yields. Accurate weed management in semiarid lands can contribute significantly to controlling wind erosion. More field studies are needed in order to develop management strategies to reduce wind erosion.

  13. Cost of specific emergency general surgery diseases and factors associated with high-cost patients.

    PubMed

    Ogola, Gerald O; Shafi, Shahid

    2016-02-01

    We have previously shown that the overall cost of hospitalization for emergency general surgery (EGS) diseases is more than $28 billion annually and rising. The purposes of this study were to estimate the costs associated with specific EGS diseases and to identify factors associated with high-cost hospitalizations. The American Association for the Surgery of Trauma definition was used to identify hospitalizations of adult EGS patients in the 2010 National Inpatient Sample data. Cost of each hospitalization was obtained using the cost-to-charge ratio in the National Inpatient Sample. Regression analysis was used to estimate the cost for each EGS disease adjusted for patient and hospital characteristics. Hospitalizations with cost exceeding the 75th percentile for each EGS disease were compared with lower-cost hospitalizations to identify factors associated with high cost. Thirty-one EGS diseases resulted in 2,602,074 hospitalizations nationwide in 2010 at an average adjusted cost of $10,110 (95% confidence interval, $10,086-$10,134) per hospitalization. Of these, only nine diseases constituted 80% of the total volume and 74% of the total cost. Empyema chest, colorectal cancer, and small intestine cancer were the most expensive EGS diseases with adjusted mean cost per hospitalization exceeding $20,000, while breast infection, abdominal pain, and soft tissue infection were the least expensive, with mean adjusted costs of less than $7,000 per hospitalization. The most important factors associated with high-cost hospitalizations were the number and type of procedures performed (76.2% of variance), but a location in the Western United States (11.3%), Medicare and Medicaid payors (2.6%), and hospital ownership by public or not-for-profit entities (5.6%) were also associated with high-cost hospitalizations. A small number of diseases constitute a vast majority of EGS hospitalizations and their cost. Attempts at reducing the cost of EGS hospitalization will require controlling the cost of procedures. Economic analysis, level IV.

  14. Evaluation of a new model of aeolian transport in the presence of vegetation

    USGS Publications Warehouse

    Li, Junran; Okin, Gregory S.; Herrick, Jeffrey E.; Belnap, Jayne; Miller, Mark E.; Vest, Kimberly; Draut, Amy E.

    2013-01-01

    Aeolian transport is an important characteristic of many arid and semiarid regions worldwide that affects dust emission and ecosystem processes. The purpose of this paper is to evaluate a recent model of aeolian transport in the presence of vegetation. This approach differs from previous models by accounting for how vegetation affects the distribution of shear velocity on the surface rather than merely calculating the average effect of vegetation on surface shear velocity or simply using empirical relationships. Vegetation, soil, and meteorological data at 65 field sites with measurements of horizontal aeolian flux were collected from the Western United States. Measured fluxes were tested against modeled values to evaluate model performance, to obtain a set of optimum model parameters, and to estimate the uncertainty in these parameters. The same field data were used to model horizontal aeolian flux using three other schemes. Our results show that the model can predict horizontal aeolian flux with an approximate relative error of 2.1 and that further empirical corrections can reduce the approximate relative error to 1.0. The level of error is within what would be expected given uncertainties in threshold shear velocity and wind speed at our sites. The model outperforms the alternative schemes both in terms of approximate relative error and the number of sites at which threshold shear velocity was exceeded. These results lend support to an understanding of the physics of aeolian transport in which (1) vegetation's impact on transport is dependent upon the distribution of vegetation rather than merely its average lateral cover and (2) vegetation impacts surface shear stress locally by depressing it in the immediate lee of plants rather than by changing the bulk surface's threshold shear velocity. Our results also suggest that threshold shear velocity is exceeded more than might be estimated by single measurements of threshold shear stress and roughness length commonly associated with vegetated surfaces, highlighting the variation of threshold shear velocity with space and time in real landscapes.

  15. New estimates of asymmetric decomposition of racemic mixtures by natural beta-radiation sources

    NASA Technical Reports Server (NTRS)

    Hegstrom, R. A.; Rich, A.; Van House, J.

    1985-01-01

    Some recent calculations that appeared to invalidate the Vester-Ulbricht hypothesis, which suggests that the chirality of biological molecules originates from the beta-radiolysis of prebiotic racemic mixtures, are reexamined. These calculations apparently showed that the radiolysis-induced chiral polarization can never exceed the chiral polarization produced by statistical fluctuations. It is here shown that several overly restrictive conditions were imposed on these calculations which, when relaxed, allow the radiolysis-induced polarization to exceed that produced by statistical fluctuations, in accordance with the Vester-Ulbricht hypothesis.

  16. The allometric relationship between resting metabolic rate and body mass in wild waterfowl (Anatidae) and an application to estimation of winter habitat requirements

    USGS Publications Warehouse

    Miller, M.R.; Eadie, J. McA

    2006-01-01

    We examined the allometric relationship between resting metabolic rate (RMR; kJ day-1) and body mass (kg) in wild waterfowl (Anatidae) by regressing RMR on body mass using species means from data obtained from published literature (18 sources, 54 measurements, 24 species; all data from captive birds). There was no significant difference among measurements from the rest (night; n = 37), active (day; n = 14), and unspecified (n = 3) phases of the daily cycle (P > 0.10), and we pooled these measurements for analysis. The resulting power function (a*Mass^b) for all waterfowl (swans, geese, and ducks) had an exponent (b; slope of the regression) of 0.74, indistinguishable from that determined with commonly used general equations for nonpasserine birds (0.72-0.73). In contrast, the mass proportionality coefficient (a; y-intercept at mass = 1 kg) of 422 exceeded that obtained from the nonpasserine equations by 29%-37%. Analyses using independent contrasts correcting for phylogeny did not substantially alter the equation. Our results suggest the waterfowl equation provides a more appropriate estimate of RMR for bioenergetics analyses of waterfowl than do the general nonpasserine equations. When adjusted with a multiple to account for energy costs of free living, the waterfowl equation better estimates daily energy expenditure. Using this equation, we estimated that the extent of wetland habitat required to support wintering waterfowl populations could be 37%-50% higher than previously predicted using general nonpasserine equations. © The Cooper Ornithological Society 2006.
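
    The fit described above is an ordinary least-squares regression of log RMR on log body mass; below is a minimal Python sketch on hypothetical species means, recovering the coefficient a and exponent b of RMR = a*Mass^b.

      import numpy as np

      # Hypothetical species means: body mass (kg) and RMR (kJ/day)
      mass = np.array([0.35, 0.8, 1.1, 2.5, 6.0, 9.5])
      rmr  = np.array([195.0, 355.0, 450.0, 830.0, 1590.0, 2240.0])

      # Fit log10(RMR) = log10(a) + b*log10(mass)
      b, log_a = np.polyfit(np.log10(mass), np.log10(rmr), 1)
      a = 10 ** log_a
      print(f"RMR ≈ {a:.0f} * mass^{b:.2f}  (paper reports a = 422, b = 0.74)")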

  17. Quality of water on the Prairie Band Potawatomi Reservation, northeastern Kansas, May 2001 through August 2003

    USGS Publications Warehouse

    Ross Schmidt, Heather C.

    2004-01-01

    Water-quality samples were collected from 20 surface-water sites and 11 ground-water sites on the Prairie Band Potawatomi Reservation in northeastern Kansas in an effort to describe existing water-quality conditions on the reservation and to compare water-quality conditions to results from previous reports published as part of a multiyear cooperative study with the Prairie Band Potawatomi Nation. Water is a valuable resource to the Prairie Band Potawatomi Nation as tribal members use the streams draining the reservation, Soldier, Little Soldier, and South Cedar Creeks, to fulfill subsistence hunting and fishing needs and as the tribe develops an economic base on the reservation. Samples were collected once at 20 surface-water monitoring sites during June 2001, and quarterly samples were collected at 5 of the 20 monitoring sites from May 2001 through August 2003. Ground-water-quality samples were collected once from seven wells and twice from four wells during April through May 2003 and in August 2003. Surface-water-quality samples collected from May through August 2001 were analyzed for physical properties, nutrients, pesticides, fecal indicator bacteria, and total suspended solids. In November 2001, an additional analysis for dissolved solids, major ions, trace elements, and suspended-sediment concentration was added for surface-water samples. Ground-water samples were analyzed for physical properties, dissolved solids, major ions, nutrients, trace elements, pesticides, and fecal indicator bacteria. Chemical oxygen demand and volatile organic compounds were analyzed in a sample from one monitoring well located near a construction and demolition landfill on the reservation. Previous reports published as a part of this ongoing study identified total phosphorus, triazine herbicides, and fecal coliform bacteria as exceeding their respective water-quality criteria in surface water on the reservation. Previous ground-water assessments identified occasional sample concentrations of dissolved solids, sodium, sulfate, boron, iron, and manganese as exceeding their respective water-quality criteria. Forty percent of the 65 surface-water samples analyzed for total phosphorus exceeded the aquatic-life goal of 0.1 mg/L (milligrams per liter) established by the U.S. Environmental Protection Agency (USEPA). Concentrations of dissolved solids and sodium occasionally exceeded USEPA Secondary Drinking-Water Regulations and Drinking-Water Advisory Levels, respectively. One of the 20 samples analyzed for atrazine concentrations exceeded the Maximum Contaminant Level (MCL) of 3.0 µg/L (micrograms per liter) as an annual average established for drinking water by USEPA. A triazine herbicide screen was used on 63 surface-water samples, and triazine compounds were frequently detected. Triazine herbicides and their degradates are listed on the USEPA Contaminant Candidate List. Nitrite plus nitrate concentrations in two ground-water samples from one monitoring well exceeded the MCL of 10 mg/L established by USEPA for drinking water. Arsenic concentrations in two samples from one monitoring well also exceeded the proposed MCL of 10 µg/L established by the USEPA for drinking water. Concentrations of dissolved solids and sulfate in some ground-water samples exceeded their respective Secondary Drinking-Water Regulations, and concentrations exceeded the taste threshold of the USEPA's Drinking-Water Advisory Level for sodium. Consequently, in the event that ground water on the reservation is to be used as a drinking-water source, additional treatment may be necessary to remove excess dissolved solids, sulfate, and sodium.

  18. Triallelic SNPs for estimating cattle introgression, inbreeding, and determining parentage in North American yak

    USDA-ARS?s Scientific Manuscript database

    The current population of yaks in the U.S. is estimated to exceed 5,000 and was derived from about 150 animals imported in the 20th Century. During the expansion of the U.S. herd, some yaks were allowed to hybridize with cattle, although it is not clear to what extent. Our aim was to use next genera...

  19. Quantitative risk assessment of durable glass fibers.

    PubMed

    Fayerweather, William E; Eastes, Walter; Cereghini, Francesco; Hadley, John G

    2002-06-01

    This article presents a quantitative risk assessment for the theoretical lifetime cancer risk from the manufacture and use of relatively durable synthetic glass fibers. More specifically, we estimate levels of exposure to respirable fibers or fiberlike structures of E-glass and C-glass that, assuming a working lifetime exposure, pose a theoretical lifetime cancer risk of not more than 1 per 100,000. For comparability with other risk assessments we define these levels as nonsignificant exposures. Nonsignificant exposure levels are estimated from (a) the Institute of Occupational Medicine (IOM) chronic rat inhalation bioassay of durable E-glass microfibers, and (b) the Research Consulting Company (RCC) chronic inhalation bioassay of durable refractory ceramic fibers (RCF). Best estimates of nonsignificant E-glass exposure exceed 0.05-0.13 fibers (or shards) per cubic centimeter (cm3) when calculated from the multistage nonthreshold model. Best estimates of nonsignificant C-glass exposure exceed 0.27-0.6 fibers/cm3. Estimates of nonsignificant exposure increase markedly for E- and C-glass when non-linear models are applied and rapidly exceed 1 fiber/cm3. Controlling durable fiber exposures to an 8-h time-weighted average of 0.05 fibers/cm3 will assure that the additional theoretical lifetime risk from working lifetime exposures to these durable fibers or shards is kept below the 1 per 100,000 level. Measured airborne exposures to respirable, durable glass fibers (or shards) in glass fiber manufacturing and fabrication operations were compared with the nonsignificant exposure estimates described. Sampling results for B-sized respirable E-glass fibers at facilities that manufacture or fabricate small-diameter continuous-filament products, from those that manufacture respirable E-glass shards from PERG (process to efficiently recycle glass), from milled fiber operations, and from respirable C-glass shards from Flakeglass operations indicate very low median exposures of 0, 0.0002, 0.007, 0.008, and 0.0025 fibers (or shards)/cm3, respectively using the NIOSH 7400 Method ("B" rules). Durable glass fiber exposures for various applications must be well characterized to ensure that they are kept below nonsignificant levels (e.g., 0.05 fibers/cm3) as defined in this risk assessment.

  20. Navy Pier roundabout, Lindsay Light Radiological Survey

    EPA Pesticide Factsheets

    The field gamma measurements within the excavation during the excavation process did not exceed the instrument threshold previously stated, and ranged from a minimum of 2,300 cpm to a maximum of 4,300 cpm unshielded.

  1. 305 E. Erie Street, Lindsay Light Radiological Survey

    EPA Pesticide Factsheets

    The field gamma measurements within the excavation during the excavation process did not exceed the instrument threshold previously stated, and ranged from a minimum of 1,900 cpm to a maximum of 3,900 cpm unshielded.

  2. 400-420 N St Clair St, May 2014, Lindsay Light Radiological Survey

    EPA Pesticide Factsheets

    The field gamma measurements within the excavation and the spoil materials generated during the excavation process did not exceed the respective instrument threshold previously stated, with a maximum of 8,000 cpm unshielded.

  3. 301-45 E Illinois St, May 2014, Lindsay Light Radiological Survey

    EPA Pesticide Factsheets

    The field gamma measurements within the excavation and the spoil materials generated during the excavation process did not exceed the respective instrument threshold previously stated, with a maximum of 7,500 cpm unshielded.

  4. 215 E. Grand Ave, Lindsay Light Radiological Survey

    EPA Pesticide Factsheets

    The field gamma measurements within the excavation during the excavation process did not exceed the instrument threshold previously stated, and ranged from a minimum of 1,300 cpm to a maximum of 3,500 cpm shielded.

  5. MG0414+0534: A Dusty Gravitational Lens

    NASA Technical Reports Server (NTRS)

    Lawrence, C.; Elston, R.; Jannuzi, B.; Turner, E.

    1996-01-01

    The gravitational lens system MG0414+0534 has an unexceptional four-image lensing geometry; however, the optical counterparts of the radio images are exceedingly red, with spectra unlike that of any previously observed active nucleus.

  6. 455 N St Clair St, August 2012, Lindsay Light Radiological Survey

    EPA Pesticide Factsheets

    The field gamma measurements within the spoil materials generated during the drilling process did not exceed the respective threshold values previously stated, with the maximum unshielded gamma reading observed being 5,500 cpm.

  7. Detecting spatial structures in throughfall data: the effect of extent, sample size, sampling design, and variogram estimation method

    NASA Astrophysics Data System (ADS)

    Voss, Sebastian; Zimmermann, Beate; Zimmermann, Alexander

    2016-04-01

    In the last three decades, an increasing number of studies analyzed spatial patterns in throughfall to investigate the consequences of rainfall redistribution for biogeochemical and hydrological processes in forests. In the majority of cases, variograms were used to characterize the spatial properties of the throughfall data. The estimation of the variogram from sample data requires an appropriate sampling scheme: most importantly, a large sample and an appropriate layout of sampling locations that often has to serve both variogram estimation and geostatistical prediction. While some recommendations on these aspects exist, they focus on Gaussian data and high ratios of the variogram range to the extent of the study area. However, many hydrological data, and throughfall data in particular, do not follow a Gaussian distribution. In this study, we examined the effect of extent, sample size, sampling design, and calculation methods on variogram estimation of throughfall data. For our investigation, we first generated non-Gaussian random fields based on throughfall data with heavy outliers. Subsequently, we sampled the fields with three extents (plots with edge lengths of 25 m, 50 m, and 100 m), four common sampling designs (two grid-based layouts, transect and random sampling), and five sample sizes (50, 100, 150, 200, 400). We then estimated the variogram parameters by method-of-moments and residual maximum likelihood. Our key findings are threefold. First, the choice of the extent has a substantial influence on the estimation of the variogram. A comparatively small ratio of the extent to the correlation length is beneficial for variogram estimation. Second, a combination of a minimum sample size of 150, a design that ensures the sampling of small distances and variogram estimation by residual maximum likelihood offers a good compromise between accuracy and efficiency. Third, studies relying on method-of-moments based variogram estimation may have to employ at least 200 sampling points for reliable variogram estimates. These suggested sample sizes exceed the numbers recommended by studies dealing with Gaussian data by up to 100 %. Given that most previous throughfall studies relied on method-of-moments variogram estimation and sample sizes << 200, our current knowledge about throughfall spatial variability stands on shaky ground.
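
    For reference, the method-of-moments (Matheron) variogram estimator discussed above can be written in a few lines; the Python sketch below bins half the squared pairwise differences by separation distance for a synthetic, skewed data set standing in for throughfall observations.

      import numpy as np

      def empirical_variogram(coords, values, bin_edges):
          """Method-of-moments estimator: gamma(h) = 1/(2*N(h)) * sum (z_i - z_j)^2."""
          d = np.sqrt(((coords[:, None, :] - coords[None, :, :]) ** 2).sum(-1))
          sqdiff = (values[:, None] - values[None, :]) ** 2
          iu = np.triu_indices(len(values), k=1)          # count each pair once
          gamma = []
          for lo, hi in zip(bin_edges[:-1], bin_edges[1:]):
              mask = (d[iu] >= lo) & (d[iu] < hi)
              gamma.append(0.5 * sqdiff[iu][mask].mean() if mask.any() else np.nan)
          return np.array(gamma)

      rng = np.random.default_rng(1)
      coords = rng.uniform(0, 50, size=(150, 2))          # 150 points on a 50 m plot
      values = rng.gamma(shape=2.0, scale=5.0, size=150)  # skewed, throughfall-like data
      print(empirical_variogram(coords, values, np.arange(0, 30, 5)))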

  8. Improved first-order uncertainty method for water-quality modeling

    USGS Publications Warehouse

    Melching, C.S.; Anmangandla, S.

    1992-01-01

    Uncertainties are unavoidable in water-quality modeling and subsequent management decisions. Monte Carlo simulation and first-order uncertainty analysis (involving linearization at central values of the uncertain variables) have been frequently used to estimate probability distributions for water-quality model output due to their simplicity. Each method has its drawbacks: for Monte Carlo simulation, mainly computational time; for first-order analysis, mainly accuracy and representativeness, especially for nonlinear systems and extreme conditions. An improved (advanced) first-order method is presented, where the linearization point varies to match the output level whose exceedance probability is sought. The advanced first-order method is tested on the Streeter-Phelps equation to estimate the probability distribution of critical dissolved-oxygen deficit and critical dissolved oxygen using two hypothetical examples from the literature. The advanced first-order method provides a close approximation of the exceedance probability for the Streeter-Phelps model output estimated by Monte Carlo simulation, using two orders of magnitude less computer time, regardless of the probability distributions assumed for the uncertain model parameters.
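
    A minimal Python sketch of classical first-order uncertainty propagation around the Streeter-Phelps critical dissolved-oxygen deficit (linearization at central values; the paper's advanced variant moves the linearization point and is not reproduced here). All parameter means and standard deviations are hypothetical.

      import numpy as np

      def critical_deficit(kd, ka, L0, D0):
          """Streeter-Phelps DO deficit maximized over time (mg/L); kd, ka in 1/day."""
          t = np.linspace(0.01, 20.0, 2000)
          D = kd * L0 / (ka - kd) * (np.exp(-kd * t) - np.exp(-ka * t)) + D0 * np.exp(-ka * t)
          return D.max()

      # Hypothetical means and standard deviations of the uncertain inputs
      mean = dict(kd=0.35, ka=0.70, L0=15.0, D0=1.0)
      sd   = dict(kd=0.05, ka=0.10, L0=2.0,  D0=0.3)

      # Classical first-order propagation: Var(Dc) ~ sum (dDc/dx_i)^2 * Var(x_i)
      base, var = critical_deficit(**mean), 0.0
      for p, s in sd.items():
          bumped = dict(mean, **{p: mean[p] + 1e-4})
          grad = (critical_deficit(**bumped) - base) / 1e-4
          var += (grad * s) ** 2
      print(f"Dc ≈ {base:.2f} ± {np.sqrt(var):.2f} mg/L (first-order)")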

  9. User's Manual for Program PeakFQ, Annual Flood-Frequency Analysis Using Bulletin 17B Guidelines

    USGS Publications Warehouse

    Flynn, Kathleen M.; Kirby, William H.; Hummel, Paul R.

    2006-01-01

    Estimates of flood flows having given recurrence intervals or probabilities of exceedance are needed for design of hydraulic structures and floodplain management. Program PeakFQ provides estimates of instantaneous annual-maximum peak flows having recurrence intervals of 2, 5, 10, 25, 50, 100, 200, and 500 years (annual-exceedance probabilities of 0.50, 0.20, 0.10, 0.04, 0.02, 0.01, 0.005, and 0.002, respectively). As implemented in program PeakFQ, the Pearson Type III frequency distribution is fit to the logarithms of instantaneous annual peak flows following Bulletin 17B guidelines of the Interagency Advisory Committee on Water Data. The parameters of the Pearson Type III frequency curve are estimated by the logarithmic sample moments (mean, standard deviation, and coefficient of skewness), with adjustments for low outliers, high outliers, historic peaks, and generalized skew. This documentation provides an overview of the computational procedures in program PeakFQ, provides a description of the program menus, and provides an example of the output from the program.
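
    A simplified Python sketch of the core calculation PeakFQ performs: fitting the Pearson Type III distribution to the logarithms of annual peaks by sample moments and converting them to quantiles with a frequency factor. It omits the Bulletin 17B adjustments for low outliers, high outliers, historic peaks, and generalized skew, uses the Wilson-Hilferty approximation for the frequency factor, and the peak-flow series is hypothetical.

      import numpy as np
      from scipy import stats

      def lp3_quantiles(peaks, aep=(0.5, 0.2, 0.1, 0.04, 0.02, 0.01, 0.005, 0.002)):
          """Fit log-Pearson Type III by sample log-moments and return peak-flow quantiles."""
          x = np.log10(np.asarray(peaks, dtype=float))
          mean, std = x.mean(), x.std(ddof=1)
          g = stats.skew(x, bias=False)                   # station skew of the logs
          q = {}
          for p in aep:
              z = stats.norm.ppf(1.0 - p)                 # standard normal deviate
              # Wilson-Hilferty frequency factor for Pearson Type III
              k = (2.0 / g) * (((z - g / 6.0) * g / 6.0 + 1.0) ** 3 - 1.0) if abs(g) > 1e-6 else z
              q[p] = 10 ** (mean + k * std)
          return q

      annual_peaks = [412, 980, 655, 1320, 540, 760, 1890, 610, 830, 1140,
                      720, 505, 990, 1450, 680, 575, 860, 1210, 935, 1540]  # hypothetical cfs
      print(lp3_quantiles(annual_peaks)[0.01])            # approximate 100-year peak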

  10. Growth status and estimated growth rate of youth football players: a community-based study.

    PubMed

    Malina, Robert M; Morano, Peter J; Barron, Mary; Miller, Susan J; Cumming, Sean P

    2005-05-01

    To characterize the growth status of participants in community-sponsored youth football programs and to estimate rates of growth in height and weight. Mixed-longitudinal over 2 seasons. Two communities in central Michigan. Members of 33 youth football teams in 2 central Michigan communities in the 2000 and 2001 seasons (Mid-Michigan PONY Football League). Height and weight of all participants were measured prior to each season, 327 in 2000 and 326 in 2001 (n = 653). The body mass index (kg/m2) was calculated. Heights and weights did not differ from season to season and between the communities; the data were pooled and treated cross-sectionally. Increments of growth in height and weight were estimated for 166 boys with 2 measurements approximately 1 year apart to provide an estimate of growth rate. Growth status (size-attained) of youth football players relative to reference data (CDC) for American boys and estimated growth rate relative to reference values from 2 longitudinal studies of American boys. Median heights of youth football players approximate the 75th percentiles, while median weights approximate the 75th percentiles through 11 years and then drift toward the 90th percentiles of the reference. Median body mass indexes of youth football players fluctuate about the 85th percentiles of the reference. Estimated growth rates in height approximate the reference and may suggest earlier maturation, while estimated growth rates in weight exceed the reference. Youth football players are taller and especially heavier than reference values for American boys. Estimated rates of growth in height approximate medians for American boys and suggest earlier maturation. Estimated rates of growth in weight exceed those of the reference and may place many youth football players at risk for overweight/obesity, which in turn may be a risk factor for injury.

  11. A brightness exceeding simulated Langmuir limit

    NASA Astrophysics Data System (ADS)

    Nakasuji, Mamoru

    2013-08-01

    When the excitation of the first lens is set so that the beam is parallel, a brightness 100 times higher than the Langmuir limit is measured experimentally; here the Langmuir limits are estimated using a simulated axial cathode current density based on a measured emission current. The measured brightness is comparable to the Langmuir limit when the lens excitation is such that the image position is slightly shorter than the lens position. Previously measured brightness values for cathode apical radii of curvature of 20, 60, 120, 240, and 480 μm were 8.7, 5.3, 3.3, 2.4, and 3.9 times higher than their corresponding Langmuir limits, respectively; in that experiment, the lens excitation was such that the lens and image positions were 180 mm and 400 mm, respectively. From the brightnesses measured under three different lens excitation conditions, it is concluded that the brightness depends on the first lens excitation. For an electron gun operated in the space-charge-limited condition, some of the electrons emitted from the cathode are returned to the cathode without having crossed a virtual cathode. Therefore, the method that defines the Langmuir limit using a Maxwellian distribution of electron velocities may need to be revised. For the condition in which brightness values exceeding the Langmuir limit are measured, the simulated trajectories of electrons emitted from the cathode do not cross the optical axis at the crossover; thus the law of sines may not be valid for high-brightness electron beam systems.
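
    For context, the Langmuir brightness limit referred to above is commonly quoted in the textbook form below (an assumption here; the paper's working definition, based on a simulated axial cathode current density, may differ):

      B_L = \frac{J_c}{\pi}\left(1 + \frac{eV}{kT_c}\right) \approx \frac{J_c\,eV}{\pi k T_c}

    where J_c is the axial cathode current density, V the beam voltage, T_c the cathode temperature, e the elementary charge, and k Boltzmann's constant.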

  12. Anticipating Cycle 24 Minimum and its Consequences: An Update

    NASA Technical Reports Server (NTRS)

    Wilson, Robert M.; Hathaway, David H.

    2008-01-01

    This Technical Publication updates estimates for cycle 24 minimum and discusses consequences associated with cycle 23 being a longer than average period cycle and cycle 24 having parametric minimum values smaller (or larger for the case of spotless days) than long term medians. Through December 2007, cycle 23 has persisted 140 mo from its 12-mo moving average (12-mma) minimum monthly mean sunspot number occurrence date (May 1996). Longer than average period cycles of the modern era (since cycle 12) have minimum-to-minimum periods of about 139.0+/-6.3 mo (the 90-percent prediction interval), inferring that cycle 24's minimum monthly mean sunspot number should be expected before July 2008. The major consequence of this is that, unless cycle 24 is a statistical outlier (like cycle 21), its maximum amplitude (RM) likely will be smaller than previously forecast. If, however, in the course of its rise cycle 24's 12-mma of the weighted mean latitude (L) of spot groups exceeds 24 deg, then one expects RM >131, and if its 12-mma of highest latitude (H) spot groups exceeds 38 deg, then one expects RM >127. High-latitude new cycle spot groups, while first reported in January 2008, have not, as yet, become the dominant form of spot groups. Minimum values in L and H were observed in mid-2007 and values are now slowly increasing, a precondition for the imminent onset of the new sunspot cycle.

  13. Characterisation of plastic microbeads in facial scrubs and their estimated emissions in Mainland China.

    PubMed

    Cheung, Pui Kwan; Fok, Lincoln

    2017-10-01

    Plastic microbeads are often added to personal care and cosmetic products (PCCPs) as an abrasive agent in exfoliants. These beads have been reported to contaminate the aquatic environment and are sufficiently small to be readily ingested by aquatic organisms. Plastic microbeads can be directly released into the aquatic environment with domestic sewage if no sewage treatment is provided, and they can also escape from wastewater treatment plants (WWTPs) because of incomplete removal. However, the emissions of microbeads from these two sources have never been estimated for China, and no regulation has been imposed on the use of plastic microbeads in PCCPs. Therefore, in this study, we aimed to estimate the annual microbead emissions in Mainland China from both direct emissions and WWTP emissions. Nine facial scrubs were purchased, and the microbeads in the scrubs were extracted and enumerated. The microbead density in those products ranged from 5219 to 50,391 particles/g, with an average of 20,860 particles/g. Direct emissions arising from the use of facial scrubs were estimated using this average density number, population data, facial scrub usage rate, sewage treatment rate, and a few conservative assumptions. WWTP emissions were calculated by multiplying the annual treated sewage volume and estimated microbead density in treated sewage. We estimated that, on average, 209.7 trillion microbeads (306.9 tonnes) are emitted into the aquatic environment in Mainland China every year. More than 80% of the emissions originate from incomplete removal in WWTPs, and the remaining 20% are derived from direct emissions. Although the weight of the emitted microbeads only accounts for approximately 0.03% of the plastic waste input into the ocean from China, the number of microbeads emitted far exceeds the previous estimate of plastic debris (>330 μm) on the world's sea surface. Immediate actions are required to prevent plastic microbeads from entering the aquatic environment. Copyright © 2017 Elsevier Ltd. All rights reserved.
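
    A back-of-the-envelope Python sketch of the two emission pathways described above (direct discharge with untreated sewage plus incomplete removal in WWTPs). Apart from the measured average bead density of 20,860 particles/g, every input is an illustrative placeholder, and the WWTP term uses an assumed removal efficiency rather than the paper's measured effluent bead density.

      # All inputs other than beads_per_gram are illustrative placeholders.
      beads_per_gram   = 20_860        # average microbead density measured in facial scrubs
      scrub_g_per_use  = 1.5           # assumed grams of scrub per wash
      uses_per_year    = 100           # assumed washes per user per year
      users            = 2.0e8         # assumed number of facial-scrub users
      treatment_rate   = 0.90          # assumed fraction of sewage receiving treatment
      wwtp_removal     = 0.95          # assumed microbead removal efficiency in WWTPs

      beads_used = users * uses_per_year * scrub_g_per_use * beads_per_gram
      direct     = beads_used * (1 - treatment_rate)                 # untreated sewage
      wwtp       = beads_used * treatment_rate * (1 - wwtp_removal)  # escaping treatment
      print(f"direct: {direct:.2e}  wwtp: {wwtp:.2e}  total: {direct + wwtp:.2e} beads/yr")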

  14. The detection of problem analytes in a single proficiency test challenge in the absence of the Health Care Financing Administration rule violations.

    PubMed

    Cembrowski, G S; Hackney, J R; Carey, N

    1993-04-01

    The Clinical Laboratory Improvement Act of 1988 (CLIA 88) has dramatically changed proficiency testing (PT) practices by mandating (1) satisfactory PT for certain analytes as a condition of laboratory operation, (2) fixed PT limits for many of these "regulated" analytes, and (3) an increased number of PT specimens (n = 5) for each testing cycle. For many of these analytes, the fixed limits are much broader than the previously employed Standard Deviation Index (SDI) criteria. Paradoxically, there may be less incentive to identify and evaluate analytically significant outliers to improve the analytical process. Previously described "control rules" to evaluate these PT results are unworkable as they consider only two or three results. We used Monte Carlo simulations of Kodak Ektachem analyzers participating in PT to determine optimal control rules for the identification of PT results that are inconsistent with those from other laboratories using the same methods. The analysis of three representative analytes (potassium, creatine kinase, and iron) was simulated with varying intrainstrument and interinstrument standard deviations (si and sg, respectively) obtained from the College of American Pathologists (Northfield, Ill) Quality Assurance Services data and Proficiency Test data, respectively. Analytical errors were simulated in each of the analytes and evaluated in terms of multiples of the interlaboratory SDI. Simple control rules for detecting systematic and random error were evaluated with power function graphs (probability of error detection vs. magnitude of error). Based on the simulation results, we recommend screening all analytes for the occurrence of two or more observations exceeding the same +/- 1 SDI limit. For any analyte satisfying this condition, the mean of the observations should be calculated. For analytes with sg/si ratios between 1.0 and 1.5, a significant systematic error is signaled by the mean exceeding 1.0 SDI. Significant random error is signaled by one observation exceeding the +/- 3-SDI limit or the range of the observations exceeding 4 SDIs. For analytes with higher sg/si, significant systematic or random error is signaled by violation of the screening rule (having at least two observations exceeding the same +/- 1 SDI limit). Random error can also be signaled by one observation exceeding the +/- 1.5-SDI limit or the range of the observations exceeding 3 SDIs. We present a practical approach to the workup of apparent PT errors.
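
    The recommended screening and flagging rules translate directly into code; below is a minimal Python sketch applying them to the five SDI values of one PT challenge (the SDI values and sg/si ratio are hypothetical).

      def flag_pt_results(sdi, sg_over_si):
          """Apply the recommended proficiency-testing screening rules to 5 SDI values."""
          flags = []
          # Screening rule: two or more results beyond the same +/- 1 SDI limit
          screened = sum(x > 1 for x in sdi) >= 2 or sum(x < -1 for x in sdi) >= 2
          if screened:
              mean = sum(sdi) / len(sdi)
              if 1.0 <= sg_over_si <= 1.5:
                  if abs(mean) > 1.0:
                      flags.append("systematic error (|mean| > 1 SDI)")
                  if max(abs(x) for x in sdi) > 3.0:
                      flags.append("random error (one result beyond +/- 3 SDI)")
                  if max(sdi) - min(sdi) > 4.0:
                      flags.append("random error (range > 4 SDI)")
              else:  # higher sg/si: the screening rule itself signals error
                  flags.append("systematic or random error (screening rule violated)")
                  if max(abs(x) for x in sdi) > 1.5:
                      flags.append("random error (one result beyond +/- 1.5 SDI)")
                  if max(sdi) - min(sdi) > 3.0:
                      flags.append("random error (range > 3 SDI)")
          return flags or ["no flag"]

      print(flag_pt_results([1.2, 1.4, 0.6, 2.1, 0.9], sg_over_si=1.2))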

  15. Estimation of metallic structure durability for a known law of stress variation

    NASA Astrophysics Data System (ADS)

    Mironov, V. I.; Lukashuk, O. A.; Ogorelkov, D. A.

    2017-12-01

    Overloading of machines working in transient operational modes leads to stresses in their load-bearing metal structures that considerably exceed the endurance limit. Estimating fatigue damage by linear summation offers a more accurate prediction of machine durability. The paper presents an alternative approach to estimating the factors of cyclic degradation of a material. Free damped vibrations of the bridge girder of an overhead crane, which decay with a known logarithmic decrement, are studied. It is shown that taking cyclic degradation into account substantially decreases the estimated durability of a product.
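
    For context, the linear summation mentioned above is usually written as the Palmgren-Miner rule, D = sum(n_i / N_i), with failure expected as D approaches 1. The Python sketch below evaluates it for a hypothetical load spectrum and an assumed Basquin-type S-N curve; neither corresponds to the crane girder data of the paper.

      def cycles_to_failure(stress_mpa, m=5.0, n_ref=1.0e6, s_ref=200.0):
          """Assumed Basquin-type S-N curve: N(S) = n_ref * (s_ref / S)**m."""
          return n_ref * (s_ref / stress_mpa) ** m

      def miner_damage(spectrum):
          """Linear (Palmgren-Miner) damage sum over (stress amplitude, cycle count) pairs."""
          return sum(n / cycles_to_failure(s) for s, n in spectrum)

      # Hypothetical annual load spectrum: (stress amplitude MPa, cycles per year)
      spectrum = [(250.0, 2.0e4), (180.0, 1.0e5), (120.0, 5.0e5)]
      d_per_year = miner_damage(spectrum)
      print(f"damage per year = {d_per_year:.3f}; estimated life ≈ {1.0 / d_per_year:.1f} years")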

  16. Masses and activity of AB Doradus B a/b. The age of the AB Dor quadruple system revisited

    NASA Astrophysics Data System (ADS)

    Wolter, U.; Czesla, S.; Fuhrmeister, B.; Robrade, J.; Engels, D.; Wieringa, M.; Schmitt, J. H. M. M.

    2014-10-01

    We present a multiwavelength study of the close binary AB Dor Ba/b (Rst 137B). Our study comprises astrometric orbit measurements, optical spectroscopy, X-ray and radio observations. Using all available adaptive optics images of AB Dor B taken with VLT/NACO from 2004 to 2009, we tightly constrain its orbital period to 360.6 ± 1.5 days. We present the first orbital solution of Rst 137B and estimate the combined mass of AB Dor Ba+b as 0.69 (+0.02/-0.24) M⊙, slightly exceeding previous estimates based on IR photometry. Our determined orbital inclination of Rst 137B is close to the axial inclination of AB Dor A inferred from Doppler imaging. Our VLT/UVES spectra yield high rotational velocities of ≥30 km s-1 for both components Ba and Bb, in accord with previous measurements, which corresponds to rotation periods significantly shorter than one day. Our combined spectral model, using PHOENIX spectra, yields an effective temperature of 3310 ± 50 K for the primary and approximately 60 K less for the secondary. The optical spectra presumably cover a chromospheric flare and show that at least one component of Rst 137B is significantly active. Activity and weak variations are also found in our simultaneous XMM-Newton observations, while our ATCA radio data yield constant fluxes at the level of previous measurements. Using evolutionary models, our newly determined stellar parameters confirm that the age of Rst 137B is between 50 and 100 Myr. Based on observations collected at the European Southern Observatory, Paranal, Chile, 383.D-1002(A) and the ESO Science Archive Facility. Using data obtained with XMM-Newton, an ESA science mission with instruments and contributions directly funded by ESA Member states and NASA. Using data obtained with the Australia Telescope Compact Array (ATCA) operated by the Commonwealth Scientific and Industrial Research Organisation (CSIRO).

  17. A probabilistic storm transposition approach for estimating exceedance probabilities of extreme precipitation depths

    NASA Astrophysics Data System (ADS)

    Foufoula-Georgiou, E.

    1989-05-01

    A storm transposition approach is investigated as a possible tool for assessing the frequency of extreme precipitation depths, that is, depths of return period much greater than 100 years. This paper focuses on estimation of the annual exceedance probability of extreme average precipitation depths over a catchment. The probabilistic storm transposition methodology is presented, and the several conceptual and methodological difficulties arising in this approach are identified. The method is implemented and is partially evaluated by means of a semihypothetical example involving extreme midwestern storms and two hypothetical catchments (of 100 and 1000 mi2 (~260 and 2600 km2)) located in central Iowa. The results point out the need for further research to fully explore the potential of this approach as a tool for assessing the probabilities of rare storms, and eventually floods, a necessary element of risk-based analysis and design of large hydraulic structures.

  18. Estimated mortality of adult HIV-infected patients starting treatment with combination antiretroviral therapy

    PubMed Central

    Yiannoutsos, Constantin Theodore; Johnson, Leigh Francis; Boulle, Andrew; Musick, Beverly Sue; Gsponer, Thomas; Balestre, Eric; Law, Matthew; Shepherd, Bryan E; Egger, Matthias

    2012-01-01

    Objective To provide estimates of mortality among HIV-infected patients starting combination antiretroviral therapy. Methods We report on the death rates from 122 925 adult HIV-infected patients aged 15 years or older from East, Southern and West Africa, Asia Pacific and Latin America. We use two methods to adjust for biases in mortality estimation resulting from loss from follow-up, based on double-sampling methods applied to patient outreach (Kenya) and linkage with vital registries (South Africa), and apply these to mortality estimates in the other three regions. Age, gender and CD4 count at the initiation of therapy were the factors considered as predictors of mortality at 6, 12, 24 and >24 months after the start of treatment. Results Patient mortality was high during the first 6 months after therapy for all patient subgroups and exceeded 40 per 100 patient years among patients who started treatment at low CD4 count. This trend was seen regardless of region, demographic or disease-related risk factor. Mortality was under-reported by up to, and in some cases more than, 100% in estimates obtained from passive monitoring of patient vital status. Conclusions Despite advances in antiretroviral treatment coverage many patients start treatment at very low CD4 counts and experience significant mortality during the first 6 months after treatment initiation. Active patient tracing and linkage with vital registries are critical in adjusting estimates of mortality, particularly in low- and middle-income settings. PMID:23172344

  19. Water resources management: Hydrologic characterization through hydrograph simulation may bias streamflow statistics

    NASA Astrophysics Data System (ADS)

    Farmer, W. H.; Kiang, J. E.

    2017-12-01

    The development, deployment and maintenance of water resources management infrastructure and practices rely on hydrologic characterization, which requires an understanding of local hydrology. With regard to streamflow, this understanding is typically quantified with statistics derived from long-term streamgage records. However, a fundamental problem is how to characterize local hydrology without the luxury of streamgage records, a problem that complicates water resources management at ungaged locations and for long-term future projections. This problem has typically been addressed through the development of point estimators, such as regression equations, to estimate particular statistics. Physically-based precipitation-runoff models, which are capable of producing simulated hydrographs, offer an alternative to point estimators. The advantage of simulated hydrographs is that they can be used to compute any number of streamflow statistics from a single source (the simulated hydrograph) rather than relying on a diverse set of point estimators. However, the use of simulated hydrographs introduces a degree of model uncertainty that is propagated through to estimated streamflow statistics and may have drastic effects on management decisions. We compare the accuracy and precision of streamflow statistics (e.g. the mean annual streamflow, the annual maximum streamflow exceeded in 10% of years, and the minimum seven-day average streamflow exceeded in 90% of years, among others) derived from point estimators (e.g. regressions, kriging, machine learning) to that of statistics derived from simulated hydrographs across the continental United States. Initial results suggest that the error introduced through hydrograph simulation may substantially bias the resulting hydrologic characterization.
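
    A minimal pandas sketch of the statistics named above, computed directly from a daily hydrograph (here a synthetic series standing in for a simulated one): the mean annual streamflow, the annual-maximum flow exceeded in 10% of years, and the minimum seven-day average flow exceeded in 90% of years.

      import numpy as np
      import pandas as pd

      # Synthetic daily hydrograph standing in for a simulated series
      idx = pd.date_range("1981-01-01", "2020-12-31", freq="D")
      rng = np.random.default_rng(0)
      q = pd.Series(np.exp(rng.normal(3.0, 0.8, len(idx))), index=idx, name="cfs")

      annual_mean  = q.groupby(q.index.year).mean().mean()       # mean annual streamflow
      annual_max   = q.groupby(q.index.year).max()
      q_max10      = annual_max.quantile(0.90)                   # exceeded in 10% of years
      ann_7day_min = q.rolling(7).mean().groupby(q.index.year).min()
      q7_90        = ann_7day_min.quantile(0.10)                 # exceeded in 90% of years
      print(round(annual_mean, 1), round(q_max10, 1), round(q7_90, 2))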

  20. Diagnostic precision of mentally estimated home blood pressure means.

    PubMed

    Ouattara, Franck Olivier; Laskine, Mikhael; Cheong, Nathalie Ng; Birnbaum, Leora; Wistaff, Robert; Bertrand, Michel; van Nguyen, Paul; Kolan, Christophe; Durand, Madeleine; Rinfret, Felix; Lamarre-Cliche, Maxime

    2018-05-07

    Paper home blood pressure (HBP) charts are commonly brought to physicians at office visits. The precision and accuracy of mental calculations of blood pressure (BP) means are not known. A total of 109 hypertensive patients were instructed to measure and record their HBP for 1 week and to bring their paper charts to their office visit. Study section 1: HBP means were calculated electronically and compared to corresponding in-office BP estimates made by physicians. Study section 2: 100 randomly ordered HBP charts were re-examined repetitively by 11 evaluators. Each evaluator estimated BP means four times in 5, 15, 30, and 60 s (random order) allocated for the task. BP means and diagnostic performance (determination of therapeutic systolic and diastolic BP goals attained or not) were compared between physician estimates and electronically calculated results. Overall, electronically and mentally calculated BP means were not different. Individual analysis showed that 83% of in-office physician estimates were within a 5-mmHg systolic BP range. There was diagnostic disagreement in 15% of cases. Performance improved consistently when the time allocated for BP estimation was increased from 5 to 15 s and from 15 to 30 s, but not when it exceeded 30 s. Mentally calculating HBP means from paper charts can cause a number of diagnostic errors. Chart evaluation exceeding 30 s does not significantly improve accuracy. BP-measuring devices with modern analytical capacities could be useful to physicians.

  1. Estimating Demand for and Supply of Pediatric Preventive Dental Care for Children and Identifying Dental Care Shortage Areas, Georgia, 2015.

    PubMed

    Cao, Shanshan; Gentili, Monica; Griffin, Paul M; Griffin, Susan O; Harati, Pravara; Johnson, Ben; Serban, Nicoleta; Tomar, Scott

    Demand for dental care is expected to outpace supply through 2025. The objectives of this study were to determine the extent of pediatric dental care shortages in Georgia and to develop a general method for estimation that can be applied to other states. We estimated supply and demand for pediatric preventive dental care for the 159 counties in Georgia in 2015. We compared pediatric preventive dental care shortage areas (where demand exceeded twice the supply) designated by our methods with dental health professional shortage areas designated by the Health Resources & Services Administration. We estimated caries risk from a multivariate analysis of National Health and Nutrition Examination Survey data and national census data. We estimated county-level demand based on the time needed to perform preventive dental care services and the proportion of time that dentists spend on pediatric preventive dental care services from the Medical Expenditure Panel Survey. Pediatric preventive dental care supply exceeded demand in Georgia in 75 counties: the average annual county-level pediatric preventive dental care demand was 16 866 hours, and the supply was 32 969 hours. We identified 41 counties as pediatric dental care shortage areas, 14 of which had not been designated by the Health Resources & Services Administration. Age- and service-specific information on dental care shortage areas could result in more efficient provider staffing and geographic targeting.
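
    A small Python sketch of the shortage designation used above, flagging counties where estimated annual demand for preventive-care hours exceeds twice the estimated supply; the county records are hypothetical.

      # Hypothetical county-level estimates of annual hours of pediatric preventive care
      counties = [
          {"county": "A", "demand_hours": 25_000, "supply_hours": 40_000},
          {"county": "B", "demand_hours": 18_000, "supply_hours": 7_500},
          {"county": "C", "demand_hours": 9_000,  "supply_hours": 4_600},
      ]

      # Shortage area: demand exceeds twice the supply
      shortage = [c["county"] for c in counties if c["demand_hours"] > 2 * c["supply_hours"]]
      print(shortage)   # ['B'] here; county C just misses the threshold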

  2. Research of Water Level Prediction for a Continuous Flood due to Typhoons Based on a Machine Learning Method

    NASA Astrophysics Data System (ADS)

    Nakatsugawa, M.; Kobayashi, Y.; Okazaki, R.; Taniguchi, Y.

    2017-12-01

    This research aims to improve the accuracy of water level prediction calculations for more effective river management. In August 2016, Hokkaido was visited by four typhoons, whose heavy rainfall caused severe flooding. In the Tokoro river basin of Eastern Hokkaido, the water level (WL) at the Kamikawazoe gauging station, which is in the lower reaches, exceeded the design high-water level, and the water rose to the highest level on record. To predict such flood conditions and mitigate disaster damage, it is necessary to improve the accuracy of prediction as well as to prolong the lead time (LT) available for disaster mitigation measures such as flood-fighting activities and evacuation actions by residents. There is a need to predict the river water level around the peak stage earlier and more accurately. Previous research dealing with WL prediction proposed a method in which the WL at the lower reaches is estimated from its correlation with the WL at the upper reaches (hereinafter: "the water level correlation method"). Additionally, a runoff model-based method has been widely used in which the discharge is estimated by feeding rainfall prediction data to a runoff model, such as a storage function model, and the WL is then estimated from that discharge by using a water level-discharge rating curve (H-Q curve). In this research, an attempt was made to predict WL by applying the Random Forest (RF) method, a machine learning method that can estimate the contribution of explanatory variables. Furthermore, from a practical point of view, we investigated WL prediction based on a multiple correlation (MC) method that uses the explanatory variables with high contributions in the RF method, and we examined the proper selection of explanatory variables and the extension of LT. The following results were found: 1) Based on the RF method tuned by learning from previous floods, the WL for the abnormal flood of August 2016 was properly predicted with a lead time of 6 h. 2) Based on the contribution of explanatory variables, factors were selected for the MC method, and plausible prediction results were obtained.
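
    A minimal scikit-learn sketch of the approach described above: a random forest trained on lagged upstream water levels and basin rainfall to predict the downstream level six hours ahead, with feature importances standing in for the contribution of explanatory variables. The file name, column names, and lag set are assumptions, not those of the study.

      import pandas as pd
      from sklearn.ensemble import RandomForestRegressor

      LEAD = 6  # hours of lead time

      # Hypothetical hourly records: downstream/upstream water levels and basin rainfall
      df = pd.read_csv("tokoro_hourly.csv", parse_dates=["time"], index_col="time")
      for lag in (0, 1, 3, 6):
          df[f"wl_up_lag{lag}"] = df["wl_upstream"].shift(lag)
          df[f"rain_lag{lag}"] = df["rain_basin"].shift(lag)
      df["target"] = df["wl_downstream"].shift(-LEAD)   # level LEAD hours ahead
      df = df.dropna()

      features = [c for c in df.columns if "lag" in c]
      rf = RandomForestRegressor(n_estimators=300, random_state=0)
      rf.fit(df[features], df["target"])

      # Contribution of explanatory variables, then a prediction for the latest hour
      print(sorted(zip(rf.feature_importances_, features), reverse=True)[:3])
      print(rf.predict(df[features].tail(1)))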

  3. Contaminant levels, source strengths, and ventilation rates in California retail stores.

    PubMed

    Chan, W R; Cohn, S; Sidheswaran, M; Sullivan, D P; Fisk, W J

    2015-08-01

    This field study measured ventilation rates and indoor air quality during 21 visits to retail stores in California. Three types of stores were sampled: grocery, furniture/hardware, and apparel. Ventilation rates measured using a tracer gas decay method exceeded the minimum requirement of California's Title 24 Standard in all but one store. Concentrations of volatile organic compounds (VOCs), ozone, and carbon dioxide measured indoors and outdoors were analyzed. Even though ventilation was adequate according to the standard, concentrations of formaldehyde and acetaldehyde exceeded the most stringent chronic health guidelines in many of the sampled stores. The whole-building emission rates of VOCs were estimated from the measured ventilation rates and the concentrations measured indoors and outdoors. Estimated formaldehyde emission rates suggest that retail stores would need to ventilate at levels far exceeding the current Title 24 requirement to lower indoor concentrations below California's stringent formaldehyde reference level. Given the high costs of providing ventilation, effective source control is an attractive alternative. Field measurements suggest that California retail stores were well ventilated relative to the minimum ventilation rate requirement specified in the Building Energy Efficiency Standards Title 24. Concentrations of formaldehyde found in retail stores were low relative to levels found in homes but exceeded the most stringent chronic health guideline. Looking ahead, California is mandating zero energy commercial buildings by 2030. To reduce the energy use from building ventilation while maintaining or even lowering formaldehyde in retail stores, effective formaldehyde source control measures are vitally important. Published 2014. This article is a U.S. Government work and is in the public domain in the USA.
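
    The whole-building emission-rate estimate described above follows from a steady-state mass balance, E = Q*(Cin - Cout), with Q the outdoor air flow derived from the measured air change rate; a minimal Python sketch under that assumption, with illustrative values rather than measurements from the study:

      def emission_rate_ug_per_h(c_in_ug_m3, c_out_ug_m3, air_change_per_h, volume_m3):
          """Steady-state whole-building VOC emission rate from a simple mass balance."""
          outdoor_air_flow = air_change_per_h * volume_m3           # m3/h of outdoor air
          return outdoor_air_flow * (c_in_ug_m3 - c_out_ug_m3)      # ug/h

      # Illustrative store: 20 ug/m3 formaldehyde indoors, 2 ug/m3 outdoors,
      # 1.2 air changes per hour from the tracer-gas decay, 15,000 m3 volume
      print(f"{emission_rate_ug_per_h(20.0, 2.0, 1.2, 15_000):.0f} ug/h")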

  4. Divergence of actual and reference evapotranspiration observations for irrigated sugarcane with windy tropical conditions

    NASA Astrophysics Data System (ADS)

    Anderson, R. G.; Wang, D.; Tirado-Corbalá, R.; Zhang, H.; Ayars, J. E.

    2015-01-01

    Standardized reference evapotranspiration (ET) and ecosystem-specific vegetation coefficients are frequently used to estimate actual ET. However, equations for calculating reference ET have not been well validated in tropical environments. We measured ET (ETEC) using eddy covariance (EC) towers at two irrigated sugarcane fields on the leeward (dry) side of Maui, Hawaii, USA in contrasting climates. We calculated reference ET at the fields using the short (ET0) and tall (ETr) vegetation versions of the American Society for Civil Engineers (ASCE) equation. The ASCE equations were compared to the Priestley-Taylor ET (ETPT) and ETEC. Reference ET from the ASCE approaches exceeded ETEC during the mid-period (when vegetation coefficients suggest ETEC should exceed reference ET). At the windier tower site, cumulative ETr exceeded ETEC by 854 mm over the course of the mid-period (267 days). At the less windy site, mid-period ETr still exceeded ETEC, but the difference was smaller (443 mm). At both sites, ETPT approximated mid-period ETEC more closely than the ASCE equations ((ETPT-ETEC) < 170 mm). Analysis of applied water and precipitation, soil moisture, leaf stomatal resistance, and canopy cover suggest that the lower observed ETEC was not the result of water stress or reduced vegetation cover. Use of a custom-calibrated bulk canopy resistance improved the reference ET estimate and reduced seasonal ET discrepancy relative to ETPT and ETEC in the less windy field and had mixed performance in the windier field. These divergences suggest that modifications to reference ET equations may be warranted in some tropical regions.
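
    For reference, the Priestley-Taylor estimate used for comparison above has the form ETPT = alpha * Delta/(Delta + gamma) * (Rn - G); below is a minimal Python sketch under standard assumptions (alpha = 1.26, FAO-56-style expressions for Delta and gamma), with illustrative inputs rather than the Maui tower data.

      import math

      def priestley_taylor_mm_per_day(rn_mj, g_mj, t_air_c, pressure_kpa=101.3, alpha=1.26):
          """Priestley-Taylor ET (mm/day) from net radiation Rn and soil heat flux G (MJ m-2 d-1)."""
          es = 0.6108 * math.exp(17.27 * t_air_c / (t_air_c + 237.3))        # kPa
          delta = 4098.0 * es / (t_air_c + 237.3) ** 2                       # kPa/degC
          gamma = 0.000665 * pressure_kpa                                    # kPa/degC
          lam = 2.45                                                         # MJ/kg latent heat
          return alpha * (delta / (delta + gamma)) * (rn_mj - g_mj) / lam    # mm/day

      print(round(priestley_taylor_mm_per_day(rn_mj=18.0, g_mj=1.0, t_air_c=24.0), 2))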

  5. Divergence of reference evapotranspiration observations with windy tropical conditions

    NASA Astrophysics Data System (ADS)

    Anderson, R. G.; Wang, D.; Tirado-Corbalá, R.; Zhang, H.; Ayars, J. E.

    2014-06-01

    Standardized reference evapotranspiration (ET) and ecosystem-specific vegetation coefficients are frequently used to estimate actual ET. However, equations for calculating reference ET have not been well validated in tropical environments. We measured ET (ETEC) using Eddy Covariance (EC) towers at two irrigated sugarcane fields on the leeward (dry) side of Maui, Hawaii, USA in contrasting climates. We calculated reference ET at the fields using the short (ET0) and tall (ETr) vegetation versions of the American Society for Civil Engineers (ASCE) equation. The ASCE equations were compared to the Priestley-Taylor ET (ETPT) and ETEC. Reference ET from the ASCE approaches exceeded ETEC during the mid-period (when vegetation coefficients suggest ETEC should exceed reference ET). At the windier tower site, cumulative ETr exceeded ETEC by 854 mm over the course of the mid-period (267 days). At the less windy site, mid-period ETr still exceeded ETEC, but the difference was smaller (443 mm). At both sites, ETPT approximated mid-period ETEC more closely than the ASCE equations ((ETPT-ETEC) < 170 mm). Analysis of applied water and precipitation, soil moisture, leaf stomatal resistance, and canopy cover suggest that the lower observed ETEC was not the result of water stress or reduced vegetation cover. Use of a custom calibrated bulk canopy resistance improved the reference ET estimate and reduced seasonal ET discrepancy relative to ETPT and ETEC for the less windy field and had mixed performance at the windier field. These divergences suggest that modifications to reference ET equations may be warranted in some tropical regions.

  6. Extreme-event geoelectric hazard maps: Chapter 9

    USGS Publications Warehouse

    Love, Jeffrey J.; Bedrosian, Paul A.

    2018-01-01

    Maps of geoelectric amplitude covering about half the continental United States are presented that will be exceeded, on average, once per century in response to an extreme-intensity geomagnetic disturbance. These maps are constructed using an empirical parameterization of induction: convolving latitude-dependent statistical maps of extreme-value geomagnetic disturbances, obtained from decades of 1-minute magnetic observatory data, with local estimates of Earth-surface impedance obtained at discrete geographic sites from magnetotelluric surveys. Geoelectric amplitudes are estimated for geomagnetic waveforms having a 240-s (and 1200-s) sinusoidal period and amplitudes over 10 min (1 h) that exceed a once-per-century threshold. As a result of the combination of geographic differences in geomagnetic variation and Earth-surface impedance, once-per-century geoelectric amplitudes span more than two orders of magnitude and are a highly granular function of location. Specifically for north-south 240-s induction, once-per-century geoelectric amplitudes across large parts of the United States have a median value of 0.34 V/km; for east-west variation, they have a median value of 0.23 V/km. In Northern Minnesota, amplitudes exceed 14.00 V/km for north-south geomagnetic variation (23.34 V/km for east-west variation), while just over 100 km away, amplitudes are only 0.08 V/km (0.02 V/km). At some sites in the northern-central United States, once-per-century geoelectric amplitudes exceed the 2 V/km realized in Québec during the March 1989 storm.
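
    A minimal Python sketch of the plane-wave induction relation that underlies maps of this kind, |E| = |Z|*|B|/mu0 for a sinusoidal geomagnetic variation at a single period; the impedance magnitude and disturbance amplitude below are illustrative placeholders, not values from the observatory or magnetotelluric data.

      import math

      MU0 = 4.0e-7 * math.pi   # vacuum permeability (H/m)

      def geoelectric_amplitude_v_per_km(z_ohm, b_amplitude_nt):
          """|E| = |Z| * |H| with H = B / mu0, for a single sinusoidal period."""
          h_amplitude = (b_amplitude_nt * 1.0e-9) / MU0     # A/m
          e_v_per_m = z_ohm * h_amplitude                   # V/m
          return e_v_per_m * 1000.0                         # V/km

      # Illustrative: |Z| = 1e-3 ohm at a 240 s period, 1000 nT once-per-century disturbance
      print(round(geoelectric_amplitude_v_per_km(1.0e-3, 1000.0), 2))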

  7. Predicted effects of future climate warming on thermal habitat suitability for Lake Sturgeon (Acipenser fulvescens, Rafinesque, 1817) in rivers in Wisconsin, USA

    USGS Publications Warehouse

    Lyons, John D.; Stewart, Jana S.

    2015-01-01

    The Lake Sturgeon (Acipenser fulvescens, Rafinesque, 1817) may be threatened by future climate warming. The purpose of this study was to identify river reaches in Wisconsin, USA, where they might be vulnerable to warming water temperatures. In Wisconsin, A. fulvescens is known from 2291 km of large-river habitat that has been fragmented into 48 discrete river-lake networks isolated by impassable dams. Although the exact temperature tolerances are uncertain, water temperatures above 28–30°C are potentially less suitable for this coolwater species. Predictions from 13 downscaled global climate models were input to a lotic water temperature model to estimate amounts of potential thermally less-suitable habitat at present and for 2046–2065. Currently, 341 km (14.9%) of the known habitat are estimated to regularly exceed 28°C for an entire day, but only 6 km (0.3%) to exceed 30°C. In 2046–2065, 685–2164 km (29.9–94.5%) are projected to exceed 28°C and 33–1056 km (1.4–46.1%) to exceed 30°C. Most river-lake networks have cooler segments, large tributaries, or lakes that might provide temporary escape from potentially less suitable temperatures, but 12 short networks in the Lower Fox and Middle Wisconsin rivers totaling 93.6 km are projected to have no potential thermal refugia. One possible adaptation to climate change could be to provide fish passage or translocation so that riverine Lake Sturgeon might have access to more thermally suitable habitats.

  8. 301-15 East Illinois St, December 2017, Lindsay Light Radiological Survey

    EPA Pesticide Factsheets

    The field gamma measurements within the excavation during the excavation process did not exceed the instrument threshold previously stated, and ranged from a minimum of 800 cpm to a maximum of 2,800 cpm shielded.

  9. 335 E. Erie, January 2016, Lindsay Light Radiological Survey

    EPA Pesticide Factsheets

    The field gamma measurements within the excavation during the excavation process did not exceed the instrument threshold previously stated and ranged from a minimum of 4,900 cpm to a maximum of 10,200 cpm unshielded.

  10. 215 E. Grand Ave, December 2015, Lindsay Light Radiological Survey

    EPA Pesticide Factsheets

    The field gamma measurements within the excavation during the excavation process did not exceed the instrument threshold previously stated and ranged from a minimum of 6,800 cpm to a maximum of 7,200 cpm unshielded.

  11. 243 E. Ontario Street - Water, Lindsay Light Radiological Survey

    EPA Pesticide Factsheets

    The field gamma measurements within the excavation during the excavation process did not exceed the instrument threshold previously stated, and ranged from a minimum of 1,400 cpm to a maximum of 2,600 cpm shielded.

  12. 224 E Ontario, February 2015, Lindsay Light Radiological Survey

    EPA Pesticide Factsheets

    The field gamma measurements within the excavation during the excavation process did not exceed the instrument threshold previously stated and ranged from a minimum of 6,200 cpm to a maximum of 7,500 cpm unshielded.

  13. 54-63 East Ohio St., Lindsay Light Radiological Survey

    EPA Pesticide Factsheets

    The field gamma measurements within the excavation during the excavation process did not exceed the instrument threshold previously stated, and ranged from a minimum of 2,000 cpm to a maximum of 2,300 cpm shielded.

  14. 225 E. Grand Ave, September 2015, Lindsay Light Radiological Survey

    EPA Pesticide Factsheets

    The field gamma measurements within the excavation during the excavation process did not exceed the instrument threshold previously stated and ranged from a minimum of 5,300 cpm to a maximum of 9,700 cpm unshielded.

  15. 243 E. Ontario Street - Sewer, Lindsay Light Radiological Survey

    EPA Pesticide Factsheets

    The field gamma measurements within the excavation during the excavation process did not exceed the instrument threshold previously stated, and ranged from a minimum of 1,100 cpm to a maximum of 3,600 cpm shielded.

  16. 626 N. Michigan Ave, August 2017, Lindsay Light Radiological Survey

    EPA Pesticide Factsheets

    The field gamma measurements of the spoil and within the excavations did not exceed the instrument threshold previously stated, and ranged from a minimum of 1,900 cpm to a maximum of 7,200 cpm shielded.

  17. 228 E. Ontario, February 2016, Lindsay Light Radiological Survey

    EPA Pesticide Factsheets

    The field gamma measurements within the excavation during the excavation process did not exceed the instrument threshold previously stated and ranged from a minimum of 5,400 cpm to a maximum of 10,000 cpm unshielded.

  18. 29 CFR 4.6 - Labor standards clauses for Federal service contracts exceeding $2,500.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... the classification in question was previously conformed pursuant to this section, a new conformed wage... contractor shall permit authorized representatives of the Wage and Hour Division to conduct interviews with...

  19. Pupillary dilation as an index of task demands.

    PubMed

    Cabestrero, Raúl; Crespo, Antonio; Quirós, Pilar

    2009-12-01

    To analyze how pupillary responses reflect mental effort and the allocation of processing resources under several load conditions, the pupil diameter of 18 participants was recorded during an auditory digit-span recall task under three load conditions: Low (5 digits), Moderate (8 digits), and Overload (11 digits). In previous research, under all load conditions a significant linear enlargement in pupil diameter was observed as each digit was presented. Significant dilations from the end of the presentation phase to the beginning of the recall phase were also observed, but only under low and moderate loads. Contrary to previous research, under the Overload condition no reduction in pupil diameter was observed when resource limits were exceeded; instead, a plateau was observed from the presentation of the ninth digit until the beginning of the recall phase. Overall, the pupillometric data seem to indicate that participants may keep processing actively even though resources are exceeded.

  20. Prevalence and pattern of occupational exposure to whole body vibration in Great Britain: findings from a national survey

    PubMed Central

    Palmer, K.; Griffin, M.; Bendall, H.; Pannett, B.; Coggon, D.

    2000-01-01

    OBJECTIVES—To estimate the number of workers in Great Britain with significant occupational exposure to whole body vibration (WBV) and to identify the common sources of exposure and the occupations and industries where such exposures arise.
METHODS—A postal questionnaire was sent to a random community sample of 22 194 men and women of working age. Among other things, the questionnaire asked about exposure to WBV in the past week, including occupational and common non-occupational sources. Responses were assessed by occupation and industry, and national prevalence estimates were derived from census information. Estimates were also made of the average daily personal dose of vibration (eVDV).
RESULTS—From the 12 907 responses it was estimated that 7.2 million men and 1.8 million women in Great Britain are exposed to WBV at work in a 1 week period if the occupational use of cars, vans, buses, trains, and motor cycles is included within the definition of exposure. The eVDV of >374 000 men and 9000 women was estimated to exceed a proposed British Standard action level of 15 m s-1.75. Occupations in which the estimated exposures most often exceeded 15 m s-1.75 included forklift truck and mechanical truck drivers, farm owners and managers, farm workers, and drivers of road goods vehicles. These occupations also contributed the largest estimated numbers of workers in Great Britain with such levels of exposure. The highest estimated median occupational eVDVs were found in forklift truck drivers, drivers of road goods vehicles, bus and coach drivers, and technical and wholesale sales representatives, among whom a greater contribution to total dose was received from occupational exposures than from non-occupational ones; but in many other occupations the reverse applied. The most common sources of occupational exposure to WBV are cars, vans, forklift trucks, lorries, tractors, buses, and loaders.
CONCLUSIONS—Exposure to whole body vibration is common, but only a small proportion of exposures exceed the action level proposed in British standards, and in many occupations, non-occupational sources are more important than those at work. The commonest occupational sources of WBV and occupations with particularly high exposures have been identified, providing a basis for targeting future control activities.


Keywords: whole body vibration; population; prevalence; exposure. PMID:10810108

  1. Fish status survey of Nordic lakes: effects of acidification, eutrophication and stocking activity on present fish species composition.

    PubMed

    Tammi, Jouni; Appelberg, Magnus; Beier, Ulrika; Hesthagen, Trygve; Lappalainen, Antti; Rask, Martti

    2003-03-01

    The status of fish populations in 3821 lakes in Norway, Sweden and Finland was assessed in 1995-1997. The survey lakes were chosen by stratified random sampling from all (126 482) Fennoscandian lakes ≥ 0.04 km². The water chemistry of the lakes was analyzed and information on fish status was obtained by a postal inquiry. Fish population losses were most frequent in the most highly acidified region of southern Norway and least common in eastern Fennoscandia. According to the inquiry results, the number of lost stocks of brown trout (Salmo trutta), roach (Rutilus rutilus), Arctic char (Salvelinus alpinus) and perch (Perca fluviatilis) was estimated to exceed 10000. The number of stocks of these species potentially affected by the low alkalinity of lake water was estimated to exceed 11000. About 3300 lakes showed high total phosphorus (>25 µg L⁻¹) and cyprinid dominance in eastern Fennoscandia, notably southwestern Finland. This survey did not reveal any extinction of fish species due to eutrophication. One-third of the lakes had been artificially stocked with at least one new species, most often brown trout, whitefish (Coregonus lavaretus s.l.), Arctic char, rainbow trout (Oncorhynchus mykiss), pike-perch (Stizostedion lucioperca), grayling (Thymallus thymallus), pike (Esox lucius), bream (Abramis brama), tench (Tinca tinca) and European minnow (Phoxinus phoxinus). The number of artificially manipulated stocks of these species in Fennoscandian lakes was estimated to exceed 52000. Hence, the number of fish species occurring in Nordic lakes has recently been changed more by stockings than by losses of fish species through environmental changes such as acidification.

  2. [Do vitamins from fortified foods exceed the allowed limits? A study in adolescents and young adults of the Metropolitan Region of Chile].

    PubMed

    Freixas Sepúlveda, Alejandra; Díaz Narváez, Víctor Patricio; Durán Agüero, Samuel; Gaete Verdugo, María Cristina

    2013-01-01

    To analyze the usual vitamin intake of an adolescent and young adult population in the Metropolitan Region, 213 vitamin-fortified foods on the Chilean market were examined. A consumption survey was administered and nutrient intake was calculated, including the vitamins added to fortified foods. The normality of the intake variables was assessed, the data were analyzed with descriptive statistics, and percentiles were determined. The percentages of subjects whose intakes exceeded the recommended daily dose (DDR) and the UL values listed for each vitamin were estimated, together with the percentage of excess in each case. Discriminant analysis was performed using Box's M test; the canonical correlation and Wilks' lambda statistic were estimated, and the percentage of correctly classified cases was determined. Data were processed in SPSS 20.0 with a significance level of α ≤ 0.05. For all the vitamins studied, the highest percentage of subjects exceeding the DDR was for total folate (96.4%), and the lowest percentages were for vitamins E and B12 in young adult women. The percentage of subjects exceeding the UL was greatest for vitamin B3 (91.9%). According to the canonical correlation, the groups differ in their consumption behavior. Monitoring of the consumption of vitamin-fortified foods is recommended, especially for the B-complex vitamins and vitamin A. Copyright © AULA MEDICA EDICIONES 2013. Published by AULA MEDICA. All rights reserved.

  3. Comparison of the near field/far field model and the advanced reach tool (ART) model V1.5: exposure estimates to benzene during parts washing with mineral spirits.

    PubMed

    LeBlanc, Mallory; Allen, Joseph G; Herrick, Robert F; Stewart, James H

    2018-03-01

    The Advanced Reach Tool V1.5 (ART) is a mathematical model for occupational exposures conceptually based on, but implemented differently than, the "classic" Near Field/Far Field (NF/FF) exposure model. The NF/FF model conceptualizes two distinct exposure "zones": the near field, within approximately 1 m of the breathing zone, and the far field, consisting of the rest of the room in which the exposure occurs. ART has been reported to provide "realistic and reasonable worst case" estimates of the exposure distribution. In this study, benzene exposure during the use of a metal parts washer was modeled using ART V1.5 and compared to actual measured worker samples and to NF/FF model results from three previous studies. Next, the exposure concentrations expected to be exceeded 25%, 10% and 5% of the time for the exposure scenario were calculated using ART. Lastly, ART exposure estimates were compared with and without Bayesian adjustment. The modeled parts-washing benzene exposure scenario included distinct tasks, e.g. spraying, brushing, rinsing and soaking/drying. Because ART can directly incorporate specific types of tasks that are part of the exposure scenario, the present analysis identified each task's determinants of exposure and performance time, thus extending the work of the previous three studies, where the process of parts washing was modeled as one event. The ART 50th percentile exposure estimate for benzene (0.425 ppm) more closely approximated the reported measured mean value of 0.50 ppm than the NF/FF model estimates of 0.33 ppm, 0.070 ppm or 0.2 ppm obtained from other modeling studies of this exposure scenario. The ART model with the Bayesian analysis provided the closest estimate to the measured value (0.50 ppm). ART (with Bayesian adjustment) was then used to assess the 75th, 90th and 95th percentile exposures, predicting that on randomly selected days during this parts-washing exposure scenario, 25% of the benzene exposures would be above 0.70 ppm; 10% above 0.95 ppm; and 5% above 1.15 ppm. These exposure estimates at the three different percentiles of the ART exposure distribution refer to the modeled exposure scenario, not a specific workplace or worker. This study provides a detailed comparison of modeling tools currently available to occupational hygienists and other exposure assessors. Possible applications are considered. Copyright © 2017 Elsevier GmbH. All rights reserved.
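    A minimal sketch of the percentile arithmetic behind statements such as "25% of exposures above 0.70 ppm": quantiles of an assumed lognormal exposure distribution. This is not the ART mechanistic model; only the 0.425 ppm median comes from the record above, and the geometric standard deviation is a hypothetical placeholder.

    ```python
    # Percentiles of an assumed lognormal exposure distribution (illustration only).
    import numpy as np
    from scipy import stats

    gm = 0.425   # geometric mean (median) exposure in ppm, from the record above
    gsd = 2.5    # geometric standard deviation -- hypothetical placeholder

    mu, sigma = np.log(gm), np.log(gsd)

    for pct in (75, 90, 95):
        # concentration exceeded on (100 - pct)% of days
        c = stats.lognorm.ppf(pct / 100.0, s=sigma, scale=np.exp(mu))
        print(f"{pct}th percentile (exceeded {100 - pct}% of the time): {c:.2f} ppm")
    ```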

  4. Kinetics of MDR Transport in Tumor-Initiating Cells

    PubMed Central

    Koshkin, Vasilij; Yang, Burton B.; Krylov, Sergey N.

    2013-01-01

    Multidrug resistance (MDR) driven by ABC (ATP-binding cassette) membrane transporters is one of the major causes of treatment failure in human malignancy. MDR capacity is thought to be unevenly distributed among tumor cells, with higher capacity residing in tumor-initiating cells (TIC) (though opposite findings are occasionally reported). Functional evidence for enhanced MDR of TICs was previously provided using a “side population” assay. This assay estimates MDR capacity by a single parameter, the cell’s ability to retain a fluorescent MDR substrate, so that cells with high MDR capacity (“side population”) demonstrate low substrate retention. In the present work MDR in TICs was investigated in greater detail using a kinetic approach, which monitors MDR efflux from single cells. Analysis of the kinetic traces obtained allowed estimation of both the maximum velocity (Vmax) and the affinity (KM) of MDR transport in single cells. In this way it was shown that activation of MDR in TICs occurs in two ways: through an increase of Vmax in one fraction of cells, and through a decrease of KM in another fraction. In addition, the kinetic data showed that the heterogeneity of MDR parameters in TICs significantly exceeds that of bulk cells. Potential consequences of these findings for chemotherapy are discussed. PMID:24223908
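    A minimal sketch of how Vmax and KM can be estimated by fitting the Michaelis-Menten equation to transport data; the (substrate, rate) pairs below are invented for illustration, and the single-cell fluorescence processing used in the study is not reproduced.

    ```python
    # Fit v = Vmax * S / (KM + S) to hypothetical (substrate, efflux-rate) data.
    import numpy as np
    from scipy.optimize import curve_fit

    def michaelis_menten(s, vmax, km):
        return vmax * s / (km + s)

    substrate = np.array([0.5, 1.0, 2.0, 4.0, 8.0, 16.0])   # arbitrary units
    rate = np.array([0.9, 1.6, 2.5, 3.4, 4.1, 4.5])         # hypothetical efflux rates

    (vmax, km), _ = curve_fit(michaelis_menten, substrate, rate, p0=(5.0, 2.0))
    print(f"Vmax = {vmax:.2f}, KM = {km:.2f}")
    ```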

  5. Equatorial jet in the lower to middle cloud layer of Venus revealed by Akatsuki

    NASA Astrophysics Data System (ADS)

    Horinouchi, Takeshi; Murakami, Shin-Ya; Satoh, Takehiko; Peralta, Javier; Ogohara, Kazunori; Kouyama, Toru; Imamura, Takeshi; Kashimura, Hiroki; Limaye, Sanjay S.; McGouldrick, Kevin; Nakamura, Masato; Sato, Takao M.; Sugiyama, Ko-Ichiro; Takagi, Masahiro; Watanabe, Shigeto; Yamada, Manabu; Yamazaki, Atsushi; Young, Eliot F.

    2017-09-01

    The Venusian atmosphere is in a state of superrotation where prevailing westward winds move much faster than the planet's rotation. Venus is covered with thick clouds that extend from about 45 to 70 km altitude, but thermal radiation emitted from the lower atmosphere and the surface on the planet's nightside escapes to space at narrow spectral windows of the near-infrared. The radiation can be used to estimate winds by tracking the silhouettes of clouds in the lower and middle cloud regions below about 57 km in altitude. Estimates of wind speeds have ranged from 50 to 70 m s-1 at low to mid-latitudes, either nearly constant across latitudes or with winds peaking at mid-latitudes. Here we report the detection of winds at low latitude exceeding 80 m s-1 using IR2 camera images from the Akatsuki orbiter taken during July and August 2016. The angular speed around the planetary rotation axis peaks near the equator, which we suggest is consistent with an equatorial jet, a feature that has not been observed previously in the Venusian atmosphere. The mechanism producing the jet remains unclear. Our observations reveal variability in the zonal flow in the lower and middle cloud region that may provide clues to the dynamics of Venus's atmospheric superrotation.

  6. Equatorial jet in the lower to middle cloud layer of Venus revealed by Akatsuki.

    PubMed

    Horinouchi, Takeshi; Murakami, Shin-Ya; Satoh, Takehiko; Peralta, Javier; Ogohara, Kazunori; Kouyama, Toru; Imamura, Takeshi; Kashimura, Hiroki; Limaye, Sanjay S; McGouldrick, Kevin; Nakamura, Masato; Sato, Takao M; Sugiyama, Ko-Ichiro; Takagi, Masahiro; Watanabe, Shigeto; Yamada, Manabu; Yamazaki, Atsushi; Young, Eliot F

    2017-01-01

    The Venusian atmosphere is in a state of superrotation where prevailing westward winds move much faster than the planet's rotation. Venus is covered with thick clouds that extend from about 45 to 70 km altitude, but thermal radiation emitted from the lower atmosphere and the surface on the planet's night-side escapes to space at narrow spectral windows of near-infrared. The radiation can be used to estimate winds by tracking the silhouettes of clouds in the lower and middle cloud regions below about 57 km in altitude. Estimates of wind speeds have ranged from 50 to 70 m/s at low- to mid-latitudes, either nearly constant across latitudes or with winds peaking at mid-latitudes. Here we report the detection of winds at low latitude exceeding 80 m/s using IR2 camera images from the Akatsuki orbiter taken during July and August 2016. The angular speed around the planetary rotation axis peaks near the equator, which we suggest is consistent with an equatorial jet, a feature that has not been observed previously in the Venusian atmosphere. The mechanism producing the jet remains unclear. Our observations reveal variability in the zonal flow in the lower and middle cloud region that may provide new challenges and clues to the dynamics of Venus's atmospheric superrotation.

  7. A Comparison of Three Second-generation Swirl-Venturi Lean Direct Injection Combustor Concepts

    NASA Technical Reports Server (NTRS)

    Tacina, Kathleen M.; Podboy, Derek P.; He, Zhuohui Joe; Lee, Phil; Dam, Bidhan; Mongia, Hukam

    2016-01-01

    Three variations of a low-emissions aircraft gas turbine engine combustion concept were developed and tested. The concept is a second-generation swirl-venturi lean direct injection (SV-LDI) concept. LDI is a lean-burn combustion concept in which the fuel is injected directly into the flame zone. All three variations were based on the baseline 9-point SV-LDI configuration reported previously. The three second-generation SV-LDI variations are called the 5-recess configuration, the flat dome configuration, and the 9-recess configuration. These three configurations were tested in a NASA Glenn Research Center medium-pressure flametube. All three second-generation variations had better low-power operability than the baseline 9-point configuration. All three configurations had low NOx emissions, with the 5-recess configuration generally having slightly lower NOx than the flat dome or 9-recess configurations. Because the flametube could not be operated at pressures above 20 atm, correlation equations were developed for the flat dome and 9-recess configurations so that the landing-takeoff NOx emissions could be estimated. The flat dome and 9-recess landing-takeoff NOx emissions are estimated to be 81-88% below the CAEP/6 standards, exceeding the project goal of a 75% reduction.

  8. Dose rate constants for the quantity Hp(3) for frequently used radionuclides in nuclear medicine.

    PubMed

    Szermerski, Bastian; Bruchmann, Iris; Behrens, Rolf; Geworski, Lilli

    2016-12-01

    According to recent studies, the human eye lens is more sensitive to ionising radiation than previously assumed. Therefore, the dose limit for personnel occupationally exposed to ionising radiation will be lowered from the current 150 mSv to 20 mSv per year. At present, no database is available in nuclear medicine for a reliable estimation of the dose to the lens of the eye, and this dose is usually not monitored. The aim of this work was to determine dose rate constants for the quantity Hp(3), which is intended to estimate the dose to the lens of the eye. For this, Hp(3) dosemeters were fixed to an Alderson phantom at different positions. The dosemeters were exposed to radiation from nuclides typically used in nuclear medicine, in geometries analogous to their application in nuclear medicine, e.g. syringe or vial. The results show that the handling of high-energy beta (i.e. electron or positron) emitters may lead to a relevant dose to the lens of the eye. For low-energy beta emitters and gamma emitters, exceeding the lowered dose limit seems unlikely. Copyright © 2015. Published by Elsevier GmbH.

  9. A supermassive black hole in an ultra-compact dwarf galaxy.

    PubMed

    Seth, Anil C; van den Bosch, Remco; Mieske, Steffen; Baumgardt, Holger; den Brok, Mark; Strader, Jay; Neumayer, Nadine; Chilingarian, Igor; Hilker, Michael; McDermid, Richard; Spitler, Lee; Brodie, Jean; Frank, Matthias J; Walsh, Jonelle L

    2014-09-18

    Ultra-compact dwarf galaxies are among the densest stellar systems in the Universe. These systems have masses of up to 2 × 10⁸ solar masses, but half-light radii of just 3-50 parsecs. Dynamical mass estimates show that many such dwarfs are more massive than expected from their luminosity. It remains unclear whether these high dynamical mass estimates arise because of the presence of supermassive black holes or result from a non-standard stellar initial mass function that causes the average stellar mass to be higher than expected. Here we report adaptive optics kinematic data of the ultra-compact dwarf galaxy M60-UCD1 that show a central velocity dispersion peak exceeding 100 kilometres per second and modest rotation. Dynamical modelling of these data reveals the presence of a supermassive black hole with a mass of 2.1 × 10⁷ solar masses. This is 15 per cent of the object's total mass. The high black hole mass and mass fraction suggest that M60-UCD1 is the stripped nucleus of a galaxy. Our analysis also shows that M60-UCD1's stellar mass is consistent with its luminosity, implying a large population of previously unrecognized supermassive black holes in other ultra-compact dwarf galaxies.

  10. Detecting plague-host abundance from space: Using a spectral vegetation index to identify occupancy of great gerbil burrows

    NASA Astrophysics Data System (ADS)

    Wilschut, Liesbeth I.; Heesterbeek, Johan A. P.; Begon, Mike; de Jong, Steven M.; Ageyev, Vladimir; Laudisoit, Anne; Addink, Elisabeth A.

    2018-02-01

    In Kazakhstan, plague outbreaks occur when the abundance of its main host, the great gerbil, exceeds a threshold. The gerbils live in family groups in burrows, which can be mapped using remote sensing. Occupancy (the percentage of burrows occupied) is a good proxy for abundance and hence for the possibility of an outbreak. Here we use time series of satellite images to estimate occupancy remotely. In April and September 2013, 872 burrows were identified in the field as either occupied or empty. For satellite images acquired between April and August, 'burrow objects' were identified and matched to the field burrows. The burrow objects were represented by 25 different polygon types, then classified (using a majority vote from 10 Random Forests) as occupied or empty, using Normalized Difference Vegetation Indices (NDVI) calculated for all images. Throughout the season NDVI values were higher for empty than for occupied burrows. The occupancy status of individual burrows that were continuously occupied or empty was classified with producer's and user's accuracy values of 63 and 64% for the optimum polygon. Occupancy level was predicted very well and differed by 2% from the observed occupancy. This establishes firmly the principle that occupancy can be estimated using satellite images, with the potential to predict plague outbreaks over extensive areas with much greater ease and accuracy than previously.
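    A minimal sketch of the classification step described above: per-date NDVI values as features and a majority vote over ten Random Forests. The NDVI values and labels are synthetic, and the object matching and 25 polygon types of the study are not reproduced.

    ```python
    # Majority vote of ten Random Forests on synthetic per-burrow NDVI time series.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    n_burrows, n_dates = 872, 6
    labels = rng.integers(0, 2, n_burrows)          # 1 = occupied, 0 = empty
    # Occupied burrows get slightly lower NDVI, as reported in the record above.
    ndvi = rng.normal(0.25, 0.05, (n_burrows, n_dates)) - 0.03 * labels[:, None]

    X_tr, X_te, y_tr, y_te = train_test_split(ndvi, labels, random_state=0)

    votes = []
    for seed in range(10):                          # ten forests with different seeds
        rf = RandomForestClassifier(n_estimators=200, random_state=seed)
        rf.fit(X_tr, y_tr)
        votes.append(rf.predict(X_te))
    majority = (np.mean(votes, axis=0) >= 0.5).astype(int)
    print("overall accuracy:", round((majority == y_te).mean(), 3))
    ```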

  11. A national statistical survey assessment of mercury concentrations in fillets of fish collected in the U.S. EPA national rivers and streams assessment of the continental USA.

    PubMed

    Wathen, John B; Lazorchak, James M; Olsen, Anthony R; Batt, Angela

    2015-03-01

    The U.S. EPA conducted a national statistical survey of fish fillet tissue with a sample size of 541 sites on boatable rivers ≥ 5th order in 2008-2009. This is the first such study of mercury (Hg) in fish tissue from river sites focused on potential impacts to human health from fish consumption to also address wildlife impacts. Sample sites were identified as being urban or non-urban. All sample mercury concentrations were above the 3.33 µg kg⁻¹ (ppb) quantitation limit, and an estimated 25.4% (±4.4%) of the 51,663 river miles assessed exceeded the U.S. EPA 300 µg kg⁻¹ fish-tissue-based water quality criterion for mercury, representing 13,144 ± 181.8 river miles. Estimates of river miles exceeding comparable aquatic life thresholds (translated from fillet concentrations to whole fish equivalents) in avian species were similar to the number of river miles exceeding the human health threshold, whereas some mammalian species were more at risk than humans from lower mercury concentrations. A comparison of means from the non-urban and urban data and among three ecoregions did not indicate a statistically significant difference in fish tissue Hg concentrations at p<0.05. Published by Elsevier Ltd.

  12. Concentrations and Risks of p-Dichlorobenzene in Indoor and Outdoor Air

    PubMed Central

    Chin, Jo-Yu; Godwin, Christopher; Jia, Chunrong; Robins, Thomas; Lewis, Toby; Parker, Edith; Max, Paul; Batterman, Stuart

    2012-01-01

    p-Dichlorobenzene (PDCB) is a chlorinated volatile organic compound (VOC) that can be encountered at high concentrations in buildings due to its use as pest repellent and deodorant. This study characterizes PDCB concentrations in four communities in southeast Michigan. The median concentration outside 145 homes was 0.04 µg m⁻³, and the median concentration inside 287 homes was 0.36 µg m⁻³. The distribution of indoor concentrations was extremely skewed. For example, 30% of the homes exceeded 0.91 µg m⁻³, which corresponds to a cancer risk level of 10⁻⁵ based on the California unit risk estimate, and 4% of homes exceeded 91 µg m⁻³, equivalent to a 10⁻³ risk level. The single highest measurement was 4,100 µg m⁻³. Estimates of whole house emission rates were largely consistent with chamber test results in the literature. Indoor concentrations that exceed a few µg m⁻³ indicate use of PDCB products. PDCB concentrations differed among households and the four cities, suggesting the importance of locational, cultural and behavioral factors in the use patterns of this chemical. The high PDCB levels found suggest the need for policies and actions to lower exposures, e.g., sales or use restrictions, improved labeling, and consumer education. PMID:22725685
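    The risk thresholds quoted above follow from the usual linear inhalation model, risk = concentration x unit risk; the 0.91 and 91 µg/m³ levels imply a unit risk of about 1.1e-5 per µg/m³ (the California estimate referenced in the record). A small sketch of that arithmetic:

    ```python
    # Lifetime excess cancer risk under a linear unit-risk model (illustration only).
    UNIT_RISK = 1.1e-5   # per ug/m3, implied by the 0.91 ug/m3 <-> 1e-5 pairing above

    def cancer_risk(concentration_ug_m3: float) -> float:
        return concentration_ug_m3 * UNIT_RISK

    for c in (0.36, 0.91, 91.0, 4100.0):   # median indoor, 1e-5 level, 1e-3 level, maximum
        print(f"{c:>8.2f} ug/m3 -> risk ~ {cancer_risk(c):.1e}")
    ```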

  13. Department of the Army FY 1994 Budget Estimates, Military Construction, Family Housing and Homeowners Assistance

    DTIC Science & Technology

    1993-04-01

    Excerpt (budget justification forms, partially recovered): Real Property Maintenance Repairs exceeding $15,000, $147,742,000; Major Construction Projects (1391s attached) include the Grafenwoehr Training Area, Grafenwoehr, Germany, Sanitary Landfill Expansion, covering sealing work (to separate contaminants from ground water), drainage, gas exhaust lines and gas wells, gas collection lines, and a gas regulator station.

  14. An assessment of the risk arising from electrical effects associated with carbon fibers released from commercial aircraft fires

    NASA Technical Reports Server (NTRS)

    Kalelkar, A. S.; Fiksel, J.; Rosenfield, D.; Richardson, D. L.; Hagopian, J.

    1980-01-01

    The risks associated with electrical effects arising from carbon fibers released from commercial aviation aircraft fires were estimated for 1993. The expected annual losses were estimated to be about $470 (1977 dollars) in 1993. The chances of total losses from electrical effects exceeding $100,000 (1977 dollars) in 1993 were established to be about one in ten thousand.

  15. Estimated Dietary Intakes of Toxic Elements from Four Staple Foods in Najran City, Saudi Arabia.

    PubMed

    Mohamed, Hatem; Haris, Parvez I; Brima, Eid I

    2017-12-14

    Exposure of the inhabitants of the Najran area in Saudi Arabia to the toxic elements As, Cd, Cr, and Pb through foods has not been previously investigated. Exposure to such elements is an important public health issue, so the study described here was performed with the aim of determining estimated dietary intakes (EDIs) for these metals in the Najran area. The As, Cd, Cr, and Pb concentrations in four staple foods (rice, wheat, red meat, and chicken) were determined by inductively coupled plasma-mass spectrometry. A food frequency questionnaire (FFQ) was completed by 80 study participants. These data were used to estimate dietary intakes of the metals in the four staple foods. The mean As, Cd, Cr, and Pb EDIs in the four food types were 1.1 × 10⁻⁶–2.6 × 10⁻⁵, 1.42 × 10⁻⁵–2.2 × 10⁻⁴, 3.4 × 10⁻⁴–8.0 × 10⁻⁴, and 2.3 × 10⁻⁵–2.1 × 10⁻³ mg/kg bw/day, respectively. Hazard quotients (HQ) for all elements did not exceed one. The highest Pb concentration was found for chicken, and the source of this toxic element in this food needs to be investigated in the future. The lowest As concentration was found for wheat and the highest for rice. The EDIs for all elements in the four food types were below the provisional tolerable weekly intakes set by the World Health Organization (WHO).
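    A minimal sketch of how an EDI and its hazard quotient are computed from a measured concentration and food-frequency data. All numeric inputs below are hypothetical placeholders, not values from the study; a real assessment would take the reference dose from WHO or EPA guidance.

    ```python
    # EDI = concentration x daily intake / body weight; HQ = EDI / reference dose.
    def estimated_dietary_intake(conc_mg_per_kg, intake_kg_per_day, body_weight_kg):
        """EDI in mg per kg body weight per day."""
        return conc_mg_per_kg * intake_kg_per_day / body_weight_kg

    def hazard_quotient(edi, reference_dose):
        """HQ below one indicates intake under the reference dose."""
        return edi / reference_dose

    edi = estimated_dietary_intake(conc_mg_per_kg=0.02,    # hypothetical Pb in rice
                                   intake_kg_per_day=0.3,  # hypothetical consumption
                                   body_weight_kg=70.0)
    hq = hazard_quotient(edi, reference_dose=3.5e-3)       # hypothetical reference dose
    print(f"EDI = {edi:.2e} mg/kg bw/day, HQ = {hq:.2f}")
    ```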

  16. Reduction of predictive uncertainty in estimating irrigation water requirement through multi-model ensembles and ensemble averaging

    NASA Astrophysics Data System (ADS)

    Multsch, S.; Exbrayat, J.-F.; Kirby, M.; Viney, N. R.; Frede, H.-G.; Breuer, L.

    2014-11-01

    Irrigation agriculture plays an increasingly important role in food supply. Many evapotranspiration models are used today to estimate the water demand for irrigation. They account for different stages of crop growth through empirical crop coefficients that adapt evapotranspiration throughout the vegetation period. We investigate the importance of model structural vs. model parametric uncertainty for irrigation simulations by considering six evapotranspiration models and five crop coefficient sets to estimate irrigation water requirements for growing wheat in the Murray-Darling Basin, Australia. The study is carried out using the spatial decision support system SPARE:WATER. We find that structural model uncertainty is far more important than model parametric uncertainty for estimating irrigation water requirements. Using the Reliability Ensemble Averaging (REA) technique, we are able to reduce the overall predictive model uncertainty by more than 10%. The exceedance probability curve of irrigation water requirements shows that a given threshold, e.g. an irrigation water limit of 400 mm set by a water right, would be exceeded less frequently under the REA ensemble average (45%) than under the equally weighted ensemble average (66%). We conclude that multi-model ensemble predictions and sophisticated model averaging techniques are helpful in predicting irrigation demand and provide relevant information for decision making.
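    A minimal sketch of the threshold-exceedance comparison described above: count how often an irrigation requirement threshold is exceeded under equal ensemble weights versus reliability-based weights. The requirement values and the skill weighting are synthetic stand-ins, not the SPARE:WATER output or the actual REA weighting scheme.

    ```python
    # Compare exceedance frequency of a 400 mm threshold for two ensemble weightings.
    import numpy as np

    rng = np.random.default_rng(1)
    n_members, n_years = 30, 50                                # hypothetical ensemble
    requirement = rng.normal(420, 60, (n_members, n_years))    # mm per season, synthetic

    equal_w = np.full(n_members, 1.0 / n_members)
    skill = 1.0 / (1.0 + np.abs(requirement.mean(axis=1) - 400.0))   # toy reliability measure
    rea_like_w = skill / skill.sum()

    threshold = 400.0                                          # mm, e.g. a water-right limit
    for name, w in (("equal weights", equal_w), ("reliability weights", rea_like_w)):
        ensemble_mean = np.average(requirement, axis=0, weights=w)   # one value per year
        frac = (ensemble_mean > threshold).mean()
        print(f"{name}: threshold exceeded in {100 * frac:.0f}% of years")
    ```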

  17. Floods of May and June 2008 in Iowa

    USGS Publications Warehouse

    Buchmiller, Robert C.; Eash, David A.

    2010-01-01

    An unusually wet winter and spring of 2007 to 2008 resulted in extremely wet antecedent conditions throughout most of Iowa. Rainfall of 5 to 15 inches was observed in eastern Iowa during May 2008, and an additional 5 to 15 inches of rain was observed throughout most of Iowa in June. Because of the severity of the May and June 2008 flooding, the U.S. Geological Survey, in cooperation with other Federal, State, and local agencies, has summarized the meteorological and hydrological conditions leading to the flooding, compiled flood-peak stages and discharges, and estimated revised flood probabilities for 62 selected streamgages. Record peak discharges or flood probabilities of 1 percent or smaller (100-year flooding or greater) occurred at more than 60 streamgage locations, particularly in eastern Iowa. Cedar Rapids, Decorah, Des Moines, Iowa City, Mason City, and Waterloo were among the larger urban areas affected by this flooding. High water and flooding in small, headwater streams in north-central and eastern Iowa, particularly in June, combined and accumulated in large, mainstem rivers and resulted in flooding of historic proportions in the Cedar and Iowa Rivers. Previous flood-peak discharges at many locations were exceeded by substantial amounts, in some cases nearly doubling the previous record peak discharge at locations where more than 100 years of streamflow record are available.

  18. A storm severity index based on return levels of wind speeds

    NASA Astrophysics Data System (ADS)

    Becker, Nico; Nissen, Katrin M.; Ulbrich, Uwe

    2015-04-01

    European windstorms related to extra-tropical cyclones cause considerable damages to infrastructure during the winter season. Leckebusch et al. (2008) introduced a storm severity index (SSI) based on the exceedances of the local 98th percentile of wind speeds. The SSI is based on the assumption that (insured) damage usually occurs within the upper 2%-quantile of the local wind speed distribution (i.e. if the 98th percentile is exceeded). However, critical infrastructure, for example related to the power network or the transportation system, is usually designed to withstand wind speeds reaching the local 50-year return level, which is much higher than the 98th percentile. The aim of this work is to use the 50-year return level to develop a modified SSI, which takes into account only extreme wind speeds relevant to critical infrastructure. As a first step we use the block maxima approach to estimate the spatial distribution of return levels by fitting the generalized extreme value (GEV) distribution to the wind speeds retrieved from different reanalysis products. We show that the spatial distributions of the 50-year return levels derived from different reanalyses agree well within large parts of Europe. The differences between the reanalyses are largely within the range of the uncertainty intervals of the estimated return levels. As a second step the exceedances of the 50-year return level are evaluated and compared to the exceedances of the 98th percentiles for different extreme European windstorms. The areas where the wind speeds exceed the 50-year return level in the reanalysis data do largely agree with the areas where the largest damages were reported, e.g. France in the case of "Lothar" and "Martin" and Central Europe in the case of "Kyrill". Leckebusch, G. C., Renggli, D., & Ulbrich, U. (2008). Development and application of an objective storm severity measure for the Northeast Atlantic region. Meteorologische Zeitschrift, 17(5), 575-587.
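    A minimal sketch of the block-maxima step described above: fit a GEV distribution to annual maximum wind speeds at one location and derive the 50-year return level whose exceedances would feed the modified index. The annual maxima are synthetic; no reanalysis data are used here.

    ```python
    # Fit a GEV to synthetic annual wind-speed maxima and compute the 50-year return level.
    import numpy as np
    from scipy.stats import genextreme

    rng = np.random.default_rng(2)
    annual_max = genextreme.rvs(c=-0.1, loc=25.0, scale=3.0, size=40, random_state=rng)  # m/s

    shape, loc, scale = genextreme.fit(annual_max)
    rl_50 = genextreme.ppf(1.0 - 1.0 / 50.0, shape, loc=loc, scale=scale)
    print(f"50-year return level: {rl_50:.1f} m/s")
    print("years in the sample exceeding it:", int((annual_max > rl_50).sum()))
    ```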

  19. Development of a speeding-related crash typology

    DOT National Transportation Integrated Search

    2010-04-01

    Speeding, the driver behavior of exceeding the posted speed limit or driving too fast for conditions, has consistently been estimated to be a contributing factor to a significant percentage of fatal and nonfatal crashes. The U.S. Department of Transp...

  20. 42 CFR 412.525 - Adjustments to the Federal prospective payment.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... its estimated costs for a patient exceed the adjusted LTC-MS-DRG payment plus a fixed-loss amount. For...-DRG relative weights that are in effect at the start of the applicable long-term care hospital...

  1. 78 FR 34381 - Information Collection Being Reviewed by the Federal Communications Commission Under Delegated...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-06-07

    ... Respondents: 5,000 respondents; 5,000 responses. Estimated Time per Response: 20 minutes or (.3 hours... transmitting to ensure that ERP does not exceed 100 W PEP. Federal Communications Commission. Gloria J. Miles...

  2. Airport capacity : representation, estimation, optimization

    DOT National Transportation Integrated Search

    1993-09-01

    A major goal of air traffic management is to strategically control the flow of traffic so that the demand at an airport meets but does not exceed the operational capacity. This paper considers the major aspects of airport operational capacities relev...

  3. Trends in domestic and international markets for ash logs and lumber

    Treesearch

    Dan Meyer

    2010-01-01

    While ash is a "minor" commercial hardwood species relative to oak, poplar, and maple, it still accounts for roughly 3 percent of all hardwood lumber produced, with an estimated kiln-dried value exceeding $150 million annually.

  4. Pattern of intake of food additives associated with hyperactivity in Irish children and teenagers.

    PubMed

    Connolly, A; Hearty, A; Nugent, A; McKevitt, A; Boylan, E; Flynn, A; Gibney, M J

    2010-04-01

    A double-blind randomized intervention study has previously shown that a significant relationship exists between the consumption of various mixes of seven target additives by children and the onset of hyperactive behaviour. The present study set out to ascertain the pattern of intake of two mixes (A and B) of these seven target additives in Irish children and teenagers using the Irish national food consumption databases for children (n = 594) and teenagers (n = 441) and the National Food Ingredient Database. The majority of additive-containing foods consumed by both the children and teenagers contained one of the target additives. No food consumed by either the children or teenagers contained all seven of the target food additives. For each additive, intake estimates for every individual were made assuming that the additive was present at the maximum legally permitted level in those foods identified as containing it. For both groups, mean intakes of the food additives among consumers only were far below the doses used in the previous study on hyperactivity. Intakes at the 97.5th percentile of all food colours fell below the doses used in Mix B, while intakes for four of the six food colours were also below the doses used in Mix A. However, in the case of the preservative sodium benzoate, intake exceeded the previously used dose in both children and teenagers. No child or teenager achieved the overall intakes used in the study linking food additives with hyperactivity.

  5. Economic Impact of Dengue Illness in the Americas

    PubMed Central

    Shepard, Donald S.; Coudeville, Laurent; Halasa, Yara A.; Zambrano, Betzana; Dayan, Gustavo H.

    2011-01-01

    The growing burden of dengue in endemic countries and outbreaks in previously unaffected countries stress the need to assess the economic impact of this disease. This paper synthesizes existing studies to calculate the economic burden of dengue illness in the Americas from a societal perspective. Major data sources include national case reporting data from 2000 to 2007, prospective cost of illness studies, and analyses quantifying underreporting in national routine surveillance systems. Dengue illness in the Americas was estimated to cost $2.1 billion per year on average (in 2010 US dollars), with a range of $1–4 billion in sensitivity analyses and substantial year to year variation. The results highlight the substantial economic burden from dengue in the Americas. The burden for dengue exceeds that from other viral illnesses, such as human papillomavirus (HPV) or rotavirus. Because this study does not include some components (e.g., vector control), it may still underestimate total economic consequences of dengue. PMID:21292885

  6. Male mutation rates and the cost of sex for females

    NASA Astrophysics Data System (ADS)

    Redfield, Rosemary J.

    1994-05-01

    Although we do not know why sex evolved, the twofold cost of meiosis for females provides a standard against which postulated benefits of sex can be evaluated [1]. The most reliable benefit is sex's ability to reduce the impact of deleterious mutations [2,3]. But deleterious mutations may themselves generate a large and previously overlooked female-specific cost of sex. DNA sequence comparisons have confirmed Haldane's suggestion that most mutations arise in the male germ line [4,5]; recent estimates of α, the ratio of male to female mutation rates, are ten, six and two in humans, primates and rodents, respectively [6-8]. Consequently, male gametes may give progeny more mutations than the associated sexual recombination eliminates. Here I describe computer simulations showing that the cost of male mutations can easily exceed the benefits of recombination, causing females to produce fitter progeny by parthenogenesis than by mating. The persistence of sexual reproduction by females thus becomes even more problematic.

  7. Exposure age and ice-sheet model constraints on Pliocene East Antarctic ice sheet dynamics.

    PubMed

    Yamane, Masako; Yokoyama, Yusuke; Abe-Ouchi, Ayako; Obrochta, Stephen; Saito, Fuyuki; Moriwaki, Kiichi; Matsuzaki, Hiroyuki

    2015-04-24

    The Late Pliocene epoch is a potential analogue for future climate in a warming world. Here we reconstruct Plio-Pleistocene East Antarctic Ice Sheet (EAIS) variability using cosmogenic nuclide exposure ages and model simulations to better understand ice sheet behaviour under such warm conditions. New and previously published exposure ages indicate interior-thickening during the Pliocene. An ice sheet model with mid-Pliocene boundary conditions also results in interior thickening and suggests that both the Wilkes Subglacial and Aurora Basins largely melted, offsetting increased ice volume. Considering contributions from West Antarctica and Greenland, this is consistent with the most recent IPCC AR5 estimate, which indicates that the Pliocene sea level likely did not exceed +20 m on Milankovitch timescales. The inception of colder climate since ∼3 Myr has increased the sea ice cover and inhibited active moisture transport to Antarctica, resulting in reduced ice sheet thickness, at least in coastal areas.

  8. Multidecadal Variability in Surface Albedo Feedback Across CMIP5 Models

    NASA Astrophysics Data System (ADS)

    Schneider, Adam; Flanner, Mark; Perket, Justin

    2018-02-01

    Previous studies quantify surface albedo feedback (SAF) in climate change, but few assess its variability on decadal time scales. Using the Coupled Model Intercomparison Project Version 5 (CMIP5) multimodel ensemble data set, we calculate time-evolving SAF in multiple decades from surface albedo and temperature linear regressions. Results are meaningful when temperature change exceeds 0.5 K. Decadal-scale SAF is strongly correlated with century-scale SAF during the 21st century. Throughout the 21st century, multimodel ensemble mean SAF increases from 0.37 to 0.42 W m⁻² K⁻¹. These results suggest that models' mean decadal-scale SAFs are good estimates of their century-scale SAFs if there is at least 0.5 K temperature change. Persistent SAF into the late 21st century indicates ongoing capacity for Arctic albedo decline despite there being less sea ice. If the CMIP5 multimodel ensemble results are representative of the Earth, we cannot expect decreasing Arctic sea ice extent to suppress SAF in the 21st century.

  9. Validation of 131I ecological transfer models and thyroid dose assessments using Chernobyl fallout data from the Plavsk district, Russia

    PubMed Central

    Zvonova, I.; Krajewski, P.; Berkovsky, V.; Ammann, M.; Duffa, C.; Filistovic, V.; Homma, T.; Kanyar, B.; Nedveckaite, T.; Simon, S.L.; Vlasov, O.; Webbe-Wood, D.

    2009-01-01

    Within the project “Environmental Modelling for Radiation Safety” (EMRAS), organized by the IAEA in 2003, experimental data from 131I measurements following the Chernobyl accident in the Plavsk district of Tula region, Russia, were used to validate the calculations of some radioecological transfer models. Nine models participated in the inter-comparison. Levels of 137Cs soil contamination in all the settlements and 131I/137Cs isotopic ratios in the depositions at some locations were used as the main input information. 370 measurements of 131I content in the thyroid of townspeople and villagers, and 90 measurements of 131I concentration in milk, were used for validation of the model predictions. A remarkable improvement in model performance compared with the previous inter-comparison exercise was demonstrated. Predictions of the various models were within a factor of three of the observations, and discrepancies between the estimates of average thyroid doses produced by most participants did not exceed a factor of ten. PMID:19783331

  10. 479 N Columbus Dr, March 2012, Lindsay Light Radiological Survey

    EPA Pesticide Factsheets

    Field gamma measurements did not exceed the respective field instrument threshold values previously stated and ranged from a minimum of 4,500 cpm unshielded to a maximum of 5,700 cpm shielded in the bottom of the excavation.

  11. 545 N McClurg Ct, February 2015, Lindsay Light Radiological Survey

    EPA Pesticide Factsheets

    The field gamma measurements within the excavation during the excavation process did not exceed the instrument threshold previously stated and ranged from a minimum of 5,700 cpm to a maximum of 13,500 cpm unshielded.

  12. 600 N McClurg Ct, April 2012, Lindsay Light Radiological Survey

    EPA Pesticide Factsheets

    Field gamma measurements did not exceed the respective field instrument threshold values previously stated and ranged from 3,400 to 9,500 cpm unshielded with a maximum of 4,500 cpm shielded in the bottom of the excavation.

  13. 300 East Randolph, December 2010, Lindsay Light Radiological Survey

    EPA Pesticide Factsheets

    Field gamma measurements within the excavation and the spoil materials generated during the excavation process did not exceed the respective threshold values previously stated and generally ranged from 4,050 cpm to 9,690 cpm with the unshielded probe.

  14. 356 E Grand, July 2016, Lindsay Light Radiological Survey

    EPA Pesticide Factsheets

    The field gamma measurements within the excavations and of the spoil during the excavation process did not exceed the instrument threshold previously stated and ranged from a minimum of 5,000 cpm to a maximum of 9,600 cpm unshielded.

  15. 362-363 E. Ontario, April 2012, Lindsay Light Radiological Survey

    EPA Pesticide Factsheets

    Field gamma measurements did not exceed the respective field instrument threshold values previously stated and ranged from 3,400 to 9,500 cpm unshielded with a maximum of 4,500 cpm shielded in the bottom of the excavation.

  16. 237 E Ontario, July 2016, Lindsay Light Radiological Survey

    EPA Pesticide Factsheets

    The field gamma measurements within the excavations and of the spoil during the excavation process did not exceed the instrument threshold previously stated and ranged from a minimum of 8,600 cpm to a maximum of 9,500 cpm unshielded.

  17. 226-228 E Ontario St, December 2014, Lindsay Light Radiological Survey

    EPA Pesticide Factsheets

    The field gamma measurements within the excavation during the excavation process did not exceed the unshielded instrument threshold previously stated and ranged from a minimum of 6,200 cpm to a maximum of 8,800 cpm unshielded.

  18. 502 N Peshtigo, October 2010, Lindsay Light Radiological Survey

    EPA Pesticide Factsheets

    The field gamma measurements within the excavation and the spoil materials generated during the excavation process did not exceed the respective threshold values previously stated and ranged from a minimum of 690 cpm to a maximum of 6,950 cpm.

  19. 380 E. North Water St, April 2016, Lindsay Light Radiological Survey

    EPA Pesticide Factsheets

    The field gamma measurements within the excavation during the excavation process did not exceed the instrument threshold previously stated and ranged from a minimum of 6,800 cpm to a maximum of 12,100 cpm unshielded.

  20. 5 CFR 930.209 - Senior Administrative Law Judge Program.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... is classified at pay level AL-3, the senior administrative law judge is paid the lowest rate of basic pay in AL-3 that equals or exceeds the highest previous rate of basic pay attained by the individual...

  1. Effects of Alder Mine on the Water, Sediments, and Benthic Macroinvertebrates of Alder Creek, 1998 Annual Report.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Peplow, Dan

    1999-05-28

    The Alder Mine, an abandoned gold, silver, copper, and zinc mine in Okanogan County, Washington, produces heavy metal-laden effluent that affects the quality of water in a tributary of the Methow River. The annual mass loading of heavy metals from two adits at the Alder Mine was estimated to exceed 11,000 kg per year. In this study, water samples from stations along Alder Creek were assayed for heavy metals by ICP-AES and were found to exceed Washington State's acute freshwater criteria for cadmium (Cd), copper (Cu), selenium (Se), and zinc (Zn).

  2. A method to combine spaceborne radar and radiometric observations of precipitation

    NASA Astrophysics Data System (ADS)

    Munchak, Stephen Joseph

    This dissertation describes the development and application of a combined radar-radiometer rainfall retrieval algorithm for the Tropical Rainfall Measuring Mission (TRMM) satellite. A retrieval framework based upon optimal estimation theory is proposed wherein three parameters describing the raindrop size distribution (DSD), ice particle size distribution (PSD), and cloud water path (cLWP) are retrieved for each radar profile. The retrieved rainfall rate is found to be strongly sensitive to the a priori constraints in DSD and cLWP; thus, these parameters are tuned to match polarimetric radar estimates of rainfall near Kwajalein, Republic of the Marshall Islands. An independent validation against gauge-tuned radar rainfall estimates at Melbourne, FL, shows agreement within 2%, which exceeds previous algorithms' ability to match rainfall at these two sites. The algorithm is then applied to two years of TRMM data over oceans to determine the sources of DSD variability. Three correlated sets of variables representing storm dynamics, background environment, and cloud microphysics are found to account for approximately 50% of the variability in the absolute and reflectivity-normalized median drop size. Structures of radar reflectivity are also identified and related to drop size, with these relationships being confirmed by ground-based polarimetric radar data from the North American Monsoon Experiment (NAME). Regional patterns of DSD and the sources of variability identified herein are also shown to be consistent with previous work documenting regional DSD properties. In particular, mid-latitude regions and tropical regions near land tend to have larger drops for a given reflectivity, whereas the smallest drops are found in the eastern Pacific Intertropical Convergence Zone. Due to properties of the DSD and rain water/cloud water partitioning that change with column water vapor, it is shown that increases in water vapor in a global warming scenario could lead to slight (1%) underestimates of rainfall trends by radar but larger overestimates (5%) by radiometer algorithms. Further analyses are performed to compare tropical oceanic mean rainfall rates between the combined algorithm and other sources. The combined algorithm is 15% higher than version 6 of the 2A25 radar-only algorithm and 6.6% higher than the Global Precipitation Climatology Project (GPCP) estimate for the same time-space domain. Despite being higher than these two sources, the combined total is not inconsistent with estimates of the other components of the energy budget given their uncertainties.
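    A minimal sketch of the optimal-estimation step named above: minimize the cost function J(x) = (x - xa)^T Sa^-1 (x - xa) + (y - F(x))^T Se^-1 (y - F(x)) for a three-element state vector. The forward model here is a toy linear operator, not the TRMM radar/radiometer physics, and all matrices and values are hypothetical.

    ```python
    # Toy optimal-estimation retrieval of a 3-parameter state from 4 observations.
    import numpy as np
    from scipy.optimize import minimize

    xa = np.array([1.0, 0.5, 0.2])                    # a priori state (hypothetical)
    Sa = np.diag([0.5**2, 0.3**2, 0.1**2])            # a priori covariance
    Se = np.diag([0.2**2] * 4)                        # observation-error covariance
    K = np.array([[1.0, 0.3, 0.0],
                  [0.2, 1.0, 0.1],
                  [0.0, 0.4, 1.0],
                  [0.5, 0.0, 0.6]])                   # toy forward-model Jacobian

    def forward(x):
        return K @ x

    y = forward(np.array([1.2, 0.4, 0.3])) + 0.05     # synthetic observations

    Sa_inv, Se_inv = np.linalg.inv(Sa), np.linalg.inv(Se)

    def cost(x):
        dx, dy = x - xa, y - forward(x)
        return dx @ Sa_inv @ dx + dy @ Se_inv @ dy

    x_hat = minimize(cost, xa).x                      # retrieved state
    print("retrieved state:", np.round(x_hat, 3))
    ```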

  3. Segmentation editing improves efficiency while reducing inter-expert variation and maintaining accuracy for normal brain tissues in the presence of space-occupying lesions

    PubMed Central

    Deeley, MA; Chen, A; Datteri, R; Noble, J; Cmelak, A; Donnelly, EF; Malcolm, A; Moretti, L; Jaboin, J; Niermann, K; Yang, Eddy S; Yu, David S; Dawant, BM

    2013-01-01

    Image segmentation has become a vital and often rate-limiting step in modern radiotherapy treatment planning. In recent years the pace and scope of algorithm development, and even introduction into the clinic, have far exceeded evaluative studies. In this work we build upon our previous evaluation of a registration-driven segmentation algorithm in the context of 8 expert raters and 20 patients who underwent radiotherapy for large space-occupying tumors in the brain. Here we tested four hypotheses concerning the impact of manual segmentation editing in a randomized single-blinded study. We tested these hypotheses on the normal structures of the brainstem, optic chiasm, eyes and optic nerves using the Dice similarity coefficient, volume, and signed Euclidean distance error to evaluate the impact of editing on inter-rater variance and accuracy. Accuracy analyses relied on two simulated ground truth estimation methods: STAPLE and a novel implementation of probability maps. The experts were presented with automatic, their own, and their peers’ segmentations from our previous study to edit. We found that, independent of source, editing reduced inter-rater variance while maintaining or improving accuracy and improving efficiency, with at least a 60% reduction in contouring time. In areas where raters performed poorly contouring from scratch, editing of the automatic segmentations reduced the prevalence of total anatomical miss from approximately 16% to 8% of the total slices contained within the ground truth estimations. These findings suggest that contour editing could be useful for consensus building, such as in developing delineation standards, and that both automated methods and even perhaps less sophisticated atlases could improve efficiency and accuracy and reduce inter-rater variance. PMID:23685866
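    A minimal sketch of the Dice similarity coefficient used above to compare two segmentations, 2|A ∩ B| / (|A| + |B|); the two binary masks below are small synthetic arrays, not clinical contours.

    ```python
    # Dice overlap between two binary masks (illustration only).
    import numpy as np

    def dice(a, b):
        a, b = a.astype(bool), b.astype(bool)
        denom = a.sum() + b.sum()
        return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

    reference = np.zeros((10, 10), dtype=int)
    reference[2:8, 2:8] = 1                     # synthetic reference structure
    edited = np.zeros((10, 10), dtype=int)
    edited[3:8, 2:9] = 1                        # synthetic edited contour

    print(f"Dice = {dice(reference, edited):.3f}")
    ```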

  4. Breast-feeding and overweight in adolescence: within-family analysis [corrected].

    PubMed

    Gillman, Matthew W; Rifas-Shiman, Sheryl L; Berkey, Catherine S; Frazier, A Lindsay; Rockett, Helaine R H; Camargo, Carlos A; Field, Alison E; Colditz, Graham A

    2006-01-01

    Previous reports have found associations between having been breast-fed and a reduced risk of being overweight. These associations may be confounded by sociocultural determinants of both breast-feeding and obesity. We addressed this possibility by assessing the association of breast-feeding duration with adolescent obesity within sibling sets. We surveyed 5,614 siblings age 9 to 14 years and their mothers. These children were a subset of participants in the Growing Up Today Study, in which we had previously reported an inverse association of breast-feeding duration with overweight. We compared the prevalence of overweight (body mass index exceeding the age-sex-specific 85th percentile) in siblings who were breast-fed longer than the mean duration of their sibship with those who were breast-fed for a shorter period. Then we compared odds ratios from this within-family analysis with odds ratios from an overall (ie, not within-family) analysis. Mean +/- standard deviation breast-feeding duration was 6.4 +/- 4.0 months, and crude prevalence of overweight was 19%. On average, siblings who were breast-fed longer than their family mean had breast-feeding duration 3.7 months longer than their shorter-duration siblings. The adjusted odds ratio (OR) for overweight among siblings with longer breast-feeding duration, compared with shorter duration, was 0.92 (95% confidence interval = 0.76-1.11). In overall analyses, the adjusted OR was 0.94 (0.88-1.00) for each 3.7-month increment in breast-feeding duration. The estimated OR for the within-family analysis was close to the overall estimate, suggesting that the apparent protective effect of breast-feeding on later obesity was not highly confounded by unmeasured sociocultural factors. A larger study of siblings, however, would be needed to confirm this conclusion.

  5. Transport of diazinon in the San Joaquin River basin, California

    USGS Publications Warehouse

    Kratzer, Charles R.

    1997-01-01

    Most of the application of the organophosphate insecticide diazinon in the San Joaquin River Basin occurs in winter to control wood boring insects in dormant almond orchards. A federal-state collaborative study found that diazinon accounted for most of the observed toxicity of San Joaquin River water to water fleas in February 1993. Previous studies focussed mainly on west-side inputs to the San Joaquin River. In this 1994 study, the three major east-side tributaries to the San Joaquin River, the Merced, Tuolumne, and Stanislaus Rivers, and a downstream site on the San Joaquin River were sampled throughout the hydrographs of a late January and an early February storm. In both storms, the Tuolumne River had the highest concentrations of diazinon and transported the largest load of the three tributaries. The Stanislaus River was a small source in both storms. On the basis of previous storm sampling and estimated traveltimes, ephemeral west-side creeks were probably the main diazinon source early in the storms, while the Tuolumne and Merced Rivers and east-side drainage directly to the San Joaquin River were the main sources later. Although 74 percent of diazinon transport in the San Joaquin River during 199193 occurred in January and February, transport during each of the two 1994 storms was only 0.05 percent of the amount applied during preceeding dry periods. Nevertheless, some of the diazinon concentrations in the San Joaquin River during the January storm exceeded 0.35 micrograms per liter, a concentration shown to be acutely toxic to water fleas. Diazinon concentrations were highly variable during the storms and frequent sampling was required to adequately describe the concentration curves and to estimate loads.

  6. Transport of sediment-bound organochlorine pesticides to the San Joaquin River, California

    USGS Publications Warehouse

    Kratzer, Charles R.

    1998-01-01

    Most of the application of the organophosphate insecticide diazinon in the San Joaquin River Basin occurs in winter to control wood boring insects in dormant almond orchards. A federal-state collaborative study found that diazinon accounted for most of the observed toxicity of San Joaquin River water to water fleas in February 1993. Previous studies focused mainly on west-side inputs to the San Joaquin River. In this 1994 study, the three major east-side tributaries to the San Joaquin River, the Merced, Tuolumne, and Stanislaus Rivers, and a downstream site on the San Joaquin River were sampled throughout the hydrographs of a late January and an early February storm. In both storms, the Tuolumne River had the highest concentrations of diazinon and transported the largest load of the three tributaries. The Stanislaus River was a small source in both storms. On the basis of previous storm sampling and estimated traveltimes, ephemeral west-side creeks probably were the main diazinon source early in the storms, whereas the Tuolumne and Merced Rivers and east-side drainages directly to the San Joaquin River were the main sources later. Although 74 percent of diazinon transport in the San Joaquin River during 1991-1993 occurred in January and February, transport during each of the two 1994 storms was only 0.05 percent of the amount applied during preceding dry periods. Nevertheless, some of the diazinon concentrations in the San Joaquin River during the January storm exceeded 0.35 micrograms per liter, a concentration shown to be acutely toxic to water fleas. Diazinon concentrations were highly variable during the storms and frequent sampling was required to adequately describe the concentration curves and to estimate loads.

  7. Mitochondrial Mutation Rate, Spectrum and Heteroplasmy in Caenorhabditis elegans Spontaneous Mutation Accumulation Lines of Differing Population Size.

    PubMed

    Konrad, Anke; Thompson, Owen; Waterston, Robert H; Moerman, Donald G; Keightley, Peter D; Bergthorsson, Ulfar; Katju, Vaishali

    2017-06-01

    Mitochondrial genomes of metazoans, given their elevated rates of evolution, have served as pivotal markers for phylogeographic studies and recent phylogenetic events. In order to determine the dynamics of spontaneous mitochondrial mutations in small populations in the absence and presence of selection, we evolved mutation accumulation (MA) lines of Caenorhabditis elegans in parallel over 409 consecutive generations at three varying population sizes of N = 1, 10, and 100 hermaphrodites. The N = 1 populations should have a minimal influence of natural selection to provide the spontaneous mutation rate and the expected rate of neutral evolution, whereas larger population sizes should experience increasing intensity of selection. New mutations were identified by Illumina paired-end sequencing of 86 mtDNA genomes across 35 experimental lines and compared with published genomes of natural isolates. The spontaneous mitochondrial mutation rate was estimated at 1.05 × 10^-7/site/generation. A strong G/C→A/T mutational bias was observed in both the MA lines and the natural isolates. This suggests that the low G + C content at synonymous sites is the product of mutation bias rather than selection as previously proposed. The mitochondrial effective population size per worm generation was estimated to be 62. Although it was previously concluded that heteroplasmy was rare in C. elegans, the vast majority of mutations in this study were heteroplasmic despite an experimental regime exceeding 400 generations. The frequencies of frameshift and nonsynonymous mutations were negatively correlated with population size, which suggests their deleterious effects on fitness and a potent role for selection in their eradication. © The Author 2017. Published by Oxford University Press on behalf of the Society for Molecular Biology and Evolution. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
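
    As a rough illustration of the rate calculation behind an estimate such as 1.05 × 10^-7 per site per generation, the sketch below divides an observed mutation count by the product of lines, mitochondrial sites, and generations. The mutation count and genome length are assumptions for illustration; only the numbers of lines and generations come from the abstract.

    ```python
    # illustrative per-site, per-generation mutation-rate arithmetic
    observed_mutations = 20   # assumed total new mutations detected across all MA lines (placeholder)
    lines = 35                # experimental lines sequenced (from the abstract)
    mtdna_sites = 13_794      # approximate C. elegans mitochondrial genome length in bp (assumption)
    generations = 409         # consecutive MA generations (from the abstract)

    mu = observed_mutations / (lines * mtdna_sites * generations)
    print(f"mu = {mu:.2e} per site per generation")
    ```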

  8. Statistical analysis of CSP plants by simulating extensive meteorological series

    NASA Astrophysics Data System (ADS)

    Pavón, Manuel; Fernández, Carlos M.; Silva, Manuel; Moreno, Sara; Guisado, María V.; Bernardos, Ana

    2017-06-01

    The feasibility analysis of any power plant project needs the estimation of the amount of energy it will be able to deliver to the grid during its lifetime. To achieve this, its feasibility study requires a precise knowledge of the solar resource over a long term period. In Concentrating Solar Power projects (CSP), financing institutions typically require several statistical probability of exceedance scenarios of the expected electric energy output. Currently, the industry assumes a correlation between probabilities of exceedance of annual Direct Normal Irradiance (DNI) and energy yield. In this work, this assumption is tested by the simulation of the energy yield of CSP plants using as input a 34-year series of measured meteorological parameters and solar irradiance. The results of this work show that, even if some correspondence between the probabilities of exceedance of annual DNI values and energy yields is found, the intra-annual distribution of DNI may significantly affect this correlation. This result highlights the need for standardized procedures for the elaboration of DNI time series representative of a given probability of exceedance of annual DNI.
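
    A minimal sketch of the exceedance-probability convention used in such feasibility studies, assuming PXX denotes the annual value exceeded in XX percent of years (i.e., the (100 - XX)th percentile of the annual series). The 34-year series below is synthetic.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    annual_dni = rng.normal(loc=2100.0, scale=120.0, size=34)   # synthetic annual DNI, kWh/m2/yr

    def exceedance_value(series, p_exceed):
        """Value exceeded with probability p_exceed (e.g., 0.90 for P90)."""
        return float(np.quantile(series, 1.0 - p_exceed))

    print("P50:", round(exceedance_value(annual_dni, 0.50), 1))
    print("P90:", round(exceedance_value(annual_dni, 0.90), 1))
    ```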

  9. Estimating Last Glacial Maximum Ice Thickness Using Porosity and Depth Relationships: Examples from AND-1B and AND-2A Cores, McMurdo Sound, Antarctica

    NASA Astrophysics Data System (ADS)

    Hayden, T. G.; Kominz, M. A.; Magens, D.; Niessen, F.

    2009-12-01

    We have estimated ice thicknesses at the AND-1B core during the Last Glacial Maximum by adapting an existing technique to calculate overburden. As ice thickness at Last Glacial Maximum is unknown in existing ice sheet reconstructions, this analysis provides constraint on model predictions. We analyze the porosity as a function of depth and lithology from measurements taken on the AND-1B core, and compare these results to a global dataset of marine, normally compacted sediments compiled from various legs of ODP and IODP. Using this dataset we are able to estimate the amount of overburden required to compact the sediments to the porosity observed in AND-1B. This analysis is a function of lithology, depth and porosity, and generates estimates ranging from zero to 1,000 meters. These overburden estimates are based on individual lithologies, and are translated into ice thickness estimates by accounting for both sediment and ice densities. To do this we use a simple relationship of Xover * (ρsed/ρice) = Xice; where Xover is the overburden thickness, ρsed is sediment density (calculated from lithology and porosity), ρice is the density of glacial ice (taken as 0.85 g/cm3), and Xice is the equivalent ice thickness. The final estimates vary considerably; however, the “Best Estimate” behavior of the 2 lithologies most likely to compact consistently is remarkably similar. These lithologies are the clay and silt units (Facies 2a/2b) and the diatomite units (Facies 1a) of AND-1B. These lithologies both produce best estimates of approximately 1,000 meters of ice during Last Glacial Maximum. Additionally, while there is a large range of possible values, no combination of reasonable lithology, compaction, sediment density, or ice density values results in an estimate exceeding 1,900 meters of ice. This analysis only applies to ice thicknesses during Last Glacial Maximum, due to the overprinting effect of Last Glacial Maximum on previous ice advances. Analysis of the AND-2A core is underway, and results will be compared to those of AND-1B.
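
    The conversion stated in the abstract, Xice = Xover * (ρsed/ρice), can be applied directly; in the sketch below the overburden and bulk sediment density are illustrative assumptions, while ρice = 0.85 g/cm3 follows the abstract.

    ```python
    def ice_thickness_m(overburden_m, rho_sed_g_cm3, rho_ice_g_cm3=0.85):
        """Ice thickness producing the same overburden pressure as the inferred sediment load."""
        return overburden_m * (rho_sed_g_cm3 / rho_ice_g_cm3)

    # e.g., ~450 m of inferred overburden with an assumed bulk sediment density of 1.9 g/cm3
    print(f"{ice_thickness_m(450.0, 1.9):.0f} m of equivalent ice")
    ```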

  10. Comparison and continuous estimates of fecal coliform and Escherichia coli bacteria in selected Kansas streams, May 1999 through April 2002

    USGS Publications Warehouse

    Rasmussen, Patrick P.; Ziegler, Andrew C.

    2003-01-01

    The sanitary quality of water and its use as a public-water supply and for recreational activities, such as swimming, wading, boating, and fishing, can be evaluated on the basis of fecal coliform and Escherichia coli (E. coli) bacteria densities. This report describes the overall sanitary quality of surface water in selected Kansas streams, the relation between fecal coliform and E. coli, the relation between turbidity and bacteria densities, and how continuous bacteria estimates can be used to evaluate the water-quality conditions in selected Kansas streams. Samples for fecal coliform and E. coli were collected at 28 surface-water sites in Kansas. Of the 318 samples collected, 18 percent exceeded the current Kansas Department of Health and Environment (KDHE) secondary contact recreational, single-sample criterion for fecal coliform (2,000 colonies per 100 milliliters of water). Of the 219 samples collected during the recreation months (April 1 through October 31), 21 percent exceeded the current (2003) KDHE single-sample fecal coliform criterion for secondary contact recreation (2,000 colonies per 100 milliliters of water) and 36 percent exceeded the U.S. Environmental Protection Agency (USEPA) recommended single-sample primary contact recreational criterion for E. coli (576 colonies per 100 milliliters of water). Comparisons of fecal coliform and E. coli criteria indicated that more than one-half of the streams sampled could exceed USEPA recommended E. coli criteria more frequently than the current KDHE fecal coliform criteria. In addition, the ratios of E. coli to fecal coliform (EC/FC) were smallest for sites with slightly saline water (specific conductance greater than 1,000 microsiemens per centimeter at 25 degrees Celsius), indicating that E. coli may not be a good indicator of sanitary quality for those streams. Enterococci bacteria may provide a more accurate assessment of the potential for swimming-related illnesses in these streams. Ratios of EC/FC and linear regression models were developed for estimating E. coli densities on the basis of measured fecal coliform densities for six individual and six groups of surface-water sites. Regression models developed for the six individual surface-water sites and six groups of sites explain at least 89 percent of the variability in E. coli densities. The EC/FC ratios and regression models are site specific and make it possible to convert historic fecal coliform bacteria data to estimated E. coli densities for the selected sites. The EC/FC ratios can be used to estimate E. coli for any range of historical fecal coliform densities, and in some cases with less error than the regression models. The basin- and statewide regression models explained at least 93 percent of the variance and best represent the sites where a majority of the data used to develop the models were collected (Kansas and Little Arkansas Basins). Comparison of the current (2003) KDHE geometric-mean primary contact criterion for fecal coliform bacteria of 200 col/100 mL to the 2002 USEPA recommended geometric-mean criterion of 126 col/100 mL for E. coli results in an EC/FC ratio of 0.63. The geometric-mean EC/FC ratio for all sites except Rattlesnake Creek (site 21) is 0.77, indicating that considerably more than 63 percent of the fecal coliform is E. coli. This potentially could lead to more exceedances of the recommended E. coli criterion, where the water now meets the current (2003) 200-col/100 mL fecal coliform criterion.
In this report, turbidity was found to be a reliable estimator of bacteria densities. Regression models are provided for estimating fecal coliform and E. coli bacteria densities using continuous turbidity measurements. Prediction intervals also are provided to show the uncertainty associated with using the regression models. Eighty percent of all measured sample densities and individual turbidity-based estimates from the regression models were in agreement as exceedi
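
    The criterion comparison above reduces to simple ratio arithmetic; the sketch below reproduces the 0.63 criterion ratio and shows how a site-specific EC/FC ratio can convert a historical fecal coliform density to an estimated E. coli density. The historical density is a hypothetical example.

    ```python
    ec_criterion = 126.0   # col/100 mL, USEPA recommended geometric-mean E. coli criterion
    fc_criterion = 200.0   # col/100 mL, KDHE geometric-mean fecal coliform criterion
    print(f"criterion EC/FC ratio: {ec_criterion / fc_criterion:.2f}")   # -> 0.63

    site_ec_fc_ratio = 0.77   # geometric-mean EC/FC ratio reported for most sites
    historic_fc = 1500.0      # hypothetical historical fecal coliform density, col/100 mL
    print(f"estimated E. coli density: {site_ec_fc_ratio * historic_fc:.0f} col/100 mL")
    ```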

  11. Estimating Willingness-to-Pay for health insurance among rural poor in India by reference to Engel's law.

    PubMed

    Binnendijk, Erika; Dror, David M; Gerelle, Eric; Koren, Ruth

    2013-01-01

    Community-Based Health Insurance (CBHI) (a.k.a. micro health insurance) is a contributory health insurance among rural poor in developing countries. As CBHI schemes typically function with no subsidy income, the schemes' expenditures cannot exceed their premium income. A good estimate of Willingness-To-Pay (WTP) among the target population affiliating on a voluntary basis is therefore essential for package design. Previous estimates of WTP reported materially and significantly different WTP levels across locations (even within one state), making it necessary to base estimates on household surveys. This is time-consuming and expensive. This study seeks to identify a coherent anchor for local estimation of WTP without having to rely on household surveys in each CBHI implementation. Using data collected in 2008-2010 among rural poor households in six locations in India (total 7874 households), we found that in all locations WTP expressed as percentage of income decreases with household income. This is reminiscent of Engel's law on food expenditures. We checked several possible anchors: overall income, discretionary income and food expenditures. We compared WTP expressed as percentage of these anchors, by calculating the Coefficient of Variation (for inter-community variation) and Concentration indices (for intra-community variation). The coefficient of variation was 0.36, 0.43 and 0.50 for WTP as percent of food expenditures, overall income and discretionary income, respectively. In all locations the concentration index for WTP as percentage of food expenditures was the lowest. Thus, food expenditures had the most consistent relationship with WTP within each location and across the six locations. These findings indicate that like food, health insurance is considered a necessity good even by people with very low income and no prior experience with health insurance. We conclude that the level of WTP could be estimated based on each community's food expenditures, and that this information can be obtained everywhere without having to conduct household surveys. Copyright © 2012 Elsevier Ltd. All rights reserved.

  12. Stress Drop Estimates from Induced Seismic Events in the Fort Worth Basin, Texas

    NASA Astrophysics Data System (ADS)

    Jeong, S. J.; Stump, B. W.; DeShon, H. R.

    2017-12-01

    Since the beginning of Barnett shale oil and gas production in the Fort Worth Basin, there have been earthquake sequences, including multiple magnitude 3.0+ events near the DFW International Airport, Azle, Irving-Dallas, and throughout Johnson County (Cleburne and Venus). These shallow earthquakes (2 to 8 km depth) have not exceeded magnitude 4.0 but have been widely felt; the close proximity of these earthquakes to a large population center motivates an assessment of the kinematics of the events in order to provide more accurate ground motion predictions. Previous studies have estimated average stress drops for the DFW airport and Cleburne earthquakes at 10 and 43 bars, respectively. Here, we calculate stress drops for Azle, Irving-Dallas and Venus earthquakes using seismic data from local (≤25 km) and regional (>25 km) seismic networks. Events with magnitudes above 2.5 are chosen to ensure adequate signal-to-noise. Stress drops are estimated by fitting the Brune earthquake model to the observed source spectrum with correction for propagation path effects and a local site effect using a high-frequency decay parameter, κ, estimated from the acceleration spectrum. We find that regional average stress drops are similar to those estimated using local data, supporting the appropriateness of the propagation path and site corrections. The average stress drop estimate is 72 bars, with individual estimates ranging from 7 to 240 bars. The results are consistent with global averages of 10 to 100 bars for intra-plate earthquakes and compatible with stress drops of DFW airport and Cleburne earthquakes. The stress drops show a slight breakdown in self-similarity with increasing moment magnitude. The breakdown of similarity for these events requires further study because of the limited magnitude range of the data. These results suggest that strong motions and seismic hazard from an injection-induced earthquake can be expected to be similar to those for tectonic events, taking into account the shallow depth of induced earthquakes.
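
    A minimal sketch of a Brune-model stress-drop calculation of the kind described above, assuming the standard relations r = 0.37*beta/fc and stress drop = (7/16)*M0/r^3; the moment, corner frequency, and shear-wave velocity are illustrative, not values from this study.

    ```python
    def brune_stress_drop_bars(m0_n_m, fc_hz, beta_m_s=3500.0):
        """Stress drop (bars) from seismic moment (N*m) and corner frequency (Hz), Brune source model."""
        radius_m = 0.37 * beta_m_s / fc_hz                       # source radius, m
        stress_drop_pa = (7.0 / 16.0) * m0_n_m / radius_m ** 3
        return stress_drop_pa / 1e5                              # Pa -> bars

    # e.g., an Mw ~3.0 event (M0 ~ 4e13 N*m, assumed) with an assumed 8 Hz corner frequency
    print(f"{brune_stress_drop_bars(4e13, 8.0):.0f} bars")
    ```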

  13. An estimate by two methods of thyroid absorbed doses due to BRAVO fallout in several Northern Marshall Islands.

    PubMed

    Musolino, S V; Greenhouse, N A; Hull, A P

    1997-10-01

    Estimates of the thyroid absorbed doses due to fallout originating from the 1 March 1954 BRAVO thermonuclear test on Bikini Atoll have been made for several inhabited locations in the Northern Marshall Islands. Rongelap, Utirik, Rongerik and Ailinginae Atolls were also inhabited on 1 March 1954, where retrospective thyroid absorbed doses have previously been reconstructed. The current estimates are based primarily on external exposure data, which were recorded shortly after each nuclear test in the Castle Series, and secondarily on soil concentrations of 137Cs in samples collected in 1978 and 1988, along with aerial monitoring done in 1978. The external exposures and 137Cs soil concentrations were representative of the atmospheric transport and deposition patterns of the entire Castle Series tests and show that the BRAVO test was the major contributor to fallout exposure during the Castle series and other test series which were carried out in the Marshall Islands. These data have been used as surrogates for fission product radioiodines and telluriums in order to estimate the range of thyroid absorbed doses that may have occurred throughout the Marshall Islands. Dosimetry based on these two sets of estimates agreed within a factor of 4 at the locations where BRAVO was the dominant contributor to the total exposure and deposition. Both methods indicate that thyroid absorbed doses in the range of 1 Gy (100 rad) may have been incurred in some of the northern locations, whereas the doses at southern locations did not significantly exceed levels comparable to those from worldwide fallout. The results of these estimates indicate that a systematic medical survey for thyroid disease should be conducted, and that a more definitive dose reconstruction should be made for all the populated atolls and islands in the Northern Marshall Islands beyond Rongelap, Utirik, Rongerik and Ailinginae, which were significantly contaminated by BRAVO fallout.

  14. Flood-frequency analyses from paleoflood investigations for Spring, Rapid, Boxelder, and Elk Creeks, Black Hills, western South Dakota

    USGS Publications Warehouse

    Harden, Tessa M.; O'Connor, Jim E.; Driscoll, Daniel G.; Stamm, John F.

    2011-01-01

    Flood-frequency analyses for the Black Hills area are important because of severe flooding of June 9-10, 1972, that was caused by a large mesoscale convective system and caused at least 238 deaths. Many 1972 peak flows are high outliers (by factors of 10 or more) in observed records that date to the early 1900s. An efficient means of reducing uncertainties for flood recurrence is to augment gaged records by using paleohydrologic techniques to determine ages and magnitudes of prior large floods (paleofloods). This report summarizes results of paleoflood investigations for Spring Creek, Rapid Creek (two reaches), Boxelder Creek (two subreaches), and Elk Creek. Stratigraphic records and resulting long-term flood chronologies, locally extending more than 2,000 years, were combined with observed and adjusted peak-flow values (gaged records) and historical flood information to derive flood-frequency estimates for the six study reaches. Results indicate that (1) floods as large as and even substantially larger than 1972 have affected most of the study reaches, and (2) incorporation of the paleohydrologic information substantially reduced uncertainties in estimating flood recurrence. Canyons within outcrops of Paleozoic rocks along the eastern flanks of the Black Hills provided excellent environments for (1) deposition and preservation of stratigraphic sequences of late-Holocene flood deposits, primarily in protected slack-water settings flanking the streams; and (2) hydraulic analyses for determination of associated flow magnitudes. The bedrock canyons ensure long-term stability of channel and valley geometry, thereby increasing confidence in hydraulic computations of ancient floods from modern channel geometry. Stratigraphic records of flood sequences, in combination with deposit dating by radiocarbon, optically stimulated luminescence, and cesium-137, provided paleoflood chronologies for 29 individual study sites. Flow magnitudes were estimated from elevations of flood deposits in conjunction with hydraulic calculations based on modern channel and valley geometry. Reach-scale paleoflood chronologies were interpreted for each study reach, which generally entailed correlation of flood evidence among multiple sites, chiefly based on relative position within stratigraphic sequences, unique textural characteristics, or results of age dating and flow estimation. The FLDFRQ3 and PeakfqSA analytical models (assuming log-Pearson Type III frequency distributions) were used for flood-frequency analyses for as many as four scenarios: (1) analysis of gaged records only; (2) gaged records with historical information; (3) all available data including gaged records, historical flows, paleofloods, and perception thresholds; and (4) the same as the third scenario, but “top fitting” the distribution using only the largest 50 percent of gaged peak flows. The PeakfqSA model is most consistent with procedures adopted by most Federal agencies for flood-frequency analysis and thus was (1) used for comparisons among results for study reaches, and (2) considered by the authors as most appropriate for general applications of estimating low-probability flood recurrence. The detailed paleoflood investigations indicated that in the last 2,000 years all study reaches have had multiple large floods substantially larger than in gaged records.
For Spring Creek, stratigraphic records preserved a chronology of at least five paleofloods in approximately (~) 1,000 years approaching or exceeding the 1972 flow of 21,800 cubic feet per second (ft3/s). The largest was ~700 years ago with a flow range of 29,300-58,600 ft3/s, which reflects the uncertainty regarding flood-magnitude estimates that was incorporated in the flood-frequency analyses. In the lower reach of Rapid Creek (downstream from Pactola Dam), two paleofloods in ~1,000 years exceeded the 1972 flow of 31,200 ft3/s. Those occurred ~440 and 1,000 years ago, with flows of 128,000-256,000 and 64,000-128,000 ft3/s, respectively. Five smaller paleofloods of 9,500-19,000 ft3/s occurred between ~200 and 400 years ago. In the upper reach of Rapid Creek (above Pactola Reservoir), the largest recorded floods are substantially smaller than for lower Rapid Creek and all other study reaches. Paleofloods of ~12,900 and 12,000 ft3/s occurred ~1,000 and 1,500 years ago. One additional paleoflood (~800 years ago) was similar in magnitude to the largest gaged flow of 2,460 ft3/s. Boxelder Creek was treated as having two subreaches because of two tributaries that affect peak flows. During the last ~1,000 years, paleofloods of ~39,000-78,000 ft3/s and 40,000-80,000 ft3/s in the upstream subreach have exceeded the 1972 peak flow of 30,800 ft3/s. One other paleoflood was similar to the second largest gaged flow (16,400 ft3/s in 1907). For the downstream subreach, paleofloods of 61,300-123,000 ft3/s and 52,500-105,000 ft3/s in the last ~1,000 years have substantially exceeded the 1972 flood (50,500 ft3/s). Four additional paleofloods had flows between 14,200 and 33,800 ft3/s. The 1972 flow on Elk Creek (10,400 ft3/s) has been substantially exceeded at least five times in the last 1,900 years. The largest paleoflood (41,500-124,000 ft3/s) was ~900 years ago. Three other paleofloods between 37,500 and 120,000 ft3/s occurred between 1,100 and 1,800 years ago. A fifth paleoflood of 25,500-76,500 ft3/s was ~750 years ago. Considering analyses for all available data (PeakfqSA model) for all six study reaches, the 95-percent confidence intervals about the low-probability quantile estimates (100-, 200-, and 500-year recurrence intervals) were reduced by at least 78 percent relative to those for the gaged records only. In some cases, 95-percent uncertainty intervals were reduced by 99 percent or more. For all study reaches except the two Boxelder Creek subreaches, quantile estimates for these long-term analyses were larger than for the short-term analyses. The 1972 flow for the Spring Creek study reach (21,800 ft3/s) corresponds with a recurrence interval of ~400 years. Recurrence intervals are ~500 years for the 1972 flood magnitudes along the lower Rapid Creek reach and the upstream subreach of Boxelder Creek. For the downstream subreach of Boxelder Creek, the large 1972 flood magnitude (50,500 ft3/s) exceeds the 500-year quantile estimate by about 35 percent. The recurrence interval of ~100 years for 1972 flooding along the Elk Creek study reach is small relative to other study reaches along the eastern margin of the Black Hills. All of the paleofloods plot within the bounds of a national envelope curve, indicating that the national curve represents exceedingly rare floods for the Black Hills area. Elk Creek, lower Rapid Creek, and the downstream subreach of Boxelder Creek all have paleofloods that plot above a regional envelope curve; in the case of Elk Creek, by a factor of nearly two.
The Black Hills paleofloods represent some of the largest known floods, relative to drainage area, for the United States. Many of the other largest known United States floods are in areas with physiographic and climatologic conditions broadly similar to those of the Black Hills: semiarid and rugged landscapes that intercept and focus heavy precipitation from convective storm systems. The 1972 precipitation and runoff patterns, previous analyses of peak-flow records, and the paleoflood investigations of this study support a hypothesis of distinct differences in flood generation within the central Black Hills study area. The eastern Black Hills are susceptible to intense orographic lifting associated with convective storm systems and also have high relief, thin soils, and narrow and steep canyons, factors favoring generation of exceptionally heavy rain-producing thunderstorms and promoting runoff and rapid concentration of flow into stream channels. In contrast, storm potential is smaller in and near the Limestone Plateau area, and storm runoff is further reduced by substantial infiltration into the limestone, gentle topography, and extensive floodplain storage. Results of the paleoflood investigations are directly applicable only to the specific study reaches and, in the case of Rapid Creek, only to pre-regulation conditions. Thus, approaches for broader applications were developed from inferences of overall flood-generation processes, and appropriate domains for application of results were described. Example applications were provided by estimating flood quantiles for selected streamgages, which also allowed direct comparison with results of at-site flood-frequency analyses from a previous study. Several broad issues and uncertainties were examined, including potential biases associated with stratigraphic records that inherently are not always complete, uncertainties regarding statistical approaches, and the unknown applicability of paleoflood records to future watershed conditions. The results of the paleoflood investigations, however, provide much better physically based information on low-probability floods than has been available previously, substantially improving estimates of the magnitude and frequency of large floods in these basins and reducing associated uncertainty.

  15. Mount St. Helens Long-Term Sediment Management Plan for Flood Risk Reduction

    DTIC Science & Technology

    2010-06-01

    one dredge would direct pump to the Wasser Winters disposal site, located along the southern bank of the Cowlitz River mouth. The average annual...dredge would pipeline pump either upstream to disposal site 20cde or downstream to the Wasser Winters site. Pumping distances would not exceed 6.0...estimates referenced the Wasser Winters upland preparation estimates and were based on the relationship between acreage and effort. Total site

  16. A Human Mixture Risk Assessment for Neurodevelopmental Toxicity Associated with Polybrominated Diphenyl Ethers Used as Flame Retardants

    PubMed Central

    Martin, Olwenn V.; Evans, Richard M.; Faust, Michael

    2017-01-01

    Background: The European Food Safety Authority recently concluded that the exposure of small children (1–3 y old) to brominated diphenyl ether (BDE)-99 may exceed acceptable levels defined in relation to neurodevelopmental toxicity in rodents. The flame retardant BDE-209 may release BDE-99 and other lower brominated BDEs through biotic and abiotic degradation, and all age groups are exposed not only to BDE-209 and -99 but also to a cocktail of BDE congeners with evidence of neurodevelopmental toxicity. The possible risks from combined exposures to these substances have not been evaluated. Objectives: We performed a congener-specific mixture risk assessment (MRA) of human exposure to combinations of BDE-209 and other BDEs based on estimated exposures via diet and dust intake and on measured levels in biologic samples. Methods: We employed the Hazard Index (HI) method by using BDE congener-specific reference doses for neurodevelopmental toxicity. Results: Our HI analysis suggests that combined exposures to polybrominated diphenyl ethers (PBDEs) may exceed acceptable levels in breastfeeding infants (0–3 mo old) and in small children (1–3 y old), even for moderate (vs. high) exposure scenarios. Our estimates also suggest that acceptable levels of combined PBDEs may be exceeded in adults whose diets are high in fish. Small children had the highest combined exposures, with some estimated body burdens that were similar to body burdens associated with developmental neurotoxicity in rodents. Conclusions: Our estimates corroborate reports from several recent epidemiological studies of associations between PBDE exposures and neurobehavioral outcomes, and they support the inclusion of BDE-209 in the persistent organic pollutant (POP) convention as well as the need for strategies to reduce exposures to PBDE mixtures, including maximum residue limits for PBDEs in food and measures for limiting the release of PBDEs from consumer waste. https://doi.org/10.1289/EHP826 PMID:28886598
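
    A minimal sketch of the Hazard Index (HI) arithmetic named in the Methods: each congener's estimated intake is divided by its reference dose and the quotients are summed, with HI > 1 flagging that combined exposure may exceed acceptable levels. The congener intakes and reference doses below are placeholders, not the study's exposure estimates.

    ```python
    # placeholder intakes and reference doses, ng/kg body weight per day
    intakes = {"BDE-47": 4.0, "BDE-99": 2.5, "BDE-153": 0.8, "BDE-209": 10.0}
    reference_doses = {"BDE-47": 10.0, "BDE-99": 4.0, "BDE-153": 7.0, "BDE-209": 100.0}

    hazard_index = sum(intakes[c] / reference_doses[c] for c in intakes)
    print(f"HI = {hazard_index:.2f} ({'exceeds' if hazard_index > 1 else 'below'} 1)")
    ```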

  17. 7 CFR 1951.25 - Review of limited resource FO, OL, and SW loans.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... successor agency under Public Law 103-354, in light of the previous year's projected figures and actual... the coming year must show that the “balance available to pay debts” exceeds the amount needed to pay...

  18. 12 CFR 325.5 - Miscellaneous.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... assets arising from deductible temporary differences that exceed the amount of taxes previously paid that could be recovered through loss carrybacks if existing temporary differences (both deductible and.... (ii) For purposes of this limitation, all existing temporary differences should be assumed to fully...

  19. 12 CFR 325.5 - Miscellaneous.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... assets arising from deductible temporary differences that exceed the amount of taxes previously paid that could be recovered through loss carrybacks if existing temporary differences (both deductible and.... (ii) For purposes of this limitation, all existing temporary differences should be assumed to fully...

  20. 240 E Ontario St, September 2011, Lindsay Light Radiological Survey

    EPA Pesticide Factsheets

    Unshielded field gamma measurements within the excavation and the spoil materials generated during the excavation process did not exceed the respective threshold values previously stated and ranged from a minimum of 7,500 cpm to a maximum of 8,100 cpm.

  1. 450 N. Cityfront Plaza, October 2016, Lindsay Light Radiological Survey

    EPA Pesticide Factsheets

    The field gamma measurements within the excavations and of the spoil during the excavation process did not exceed the instrument threshold previously stated and ranged from a minimum of 4,400 cpm to a maximum of 6,000 cpm unshielded.

  2. 465 E. Illinois St., February 2015, Lindsay Light Radiological Survey

    EPA Pesticide Factsheets

    Field gamma measurements within the excavation and the spoil materials generated during the excavation process did not exceed the field instrument threshold previously stated and ranged from a minimum of 10,000 cpm to a maximum of 15,500 cpm unshielded.

  3. 465 N. Park Ave, October 2016, Lindsay Light Radiological Survey

    EPA Pesticide Factsheets

    The field gamma measurements within the excavations and of the spoil during the excavation process did not exceed the instrument threshold previously stated and ranged from a minimum of 4,100 cpm to a maximum of 16,600 cpm unshielded.

  4. 222 North Columbus Dr., August 2010, Lindsay Light Radiological Survey

    EPA Pesticide Factsheets

    The field gamma measurements within the excavation and the spoil materials generated during the excavation process did not exceed the respective threshold values previously stated and ranged from a minimum of 2,200 cpm to a maximum of 3,500 cpm.

  5. 425 E Randolph, August 2010, Lindsay Light Radiological Survey

    EPA Pesticide Factsheets

    The field gamma measurements within the excavation and the spoil materials generated during the excavation process did not exceed the respective threshold values previously stated and ranged from a minimum of 1,800 cpm to a maximum of 4,900 cpm.

  6. 248 E. South Water, August 2010, Lindsay Light Radiological Survey

    EPA Pesticide Factsheets

    The field gamma measurements within the excavation and the spoil materials generated during the excavation process did not exceed the respective threshold values previously stated and ranged from a minimum of 1,600 cpm to a maximum of 2,900 cpm.

  7. 200 N Columbus Dr, August 2010, Lindsay Light Radiological Survey

    EPA Pesticide Factsheets

    The field gamma measurements within the excavation and the spoil materials generated during the excavation process did not exceed the respective threshold values previously stated and ranged from a minimum of 2,500 cpm to a maximum of 3,300 cpm.

  8. 301 E South Water, August 2010, Lindsay Light Radiological Survey

    EPA Pesticide Factsheets

    The field gamma measurements within the excavation and the spoil materials generated during the excavation process did not exceed the respective threshold values previously stated and ranged from a minimum of 2,200 cpm to a maximum of 4,500 cpm.

  9. 151 N. Field Blvd., August 2010, Lindsay Light Radiological Survey

    EPA Pesticide Factsheets

    The field gamma measurements within the excavation and the spoil materials generated during the excavation process did not exceed the respective threshold values previously stated and ranged from a minimum of 2,400 cpm to a maximum of 3,000 cpm.

  10. 426 East Grand Ave, October 2010, Lindsay Light Radiological Survey

    EPA Pesticide Factsheets

    The field gamma measurements within the excavation and the spoil materials generated during the excavation process did not exceed the respective threshold values previously stated and ranged from a minimum of 740 cpm to a maximum of 5,930 cpm.

  11. 400 E. Monroe, January 2011, Lindsay Light Radiological Survey

    EPA Pesticide Factsheets

    The field gamma measurements within the excavation and the spoil materials generated during the excavation process did not exceed the respective threshold values previously stated and ranged from a minimum of 4,867 cpm to a maximum of 7,351 cpm.

  12. 165 E Ontario, October 2010, Lindsay Light Radiological Survey

    EPA Pesticide Factsheets

    The field gamma measurements within the excavation and the spoil materials generated during the excavation process did not exceed the respective threshold values previously stated and ranged from a minimum of 2,150 cpm to a maximum of 6,270 cpm.

  13. 641 N Michigan, September 2011, Lindsay Light Radiological Survey

    EPA Pesticide Factsheets

    The unshielded field gamma measurements within the spoil materials generated during the drilling process did not exceed the respective threshold values previously stated and ranged from background levels of 4,000 cpm to a maximum of about 8,000 cpm.

  14. 633 N. Michigan Ave, May 2016, Lindsay Light Radiological Survey

    EPA Pesticide Factsheets

    The field gamma measurements within the excavations and of the spoil during the excavation process did not exceed the instrument threshold previously stated and ranged from a minimum of 4,400 cpm to a maximum of 5,900 cpm unshielded.

  15. 157 - 165 E. Ohio, May 2016, Lindsay Light Radiological Survey

    EPA Pesticide Factsheets

    The field gamma measurements within the excavations and of the spoil during the excavation process did not exceed the instrument threshold previously stated and ranged from a minimum of 5,000 cpm to a maximum of 5,900 cpm unshielded.

  16. 400 E Randolph, August 2010, Lindsay Light Radiological Survey

    EPA Pesticide Factsheets

    The field gamma measurements within the excavation and the spoil materials generated during the excavation process did not exceed the respective threshold values previously stated and ranged from a minimum of 1,600 cpm to a maximum of 4,200 cpm.

  17. 400 E North Water, August 2010, Lindsay Light Radiological Survey

    EPA Pesticide Factsheets

    The field gamma measurements within the excavation and the spoil materials generated during the excavation process did not exceed the respective threshold values previously stated and ranged from a minimum of 2,400 cpm to a maximum of 4,500 cpm.

  18. 512 N McClurg, August 2010, Lindsay Light Radiological Survey

    EPA Pesticide Factsheets

    The field gamma measurements within the excavation and the spoil materials generated during the excavation process did not exceed the respective threshold values previously stated and ranged from a minimum of 6,500 cpm to a maximum of 9,500 cpm.

  19. Cost savings of reduced constipation rates attributed to increased dietary fiber intakes: a decision-analytic model

    PubMed Central

    2014-01-01

    Background Nearly five percent of Americans suffer from functional constipation, many of whom may benefit from increasing dietary fiber consumption. The annual constipation-related healthcare cost savings associated with increasing intakes may be considerable but have not been examined previously. The objective of the present study was to estimate the economic impact of increased dietary fiber consumption on direct medical costs associated with constipation. Methods Literature searches were conducted to identify nationally representative input parameters for the U.S. population, which included prevalence of functional constipation; current dietary fiber intakes; proportion of the population meeting recommended intakes; and the percentage that would be expected to respond, in terms of alleviation of constipation, to a change in dietary fiber consumption. A dose–response analysis of published data was conducted to estimate the percent reduction in constipation prevalence per 1 g/day increase in dietary fiber intake. Annual direct medical costs for constipation were derived from the literature and updated to 2012 U.S. dollars. Sensitivity analyses explored the impact on adult vs. pediatric populations and the robustness of the model to each input parameter. Results The base-case direct medical cost savings were $12.7 billion annually among adults. The base case assumed that 3% of men and 6% of women currently met recommended dietary fiber intakes; each 1 g/day increase in dietary fiber intake would lead to a reduction of 1.9% in constipation prevalence; and all adults would increase their dietary fiber intake to recommended levels (mean increase of 9 g/day). Sensitivity analyses, which explored numerous alternatives, found that even if only 50% of the adult population increased dietary fiber intake by 3 g/day, annual medical cost savings exceeded $2 billion. All plausible scenarios resulted in cost savings of at least $1 billion. Conclusions Increasing dietary fiber consumption is associated with considerable cost savings, potentially exceeding $12 billion, which is a conservative estimate given the exclusion of lost productivity costs in the model. The finding that $12.7 billion in direct medical costs of constipation could be averted through simple, realistic changes in dietary practices is promising and highlights the need for strategies to increase dietary fiber intakes. PMID:24739472
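
    A minimal sketch of the cost-savings arithmetic, assuming a linear dose-response of 1.9% relative reduction in constipation prevalence per 1 g/day of additional fiber. The total annual cost and adoption share below are assumed placeholders rather than the model's published inputs.

    ```python
    annual_constipation_cost_usd = 74e9   # assumed total annual direct medical cost (placeholder)
    reduction_per_gram = 0.019            # relative prevalence reduction per 1 g/day fiber increase
    fiber_increase_g_per_day = 9.0        # base-case mean increase to reach recommended intakes
    adopting_share = 1.0                  # base case: all adults reach recommended intakes

    relative_reduction = min(1.0, reduction_per_gram * fiber_increase_g_per_day)
    savings = annual_constipation_cost_usd * relative_reduction * adopting_share
    print(f"estimated annual savings: ${savings / 1e9:.1f} billion")
    ```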

  20. Impact of Outpatient Rehabilitation Medicare Reimbursement Caps on Utilization and Cost of Rehabilitation Care After Ischemic Stroke: Do Caps Contain Costs?

    PubMed

    Simpson, Annie N; Bonilha, Heather S; Kazley, Abby S; Zoller, James S; Simpson, Kit N; Ellis, Charles

    2015-11-01

    To estimate the proportion of patients with ischemic stroke who fall within and above the total outpatient rehabilitation caps before and after the Balanced Budget Act of 1997 took effect; and to estimate the cost of poststroke outpatient rehabilitation cost and resource utilization in these patients before and after the implementation of the caps. Retrospective cohort study. Medicare reimbursement system. Medicare beneficiaries from the state of South Carolina: the 1997 stroke cohort sample (N=2667) and the 2004 stroke cohort sample (N=2679). Not applicable. Proportion of beneficiaries with bills within and above the cap before and after the cap was enacted, and total estimated 1-year rehabilitation Medicare payments before and after the cap. The proportion of patients with stroke exceeding the cap in 2004 after the Balanced Budget Act of 1997 was enacted was significantly lower (5.8%) than those in 1997 (9.5%) had there been a cap at that time (P=.004). However, when the proportion of individuals exceeding the cap among both the outpatient provider and facility files was examined, there was a greater proportion of patients with stroke in 2004 (64.6%) than in 1997 (31.9%) who exceeded the cap (P<.0001). The estimated average 1-year Medicare payments for rehabilitation services, when examining only the Part B outpatient provider bills, did not differ between the cohorts (P=.12), and in fact, decreased slightly from $1052 in 1997 to $833 in 2004. However, when examining rehabilitation costs using all available outpatient Medicare bills, the average estimated payments greatly increased (P<.0001) from $5691 in 1997 to $9606 in 2004. These findings suggest that billing practices may have changed after outpatient rehabilitation services caps were enacted by the Balanced Budget Act of 1997. Rehabilitation services billing may have shifted from Part B provider bills to being more frequently included in facility charges. Copyright © 2015 American Congress of Rehabilitation Medicine. Published by Elsevier Inc. All rights reserved.

  1. Estimation of Potential Population Level Effects of Contaminants on Wildlife

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Loar, J.M.

    2001-06-11

    The objective of this project is to provide DOE with improved methods to assess risks from contaminants to wildlife populations. The current approach for wildlife risk assessment consists of comparison of contaminant exposure estimates for individual animals to literature-derived toxicity test endpoints. These test endpoints are assumed to estimate thresholds for population-level effects. Moreover, species sensitivities to contaminants are one of several criteria to be considered when selecting assessment endpoints (EPA 1997 and 1998), yet data on the sensitivities of many birds and mammals are lacking. The uncertainties associated with this approach are considerable. First, because toxicity data are not available for most potential wildlife endpoint species, extrapolation of toxicity data from test species to the species of interest is required. There is no consensus on the most appropriate extrapolation method. Second, toxicity data are represented as statistical measures (e.g., NOAELs or LOAELs) that provide no information on the nature or magnitude of effects. The level of effect is an artifact of the replication and dosing regime employed, and does not indicate how effects might increase with increasing exposure. Consequently, slight exceedance of a LOAEL is not distinguished from greatly exceeding it. Third, the relationship of toxic effects on individuals to effects on populations is poorly estimated by existing methods. It is assumed that if the exposure of individuals exceeds levels associated with impaired reproduction, then population-level effects are likely. Uncertainty associated with this assumption is large because, depending on the reproductive strategy of a given species, comparable levels of reproductive impairment may result in dramatically different population-level responses. This project included several tasks to address these problems: (1) investigation of the validity of the current allometric scaling approach for interspecies extrapolation and development of new scaling models; (2) development of dose-response models for toxicity data presented in the literature; and (3) development of matrix-based population models that were coupled with dose-response models to provide realistic estimation of population-level effects for individual responses.

  2. Mark-recapture and mark-resight methods for estimating abundance with remote cameras: a carnivore case study

    USGS Publications Warehouse

    Alanso, Robert S.; McClintock, Brett T.; Lyren, Lisa M.; Boydston, Erin E.; Crooks, Kevin R.

    2015-01-01

    Abundance estimation of carnivore populations is difficult and has prompted the use of non-invasive detection methods, such as remotely-triggered cameras, to collect data. To analyze photo data, studies focusing on carnivores with unique pelage patterns have utilized a mark-recapture framework, and studies of carnivores without unique pelage patterns have used a mark-resight framework. We compared mark-resight and mark-recapture estimation methods to estimate bobcat (Lynx rufus) population sizes, which motivated the development of a new “hybrid” mark-resight model as an alternative to traditional methods. We deployed a sampling grid of 30 cameras throughout the urban southern California study area. Additionally, we physically captured and marked a subset of the bobcat population with GPS telemetry collars. Since we could identify individual bobcats with photos of unique pelage patterns and a subset of the population was physically marked, we were able to use traditional mark-recapture and mark-resight methods, as well as the new “hybrid” mark-resight model we developed to estimate bobcat abundance. We recorded 109 bobcat photos during 4,669 camera nights and physically marked 27 bobcats with GPS telemetry collars. Abundance estimates produced by the traditional mark-recapture, traditional mark-resight, and “hybrid” mark-resight methods were similar; however, precision differed depending on the models used. Traditional mark-recapture and mark-resight estimates were relatively imprecise, with percent confidence interval lengths exceeding 100% of point estimates. Hybrid mark-resight models produced better precision, with percent confidence intervals not exceeding 57%. The increased precision of the hybrid mark-resight method stems from utilizing the complete encounter histories of physically marked individuals (including those never detected by a camera trap) and the encounter histories of naturally marked individuals detected at camera traps. This new estimator may be particularly useful for estimating abundance of uniquely identifiable species that are difficult to sample using camera traps alone.
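
    For orientation only, the sketch below shows the classic Chapman-corrected Lincoln-Petersen estimator, a simple closed-population mark-recapture abundance estimate; it is not the hybrid mark-resight model developed in the study, and the photo and overlap counts are hypothetical.

    ```python
    def chapman_estimate(n_marked, n_caught_second, n_recaptured):
        """Chapman bias-corrected Lincoln-Petersen abundance estimator."""
        return (n_marked + 1) * (n_caught_second + 1) / (n_recaptured + 1) - 1

    # e.g., 27 collared animals, 30 photo-identified individuals, 12 of them collared (hypothetical)
    print(round(chapman_estimate(27, 30, 12)))
    ```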

  3. Exceedance of PM10 and ozone concentration limits in Germany - Spatial variability and influence of climate

    NASA Astrophysics Data System (ADS)

    Heidenreich, Majana; Bernhofer, Christian

    2014-05-01

    High concentrations of particulate matter (PM) and ground-level ozone (O3) have negative impacts on human health (e.g., an increased risk of respiratory disease) and on the environment. European Union (EU) air policy and air quality standards have steadily reduced air pollution problems in recent decades. Nevertheless, the limit values for PM10 (particles with diameter of 10 micrometers or less) and ozone - defined by the directive 2008/50/EC of the European Parliament - are still exceeded frequently. Poor air quality and the exceedance of limits result mainly from the combination of high emissions and unfavourable weather conditions. Datasets from German monitoring stations are used to describe the spatial and temporal variability of the exceedance of concentration limits for PM10 and ozone for the federal states of Germany. Time series are analysed for the period 2000-2012 for PM10 and for the period 1990-2012 for ozone. Furthermore, the influence of weather patterns on the exceedance of concentration limits on a regional scale was investigated. Here, the "objective weather types" of the German Weather Service were used. As expected, for most regions anticyclonic weather types (with a negative cyclonality index for the two levels 950 and 500 hPa) show a high frequency on exceedance days, both for PM10 and ozone. The results could help estimate the future exceedance frequency of concentration limits and develop possible countermeasures.

  4. Application of a Threshold Method to the TRMM Radar for the Estimation of Space-Time Rain Rate Statistics

    NASA Technical Reports Server (NTRS)

    Meneghini, Robert; Jones, Jeffrey A.

    1997-01-01

    One of the TRMM radar products of interest is the monthly-averaged rain rate over 5 x 5 degree cells. Clearly, the most direct way of calculating these and similar statistics is to compute them from the individual estimates made over the instantaneous field of view of the instrument (4.3 km horizontal resolution). An alternative approach is the use of a threshold method. It has been established that over sufficiently large regions the fractional area above a rain rate threshold and the area-average rain rate are well correlated for particular choices of the threshold [e.g., Kedem et al., 1990]. A straightforward application of this method to the TRMM data would consist of the conversion of the individual reflectivity factors to rain rates followed by a calculation of the fraction of these that exceed a particular threshold. Previous results indicate that for thresholds near or at 5 mm/h, the correlation between this fractional area and the area-average rain rate is high. There are several drawbacks to this approach, however. At the TRMM radar frequency of 13.8 GHz the signal suffers attenuation so that the negative bias of the high resolution rain rate estimates will increase as the path attenuation increases. To establish a quantitative relationship between fractional area and area-average rain rate, an independent means of calculating the area-average rain rate is needed, such as an array of rain gauges. This type of calibration procedure, however, is difficult for a spaceborne radar such as TRMM. Estimating a statistic other than the mean of the distribution requires, in general, a different choice of threshold and a different set of tuning parameters.
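
    A minimal sketch of the threshold idea: over a sufficiently large region, the fraction of pixels whose rain rate exceeds a threshold (here 5 mm/h) serves as a linear predictor of the area-average rain rate. The rain field and calibration slope below are synthetic stand-ins for quantities that would be fit to observations.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    rain_field = rng.gamma(shape=0.4, scale=4.0, size=(200, 200))   # synthetic rain rates, mm/h

    threshold = 5.0
    fractional_area = float(np.mean(rain_field > threshold))
    calibration_slope = 18.0   # mm/h per unit fractional area; in practice fit against reference data
    estimated_area_mean = calibration_slope * fractional_area

    print(f"fractional area above {threshold} mm/h: {fractional_area:.3f}")
    print(f"estimated area-average rain rate: {estimated_area_mean:.2f} mm/h "
          f"(true synthetic mean: {rain_field.mean():.2f} mm/h)")
    ```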

  5. Constraints on the magnitude and rate of CO2 dissolution at Bravo Dome natural gas field

    PubMed Central

    Sathaye, Kiran J.; Hesse, Marc A.; Cassidy, Martin; Stockli, Daniel F.

    2014-01-01

    The injection of carbon dioxide (CO2) captured at large point sources into deep saline aquifers can significantly reduce anthropogenic CO2 emissions from fossil fuels. Dissolution of the injected CO2 into the formation brine is a trapping mechanism that helps to ensure the long-term security of geological CO2 storage. We use thermochronology to estimate the timing of CO2 emplacement at Bravo Dome, a large natural CO2 field at a depth of 700 m in New Mexico. Together with estimates of the total mass loss from the field we present, to our knowledge, the first constraints on the magnitude, mechanisms, and rates of CO2 dissolution on millennial timescales. Apatite (U-Th)/He thermochronology records heating of the Bravo Dome reservoir due to the emplacement of hot volcanic gases 1.2–1.5 Ma. The CO2 accumulation is therefore significantly older than previous estimates of 10 ka, which demonstrates that safe long-term geological CO2 storage is possible. Integrating geophysical and geochemical data, we estimate that 1.3 Gt CO2 are currently stored at Bravo Dome, but that only 22% of the emplaced CO2 has dissolved into the brine over 1.2 My. Roughly 40% of the dissolution occurred during the emplacement. The CO2 dissolved after emplacement exceeds the amount expected from diffusion and provides field evidence for convective dissolution with a rate of 0.1 g/(m2y). The similarity between Bravo Dome and major US saline aquifers suggests that significant amounts of CO2 are likely to dissolve during injection at US storage sites, but that convective dissolution is unlikely to trap all injected CO2 on the 10-ky timescale typically considered for storage projects. PMID:25313084

  6. Estimating a WTP-based value of a QALY: the 'chained' approach.

    PubMed

    Robinson, Angela; Gyrd-Hansen, Dorte; Bacon, Philomena; Baker, Rachel; Pennington, Mark; Donaldson, Cam

    2013-09-01

    A major issue in health economic evaluation is that of the value to place on a quality adjusted life year (QALY), commonly used as a measure of health care effectiveness across Europe. This critical policy issue is reflected in the growing interest across Europe in development of more sound methods to elicit such a value. EuroVaQ was a collaboration of researchers from 9 European countries, the main aim being to develop more robust methods to determine the monetary value of a QALY based on surveys of the general public. The 'chained' approach of deriving a societal willingness-to-pay (WTP) based monetary value of a QALY used the following basic procedure. First, utility values were elicited for health states using the standard gamble (SG) and time trade off (TTO) methods. Second, a monetary value to avoid some risk/duration of that health state was elicited and the implied WTP per QALY estimated. We developed within EuroVaQ an adaptation to the 'chained approach' that attempts to overcome problems documented previously (in particular the tendency to arrive at exceedingly high WTP per QALY values). The survey was administered via Internet panels in each participating country and almost 22,000 responses achieved. Estimates of the value of a QALY varied across question and were, if anything, on the low side with the (trimmed) 'all country' mean WTP per QALY ranging from $18,247 to $34,097. Untrimmed means were considerably higher and medians considerably lower in each case. We conclude that the adaptation to the chained approach described here is a potentially useful technique for estimating WTP per QALY. A number of methodological challenges do still exist, however, and there is scope for further refinement. Copyright © 2013 Elsevier Ltd. All rights reserved.
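
    A minimal sketch of the 'chained' arithmetic under simplifying assumptions: a SG/TTO utility for a health state gives the QALY loss for an episode of known duration, and a stated WTP to avoid that episode implies a WTP per QALY. All numbers are illustrative.

    ```python
    utility_of_state = 0.7    # SG/TTO utility of the health state (1.0 = full health)
    duration_years = 1.0      # duration of the episode the respondent is asked to value
    wtp_to_avoid = 6000.0     # stated WTP (currency units) to avoid that episode

    qaly_loss = (1.0 - utility_of_state) * duration_years
    wtp_per_qaly = wtp_to_avoid / qaly_loss
    print(f"implied WTP per QALY: {wtp_per_qaly:,.0f}")
    ```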

  7. Global health benefits of mitigating ozone pollution with methane emission controls.

    PubMed

    West, J Jason; Fiore, Arlene M; Horowitz, Larry W; Mauzerall, Denise L

    2006-03-14

    Methane (CH4) contributes to the growing global background concentration of tropospheric ozone (O3), an air pollutant associated with premature mortality. Methane and ozone are also important greenhouse gases. Reducing methane emissions therefore decreases surface ozone everywhere while slowing climate warming, but although methane mitigation has been considered as a way to address climate change, it has not been considered for air quality. Here we show that global decreases in surface ozone concentrations, due to methane mitigation, result in substantial and widespread decreases in premature human mortality. Reducing global anthropogenic methane emissions by 20% beginning in 2010 would decrease the average daily maximum 8-h surface ozone by approximately 1 part per billion by volume globally. By using epidemiologic ozone-mortality relationships, this ozone reduction is estimated to prevent approximately 30,000 premature all-cause mortalities globally in 2030, and approximately 370,000 between 2010 and 2030. If only cardiovascular and respiratory mortalities are considered, approximately 17,000 global mortalities can be avoided in 2030. The marginal cost-effectiveness of this 20% methane reduction is estimated to be approximately 420,000 US dollars per avoided mortality. If avoided mortalities are valued at 1 million US dollars each, the benefit is approximately 240 US dollars per tonne of CH4 (approximately 12 US dollars per tonne of CO2 equivalent), which exceeds the marginal cost of the methane reduction. These estimated air pollution ancillary benefits of climate-motivated methane emission reductions are comparable with those estimated previously for CO2. Methane mitigation offers a unique opportunity to improve air quality globally and can be a cost-effective component of international ozone management, bringing multiple benefits for air quality, public health, agriculture, climate, and energy.
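
    A minimal sketch of the benefit-per-tonne arithmetic sketched in the abstract: avoided mortalities times the value per mortality, divided by the tonnes of methane reduced, with a CO2-equivalent figure obtained by dividing by an assumed warming-potential factor. The cumulative reduction tonnage and the CH4-to-CO2-equivalent factor are placeholders.

    ```python
    avoided_mortalities = 370_000     # 2010-2030 (from the abstract)
    value_per_mortality = 1_000_000   # US dollars, as assumed in the abstract
    ch4_reduction_tonnes = 1.5e9      # assumed cumulative 20% reduction over 2010-2030 (placeholder)
    gwp_ch4 = 20                      # assumed CH4-to-CO2-equivalent factor (placeholder)

    benefit_per_tonne_ch4 = avoided_mortalities * value_per_mortality / ch4_reduction_tonnes
    print(f"~${benefit_per_tonne_ch4:.0f} per tonne CH4, "
          f"~${benefit_per_tonne_ch4 / gwp_ch4:.0f} per tonne CO2-equivalent")
    ```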

  8. Natural recharge estimation and uncertainty analysis of an adjudicated groundwater basin using a regional-scale flow and subsidence model (Antelope Valley, California, USA)

    USGS Publications Warehouse

    Siade, Adam J.; Nishikawa, Tracy; Martin, Peter

    2015-01-01

    Groundwater has provided 50–90 % of the total water supply in Antelope Valley, California (USA). The associated groundwater-level declines have led the Los Angeles County Superior Court of California to recently rule that the Antelope Valley groundwater basin is in overdraft, i.e., annual pumpage exceeds annual recharge. Natural recharge consists primarily of mountain-front recharge and is an important component of the total groundwater budget in Antelope Valley. Therefore, natural recharge plays a major role in the Court’s decision. The exact quantity and distribution of natural recharge is uncertain, with total estimates from previous studies ranging from 37 to 200 gigaliters per year (GL/year). In order to better understand the uncertainty associated with natural recharge and to provide a tool for groundwater management, a numerical model of groundwater flow and land subsidence was developed. The transient model was calibrated using PEST with water-level and subsidence data; prior information was incorporated through the use of Tikhonov regularization. The calibrated estimate of natural recharge was 36 GL/year, which is appreciably less than the value used by the court (74 GL/year). The effect of parameter uncertainty on the estimation of natural recharge was addressed using the Null-Space Monte Carlo method. A Pareto trade-off method was also used to portray the reasonableness of larger natural recharge rates. The reasonableness of the 74 GL/year value and the effect of uncertain pumpage rates were also evaluated. The uncertainty analyses indicate that the total natural recharge likely ranges between 34.5 and 54.3 GL/year.

  9. Risks and costs of end-stage renal disease after heart transplantation.

    PubMed

    Hornberger, J; Best, J; Geppert, J; McClellan, M

    1998-12-27

    The aim of this study was to estimate the risks and costs of end-stage renal disease (ESRD) after heart transplantation. Previous studies have shown high rates of ESRD among solid-organ transplant patients, but the relevance of these studies for current transplant practices and policies is unclear. Limitations of prior studies include relatively small, single-center samples and estimates made before the implementation of suggested practice changes to reduce ESRD risk. Medicare beneficiaries who underwent heart transplantation between 1989 and 1994 were eligible for study inclusion (n=2088). Thirty-four patients who were undergoing dialysis or had a diagnosis of ESRD before or at transplantation were excluded from the study. ESRD was defined as any patient undergoing renal transplantation or requiring dialysis for more than 3 months. Mortality and ESRD events were recorded up to 1995. ESRD risk was estimated using the Kaplan-Meier product-limit estimator and logistic regression analyses. Linear regression was performed to determine expenditures for treating ESRD, and we developed long-term models of the risk and direct medical costs of ESRD care. The annual risk of ESRD was 0.37% in the first year after transplant and increased to 4.49% by the sixth posttransplant year. There was no significant trend in the risk of ESRD based on the year of transplantation, even after adjusting for patient characteristics. The average cumulative 10-year direct cost of ESRD per patient undergoing heart transplantation exceeded $13,000. In a large, national sample of patients undergoing heart transplantation, ESRD is not rare, even for patients undergoing transplantation after the development of new practices intended to reduce its occurrence. ESRD remains an important component of the costs of heart transplantation.
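
    A minimal sketch of the Kaplan-Meier product-limit estimator named above, applied to synthetic time-to-event data; the follow-up times and censoring flags are invented for illustration and are not the Medicare cohort.

      import numpy as np

      def kaplan_meier(times, events):
          """Product-limit estimate of the ESRD-free (survival) function.

          times  : follow-up time for each patient (e.g., years post-transplant)
          events : 1 if ESRD occurred at that time, 0 if censored (death, end of study)
          """
          times = np.asarray(times, dtype=float)
          events = np.asarray(events, dtype=int)

          surv = 1.0
          curve = []
          for t in np.unique(times[events == 1]):       # distinct event times
              at_risk = np.sum(times >= t)              # patients still under observation
              d = np.sum((times == t) & (events == 1))  # ESRD events at time t
              surv *= 1.0 - d / at_risk
              curve.append((t, surv))
          return curve

      # Synthetic example: 10 patients, follow-up in years, 1 = developed ESRD.
      followup = [0.8, 1.5, 2.0, 3.1, 3.5, 4.0, 4.2, 5.0, 5.5, 6.0]
      esrd     = [0,   0,   1,   0,   1,   0,   1,   0,   1,   0]
      for year, s in kaplan_meier(followup, esrd):
          print(f"ESRD-free probability beyond {year:.1f} y: {s:.3f}")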

  10. Exceedance probability map: a tool helping the definition of arsenic Natural Background Level (NBL) within the Drainage Basin to the Venice Lagoon (NE Italy)

    NASA Astrophysics Data System (ADS)

    Dalla Libera, Nico; Fabbri, Paolo; Mason, Leonardo; Piccinini, Leonardo; Pola, Marco

    2017-04-01

    Arsenic contamination affects shallow groundwater bodies worldwide. Current knowledge of the origin of arsenic in groundwater indicates that most dissolved arsenic occurs naturally through the dissolution of As-bearing minerals and ores. Several studies on the shallow aquifers of both the regional Venetian Plain (NE Italy) and the local Drainage Basin to the Venice Lagoon (DBVL) show locally high arsenic concentrations related to particular geochemical conditions that drive arsenic mobilization. The uncertainty in the spatial distribution of arsenic complicates both the evaluation of the processes involved in arsenic mobilization and stakeholders' decisions about environmental management. Regarding the latter aspect, the present study addresses the definition of the Natural Background Level (NBL), the threshold that discriminates natural contamination from anthropogenic pollution. The EU Directive 2006/118/EC sets out the procedures and criteria for establishing water quality standards that guarantee a healthy status and reverse any contamination trends. In addition, the EU BRIDGE project proposes criteria to estimate the NBL based on the 90th percentile of the contaminant concentration dataset. However, these methods provide only a single statistical NBL for the whole area, without considering the spatial variation of the contaminant concentration. We therefore reinforce the NBL concept with a geostatistical approach that provides detailed information on the distribution of arsenic concentrations and reveals zones with concentrations above the Italian drinking water standard (IDWS = 10 µg/liter). Once the spatial distribution of arsenic is characterized, the 90th-percentile method can be applied to estimate local NBLs for each zone in which arsenic exceeds the IDWS. Indicator kriging was chosen because it estimates the spatial distribution of the probability of exceeding pre-defined thresholds, an approach widely used in the literature for similar environmental problems. To test the validity of the procedure, we used the dataset from the "A.Li.Na" project (funded by the Regional Environmental Agency), which defined regional NBLs of As, Fe, Mn and NH4+ in the DBVL's groundwater. First, we defined two thresholds, corresponding to the IDWS and to the median of the data above the IDWS, chosen on the basis of the dataset's statistical structure and the quality criteria of the GWD 2006/118/EC. We then used indicator kriging to map the probability of exceeding each threshold. The results highlight zones with high exceedance probabilities, ranging from 75% to 95%, for both the IDWS and the median value. Given the geological setting of the DBVL, these high-probability zones coincide with the occurrence of both organic matter and reducing conditions. In conclusion, spatial prediction of the exceedance probability can help delineate the areas within which local NBLs should be estimated, improving the NBL definition procedure. The resulting NBL estimates are more realistic because they account for the spatial distribution of the contaminant, distinguishing areas with high natural concentrations from polluted ones.
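
    A compact sketch of the indicator-kriging idea described above: concentrations are converted to 0/1 indicators relative to a threshold, and ordinary kriging of the indicators yields an exceedance-probability estimate at unsampled locations. The synthetic well data, the exponential variogram, and its parameters are assumptions for illustration; they are not the A.Li.Na dataset or the variogram fitted in the study.

      import numpy as np

      rng = np.random.default_rng(1)

      # Synthetic wells: coordinates (km) and arsenic concentrations (ug/L).
      xy = rng.uniform(0, 10, size=(30, 2))
      conc = rng.lognormal(mean=2.0, sigma=0.8, size=30)

      IDWS = 10.0                               # Italian drinking water standard, ug/L
      indicator = (conc > IDWS).astype(float)   # 1 where the threshold is exceeded

      # Assumed exponential semivariogram for the indicator variable.
      def gamma(h, nugget=0.05, sill=0.20, range_km=3.0):
          g = nugget + sill * (1.0 - np.exp(-h / range_km))
          return np.where(h > 0, g, 0.0)

      def exceedance_probability(x0):
          """Ordinary kriging of the indicator at location x0 -> P(conc > IDWS)."""
          h = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=2)
          n = len(xy)
          A = np.ones((n + 1, n + 1))
          A[:n, :n] = gamma(h)
          A[n, n] = 0.0                          # Lagrange-multiplier row/column
          b = np.ones(n + 1)
          b[:n] = gamma(np.linalg.norm(xy - x0, axis=1))
          weights = np.linalg.solve(A, b)[:n]    # kriging weights (sum to 1)
          return float(np.clip(weights @ indicator, 0.0, 1.0))

      print(f"P(As > {IDWS} ug/L) at (5, 5): {exceedance_probability(np.array([5.0, 5.0])):.2f}")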

  11. Complete duplication of collecting system in a horseshoe kidney presenting with recurrent urinary tract infections: report of an exceedingly rare congenital anomaly and review of literature.

    PubMed

    Mirzazadeh, Majid; Richards, Kyle A

    2011-01-01

    We report the fifth case in the English literature of a horseshoe kidney with a complete ureteral duplication. Our case is unique in that the previous four cases occurred in the presence of a ureterocele, whereas our patient lacked this anomaly. Further, our patient was managed conservatively, whereas the previous four patients were managed with surgery.

  12. Multiple-exciton generation in lead selenide nanorod solar cells with external quantum efficiencies exceeding 120%

    PubMed Central

    Davis, Nathaniel J. L. K.; Böhm, Marcus L.; Tabachnyk, Maxim; Wisnivesky-Rocca-Rivarola, Florencia; Jellicoe, Tom C.; Ducati, Caterina; Ehrler, Bruno; Greenham, Neil C.

    2015-01-01

    Multiple-exciton generation—a process in which multiple charge-carrier pairs are generated from a single optical excitation—is a promising way to improve the photocurrent in photovoltaic devices and offers the potential to break the Shockley–Queisser limit. One-dimensional nanostructures, for example nanorods, have been shown spectroscopically to display increased multiple exciton generation efficiencies compared with their zero-dimensional analogues. Here we present solar cells fabricated from PbSe nanorods of three different bandgaps. All three devices showed external quantum efficiencies exceeding 100% and we report a maximum external quantum efficiency of 122% for cells consisting of the smallest bandgap nanorods. We estimate internal quantum efficiencies to exceed 150% at relatively low energies compared with other multiple exciton generation systems, and this demonstrates the potential for substantial improvements in device performance due to multiple exciton generation. PMID:26411283
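
    For reference, external quantum efficiency is the number of collected charge carriers per incident photon, so values above 100% signal carrier multiplication. The sketch below computes EQE from a photocurrent measured under monochromatic illumination; the current, optical power, and wavelength are hypothetical values, not data from the paper.

      from scipy.constants import c, e, h   # speed of light, elementary charge, Planck constant

      def external_quantum_efficiency(photocurrent_a, optical_power_w, wavelength_m):
          electrons_per_second = photocurrent_a / e
          photons_per_second = optical_power_w * wavelength_m / (h * c)
          return electrons_per_second / photons_per_second

      # Hypothetical measurement: 20 nA of photocurrent under 50 nW of 400 nm illumination.
      eqe = external_quantum_efficiency(20e-9, 50e-9, 400e-9)
      print(f"EQE = {eqe:.0%}")   # above 100% implies more than one collected carrier per photon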

  13. A technique for estimating ground-water levels at sites in Rhode Island from observation-well data

    USGS Publications Warehouse

    Socolow, Roy S.; Frimpter, Michael H.; Turtora, Michael; Bell, Richard W.

    1994-01-01

    Estimates of future high, median, and low ground-water levels are needed for engineering and architectural design decisions and for appropriate selection of land uses. For example, the failure of individual underground sewage-disposal systems due to high ground-water levels can be prevented if accurate water-level estimates are available. Estimates of extreme or average conditions are needed because short-duration preconstruction observations are unlikely to be adequately representative. Water-level records for 40 U.S. Geological Survey observation wells in Rhode Island were used to describe and interpret water-level fluctuations. The maximum annual range of water levels averages about 6 feet in sand and gravel and 11 feet in till. These data were used to develop equations for estimating future high, median, and low water levels on the basis of any one measurement at a site and records of water levels at observation wells used as indexes. The estimating technique relies on several assumptions about temporal and spatial variations: (1) water levels will vary in the future as they have in the past, (2) water levels fluctuate seasonally, (3) ground-water fluctuations depend on site geology, and (4) water levels throughout Rhode Island are subject to similar precipitation and climate. Comparison of 6,697 estimates of high, median, and low water levels (depths to water exceeded 95, 50, and 5 percent of the time, respectively) with levels actually measured at 14 sites unaffected by pumping or other unexplained stresses yielded mean squared errors ranging from 0.34 to 1.53 square feet, 0.30 to 1.22 square feet, and 0.32 to 2.55 square feet, respectively. (USGS)
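
    The sketch below illustrates one plausible form of such an index-well adjustment: the site's single measurement is shifted by the index well's departure from its historical high, scaled by the ratio of annual fluctuation ranges. This is a simplified reading of the technique with invented numbers; the report's actual equations should be used for design work.

      def estimate_high_water_level(site_depth_now, index_depth_now,
                                    index_depth_high, site_range, index_range):
          """Estimate the probable seasonal-high depth to water at a site (feet below
          land surface) from one site measurement and an index observation well.

          The index well's current departure from its historical high is scaled by
          the ratio of annual fluctuation ranges and subtracted from the site reading.
          """
          departure = index_depth_now - index_depth_high   # feet the index well sits below its high
          return site_depth_now - departure * (site_range / index_range)

      # Invented example: site measured at 7.0 ft below land surface today; the index
      # well reads 9.5 ft today versus a historical high of 6.0 ft; annual ranges are
      # about 6 ft (sand and gravel site) and 8 ft (index well).
      print(estimate_high_water_level(7.0, 9.5, 6.0, 6.0, 8.0))   # ~4.4 ft below land surface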

  14. Extreme-event geoelectric hazard maps

    NASA Astrophysics Data System (ADS)

    Love, J. J.; Bedrosian, P.

    2017-12-01

    Maps of the geoelectric field amplitude that will be exceeded, on average, once per century in response to extreme-intensity geomagnetic disturbance are presented, covering about half of the continental United States. These maps are constructed using an empirical parameterization of induction: convolving latitude-dependent statistical maps of extreme-value geomagnetic disturbance, obtained from decades of 1-minute magnetic observatory data, with local estimates of Earth-surface impedance, obtained at discrete geographic sites from magnetotelluric surveys. Geoelectric amplitudes are estimated for geomagnetic waveforms having 240-s (and 1200-s) sinusoidal period and amplitudes over 10 minutes (1 hr) that exceed a once-per-century threshold. As a result of the combination of geographic differences in geomagnetic variation and Earth-surface impedance, once-per-century geoelectric amplitudes span more than two orders of magnitude and are a highly granular function of location. Specifically: for north-south 240-s induction, once-per-century geoelectric amplitudes across large parts of the United States have a median value of 0.34 V/km; for east-west variation, they have a median value of 0.23 V/km. In northern Minnesota, amplitudes exceed 14.00 V/km for north-south geomagnetic variation (23.34 V/km for east-west variation), while just over 100 km away, amplitudes are only 0.08 V/km (0.02 V/km). At some sites in the north-central United States, once-per-century geoelectric amplitudes exceed the 2 V/km realized in Quebec during the March 1989 storm. These hazard maps are incomplete over large parts of the United States, including major population centers in the southern United States, due to a lack of publicly available impedance data.
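
    In its simplest scalar form, the parameterization multiplies the impedance magnitude at the period of interest by the once-per-century geomagnetic amplitude to obtain the geoelectric amplitude. The impedance and storm-amplitude values in the sketch below are hypothetical, chosen only to show the unit handling.

      def geoelectric_amplitude_v_per_km(impedance_mv_per_km_nt, b_amplitude_nt):
          """Scalar estimate |E| = |Z| * |B|, with the impedance in the practical
          magnetotelluric unit mV/km per nT and the geomagnetic amplitude in nT."""
          return impedance_mv_per_km_nt * b_amplitude_nt / 1000.0   # mV/km -> V/km

      # Hypothetical 240-s-period values: a resistive site (|Z| ~ 5 mV/km/nT) and a
      # conductive site (|Z| ~ 0.05 mV/km/nT), both driven by a 500 nT disturbance.
      for z in (5.0, 0.05):
          e_field = geoelectric_amplitude_v_per_km(z, 500.0)
          print(f"|Z| = {z} mV/km/nT -> E ~ {e_field:.2f} V/km")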

  15. Site specific risk assessment of an energy-from-waste/thermal treatment facility in Durham Region, Ontario, Canada. Part B: Ecological risk assessment.

    PubMed

    Ollson, Christopher A; Whitfield Aslund, Melissa L; Knopper, Loren D; Dan, Tereza

    2014-01-01

    The regions of Durham and York in Ontario, Canada have partnered to construct an energy-from-waste (EFW) thermal treatment facility as part of a long term strategy for the management of their municipal solid waste. In this paper we present the results of a comprehensive ecological risk assessment (ERA) for this planned facility, based on baseline sampling and site specific modeling to predict facility-related emissions, which was subsequently accepted by regulatory authorities. Emissions were estimated for both the approved initial operating design capacity of the facility (140,000 tonnes per year) and the maximum design capacity (400,000 tonnes per year). In general, calculated ecological hazard quotients (EHQs) and screening ratios (SRs) for receptors did not exceed the benchmark value (1.0). The only exceedances noted were generally due to existing baseline media concentrations, which did not differ from those expected for similar unimpacted sites in Ontario. This suggests that these exceedances reflect conservative assumptions applied in the risk assessment rather than actual potential risk. However, under predicted upset conditions at 400,000 tonnes per year (i.e., facility start-up, shutdown, and loss of air pollution control), a potential unacceptable risk was estimated for freshwater receptors with respect to benzo(g,h,i)perylene (SR=1.1), which could not be attributed to baseline conditions. Although this slight exceedance reflects a conservative worst-case scenario (upset conditions coinciding with worst-case meteorological conditions), further investigation of potential ecological risk should be performed if this facility is expanded to the maximum operating capacity in the future.
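
    The screening logic reduces to a simple ratio: a hazard quotient or screening ratio is the predicted exposure concentration divided by the corresponding benchmark, and values above 1.0 are flagged for further evaluation. The receptor rows below are hypothetical; the concentrations are invented and merely chosen so that one row reproduces an SR of 1.1.

      def screening_ratio(predicted_concentration, benchmark):
          """Hazard quotient / screening ratio; values above 1.0 trigger further evaluation."""
          return predicted_concentration / benchmark

      # Hypothetical rows: (receptor / scenario, predicted concentration, benchmark), same units per row.
      cases = [
          ("freshwater, benzo(g,h,i)perylene, upset", 0.022, 0.020),
          ("soil invertebrates, cadmium, normal ops", 0.40, 1.0),
      ]
      for label, conc, bench in cases:
          sr = screening_ratio(conc, bench)
          flag = "exceeds benchmark" if sr > 1.0 else "below benchmark"
          print(f"{label}: SR = {sr:.1f} ({flag})")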

  16. The Kinematics Parameters of the Galaxy Using Data of Modern Astrometric Catalogues

    NASA Astrophysics Data System (ADS)

    Akhmetov, V. S.; Fedorov, P. N.; Velichko, A. B.; Shulga, V. M.

    Based on the Ogorodnikov-Milne model, we analyze the proper motions of XPM2, UCAC4 and PPMXL stars. To estimate distances to the stars we used the method of statistical parallaxes; the random errors of the distance estimates do not exceed 10%. The linear solar velocity relative to the local standard of rest, which is well determined for the local centroid (d < 150 pc), was used as a reference. We have established that the model component that describes the rotation of all stars under consideration about the Galactic Y axis differs from zero. For the distant (d > 1000 pc) PPMXL and UCAC4 stars, the mean rotation about the Galactic Y axis has been found to be M-13 = -0.75 ± 0.04 mas yr-1. For distances greater than 1 kpc, M-13 derived from the data of the XPM2 catalogue alone becomes positive and exceeds 0.5 mas yr-1. We interpret this rotation found using the distant stars as a residual rotation of the ICRS/Tycho-2 system relative to the inertial reference frame.

  17. Human contribution to the European heatwave of 2003

    NASA Astrophysics Data System (ADS)

    Stott, Peter A.; Stone, D. A.; Allen, M. R.

    2004-12-01

    The summer of 2003 was probably the hottest in Europe since at latest AD 1500, and unusually large numbers of heat-related deaths were reported in France, Germany and Italy. It is an ill-posed question whether the 2003 heatwave was caused, in a simple deterministic sense, by a modification of the external influences on climate (for example, increasing concentrations of greenhouse gases in the atmosphere), because almost any such weather event might have occurred by chance in an unmodified climate. However, it is possible to estimate by how much human activities may have increased the risk of the occurrence of such a heatwave. Here we use this conceptual framework to estimate the contribution of human-induced increases in atmospheric concentrations of greenhouse gases and other pollutants to the risk of the occurrence of unusually high mean summer temperatures throughout a large region of continental Europe. Using a threshold for mean summer temperature that was exceeded in 2003, but in no other year since the start of the instrumental record in 1851, we estimate it is very likely (confidence level >90%) that human influence has at least doubled the risk of a heatwave exceeding this threshold magnitude.
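
    The attribution statement can be written in terms of the risk ratio and the fraction of attributable risk, FAR = 1 - P_nat / P_anthro, where P_nat and P_anthro are the probabilities of exceeding the 2003 threshold without and with human influence; "at least doubled the risk" corresponds to a risk ratio of at least 2 and a FAR of at least 0.5. The probabilities below are placeholders, not the paper's estimates.

      def fraction_of_attributable_risk(p_natural, p_anthropogenic):
          """FAR = 1 - P_nat / P_anthro for a threshold-exceeding event."""
          return 1.0 - p_natural / p_anthropogenic

      # Placeholder exceedance probabilities per summer (not the paper's values).
      p_nat, p_ant = 0.001, 0.0025
      risk_ratio = p_ant / p_nat
      far = fraction_of_attributable_risk(p_nat, p_ant)
      print(f"risk ratio = {risk_ratio:.1f}, FAR = {far:.2f}")   # doubling the risk gives FAR = 0.5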

  18. Pentachlorophenol from an old henhouse as a dioxin source in eggs and related human exposure.

    PubMed

    Piskorska-Pliszczynska, Jadwiga; Strucinski, Pawel; Mikolajczyk, Szczepan; Maszewski, Sebastian; Rachubik, Jaroslaw; Pajurek, Marek

    2016-01-01

    High levels of polychlorinated dibenzo-p-dioxins (PCDDs) and polychlorinated dibenzofurans (PCDFs) were detected in free-range eggs, at a concentration of 29.84 ± 7.45 pg of WHO-TEQ/g of fat. This value exceeded the EU maximum permitted level of 2.5 pg of WHO-TEQ/g of fat for PCDD/F congeners approximately twelve-fold. A chemical analysis (HRGC-HRMS) revealed elevated amounts of OCDD, OCDF, HxCDD, HpCDD and HpCDF. During the investigation, samples of feed, soil, wall scrapings, the wooden ceiling of the henhouse and tissues from laying hens were examined for dioxin content (30 samples altogether). The long and complicated investigation found that the source of dioxins on the poultry farm was pentachlorophenol-treated wood, which had been used as a structural component of the 40-year-old farm building adapted to a henhouse. The wooden building material contained PCDD/Fs at a concentration of 3922.60 ± 560.93 pg of WHO-TEQ/g and 11.0 ± 2.8 μg/kg of PCP. The potential risk associated with dioxin intake was characterized by comparing the theoretically calculated weekly and monthly intakes with the toxicological reference values (TRVs), namely the Tolerable Weekly Intake (TWI) and Provisional Tolerable Monthly Intake (PTMI) values of 14 pg of WHO-TEQ/kg of bw and 70 pg of WHO-TEQ/kg of bw, respectively. The intake of dioxins estimated for high egg consumers (approximately 5-6 eggs/week) exceeded the TWI and PTMI values, which may pose a risk of delayed adverse health effects. The estimated dose of PCDD/Fs and DL-PCBs for children consuming 5 eggs per week exceeded the TWI by as much as 450% because of their nearly 5-fold-lower body weight. Although the dioxin intake estimated for the average consumption of eggs in the general population did not exceed any of the TRVs applied (58.7% TWI and 51.1% PTMI), such a situation should be considered unacceptable from a public health perspective because eggs are not the only source of these contaminants.
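
    A rough sketch of the intake calculation described above, comparing an estimated weekly egg-borne dose with the TWI; the fat content per egg and the body weights are assumptions introduced here, and the calculation uses only the PCDD/F TEQ quoted for the eggs (DL-PCBs would add to it).

      def weekly_intake_pg_per_kg_bw(eggs_per_week, fat_per_egg_g, teq_pg_per_g_fat, body_weight_kg):
          """Estimated weekly intake from eggs, in pg WHO-TEQ per kg body weight."""
          return eggs_per_week * fat_per_egg_g * teq_pg_per_g_fat / body_weight_kg

      TWI = 14.0           # pg WHO-TEQ/kg bw per week (from the record)
      teq_eggs = 29.84     # pg WHO-TEQ/g fat measured in the eggs (from the record)
      fat_per_egg = 6.0    # g of fat per egg (assumed typical value)

      # Assumed body weights: roughly 70 kg adult, 15 kg child (about 5-fold lower).
      for label, eggs, bw in [("adult high consumer", 6, 70.0), ("child high consumer", 5, 15.0)]:
          intake = weekly_intake_pg_per_kg_bw(eggs, fat_per_egg, teq_eggs, bw)
          print(f"{label}: {intake:.0f} pg WHO-TEQ/kg bw per week = {intake / TWI:.0%} of the TWI")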

  19. Sulfur concentration of mare basalts at sulfide saturation at high pressures and temperatures-Implications for S in the lunar mantle

    NASA Astrophysics Data System (ADS)

    Ding, S.; Hough, T.; Dasgupta, R.

    2016-12-01

    Low estimates of S in the bulk silicate moon (BSM) [e.g., 1] suggest that sulfide in the lunar mantle is likely exhausted during melting. This agrees with estimates of HSE depletion in the BSM [2], but challenges the S-rich core proposed by previous studies [e.g., 3]. A key parameter for constraining the fate of sulfide during mantle melting is the sulfur concentration of the melt at sulfide saturation (SCSS). However, the SCSS of variably high-Ti lunar basalts at high P-T is unknown. Basalt-sulfide melt equilibria experiments were run in graphite capsules using a piston cylinder at 1.0-2.5 GPa and 1400-1600 °C, on high-Ti (Apollo 11, 11.1 wt.%; [4]) and intermediate-Ti (Luna 16, 5 wt.%; [5]) mare basalts. At 1.5 GPa, the SCSS of the Apollo 11 composition increases from 3940 ppm S to 5860 ppm as temperature increases from 1400 °C to 1600 °C, and at 1500 °C it decreases from 5350 ppm S to 3830 ppm as pressure increases from 1 to 2.5 GPa. The SCSS of the Luna 16 composition shows a similar P-T dependence. Previous models [e.g., 6] tend to overestimate the SCSS values determined in our study, with the model overprediction increasing with increasing melt TiO2. Consequently, we derive a new SCSS parameterization for high-FeO* silicate melts of variable TiO2 content. At multiple saturation points [e.g., 7], the SCSS of primary lunar melts is 3500-5500 ppm. With these values, 0.02-0.05 wt.% sulfide (70-200 ppm S) in the mantle can be consumed by 2-6% melting. In order to generate primary lunar basalts with 800-1000 ppm S [1], sulfide in the mantle must be exhausted, and the mode of sulfide cannot exceed 0.025 wt.% (100 ppm S). This estimate corresponds to the lower end of values for the terrestrial mantle and further agrees with previous calculations of HSE depletion in the BSM [2]. [1] Hauri et al., 2015, EPSL; [2] Day et al., 2007, Science; [3] Jing et al., 2014, EPSL; [4] Snyder et al., 1992, GCA; [5] Warren & Taylor, 2014, Treatise on Geochemistry; [6] Li & Ripley, 2009, Econ. Geol.; [7] Krawczynski & Grove, 2012, GCA.
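
    The link between mantle sulfide abundance, melt fraction, and SCSS is a simple mass balance: until sulfide is exhausted the melt carries S at the SCSS, so the melt fraction needed to consume the sulfide is roughly the mantle S content divided by the SCSS, and once sulfide is gone the melt's S content is the mantle S divided by the melt fraction. The 10% melt fraction in the sketch below is an illustrative assumption, not a value from the abstract.

      def melt_fraction_to_exhaust_sulfide(mantle_s_ppm, scss_ppm):
          """Batch-melting fraction at which mantle sulfide is fully consumed."""
          return mantle_s_ppm / scss_ppm

      def melt_s_after_exhaustion(mantle_s_ppm, melt_fraction):
          """Melt S content (ppm) once sulfide is gone and all S enters the melt."""
          return mantle_s_ppm / melt_fraction

      # 70-200 ppm S in the source is consumed by roughly 2-6% melting at SCSS ~ 3500 ppm.
      for s_mantle in (70.0, 200.0):
          f = melt_fraction_to_exhaust_sulfide(s_mantle, 3500.0)
          print(f"{s_mantle:.0f} ppm S exhausted at F = {f:.1%}")

      # With <= 100 ppm S in the source and an assumed 10% melt fraction, the primary
      # melt carries <= 1000 ppm S, consistent with the 800-1000 ppm estimate.
      print(f"melt S = {melt_s_after_exhaustion(100.0, 0.10):.0f} ppm")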

  20. Analysis of Meteorological Data Obtained During Flight in a Supercooled Stratiform Cloud of High Liquid-Water Content

    NASA Technical Reports Server (NTRS)

    Perkins, Porter J.; Kline, Dwight B.

    1951-01-01

    Flight icing-rate data obtained in a dense and abnormally deep supercooled stratiform cloud system indicated the existence of liquid-water contents generally exceeding, in amount and extent, values previously reported over the midwestern sections of the United States. Additional information obtained during descent through a part of the cloud system indicated liquid-water contents that significantly exceeded theoretical values, especially near the middle of the cloud layer. The growth of cloud droplets to sizes that resulted in sedimentation from the upper portions of the cloud is considered a possible cause of the high water contents near the center of the cloud layer. Flight measurements of the vertical temperature distribution in the cloud layer indicated a rate of change of temperature with altitude exceeding the moist adiabatic lapse rate. This excessive rate of change is considered to have contributed to the severity of the condition.
